Welcome to Doom9's Forum, THE in place to be for everyone interested in DVD conversion.
Old 26th December 2001, 10:58   #1  |  Link
Latexxx
Registered User
 
Join Date: Nov 2001
Location: Tampere, Finland
Posts: 618
Weird idea about SMP and clustering

We all know that we always want more speed. How do you get that speed without paying €1000+? Well, there are some programs that support multiple processors in the same machine. But what if somebody wrote a program that "transforms" this multiprocessor support into clustering support? Then it would be easy to use all the old computers, which normally sit in a garage or similar place, to get more en/decoding speed.
__________________
A/V moderator @ hydrogenaudio.org
My weird old sh*t: http://www.nic.fi/~lhahne/
http://last.fm/user/Latexxx/
Old 26th December 2001, 13:26   #2  |  Link
Doom9
clueless n00b
 
Join Date: Oct 2001
Location: somewhere over the rainbow
Posts: 10,570

do you have any idea how hard this is? Even the theory behind distributed computing isn't that easy to understand, and programming it is a major pain in the ass.
__________________
For the web's most comprehensive collection of DVD backup guides go to www.doom9.org
Old 27th December 2001, 15:24   #3  |  Link
Latexxx
Registered User
 
Join Date: Nov 2001
Location: Tampere, Finland
Posts: 618

I have no idea how hard this would be, but it can't be too hard, because SMP programs run different threads on different processors. The transformer would only have to "move" these threads from one computer to another.
__________________
A/V moderator @ hydrogenaudio.org
My weird old sh*t: http://www.nic.fi/~lhahne/
http://last.fm/user/Latexxx/
Old 27th December 2001, 16:12   #4  |  Link
Doom9
clueless n00b
 
Join Date: Oct 2001
Location: somewhere over the rainbow
Posts: 10,570

May I suggest you read up on the issue first? Here's a pretty good book on operating systems in general, used at many universities throughout the world: Modern Operating Systems by Andrew Tanenbaum. http://www.amazon.com/exec/obidos/AS...732696-7334225

it's hard enough to manage multiple threads on a single machine, let alone do the same across multiple machines.
__________________
For the web's most comprehensive collection of DVD backup guides go to www.doom9.org
Old 27th December 2001, 21:52   #5  |  Link
WitzWolf
Registered User
 
Join Date: Oct 2001
Posts: 10

Doom9, I agree with you that programming a distributed program or clustering software oneself is a very hard task, but luckily that has all been done before.
In the Linux world there is clustering software "for the masses" available, for instance Mosix (www.mosix.org) or Beowulf (www.beowulf.org), which isn't that hard to set up for an experienced computer user and is even easier to use.
As I've read in the forum before, you read the German magazine c't, so maybe you can get a copy of the German "Linux-Magazin" issue 01/2002, which has a nice introductory article about Mosix. The article uses a parallelized MP3 encoder as an example and even ends with the words
"Wir sind gespannt, wann die erste parallelisierte Software für DVD-Video-Konvertierung erhältlich ist.",
which roughly translates to "We're eager to see when the first parallelized software for DVD video conversion becomes available."

A candidate for this could be "transcode" (http://www.theorie.physik.uni-goetti...ich/transcode/), a Linux command-line tool for video conversion. I haven't had the time to test this tool in a Mosix environment, but maybe one of the forum readers could give it a try.
Old 28th December 2001, 12:21   #6  |  Link
Latexxx
Registered User
 
Join Date: Nov 2001
Location: Tampere, Finland
Posts: 618

Another interesting idea:
How about a tool that lets you run VirtualDub filters on different computers, like:
1. (computer) reading and writing files
2. cropping and/or deinterlacing
3. resizing
4. compression
That shouldn't be too hard, because the filters only move images from one filter to the next. Of course the system must be fully configurable.
__________________
A/V moderator @ hydrogenaudio.org
My weird old sh*t: http://www.nic.fi/~lhahne/
http://last.fm/user/Latexxx/
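The filter-chain-across-machines idea above can be sketched with thread-backed queues standing in for the network links between machines. This is a minimal hypothetical Python illustration, not anything VirtualDub actually supports; the stage functions are stand-ins for real filters, and the key property it shows is that throughput is limited by the slowest stage:

```python
from queue import Queue
from threading import Thread

def stage(func, inbox, outbox):
    # Each "machine" loops: take a frame, apply its filter, pass it on.
    while True:
        frame = inbox.get()
        if frame is None:          # poison pill: shut down and propagate
            outbox.put(None)
            return
        outbox.put(func(frame))

def run_pipeline(frames, filters):
    # One queue per link: source -> stage 1 -> stage 2 -> ... -> sink.
    queues = [Queue() for _ in range(len(filters) + 1)]
    threads = [Thread(target=stage, args=(f, queues[i], queues[i + 1]))
               for i, f in enumerate(filters)]
    for t in threads:
        t.start()
    for frame in frames:           # feed frames into the first stage
        queues[0].put(frame)
    queues[0].put(None)
    out = []
    while (frame := queues[-1].get()) is not None:
        out.append(frame)          # collect results from the last stage
    for t in threads:
        t.join()
    return out

# e.g. run_pipeline([1, 2, 3], [lambda x: x + 1, lambda x: x * 2])
```

Frame order is preserved because each link is a FIFO queue with a single consumer; over a real network, the queues would be sockets and the frames raw images, which is where the bandwidth problem discussed below in the thread comes in.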
Old 28th December 2001, 14:11   #7  |  Link
ChristianHJW
Guest
 
Posts: n/a

.... there were similar threads over at http://www.projectmayo.com, experts talking to other experts. The result: .... not feasible!!
Old 28th December 2001, 14:32   #8  |  Link
maven
Registered User
 
maven's Avatar
 
Join Date: Oct 2001
Location: DE
Posts: 122
heart of the problem

the core of the problem is not the distribution of the workload among different machines/processors/clusters/threads.
the real problem is distributing the data:
720x480 pixels x 30 frames per second at 4:2:2 (2 bytes per pixel) is roughly 20 MB every second.
you don't want to decode the mpeg2-stream more than once, so if you want a distributed approach, the only solution (IMO) is to compress in chunks (machine (a) does 0:00 - 1:00, (b) does 1:00 - 2:00, ...). but then again, the results of a 1st pass would have to be collected (and therefore waited for) centrally to sensibly distribute bitrate among the frames... then everyone can go off and compress their chunks and hope everyone takes the same time...
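The chunked two-pass scheme described in this post can be sketched in a few lines. This is a hypothetical Python illustration, not real encoder code: `first_pass` is a stand-in for gathering per-chunk complexity statistics, and the point is the synchronization barrier where all first-pass results must be collected centrally before any second pass can start:

```python
from concurrent.futures import ThreadPoolExecutor

def first_pass(chunk):
    # Stand-in for a real analysis pass: return a complexity score
    # for this chunk (here, just a toy function of its contents).
    return sum(ord(c) for c in chunk)

def allocate_bitrate(stats, total_kbps):
    # Central step: split the bitrate budget in proportion to chunk
    # complexity. Every first pass must finish before this can run.
    total = sum(stats)
    return [total_kbps * s / total for s in stats]

def encode_chunks(chunks, total_kbps):
    with ThreadPoolExecutor() as pool:
        stats = list(pool.map(first_pass, chunks))   # pass 1, in parallel
        rates = allocate_bitrate(stats, total_kbps)  # barrier: collect centrally
        # pass 2, in parallel again; each worker now knows its budget
        return list(pool.map(lambda cr: f"{cr[0]}@{cr[1]:.0f}kbps",
                             zip(chunks, rates)))
```

The barrier between the two `pool.map` calls is exactly the "collected (and therefore waited for) centrally" step the post describes; the slowest chunk in each pass sets the pace for everyone.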
Old 28th December 2001, 16:56   #9  |  Link
gldblade
XviD Junkie
 
Join Date: Oct 2001
Posts: 313

You'd need a fairly fast network dedicated to encoding. Otherwise, if the network becomes busy with some other task, the encoding program would eventually time out, like the old versions of FairUse.

And the more computers you plan to use, the faster the network has to be since you'll be sending even more data in realtime.
Old 29th December 2001, 00:09   #10  |  Link
omol
Registered User
 
Join Date: Nov 2001
Location: Hong Kong
Posts: 136
Re:

Quote:
Originally posted by gldblade
You'd need a fairly fast network dedicated to encoding. Otherwise, if the network becomes busy with some other task, the encoding program would eventually time out, like the old versions of FairUse.

And the more computers you plan to use, the faster the network has to be since you'll be sending even more data in realtime.
Actually, we need a network technology that scales well and does not rely on collisions. A fat pipe does not guarantee message-passing efficiency in this application. Currently, Ethernet is the most widely used, but it's a dirty hack that only works well in small workgroups and does not scale at all. With network utilisation above 20%, you'd better throw in more switches. If it hadn't been for HP's terrible marketing of their 10/100VG (the original VG, not the IEEE 802.12), and Intel (or was it IBM?) backstabbing them, we would be in a world of no packet collisions, and thus message passing (network efficiency) in the clustering area might not be such a big hurdle. ATM does suit this purpose, but the cost is still sky high.

regards,
omol
Old 29th December 2001, 04:04   #11  |  Link
gldblade
XviD Junkie
 
Join Date: Oct 2001
Posts: 313

Maybe I should get some real network experience before speaking.
Old 29th December 2001, 06:37   #12  |  Link
macdaddy
Registered User
 
Join Date: Oct 2001
Location: City of Angels
Posts: 109
This is like a free class in programming theory...

Keep throwing up the links, they make for interesting reading for those of us with a little too much time on our hands...

Thanks for the info. Happy New Year!

Last edited by macdaddy; 29th December 2001 at 06:39.
Old 30th December 2001, 19:21   #13  |  Link
Latexxx
Registered User
 
Join Date: Nov 2001
Location: Tampere, Finland
Posts: 618

What about processing something like 50 frames on the first machine, then sending them to the second machine, which processes them and sends them on to the next machine, etc.? And maybe the frames could be compressed with some lossless compression.
__________________
A/V moderator @ hydrogenaudio.org
My weird old sh*t: http://www.nic.fi/~lhahne/
http://last.fm/user/Latexxx/
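How much a lossless compression step would actually buy depends entirely on the frames themselves. Here is a quick hypothetical sketch using Python's zlib on synthetic data; note that a flat synthetic frame compresses absurdly well, while real video frames are far noisier and would compress much less:

```python
import zlib

def send_batch(frames):
    # Pack a batch of raw frames and compress losslessly before "sending".
    raw = b"".join(frames)
    packed = zlib.compress(raw, level=6)
    return len(raw), len(packed)

# A flat grey "frame" (512x224, 3 bytes/pixel) is a best-case input;
# noisy real-world frames would shrink far less, if at all.
frame = bytes([128]) * (512 * 224 * 3)
raw_size, packed_size = send_batch([frame] * 50)
print(raw_size, packed_size)  # compression ratio varies with content
```

The catch, as later posts in the thread point out, is that compressing and decompressing every batch costs CPU time on machines you wanted to spend on encoding, and the worst-case (incompressible) frames still have to fit down the pipe.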
Old 30th December 2001, 19:54   #14  |  Link
Peter von Frosta
Registered User
 
Join Date: Dec 2001
Location: Duesseldorf, Germany
Posts: 10

Compressing the processed frames?? I thought you wanted to speed up your encoding...
__________________
Kelly can be a guy's name too!
Peter von Frosta is offline   Reply With Quote
Old 31st December 2001, 06:52   #15  |  Link
omol
Registered User
 
Join Date: Nov 2001
Location: Hong Kong
Posts: 136
Re:

Quote:
Originally posted by Latexxx
What about processing something like 50 frames on the first machine, then sending them to the second machine, which processes them and sends them on to the next machine, etc.? And maybe the frames could be compressed with some lossless compression.
Suppose programmers became smarter, operating systems became more bug-free with fewer deadlocks, clustering support were built in, and every video encoding program could magically take advantage of it. You would still need to send enormous amounts of data down the pipe.

Let's do some simple math. Say you rip an anamorphic movie at 512x224: that's 114,688 pixels in total. Each pixel in RGB (for simplicity of illustration; you could get by with less using YUY2 or YUV12) takes up 24 bits, i.e. 3 bytes of data. That adds up to 336 KB of data per frame. Every second, you have 336 KB x 24 = 8064 KB of data to send over the pipe. It doesn't seem like much at first glance, only 336 KB per 1/24 second, or 7.875 MB per second.

However, Ethernet, whether 10, 100, or 1000 Base, has a very fundamental flaw: at any given time, only one node on the same network can send a packet. If more than one node happens to be sending data at the same time, this causes a collision. The consequence is that, at random, one node continues sending data while the rest instantly stop. For how long do they stop? A random interval! Then they keep trying again and again until there is no collision. Now, imagine data being sent back and forth over the pipe continuously at a rate of 336 KB per 1/24 second: I'm sure you would cause a hell of a lot of collisions. And don't forget that the maximum Ethernet frame size (some call it a packet, but that's incorrect) is only 1500 bytes. You would see the collision LED on the hub blink more severely than Christmas lights.

In our case, this is simply not viable, at least at the moment, because you can easily pump out video frames and encode them faster than real time. Maybe when ATM comes down in cost, a video encoding cluster will be viable. But by then, we may be using multithreaded or multicore CPUs on our desktops.

regards
omol
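The arithmetic in the post above checks out; reproduced here as a quick sanity check in Python:

```python
# Reproduce the post's numbers: anamorphic rip at 512x224, 24-bit RGB, 24 fps.
width, height, fps = 512, 224, 24
bytes_per_pixel = 3                      # 24 bits = 3 bytes per RGB pixel

pixels = width * height                  # 114,688 pixels per frame
frame_bytes = pixels * bytes_per_pixel   # 344,064 bytes = 336 KB per frame
per_second = frame_bytes * fps           # 8064 KB = 7.875 MB per second

print(pixels, frame_bytes // 1024, per_second / (1024 * 1024))
```

Against a shared 100 Mbit/s (~12.5 MB/s theoretical) Ethernet segment of the era, 7.875 MB/s of raw frames per worker makes the collision argument concrete: even two busy nodes already oversubscribe the wire before counting protocol overhead.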