Old 25th November 2013, 21:25   #20961  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
@makakam
Those predicted dropped or repeated frames are not due to the speed of your system but to a mismatch between your monitor's refresh rate and the video's frame rate; turning on smooth motion will help. You can also tune your refresh rate by creating a custom resolution, or use ReClock.
Asmodian is offline   Reply With Quote
Old 25th November 2013, 21:41   #20962  |  Link
makakam
Registered User
 
Join Date: Jan 2010
Posts: 60
Asmodian: I do know that; it's just that my AVR, GPU and TV are all still the same, yet this difference still occurs. Weird...
makakam is offline   Reply With Quote
Old 26th November 2013, 01:19   #20963  |  Link
Asmodian
Registered User
 
Join Date: Feb 2002
Location: San Jose, California
Posts: 4,407
Maybe a change in the GPU drivers? The clock is also part of the motherboard, so a new motherboard means a new clock.
Asmodian is offline   Reply With Quote
Old 26th November 2013, 15:26   #20964  |  Link
THX-UltraII
Registered User
 
Join Date: Aug 2008
Location: the Netherlands
Posts: 851
It has been months since the last madVR version was released (v0.86.11). Can you share any details on whether you are planning a new release, Madshi?
THX-UltraII is offline   Reply With Quote
Old 26th November 2013, 17:50   #20965  |  Link
flashmozzg
Registered User
 
Join Date: May 2013
Posts: 77
Quote:
Originally Posted by THX-UltraII View Post
It has been months since the last madVR version was released (v0.86.11). Can you share any details on whether you are planning a new release, Madshi?
Just look through the latest pages. A lot of test versions have been released recently.
flashmozzg is offline   Reply With Quote
Old 26th November 2013, 18:31   #20966  |  Link
pirlouy
_
 
Join Date: May 2008
Location: France
Posts: 692
Why ask for an update if you don't need anything and everything is working well?
pirlouy is offline   Reply With Quote
Old 26th November 2013, 20:53   #20967  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
A broadcaster, deeply involved in the standardization of 1080p- and Ultra HD-receivers, has kindly asked for my opinion regarding down/up-sampling of progressive scan video, but as I am no video expert I would like to ask for comments RE: what I would like to suggest as starting points. (I don't mind being called stupid, esp. if a better solution is proposed.)

Context TTBOMK (and I am not really at the liberty of saying much, and I certainly don't want to be put on the spot if what I describe fits nothing that ever sees the light of day):

Let's talk luma.

The full-res signal will be downsampled by one of several rational ratios ranging between 1 (no downsampling) and 1/4, then compressed. Some of these ratios do not have a unit numerator (5/6, for example).

This downsampled signal will then be decompressed, and upsampled or downsampled to fit the display by ratios ranging from 1/2 (further downsampled) to 4 (enlarged "a lot").

If I understand correctly, the recommendations must be "cheap to implement". At the very least, complications should give good bang for the buck.

This is what I suggest for downsampling.

If the ratio is 1, nothing to do (except compress).

For the other downsampling ratios, keeping in mind that the downsampled and compressed signal is NOT meant to be viewed "as is", I would suggest the following. (Note that this describes the effect of the filter. It is not a description of an efficient implementation.)

Step 1: Convert to YcCbcCrc.

Step 2: Box filter by 2x2 (2 horizontally and 2 vertically) if the downsampling ratio is greater than or equal to 1/2, 3x3 if the downsampling ratio is greater than or equal to 1/3, 4x4 if ... 1/4.

Step 3: Decimate. (For example, with downsampling ratio 5/6, keep the first 5 pixel values (each of which is an average of 4, since 5/6 >= 1/2), skip the sixth, keep the next 5, skip the 12th, etc. Then, only keep the first 5 scanlines, drop the 6th, etc.)

Step 4: Convert back to Y'Cb'Cr'.
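A minimal sketch of steps 2 and 3 for a single scanline, in Python (the function name and the clamped edge handling are my own illustration; the YcCbcCrc round trip of steps 1 and 4 is omitted):

```python
def box_then_decimate_1d(samples, keep=5, period=6, box=2):
    """Box filter by `box`, then keep the first `keep` of every `period`
    samples -- the 5/6 luma downsampling described above, one axis only.
    The averaging window is clamped at the right edge rather than padded."""
    n = len(samples)
    # Step 2: box filter (moving average of `box` neighbours).
    blurred = []
    for i in range(n):
        window = samples[i:min(i + box, n)]
        blurred.append(sum(window) / len(window))
    # Step 3: decimate -- drop every `period`-th filtered sample.
    return [v for i, v in enumerate(blurred) if i % period < keep]
```

Applied along both axes this reproduces the described effect: every kept output value is the average of a 2x2 group, with every sixth column and every sixth scanline of such averages dropped.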

Rationale:
1) Downsampling through something that approximates linear light is a high impact improvement, so push for that as the one "quality expense".
2) Halos and other sharpening artifacts will re-enlarge (and further downsample) badly, so we should not use a sharpening filter. In addition, clipping leads to loss of information (and visually annoying artifacts when re-enlarging). So, we may as well use the simplest low pass filter followed by decimation.
3) Processing the decompressed downsampled image for viewing then becomes a clearly defined problem. For the sake of exposition, let's use the 5/6 ratio re-enlarged by 6/5.
Given averages of 4 pixels except that every 6th column and every 6th scanline of such averages is missing, reconstruct the full res image so it looks good.

I'll discuss the issue of upsampling (actually resampling, since we may also be downsampling further) in another post (if I have time...).

However, the one thing I am quite sure to recommend is NOT to perform this final resampling/sharpening through linear light: filter the Y'Cb'Cr' values directly.

Rationale: Section 6.6 of http://web.cs.laurentian.ca/nrobidou...tersThesis.pdf and http://www.imagemagick.org/Usage/filter/nicolas/#short.

Last edited by NicolasRobidoux; 26th November 2013 at 21:24.
NicolasRobidoux is offline   Reply With Quote
Old 26th November 2013, 21:32   #20968  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
The one thing I don't like about my proposed downsampling approach (not really anything novel there, so "my" should have quotes) is that if we are going to compress the downsampled image with something that plays well with "Fourier" methods (like JPEG does, through its connection to the DCT), we may as well "downsample" in "(Fourier) coefficient space", which really means dropping modes and/or compressing more.
"Downsample on load" does work well with JPEG. Basically, one would not downsample directly: One would simply compress the full res image using "quality settings" sufficient to get a reasonable image when viewed at a certain smaller size.
This of course takes away the "linear light downsampling" benefits, since these types of compression are generally better used in a "perceptual color space", but it does make for a fairly elegant approach: Instead of downsampling then compressing, compress the full size image suitably aggressively.

Last edited by NicolasRobidoux; 26th November 2013 at 21:45.
NicolasRobidoux is offline   Reply With Quote
Old 26th November 2013, 22:14   #20969  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
On to the final resampling for viewing (upsampling as much as 4x, or downsampling further by as much as 1/2).
If the above "downsample by box filtering by 2, 3 or 4 in both directions" recommendation is followed, the result should be filtered for display, not merely decompressed, even if the dimensions of the viewed image are the same as the dimensions of the downsampled image.
The main reason is that the box filtered samples that make up the decompressed downsampled image are not equally spaced in the original full size image. If, for example, we downsampled by a ratio 5/6, the first five groups of 4 (2x2) pixels that were downsampled have centroids at a distance of 1 from one 2x2 group to the next (in the full size image), but the sixth retained 2x2 group is 2 away from the fifth retained one because the "actual" sixth 2x2 group was skipped.
In addition, the alignment of the downsampled image is slightly different than the original: It is still centered, but the first centroid is one half inter-pixel distance to the right and down compared to the center of the first pixel. (The last kept centroid is one half-pixel to the left and up.)
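The one-axis centroid positions implied by the last two paragraphs can be sketched as follows (a hypothetical helper, not part of any spec; pixel 0 of the full-size image is centered at 0.0):

```python
def kept_centroids(n_groups, keep=5, period=6):
    """Centroid positions (full-size pixel units) of the 2x2 groups that
    survive 5/6 decimation along one axis; the first centroid sits half an
    inter-pixel distance past the first pixel center."""
    return [0.5 + k for k in range(n_groups) if k % period < keep]

positions = kept_centroids(12)
gaps = [b - a for a, b in zip(positions, positions[1:])]
# spacing is 1 between kept centroids, but 2 across each dropped one
```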
Now that we have established that the decompressed result should be filtered whether it is viewed at the same size, re-enlarged fully or partially, or further downsampled (with one exception: full, non-downsampled, resolution material viewed at full resolution, the "trivial case"), allow me to indicate how this filtering should be performed so as to preserve alignment.
First, figure out the positions, within the full size image, of the centroids of the groups of pixels that were box filtered when downsampling. Then, map out where these centroids "land" within the viewed ("final") image when transformed using the "align corners" image geometry convention (http://libvips.blogspot.ca/2011/12/t...ith-align.html).
Let's assume that the pixel centers of the "final viewed image" are located at coordinates written (i,j) such that nearest pixels are at a distance of 1, and the centroids are located at coordinates labeled (U,V).
We have pixel values at the centroids that were not "thrown away" when decimating. Our job is to interpolate the "known" data at the (U,V) positions to the (i,j) positions.
Construct a separable filter as follows.
Let the horizontal = vertical distance between centroids in the viewed image be D. If D > 1, the downsampled image is re-enlarged. If D < 1, it is further downsampled.
When reconstructing the pixel value at (i,j), we will consider all centroids that are within max(3,3D) to the left, right, top and bottom. That is, we consider all centroids that fall within the square of width = height = 2max(3,3D) centered at (i,j). The reconstructed pixel value will be a weighted sum of the centroid values.
Give a raw weight w(U,V) equal to 0 if (U,V) is the position of a centroid that was "dropped" when decimating. This is equivalent to "ignoring" such centroids in the weighted sum.
Otherwise, let the raw weight w(U,V) be the usual Lanczos 3 (Sinc-windowed Sinc with lobe parameter a = 3) weight
w(U,V) = sinc(pi*(U-i)/max(1,D)) * sinc(pi*(U-i)/(3*max(1,D))) * sinc(pi*(V-j)/max(1,D)) * sinc(pi*(V-j)/(3*max(1,D)))
The raw weights need to be normalized by the sum of the (nonzero) weights (there are at most 36 when re-enlarging, more when further downsampling).
Although terse, this completes the description of the filter.
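A direct, unoptimized sketch of this weighted sum (function names and the centroid-list representation are illustrative only; dropped centroids are handled by leaving them out of the list, which is equivalent to the zero-weight rule):

```python
import math

def sinc(x):
    """Unnormalized sinc: sin(x)/x with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def lanczos3_weight(t, scale):
    """Raw Lanczos 3 weight at distance t, widened by scale = max(1, D)."""
    x = t / scale
    if abs(x) >= 3.0:
        return 0.0
    return sinc(math.pi * x) * sinc(math.pi * x / 3.0)

def reconstruct(centroids, i, j, D):
    """Pixel value at (i, j) as a normalized weighted sum over nearby
    (U, V, value) centroids; D is the inter-centroid distance in the
    viewed image."""
    scale = max(1.0, D)
    total_w = total = 0.0
    for U, V, value in centroids:
        w = lanczos3_weight(U - i, scale) * lanczos3_weight(V - j, scale)
        total_w += w
        total += w * value
    # Normalize by the sum of the raw weights, as described above.
    return total / total_w
```

The normalization is what makes the dropped centroids tolerable: the surviving weights are rescaled to sum to 1, so a constant field is reconstructed exactly even next to a gap.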
-----
The above filter is separable. However, it has a rather large footprint.
If this footprint is too large to be practical, the Mitchell-Netravali bicubic, another separable filter, can be used instead of Lanczos 3 to generate the raw weights.
Details can be found here http://www.imagemagick.org/Usage/filter/#mitchell.
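For reference, a sketch of the Mitchell-Netravali raw weight (B = C = 1/3, support |t| < 2) that would replace the Lanczos 3 factors; the coefficients are the standard BC-spline ones:

```python
def mitchell_weight(t, B=1.0/3.0, C=1.0/3.0):
    """Mitchell-Netravali (BC-spline) kernel; B = C = 1/3 is the classic
    Mitchell filter. Support is |t| < 2, vs. 3 for Lanczos 3."""
    x = abs(t)
    if x < 1.0:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6.0
    if x < 2.0:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6.0
    return 0.0
```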

Last edited by NicolasRobidoux; 27th November 2013 at 08:51.
NicolasRobidoux is offline   Reply With Quote
Old 26th November 2013, 22:59   #20970  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by NicolasRobidoux View Post
A broadcaster, deeply involved in the standardization of 1080p- and Ultra HD-receivers, has kindly asked for my opinion regarding down/up-sampling of progressive scan video, but as I am no video expert I would like to ask for comments RE: what I would like to suggest as starting points. (I don't mind being called stupid, esp. if a better solution is proposed.)

[...]
IMHO, linear light processing is less important than using a reasonably sharp downsampling algorithm - but it depends a bit on the scaling factor: the higher the downscaling factor, the more important linear light becomes. I'd prefer a simple Lanczos3 or Bicubic50 downscale without linear light over a 2x2 or 3x3 box downsampling filter, because I believe a Lanczos3 or Bicubic50 downscale would be noticeably sharper. Of course the ideal solution would be to use Bicubic50 with linear light and with anti-ringing, but I'm not sure whether anti-ringing is an algorithm the broadcaster can use. And without anti-ringing I wouldn't recommend using linear light, either (at least not with a linear resampler which has negative lobes). So my suggestion would be either a simple Lanczos3 or Bicubic50 without linear light, or alternatively Bicubic50 (not Lanczos3!) with linear light and anti-ringing.
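(For concreteness, assuming "Bicubic50" denotes the Keys bicubic with sharpness a = -0.5, i.e. Catmull-Rom; that mapping is an assumption of this sketch, not something stated in the thread:)

```python
def keys_cubic_weight(t, a=-0.5):
    """Keys bicubic kernel with sharpness a; a = -0.5 (Catmull-Rom) is
    assumed here to be what "Bicubic50" refers to. Support is |t| < 2."""
    x = abs(t)
    if x < 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
    return 0.0
```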

BTW, I can't imagine that the broadcaster is going to downscale, then compress, then upscale again for broadcasting. I suppose if they downscale, that will probably be the resolution they are going to broadcast in, and upscaling might then only be performed in the end user's video playback chain, depending on which resolution the display of the end user has. I would not try to "balance" downscaling and upscaling, unless you know for a fact that you will always have perfect control over both steps. Instead I'd look at every step separately, by trying to optimize each step as far as possible.

In terms of "cheap to implement", I can say that Bicubic50 with linear light and anti-ringing performs quite well in madVR. It's noticeably faster than a simple Jinc/EWA Lanczos downscale (without linear light and without anti-ringing) would be.

Of course this is just my 2 cents. Maybe it would make sense to do some tests; we should trust our eyes more than theoretical brainstorming. E.g. ask the broadcaster to provide you with a few samples, or alternatively just take a Blu-ray movie and downscale it. Then compare how the final result looks with the suggested algorithms and pick the one which looks best. Personally, I believe choosing a sharper algorithm will have a more beneficial effect than using linear light.

Edit: One more thing: What you destroy during downsampling you can't (easily) get back later through upscaling. So the downscaling step is quite important. Choosing a soft downscaling algorithm will make it extra hard to get a nicely sharp final output image to the display, IMHO.

Last edited by madshi; 26th November 2013 at 23:07.
madshi is offline   Reply With Quote
Old 27th November 2013, 00:06   #20971  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
As usual, thank you Mathias.
-----
I was not given much time on this ("ASAP = yesterday") and my day job is keeping me very busy, so...
-----
I am getting the impression that downsampling and upsampling will be mixed and matched, and that downsampling results are not meant to be viewed directly.
Maybe you're right, and we should go for a fairly sharp downsampling filter (like Lanczos 3, say).

(Side note: I think that EWA is out of the running because of computational complexity; anti-ringing may be doable. "Anti-ringing filters have been found to produce good results with sharpening filters like Lanczos." may be a good sentence to add.)

However, I am not sure everybody would like the result of re-enlarging compressed images downsampled with Lanczos, say. Or anything with significant haloing, actually.

Preventing such horrid artifacts, and avoiding "changes in texture/blurriness introduced by the filter changing phase across the image", is why I feel the classical box-filter-then-decimate (through linear light) is appropriate, followed by a very sharp filter to "finish up" (hoping that the initial box filtering prevents the worst artifacts that can be introduced by using Lanczos in the final stage).

This is not totally unreasonable: sharpening (to tighten features and interfaces) sometimes looks better when applied to an image which was first lightly low-pass filtered (enough to filter out the Nyquist checkerboard).

But as you say, without taking the time to test...

Well, bad advice is sometimes better than no advice.
NicolasRobidoux is offline   Reply With Quote
Old 27th November 2013, 00:10   #20972  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
There are several reasons why I am reasonably certain that downsampling through linear light then re-enlarging (or further downsampling) through perceptual light is a good idea.
One of them is chapter 6.6 of the thesis of my former student Adam Turcotte: From an accuracy viewpoint, downsampling through linear light (linear RGB with sRGB primaries) then re-enlarging through sRGB has been found to be consistently more accurate than toolchains that do everything through linear light or everything through sRGB.
Of course I'm extrapolating...
But there are heuristics that support this viewpoint. They are related to the intent of sigmoidization.
-----
Indeed, I am skating on the thin ice of heuristics.
NicolasRobidoux is offline   Reply With Quote
Old 27th November 2013, 00:15   #20973  |  Link
NicolasRobidoux
Nicolas Robidoux
 
NicolasRobidoux's Avatar
 
Join Date: Mar 2011
Location: Montreal Canada
Posts: 269
Mathias:
I hope "somebody" looks at your suggestions too.

I may be "corrupted" by my current focus on preserving colors. Not really sure if this is reasonable in video.
NicolasRobidoux is offline   Reply With Quote
Old 27th November 2013, 05:10   #20974  |  Link
ryrynz
Registered User
 
ryrynz's Avatar
 
Join Date: Mar 2009
Posts: 3,650
I have a crash on start of playback when playing this file when xysubfilter is enabled.
ryrynz is offline   Reply With Quote
Old 27th November 2013, 07:29   #20975  |  Link
TheElix
Registered User
 
Join Date: May 2010
Posts: 236
I hope "cnvrgc" feature isn't forgotten.
TheElix is offline   Reply With Quote
Old 27th November 2013, 07:37   #20976  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
@Nicolas, I think before doing a recommendation you should really do some comparison tests. Trust your eyes instead of your scientific brain, while testing with real world material (not with test patterns).

One problem with giving a recommendation is that not everybody has the same priorities. Some people have different sensitivities to specific artifacts (like aliasing or ringing) than others. But I think the majority of end users value sharpness very highly. And many end users don't even know what ringing is, nor are they very bothered by it. Personally, I hate ringing artifacts, but if I had to pick a cheap algorithm, I'd still pick a sharp one, even if it rings, because videos are often somewhat soft by nature, and I think a sharp resampling algorithm is what the majority of end users would likely prefer, if they had the choice. Because of that, IMHO a good choice for a cheap upscaling algorithm is probably Lanczos3 (btw, be very specific with taps, because marketing likes to count them differently than developers do; developers say Lanczos3, marketing says the algorithm uses at least 6 taps, or maybe 12 or even 36). For downscaling, Bicubic50 might do.

Don't forget that video resampling is different from image resampling. In image resampling you can magnify the scaled image and judge every minute difference under a microscope. In video resampling you have only 1/24th of a second to judge one image before the next image is displayed. And the human eye tries to track motion while watching the video, automatically reducing noise and tracking sharp edges etc.
madshi is offline   Reply With Quote
Old 27th November 2013, 09:57   #20977  |  Link
iSunrise
Registered User
 
Join Date: Dec 2008
Posts: 496
Quote:
Originally Posted by madshi View Post
...But I think the majority of end users value sharpness very highly. And many end users don't even know what ringing is, nor are they very bothered by it. Personally, I hate ringing artifacts, but if I had to pick a cheap algorithm, I'd still pick a sharp one, even if it rings, because videos are often somewhat soft by nature, and I think a sharp resampling algorithm is what the majority of end users would likely prefer, if they had the choice. Because of that, IMHO a good choice for a cheap upscaling algorithm is probably Lanczos3 (btw, be very specific with taps, because marketing likes to count them differently than developers do; developers say Lanczos3, marketing says the algorithm uses at least 6 taps, or maybe 12 or even 36). For downscaling, Bicubic50 might do.
At least for me, that's exactly true. It was already important to me when I still had a CRT, but since I went from a CRT to an LCD screen and since the "HD introduction for the masses" took place, sharpness is a lot more important to me. I tend to dislike algorithms that alter the image to make it (considerably) softer, because I want to see every little detail in the source, even if it adds a bit of ringing. Because of this, I went with Bicubic75 early in the development of madVR, but now it's basically Lanczos 4. Not that you can see huge differences between the two, anyway.
iSunrise is offline   Reply With Quote
Old 27th November 2013, 11:13   #20978  |  Link
michkrol
Registered User
 
Join Date: Nov 2012
Posts: 167
Quote:
Originally Posted by ryrynz View Post
I have a crash on start of playback when playing this file when xysubfilter is enabled.
Works flawlessly for me, but I am using the debanding test build. If you're on stable madVR, perhaps try it: http://forum.doom9.org/showthread.php?p=1652418
Since you didn't say whether the crash is in madVR, note that I'm also using MPC-HC 1.7.1.83 (nightly) with internal LAV Filters and the official XySubFilter build (3.1.0.546).
michkrol is offline   Reply With Quote
Old 27th November 2013, 16:24   #20979  |  Link
huhn
Registered User
 
Join Date: Oct 2012
Posts: 7,922
Quote:
I have a crash on start of playback when playing this file when xysubfilter is enabled.
XySubFilter has known issues; try xy-VSFilter instead, since XySubFilter is a preview build.

If it still crashes with xy-VSFilter, then it's most likely a problem with madVR.

You can try lowering the CPU queue to 20 or lower, or look for related errors in the bug tracker: http://code.google.com/p/xy-vsfilter/issues/list
huhn is online now   Reply With Quote
Old 27th November 2013, 19:26   #20980  |  Link
madshi
Registered Developer
 
Join Date: Sep 2006
Posts: 9,140
Quote:
Originally Posted by leeperry View Post
Same story with the new rendering path
Weird, must be the drivers, I guess.

Quote:
Originally Posted by ryrynz View Post
Anyone had exclusive mode just freeze? Not sure if it's related specifically to my system or not.
Audio continues to play; if I change to windowed mode it continues to play fine... then I can go back into exclusive mode and everything is normal.
I did briefly see some red text in the top left, likely reporting a Direct3D error; might be a tough issue to track down.
Yes, might be tough to track down, especially if you can't reliably reproduce it.

Quote:
Originally Posted by luke823 View Post
Hi all,
If you're interested to see a new media player that supports madvr internally, please take a look at Media Browser Theater:

http://forum.doom9.org/showthread.php?t=169768
Cool!

Quote:
Originally Posted by Ver Greeneyes View Post
Running the TPG in fullscreen mode during calibration or profiling, on particularly dark patches I see random clumps of pixels with a slight greenish hue that change roughly every half a second - not single pixel noise of a neutral colour that changes rapidly as I would expect. Is this working as intended?
Bug. Will be fixed in the next test build.

Quote:
Originally Posted by dansrfe View Post
I've observed sharp drops in the render queue with artifact removal at default settings which results in dropped frames every ~5 seconds. This tends to happen only in high motion sequences like action/dance sequences.
Bug. Will be fixed in the next test build.

Quote:
Originally Posted by huhn View Post
I got a crash and I can't send the bug report; I just got this:
http://abload.de/img/bugreportbugdysj2.png

The problem was the old MPC-HC version, so no big deal. These old versions are still needed because newer versions can't play PS2 LPCM audio. Is the bug report tool part of MPC-HC or madVR? Did I just try to use an old version that simply isn't supported anymore, or was it broken?
The bug report tool is part of madVR. If you can't send the bug report with the built-in send functionality, just press Ctrl+C while the error window is still visible; then you have the bug report in the clipboard and can manually upload it somewhere.

Quote:
Originally Posted by secvensor View Post
Similar the project is frozen.
What do you mean?

Quote:
Originally Posted by Niyawa View Post
"Hi, "fastSubtitles" is not listed in the commented list of available settings in mvrInterfaces in the latest available version. Do you plan on keeping this setting at that specified name?"
Yes, sorry, forgot to add that to the interface.

Quote:
Originally Posted by Niyawa View Post
Also a question of my own: the option "use a separate device for presentation (Vista and newer)" - does it have any downsides?
If it works with your specific OS/GPU/driver setup then it's usually beneficial to enable it. At least it rarely harms. There have been some reports about problems with this option, though. E.g. some AMD/NVidia laptop users with a shared AMD/NVidia <-> Intel GPU had problems. Also the latest Intel drivers don't play nice with this option. These are all GPU driver problems, though, and not madVR's fault.

Quote:
Originally Posted by vivan View Post
On systems with Optimus it was leading to a freeze after 30-40 seconds of playback if you were using the nVidia GPU for rendering.

UPD: tested just now; it seems this issue is now gone.
Good to hear!!

Quote:
Originally Posted by Niyawa View Post
Also on an unrelated note, have any of you guys heard of an issue with madVR where video only appears 5 seconds after audio has already started playback?
I've heard about this only if there's a display mode change and the display needs 5 seconds to resync.

Quote:
Originally Posted by kerimcem View Post
exclusive mode bar exit(flybar) button added?
Not sure what you mean? Flybar is a feature of MPC-BE, I think. If you want to have a button added to that, you need to talk to the MPC-BE developers.

Quote:
Originally Posted by NicolasRobidoux View Post
IMHO the "deblur" you currently use for Jinc3, equal to roughly .98 (which happens to be the same as the default for ImageMagick, by no accident) is right around where "the perfect balance" happens. And this seems to be confirmed by "consensus".

However, and I'm pretty sure you've tried some values before (and most likely I've made the same suggestion before, in this forum no less), you could quiet some voices that want "sharper" by allowing smaller deblurs. I'm not sure implementing features to "quiet" complaints is a good development strategy, but just in case...
Thanks for the suggestion. However, in order to get near to Lanczos sharpness I would have to increase the deblur factor so much that too much aliasing would creep back in for my taste. I'd rather stick with the current settings. If some people find Jinc too soft, they can use a separate sharpening algorithm. I think that makes more sense.

Quote:
Originally Posted by lansing View Post
I'm trying to play an old mpeg1 video in mpc, but it returns an error "madVR reports: creating Direct3D device failed (80070057)"
Are you using hardware or software decoding? Which video decoder? Does this only occur with this one video? Or also with other videos?

Quote:
Originally Posted by TheElix View Post
I hope "cnvrgc" feature isn't forgotten.
Definitely not. Need/want it myself.
madshi is offline   Reply With Quote