4th April 2011, 09:44   #137
hello_hello
Quote:
Originally Posted by Ghitulescu
Not at all. See NEC ND-3540 vs. Plextor PX-750. The percentages represent the good sectors (green) vs. the unreadable ones (red). In the green zone are also included the corrected sectors (ie sectors that were faulty but the drive used the CRC algorithm to recover the valid data).
Have you got an example that's relevant to the discussion, or one that means something? So, some drives could read a bad disc better than others. No news flash there. What disc? Why was the quality bad? Was it just scratched, was it a bad burn, was it even a burned disc or was it a pressed disc?

It seems to me you've only offered a graph showing that some drives can read a poor quality disc better than others, with nothing to say whether it was even a burned disc or what sort of quality each drive would have reported. No doubt the drives that could read more of the disc would have reported a better quality than those that could read less, but do you really think your graph proves any of the drives would have reported the disc quality as anything but low?

I think you'll find I've already acknowledged that some drives can read poor quality discs better than others and that quality scores will vary as a result, but I still maintain it doesn't negate being able to test discs for quality, even though it's not an exact science, and your graph does nothing to prove otherwise.

Do you have anything that actually relates to testing a disc for quality and shows the results vary so much between drives that a disc quality test is meaningless? i.e. reporting PI errors and failures etc.
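
For anyone wondering what those scans boil down to, here's a rough, made-up sketch (Python, not taken from any particular tool) of the kind of summary a PIE/PIF scan produces. The 280 and 4 figures are the commonly cited DVD spec ceilings for PI errors summed over 8 ECC blocks and PI failures per ECC block; KProbe, CD-DVD Speed and the like each apply their own scoring on top of numbers like these, and the sample data here is invented.

Code:
# Hypothetical sketch of what a PIE/PIF scan summary boils down to.
# Assumes the scan has been exported as (position, pie, pif) samples, where
# "pie" is the PI error sum over 8 ECC blocks and "pif" is the PI failure
# count per ECC block. The 280/4 ceilings are the commonly cited DVD spec
# limits; real tools apply their own scoring on top of figures like these.

def summarise_scan(samples):
    max_pie = max(pie for _, pie, _ in samples)
    max_pif = max(pif for _, _, pif in samples)
    return {
        "max_pie": max_pie,
        "max_pif": max_pif,
        "total_pie": sum(pie for _, pie, _ in samples),
        "within_spec": max_pie <= 280 and max_pif <= 4,
    }

# Three made-up samples from an imaginary scan:
print(summarise_scan([(0, 12, 0), (1024, 35, 1), (2048, 260, 3)]))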

Quote:
Originally Posted by Ghitulescu
As long as one uses the same drive to check the discs, no problems, s/he has at least a reference. But combining data from various drives, with various discs, and various programs (I use Plextools and pxscan, some use KProbe, other use CDSpeed and so on) is not a good practice IMHO.
Well, I'd assume most people would generally be using the same drive and the same software to test their discs.
However, there seems to be no logical argument offered as to why combining data from various drives, discs and programs is not good practice. If disc testing isn't an accurate science, surely having more than one reference makes it somewhat more accurate than only having a single reference, the accuracy of which is unknown?
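
Just to illustrate that point (a made-up sketch, not how any particular tool combines results): if the same disc is scanned on two or three drives, something as simple as a per-position median of the PIE curves gives a consensus reading that no single drive's quirks dominate. The drive names and numbers below are invented.

Code:
# Made-up illustration of the "more than one reference" idea: take per-position
# PIE curves for the same disc from several drives and use the per-position
# median as a consensus, so no single drive's reading ability dominates.
# Drive names and values are invented for the example.
from statistics import median

scans = {
    "drive_a": [12, 30, 45, 280, 60],
    "drive_b": [15, 28, 50, 190, 55],
    "drive_c": [10, 33, 40, 240, 70],
}

consensus = [median(column) for column in zip(*scans.values())]
print(consensus)  # [12, 30, 45, 240, 60]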

Last edited by hello_hello; 4th April 2011 at 09:56.