Also, another note: these targets are 16-bit grayscale in “Generic Gray Gamma 2.2.”
We are testing Generic Gray Gamma 2.2 (equivalent to the sRGB gamma) against Gray Gamma 2.2 (equivalent to the Adobe RGB gamma) with 16-bit gray targets as I write this. In recent months Generic Gray Gamma 2.2 has stayed consistent with our legacy targets, but the differences are slight.
best,
Walker
Thanks for this. At present I am in the middle of a (metaphorical, non-meteorological) tropical cyclone. If it passes in the next few days then I may be able to spend a small amount of time printing and measuring.
This continues to puzzle me. “Known bugs” - known to whom? I read all the main forums on this and I have not seen any references to such problems, other than your comments. Is Roy aware of this? I may well ask him about it on Yahoo-QTR. I know I’m in a minority in still using Measure Tool, so I get the sense that most people are using i1Profiler, and where are the reports of these bugs?
Previous comment still applies. You and no-one else? How was this problematic measurement data created? By hand? I’ve not seen any such bug or reports of it.
Your cgats is simpler than mine, but the structure is essentially the same. They’re both tab-delimited text files. I open yours in a text editor (Notepad++) and I don’t see the first Lab number 96.44 under the LAB_L label either! That’s the nature of tab-delimited files. Open both in Excel or similar and you will see that in both cases the Lab numbers are in the LAB_L column. The various QTR droplets have always been able to handle tab-delimited files. I don’t accept that your cgats file is any more correctly formatted than mine.
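For what it’s worth, the tab question can be checked mechanically. Here is a minimal Python sketch (the file content and field names below are invented for illustration, not copied from either of our actual files) showing that a parser splitting on tabs finds the LAB_L column regardless of how the labels happen to line up visually in a text editor:

```python
# Minimal sketch: parse the DATA_FORMAT and DATA sections of a
# tab-delimited CGATS-style file and pull out the LAB_L column.
# SAMPLE is a made-up two-patch file, purely for illustration.
SAMPLE = """BEGIN_DATA_FORMAT
SAMPLE_ID\tLAB_L\tLAB_A\tLAB_B
END_DATA_FORMAT
BEGIN_DATA
1\t96.44\t-0.20\t1.10
2\t50.00\t0.10\t0.30
END_DATA
"""

def lab_l_values(text):
    lines = text.splitlines()
    # Collect the field names between BEGIN_DATA_FORMAT and END_DATA_FORMAT.
    fmt_start = lines.index("BEGIN_DATA_FORMAT") + 1
    fmt_end = lines.index("END_DATA_FORMAT")
    fields = []
    for line in lines[fmt_start:fmt_end]:
        fields.extend(line.split("\t"))
    col = fields.index("LAB_L")
    # Pull that column from every row between BEGIN_DATA and END_DATA.
    data_start = lines.index("BEGIN_DATA") + 1
    data_end = lines.index("END_DATA")
    return [float(line.split("\t")[col]) for line in lines[data_start:data_end]]

print(lab_l_values(SAMPLE))  # [96.44, 50.0]
```

The column index comes from counting tabs in the DATA_FORMAT block, so on-screen alignment never enters into it.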
I have tried 51 steps. I thought that the relinearisation that it produced was excessive in the number of twists and turns. I thought it was picking up too much random printer variation, and this was using a 51x3 chart to average out the variation as much as possible. This seemed to be the general consensus on 51 steps on other forums where these things are discussed. I will try your targets out of interest as soon as I get some clear air, but you will understand that I remain sceptical.
I’m finding it difficult to have a rational conversation with you, Brian. There are a million ways to get to the correct place in the end, so just get on the bus.
///
My pared-down cgats with the dropped-down labels fixed a persistent issue with the QTR quad linearizer not wanting to read normal cgats files. I’m compiling the data and evidence. Known (in-lab). When I hunt it down completely I’ll work it out with Roy et al. Remember the past few months when I’ve been working on “validating” the linearizer? This is one of many variables that I’m working on.
///
Related to measurement error and squiggly lines, etc.: most of these issues come down to reflectance problems, white creeping into the sensor from the edge of a patch, or a poorly timed gap between patches due to hardware, patch, printer, or user error, etc. At IJM we’ve been doing 256 patches since before the i1, so many would ask: how does one do that when everyone else is being all skeptical about only 51? Well, all I can say is, error correction.
Some error correction can be done by measuring again and again. Other error correction can be done by averaging the readings (not the same as dumbing down the readings). Still more can be done by interpolating between jagged points when there are a lot of data points. There are a ton of ways to error-correct, and averaging is not always the most accurate. The hard drive in your computer (if it spins) did a few thousand error corrections in the time it took you to read this sentence . . . but it will make this sentence appear exactly the same a million times while doing that underlying error correction all the while.
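To make the averaging point concrete, here is a small Python sketch (the readings are invented for illustration) of one reason a plain mean is not always the most accurate combiner: when one pass over a patch catches white from a gap, a median rejects that outlier while a mean smears it into the result.

```python
from statistics import median

# Three passes over each of three patches (made-up L* values).
# The middle pass on patch 2 caught white from a mistimed gap.
readings = [
    [96.4, 96.5, 96.4],   # patch 1
    [72.1, 85.0, 72.2],   # patch 2: one bad pass
    [50.0, 50.1, 49.9],   # patch 3
]

# Median per patch throws away the single wild reading entirely,
# whereas a mean of patch 2 would land around 76.4.
corrected = [median(p) for p in readings]
print(corrected)  # [96.4, 72.2, 50.0]
```

This is only one of the error-correction strategies mentioned above; repeated measurement and interpolation across jagged points attack different failure modes.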
RE: Lab_L alignment, just download and open in Excel and you’ll see what I mean . . . . It’s a cgats file after all.
best,
Walker
I’m struggling to understand why we are discussing an issue with QTR here. It would seem appropriate to, at least, mention it on the QTR forum. AFAIK, nothing has been posted there. I have linearised lots of papers with the droplet with nary an issue, and measured the results to prove the linearisation. Maybe, you can explain the issue. When Dana was active on this forum, she advocated using the linearisation droplet. What is the latest advice please?
I am well aware of the nature of the charts for the master curves. That’s why I gave the 51x3 a try - I thought that more would be better. When it wasn’t, I came to the view that the 256 patch master curves are intended for use with the sophisticated, proprietary Piezography curve creator, which must be quite a different beast to the simple QTR linearise function and relinearisation droplet. That’s why I’m sceptical - curve creation is not relinearisation, or so it seemed to me. And you’re not the only one doing error correction.
By the way, Roy Harrington is also sceptical: Yahoo! Groups
As noted in post #22, I did (open both yours and mine in Excel) and I didn’t (see any fundamental differences in alignment). Both files have the columns delimited by tabs, and both files have the luminosity data in the same column as the LAB_L label. There are certainly differences in some of the headers, but my headers from MT haven’t caused any droplet problems for me.
What an odd comment! I’m perfectly happy with the bus I’m on, thanks, as it takes me to satisfying places. That said, I’m always willing to learn and try something new. I just need a clear exposition, and the opportunity to understand why someone else’s experience differs to mine and to that of other users.
This exchange started with a simple question - what about Measure Tool? Nothing in this thread gives me a reason not to use it.
I apologize for “get on the bus.” It was a very long day yesterday with a million gotchas coming my way all day.
If you look at the cgats file, you’ll notice that the “Lab_L” label is directly above the first Lab measurement row and not in the row that it would normally be in. That is what I was talking about. I had to drop the label to sit exactly above the first measurement row for the linearizer to work (as stated in previous posts).
///
Curve creation and curve linearization are two different things. Roy’s linearizer is actually fairly good and fairly close to ours; however, it does not have error correction built in and only allows a maximum of 151 patches. Therefore, error correction must be applied to the measurements before the linearizer is run, and this is traditionally done by averaging multiple readings. I think there may be a better way to do this, and it’s something we’re working on right now with readings from a platinum print. We are able to linearize 129 patches with Roy’s linearizer and 256 with another linearizer that I built myself, based around negative profiling, but the key to this is proper error correction of the measurements.
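As one illustration of "error correction applied before the linearizer is run," here is a deliberately simple Python sketch (this is not Roy's method or the IJM method, just a generic example of the idea): jagged measurements are smoothed with a three-point moving average, with the endpoints (paper white and maximum ink) left untouched, before the cleaned series would be handed to a linearizer.

```python
# Sketch: pre-linearizer smoothing of a jagged L* series.
# Three-point moving average; endpoints are preserved so paper white
# and max-ink values are never altered. Values are invented.
def smooth(values):
    out = [values[0]]
    for i in range(1, len(values) - 1):
        out.append((values[i - 1] + values[i] + values[i + 1]) / 3.0)
    out.append(values[-1])
    return out

# A non-monotonic bump at index 2 (95.0 > 92.0) mimics a bad reading.
measured = [100.0, 92.0, 95.0, 80.0, 70.0, 60.0, 50.0]
print([round(v, 2) for v in smooth(measured)])
# [100.0, 95.67, 89.0, 81.67, 70.0, 60.0, 50.0]
```

A real pipeline would use something less blunt than a box filter (and would check monotonicity afterward), but the principle is the same: clean the measurement data first, then linearize.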
Regards,
Walker