{sorry to double reply -- lots to write}
From your other post on this topic (http://www.chemicalforums.com/index.php?topic=60128.msg214938#msg214938) you can see this happening: your "blank" has less sample in it than your real "sample." How you're going to normalize that, I don't know. But if the blank were spiked with a standard, it would be simple.
Usually, system suitability is used for a chromatographic analysis, say for a drug sample. For example, when testing for drugs of abuse, the blanks, standards, and samples are all spiked with something like a cocaine metabolite from the standard. If the sample gives the same amount as the blank, you can say there's none present. If there's more, you can use the blank value to normalize and determine the amount from the spiked standard. If no peaks show up in the blank and the standard, you've done something wrong, and the system will stop the run to tell you so. The reason for all this is that you can't run a clean blank, an unspiked standard, and a clean sample, see a peak only in the sample, and say, "By the retention time, this peak in the sample is the same as the peak in the standard" -- that logic just won't fly in a court case.
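If it helps to see the arithmetic behind that normalization, here's a rough sketch in Python. To be clear, the numbers, variable names, and spike levels are all made up for illustration; they're not from your method or any real one.

[code]
# Illustrative only: hypothetical numbers, not from any real method.
# Every run (blank, standard, sample) is spiked with the same amount of
# internal standard (IS), so analyte peak areas can be normalized to the
# IS peak area run-to-run.

def normalized_response(analyte_area, is_area):
    """Analyte peak area divided by the internal standard peak area."""
    return analyte_area / is_area

# Spiked standard: known analyte concentration, used to get a response factor.
std_conc = 100.0  # hypothetical, e.g. ng/mL of analyte in the spiked standard
std_norm = normalized_response(analyte_area=5000.0, is_area=10000.0)
response_factor = std_conc / std_norm  # concentration per unit normalized response

# Blank and sample, both carrying the same IS spike.
blank_norm  = normalized_response(analyte_area=50.0,   is_area=9800.0)
sample_norm = normalized_response(analyte_area=2600.0, is_area=10100.0)

# Subtract the blank's normalized response, then convert with the response factor.
corrected = max(sample_norm - blank_norm, 0.0)
sample_conc = corrected * response_factor
print(f"Estimated analyte concentration: {sample_conc:.1f} ng/mL")

# If sample_norm is no higher than blank_norm, corrected is 0 and you report
# "none detected" -- which is exactly what running the spiked blank buys you.
[/code]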
Of course, none of this usually applies to a UV sample, so I don't really know what's going on. But try asking the people in charge whether this sort of thing is what's going on.