Here is a side note: digital cameras will alter colour depending on the lighting conditions (the energy spectrum of the incident light).
It is not a problem with digital cameras; actually it is a problem with our sight. Cameras record the real colours present reasonably accurately. We are accustomed to seeing them after they pass through the adaptive filtering applied by our brains, so we prefer to modify the picture so that it looks as if it were taken in daylight.
White balance is meant to compensate for it, but does a poor job.
It does a poor job if the camera has to guess, but there are ways of making it work well: with uniform lighting and a white reference in the picture, no guessing is needed.
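For the curious, here is a minimal sketch of that white-reference correction, assuming a linear RGB image and a hypothetical `white_patch` crop of the white object (the names are illustrative, not from any particular library):

```python
import numpy as np

def white_balance_from_reference(image, white_patch):
    """Scale each channel so a known-white object renders as neutral.

    `image` and `white_patch` are float RGB arrays in [0, 1];
    `white_patch` is a crop of the white reference in the frame.
    """
    # Colour the white object actually recorded under this light.
    measured = white_patch.reshape(-1, 3).mean(axis=0)
    # Per-channel gains that make the measured white neutral.
    gains = measured.max() / measured
    # Note: the gains are multiplicative, so a true black (0, 0, 0)
    # pixel is left unchanged by white balance.
    return np.clip(image * gains, 0.0, 1.0)
```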
That is why I am including the black-oxide-coated nail to serve as a "true black" reference point in every image.
True black (0,0,0) is identical no matter what the white balance is (white balance is a per-channel multiplication, and zero times any gain is still zero), so I don't see how it is going to help.
Generally I do not like to tell smart people that they are wrong, but in this case you are very wrong on many points. I have spent several years working with photography and a fully calibrated workflow, so here is how it actually works:
The camera uses a Bayer sensor, which is a grid of cells with an R, G, or B colour filter on top of each pixel. Each pixel measures the intensity of one of those three colours, and that is what gets stored in a RAW image: it has no "colours" on each pixel, just intensities of R, G, or B. The two missing colours at each pixel get interpolated from neighbouring pixels (demosaicing) when the image is developed. When the camera, or your computer processing the RAW file, creates the JPG, a particular "colour" is assigned to each pixel. That colour can be in one of many colour spaces (sRGB, aRGB, Lab, etc.), and the algorithm that generates the colours can take into account the white balance, which is an attempt to compensate for the energy spectrum of the incident light when the picture was taken, or it can do full-fledged colour profiling using something like a ColorChecker Passport from X-Rite (yes, I have one). White balance is meant to correct for the energy spectrum of the incident light via a colour adjustment to a true grey scene, while colour calibration also adjusts for any bias present in the sensor.
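A minimal sketch of that demosaicing step, assuming an RGGB mosaic layout and plain bilinear interpolation (real RAW converters use much fancier algorithms):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic.

    `raw` is a 2-D float array of sensor intensities; each pixel
    measured only one colour, so the other two are interpolated from
    neighbouring pixels of that colour. Returns an (h, w, 3) array of
    per-pixel RGB intensities (still no "colours" assigned).
    """
    h, w = raw.shape
    # Kernel for R/B: nearest same-colour samples sit 2 apart.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    # Kernel for G: nearest same-colour samples are the 4 axial neighbours.
    k_g = np.array([[0.0,  0.25, 0.0 ],
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0 ]])
    out = np.zeros((h, w, 3))
    for ch, offsets, k in [(0, [(0, 0)], k_rb),          # R
                           (1, [(0, 1), (1, 0)], k_g),   # G
                           (2, [(1, 1)], k_rb)]:         # B
        plane = np.zeros((h, w))
        mask = np.zeros((h, w))
        for dy, dx in offsets:
            plane[dy::2, dx::2] = raw[dy::2, dx::2]
            mask[dy::2, dx::2] = 1.0
        # Normalised weighted average of the nearby same-colour samples;
        # measured pixels are preserved, holes are filled by neighbours.
        out[..., ch] = convolve(plane, k) / convolve(mask, k)
    return out
```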
Your monitor has an RGB colour space (well, close to it; many monitors can't display the full RGB space), which is known as an additive space, because you add all colours to make white. A printer uses CMYK, or a multi-ink extension of CMYK, which is known as a subtractive space: you start from white paper and subtract from it, mixing C, M, and Y to make black. Going from the RGB space of the stored file to the RGB space of the monitor or the CMYK space of the printer is governed by the rendering intent, which determines what happens to colours that exist in the file but cannot be displayed on the output device (screen or printer). You often see on expensive monitors statements like "full 10-bit RGB", which means each pixel can have 1024 levels of intensity per channel. In comparison, consumer DSLRs have 12-bit sensors, while pro DSLRs have 14-bit sensors, so each pixel can record 2^12 (4096) or 2^14 (16384) unique intensity levels.
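To make the additive/subtractive distinction concrete, here is a deliberately naive RGB-to-CMYK conversion (it ignores real printer profiles and rendering intents entirely), plus the bit-depth arithmetic:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion from additive RGB (0..1) to subtractive CMYK.

    Real printer profiles are far more involved; this only shows the
    additive/subtractive relationship between the two spaces.
    """
    k = 1.0 - max(r, g, b)           # how far the pixel is from paper white
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black: ink only on the K channel
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# Bit depth just sets how many intensity levels each channel can record:
for bits in (10, 12, 14):            # monitor, consumer DSLR, pro DSLR
    print(f"{bits}-bit: {2 ** bits} levels per channel")
```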
In something like Photoshop you can use tools like "curves" to try to correct for colour shifts by selecting black, grey, and white points. This is why having a black nail is useful as a colour reference point, since manganese phosphate itself is supposed to be almost black. Of course, even if I took a picture with my X-Rite colour checker in the frame and adjusted the colours so that the image is colour accurate on my colour-calibrated monitor and prints colour accurately on my colour-calibrated Epson 3880, I have no way of predicting how it will display on your uncalibrated monitor, which is why having a black reference point in the picture is useful.
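A minimal sketch of what selecting black and white points does, assuming a simple linear remap per channel (Photoshop's curves can of course be non-linear); the sampled reference values below are hypothetical:

```python
import numpy as np

def levels_from_references(image, black_ref, white_ref):
    """Linear 'curves'-style correction from known reference points.

    `black_ref` and `white_ref` are the RGB values the known-black
    object (the nail) and a known-white object actually recorded in the
    shot; each channel is remapped so those land on true black and true
    white. Assumes white_ref > black_ref in every channel.
    """
    black_ref = np.asarray(black_ref, dtype=float)
    white_ref = np.asarray(white_ref, dtype=float)
    # Per-channel linear stretch: black_ref -> 0.0, white_ref -> 1.0.
    corrected = (image - black_ref) / (white_ref - black_ref)
    return np.clip(corrected, 0.0, 1.0)

# Hypothetical values sampled off the nail and a white card in the frame:
# fixed = levels_from_references(img, black_ref=[0.06, 0.05, 0.08],
#                                white_ref=[0.93, 0.90, 0.87])
```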