In general, if you are determining concentration by standard addition or standard subtraction, the uncertainty depends on the relationship between your analyte and whatever you are adding.
In standard addition you add more of your analyte, and the uncertainty in the measured change in absorbance comes mainly from the solvation of that extra analyte. If you define the tipping point as the point where the error increases sharply, it would be at suspected high initial analyte concentrations, where you risk incomplete dissolution of the added analyte. Make sure you reach thermodynamic equilibrium after each addition: it is generally accepted that readings taken right after dissolution run slightly higher than the compound's longer-term solubility would suggest, because kinetic effects compete with thermodynamic equilibrium.
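As a minimal sketch of how the spiked readings are usually turned into a concentration (assuming a linear response; the numbers and variable names below are made up for illustration, not taken from your data):

```python
import numpy as np

# Hypothetical multi-point standard addition data.
added_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # standard added, mg/L
absorbance = np.array([0.215, 0.385, 0.556, 0.720, 0.894])  # measured signals

# Fit signal = slope * c_added + intercept.
slope, intercept = np.polyfit(added_conc, absorbance, 1)

# Extrapolate to zero signal: the magnitude of the x-intercept is the
# analyte concentration in the measured (diluted) solution.
c_unknown = intercept / slope
print(f"Analyte concentration in the measured solution: {c_unknown:.2f} mg/L")
```

Incomplete dissolution of the spikes would show up directly in this fit as curvature or a low slope, which is exactly where the extra uncertainty at high analyte concentrations comes in.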
In standard subtraction you add a precipitating or complexing agent to decrease the concentration of your analyte, and the uncertainty is associated mainly with the chemical equilibrium of the precipitation or complexation (assuming you are able to filter out or spectroscopically separate the complex/precipitate from your analyte). By my earlier definition, the tipping point in this case is the region of suspected very low analyte concentration, where you may end up adding more complexing/precipitating agent than needed.
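To make the equilibrium point concrete, here is a minimal sketch with hypothetical numbers (a 1:1 precipitate with an AgCl-like Ksp) comparing how much analyte actually precipitates with the stoichiometric amount of agent added; the shortfall is negligible at higher concentrations but can wipe out the subtraction entirely at very low ones:

```python
import numpy as np

K_SP = 1.8e-10  # assumed Ksp for an AgCl-like 1:1 precipitate, mol^2/L^2

def precipitated(c_analyte, c_agent, ksp=K_SP):
    """Amount (mol/L) of a 1:1 precipitate MX that forms at equilibrium.

    Solves ([M]0 - p)([X]0 - p) = Ksp for p, taking the physical root.
    Returns 0 if the ion product never exceeds Ksp (no precipitation).
    """
    if c_analyte * c_agent <= ksp:
        return 0.0
    # Quadratic: p**2 - (M0 + X0)*p + (M0*X0 - Ksp) = 0
    b = c_analyte + c_agent
    disc = b**2 - 4 * (c_analyte * c_agent - ksp)
    return (b - np.sqrt(disc)) / 2

for c0 in (1e-3, 1e-5, 1e-7):   # suspected analyte concentrations, mol/L
    agent = 0.5 * c0            # aim to subtract half of the analyte
    p = precipitated(c0, agent)
    print(f"C0 = {c0:.0e}  intended removal = {agent:.1e}  actual = {p:.2e}")
```

With these made-up numbers the intended 50 % subtraction works almost perfectly at 10⁻³ mol/L but does not happen at all at 10⁻⁵ mol/L, which is the equilibrium uncertainty I mean.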
Both of these methods are considered more accurate than measuring the concentration directly, but I would expect standard subtraction to carry the greater uncertainty, on top of already being harder to calculate. In my experience, an analytical lab deals with small unknown analyte concentrations far more often than very high ones, so I would stick with standard addition.