The switch to the Carpenter method after HOT-10 was a major step forward in ensuring data quality. Although the Strickland and Parsons (1972) method was more recent, the Carpenter (1965) method was much more rigorous in its minimization of error. One large source of error identified by Carpenter was the loss of iodine through volatilization, particularly during sample transfer. Titrating aliquots of a larger sample, as called for in the Strickland and Parsons method, risks iodine loss during sample transfer and can therefore yield artificially low dissolved oxygen concentrations. Aliquots can also introduce variance into the results if the original sample was not completely homogeneous. The whole-bottle titrations of the Carpenter method eliminate both the iodine loss associated with sample transfer and any questions of sample homogeneity.

     Consideration of sample temperature effects was also an improvement. Seawater samples in the Niskin bottles tend to warm slightly as they are raised to the surface. Because a fixed volume of sample is collected, ignoring any temperature change, and the corresponding density change, could result in significantly different oxygen concentrations. Temperature measurements made during HOT-11 and HOT-12 showed a maximum temperature change of 6 °C, which translates to a maximum dissolved oxygen concentration error of approximately 0.06%. While this value is relatively low, and is less than the 0.1% precision of the Carpenter method, it could have significant effects on the data when combined with error from other sources. Moreover, this error is a systematic bias rather than random, because bottles only warm with time.
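The magnitude of this temperature effect can be sketched with a simple first-order calculation. The thermal-expansion coefficient used below (1.0e-4 per °C) is an assumed, order-of-magnitude value for cold seawater, not a measured one; the point is only to show how a 6 °C warming maps to roughly a 0.06% concentration error.

```python
# Rough sketch of the temperature/density effect described above.
# ALPHA is an assumed, order-of-magnitude thermal-expansion coefficient
# for cold seawater; it is illustrative, not a measured value.

ALPHA = 1.0e-4  # assumed volumetric thermal-expansion coefficient (1/degC)

def relative_o2_error(delta_t_c: float, alpha: float = ALPHA) -> float:
    """Fractional error in dissolved O2 concentration if the sample
    warms by delta_t_c but the density change is ignored.

    A fixed-volume sample that warms expands, so the mass of seawater
    actually titrated is smaller than assumed; to first order the
    relative error is alpha * delta_t_c.
    """
    return alpha * delta_t_c

# Maximum warming observed on HOT-11/12 was about 6 degC:
err = relative_o2_error(6.0)
print(f"relative error ~ {err * 100:.2f} %")  # ~0.06 %
```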

     The Carpenter (1965a) method also reduced error by optimizing reagent concentrations and by careful preparation of the reagents to reduce the chance of iodine production or consumption caused by reagent contaminants. The current HOT/BEACH protocols (Karl et al. 1990) for dissolved oxygen determination refer the reader to the Carpenter (1965) paper for optimum reagent concentrations and preparation techniques; however, there is no documentation ensuring that these procedures are followed every time.

     It is difficult to assess the effect of each change individually; overall, however, Carpenter showed that results from his technique differed from those of other commonly used methods by as much as 5%, a discrepancy he attributed to deficiencies in those methods.

     An additional point of concern addressed by Carpenter (1965) was the method of determining the titration endpoint. The method used through HOT-30 was visual endpoint detection with a starch indicator solution. The limitation of this method is that the sensitivity of the human eye is finite and varies from person to person: one analyst might consider a titration complete while a second still detects a hint of color and continues titrating, and the two would calculate different dissolved oxygen concentrations for the same sample.

     Methods of endpoint determination were previously addressed by Bradbury and Hambly (1952) and by Knowles and Lowden (1953), who found that the sensitivity of the visual starch endpoint was only 10 microequivalents per liter. The authors recommended an amperometric method with a sensitivity of 0.08 microequivalents per liter, but noted that it depended on the reliability and variability of the electrodes. Redox probes rely on porous membranes and reference solutions, both of which can degrade over time (see Valenciano, 2003, Appendix A). The stability of modern electrodes has greatly improved over the years due to improvements in the construction of the porous glass membrane (Appendix A), although there are no data available to quantify the level of improvement. The change to a potentiometric endpoint beginning with HOT-31 represents a substantial improvement in data quality.

     The most sensitive method of endpoint determination is the measurement of ultraviolet light absorption, with which a sensitivity of 0.015 microequivalents per liter can be achieved (Carpenter 1965). Carpenter suggested, however, that the precision of this method was beyond what was necessary.

     Automation of the titration was a further improvement. A common mistake, especially with the visual starch method, is to miss the endpoint and add too much titrant. This can be corrected by adding a known amount of iodate standard and retitrating, but the procedure is tedious and introduces more opportunities for human error. Computer detection of the endpoint and control of the titrant volume through a Dosimat greatly reduce the chance of passing the endpoint. An additional benefit is that person-to-person variability is removed from the endpoint determination.
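The arithmetic behind the overshoot correction is straightforward: the standard aliquot liberates a known number of equivalents of iodine, which consumes an equivalent volume of titrant, so the titrant attributable to the sample is the total delivered minus the standard's share. The sketch below illustrates this with hypothetical numbers; the function name and values are assumptions, not part of the documented protocol.

```python
# Sketch of the overshoot correction described above: after passing the
# endpoint, an aliquot of standard liberates a known amount of iodine,
# and the sample is retitrated. The net titrant attributable to the
# sample is the total delivered minus the standard's equivalent.
# All names and values are hypothetical illustrations.

def corrected_titrant_ml(total_titrant_ml: float,
                         standard_meq: float,
                         thio_normality: float) -> float:
    """Net titrant volume (mL) for the sample alone.

    standard_meq   -- milliequivalents of iodine liberated by the standard
    thio_normality -- thiosulfate titrant normality (meq/mL)
    """
    standard_equiv_ml = standard_meq / thio_normality
    return total_titrant_ml - standard_equiv_ml

# Example: 1.250 mL delivered in total, standard liberates 0.0100 meq,
# thiosulfate is 0.2 N (0.2 meq/mL) -> standard consumes 0.0500 mL.
net = corrected_titrant_ml(1.250, 0.0100, 0.2)
print(f"net titrant for sample: {net:.4f} mL")  # 1.2000 mL
```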
