
Field Crops Research 60 (1999) 93-113

Micronutrient bioavailability techniques: Accuracy, problems and limitations

Darrell R. Van Campen, Raymond P. Glahn*

USDA-ARS, U.S. Plant, Soil and Nutrition Laboratory, Ithaca, NY 14853, USA

Accepted 2 September 1998

*Corresponding author. Tel.: +1-607-255-2452; fax: +1-607-255-1132; e-mail: [email protected]

Abstract

Within the scientific agricultural community it is widely known that the total micronutrient content of soils is not a useful measure of the amount of micronutrients 'available' to plants. Thus, soil tests have been developed to determine the amounts of micronutrients in soils available to plants for growth. The same concept applies to plant foods eaten by humans, because not all of the micronutrients in plant foods are available (i.e. bioavailable) for absorption and/or utilization. Plant foods contain antinutrients and promoter substances that can either inhibit or enhance the absorption and/or utilization of micronutrients when eaten. As a result, numerous techniques have been developed to determine the amounts of bioavailable micronutrients present in plant foods when consumed in mixed diets with other dietary constituents that can interact and affect micronutrient bioavailability. Unfortunately, micronutrient bioavailability to humans fed mixed diets is still a confusing and complex issue for the human nutrition community. Our understanding of the processes that control micronutrient bioavailability from mixed diets containing plant foods is relatively limited and still evolving, and it remains the subject of extensive research in many human nutrition laboratories globally. This article reviews some of the numerous methodologies that have arisen to account for the bioavailability of micronutrients in plant foods when eaten by humans. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Trace minerals; Bioavailability; Micronutrients

1. Introduction

Improvement of plant foods as sources of essential mineral nutrients can be accomplished by: (1) increasing the concentration of the nutrient(s) and maintaining bioavailability; (2) maintaining the concentration and improving bioavailability; or (3) increasing both concentration and availability of the selected nutrient(s). Information on the total concentrations of

mineral nutrients in plant foods is extensive, but not always reliable. Generally, there is little, if any, information on the genetic variability between different cultivars of the same plant food species and relatively little is known of the chemical forms of trace elements in either individual foods or in meals. In addition to the 'native' mineral nutrients in plant foods, contamination by dust or soil, cooking water, storage, cooking utensils, equipment used in sample preparation, etc. is often an important issue. For example, Cary et al. (1994) found that as much as 70% of the Fe and 100% of the Cr in certain vegetables could be accounted for
by contamination with soil and dust particles that could not be removed by washing. Another issue involves improvements in analytical instruments that allow routine determination of many elements in the parts per billion range. If large dilutions or very small samples are needed to obtain appropriate concentrations in the analyte, any contamination of samples, either before or during preparation, is magnified when one back-calculates to the concentration in the original sample. In addition to errors in estimates of total mineral nutrient content, estimates of bioavailability may be affected by the presence of contaminants. Mineral nutrients present as contaminants are often not utilized to the same extent as those naturally present in the food (Hallberg and Bjorn-Rasmussen, 1981); thus, contamination affects estimates both of the total amount of nutrients in a food and of their bioavailability. Techniques for increasing the concentrations of essential trace minerals via breeding, genetic engineering, and other methods are covered in other papers of this special issue; thus, the focus of this review will be on bioavailability, particularly on the methods used to estimate bioavailability. There is less information on mineral nutrient bioavailability than on concentrations, but collectively, there is still a large amount of data. Unfortunately, these data were collected by many different procedures and under highly variable conditions. In many cases, there is no way to directly compare results from different methods or from different researchers using essentially the same methods. There is no universally accepted definition of bioavailability; different researchers have defined it in different ways. A definition that has gained fairly wide acceptance defines bioavailability as the amount of a nutrient that is available for absorption in a form

that is physiologically useful. This definition will be used here. Absorption and/or retention of mineral nutrients are often used as indicators of bioavailability. Again, these terms are often defined in different ways by researchers and they are not synonymous with bioavailability. Definitions of absorption, as adapted from Thompson (1965), are presented in Table 1. Losses due to sloughing of skin, perspiration, etc. are ignored in these equations. A complete bibliography of bioavailability would require much more space than is available here; thus, examples and references are meant to be illustrative rather than comprehensive. Readers are referred to other reviews for additional information as well as additional references (Hallberg, 1981; Inglett, 1983; Van Campen, 1983; Lynch, 1984; Mertz, 1984; Solomons and Cousins, 1984; O'Dell, 1985; Forbes et al., 1989; Fairweather-Tait, 1992; Rao, 1994; Ammerman et al., 1995). Amongst trace elements, the amount of information on methodologies for estimating iron bioavailability is, by far, the most voluminous. There is a substantial amount of material for zinc, far less for copper, manganese, chromium and selenium and little or none for the other essential trace elements, especially those required at the parts per billion level.

2. Balance techniques

2.1. Chemical balance
Balance studies were the method of choice for determining trace element absorption before the use of radioisotopes was introduced (McCance and Widdowson, 1937; McCance and Widdowson, 1942). Balance studies represent the difference between intake and excretion.

Table 1
Definitions of absorption

Terminology            Equation (a)                       Equation relative to net retention (R)
Apparent absorption    I - F                              R + U
Net retention          I - (F + U)                        R
True absorption        I - (F - F_e) - (U - U_e)          R + F_e + U_e
Bioavailability        [I - (F - F_e) - (U - U_e)] × B    (R + F_e + U_e) × B

(a) I = intake; F = total fecal excretion; F_e = endogenous fecal excretion; U = total urinary excretion; U_e = endogenous urinary excretion; and B = fraction of retained mineral nutrient that is absorbed in a form that can be utilized for structural or functional purposes.
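As a purely illustrative aid, the short Python sketch below evaluates the quantities defined in Table 1 for a hypothetical set of intake and excretion values; the numbers and variable names are invented for the example and are not data from any study cited here.

```python
# Illustrative calculation of the quantities defined in Table 1.
# All input values are hypothetical and expressed in mg/day.

def absorption_measures(I, F, Fe, U, Ue, B):
    """Return the four Table 1 quantities.

    I  = intake
    F  = total fecal excretion
    Fe = endogenous fecal excretion (F_e)
    U  = total urinary excretion
    Ue = endogenous urinary excretion (U_e)
    B  = fraction of retained nutrient absorbed in a usable form
    """
    apparent_absorption = I - F
    net_retention = I - (F + U)
    true_absorption = I - (F - Fe) - (U - Ue)
    bioavailability = true_absorption * B
    return apparent_absorption, net_retention, true_absorption, bioavailability

# Hypothetical zinc balance: 12 mg/day intake, 9 mg/day fecal (2 mg endogenous),
# 0.6 mg/day urinary (0.5 mg endogenous), 90% of retained zinc in a usable form.
apparent, retained, true_abs, bioavail = absorption_measures(
    I=12.0, F=9.0, Fe=2.0, U=0.6, Ue=0.5, B=0.9)
print(f"Apparent absorption: {apparent:.2f} mg/day")
print(f"Net retention:       {retained:.2f} mg/day")
print(f"True absorption:     {true_abs:.2f} mg/day")
print(f"Bioavailability:     {bioavail:.2f} mg/day")
```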

For many of the trace elements, including iron, zinc and copper, urinary excretion is often deemed insignificant and is ignored. Consequently, many chemical balance studies involving these elements correspond to 'apparent absorption' in Table 1. For those trace elements such as Se and I that are absorbed as anions, urinary excretion is significant and must be determined. The primary advantages of chemical balance methods are that they do not expose subjects to ionizing radiation and are simple in concept. They are still useful in situations where radioisotope facilities are not available or exposure to ionizing radiation is not advisable. Recent examples of balance studies on calcium, iron, zinc and copper include: Kies and Harms, 1989; Morris and Ellis, 1989; Hunt et al., 1990; Johnson and Walker, 1992; Rosado et al., 1992; Swanson et al., 1983; Vijay and Kies, 1994. Although simple in concept, in practice, balance studies require great care if valid results are to be obtained. Errors in determination of either intake or excretion can result in significant errors in absorption estimates. Hegsted (1973) suggested that intake is often overestimated (incomplete consumption) and excretion is often underestimated (incomplete fecal and urine collections). Contamination can be a serious problem with trace element balance studies. Absorption of trace elements, when uncorrected for endogenous excretion, is often very low; thus any systematic errors tend to be magnified. Endogenous excretion can be described as the amount of substance that has been absorbed from the diet prior to or during the balance study and is subsequently excreted into the intestinal or renal lumen and lost via the feces or urine.

2.2. Radioisotope balance
Radioisotope balance techniques for trace elements were introduced in the late 1940s (Dubach et al., 1948) and have been widely used since then (see Bothwell et al., 1979; Weaver, 1988). One of the advantages of radioisotopes is that one can obtain estimates of endogenous excretion. This is particularly useful if there are two radioactive isotopes of the element in question, so that it is possible to correct for endogenous excretion by feeding one of the isotopes and injecting the other. Radioisotope balance studies share some of the problems of chemical balance studies, i.e. errors in
estimates of either consumption or excretion can lead to significant errors in estimates of bioavailability. Measurement of the amount of the radioactive element that is absorbed or retained generally is not affected by contamination but, if the radioisotope does not exchange completely with the nonradioactive element, errors in absorption and retention result. A disadvantage in utilizing radioisotopes in human studies is the exposure to ionizing radiation, which may be particularly inadvisable for some primary target groups, such as pregnant women, infants and young children. Where use of radioactive isotopes is feasible, a combination of isotope balance and chemical balance can yield significantly more information than when either is used alone (Layrisse et al., 1990).

2.3. Stable isotope balance
Stable rather than radioactive isotopes have been used in a number of bioavailability studies. They are subject to the same measurement and contamination problems as chemical or radioisotope balance studies. Advantages and disadvantages of stable isotopes, as compared to radioisotopes, have been summarized by Weaver (1988). Generally, the advantages are: (1) there is no health risk from ionizing radiation; (2) they do not decay, thus labeled materials and samples can be stored indefinitely; (3) they can be used freely in analytical or other sample or food processing equipment without fear of radioactive contamination; and (4) the ever-increasing cost of radioactive waste disposal is eliminated. There are also some disadvantages to stable isotopes. They are more expensive than radioisotopes; however, some of the additional costs may be offset by eliminating disposal costs incurred with radioactive materials. Presumably, in studies using stable isotopes in farm animals, one could end up with a marketable carcass, although this is not an issue that has been widely discussed. Also, the dose sizes required to obtain adequate enrichment with a stable isotope may exceed what is considered a 'tracer' level. This may require either multiple doses or doses that are large enough to influence normal equilibria. Finally, the analyses for stable isotopes are expensive and labor intensive. Primary methods for analyses include neutron activation analyses and mass spectrometry. Mass spectrometric methods include thermal ionization, gas chromatography, direct probe,
fast atom bombardment, inductively coupled plasma and resonance ionization. A comprehensive review of various methods for stable isotope analyses has been published (Weaver, 1988). Each of these methods has advantages and disadvantages, but generally they all require expensive equipment that is beyond the reach of many research programs.

3. Tissue concentrations or indirect indicators

3.1. Iron

3.1.1. Plasma/serum iron
Increases in plasma or serum iron concentrations have been used for estimating iron bioavailability. This technique monitors changes in serum iron concentrations following an oral dose and requires dose levels that would generally be considered pharmacological rather than physiological, i.e. doses of 25 mg or more are generally required to obtain a significant response. Also, the response is dependent upon the iron status of the individual (Cook et al., 1969). In one study, estimates based on either the maximum serum iron concentration or a serum iron tolerance curve correlated well with estimates based on whole-body counting (Ekenved et al., 1976). However, the authors concluded that the technique was most appropriate when comparing relative absorption of different iron sources rather than absolute absorption from a single source.

3.1.2. Plasma/serum ferritin
Changes in the serum or plasma concentration of the iron storage protein, ferritin, generally reflect changes in body iron stores, and a number of studies have included changes in serum ferritin concentrations as an indicator of bioavailability. The results have been mixed. In a study by Miles et al. (1984), serum ferritin was monitored in individuals consuming self-selected diets for a one-year period. These workers monitored iron intake, iron balance and serum ferritin. They did not find any correlation between iron status (as determined by serum ferritin or iron balance) and iron intake. Similarly, in a study of iron-depleted women consuming diets predicted to have low iron availability, ascorbic acid supplementation increased some parameters of bioavailability but did not produce
any change in serum ferritin (Hunt et al., 1990). In other studies, however, changes in serum ferritin that appear to be related to iron availability have been detected (Saarinen and Siimes, 1979; Borch-Iohnsen et al., 1990; Olivares et al., 1990; Fuchs et al., 1993; Hunt et al., 1994). In general, it seems that changes in serum ferritin as an indicator of bioavailability have the advantage of being a relatively unobtrusive technique, but their usefulness may be limited to situations where iron sources are moderately to highly available and long-term studies are feasible. If bioavailability is low and the differences between sources are small, a minimum two-year observation period may be needed (Hallberg, 1981). Other factors such as growth spurts and inflammation affect serum ferritin levels independently of iron status and may limit its use in bioavailability studies of rapidly growing infants or children and in individuals with any inflammatory disorders.

3.1.3. Serum transferrin receptor assay
Iron that is absorbed from the intestinal lumen is transported in the general circulation bound to transferrin. Throughout the human body, cells take up iron by receptor-mediated endocytosis of diferric transferrin. Once internalized, the diferric transferrin vesicle is degraded and the iron is released to the cytosol. The transferrin-receptor complex is then returned to the cell surface for recycling (Huebers and Finch, 1987). The key point of this process for estimating iron status is the transferrin receptor, fragments of which can be detected in the serum by immunoassay techniques. Studies have shown that measurement of serum transferrin receptor is a reliable measure of tissue iron deficiency (Ferguson et al., 1992). As mentioned above, serum ferritin values can be altered dramatically by acute infection or chronic disease; thus, measurement of serum transferrin receptor may be the optimal method for determining iron status. In the study by Ferguson et al. (1992), the results clearly revealed that measurement of serum transferrin receptor is able to distinguish iron deficiency anemia from anemia caused by acute infection or chronic disease. Recent studies have also demonstrated that commercial assays for serum transferrin receptor can vary dramatically (Flowers and Cook, 1997). The most reliable serum transferrin receptor assay is an ELISA currently produced by Ramco Laboratories in Houston, Texas. With the recent increase in acceptance of
this assay, and the ease of the ELISA method, measurement of serum transferrin receptor may become the preferred marker of iron status in human trials.

3.1.4. Hemoglobin repletion
Another technique used for estimating iron bioavailability is hemoglobin repletion. In this method, as proposed by Fritz et al. (1975), test animals are made anemic by feeding them iron-deficient diets. Subsequently, they are fed diets containing iron at graduated levels from a control source and from one or more dietary sources. Repletion of hemoglobin by iron from the test source(s) is compared to repletion by the control source and the results are usually reported as 'relative biological value' (the ratio of hemoglobin repletion by iron from the test source to that from the reference source). Estimates of iron bioavailability from radioiron test meals, known to have differing availabilities, have been compared to those obtained by the hemoglobin repletion method (Johnson et al., 1987). Relative biological values estimated from the hemoglobin repletion assay ranged from 100% to 30%, and those from the radioiron assay ranged from 92% to 29%. Correlation coefficients for the two assays ranged from 0.98 to 0.73. Recently, the hemoglobin repletion procedure has been adapted to a piglet model (Howard et al., 1993). This procedure appears promising in that the gastrointestinal tract of pigs may resemble that of humans more closely than those of other nonprimate models. Secondly, iron depletion can be accomplished easily by withholding the iron injections that are usually given to newborn piglets. The hemoglobin repletion procedure is simple, of relatively short duration and does not require expensive or specialized equipment. Additionally, iron bioavailability is averaged over a number of days rather than being the result of a single dose. Disadvantages include the fact that applying it to human subjects would require that they first be made anemic or that large populations be screened to find an adequate number of anemic subjects, which generally is not possible. As with all animal studies, extrapolation to humans must be done with caution, and the estimates are relative to a standard rather than direct estimates of iron absorption. Often, widely differing quantities of individual food items must be added to the diet to obtain the same iron intake. Thus, in addition to changes in iron concentrations, the concentrations of the noniron components of
the food item are being changed and this could affect bioavailability.

3.1.5. Hemoglobin incorporation
This method is based on the fact that most absorbed iron is incorporated into hemoglobin within 7 to 10 days after oral ingestion (Bothwell et al., 1979). One of the earliest uses of radioisotopes for determining absorption was based on incorporation of radioiron into red blood cells (Hahn et al., 1939). The percent absorption is often calculated by measuring the radioactivity present in whole blood, generally at 14 days after ingestion and assuming a value, often 80%, for the percentage of absorbed iron that is incorporated into hemoglobin. Although total blood volume can be measured directly, it is usually estimated as a percentage (commonly 7%) of body weight. This basic technique was widely used in the early days of hemoglobin incorporation studies (Dubach et al., 1946). When whole blood incorporation is used, the calculation is as follows:

  Absorption (%) = (a_b × BV) / (A × k)                                  (1)

where a_b is the activity per ml of blood, BV the blood volume, A the total radioactivity administered and k the fraction of absorbed radioactivity incorporated into circulating red cells. If the amounts of total and radioactive iron in hemoglobin are determined, the calculation is:

  Absorption (%) = (a_Fe × Hb_Fe) / (A × k)                              (2)

where a_Fe is the radioactivity per mg of hemoglobin iron, Hb_Fe the total amount of hemoglobin iron, and A and k are the same as in the whole-blood equation. One criticism of this method when used with a single isotope is that a red-cell-incorporation factor must be assumed, and incorporation is known to vary with iron status as well as with a number of metabolic disorders. This problem can be overcome by using two radioisotopes, i.e. one isotope is injected and the other is given orally; the incorporation of the injected isotope then provides a precise utilization factor. This technique was first demonstrated in animals (Saylor and Finch, 1953). Subsequently, it has been widely used in human studies and compares favorably with estimates obtained by whole-body counting (Lunn
et al., 1967). If one is willing to assume an incorporation factor, dual labeling can also be used to compare two sources of iron in the same subject at the same time, a technique that is particularly useful in certain situations. In more recent studies, the subjects have often been given a reference dose of radioiron and absorption from the test dose in relation to absorption from the reference dose, rather than 'absolute' absorption, is used as the estimator of bioavailability. Higher correlation coefficients were obtained when comparing dietary iron absorption to absorption from a reference dose than when comparing it to serum ferritin or transferrin saturation (Taylor et al., 1988). This technique appears to be the method of choice in current human studies (e.g. Hulten et al., 1995).
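To make Eq. (1) and the dual-isotope correction concrete, the following Python sketch computes absorption from whole-blood counting for a hypothetical subject. The 80% incorporation factor and the 7% blood-volume estimate are the commonly assumed values mentioned above; the blood activities and dose sizes are invented for illustration only.

```python
# Hemoglobin-incorporation estimate of iron absorption, expressed as a percentage
# of the oral dose (Eq. 1). All blood values and doses are hypothetical.

def absorption_pct(a_b, blood_volume_ml, dose_activity, k):
    """Percent of the oral dose absorbed: 100 * (a_b * BV) / (A * k).

    a_b             = activity per ml of whole blood at ~14 days
    blood_volume_ml = BV, total blood volume (ml)
    dose_activity   = A, total radioactivity administered orally
    k               = fraction of absorbed activity incorporated into red cells
    """
    return 100.0 * (a_b * blood_volume_ml) / (dose_activity * k)

body_weight_kg = 70.0
blood_volume_ml = 0.07 * body_weight_kg * 1000.0   # blood volume taken as 7% of body weight

# Single-isotope estimate using the commonly assumed incorporation factor k = 0.80.
single = absorption_pct(a_b=0.004, blood_volume_ml=blood_volume_ml,
                        dose_activity=100.0, k=0.80)

# Dual-isotope design: a second isotope is injected, and the fraction of the
# injected dose recovered in circulating red cells replaces the assumed k.
a_b_injected = 0.0039          # activity of the injected isotope per ml of blood (hypothetical)
injected_dose = 25.0           # total injected activity (hypothetical)
k_measured = a_b_injected * blood_volume_ml / injected_dose
dual = absorption_pct(a_b=0.004, blood_volume_ml=blood_volume_ml,
                      dose_activity=100.0, k=k_measured)

print(f"Absorption with assumed k = 0.80:    {single:.1f}%")
print(f"Absorption with measured k = {k_measured:.2f}:  {dual:.1f}%")
```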

3.2. Zinc

3.2.1. Plasma/serum zinc
As with iron, changes in serum or plasma concentrations of zinc have been used as indicators of zinc bioavailability. In human studies, the area under the curve produced by changes in plasma zinc concentrations following an oral dose is often used as an indicator of zinc bioavailability (Solomons et al., 1979; Solomons and Jacob, 1981; Keating et al., 1987; Castillo-Duran and Solomons, 1991). In animal studies, it is possible to deplete the animals of zinc and then monitor the subsequent increase in plasma zinc concentrations as the animal is repleted with zinc from various test sources. In depleted animals, plasma zinc concentrations generally increase linearly with increasing zinc (Wedekind et al., 1994).
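As an illustration of the area-under-the-curve approach described above, the sketch below applies simple trapezoidal integration to a hypothetical set of plasma zinc measurements taken after an oral dose; the sampling times and concentrations are invented for the example.

```python
# Area under the plasma zinc response curve after an oral dose (trapezoidal rule).
# Times and concentrations are hypothetical.

def auc_trapezoid(times_h, conc):
    """Return the area under the concentration-time curve above the baseline value."""
    baseline = conc[0]
    area = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        area += dt * ((conc[i] - baseline) + (conc[i - 1] - baseline)) / 2.0
    return area

times_h = [0, 0.5, 1, 2, 3, 4]                    # hours after the oral zinc dose
plasma_zn = [0.80, 1.35, 1.60, 1.30, 1.05, 0.90]  # mg Zn per liter of plasma

print(f"AUC above baseline: {auc_trapezoid(times_h, plasma_zn):.2f} mg*h/L")
```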

3.2.2. Weight gain/other tissues
In addition to changes in plasma levels, weight gain and changes in bone zinc concentration have been used as indicators of zinc bioavailability to experimental animals. Burpo et al. (1971) found that when zinc carbonate was used as a supplement, weight gain was linear from 3 to 19 mg Zn/kg feed for chickens and from 3 to 15 mg Zn/kg feed for rats. Studies by Morris and Ellis (1980) indicated that bone zinc may be a more sensitive indicator of zinc availability than weight gain, i.e. they found that at certain ratios of phytate to Zn, growth was not affected but bone zinc concentrations were depressed. In studies aimed at assessing bioavailability of zinc to farm animals or to animal models, changes in weight and bone zinc concentrations are still commonly used (Fordyce et al., 1987; Hunt and Johnson, 1992; Wedekind et al., 1994).

3.2.3. Zinc enzymes/proteins
Both the concentration of the metal-binding protein, metallothionein, and the activity of the zinc-containing enzyme, alkaline phosphatase, have been shown to vary with zinc status and to respond to changes in zinc intake (Harmuth-Hoene and Meuser, 1987; Harmuth-Hoene and Meuser, 1988; Pauluf et al., 1990; Lei et al., 1993; Rojas et al., 1995). There does not appear to be widespread use of these indicators of zinc status specifically for assessment of zinc bioavailability.

3.3. Copper

3.3.1. Plasma/serum copper
Changes in plasma or serum copper concentrations as indicators of bioavailability have been used in a number of animal studies (Lo et al., 1984; Price and Chesters, 1985; Ledoux et al., 1989; Kegley and Spears, 1994), but do not appear to have been widely used in human studies. In contrast to studies with iron and zinc, where many of the animal studies are models with the end point of improving iron and/or zinc nutriture of humans, much of the copper work is aimed directly at improving the nutrition of farm animals.

3.3.2. Other tissues
In addition to changes in plasma copper concentrations, changes in concentrations of copper in heart, liver, bone and bile have been used as indicators of bioavailability (Rockway et al., 1987; Ledoux et al., 1989; Ledoux et al., 1991; Ledoux et al., 1996; Aoyagi and Baker, 1993).

3.3.3. Copper proteins/enzymes
Activity of the enzyme, cytochrome c oxidase, in duodenal tissue has been used as a bioavailability assay in rats and was reported to be a more sensitive indicator of copper availability than plasma copper concentrations (Price and Chesters, 1985). Changes in the concentration of ceruloplasmin, a copper-containing plasma oxidase, have also been used as an indicator
of bioavailability (Lee et al., 1988; Kegley and Spears, 1994). In addition to plasma ceruloplasmin concentrations, Lee et al. (1988) utilized changes in the activity of the enzyme, Cu-Zn superoxide dismutase, in the liver as an indicator of copper bioavailability. At low copper concentrations, liver indicators (liver copper and liver Cu-Zn superoxide dismutase) were reported to be more responsive indices of Cu status than serum copper or ceruloplasmin levels.

3.4. Chromium

3.4.1. Plasma/tissue concentrations
Plasma and tissue concentrations of chromium are known to vary with chromium status (Lukaski et al., 1996). Changes in chromium concentrations in these tissues and changes in urinary chromium excretion (Gargas et al., 1994; Lukaski et al., 1996) have been used to estimate bioavailability. Accurate determination of chromium, particularly at the concentrations found in many biological samples, is difficult and requires great care if one is to obtain reliable results. Use of the isotope chromium-51 (Seaborn and Stoecker, 1992; Olin et al., 1994) reduces the analytical problems but may raise other issues. For example, intrinsic labeling of plant foods with chromium-51 is very inefficient because of root/shoot barriers, and use of extrinsic chromium-51 may be questionable because of a lack of complete equilibration between extrinsic and intrinsic forms (Johnson and Weaver, 1986). Indirect indicators such as the effect of various forms of chromium on serum glucose levels (Vinson and Hsiao, 1985) and activity of the chromium-containing 'glucose tolerance factor' appear to be good indicators of 'biologically active' chromium. However, neither of these appears to have been widely used in bioavailability studies.

3.5. Selenium

3.5.1. Plasma/tissue
For an element such as selenium that can be absorbed as the anion, absorption is high, generally exceeding 50% and, sometimes, approaching 100%. In these cases, the second part of the definition of bioavailability, namely, that the nutrient is absorbed in a form that is physiologically useful, becomes the more important component. In the case of selenium,
changes in plasma or serum concentrations are generally considered good indicators of bioavailability (Levander et al., 1983; Luo et al., 1985; Torre et al., 1991; Thomson et al., 1993).

3.5.2. Selenium proteins/enzymes
Changes in plasma levels of selenium do not always agree, quantitatively, with changes in other indicators of selenium status. Changes in the concentrations of the enzyme, selenium-dependent glutathione peroxidase (GPX), in plasma or in other soft tissues are often considered a better indicator of 'functional' selenium than plasma concentrations (Levander et al., 1983; Zhou and Combs, 1984; Hassan et al., 1987; Thomson et al., 1988, 1993; Kadrabova et al., 1995). Estimates of selenium bioavailability based on plasma concentrations often differ from those based on GPX (Whanger and Butler, 1988; Thomson et al., 1993; Kadrabova et al., 1995). Levander et al. (1983) suggested that studies of selenium bioavailability include a short-term platelet GPX determination to assess immediate availability; a medium-term plasma selenium determination to estimate retention; and a long-term platelet GPX estimation after supplementation is discontinued to assess the conversion of tissue stores to biologically active selenium. Urinary selenium concentrations have also been shown to respond to intake and have been used to estimate bioavailability (Meltzer et al., 1990; Meltzer et al., 1992; Torre et al., 1991; Sanz-Alaejos and Diaz-Romero, 1993). Estimates based on urinary excretion appear to correlate well with serum values but do not always correlate well with GPX activity (Meltzer et al., 1990). These authors suggested that this anomaly is due, in part, to the fact that organic forms of selenium, such as selenomethionine, can be incorporated directly into a nonspecific amino acid pool, whereas inorganic forms are more likely to be incorporated into GPX.

4. Isotope dilution/compartmental analysis
Isotope labeling, with either radioactive or stable isotopes, can extend the usefulness of tissue uptake in estimating bioavailability. The use of isotopes allows one to use either tracer or physiological concentrations of the element in question. Isotope dilution procedures
involve labeling of plasma and, at equilibrium, determining dilution of the radioisotope by the endogenous element. Absorption can be described as:

  A = I - F + F × (S_f / S_m)                                            (3)

where I is the intake, F the total fecal excretion, S_f the specific activity of the element in the feces and S_m the specific activity of the element of endogenous origin.
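A minimal numerical sketch of Eq. (3) follows; the intake, fecal excretion and specific activities are hypothetical, and the time-integration of specific activities discussed below is reduced here to single averaged values.

```python
# Isotope-dilution estimate of true absorption (Eq. 3), hypothetical values.

def true_absorption(intake_mg, fecal_mg, sa_feces, sa_plasma):
    """A = I - F + F * (S_f / S_m).

    intake_mg = I, dietary intake of the element (mg)
    fecal_mg  = F, total fecal excretion of the element (mg)
    sa_feces  = S_f, specific activity of the element in feces
    sa_plasma = S_m, specific activity of the endogenous (plasma) element
    """
    endogenous_fecal = fecal_mg * (sa_feces / sa_plasma)  # fecal element of endogenous origin
    return intake_mg - fecal_mg + endogenous_fecal

# Hypothetical zinc study: 12 mg intake, 9 mg fecal excretion; the specific activity
# of fecal zinc is one-quarter that of plasma zinc, so ~2.25 mg of the fecal zinc
# is of endogenous origin.
A = true_absorption(intake_mg=12.0, fecal_mg=9.0, sa_feces=0.05, sa_plasma=0.20)
print(f"Estimated true absorption: {A:.2f} mg")
```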

To achieve maximum accuracy, one must integrate the specific activity of the element in the feces over time to obtain S_f and integrate the specific activity of the element in plasma over time to obtain S_m. Of trace element studies, fractional absorption techniques appear to have been most widely applied to zinc absorption and metabolism (Weigand and Kirchgessner, 1976; Friel et al., 1992, 1996; Sian et al., 1993, 1996; Davidsson et al., 1996), but they have also been applied to a number of other trace minerals including iron, copper, manganese and selenium (Solomons et al., 1986; Martin et al., 1989; Davidsson et al., 1991; Atkinson et al., 1993). If a sufficient number of samples can be obtained at various times after dosing, one can utilize compartmental analysis techniques to estimate bioavailability. House et al. (1982) used plasma disappearance curves to estimate absorption of zinc. The calculations for this study were done manually and were very tedious and time consuming. The advent of compartmental modeling programs that can be run on personal computers is a major advance and makes this approach much more feasible. Several recent studies utilizing both radioactive (Wastney et al., 1986; Wastney et al., 1991; Wastney and Henkin, 1989) and stable (Wastney et al., 1992, 1996; Serfass et al., 1996) isotopes of zinc have been reported. Compartmental modeling has also been applied to other essential elements, e.g. iron (McClaren et al., 1991; Gupta et al., 1992), copper (Dunn et al., 1991), and molybdenum (Botha et al., 1995; Cantone et al., 1995; Thompson et al., 1996), but these applications have been much more limited than those for zinc. The primary disadvantage of this procedure in human studies is that the number of compartments one can actually sample, physically, is limited; thus it is not always possible to link 'mathematical' compartments to 'physiological' compartments. Also, the number of compartments that can be identified depends, to some extent, on the frequency of

sampling. While compartmental modeling is an invaluable tool in describing partitioning and transport, estimates of absorption are indirect and subject to some of the errors associated with conventional balance studies.

5. Whole-body counting
Whole-body counting has been widely used in both animal and human studies. Whole-body counting of human subjects has been somewhat limited by the availability of whole-body counters large enough to accommodate human subjects. Early whole-body counters of the liquid scintillation type were used for measurements of trace element absorption as early as 1961 (Van Hoek and Conrad, 1961). However, the counters used in most, if not all, modern bioavailability studies are based on sodium iodide detectors, for which an early prototype was introduced by Price et al. (1962). Whole-body counting with a sodium iodide detector can be accomplished with most gamma-emitting isotopes. Space precludes a detailed list of bioavailability studies using whole-body counting, but a very limited number of typical recent examples are offered. These include studies with iron (Lykken, 1983; Barrett et al., 1994), zinc (Gallaher et al., 1988; Boza et al., 1995), copper (Milne and Nielsen, 1993), manganese (Davidsson et al., 1991), and selenium (Boza et al., 1995). Exposure of human subjects to ionizing radiation is a concern when contemplating whole-body studies with humans. This is particularly true for certain population groups including infants, young children and pregnant or lactating women. However, improvements in the technology now allow studies to be conducted with very small doses of radioisotopes (Lykken, 1983) and, consequently, very low exposure of subjects to radiation. In its simplest form, whole-body counting involves administration of a radio-labeled test material to an animal or a human followed by an initial whole-body count that is taken before any excretion has occurred. Subsequent whole-body counts are taken at predetermined intervals for the remainder of the experimental period. Periods of 10 to 14 days are generally considered adequate for most trace element studies in both animals and humans. Results are generally
expressed as percent retention of the original dose, i.e.:

  % Retention = 100 × (whole-body count at time t) / (whole-body count at time 0)          (4)

After the excretion rate has stabilized, an estimate of absorption can be obtained by extrapolation of the retention data back to time zero. Also, from the slope of the retention vs. time data, one can estimate the biological half-life of the element being studied. Absorption estimates obtained by whole-body counting compare favorably with those obtained by other methods (Hallberg, 1980). Whole-body counting has an advantage over both chemical and radioisotope balance studies in that urine and fecal collections are not required; thus problems associated with incomplete consumption or collections are avoided. A problem that is common to whole-body counting in both animal and human studies is that the counting geometry may change with time. The radioisotope is, initially, a point source located in the stomach but, as absorption progresses, it is distributed to other parts of the gastrointestinal tract and to the peripheral parts of the body. This has the net effect of moving some of the isotope closer to the detector and, thereby, increasing counting efficiency. When absorption is high, changes in counting efficiency can have a significant effect upon results. A second problem is that when one estimates absorption by extrapolating time vs. retention curves to time 0, there can be a problem in determining the time 0 intercept. There is generally a period of at least 12 h after dosing when there is little or no excretion of the radioisotope. It is not completely clear whether one should extrapolate back to the dosing time or to the time when excretion has just begun. Generally, this difference is trivial but, if endogenous excretion is high, significant errors could result. Where appropriate equipment is available and exposure to ionizing radiation is not a problem, whole-body counting is likely to be the method of choice.
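To illustrate how whole-body counting data are typically handled, the sketch below computes percent retention (Eq. 4), fits a single exponential to the counts taken after excretion of the unabsorbed dose has stabilized, extrapolates back to time zero to estimate absorption, and derives a biological half-life. The counts, days and the choice of a simple log-linear fit are illustrative assumptions, not the procedure of any particular study cited here.

```python
import math

# Whole-body counting: percent retention (Eq. 4), extrapolated absorption and
# biological half-life. All counts are hypothetical and assumed decay-corrected.

days   = [0, 1, 2, 4, 7, 10, 14]
counts = [100000, 86000, 52000, 33500, 31800, 30300, 28700]

count0 = counts[0]
retention_pct = [100.0 * c / count0 for c in counts]          # Eq. (4)

# After about day 4 the unabsorbed dose has been excreted and retention declines
# slowly (endogenous loss); fit ln(retention) vs. time over that late region.
late_days = days[3:]
late_lnret = [math.log(r) for r in retention_pct[3:]]
n = len(late_days)
mean_x = sum(late_days) / n
mean_y = sum(late_lnret) / n
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(late_days, late_lnret))
slope_den = sum((x - mean_x) ** 2 for x in late_days)
slope = slope_num / slope_den
intercept = mean_y - slope * mean_x

absorption_pct = math.exp(intercept)          # retention curve extrapolated to time zero
half_life_days = math.log(2) / -slope         # biological half-life of the retained isotope

print("Retention (%):", [round(r, 1) for r in retention_pct])
print(f"Estimated absorption: {absorption_pct:.1f}% of the dose")
print(f"Biological half-life: {half_life_days:.0f} days")
```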

6. Extrinsic vs. intrinsic labeling
Extrinsic vs. intrinsic labeling is an issue with all bioavailability procedures that utilize either radioisotopes or stable isotopes. In some studies, individual food items are labeled intrinsically with an isotope. For labeling plant foods, the isotope of the nutrient in question is usually incorporated into a hydroponic solution in which the plant is grown or, less commonly, added to soil or injected into the stem of the plant. To obtain intrinsically labeled animal foods, the isotope is usually injected intravenously, since this is a more efficient delivery route than feeding. The rationale for intrinsic labeling is that the element in question is incorporated into the food through the normal routes and is in the 'naturally occurring' form. The findings from studies using intrinsically labeled foods have been very useful, i.e., early studies by Layrisse et al. (1969) demonstrated that iron from meat products was more bioavailable than that from plant foods. A second important finding was that, when foods were fed in mixed diets, the bioavailability was different than when they were fed alone. Inclusion of meat in a meal improved the utilization of iron from the plant foods, i.e., in addition to being a rich iron source, the meat enhanced bioavailability of the iron from other components of the meal. Although intrinsic labeling with iron-59 allowed one to determine bioavailability of iron from a particular source, it did not allow determination of iron availability from the entire meal. This led to the concept of using an extrinsic tag to label an entire meal, in which iron-59 was added either to an entire meal or to one or more major components of a meal (Hallberg and Bjorn-Rasmussen, 1972; Bjorn-Rasmussen et al., 1972; Cook et al., 1972). The assumptions underlying extrinsic labeling are: (1) for purposes of absorption, dietary iron consists of two pools, heme iron and nonheme iron; and (2) the radiolabel (either heme or nonheme iron) exchanges completely with the appropriate pool such that absorption of the radiolabel represents absorption from the entire pool. The widest application of the extrinsic tag method has been in iron absorption studies and, in many cases, the underlying assumptions have held up well, i.e. when both an intrinsic and an extrinsic label were fed, they were absorbed in a ratio of ca. 1:1. However, there are a number of circumstances when the exchange apparently was not complete. These include studies in which meals included unmilled rice; certain iron fortification compounds; ferritin or hemosiderin iron; and iron present from contamination with dust or soil (Layrisse et al., 1975; Bjorn-Rasmussen et al., 1977; Hallberg and Bjorn-Rasmussen, 1981;
Hallberg, 1981). Also, Hallberg et al. (1986) found that carbonyl iron, a commonly used fortification compound, was poorly soluble and did not equilibrate with an extrinsic tag. Contamination with soil or dust particles appears to present some serious problems with some plant foods, e.g., Prabhavathi and Rao (1981) reported that