
    Andre Bouville

    Methods to assess radiation doses from nuclear weapons test fallout have been used to estimate doses to populations and individuals in a number of studies. However, only a few epidemiology studies have relied on fallout dose estimates. Though the methods for assessing doses from local and regional fallout are similar to those for global fallout, predicted doses and the contributing radionuclides differ significantly depending on the source of the fallout, e.g., whether the nuclear debris originated at the U.S. nuclear test site in Nevada or at other locations worldwide. The sparse historical measurement data available are generally sufficient to estimate external exposure doses reasonably well. However, reconstruction of doses to body organs from ingestion and inhalation of radionuclides is significantly more complex and is almost always more uncertain than external dose estimation. Internal dose estimates are generally based on estimates of the ground deposition per unit area of specific radionuclides and on the subsequent transport of those radionuclides through the food chain. A number of technical challenges to correctly modeling deposition of fallout under wet and dry atmospheric conditions remain, particularly at close-in locations where the sizes of deposited particles vary significantly over modest changes in distance. This paper summarizes the various methods of dose estimation from weapons test fallout and the most important dose assessment and epidemiology studies that have relied on those methods.
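    To make the deposition-to-internal-dose chain concrete, the sketch below implements a minimal single-deposition pasture-cow-milk model in Python. Every parameter value (the lumped pasture-to-milk factor, milk consumption, effective half-life, dose coefficient) is an illustrative assumption for the sketch, not a value taken from the paper.

        # Minimal sketch of the deposition-to-internal-dose chain: ground deposition
        # of a radionuclide -> pasture -> cow's milk -> time-integrated human intake
        # -> organ dose. All parameter values are illustrative assumptions.
        import math

        def time_integrated_intake_bq(deposition_bq_per_m2: float,
                                      pasture_to_milk_per_l: float,
                                      milk_consumption_l_per_d: float,
                                      effective_half_life_d: float) -> float:
            """Total activity ingested (Bq) after a single deposition event, assuming
            the milk concentration decays exponentially with an effective half-life
            that lumps radioactive decay and weathering loss from pasture."""
            lam = math.log(2.0) / effective_half_life_d           # effective decay constant, 1/d
            milk_conc_0 = deposition_bq_per_m2 * pasture_to_milk_per_l  # Bq/L at t = 0
            # Integral of C0 * exp(-lam * t) * V dt from 0 to infinity = C0 * V / lam.
            return milk_conc_0 * milk_consumption_l_per_d / lam

        # Illustrative run for I-131: 1 kBq/m^2 deposited on pasture.
        intake_bq = time_integrated_intake_bq(
            deposition_bq_per_m2=1000.0,
            pasture_to_milk_per_l=3e-3,    # assumed lumped interception x feed x transfer, m^2/L
            milk_consumption_l_per_d=0.5,
            effective_half_life_d=5.0,     # I-131: ~8 d decay shortened by weathering
        )
        thyroid_dose_mgy = intake_bq * 3.7e-6 * 1000.0  # assumed thyroid dose coefficient, Gy/Bq
        print(f"intake = {intake_bq:.0f} Bq; thyroid dose = {thyroid_dose_mgy:.2f} mGy")

    The same skeleton applies to other radionuclides and pathways by swapping the transfer factors and dose coefficient; the uncertainty discussed in the abstract enters chiefly through those factors.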
    The recently demonstrated radiation induction of chronic lymphocytic leukemia (CLL) raises the question of whether the amount of radiation exposure influences any of the clinical characteristics of the disease. We evaluated the relationship between bone marrow radiation dose and the clinical characteristics and survival of 79 CLL cases diagnosed during 1986-2006 in a cohort of 110,645 male workers who participated in the cleanup work after the Chornobyl nuclear accident in Ukraine in 1986. All diagnoses were confirmed by an independent International Hematology Panel. Patients were followed up to the date of death or the end of follow-up on 31 October 2010. The median age at diagnosis was 57 years. The median bone marrow dose was 22.6 milligray (mGy) and was not associated with the time between exposure and clinical diagnosis of CLL (latent period), age, peripheral blood lymphocyte count, or clinical stage of disease in univariate and multivariate analyses. Latent period was significantly shorter am...
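    As a hedged illustration of the kind of univariate test described here, the sketch below regresses a synthetic latent period on log bone marrow dose. It is not the study's actual model or data; the doses are merely centered on the reported median of 22.6 mGy, and no dose effect is built in, mirroring the paper's null finding for latency versus dose.

        # Sketch of a univariate dose-vs-latency test; all data are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 79                                               # number of CLL cases
        dose_mgy = rng.lognormal(mean=np.log(22.6), sigma=1.0, size=n)
        latency_y = rng.normal(loc=15.0, scale=4.0, size=n)  # years from 1986 to diagnosis

        X = sm.add_constant(np.log(dose_mgy))                # intercept + log-dose
        fit = sm.OLS(latency_y, X).fit()
        print(fit.pvalues[1])                                # slope p-value: no association expected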
    Data on the transfer of radioiodine into human milk are rare in the literature. Data from sixteen publications were reviewed and analyzed to estimate the transfer coefficient f*_hm (units: d L⁻¹). The data on radioiodine concentration in breast milk were analyzed by two methods: direct numerical integration and integration of a fitted exponential model. In general, the integrals of the fitted functions were greater. The fitted functions likely describe the transfer into milk better, since few data sets sampled mothers' milk near the time of maximum excretion. The derived transfer coefficient values appear to represent two populations. The first group comprised individuals with very low excretion, including those whose thyroid and mammary uptake was impaired by the administration of stable iodine or iodinated compounds. The second group, termed the "normal-excretion" group, comprised individuals with much higher excretion; their transfer of iodine to milk was more than ten-fold higher than in the "low-excretion" group. The derived milk transfer coefficients for the low- and normal-excretion groups, fitted to lognormal distributions, gave geometric means (and geometric standard deviations) of 0.043 d L⁻¹ (2.1, n = 14) and 0.37 d L⁻¹ (1.5, n = 12), respectively. Estimates of the effective half-time (the time from maximum concentration to half that value) were determined for the low- and normal-excretion groups separately. There was evidence that the effective half-time was longer for the normal- than for the low-excretion group; the geometric means (and geometric standard deviations) were 12 (1.7) and 8.5 (2.6) h, respectively, though the difference was not statistically significant. The geometric mean times to maximum milk concentration in the low- and normal-excretion groups were nearly identical: 9.4 (3.1) and 9.0 (1.6) h, respectively. The data show that administration of large doses of stable iodine (commonly used to block uptake of iodine into the thyroid) is also an effective means of blocking radioiodine transfer into milk. Thus, protecting the mother's thyroid also protects the nursing infant. Despite the inadequacies of the available data on the transfer of radioiodine to human milk in a healthy population of women, the values of f*_hm provided here are believed to be the best available for use in radiological assessments. These values are particularly applicable to lactating women with normal diets and normal availability of stable iodine, as in the United States.
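    A minimal sketch of the two integration approaches, applied to a synthetic concentration-time series (the real analysis used measured data from the sixteen publications):

        # Sketch of the two methods above: (1) direct numerical integration of the
        # sampled milk concentrations and (2) fitting an exponential to the post-peak
        # decay and integrating it analytically, which captures activity beyond the
        # last sample. All data points here are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        t_h = np.array([3.0, 6.0, 12.0, 24.0, 48.0])   # hours after intake (synthetic)
        conc = np.array([0.8, 1.0, 0.7, 0.35, 0.08])   # fraction of intake per liter (synthetic)

        # Method 1: trapezoidal integration over the sampled range only.
        f_hm_numeric = np.trapz(conc, t_h) / 24.0      # convert h L^-1 to d L^-1

        # Method 2: exponential fit to the decaying tail, integrated to infinity.
        def expo(t, a, lam):
            return a * np.exp(-lam * t)

        peak = int(np.argmax(conc))
        (a, lam), _ = curve_fit(expo, t_h[peak:], conc[peak:], p0=(1.0, 0.05))
        rise = np.trapz(conc[:peak + 1], t_h[:peak + 1])   # sampled area before the peak
        tail = (a / lam) * np.exp(-lam * t_h[peak])        # analytic integral, peak -> infinity
        f_hm_fitted = (rise + tail) / 24.0

        print(f"numeric: {f_hm_numeric:.2f} d/L, fitted: {f_hm_fitted:.2f} d/L")

    The fitted estimate exceeds the numerical one mainly through the extrapolated tail beyond the last sample, which is why the paper finds the fitted integrals generally greater.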
    This presentation will summarize past exposures of the public to radioactive fallout from nuclear testing and extrapolate to the possible fallout-related consequences of the detonation of multiple warheads that might accompany an international conflict. Long-term consequences could be of three distinct types: (1) abandonment of living areas that might be heavily contaminated; (2) the necessity to curtail use of particular agricultural products and foods; and (3) life-shortening due to increased rates of cancer and possibly some non-cancer diseases among the exposed populations. While the actual health and economic impact on the surviving public after such a conflict would vary tremendously depending on the number and sizes of the explosions (fission yields), the heights of detonation, and the public's proximity to the explosion sites, it is clear that multiple detonations would disperse radioactive products over large geographic areas. Our understanding of radioactive fallout is based on studies carried out over more than five decades on weapons-testing fallout originating from sites worldwide, including Nevada, the Soviet Union, four locations in the Pacific, and elsewhere. Those studies have led to an understanding of the composition of radioactive fallout, of its radiological characteristics, and of its capacity to contaminate ground and agricultural products, as well as dwellings and workplaces located from a few km to tens of thousands of km from the explosion site. Though the most severe individual health consequences from exposure to fallout would most likely develop relatively close to the detonation sites (within a few hundred km), the wide geographic distribution of fallout, well beyond the borders of the nations involved in the conflict, would affect much larger populations and would likely cause elevated cancer rates and cancer-related deaths among them for many decades afterward. While acute radiation symptoms (and even death) can result from very high short-term exposures (on the order of a few thousand times the annual dose from natural background radiation), the increase in the long-term rate of cancer development as a result of lower, chronic exposures due to contamination of the habitat and of dietary foodstuffs would pose very difficult scientific, economic, political, and societal problems. Most areas close to detonation sites (i.e., within about 1,000 km) would be impacted primarily by radionuclides with shorter half-lives (less than about 2 months), e.g., Zirconium-95, Niobium-95, Iodine-131, Iodine-132, Iodine-133, Barium-140, Lanthanum-140, and Strontium-89. Conversely, most areas at greater distances would be impacted primarily by radionuclides with longer half-lives, e.g., Strontium-90 and Cesium-137 (each with a half-life of about 30 years). Contaminating radionuclides with very long half-lives, e.g., Plutonium-239 with its half-life of 24,000 years, would almost never limit habitation, despite widespread fear of them.
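    The near/far distinction above is, at bottom, half-life arithmetic. A small sketch (half-lives are approximate literature values, used only for illustration):

        # Sketch of the half-life arithmetic behind the near/far distinction:
        # short-lived fission products dominate close-in, long-lived ones far away.
        def fraction_remaining(t_years: float, half_life_years: float) -> float:
            """Standard decay law: A(t)/A0 = 2 ** (-t / T_half)."""
            return 2.0 ** (-t_years / half_life_years)

        nuclides = [("I-131", 8.0 / 365.25),    # ~8 days
                    ("Sr-89", 50.5 / 365.25),   # ~50 days
                    ("Cs-137", 30.1),
                    ("Sr-90", 28.8),
                    ("Pu-239", 24_100.0)]

        for name, t_half in nuclides:
            print(f"{name:>7}: {fraction_remaining(1.0, t_half):.2e} of activity left after 1 y")

    After one year the short-lived nuclides have decayed to a negligible fraction, while Cs-137, Sr-90, and Pu-239 remain essentially undiminished, which is why they dominate exposures at long times and large distances.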
    Radioactive fallout from nuclear test detonations during 1946–1958 at Bikini and Enewetak Atolls in the Marshall Islands (MI) exposed populations living elsewhere in the MI archipelago. A comprehensive analysis, presented in seven companion papers, has produced estimates of tissue-specific radiation absorbed dose to MI residents at all historically inhabited atolls from internal (ingested) and external irradiation resulting from exposure to radioactive fallout, by calendar year and by age of the population at the time of exposure. The present report deals, for the first time, with the implications of these doses for cancer risk among exposed members of the MI population. Radiation doses differed by geographic location and year of birth, and radiation-related cancer risk depends on age at exposure and age at observation for risk. Using dose-response models based on committee reports published by the National Research Council and the National Institutes of Health, we project that, durin...
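    The projection step has the general shape of a linear excess-relative-risk (ERR) calculation. Below is a minimal sketch with invented inputs; the study's actual NRC/NIH-based models are age-, sex-, and site-specific, so none of these numbers come from the paper.

        # Sketch of a linear excess-relative-risk (ERR) projection of the general
        # kind used in NRC/NIH dose-response reports. All inputs are invented.
        def projected_excess_cases(population: int,
                                   baseline_lifetime_risk: float,
                                   mean_dose_gy: float,
                                   err_per_gy: float) -> float:
            """Excess cancers = baseline cases x ERR/Gy x dose (linear, no threshold)."""
            return population * baseline_lifetime_risk * err_per_gy * mean_dose_gy

        # Example: 10,000 residents, 20% baseline lifetime cancer risk,
        # 0.1 Gy mean dose, ERR of 0.5 per Gy -> 100 projected excess cases.
        print(projected_excess_cases(10_000, 0.20, 0.10, 0.5))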
    During the first day after the explosion on April 26, 1986, the Chornobyl accident exposed a few hundred emergency workers to doses of up to 16 Gy, resulting in acute radiation syndrome. Subsequently, several hundred thousand cleanup workers were sent to the Chornobyl power plant to mitigate the consequences of the accident. Depending on the nature of the work to be carried out, the cleanup workers were sent for periods ranging from several minutes to several months. The average dose from external radiation exposure received by the cleanup workers was about 170 mGy in 1986 and decreased from year to year. The radiation exposure was due mainly to external irradiation from gamma-ray-emitting radionuclides and was relatively homogeneous over all organs and tissues of the body. To assess the possible health consequences of external irradiation at relatively low dose rates, the U.S. National Cancer Institute is involved in two studies of Chornobyl cleanup workers: (1) a study of cancer incidence and thyroid disease among Estonian, Latvian, and Lithuanian workers, and (2) a study of leukemia and other related blood diseases among Ukrainian workers. After an overview of the sources of exposure and of the radiation doses received by the cleanup workers, a description of the efforts made to estimate individual doses in the Baltic and Ukrainian studies is presented.
    The primary aim of the epidemiologic study of one million U.S. radiation workers and veterans [the Million Worker Study (MWS)] is to provide scientifically valid information on the level of radiation risk when exposures are received gradually over time and not within seconds, as was the case for Japanese atomic bomb survivors. The primary outcome of the epidemiologic study is cancer mortality, but other causes of death such as cardiovascular disease and cerebrovascular disease will be evaluated. The success of the study is tied to the validity of the dose reconstruction approaches to provide realistic estimates of organ-specific radiation absorbed doses that are as accurate and precise as possible and to properly evaluate their accompanying uncertainties. The dosimetry aspects for the MWS are challenging in that they address diverse exposure scenarios for diverse occupational groups being studied over a period of up to 70 y. The dosimetric issues differ among the varied exposed populations that are considered: atomic veterans, U.S. Department of Energy workers exposed to both penetrating radiation and intakes of radionuclides, nuclear power plant workers, medical radiation workers, and industrial radiographers. While a major source of radiation exposure to the study population comes from external gamma- or x-ray sources, for some of the study groups, there is a meaningful component of radionuclide intakes that requires internal radiation dosimetry assessments. Scientific Committee 6-9 has been established by the National Council on Radiation Protection and Measurements (NCRP) to produce a report on the comprehensive organ dose assessment (including uncertainty analysis) for the MWS. The NCRP dosimetry report will cover the specifics of practical dose reconstruction for the ongoing epidemiologic studies with uncertainty analysis discussions and will be a specific application of the guidance provided in NCRP Report Nos. 158, 163, 164, and 171. The main role of the Committee is to provide guidelines to the various groups of dosimetrists involved in the MWS to ensure that certain dosimetry criteria are considered: calculation of annual absorbed doses in the organs of interest, separation of low and high linear-energy transfer components, evaluation of uncertainties, and quality assurance and quality control. It is recognized that the MWS and its approaches to dosimetry are a work in progress and that there will be flexibility and changes in direction as new information is obtained with regard to both dosimetry and the epidemiologic features of the study components. This paper focuses on the description of the various components of the MWS, the available dosimetry results, and the challenges that have been encountered. It is expected that the Committee will complete its report in 2016.
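    Two of the dosimetry criteria listed above, separation of low- and high-LET dose components and evaluation of uncertainties, can be illustrated with a small Monte Carlo sketch. All central values and geometric standard deviations below are illustrative assumptions, not MWS results.

        # Sketch: keep low- and high-LET annual organ dose components separate and
        # propagate lognormal uncertainty by Monte Carlo. All numbers are assumed.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000                                   # Monte Carlo realizations

        # Assumed annual organ dose components (mGy) with lognormal uncertainty.
        low_let = 5.0 * rng.lognormal(mean=0.0, sigma=np.log(1.5), size=n)   # gamma / x ray
        high_let = 0.2 * rng.lognormal(mean=0.0, sigma=np.log(2.0), size=n)  # e.g., neutrons

        total = low_let + high_let
        lo, hi = np.percentile(total, [2.5, 97.5])
        print(f"median {np.median(total):.1f} mGy, 95% interval [{lo:.1f}, {hi:.1f}] mGy")

    Keeping the components separate, rather than summing before analysis, is what allows downstream risk models to weight high-LET dose differently.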
    Recent studies of children and adolescents who were exposed to radioactive iodine-131 (I-131) after the 1986 Chernobyl nuclear accident in Ukraine exhibited a significant dose-related increase in the risk of thyroid cancer, but the association of radiation dose with tumor histologic and morphologic features is not clear. A cohort of 11,664 individuals in Belarus who were aged ≤18 years at the time of the accident underwent 3 cycles of thyroid screening during 1997 to 2008. I-131 thyroid doses were estimated from individual thyroid activity measurements taken within 2 months after the accident and from dosimetric questionnaire data. Demographic, clinical, and tumor pathologic characteristics of the patients with thyroid cancer were analyzed using 1-way analysis of variance, chi-square or Fisher exact tests, and logistic regression. In total, 158 thyroid cancers were identified as a result of screening. The majority of patients had T1a and T1b tumors (93.7%), with many positive regional lymph nodes (N1; 60.6%) but few distant metastases (M1; <1%). Higher I-131 doses were associated with a higher frequency of solid and diffuse sclerosing variants of thyroid cancer (P < .01) and with histologic features of cancer aggressiveness, such as lymphatic vessel invasion, intrathyroidal infiltration, and multifocality (all P < .03). Latency was not correlated with radiation dose. Fifty-two patients with self-reported thyroid cancers that were diagnosed before 1997 were younger at the time of the accident and had a higher percentage of solid-variant cancers than patients who had screening-detected thyroid cancers (all P < .0001). I-131 thyroid radiation doses were associated with a significantly greater frequency of solid and diffuse sclerosing variants of thyroid cancer and with various features of tumor aggressiveness.
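    A hedged sketch of the logistic-regression step named in the abstract: modeling the odds of a solid-variant tumor as a function of I-131 thyroid dose. The data below are synthetic, with a positive dose effect deliberately built in; the real analysis used the 158 screening-detected cancers.

        # Sketch of a dose-vs-variant logistic regression; all data are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 158
        dose_gy = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=n)  # synthetic thyroid doses
        logit_p = -1.0 + 0.8 * np.log(dose_gy)                        # built-in positive dose effect
        solid_variant = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

        X = sm.add_constant(np.log(dose_gy))
        fit = sm.GLM(solid_variant, X, family=sm.families.Binomial()).fit()
        print(fit.params[1], fit.pvalues[1])   # positive log-dose coefficient expected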
