  • Divecha Centre for Climate Change, Centre for Atmospheric and Oceanic Sciences, Indian Institute of Science, Bangalore, India.
Prediction markets are increasingly being used to estimate probabilities of future events, and market equilibrium prices depend on the distribution of subjective probabilities of underlying events. When each contract requires the payment of a dollar if the underlying event were to occur, equilibrium prices are usually used to estimate the mean probabilities of the corresponding events. This paper shows that under certain conditions, market equilibrium prices of such contracts can lie outside the convex hull of potential traders’ probability beliefs, and where this occurs, market forecasts can induce stochastically dominated group decisions. We describe examples of where this could occur and generalize these examples to characterize conditions for nonconvex prices. A necessary condition for nonconvex prices is that market risk premia for complementary contracts have opposite signs. Preference functions on the lines of prospect theory have this property.
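The role of probability weighting can be seen in a stylized sketch (not the paper's equilibrium model): if every trader distorts probabilities with a Prelec weighting function, each trader's reservation price for a low-probability binary contract exceeds every subjective belief, so any price clearing among such traders lies outside the convex hull of beliefs. The weighting function, its parameter, and the beliefs below are all illustrative assumptions.

```python
import math

# Prelec probability weighting: w(p) = exp(-(-ln p)^alpha), which
# overweights small probabilities for alpha < 1 (illustrative choice).
def prelec(p, alpha=0.5):
    return math.exp(-((-math.log(p)) ** alpha))

beliefs = [0.005, 0.01, 0.02]           # all traders deem the event unlikely
prices = [prelec(p) for p in beliefs]   # each trader's reservation price

# Every weighted price exceeds every belief, so no convex combination of
# beliefs can reproduce a clearing price formed from these valuations.
print(max(beliefs), min(prices))
```

This is only meant to show how weighting pushes valuations of unlikely events outside the belief hull; the paper's necessary condition involves risk premia of complementary contracts.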
Cumulative emissions accounting for carbon dioxide (CO2) is founded on the recognition that global warming in earth system models is roughly proportional to cumulative CO2 emissions, regardless of the emissions pathway. However, cumulative emissions accounting only requires the relationship between global warming and cumulative emissions to be approximately independent of the emissions pathway ("path-independence"), regardless of the functional relationship between these variables. The concept and mathematics of path-independence are considered for an energy-balance climate model, giving rise to a closed-form expression for global warming, together with an analysis of the atmospheric cycle following emissions. Path-independence depends on the ratio between the period of the emissions cycle and the atmospheric lifetime, and is a valid approximation if the emissions cycle has a period comparable to or shorter than the atmospheric lifetime. This makes cumulative emissions accounting potentially relevant beyond CO2, to other greenhouse gases with lifetimes of several decades whose emissions have recently begun.
Sulfur dioxide is a radiatively and chemically important trace gas in the atmosphere of Venus, and its abundance at the cloud tops has been observed to vary on interannual to decadal timescales. This variability is thought to come from changes in the strength of convection, which transports sulfur dioxide to the cloud tops, although the dynamics behind such convective variability are unknown. Here, we propose a new conceptual model for convective variability that links the radiative effects of water abundance at the cloud base to convective strength within the clouds, which in turn affects water transport within the cloud. The model consists of two coupled equations, which are identified as a recharge-discharge oscillator. The solutions of the coupled equations are finite-amplitude sustained oscillations in convective strength and cloud-base water abundance on 3-9 year timescales. The characteristic oscillation timescale is given by the geometric mean of the radiative cooling time and the eddy mixing time near the base of the convective clouds.
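The recharge-discharge structure can be illustrated with a minimal linear sketch, assuming (hypothetically) that convective strength C is recharged by the cloud-base water anomaly W on a radiative timescale while W is discharged by convection on a mixing timescale. The oscillation period then scales with the geometric mean of the two timescales, consistent with the abstract; all parameter values are illustrative.

```python
import numpy as np

def simulate(tau_rad, tau_mix, dt=0.001, t_end=100.0):
    # Symplectic Euler integration of the coupled pair
    #   dC/dt =  W / tau_rad   (convection recharged by water anomaly)
    #   dW/dt = -C / tau_mix   (water discharged by convection)
    n = int(t_end / dt)
    c, w = 1.0, 0.0
    out = np.empty(n)
    for i in range(n):
        c += (w / tau_rad) * dt
        w -= (c / tau_mix) * dt
        out[i] = c
    return out

tau_rad, tau_mix = 2.0, 5.0                 # years, illustrative values
dt = 0.001
c = simulate(tau_rad, tau_mix, dt=dt)
t = np.arange(len(c)) * dt
up = t[1:][(c[:-1] < 0) & (c[1:] >= 0)]     # upward zero crossings
period_est = float(np.mean(np.diff(up)))
period_theory = 2 * np.pi * np.sqrt(tau_rad * tau_mix)
print(round(period_est, 2), round(period_theory, 2))
```

Eliminating W gives d²C/dt² = -C/(tau_rad·tau_mix), so the analytic period is 2π times the geometric mean of the two timescales, which the simulated zero crossings reproduce.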
Rain gauges are considered the most accurate method to estimate rainfall and are used as the "ground truth" for a wide variety of applications. The spatial density of rain gauges varies substantially and hence influences the accuracy of gridded gauge-based rainfall products. Temporal changes in rain gauge density over a region introduce considerable biases in the historical trends in mean rainfall and its extremes. An estimate of the uncertainty in gauge-based rainfall estimates associated with the nonuniform layout and placement pattern of the rain gauge network is vital for national decisions and policy planning in India, which uses a rather tight threshold of rainfall anomaly. This study examines uncertainty in the estimation of monthly mean monsoon rainfall due to variations in gauge density across India. Since not all rain gauges provide measurements perpetually, we consider the ensemble uncertainty in spatial average estimation owing to randomly leaving out rain gauges from the estimate. A recently developed theoretical model shows that the uncertainty in the spatially averaged rainfall is directly proportional to the spatial standard deviation and inversely proportional to the square root of the total number of available gauges. On this basis, a new parameter called the "averaging error factor" has been proposed that identifies regions with large ensemble uncertainties. Comparison of the theoretical model with Monte Carlo simulations at a monthly time scale using rain gauge observations shows good agreement at all-India and subregional scales. The uncertainty in monthly mean rainfall estimates due to omission of rain gauges is largest for northeast India (~4% uncertainty for omission of 10% of gauges) and smallest for central India. Estimates of spatial average rainfall should always be accompanied by a measure of uncertainty, and this paper provides such a measure for gauge-based monthly rainfall estimates. This study can be further extended to determine the minimum number of rain gauges necessary for any given region to estimate rainfall at a given level of uncertainty.
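The stated scaling (uncertainty proportional to spatial standard deviation, inversely proportional to the square root of the number of available gauges) can be checked with a toy Monte Carlo on synthetic data. This is a simplified reading of the scaling, not an implementation of the published model, and the rainfall field below is entirely made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly rainfall at 500 "gauges" (gamma-distributed values,
# purely illustrative -- not the Indian gauge network data).
n_gauges = 500
rain = rng.gamma(shape=2.0, scale=100.0, size=n_gauges)   # mm/month

frac_omit = 0.10
n_keep = int(n_gauges * (1 - frac_omit))

# Monte Carlo: spread of the spatial average when 10% of gauges drop out.
means = [rng.choice(rain, size=n_keep, replace=False).mean()
         for _ in range(2000)]
mc_std = float(np.std(means))

# Scaling from the abstract: proportional to the spatial standard deviation,
# inversely proportional to sqrt(available gauges), with a finite-population
# factor for the omitted fraction.
theory = float(rain.std() * np.sqrt(frac_omit / n_keep))
print(round(mc_std, 3), round(theory, 3))
```

The Monte Carlo spread and the analytic scaling agree to within sampling noise, mirroring the agreement the abstract reports between the theoretical model and simulations.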
We estimate statistics of spatial averages by linearly weighting available observations over a spatial domain, and derive an account of bias and variance in the presence of missing observations. With missing observations, the spatial average is a ratio of random variables, and estimators are derived by truncating Taylor series of functions of the ratio and then taking suitable expectations (the "delta method"). The resulting estimators are approximate and perform well when compared with simulations. Previous authors have examined "optimal averaging" strategies for minimizing the mean squared error (MSE) of a spatial average, and we extend the analysis to the case of missing observations. It is shown that minimizing variance primarily requires higher weights where local variance and covariance are small, whereas minimizing bias requires higher weights where observations lie nearer to the true spatial average. Missing data increase variance and contribute to bias, and reducing both effects involves emphasizing locations with observations nearer to the spatial average. The framework is applied to spatially averaged rainfall over India. We estimate the standard error of all-India rainfall as the combined effect of measurement uncertainty and bias, in the case where weights are chosen to minimize MSE. We also discuss different special cases of the estimators, and applications thereof.
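The delta-method step can be checked numerically for a generic ratio of correlated random variables. The means and covariance below are arbitrary illustrative choices, not the paper's rainfall data; the formulas are the standard truncated-Taylor-series approximations for the mean and variance of X/Y.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of delta-method moments for a ratio R = X / Y.
mu = np.array([10.0, 5.0])
cov = np.array([[4.0, 0.5],
                [0.5, 0.25]])
x, y = rng.multivariate_normal(mu, cov, size=200_000).T
r = x / y

mx, my = mu
vx, vy, cxy = cov[0, 0], cov[1, 1], cov[0, 1]

# Truncate the Taylor series of X/Y about (mx, my), then take expectations:
mean_delta = mx / my + mx * vy / my**3 - cxy / my**2          # 2nd-order mean
var_delta = (mx / my)**2 * (vx / mx**2 + vy / my**2 - 2 * cxy / (mx * my))

print(round(float(r.mean()), 3), round(mean_delta, 3))
print(round(float(r.var()), 3), round(var_delta, 3))
```

With Y kept well away from zero, the sampled moments match the delta-method approximations closely, which is the sense in which the paper's estimators "perform well when compared with simulations".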
Observations and GCMs exhibit approximate proportionality between cumulative carbon dioxide (CO2) emissions and global warming. Here we identify sufficient conditions for the relationship between cumulative CO2 emissions and global warming to be independent of the path of CO2 emissions, referred to as "path independence". Our starting point is a closed-form expression for global warming in a two-box energy balance model (EBM), which depends explicitly on cumulative emissions, airborne fraction, and time. Path independence requires that this function can be approximated as depending on cumulative emissions alone. We show that path independence arises from weak constraints, occurring if the timescale for changes in cumulative emissions (the ratio of cumulative emissions to the emissions rate) is small compared to the timescale for changes in airborne fraction (which depends on CO2 uptake), and also small relative to a derived climate model parameter called the damping-timescale, which is related to the rate at which deep-ocean warming affects global warming. Effects of uncertainties in the climate model and carbon cycle are examined. Large deep-ocean heat capacity in the Earth system is not necessary for path independence, which appears resilient to climate modeling uncertainties. However, long time-constants in the Earth system carbon cycle are essential, ensuring that the airborne fraction changes slowly, on a timescale much longer than that for changes in cumulative emissions. Therefore path independence between cumulative emissions and warming cannot arise for short-lived greenhouse gases.
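The role of the timescale comparison can be caricatured in a one-box sketch, a deliberate simplification of the paper's two-box EBM: a carbon stock with uptake timescale tau_uptake, and temperature equilibrating instantly with the stock. When uptake is far slower than the emissions, warming for a given cumulative total is nearly path independent; for a short-lived gas the same comparison breaks down. All parameter values are illustrative.

```python
import numpy as np

def final_warming(emis, tau_uptake, f=0.5, k=0.002, dt=1.0):
    # One-box stock with uptake timescale tau_uptake (years);
    # temperature equilibrates instantly with the stock: T = k * C.
    c = 0.0
    for e in emis:
        c += (f * e - c / tau_uptake) * dt
    return k * c

total = 2000.0                      # same cumulative emissions on both paths
fast = np.full(50, total / 50)      # emitted over 50 years
slow = np.full(200, total / 200)    # emitted over 200 years

# Long-lived gas: uptake far slower than emissions -> near path independence.
T_fast, T_slow = final_warming(fast, 2000.0), final_warming(slow, 2000.0)
# Short-lived gas: warming depends strongly on the emissions path.
t_fast, t_slow = final_warming(fast, 20.0), final_warming(slow, 20.0)
print(round(T_fast, 2), round(T_slow, 2), round(t_fast, 2), round(t_slow, 2))
```

For the long-lived case the two paths end within a few percent of each other; for the short-lived case the fast path ends several times warmer, echoing the abstract's conclusion that path independence cannot arise for short-lived gases.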
Carbon dioxide (CO2) is the main contributor to anthropogenic global warming, and the timing of its peak concentration in the atmosphere is likely to be the major factor in the timing of maximum radiative forcing. Other forcers such as aerosols and non-CO2 greenhouse gases may also influence the timing of maximum radiative forcing. This paper approximates solutions to a linear model of atmospheric CO2 dynamics with four time-constants to identify factors governing the timing of its concentration peak. The most important emissions-related factor is the ratio between the average rates at which emissions increase and decrease, which in turn is related to the rate at which the emissions intensity of CO2 is reduced. Rapid decarbonization can not only limit global warming but also achieve an early CO2 concentration peak. The most important carbon cycle parameters are the long multi-century time-constant of atmospheric CO2, and the ratio between the infinitely long-lived and multi-century contributions to the impulse response function of atmospheric CO2. Reducing uncertainties in these parameters can reduce uncertainty in forecasts of the radiative forcing peak.

A simple approximation for peak CO2 concentration, valid especially if decarbonization is slow, is developed. Peak concentration is approximated as a function of cumulative emissions and the emissions rate at the time of the concentration peak. Furthermore, peak concentration is directly proportional to cumulative CO2 emissions for a wide range of emissions scenarios. Therefore, limiting the peak CO2 concentration is equivalent to limiting cumulative emissions. These relationships need to be verified using more complex models of the Earth system's carbon cycle.
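A crude numerical illustration of the proportionality claim, using a four-exponential impulse response whose weights and time-constants are made-up stand-ins, not calibrated values from any published carbon cycle model: across several slow-decarbonization scenarios the ratio of peak atmospheric burden to cumulative emissions stays roughly constant, and the concentration peak lags the emissions peak.

```python
import numpy as np

# Illustrative four-exponential impulse response for atmospheric CO2.
A   = np.array([0.2, 0.3, 0.3, 0.2])          # weights, sum to 1 (made up)
TAU = np.array([np.inf, 300.0, 40.0, 4.0])    # years (made up)

def concentration(emis, dt=1.0):
    # Atmospheric burden as emissions convolved with the impulse response.
    t = np.arange(len(emis)) * dt
    irf = (A[:, None] * np.exp(-np.outer(1.0 / TAU, t))).sum(axis=0)
    return np.convolve(emis, irf)[:len(emis)] * dt

def scenario(decline_tau, peak_year=60, n=400):
    # Linear ramp to the emissions peak, then exponential decline.
    t = np.arange(n, dtype=float)
    e = np.where(t <= peak_year, t / peak_year,
                 np.exp(-(t - peak_year) / decline_tau))
    return 10.0 * e                            # arbitrary emission units

ratios, peak_times = [], []
for decline_tau in (60.0, 90.0, 120.0):        # slow decarbonization cases
    e = scenario(decline_tau)
    c = concentration(e)
    ratios.append(float(c.max() / e.sum()))    # peak burden per unit cumulative
    peak_times.append(int(np.argmax(c)))
print([round(r, 2) for r in ratios], peak_times)
```

The burden is reported in the same (arbitrary) units as cumulative emissions rather than ppm; the point is only the rough constancy of the peak-to-cumulative ratio and the lag of the concentration peak behind the emissions peak.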
The dynamics of a linear two-box energy balance climate model is analyzed as a fast-slow system, in which the atmosphere, land, and near-surface ocean taken together respond within a few years to external forcing, whereas the deep ocean responds much more slowly. Solutions to this system are approximated by estimating the system's time-constants using a first-order expansion of the system's eigenvalue problem in a perturbation parameter, the ratio of the heat capacities of the upper and lower boxes. The solution naturally admits an interpretation in terms of a fast response that depends approximately on radiative forcing and a slow response that depends on time-integrals of radiative forcing. The slow response is inversely proportional to the "damping-timescale", the timescale with which deep-ocean warming influences global warming. Applications of the approximate solutions are discussed: conditions for a warming peak, effects of an individual pulse emission of carbon dioxide (CO2), and metrics for estimating and comparing contributions of different climate forcers to maximum global warming.
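The fast-slow separation can be checked numerically for a generic two-box EBM. The parameter values below are common illustrative magnitudes, not the paper's estimates, and the approximate rates are the standard leading-order results of an expansion in the heat-capacity ratio (the small parameter the abstract names).

```python
import numpy as np

# Two-box EBM:  C1 dT1/dt = -(lam + gam) T1 + gam T2 + F
#               C2 dT2/dt =   gam T1 - gam T2
lam, gam = 1.2, 0.7            # W m^-2 K^-1 (illustrative)
C1, C2 = 8.0, 250.0            # W yr m^-2 K^-1, upper box vs deep ocean

M = np.array([[-(lam + gam) / C1, gam / C1],
              [gam / C2, -gam / C2]])
exact = np.sort(np.linalg.eigvals(M))       # [fast, slow] decay rates (< 0)

# Leading-order rates from the expansion in the small ratio C1/C2:
fast_approx = -(lam + gam) / C1
slow_approx = -lam * gam / ((lam + gam) * C2)

# Timescales in years: a fast response of a few years and a slow,
# multi-century response controlled by the deep-ocean heat capacity.
print(1 / -exact, 1 / -np.array([fast_approx, slow_approx]))
```

For C1/C2 of a few percent the perturbative rates agree with the exact eigenvalues to well under a percent, which is why the first-order expansion suffices for the closed-form solutions the abstract describes.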
Monsoons involve increases in dry static energy (DSE), with primary contributions from increased shortwave radiation and condensation of water vapor, compensated by DSE export via horizontal fluxes in monsoonal circulations. We introduce a simple box-model characterizing the evolution of the DSE budget to study the nonlinear dynamics of steady-state monsoons. Horizontal fluxes of DSE are stabilizing during monsoons, exporting DSE and hence weakening the monsoonal circulation. By contrast, latent heat addition (LHA) due to condensation of water vapor is destabilizing, increasing the DSE budget. These two factors, horizontal DSE fluxes and LHA, depend most strongly on the contrast in tropospheric mean temperature between land and ocean. For the steady-state DSE in the box-model to be stable, the DSE flux should depend more strongly on the temperature contrast than LHA does; a stronger circulation then reduces DSE and thereby restores equilibrium. We present conditions for this to occur. The main focus of the paper is to describe conditions for bifurcation behavior of simple models. Previous authors presented a minimal model of abrupt monsoon transitions and argued that such behavior can be related to a positive feedback called the 'moisture advection feedback'. However, by accounting for the effect of the vertical lapse rate of temperature on the DSE flux, we show that bifurcations are not a generic property of such models, despite these fluxes being nonlinear in the temperature contrast. We explain the origin of this behavior and describe conditions for a bifurcation to occur. This is illustrated for the case of the July-mean monsoon over India. The default model with mean parameter estimates does not contain a bifurcation, but the model admits bifurcation as parameters are varied.
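The stability criterion can be illustrated with purely hypothetical functional forms (not the paper's model): take DSE export F(dT) = a*dT**1.5 and latent heating L(dT) = b*dT as functions of the land-ocean temperature contrast. At the steady state where export balances heating, the state is stable when the export curve is steeper than the heating curve, which is the criterion the abstract states.

```python
# Illustrative coefficients; a sets the export nonlinearity, b the heating.
a, b = 1.0, 1.2

def net_dse_tendency(dT):
    # Latent heating minus DSE export (hypothetical forms, see lead-in).
    return b * dT - a * dT ** 1.5

# Steady state where export balances heating: b*dT = a*dT**1.5 -> dT* = (b/a)**2
dT_star = (b / a) ** 2

slope_F = 1.5 * a * dT_star ** 0.5   # dF/dT at the steady state
slope_L = b                          # dL/dT (linear heating)

# Stable iff the tendency decreases through the steady state,
# i.e. slope_F > slope_L.
print(dT_star, slope_F > slope_L)
```

With these coefficients the export slope (1.8) exceeds the heating slope (1.2), so a perturbation that strengthens the circulation exports more DSE than the extra heating supplies and the equilibrium is restored; swapping the inequality would make the steady state unstable.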
We welcome the response from Baldev Raj to our article.[1] However, it does not satisfactorily address the issues raised therein. To begin with, it does not explain how 100 MJ is an upper bound on the mechanical energy that could be released in a core disruptive accident (CDA). Raj states that safety studies have shown that the mechanical energy release from loss of coolant flow together with failure of safety systems is less than 1 MJ. This contradicts the DAE's published studies of the PFBR,[2] cited in our article, which show that energies on the order of 1000 MJ are possible. A key assumption in this calculation is the reactivity "insertion" rate (which depends on how much of the core collapses, in what configuration, and how fast); the 100 MJ limit is based on assuming that only a part of the core participates in the accident. As we show in our article, the DAE's analysis of the effect of the initiating events on the extent of core melting is limited by omissions (ignoring cladding failure modes and the effects of burnup on fast reactivity feedbacks as well as on fuel thermophysical properties) and is therefore unduly optimistic. Regarding the small ratio of CDA mechanical energy to thermal power estimated for the Prototype Fast Breeder Reactor (PFBR) as compared to previous fast reactors, Raj points to other recent designs with even smaller figures. The problem is that the reactors he offers as counterexamples have not yet been cleared for construction by the appropriate safety regulators in those countries, let alone constructed. We see no reason to expect that French regulators, for example, would be satisfied with a reactor design that offers less containment strength than the Superphénix, after the series of accidents that the Superphénix experienced. The only reactor under construction in the list Raj offers is the Russian BN-800. However, this reactor has been designed to have a very small or negative sodium void coefficient and is therefore not the most relevant benchmark for the PFBR, which has a large positive value.[3] The BN-800's containment design must also be seen against the background of the safety performance of Russia's breeder program. The largest reactor constructed so far, the BN-600, experienced 27 sodium leaks between 1980 and 1997, 14 of which resulted in sodium fires.[4] In most, if not all, cases, it appears that the reactor was not even shut down and continued operating as the fires were raging, indicating that inadequate priority is given to safety.
The state of Karnataka in India faces a shortfall in its electricity-generating capacity, which is higher during peak demand periods. To remedy this, the government has planned a large expansion of baseload capacity, mainly in the form of large coal-based thermal power plants. In the study described here, we calculate the per-kilowatt-hour costs of various supply-side options at different plant load factors, and find that the current plans for supply expansion via large baseload generation plants are at odds with the objective of meeting the demand shortfall at lowest cost. We estimate a comparable per-kilowatt-hour cost for energy efficiency measures, and estimate their potential via a sector-wise analysis. A lowest-cost planning exercise suggests that the increase in demand over the next decade can be met by implementing energy efficiency measures, expanding renewable sources of power to their capacity, and completing the thermal power plants already contracted. Therefore it is not necessary to commission any new large coal-based thermal plants during the next decade or more.
This article explores the capability of the 500 MWe Prototype Fast Breeder Reactor, under construction in India and intended to be the first of several similar reactors built over the next few decades, to withstand severe accidents. Such accidents could potentially breach the reactor containment and disperse radioactivity into the environment. The potential for such accidents results from the reactor core not being in its most reactive configuration; further, when coolant is lost, the reactivity increases rather than decreasing, as it does in water-cooled reactors. The analysis demonstrates that the official safety assessments are based on assumptions about the course of accidents that are not empirically justifiable, and that the safety features incorporated in the current design are not adequate to deal with the range of accidents that are possible.
In systems with rotational symmetry, bending modes occur in doubly-degenerate pairs with two independent vibration modes for each repeated natural frequency. In circular plates, the standing waves of two such degenerate bending modes can be superposed with a 1/4 period separation in time to yield a traveling wave response. This is the principle of a traveling wave ultrasonic motor (TWUM), in which a traveling bending wave in a stator drives the rotor through a friction contact. The stator contains teeth to increase the speed at the contact region, and these affect the rotational symmetry of the plate. When systems with rotational symmetry are modified either in their geometry, or by spatially varying their properties or boundary conditions, some mode-pairs split into singlet modes having distinct frequencies. In addition, coupling between some pairs of distinct unperturbed modes also causes quasi-degeneracies in the perturbed modes, which leads their frequency curves to approach and veer away in some regions of the parameter space. This paper discusses the effects of tooth geometry on the behavior of plate modes under free vibration. It investigates mode splitting and quasi-degeneracies and derives analytic expressions to predict these phenomena, using variational methods and a degenerate perturbation scheme for the solution to the plate’s discrete eigenvalue problem; these expressions are confirmed by solving the discrete eigenvalue problem of the plate with teeth.
Tradeoffs are examined between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming, using the following set of methods. A two-box climate model is used to simulate temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for e-folding reduction in the emissions intensity of gross global product. Separate emissions models for each forcer drive the box model. Lastly, a simple economics model is used, in which the cost of mitigation varies inversely with emissions intensity.

A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to roughly a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emissions intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales to minimize total mitigation cost under a temperature constraint (e.g. remaining within 2 °C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore rapid mitigation of CO2 emissions intensity is essential to limiting peak warming, but simultaneous mitigation of BC can reduce total mitigation expenditure.
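The quoted correspondence between timescale and annual rate is simple arithmetic: if emissions intensity decays as exp(-t/tau), the exact reduction per year is 1 - exp(-1/tau), which is close to the rule-of-thumb 1/tau for large tau.

```python
import math

# E-folding timescale of 40 years: the annual multiplicative factor on
# emissions intensity is exp(-1/tau), so the reduction per year is
# 1 - exp(-1/tau) ~= 1/tau = 2.5% (the abstract's round figure).
tau = 40.0
annual_reduction = 1 - math.exp(-1 / tau)
print(round(100 * annual_reduction, 2))   # 2.47% per year, exactly
```

The exact figure (2.47%) differs from the 1/tau approximation (2.5%) only at the third decimal, which is why the abstract's round number is a fair description.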
This article contributes a case study of regulation of the design of India’s Prototype Fast Breeder Reactor (PFBR). This reactor is the first of its kind in India, and perceived by the nuclear establishment as critical to its future ambitions. Because fast breeder reactors can experience explosive accidents called core disruptive accidents whose maximum severity is difficult to contain, it is difficult to assure the safety of the reactor’s design. Despite the regulatory agency’s apparent misgivings about the adequacy of the PFBR’s design, it eventually came to approve construction of the reactor. We argue that the approval process should be considered a case of regulatory failure, and examine three potential factors that contributed to this failure: institutional negligence, regulatory capture, and dependence on developers and proponents for esoteric knowledge. This case holds lessons for nuclear safety regulation and more generally in situations where specialized, highly technical, knowledge essential for ensuring safety is narrowly held.
Safe operation of nuclear power facilities requires a culture of learning, but Indian nuclear authorities appear persistently to fail to learn the lessons of accidents, including those at facilities they operate. This paper examines how nuclear authorities in India responded to the Fukushima accidents and to a previous accident at one of India's nuclear power plants, and infers what they seem to have learned from them. By evaluating this experience in light of a wide body of research on factors promoting reliability and safety in organizations managing complex and hazardous systems, it seeks to draw lessons about the prospects for nuclear safety in India.
This paper examines lessons from the operating experience in India’s nuclear facilities about factors influencing the risk of potential accidents. Different perspectives on safety in hazardous facilities have identified organizational factors coincident with reliable and accident-free operations; these include functional redundancy and compensation for failures, the importance of organizational leaders in setting and maintaining safety standards, healthy relationships between management and workers, and sophisticated learning from failures. Using publicly available information about incidents and failures, we find that these conditions are frequently violated.
The November 2009 exposure of employees at the Kaiga nuclear power plant to tritiated water is not the first instance of high radiation exposures to workers. Over the years, many nuclear reactors and other facilities associated with the nuclear fuel cycle operated by the Department of Atomic Energy have had accidents of varying severity. Many of these are a result of repeated inattention to good safety practices, often due to lapses by management. Therefore, the fact that catastrophic radioactive releases have not occurred is not by itself a source of comfort. To understand whether the DAE's facilities are safe, it is therefore necessary to take a closer look at their operations. The description and discussion in this paper of some accidents and organisational practices offer a glimpse of the lack of priority given to nuclear safety by the DAE. The evidence presented here suggests that the organisation does not yet have the capacity to safely manage India’s nuclear facilities.
This work presents a numerical model for normal engagement of two rough surfaces in contact. In this study, the Johnson translator system with a linear filter is used to transform a Gaussian white-noise input to an output surface with prescribed moments and autocorrelation function. The rough surface contact model employs influence coefficients obtained from finite element analysis of the contacting bodies. The contact solution accounts for the effects of macroscopic geometry and boundary conditions, and can be used to simulate engagement at a wide range of loads, including loads at which bulk effects dominate the response. A description of bulk deflection in terms of the displacement of the surface mean plane is also presented. The effects of surface topography on normal engagement stiffness are discussed.
Economics has a well-defined notion of equilibrium. Unlike mechanics or thermodynamics, however, economics does not include explicit theories of dynamics describing how equilibria are reached or whether they are stable. Even simple economics problems, such as maximization of a welfare function, might sometimes be interpreted as dynamics problems. Here we consider when dynamics is relevant to welfare optimization problems involving a single decision-maker, for example a social decision-maker maximizing a social welfare function. We suggest that dynamics arises when a welfare maximum can only be found through a sequence of local computations. These local computations give rise to a dynamical system, and the welfare optimum is also an equilibrium of that system. By contrast, if the welfare function is known globally, dynamics is irrelevant and the maximum can be chosen directly. The importance of choosing the right metaphor for the economics problem is discussed.
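The "sequence of local computations" idea can be sketched as gradient-ascent dynamics on an illustrative concave welfare function (the function and step size are arbitrary choices, not from the paper): the decision-maker only evaluates the welfare gradient locally, yet the resulting dynamical system settles at the welfare maximum, which is its equilibrium.

```python
# Hypothetical welfare function W(x) = -(x - 3)^2 with maximum at x = 3;
# the decision-maker can only evaluate the local gradient.
def welfare_gradient(x):
    return -2.0 * (x - 3.0)

x, step = 0.0, 0.1
for _ in range(200):
    x += step * welfare_gradient(x)   # local adjustment dynamics
print(round(x, 4))                     # converges to the optimum, 3.0
```

If W were known globally, the maximizer x = 3 could simply be chosen outright; the dynamics matters only because knowledge is local, which is the paper's distinction.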
If the India-US deal moves forward, this would give the former greater freedom to pursue cooperation with countries possessing nuclear materials and technology. However, international cooperation would require the facilities receiving assistance to be subject to safeguards, and to that extent India's priorities for international cooperation must be articulated. Having clear priorities would also help India's negotiators navigate a situation in which offers of cooperation come with strings attached.
We welcome the response from S C Chetal and P Chellapandi of the Department of Atomic Energy (DAE) to concerns we have raised about the safety of the Prototype Fast Breeder Reactor (PFBR) in a core disruptive accident (CDA) (Kumar and Ramana 2011). However, there are persistent disagreements that we outline below. It is important to take into account the consequences of a CDA because safety systems, including multiple ones, can fail.
The Prototype Fast Breeder Reactor that is being built in Kalpakkam in Tamil Nadu has the potential to undergo severe accidents that involve the disassembly of the reactor core. Such accidents could release sufficient energy to fracture the protective barriers around the core, including the containment building, and release large fractions of the radioactive material in the reactor into the surroundings. The designers of the PFBR have made choices aimed at making the reactor cheaper rather than safer. The safety assessment of the PFBR points to some fundamental problems with how nuclear technology is regulated.