
NNadir

NNadir's Journal
December 11, 2019

It's Too Bad Journalists Don't Read Editorials in Scientific Journals: Units and Energy Literacy.

The paper I'll discuss in this post is an editorial in a scientific journal: Energy Literacy Begins with Units That Make Sense: The Daily Energy Unit D (Bruce Logan, Environ. Sci. Technol. Lett. 2019, 6, 12, 686-687)

Dr. Logan is the editor of Environ. Sci. Technol. Lett., the rapid communications sister journal of Environ. Sci. Technol. I read both regularly.

Recently on this website, someone posted an excerpt of this bit of journalistic marketing bullshit, which unsurprisingly and immediately generated 25 recommends: Tesla's Virtual Power Plant rescues grid after coal peaker fails, and it's only 2% finished.

The author of this piece of benighted marketing is one Simon Alvarez, who proudly - if you click on the link for his name - has this to say about his qualifications to write this bit of "news:"

Simon is a reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday.


As a scientist, all I can say is, "Don't worry about it, Simon. You are already on Mars."

It's pretty funny that I came across this editorial on the same day I came across Simon's Elon Musk worship piece.

Here is how Simon, who is no worse than nearly all of the journalists writing about the grand solar/battery "miracle," describes the project:

Once complete, Tesla’s Virtual Power Plant in South Australia will deliver 250MW of solar energy and store 650 MWh of backup energy for the region. That’s notably larger than the Hornsdale Power Reserve, which is already changing South Australia’s energy landscape with its 100MW/129MWh capacity. In a way, Tesla’s Virtual Power Plant may prove to be a dark horse for the company’s Energy Business, which is unfortunately underestimated most of the time. Couple this with the 50% expansion of the Hornsdale Power Reserve, and Tesla Energy might very well be poised to surprise in the coming quarters.


According to Simon, we're "only" 2% along the way to the huckster Musk's "Virtual Power Plant." There's that magic word, so popular in the kind of narcoleptic rhetoric that is destroying the world with complacency: "percent."

While the unit MW is illiterately used to describe the solar peak power capacity, Simon is slightly better than most journalists inasmuch as he (in the same sentence) also includes a unit of energy, the MWh, which is equal to 3.6 billion joules.

The big lie we tell ourselves with huge enthusiasm, even as the atmosphere collapses in a festival of ignorance, is that a 250 MW solar plant is the equivalent of a 250 MW gas or coal or nuclear plant. However, it is rare for a solar plant to ever reach its peak capacity, and overall, even in deserts, the capacity utilization of a solar plant is typically 15% or less. If a gas or coal plant shuts down because a solar plant is producing a significant portion of its rated peak capacity for an hour, it has to burn extra gas or coal to restart because, as anyone with a cooled down tea kettle knows, the water does not boil instantaneously when you turn the burner back on. A "250 MW" solar plant is thus the equivalent of a 37.5 MW plant that can operate continuously. Moreover, since the solar plant's output is in no way connected with demand, it is not clear that the energy it provides will be useful.

Simon doesn't tell us how big the tripped coal plant was, but let's say it was a small coal plant, rated at 500 MW. Two percent of 650 MWh is 13 MWh. Thirteen MWh means that the Tesla future electronic waste could cover the output of the coal plant (if it is 500 MW) for 13/500 = 0.026 hours ≈ 1.6 minutes, or about 94 seconds.
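
For anyone who wants to check this arithmetic, here is a minimal sketch in Python; the 15% capacity factor and the 500 MW coal plant are the assumptions stated above, not figures from Simon's article.

# Back-of-the-envelope check of the two preceding paragraphs.
# Assumptions (stated in the text above): a 15% solar capacity
# factor and a hypothetical 500 MW coal plant.
peak_mw = 250            # advertised solar peak capacity (MW)
capacity_factor = 0.15   # typical solar capacity utilization
print(f"Continuous-equivalent output: {peak_mw * capacity_factor} MW")  # 37.5 MW

battery_mwh = 650                  # advertised storage (MWh)
stored_mwh = battery_mwh * 0.02    # the project is "2% finished"
coal_mw = 500                      # assumed size of the tripped coal plant
hours = stored_mwh / coal_mw
print(f"Coverage: {hours} h = {hours * 3600:.0f} s")  # 0.026 h ≈ 94 s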

Really? It "saved" Queensland Simon?

Because we want to believe this sort of wishful thinking from Simon, who wants to go to Mars someday on Elon Musk's "vision," we are well past the 400 ppm milestone for the accumulation of the dangerous fossil fuel waste carbon dioxide in the planetary atmosphere. We passed it (as measured at the Mauna Loa Observatory) permanently in the week ending November 8th, 2015. No one alive now will ever see a reading below 400 ppm again.

Smoke another joint, Elon, and tell us all about your solar powered car and your rockets to Mars.

History will not forgive us, nor should it.

The serious paper referenced at the outset of this post, written not by a little kid with science fiction dreams but by a real scientist (Bruce Logan), is open sourced, and anyone can read it. I will excerpt it briefly and post a table from it in any case; an interested party who actually is invested in reality can read it in full.

It is amazing how much we learn to perceive things through units that become common in our lives. On a cool autumn morning, you look at the thermostat in the United States and from experience you know how to choose the perfect coat for 52 °F. However, if you hear the temperature in Gallargues-le-Montueux, France, reached 46 °C (this past July), you probably have to Google a temperature conversion to change it to Fahrenheit (115 °F) to understand it. When you go to work and drive on a road posted at 35 mph, you know what that speed feels like, but what if you were in Europe and it was posted in kph? Or what if a European tells you the mileage for her car in liters per 100 km, and you struggle to relate that to numbers you know based on miles per gallon. We develop a sense of things based on experience with certain units, and when those are different, you lose your perception of the quantity.

Most of us do not have a basic sense of the amount of energy we consume for different activities in our lives. One reason is that we find it difficult to compare things that have different units, even if they describe the same property (such as temperature), and units of energy are particularly challenging! We often make comparisons based on something we can relate to, such as saying how many football fields we could cover or how many Olympic size pools we could fill. It is more difficult to relate energy units within one context, such as energy for our apartment or house, to other things in our life, such as fuel for our car.


Dr. Logan does not, unfortunately, suggest that the general public use the SI unit for energy, which is the joule. It is easily scaled with the prefixes kilo-, mega-, giga-, tera-, peta-, exa-, zetta-...

He suggests a unit D, for day, which is 2000 (dietary) calories, the daily food energy requirement of a "normal" human being: about 2.3 kWh, or roughly 8.4 million joules. I think this unnecessary. The joule is the best energy unit there is.
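
A quick check of the conversion, as a sketch (the factor of 4184 J per dietary kilocalorie is the standard one):

# Convert Logan's daily dietary unit D (2000 kcal) to SI units.
KCAL_TO_J = 4184.0                  # joules per kilocalorie
d_joules = 2000 * KCAL_TO_J         # ≈ 8.37e6 J
d_kwh = d_joules / 3.6e6            # 1 kWh = 3.6 million joules
print(f"1 D = {d_joules / 1e6:.2f} MJ = {d_kwh:.2f} kWh")  # 1 D = 8.37 MJ = 2.32 kWh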

The world in 2018 passed an energy consumption of 600 exajoules, an all time record. Exa- denotes 10^18, so this is a 6 followed by 20 zeros, in joules. Solar energy, after the expenditure of trillions of dollars on it, doesn't, combined with wind energy, produce 13 exajoules. In the percent language so popular in the lies the public tells itself, led by scientifically illiterate journalists, all this money and all this hype - half a century of it - produces less than 2% of world energy demand, and, in percentage terms, the fraction provided by dangerous fossil fuels is increasing, not decreasing.

In units of kWh, Dr. Logan provides the following table, later translating it, in a subsequent table, into his tortured unit "D." This is the "typical" amount of energy required or produced by each device in a typical day:

[Table not reproduced here: typical daily energy use or production per device, in kWh/d.]
He writes below the table:

Note that these units are in energy use per day (kWh/d), which has units of power, and a gallon of gasoline is included as a reference point. Some of these units make sense to compare, but for others, such comparisons are awkward. For example, the 120 hp engine from your car translates to an engine rated at 2160 kWh, but you would not (I hope) operate your car all day at its maximum power. These units of kWh also span different time frames (you do not eat continuously all day), and some units lack a more personal connection, such as food units in kWh.


The unit of power here, kWh/day, is easily converted to a unit of energy by multiplying it by 1 day; thus it is easily understood as energy. Note that it would take 33 solar cells to produce the energy in a single gallon of gasoline, and 90 to produce as much electricity as a person in this country consumes in a day (for all purposes, including labor). The second law of thermodynamics, which is almost never discussed in the garbage people like Simon produce, limits how much of the stored energy in a battery - a piece of future electronic waste that will never be sustainable on a scale of hundreds of exajoules - can be recovered.
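
As a sanity check on the gasoline comparison, here is a sketch; the 33.7 kWh energy content of a gallon of gasoline is the EPA's standard equivalence figure, and the assumption that each panel in Logan's table delivers about 1 kWh/day is mine, since the table itself is an image not reproduced in this post.

# Rough check of the "33 solar cells per gallon of gasoline" figure.
GALLON_GASOLINE_KWH = 33.7    # EPA energy-equivalence figure (kWh/gal)
panel_kwh_per_day = 1.0       # assumed per-panel daily output (kWh/d)
panels = GALLON_GASOLINE_KWH / panel_kwh_per_day
print(f"Panels per gallon-per-day: {panels:.0f}")  # ≈ 34, consistent with the ~33 above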

As long as we cheer for crap like this, we will be doing nothing useful to address the great crime we are perpetrating on all future generations, the permanent destruction of the planetary atmosphere.

Have a nice day tomorrow.


December 8, 2019

Continuous On Line Analysis of Constituents of the Radioactive Hanford Tanks.

The paper I'll discuss in this post is this one: Online, Real-Time Analysis of Highly Complex Processing Streams: Quantification of Analytes in Hanford Tank Sample (Bryan et al, Ind. Eng. Chem. Res. 2019, 58, 47, 21194-21200).

Nobel Laureate Glenn Seaborg described the chemical processing in the Manhattan Project to produce plutonium, an element of which he was co-discoverer, as the fastest and greatest chemical scale up in history. The first sample of plutonium he created, which is now displayed in the Smithsonian Institution's History of Science Museum - I've seen it - contained a quantity of plutonium so tiny it was invisible; its existence was recognized by detection of its radioactive decay signal. The nuclear reaction that created it was 238U(d,2n)238Np, carried out using the 60 inch cyclotron at UC Berkeley. The neptunium (which was not initially detected) decayed within days to the plutonium isotope 238Pu, which was characterized by trace chemical procedures in Gilman Hall, room 307, in late February 1941.

The first human built device to leave our solar system necessarily contains kg quantities of the 238Pu isotope.

As everybody knows, the discovery of plutonium played a huge role in the Manhattan Project, and the scale up, in which Seaborg played a key role, took the isolation of plutonium from essentially the atomic scale to multiple kilogram quantities. This was an industrial process, designed and executed in a completely ad hoc fashion, using materials and substances that had never been seen by anyone previously, possessing properties, notably intense radioactivity, that had never been addressed on an industrial scale.

As someone with considerable experience, albeit largely (but not entirely) indirect, with the scale up of chemical processes, I can say that this is not the way chemical processes are scaled today.

In this process it was absolutely necessary, given the physics of plutonium and the rate at which it formed, to utilize sources of it that were extremely dilute solid solutions in uranium. The procedure therefore necessarily produced significant quantities of by products, many of them highly radioactive. At the time, very few people thought about the long term consequences of handling these by products, now generally described in the public lexicon as "nuclear waste." A far greater concern at the time was that scientists working for Adolf Hitler would develop a nuclear weapon first. In some cases the by products were simply dumped in trenches. Ultimately storage tanks were built. Almost all of this process work was conducted at the Hanford plant in Washington State, the site having been selected because the nuclear reactors ultimately built to produce plutonium required significant quantities of cooling water to run.

As everybody knows, the "hot" war, World War II - which started, at least as far as the United States and the former Soviet Union were concerned, as an oil war - became the world's only observed nuclear war, which was followed by a cold war, by the two participants in the war who possessed and produced significant amounts of oil. (There have been many oil wars since 1945, but happily, no more nuclear wars.)

During the cold war, the production of weapons grade plutonium, in Washington State and elsewhere, accelerated to an even larger scale, from kilograms to metric tons. The production of weapons grade plutonium - tons of which was vaporized in the open atmosphere and distributed across the planet by the United States, the former Soviet Union, Great Britain, France and China - has always involved the use of dilute solid solutions of the element, and has thus always generated huge quantities of by products. At the Hanford site, 149 single shell tanks were constructed to contain these by products between 1945 and 1964. After 1964, when it was understood that some of these tanks were leaking by products, many of them highly radioactive, into the ground, a new class of tanks was built: an additional 28 double shell tanks.

During the history of the filling of these tanks, the types of materials in them varied widely, often with marginal record keeping, because they were subject to multiple and changeable processes. The initial process for plutonium recovery was called the "Bismuth Phosphate" process, which was followed by the Purex process (still in use in various places around the world), the Urex process, and the Truex process, the "ex" referring to the basic chemical approach in these processes, which is solvent extraction using solvents and extractants produced from the dangerous fossil fuel petroleum, for example kerosene and tributyl phosphate. The fuel rods were dissolved in highly corrosive (necessarily corrosive) acids, primarily nitric acid. After extensive processing to isolate plutonium (and in some cases other elements of interest), the nitric acid solutions were neutralized with sodium hydroxide - enough to keep aluminum from the processes in solution, although in some cases this aluminum precipitated as the mineral gibbsite.

The early tanks were designed to accommodate solutions subject to continuous boiling, since the side products were not only radiologically hot but also thermally hot. Once it was recognized that the tanks were leaking, it was decided to reduce the heat load in them by removing the cesium, using another set of processes that were also somewhat ad hoc. I wrote about the processing involved elsewhere in this space: 16 Years of (Radioactive) Cesium Recovery Processing at Hanford's B Plant. As I noted in that post, the process utilized to remove the cesium was recognized, after the fact, as having created a theoretical risk of a massive chemical explosion owing to a potential reaction between ferricyanide and nitrate. It was happily discovered, however, that the radiation in the tanks had destroyed the cyanide and rendered the risk nil.

This outcome, by the way, suggests why so called "nuclear waste" has largely unappreciated value: it has the demonstrated ability to destroy high risk chemicals, some of which are far more intractable than cyanide and are features of far larger waste streams than are present at Hanford, specifically electronic waste and the very frightening (at least to people paying attention) agricultural waste nitrous oxide.

Unlike nitrous oxide, the "nuclear waste" tanks, and the Hanford site in general, have garnered a huge amount of interest and concern, particularly from a set of people, anti-nukes, whom I personally regard as intellectual and moral cripples. I am, as anyone who has ever read the tripe I write here - which is not so much designed to be informative as to drive my autodidactic exercises - knows, a rather rabid advocate of the rapid scale up of nuclear energy, which I regard as the only practically available tool to save humanity from its most intractable wastes, the most dangerous of which is dangerous fossil fuel waste. Combustion wastes, including those associated with "renewable" biofuels, kill, as I often point out, about 19,000 people per day. These wastes are most commonly called "air pollution." Another 1,200 people die per day from diarrhea associated with untreated fecal waste. As an advocate of the rapid expansion of nuclear energy, I find that people who oppose my admittedly less than uniformly admired stance are always directing my attention to the Hanford reservation, about which they know less than I do, since they are a uniformly uneducated bunch when it comes to nuclear issues, and simply hate stuff about which they know nothing. The Hanford tanks are not risk free. It is very possible that materials leaching from them will someday result in death or injury for some people, but the number of "at risk" people is vanishingly small when compared to the observed and ongoing death toll from other wastes, in particular the combustion wastes associated with dangerous fossil fuel and "renewable" biomass combustion. I therefore morally and intellectually reject the notion that we should spend hundreds of billions of dollars to save the few lives that may be lost from Hanford leaching when we are unwilling to spend a comparable amount of money to clean up the planetary atmosphere, which we are in the process of destroying.

The moral idiots making this case, that Hanford is a dire emergency requiring the abandonment of nuclear power, while the death toll of air pollution, climate change from dangerous fossil fuels and, for that matter, fecal waste, is not, simply make me angry and upset.

Thank God DU has an ignore function. I have a very low tolerance for deliberate ignorance.

Despite this objection of mine, huge amounts of money are being spent to "clean up" Hanford utilizing an arbitrary risk to cost ratio that would never be applied to dangerous fossil fuels, since the application of such a ratio to dangerous fossil fuels would make them immediately unaffordable, and we believe we can't live without our consumer stuff that dangerous fossil fuels power. The silver lining on this cloud of selective attention is that the money being spent is producing some very good science, science that will have value in many fields, including the field of the recovery and utilization (ideally) of radioactive materials.

That brings me to the paper referenced at the outset.

Because of the ad hoc nature of the processes to which the contents of the Hanford tanks were subject, the nature of their contents is highly variable and, in some cases, unknown. The paper is about the contents of Hanford Tank AP-105, a double shell tank. To see how variable the contents of the tanks can be, here is a graphic from a government report, PNNL-18054 WTP-RPT-167, Rev 0, describing variability in a set of Hanford tanks not including AP-105:

[Graphic not reproduced here.]
In order to reduce costs and improve safety and quality in any industrial process, real time analysis is to be preferred to what the authors call "grab sample collection and offline analysis." To wit, from the introduction of the paper:

Online monitoring of chemical processes is a growing field with the potential to impact manufacturing, field detection, and fundamental research studies.(1–5) This approach allows for unprecedented, in situ characterizations of chemical systems. A variety of analytical techniques have been employed, ranging from ultrasonics to mass spectrometry.(6,7) However, optical spectroscopy offers a pathway with the greatest potential for providing chemical information including concentration, oxidation state, and speciation.(8–11) The primary strength of optical spectroscopy is the ability to provide significant amounts of characterization data for many chemical species, which leads to the primary challenge associated with this technique. In complex systems with multiple chemical species, the measured optical signals will be proportionally complex. The resulting spectral overlap, matrix effects, ionic strength effects, or signal interferences can inhibit accurate or timely response.(12,13)

This is strongly evident when monitoring the complex streams of the Hanford waste site, the largest superfund cleanup site in the United States.(14,15) With millions of gallons of radioactive waste needing to be remediated and moved to environmentally secured locations, current processing schemes rely heavily on sample collection and off-line analysis to ensure the correct management of materials. Grab sample collection and off-line analysis, however, are time consuming, costly, and have the potential to expose personnel to hazardous conditions.(13,16–19) Most importantly for processing timelines, waiting on grab sample analysis can force a batch-processing approach with extended periods of wait time between processing steps.(20) The Hanford site would benefit from the application of online monitoring by realizing faster (real-time) characterization of process streams while substantially reducing the need to expose personnel to hazardous conditions in the collection of grab samples.

Optical spectroscopy, and particularly Raman spectroscopy, is useful in the analysis of Hanford tank wastes. A majority of tank components are Raman-active with unique fingerprints that can be used to identify and quantify target analytes.(20)
The primary analytical challenge lies in accurately quantifying target analytes within the Hanford tank matrix. Hanford tanks contain a wide range of chemical species, with limited precharacterization to inform and aid in signal analysis.


Raman spectroscopy was discovered in the late 1920s by C.V. Raman, the first Asian to win a Nobel Prize in the sciences. At the time of his discovery, during the British Raj, when the British regarded themselves as superior to Indians with absolutely no justification, Raman spectroscopy involved extremely hard work, with a single experiment taking many days to perform. The technique involves exciting a molecule with intense monochromatic light and observing weak emissions radiating at wavelengths differing from the monochromatic light. The development of lasers and CCD detection devices has made it possible to build commercial instruments that can run experiments in seconds rather than days. Since the emissions involve vibrational and rotational changes in molecules, Raman signatures can only be obtained for multi-atomic molecules, and not for atoms or ions that are not bonded to another atom or ion.
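
The arithmetic connecting the paper's 671 nm laser to a Stokes-shifted Raman line is simple; here is a sketch using the well known symmetric stretch band of the nitrate ion near 1047 cm^-1 (my choice of example band, not a number taken from the paper):

# Convert a Raman (Stokes) shift to the scattered wavelength.
# The 671 nm laser is from the paper; the ~1047 cm^-1 nitrate band
# is my illustrative choice.
laser_nm = 671.0
shift_cm1 = 1047.0                      # Raman shift in wavenumbers
laser_cm1 = 1e7 / laser_nm              # 671 nm ≈ 14903 cm^-1
scattered_nm = 1e7 / (laser_cm1 - shift_cm1)
print(f"Stokes line: {scattered_nm:.1f} nm")  # ≈ 721.7 nm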

Here, from the paper, is a description of the components of Tank AP-105:

[Not reproduced here.]
The equipment:

Spectra were collected using a Raman spectrometer from Spectra Solutions Inc. and associated Spectra Soft software (version 1.3). Instrumentation consisted of a thermoelectric-cooled charge-coupled device detector and 671 nm diode laser. Collection times of 1 s were utilized, where every five spectra were collected and averaged into one spectrum for modeling and online monitoring applications. No spectral data processing other than data collection was performed using the Raman instrumental software.
A specialized flow cell, consisting of a machined holder to maintain the Raman probe alignment into a quartz flow cell, was used to interrogate both stationary and flowing samples. Flow loops were maintained with a QVG50 variable speed piston pump (Fluid Metering, Inc.) capable of pumping fluids at rates from 0 to 35.6 mL/min as set by a controller module. Flow rate calibration curves can be seen in the Supporting Information.


The experiments take one second per spectrum.


The following graphics demonstrate the result of the Raman real time spectroscopy experiments performed on simulated and real Hanford tank contents:



The caption:

Figure 1. Spectra of pure components anticipated in tanks focused on the fingerprint range (top), overlapping NO3– and CO32– bands (middle), and the water band (bottom).





The caption:

Figure 2. Parity plots for NO3– (top) and CrO42– (bottom) showing results for both the training set (gray circles) and validation set (other markers).




The caption:

Figure 3. Spectral response of the multicomponent sample (top) and the concentrations over the course of the run (bottom).




The caption:

Figure 4. Raman spectral response (top) over the course of the flow test and resulting chemometric measurements (open circles) of NO3– (middle) and CrO42– (bottom) to known values (black dashed lines).




The caption:

Figure 5. Spectra of real AP-105 at multiple flow rates and resulting chemometric results from flow test.


An important feature of the instrument must be the radiation resistance of the components.

Irradiation Experiments

A Raman probe and two different samples of a quartz window material (sample cuvettes) were exposed to γ dose from a cobalt-60 source. These materials were irradiated stepwise, increasing by a decade each irradiation, from 1 × 10^4 rad to a cumulative dose of 1.7 × 10^8 rad. Between each irradiation step, the spectra of the AP-105 tank simulant were acquired using irradiated and nonirradiated micro-Raman and 1 cm cuvettes.



The results:



The caption:

Figure 6. Picture of the window material before and after complete irradiation (top), spectra of AP-105 simulant as a function of dose (middle), and resulting NO3– measurements across the dose steps (bottom).


The table of analytical results:

[Not reproduced here.]
The R^2 values are, in some cases, a little lower than what we would accept in the pharmaceutical industry, but almost certainly sufficient for this type of analysis.
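
For readers unfamiliar with the word "chemometric": quantification of this kind typically rests on a multivariate calibration such as partial least squares (PLS) regression, trained on spectra of known composition. Here is a minimal sketch of the idea in Python with scikit-learn, using synthetic Gaussian "bands" in place of real spectra; the paper's actual models, preprocessing, and component counts are not reproduced in this post, so this is purely illustrative.

# Sketch of chemometric quantification by partial least squares (PLS).
# Synthetic spectra stand in for a real training set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_channels = 40, 500
axis = np.linspace(0, 1, n_channels)
# Two overlapping Gaussian "bands" mimic overlapping analyte signals.
band1 = np.exp(-((axis - 0.40) / 0.03) ** 2)
band2 = np.exp(-((axis - 0.45) / 0.05) ** 2)
conc = rng.uniform(0.1, 2.0, size=(n_samples, 2))   # known concentrations
spectra = conc @ np.vstack([band1, band2])           # mixture spectra
spectra += rng.normal(0, 0.01, spectra.shape)        # detector noise

model = PLSRegression(n_components=2).fit(spectra, conc)
print("R^2 on training data:", model.score(spectra, conc))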

The paper's conclusion:

Raman spectroscopy is a robust and highly applicable tool that can be applied to the online monitoring of complex and hazardous processing streams. Subsequent analysis of spectra utilizing chemometric analysis allows for highly accurate, real-time quantification of target analytes. Raman spectroscopy and chemometric analysis were successfully utilized to accurately identify and quantify nine critical components of real tank waste from Hanford tank AP-105: a radioactive sample that has more than 10 components in a high ionic strength environment. Furthermore, the Raman probes and subsequent analysis demonstrated highly robust capabilities to perform accurately after receiving over 1 × 10^8 rad of γ dose. Overall, Raman-spectroscopy-based online monitoring is a powerful route to characterize processing streams that present challenges such as chemical complexity and hazardous or damaging environments.


Interesting, I think.

I trust you're having a wonderful Sunday and that if you will be celebrating the upcoming holidays, that your preparations are going well.






December 6, 2019

Trump and Judy.

For some reason, Baby Trump's adventures with Trudeau and Macron made me think of this Kliban cartoon.


December 2, 2019

Jackson Station

December 1, 2019

Experimental Determination of the Bare Sphere Critical Mass of Neptunium-237.

The paper I'll discuss in this post is this one: Criticality of a 237Np Sphere (Rene Sanchez et al., Nuclear Science and Engineering, 158:1, 1-14 (2008)).

Neptunium is the only actinide element that is easy to obtain in an isotopically pure form simply by chemical isolation. This is because all of its isotopes that form readily in thermal spectrum nuclear reactors - which represent almost all of the world's commercial nuclear reactors - are short lived, except Np-237, which has a half-life of 2,144,000 years. The half-life of Np-238, the parent of plutonium-238, is 2.117 days, and the half-life of Np-239, the parent of plutonium-239, is 2.356 days. Thus even in a continuous on line isolation system fed by a critical nuclear fluid of the types now under discussion, chiefly molten salt reactors, any isolated neptunium would decay, within a few weeks' time, to essentially pure Np-237.
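
The "few weeks" claim follows directly from the half-lives just quoted; here is a minimal sketch of the decay arithmetic:

# Fraction of the short-lived neptunium isotopes remaining after
# three weeks, using the half-lives quoted above.
half_lives_days = {"Np-238": 2.117, "Np-239": 2.356}
t_days = 21.0
for isotope, t_half in half_lives_days.items():
    remaining = 0.5 ** (t_days / t_half)
    print(f"{isotope}: {remaining:.4%} remaining")  # both well under 1%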

Neptunium is routinely formed in the operation of commercial nuclear reactors. In thermal reactors, neptunium has a high neutron capture cross section and its fission is rare. Chiefly it is transmuted into plutonium-238, the accumulation of which has the happy result, in high enough concentrations (albeit not necessarily routinely formed concentrations), of making reactor grade plutonium essentially unusable in nuclear weapons. (As a practical matter, it is much easier to make nuclear weapons from natural uranium by separating the U-235 than it is to make them from reactor grade plutonium, and since it is impossible for humanity to consume all of the natural uranium on the planet, it will never be possible to make nuclear war impossible.)

In a fast neutron spectrum, neptunium can form a critical mass, and thus can be utilized as a nuclear fuel (or, in theory, a nuclear weapon).

I personally favor fast spectrum nuclear reactors, since they represent the potential to ban all energy related mining, dangerous natural gas wells, fracked and "normal," dangerous petroleum wells, fracked and "normal," all the world's coal mines, and in fact, all of the world's uranium mines for many centuries to come, utilizing the uranium already mined and the thorium already dumped by the lanthanide industry.

The so called "minor actinides," generally including neptunium, americium, curium and sometimes berkelium and californium, all have useful properties; there has been a lot of discussion in the scientific literature of using neptunium and americium as constituents of nuclear fuels, to eliminate the often discussed, but entirely unnecessary waste dumps for the components of used nuclear fuel.

From the introduction of the paper:

For the past 5 yr, scientists at Los Alamos National Laboratory (LANL) have mounted an unprecedented effort to obtain a better estimate of the critical mass of 237Np. To accomplish this task, a 6-kg neptunium sphere was recently cast1 at the Chemical and Metallurgy Research Facility, which is part of LANL. The neptunium sphere was clad with tungsten and nickel to reduce the dose rates from the 310-keV gamma rays originating from the first daughter of the α-decay of neptunium, namely, 233Pa.

Neptunium-237 is a byproduct of power production in nuclear reactors. It is primarily produced by successive neutron captures in 235U or through the (n,2n) reaction in 238U. These nuclear reactions lead to the production of 237U, which decays by beta emission into 237Np (Equation 1):

235U(n,γ)236U(n,γ)237U; 238U(n,2n)237U; 237U → 237Np + β− (1)
It is estimated that a typical 1000-MW electric reactor produces on the order of 12 to 13 kg/yr of neptunium.2 Some of this neptunium in irradiated fuel elements has been separated and is presently stored in containers in a liquid form. This method of storage is quite adequate because the fission cross section for 237Np at thermal energies is quite low, and any moderation of the neutron population by diluting the configurations with water would increase the critical mass to infinity. However, for long-term storage, the neptunium liquid solutions must be converted into oxides and metals because these forms are less movable and less likely to leak out of containers.

As noted in Ref. 3, metals and oxides made out of neptunium have finite critical masses, but there is a great uncertainty about these values because of the lack of experimental criticality data. Knowing precisely the critical mass of neptunium not only will help to validate mass storage limits and optimize storage configurations for safe disposition of these materials but will also save thousands of dollars in transportation and disposition costs.

The experimental results presented in this paper establish the critical masses of neptunium surrounded with highly enriched uranium (HEU) and reflected by various reflectors. The primary purpose of these experiments is to provide criticality data that will be used to validate models in support of decommissioning activities at the Savannah River plant and establish well-defined subcritical-mass limits that can be used in the transportation of these materials to other U.S. Department of Energy facilities. Finally, a critical experiment using an α-phase plutonium sphere surrounded with similar HEU shells and using the same setup used for the neptunium experiments was performed to validate plutonium and uranium cross-section data.


A brief excerpt of the materials utilized in these experiments:

The fissionable and fissile materials available consisted of a neptunium sphere, HEU shells, and an α-phase plutonium sphere. The neptunium sphere was ~8.29 cm in diameter and weighed 6070.4 g. Based on its weight and volume, the calculated density for the neptunium sphere was 20.29 g/cm^3. A chemical analysis was performed on the neptunium sphere sprue…

…The analysis showed that the sphere was 98.8 wt% neptunium, 0.035 wt% uranium, and 0.0355 wt% plutonium. There were also traces of americium in the sphere. Table I shows the elements found in the chemical analysis of the sprue. Approximately 1% of the mass of the sphere was missing because the sprue sample did not dissolve completely.

To reduce the gamma-radiation exposure to workers, which comes mostly from the 310-keV gamma ray from the first daughter of 237Np, 233Pa, the neptunium sphere was clad with a 0.261-cm-thick layer of tungsten and two 0.191-cm-thick layers of nickel. The gamma radiation at contact with the bare sphere was reduced from 2 R/h to 300 mR/h for the shielded sphere. Table II shows the dimensions, weights, and calculated densities of the neptunium sphere and different cladding materials. The total weight of the sphere, including cladding materials, was 8026.9 g. Figure 2 illustrates how the neptunium sphere was encapsulated. Except for the tungsten layer, both of the nickel-clad materials were electron-beam welded. In addition, a leak test was conducted for the nickel-clad layers to ensure that the neptunium metal and possibly some neptunium oxide produced in the event of a leak were contained within these materials and not released into the room or the environment.


Table 1:

[Not reproduced here.]
This is a highly technical paper, and it is probably not of much value here to excerpt all that much of it. Nevertheless, there is a great deal of public mysticism about nuclear technology, mysticism that is killing the world, since nuclear energy is the only technology that might work to ameliorate, stop, or even reverse climate change. There is so much mysticism and misinformation that completely scientifically illiterate morons like, say, Harvey Wasserman can find people ignorant enough to believe that he is, in fact, an "expert" on nuclear issues. (He's not. He is an abysmally ignorant fool, whose ignorance is killing people right now.)

With this in mind, I thought it might be useful to show some diagrams and photographs of the work that was performed here and that is found in the original paper:

[Diagrams and photographs not reproduced here.]

A student of nuclear history will recognize that these experiments are very much like the experiments with the "demon core" that killed the nuclear weapons scientists Harry Daghlian and Louis Slotin in separate accidents in 1945 and 1946. The remote equipment here is obviously designed to prevent that sort of accident from recurring.

The authors explored a number of different systems and reflectors, including both polyethylene and steel. In the process of conducting these studies, they refined some nuclear data on uranium isotopes, a valuable outcome.

From their conclusion:

Several experiments were performed at the Los Alamos Critical Experiments Facility to measure the critical mass of neptunium surrounded with HEU shells and reflected with various reflectors. For some experiments, Rossi-α measurements were performed to determine an eigenvalue that could be calculated by transport computer codes. These experiments were modeled with MCNP. For neptunium/HEU experiments, ENDF/B-VI data underestimated the keff of the experiment by ~1%. ENDF/B-V data and an evaluation provided by the T-16 group at LANL were in better agreement, although these cross sections continue to underestimate the keff by only 0.3% on average. After adjusting the neutron cross section for 237Np and 235U so that the MCNP simulations reproduce the experiments, we have estimated that the bare critical mass of 237Np is 57 ± 4 kg.
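
A way to picture this result: combining the 57 kg bare critical mass with the 20.29 g/cm^3 density quoted earlier gives the radius of a bare critical sphere. This is my own back-of-the-envelope sketch, using only two numbers quoted above from the paper:

import math

# Radius of a bare critical Np-237 sphere, from the paper's numbers.
mass_g = 57_000.0   # bare critical mass, 57 +/- 4 kg
density = 20.29     # g/cm^3, measured for the paper's sphere
volume = mass_g / density                          # ≈ 2809 cm^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
print(f"Bare critical radius ≈ {radius:.1f} cm")   # ≈ 8.8 cm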


Currently the main use for Np-237 is as a precursor for Pu-238 for use in deep space missions. Production of this important isotope has resumed at Oak Ridge National Laboratory, albeit on a small scale.

If we are interested in saving the world - there isn't much evidence that we are - neptunium can play a larger role in doing so, and thus this historical work is of considerable value.

A related minor actinide, which is also a potential source of Pu-238 - although this plutonium will always be contaminated with Pu-242, owing to the branching ratio of the intermediate americium-242 - is americium-241.

It was estimated in 2007 that the world inventory of these valuable elements was, as of 2005, about 70 tons of Np-237 and 110 tons of americium. It is desirable - critical, actually (excuse the pun) - that these materials be put to use.

I wish you a pleasant Sunday.
November 29, 2019

Why I switched my support here, not that it matters, from Warren to Yang.

I love Elizabeth Warren, because my feeling is that she has a flexible mind; if nothing else this will be a critical feature that a future President must have if we are to save anything from this unnatural disaster represented by the ignorant pig in the White House.

She also has a real chance to be the nominee, and if she is, I will be thrilled to vote for her.

Nevertheless, my sons prevailed on me to take a look at Andrew Yang, and to the extent I have time to engage in politics, I did so.

What I think doesn't matter, actually. I am not that politically engaged and to the extent I am, I'm strictly "anyone but Bernie" in the Primaries, and, in the general election, well I'm in agreement with that bumper sticker that reads "Any Functioning Adult, 2020."

If the "functioning adult" is Bernie - I don't think it will be - I'll have to bite the side of my cheek hard and pull the lever for him.

The nominee will not be Andrew Yang either. This is OK; supporting him is consistent with my personal history over a long lifetime of voting. As far as I can recall, I have never supported a candidate in the primary season who won the nomination, so Senator Warren should be glad to be rid of me, if one believes in karma. In fact, I have never supported a candidate in the primary season who came close to winning the nomination, except in 2008, when I supported Ms. Clinton.

Bill Richardson, Fred Harris, so and so and so and so, all more or less forgotten, garnered my early attention and support, usually based on their ideas.

So why Yang?

1) His idea about the place of technology. We may have forgotten this, but the benefits of rapid growth in labor productivity in the early and mid 20th century were broadly distributed. Workers saw a work week that contracted to 40 hours, health benefits, vacation time, and access to good schools and safe homes. Yang is the only candidate who seems to understand this, and his value is in raising this point in the campaign; it is, in fact, a perspicacious point, and - although it dates from the 20th century - in this time it constitutes a real and rare "new idea." This idea needs attention. It is critical in the coming age of AI and robotics.

2) His ideas about climate change will actually work, inasmuch as he supports nuclear power. Ms. Warren's stated ideas, which are anti-nuclear, will not work, and are in fact dangerous. I note, however, that Obama's 2008 energy ideas, involving coal based Fischer-Tropsch chemistry, a keystone of Jimmy Carter's energy program in the 1970s, would not have worked; they would have been a disaster. Fortunately, President Obama was very different from Democratic primary candidate Barack Obama. He actually hired a first rate, world class scientist as his Secretary of Energy.

Yang's support of nuclear power is tied to the somewhat fashionable thorium/U-233 fuel cycle, but that's OK. The worst nuclear technology is still superior to the best dangerous fossil fuel technology.

3) Mr. Yang is not of my generation, the Baby Boomer generation, which has been the least great generation since the antebellum generation preceding the American Civil War. In some ways that awful freak in the White House is an avatar of my generation. Yes, we had good and great people, and some accomplishments, but our post-World War II consumer mentality has been a disaster for our country and for our planet.

4) Yang has no chance of winning the nomination. I need to be consistent. My support for a candidate will have no bearing on the outcome. I live in New Jersey. Before we have a primary, the candidate will be more or less decided.

That about sums it up in a nutshell.

Have a nice weekend.









November 29, 2019

Nature Commentary: Climate tipping points -- too risky to bet against?

The commentary I'll discuss in this post comes from the prominent scientific journal Nature: Nature 575, 592-595 (2019)

In the title of this post I have added a question mark that is not included in the commentary. This is not to say that I question the point; rather, since the commentary is written by European climate scientists about, or to inspire, government policy, it is increasingly clear from simple measurements of carbon dioxide concentrations in the planetary atmosphere that the public does not take the risk even remotely seriously.

Even on the middle class and upper class left, where we nominally accept the science - one cannot "believe" in science, since scientific facts do not depend on whether the majority of people are intellectually or emotionally equipped to "accept" them - we think that we can continue to live in our sybaritic ecstasy if only we embrace electric cars and continuously cheer for the vast areas of the planet being destroyed to build industrial parks for wind "farms," while mining, often under appalling conditions using appalling processes, vast amounts of chemical elements for transmission lines, solar cells, and other useless junk misnamed "renewable energy."

So called "renewable energy" did not work, it is not working and it won't work to address climate change.

This is experimentally observed: World Energy Outlook, 2017, 2018, 2019. Data Tables of Primary Energy Sources. If one accepts science rather than "believes" in science, or particularly if one is trained in science, one understands that if one has a theory, and the experimental results conflict with the theory, the theory is wrong, and not the experiment.

The results of the multi-trillion dollar so called "renewable energy" experiment are in: the use of dangerous fossil fuels and the accumulation of dangerous fossil fuel wastes - only one of which is carbon dioxide - are now at the highest rates ever observed in human history, with the first derivative of such use and accumulation also at the highest level ever observed, and the second derivative uniformly positive.

From what I can tell the commentary is open sourced, and I will only excerpt a few brief passages, before making some remarks on the public perception of the all important topic of risk.

From the first few paragraphs:

Politicians, economists and even some natural scientists have tended to assume that tipping points1 in the Earth system — such as the loss of the Amazon rainforest or the West Antarctic ice sheet — are of low probability and little understood. Yet evidence is mounting that these events could be more likely than was thought, have high impacts and are interconnected across different biophysical systems, potentially committing the world to long-term irreversible changes.

Here we summarize evidence on the threat of exceeding tipping points, identify knowledge gaps and suggest how these should be plugged. We explore the effects of such large-scale changes, how quickly they might unfold and whether we still have any control over them.

In our view, the consideration of tipping points helps to define that we are in a climate emergency and strengthens this year’s chorus of calls for urgent climate action — from schoolchildren to scientists, cities and countries.

The Intergovernmental Panel on Climate Change (IPCC) introduced the idea of tipping points two decades ago. At that time, these ‘large-scale discontinuities’ in the climate system were considered likely only if global warming exceeded 5 °C above pre-industrial levels. Information summarized in the two most recent IPCC Special Reports (published in 2018 and in September this year)2,3 suggests that tipping points could be exceeded even between 1 and 2 °C of warming (see ‘Too close for comfort’).


The commentary begins with the word "Politicians." I note that two of the authors come from countries whose governments have endorsed and support the offshore drilling of dangerous fossil fuels, Denmark and Great Britain, and another comes from a country that has absurd and extremely dangerous energy policies, Germany.

A few other excerpts:

...Research in the past decade has shown that the Amundsen Sea embayment of West Antarctica might have passed a tipping point3: the ‘grounding line’ where ice, ocean and bedrock meet is retreating irreversibly. A model study shows5 that when this sector collapses, it could destabilize the rest of the West Antarctic ice sheet like toppling dominoes — leading to about 3 metres of sea-level rise on a timescale of centuries to millennia. Palaeo-evidence shows that such widespread collapse of the West Antarctic ice sheet has occurred repeatedly in the past...


I referred to some of this palaeo-evidence elsewhere in this space:

The amplitude and origin of sea-level variability during the Pliocene epoch

...The Greenland ice sheet is melting at an accelerating rate3. It could add a further 7 m to sea level over thousands of years if it passes a particular threshold. Beyond that, as the elevation of the ice sheet lowers, it melts further, exposing the surface to ever-warmer air. Models suggest that the Greenland ice sheet could be doomed at 1.5 °C of warming3, which could happen as soon as 2030.

Thus, we might already have committed future generations to living with sea-level rises of around 10 m over thousands of years3. But that timescale is still under our control. The rate of melting depends on the magnitude of warming above the tipping point. At 1.5 °C, it could take 10,000 years to unfold3; above 2 °C it could take less than 1,000 years6...


Future generations...as if we gave a shit.

...Ocean heatwaves have led to mass coral bleaching and to the loss of half of the shallow-water corals on Australia’s Great Barrier Reef. A staggering 99% of tropical corals are projected2 to be lost if global average temperature rises by 2 °C, owing to interactions between warming, ocean acidification and pollution. This would represent a profound loss of marine biodiversity and human livelihoods.

As well as undermining our life-support system, biosphere tipping points can trigger abrupt carbon release back to the atmosphere. This can amplify climate change and reduce remaining emission budgets...


In 30 years of personal research, I have convinced myself that the only viable solution to address climate change is nuclear energy. It is the only technology with a high enough energy-to-matter density to slow the first derivative, change the sign of the second derivative, and perhaps create a negative first derivative for the concentration of at least one dangerous fossil fuel waste, carbon dioxide, although the latter change represents a vast engineering problem that cheap carny barkers like, say, Elon Musk, engaged in marketing ersatz "solutions," are far too ignorant to comprehend.

Smoke another joint, Elon...

The world now has well over 17,000 reactor years of commercial nuclear operations.

There have been three major failures, two of which involved the release of volatile radioactive components to the environment. I hear about them all the time, generally from people with a clear and obvious inability to think straight, and they are all more famous than the 7 million people who die each year from air pollution. They are, chant after me: Three Mile Island, Chernobyl, and Fukushima.

A fourth putative "disaster" is the Hanford nuclear weapons plant in Washington State, to which I am often directed by stupid people to consider - even though I have clearly considered this plant on a far deeper level than most of these dumbbells who raise the point with me - my favorite and most memorable such occasion being an ignoramus who told me that I should be OK with 7 million air pollution deaths each year because a tunnel collapsed on the Hanford site with "radioactive materials" in it.

Thank God DU has an ignore function. The anger such ignorance raises for me is not good for my health.

The causes of all three major nuclear reactor failures can be engineered away in a straightforward way. All technology is subject to failure, and any technology involving the use of high energy is subject to failures involving loss of life. The issue is whether, on balance, a technology saves more lives than it ends.

For many years, I heard that the "solution" to the variability of so called "renewable energy" was transmission lines, made of copper sheathed in polymeric species and suspended from steel towers. Now California, which bought heavily into the "renewable energy will save us" theory - a theory which has failed to address climate change - has experienced vast destructive fires from an extensive network of, um, transmission lines.

Does this issue get as much coverage as Fukushima - or any coverage at all comparable to it - the latter being an issue of which any man or woman on the street is aware?

Three major failures of nuclear reactors do not impress me. The experimental probability of a major failure is observed to be 3 failures/17,000 reactor-years ≈ 0.02% per reactor-year. The experimental probability of a failure resulting in the release of significant quantities of radiation is 2/17,000 ≈ 0.01% per reactor-year. Again, these failures show us a path to engineer away their risks. I have made myself familiar with almost all of them.
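
The failure-rate arithmetic above, as a quick check:

# Observed failure frequencies over ~17,000 commercial reactor-years.
reactor_years = 17_000
major_failures = 3   # Three Mile Island, Chernobyl, Fukushima
with_release = 2     # Chernobyl and Fukushima
print(f"Major failures: {major_failures / reactor_years:.4%} per reactor-year")  # ≈ 0.0176%
print(f"With release:   {with_release / reactor_years:.4%} per reactor-year")    # ≈ 0.0118%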

Another widely employed engineering product is aircraft. It is well understood, by simple appeal to deaths per passenger-km (or mile), that flying is far safer than driving a car, or even riding a bicycle where cars exist. Nevertheless, deaths from aircraft accidents greatly exceed deaths from nuclear reactor failures, and what we do when such deaths occur - as detailed in the wonderful engineering show on the Smithsonian Channel, "Air Disasters" - is engineer away the risks. We do not, as a culture, declare air travel too dangerous. But the real risk of air travel is contained in the fact that it is powered using dangerous fossil fuels, the waste of which is proving intractable.

The overwhelming share of the posts I make on this political website refer to the primary scientific literature, and of these, the overwhelming share is devoted to issues in climate change: either the reality of climate change, possible engineering processes to address it, or the debunking, with appeal to scientific research and scientifically collected data, of the incredible and deadly popular enthusiasm for proposals that have not worked, are not working, and will not work to address the extreme risk of irreversibly destroying the entire planetary biosphere, or at least rendering it unrecognizable to cognizant species.

Does the general public understand this risk? I think not at all. All day yesterday and most of the night, I walked through one of the world's largest and most prominent cities, lit up with electronic signs selling stuff, some of them as high as 50 meters. Nowhere was there any reference to climate change, although there were many ads encouraging people to buy plastic stuff or metal stuff, or to take aircraft to remote regions of the planet for fun.

Are climate tipping points too risky to bet against? The answer from the world seems to be "stuff it."

I hope you're enjoying the Thanksgiving weekend.










November 27, 2019

Heavy Lanthanides: An "Imminent Crisis."

The paper I'll discuss in this brief post is this one: Heavy rare earths, permanent magnets, and renewable energies: An imminent crisis. (Karen Smith Stegen, Energy Policy 79 (2015) 1–8)

I came across it going through some old files (accessed in 2018) but had not read it, although the subject of critical elements has long been of considerable interest to me, and represents a big part (besides toxicology and wilderness preservation) of why I changed my mind on the question of whether so called "renewable energy" is sustainable. It is clear enough that, despite all of the mindless cheering for it (in which, to be honest, I used to participate), so called "renewable energy" has not addressed climate change, is not addressing climate change, and, I contend, will not address climate change. This paper addresses that issue.

From the introduction:

In past years, many policy makers, scientists and other interested parties have urged reducing reliance on hydrocarbon energy sources in favor of renewable ones. Reasons for this range from concerns over global warming, oil price volatility and economic vulnerability, to the peaking of oil production or the general need for diversification in energy portfolios. Actually attaining the potential environmental, economic and political benefits of renewable energies will, however, require a massive build-out. This article sounds the alarm that one significant obstacle to this effort may be the scant supplies of certain critical materials: rare earth elements.1 These are conventionally divided into two categories: the more common light rare earths and the less abundant heavy rare earths, which are particularly needed for efficient lighting applications and for the permanent magnets used in many renewable energy technologies. Lately, the ‘rare earth problem’ has received considerable attention, and several publications have taken stock of the situation. These assessments include, but are not limited to, a flawed Wall Street Journal article belittling the possibility of shortages (Sternberg, 2014), a more accurate but overly optimistic report (Butler, 2014), as well as a rigorous evaluation (Golev et al., 2014). None of the recent reporting on rare earths accurately depicts the extent of the various challenges. In general, misconceptions about rare earths and rare earth-related industries are rampant. Rare earths are the linchpin ingredients of many high technologies for a wide variety of uses—ranging in application from military and medicine to entertainment, communications and petroleum refining, through to lighting and renewable energies...

...This article seeks to serve as a wake-up call to renewable energy advocates, whether government officials, policy makers, industry decision-makers or simply concerned citizens. We begin by providing background information on rare earth elements and permanent magnets, clarify several ubiquitous misperceptions about rare earths and outline the risks of heavy reliance on a single supplier. We then review and assess the various methods for addressing shortages and present the main issues associated with developing rare earth supply chains outside of China. The article closes with a discussion of the implications and several policy recommendations.


The bold is mine; it reflects my feeling, which would be amusing were it not so dire, that trying to "wake up" advocates of so called "renewable energy" to the fact that it is not, in fact, "renewable" is at best a Sisyphean task, and more likely a Quixotic one.

One thing I have noted about advocates of converting every wilderness area into industrial parks for short lived wind "farms" - the use of the word "farm" being another example of, um, lying - is that they are, in general, uninterested in replacing dangerous fossil fuels and more interested in attacking nuclear energy, even though nuclear energy is the only sustainable form of energy there is and represents the only workable tool for addressing climate change. (As good as nuclear energy is, however, addressing climate change at all is increasingly a long shot.)

The author makes a distinction between the heavy lanthanides and the light lanthanides, which is a very important distinction, and one about which I've spent considerable time thinking, particularly with respect to dysprosium.

The ‘rare earths’ category, depicted in Table 1, refers to 15 chemical elements (numbers 57–71 of the periodic table) collectively known as the lanthanide or lanthanoid series plus two additional metals, scandium and yttrium, that are closely related. Although many rare earths were discovered one-to-two centuries ago, their value has only recently been discerned. The “unique magnetic, luminescent, and electrochemical properties” of rare earths makes them almost indispensable to many of today's technologies (RETA, 2014); for example, when used as additives to permanent magnets, they endow resistance to demagnetization at high operating temperatures.

Several of the rare earths used in renewable energy technologies and efficient lighting applications are considered critical, that is, at risk for short- and mid-term shortages. The United States (US) Department of Energy (US DOE, 2011) assessed the criticality of various materials to clean energy applications according to a two-part schema: the importance of each individual material and the severity of the supply risks. Materials scoring high on both dimensions are considered “critical”, and those at medium or low risk are deemed, respectively, “near critical” or “not critical” (see Table 1). For both the short- (0–5 years) and medium-term (5–15 years) periods, five rare earth elements were placed in the critical category: dysprosium, neodymium, europium, yttrium, and terbium. Most of these are categorized as heavy rare earths: dysprosium, used in neodymium–iron–boron permanent magnets (for example, in wind turbines and electric vehicles); terbium, used primarily in lighting (terbium can also substitute for dysprosium, but is more expensive); and yttrium, used in lighting. Europium, used in lighting, lies between the light and heavy rare earths on the periodic table and is considered a heavy rare earth by some authorities (US DOE (US Department of Energy), 2011, Molycorp, 2012 and Alkane Resources, 2013) and as a light rare earth by others...


Although the chemistry of the lanthanides (aka "rare earths," as in this article) is very similar - which is why all of these elements, plus yttrium and scandium, are generally found together in ores - there is a subtle but consequential difference in their chemistry that appears when the f shell is half filled, which occurs at europium. Europium itself is sometimes depleted in these ores because, unlike the other lanthanides, it has a very stable +2 oxidation state, which makes its chemistry more like that of barium and strontium than that of the other lanthanides, affecting the geochemistry of some ores. The elements after europium - with gadolinium sometimes an exception in some contexts - have differences in their geochemistry that alter their distributions in ores.

The author gives one fairly good example - among many that are not mentioned in this paper - of the relevance of these elements, including dysprosium, to the utility of so called "renewable energy":

Rare earth permanent magnets are particularly important for clean energy applications and, currently, China accounts for about 80 percent of global production (Benecki, 2013 and Dent, 2014). Permanent magnets are divided into two categories: samarium cobalt and neodymium–iron–boron. According to an executive in the permanent magnet industry interviewed for this article, the two types have similar properties, but offer different advantages and disadvantages. Samarium cobalt magnets perform better at higher temperatures, but are brittle, which limits magnet size and can cause problems with integration into certain applications, such as motors. Samarium cobalt magnets do not contain dysprosium, but there are supply and price concerns associated with cobalt (US DOE, 2011). These magnets are used for small, high-temperature applications and are typically not found in renewable energy technologies.

Neodymium–iron–boron magnets are even stronger than samarium cobalt magnets and, because their size is not as restricted, they are more suitable for large applications, such as wind turbines and other electricity generators. These magnets typically contain two to four percent of dysprosium to enhance their temperature resistance. The advantages offered by neodymium–iron–boron permanent magnet to renewable energies are not inconsequential. Depending on the system, permanent magnets can increase efficiency—upwards to 20 percent—which translates into lower costs and shorter payback periods. For example, at least two major benefits can be derived from replacing the mechanical gearboxes in wind turbines with direct-drive permanent magnet generators: first, the overall weight of the turbine is reduced, which thus reduces the costs of other components, such as the concrete and steel required to support heavy gearboxes; second, reducing the number of moving parts allows for greater efficiency and reliability (Hatch, 2014; see also Kleijn, 2012). The advantages of permanent magnet generators are particularly salient for offshore installations, where reliability is paramount due to the high costs of maintaining and repairing turbines. Neodymium–iron–boron magnets are also used in other types of renewable energy technologies—such as underwater ocean and wave power (Dent, 2014). Additional potential applications that could use permanent magnets include small hydro applications, solar updraft towers (Hatch, 2008), geothermal drilling (Hatch, 2009), and heat pumps (rdmag.com, 2013). Several of these renewable energy technologies are in the prototype or testing stages. One factor that could impede their commercialization is the price of permanent magnets. Indeed: were the price lower, many existing renewable energy technologies could be re-designed around them, which could reap the same efficiency, size, reliability and, ultimately, cost benefits as they already produce for new technologies (Hatch, 2014).


Table 1: [reproduced as an image in the original post: the rare earth elements, their principal applications, and their criticality classifications]

The table gives a nice overview of the uses of these elements, omitting a few.

Many of the light lanthanides are fission products, some of which feature radionuclides with half-lives short enough to suggest they could be utilized directly after isolation: praseodymium, neodymium and lanthanum. The latter two contain radioisotopes that are very long lived and are, as a result, found in natural ores; Nd-144 is an example. Fission-product cerium contains Ce-144, a parent of Nd-144 (by way of short-lived Pr-144), with a half-life of 284 days, meaning that utilizing cerium in places where the radioactivity would not be desirable - there are many potential applications where the radioactivity would be desirable - would require up to ten years of cooling.
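As a sanity check on that cooling time, here is a minimal sketch of the decay arithmetic, assuming simple exponential decay and the 284-day half-life quoted above (the code is mine, for illustration only):

```python
import math

HALF_LIFE_CE144_DAYS = 284.0  # half-life of Ce-144, as quoted above

def remaining_fraction(t_days: float, half_life_days: float) -> float:
    """Fraction of a radionuclide remaining after t_days of decay."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)

for years in (1, 5, 10):
    frac = remaining_fraction(365.25 * years, HALF_LIFE_CE144_DAYS)
    print(f"After {years:2d} year(s): {frac:.2e} of the Ce-144 remains")
```

After ten years, only about one ten-thousandth of the original Ce-144 remains, consistent with the cooling period suggested above.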

Promethium, element 61, is found only in used nuclear fuel and not in nature (except for very, very minor trace amounts from the spontaneous fission of natural uranium). Its most common isotope, Pm-147, is not very long lived (its half-life is about 2.6 years), but it has been utilized for permanent lighting applications.

Samarium and europium have very high neutron capture cross sections and, as a result, are somewhat depleted in used nuclear fuels; these captures are, in any case, the reason that nuclear fuel in today's common reactors becomes exhausted before all of the fissionable material is consumed. I believe that in "breed and burn" reactors they might serve (besides as control rods) as long-term neutron shields for reactors that run for decades without refueling. Under these conditions, some of these elements would be transmuted into "heavy" lanthanides.
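To get a feel for why these elements can serve as neutron shields, here is a back-of-the-envelope sketch; the roughly 40,000 barn thermal absorption cross section of Sm-149 and the density of samarium metal are approximate literature values, and the pure-isotope material is an idealization of mine:

```python
# Rough estimate of the mean free path of a thermal neutron in Sm-149.

AVOGADRO = 6.022e23          # atoms per mole
SIGMA_BARNS = 4.0e4          # approx. thermal absorption cross section of Sm-149
BARN_CM2 = 1.0e-24           # one barn in cm^2
DENSITY_G_CM3 = 7.5          # approx. density of samarium metal
MOLAR_MASS = 149.0           # g/mol for Sm-149

atom_density = DENSITY_G_CM3 * AVOGADRO / MOLAR_MASS   # atoms per cm^3
mean_free_path_cm = 1.0 / (atom_density * SIGMA_BARNS * BARN_CM2)

print(f"Atom density:   {atom_density:.2e} atoms/cm^3")
print(f"Mean free path: {mean_free_path_cm * 1e4:.0f} micrometers")
```

A thermal neutron travels, on average, less than ten micrometers in such material before being absorbed, which illustrates both the shielding potential and why even small quantities of these elements poison a reactor core.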

Used nuclear fuel, however, is not an option for the long-term supply of lanthanides, precisely because nuclear fuel has such a high energy density: very little fuel is required, and therefore very little fission-product material is generated.

The amount of plutonium required to meet all of the world's energy demands - shutting down all of the world's energy mining (including, for many centuries, uranium mining or extraction from seawater), all the gas, all the oil and all the coal - is rather small.

Currently the world is consuming about 600 exajoules of energy per year. The amount of plutonium required to meet this demand is relatively trivial, about 7,500 tons per year, as compared with the billions of tons of dangerous fossil fuels consumed each year.
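That figure is easy to check with back-of-the-envelope arithmetic, assuming complete fission of Pu-239 at roughly 200 MeV per fission (both round numbers, for illustration only):

```python
# Rough check: how much Pu-239 must fission to supply ~600 EJ per year?

MEV_TO_J = 1.602e-13            # joules per MeV
ENERGY_PER_FISSION_MEV = 200.0  # approx. energy released per fission
AVOGADRO = 6.022e23             # atoms per mole
MOLAR_MASS_PU239 = 239.0        # g/mol
WORLD_DEMAND_J = 600e18         # ~600 exajoules per year

joules_per_kg = (1000.0 / MOLAR_MASS_PU239) * AVOGADRO * ENERGY_PER_FISSION_MEV * MEV_TO_J
tonnes_per_year = WORLD_DEMAND_J / joules_per_kg / 1000.0

print(f"Energy per kg of Pu-239 fissioned: ~{joules_per_kg / 1e12:.0f} TJ")
print(f"Plutonium required per year:       ~{tonnes_per_year:,.0f} tonnes")
```

The result, roughly 80 terajoules per kilogram and about 7,400 tonnes per year, agrees with the figure cited above.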

These small quantities, and the fact that the lanthanides are only a fraction of the elements that can be obtained as fission products, suggest that the lanthanide problem cannot be solved by isolation from used nuclear fuels, as the yearly production would represent only a small fraction of world demand.
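To put rough numbers on that fraction, here is a sketch in which both inputs are assumptions of mine for illustration, not values from the paper: lanthanides taken as roughly a quarter of fission-product mass, and world rare earth production taken as roughly 200,000 tonnes per year:

```python
# Order-of-magnitude comparison of fission-product lanthanide supply vs. demand.

FISSION_PRODUCTS_TONNES = 7500   # from the plutonium estimate above
LANTHANIDE_FRACTION = 0.25       # ASSUMED share of fission-product mass (illustrative)
WORLD_REE_TONNES = 200e3         # ASSUMED annual world rare earth production (illustrative)

lanthanide_tonnes = FISSION_PRODUCTS_TONNES * LANTHANIDE_FRACTION
share_of_demand = lanthanide_tonnes / WORLD_REE_TONNES

print(f"Fission-product lanthanides: ~{lanthanide_tonnes:,.0f} tonnes/year")
print(f"Share of world demand:       ~{share_of_demand:.1%}")
```

On these assumptions, the yield is on the order of one percent of world demand - a small fraction indeed.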

I wish you a happy Thanksgiving holiday.








November 26, 2019

Anyone having Thanksgiving in a restaurant?

For the first time in about 25 years, my family is.

It's in New York City.

I'm not really crazy about the idea, but some out of town family is flying in to do it, including some people I last saw when they were children and who are now parents themselves.

November 26, 2019

I found a paper giving the solution for the diffusion equation for a conical boundary. Life...

...is very beautiful, and then you die.

It's just one of those truly wonderful things.

For some reason, I never looked for it, or maybe I did, but didn't know how to look for it.

It all came together in a bout of really, really, really bad insomnia.

Solutions of the diffusion equation in cones and wedges
