DU Home » Latest Threads » NNadir » Journal


Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 22,604

Journal Archives

New Weekly Record High for Carbon Dioxide Established at Mauna Loa.

From the Mauna Loa Carbon Dioxide Observatory:

Up-to-date weekly average CO2 at Mauna Loa

Week beginning on April 7, 2019: 413.13 ppm
Weekly value from 1 year ago: 409.46 ppm
Weekly value from 10 years ago: 389.50 ppm
Last updated: April 14, 2019

The increase over 1 year ago is 3.67 ppm.

The value reported here is the highest weekly average ever recorded.

This year has been somewhat anomalous, with a kind of plateau lasting through much of February and March after the previous record high, 412.41 ppm, was set on February 10. Since February 10, values have ranged between 410.98 and 412.41 ppm, with an average of 411.98 ppm, not including this week.

I began to suspect an early Northern Hemisphere spring, since the yearly maximum in these readings is reached each spring.

In 1976, the first full year for which weekly data are available, the annual peak, 335.30 ppm, was reached on May 23.

In 2016, it occurred on April 10.

It does seem, as the climate condition worsens while we all wait for the wind and solar nirvana that never comes, that the Northern Hemisphere spring, which dominates the annual cycle in carbon dioxide readings owing to the bulk of the planet's land mass lying in the Northern Hemisphere, should be arriving earlier each year. But February seemed a little extreme, even to me, a person who has come to expect the worst.

The peak in 2018, 411.85 ppm, arrived comfortably late, on May 13.

We do not know when the peak will come this year.

The increase over the same week last year is 3.67 ppm. Since 1975, 2,254 such readings have been recorded. The increase represented here, again 3.67 ppm, is the 45th worst ever recorded, placing it in the 98th percentile for "worst ever." The absolute worst ever, 5.04 ppm, was recorded in the week of July 31, 2016, by comparison with the reading at the end of July 2015.
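
The percentile claim above can be sketched in a few lines. This is a minimal illustration, not the NOAA record: the list below is a hypothetical stand-in for the ~2,254 weekly year-over-year increases.

```python
# Sketch: where does a given year-over-year weekly CO2 increase rank
# among all such increases on record? The 'increases' list here is an
# illustrative stand-in for the full NOAA weekly series.
def percentile_rank(increases, value):
    """Percentage of recorded increases that 'value' equals or exceeds."""
    worse_or_equal = sum(1 for x in increases if x <= value)
    return 100.0 * worse_or_equal / len(increases)

# Hypothetical sample of weekly year-over-year increases (ppm):
increases = [1.2, 2.1, 3.67, 0.9, 5.04, 2.8, 3.1, 1.7, 2.4, 3.9]
print(percentile_rank(increases, 3.67))  # → 80.0
```

Run against the full 2,254-reading series, the same function places 3.67 ppm in the 98th percentile, as stated above.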

If any of this troubles you, don't worry, be happy. I read here recently that the Wisconsin legislature approved a fivefold increase in solar installations in that state. This has nothing at all to do with climate change, of course, since all the solar energy installations installed over the whole of human history, to endless cheering, have done nothing to address climate change, although it should create lots of jobs for people sweeping snow off of solar cells.

In any case, new solar farms in Wisconsin should be good for migrant workers, establishing a winter business, snow sweeping. If that doesn't work in Wisconsin, we can always burn more dangerous natural gas when snow is on the ground and on the solar cells and go a long way to eliminating, um, snow.

I hope you're enjoying your Sunday afternoon.

2019 Monthly Growth Figures for Carbon Dioxide Increases Are Among the Highest Recorded.

The monthly year to year increases in carbon dioxide concentrations are reported on the NOAA website for the last 55 years.

They may be accessed here: Mauna Loa CO2 Data Sheets

I have imported the monthly data into spreadsheets to use Excel functions to compare and order data from best to worst.

The worst data point ever recorded was a 4.16 ppm increase, recorded in April of 2016 as compared to April 2015, about 15 years into the worldwide adoption of the "renewable energy will save us" anti-nuke scheme that was most prominently adopted in certain European countries featuring Germanic languages.

The second worst ever recorded was 4.01 ppm, in June of 2016.

The worst January figure ever recorded was 3.61 ppm, recorded in January 2017.

The worst February figure ever recorded was 3.76 ppm, recorded in February 2016.

The worst March figure ever recorded was 3.31 ppm, recorded in March of 2016.

For 2019, January recorded a value of 2.85 ppm over 2018, the third worst ever recorded.

For 2019, February recorded a value of 3.40 ppm over 2018, the second worst ever recorded.

For 2019, March recorded a value of 2.51 ppm over 2018, the eighth worst ever recorded.
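
The spreadsheet exercise described above, ordering the monthly year-over-year increases from worst to best, can be sketched as follows. The values are only the handful quoted in this post, not the full NOAA data sheet.

```python
# Sketch of ranking monthly year-over-year CO2 increases (ppm) from
# worst (largest) to best. Only the figures quoted in the text are used.
monthly_increases = {
    "2016-04": 4.16, "2016-06": 4.01, "2016-02": 3.76,
    "2017-01": 3.61, "2019-01": 2.85, "2019-02": 3.40, "2019-03": 2.51,
}

ranked = sorted(monthly_increases.items(), key=lambda kv: kv[1], reverse=True)

def rank_of(month):
    """1 = worst (largest increase) among the months listed."""
    return 1 + [m for m, _ in ranked].index(month)

print(ranked[0][0], rank_of("2019-02"))  # → 2016-04 5
```

With the complete data sheet loaded instead of this sample, the same sort yields the rankings quoted above (e.g., February 2019 as the second worst February).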

If any of this disturbs you, don't worry, be happy. I recently read here that so called "renewable energy" subsidies in Texas are "working."

Apparently, in this mentality, "working" involves putting up lots of steel towers to grind up endangered and common birds and bats, and has nothing to do at all with climate change.

It figures. If "working" did involve climate change, then so called "renewable energy" has not worked, is not working, and will not work.

Have a very pleasant Sunday.

The External Cost, Including Climate Cost, of Stationary Batteries For Grid and Off Grid Power.

The paper from the primary scientific literature I'll discuss in this post is this one: Additional Emissions and Cost from Storing Electricity in Stationary Battery Systems (Schmidt et al Environ. Sci. Technol., 2019, 53 (7), pp 3379–3390)

My first exposure to the concept of life cycle analysis came years back, when the European Union released its first ExternE reports, which attempted, in units of euros, to monetize the destruction caused by the use of energy, that is, the external costs: the cost of destroyed human health, of damage to the environment, including but not limited to climate change, and of the depletion of resources.

The figures in the early reports showed that the external costs of nuclear energy were the lowest calculated using the variables then in use. This certainly caught my attention, as my enthusiasm for nuclear energy, while firmly established, was accelerating at the time, even though, back then, I was still a fan of so called "renewable energy."

Ten or twelve years later, the general ExternE concept has exploded in the scientific literature, and it now comes under the general rubric of "Life Cycle Analysis," often abbreviated "LCA."

If one puts the search terms "LCA," "life," and "cycle" into Google Scholar, one gets about 179,000 hits, over 3,400 of them from 2019 as of this writing.

Many software packages exist to calculate these costs in various units: for example, grams of CO2 per kWh for electricity generation by various means, or 1,4-dichlorobenzene equivalents for toxicological effects, and yes, in terms of money: dollars, euros, whatever. There is, for instance, Ecosense, which appears regularly in the scientific literature, but there are many, many other such programs. No one person can possibly read all of the papers written on this subject, of course, but personally, I have come across perhaps a thousand or so over the years.

One needs to realize that these calculations contain a great deal of subjectivity, because intrinsically many questions connected with values arise.

To wit:

There are said to be, as of this writing, 849 whooping cranes left on this planet. This is way up from the number a few decades ago, which was 21. The whooping crane is a very photogenic animal, quite beautiful:


The bird has the largest wingspan of any North American bird. Whooping cranes, when released in the wild, migrate from the Gulf of Mexico to northern Canada each year.

There was a time in my life, a few decades ago, when people who self defined as "environmentalists" cared more about the whooping crane than about, for example, Elon Musk's stupid car for millionaires and billionaires.


Here is the whooping cranes' migratory pathway:


Notice that it goes right through Texas and Oklahoma, those oil and gas drilling hell holes that vote repeatedly for ignorant rednecks to be their Senators, including the oblivious fool James Inhofe, one of the most prominent climate change denying assholes in the United States Senate, an organization that has degenerated into a club of morons controlled by the silly, cartoonish, Pravda-wannabe propaganda channel, Fox News.

Of course, in modern times, we do have people who call themselves "environmentalists" who slobber all over Oklahoma with their ignorant "percent talk," since, as a state, Oklahoma has one of the highest percentages of electricity produced by wind energy: 32% as of 2017, according to the EIA. Of course, this also means that the bulk of its electricity is not produced using wind energy.

This reminds me of one of my favorite engineering jokes: "The optimist says the glass is half full; the pessimist says it's half empty; and the engineer says the glass is twice the size it needs to be." Call me a pessimist, but I note that the 68% of electricity not produced by wind can actually be greater, in absolute terms, than the former 100% was, if overall consumption has grown to more than 147% (= 100%/0.68) of what it was a few years earlier. In "percent talk," a system can be gaining share while losing ground. This is, in fact, what is happening on the entire planet: all the so called "renewable energy" assembled after decades of mindless cheering is not even close, not even remotely close, to matching the increases in overall energy consumption worldwide.
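
That "percent talk" arithmetic is worth making explicit. A minimal sketch, using the 32% wind share quoted above and an assumed (hypothetical) growth in total consumption:

```python
# Sketch: a rising wind *share* can coexist with rising absolute
# non-wind (largely fossil) generation, if total consumption grows.
def nonwind_growth(old_total, new_total, wind_share):
    """Ratio of non-wind generation now vs. before the wind build-out."""
    return new_total * (1 - wind_share) / old_total

# Break-even: non-wind output is unchanged when consumption reaches
# 100% / 68% of its former level.
breakeven = 1 / (1 - 0.32)
print(round(breakeven, 2))                       # → 1.47
print(nonwind_growth(100.0, 160.0, 0.32) > 1.0)  # consumption up 60%: True
```

So with a 32% wind share, any consumption growth beyond about 47% leaves more non-wind generation, in absolute terms, than before the turbines were built.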

Anyway, let's indulge in some "percent talk" with respect to whooping crane populations. There are seven billion human beings on this planet, which is well past its carrying capacity. If one whooping crane is killed by a wind turbine blade in Texas, it would represent roughly 0.12% - we all love percent - of the population, which in percentage terms is the equivalent of killing 8,400,000 human beings. If another whooping crane is killed by a wind turbine blade in Oklahoma, that would be the equivalent of killing 16,800,000 human beings. If a third is killed in Kansas by a wind turbine blade, it would be the equivalent of killing 25,200,000 human beings, and so on through Nebraska, South Dakota, North Dakota, Manitoba, Saskatchewan, and Alberta.
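
The population-percentage comparison above is simple to check, using the 849-crane and seven-billion-human figures from the text:

```python
# Sketch: one whooping crane death as a fraction of the species,
# scaled to a human population of seven billion.
CRANES = 849
HUMANS = 7_000_000_000

def human_equivalent(cranes_killed):
    """Humans corresponding to the same percentage population loss."""
    return HUMANS * cranes_killed / CRANES

print(round(100 * 1 / CRANES, 2))           # → 0.12 (% of the species)
print(round(human_equivalent(1) / 1e6, 1))  # → 8.2 (million people)
```

The exact equivalent works out to about 8.2 million; the 8,400,000 in the text comes from rounding 1/849 to 0.12% before multiplying.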

We accept, year after year, decade after decade, the deaths of tens of millions of people from air pollution - in this century we have accepted the deaths of over 130 million people, almost twice the population of the United Kingdom - without so much as a whimper, although many, many, many people refuse to accept Fukushima, thinking that it was, more or less, the worst energy disaster of all time.

But in a purely scientific sense, the deaths of 130 million people will not have as much of an effect on the human population as the death of one whooping crane will have on the whooping crane population. Species that have suffered but survived a near extinction - cheetahs are a well known example - lose considerable genetic diversity, and the population as a whole may lose genes that could prove critically important to survival in the face of, for example, extreme weather events. At one point, again, there were only 21 whooping cranes, and environmentalists of another time - a time when environmentalism involved more than platitudes about electric cars, solar cells, and the industrial wind parks that people imagine, albeit with considerable delusion, will make their consumer lifestyles “sustainable” - cared more about whooping cranes than wind turbines. Those historic environmentalists worked hard to get the population closer to 1,000 than to 10.

(There is a nice open access overview of the genetic diversity of endangered species for interested "old school" environmentalists, discussing the utilization of modern "-omics" technologies such as gene sequencing: Conservation of adaptive potential and functional diversity: integrating old and new approaches (Mable, B.K., Conserv Genet (2019) 20: 89. https://doi.org/10.1007/s10592-018-1129-9))

The reason for this long riff about whooping cranes is that while some LCA analyses include the economic costs of human health and human loss of life, for example in DALYs (Disability Adjusted Life Years), as if we knew what a human being is “worth,” I don’t believe there are any which consider anything like the value of a species, whether the species in question is photogenic, like polar bears, whooping cranes, and butterflies, or an ugly insect-eating bat like the endangered Ozark big-eared bat (Corynorhinus townsendii ingens).

While LCAs are valuable tools, one must be aware that the relative importance of any factor depends very much on one’s values.

By the way, in 2017, Oklahoma consumed, according to the full data tables that can be accessed here, 73,731,764 MWh of electricity. (I have alluded to one of these tables above and will take much of the data below from one of these spreadsheet tables.) This represents an average continuous power of about 8,400 MW.

In order to provide this much electricity (assuming 33% thermodynamic efficiency, although higher efficiencies are accessible), one would need to consume about 0.332 grams of plutonium per second, this while providing no threat to whooping cranes, or for that matter, the lungs of human beings. This could be accomplished in 5 or 6 buildings or small complexes. In a year, this would involve the production of less than 11 tons of potentially very valuable fission products.
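
The plutonium figure above can be checked from first principles. A sketch, assuming roughly 200 MeV released per fission and 33% thermal efficiency as stated:

```python
# Sketch: how much plutonium per second supplies Oklahoma's average
# electric demand? Fission of Pu-239 releases ~200 MeV per atom.
MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23
energy_per_gram = 200 * MEV_TO_J * AVOGADRO / 239  # J per gram, ~8.1e10

avg_power_e = 73_731_764e6 / 8766   # MWh/yr -> W average (8766 h/yr)
thermal_power = avg_power_e / 0.33  # W of fission heat at 33% efficiency

grams_per_second = thermal_power / energy_per_gram
print(round(avg_power_e / 1e6))    # → 8411 (the ~8,400 MW in the text)
print(round(grams_per_second, 2))  # → 0.32 (g Pu/s)
```

About a third of a gram per second, consistent with the 0.332 g/s quoted above; the small difference comes from rounding the energy per fission.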

Nevertheless, despite the sense of the previous paragraph, ignorant people who know absolutely zero about plutonium but fear and hate it all the same would rather fill tens of thousands of acres with bird and bat grinders - and maintenance and construction access roads - involving literally tens of thousands of tons of steel, aluminum, and huge quantities of exotic elements, all of which will require redundant systems, because the wind does not always blow in Oklahoma or anywhere else.

Right now, and for the immediate future, on a planet where the atmosphere's carbon dioxide concentration has risen to roughly 413 ppm, as compared to 371 ppm in April of 2000, the redundant system backing up the wind crap is dangerous natural gas and coal plants.

A capacity factor is the ratio of the energy actually produced to the energy that would be produced if the nameplate capacity operated 100% of the time. One of the big lies told by people making excuses for the so called "renewable energy" industry's failure to address climate change is to report peak power, in units of watts, for so called "renewable energy" installations that might operate at 20% of capacity, as if it were the same as for nuclear plants, most of which operate at close to 100% of capacity.
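
The definition just given reduces to one line of arithmetic. A minimal sketch with a hypothetical wind farm (the 100 MW rating and 29,000 MWh figure are invented for illustration):

```python
# Sketch of the capacity-factor definition: energy actually generated
# divided by what the nameplate rating would yield running 100% of the
# time over the same period.
def capacity_factor(energy_mwh, nameplate_mw, hours):
    return energy_mwh / (nameplate_mw * hours)

# Hypothetical 100 MW wind farm producing 29,000 MWh in a 720-hour month:
cf = capacity_factor(29_000, 100, 720)
print(round(100 * cf, 1))  # → 40.3 (%)
```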

The 2017 capacity factors for wind turbines in Oklahoma ranged from 19.4% (August 2017) to 49.1% (March 2017). Overall, the capacity utilization of wind power in Oklahoma was 40.3%, which, by the way, is very high compared to most wind systems on this planet. Oklahoma must be a windy place; I can't remember, as it's been a long time since I was in that state. Nevertheless, the available electricity from these turbines had nothing whatsoever to do with demand. The average high temperature in August in Oklahoma City is 93°F; presumably people turn on their air conditioners. In August of 2017, the capacity utilization of combined cycle dangerous natural gas plants was 63.1%; in July, 71.2%. The waste from these plants was dumped directly into the planetary atmosphere.

The average monthly capacity utilization of dangerous fossil fueled power plants in Oklahoma was below 50%: 45.3% for coal plants, which is extraordinarily low for coal plants in general, and 46.0% for combined cycle gas plants. This means that these plants sat idle much of the time as stranded assets, thus degrading their economic performance. I personally couldn't care less, since I oppose all coal plants, gas plants, and diesel plants.

The same people who want to cover thousands upon thousands upon thousands of acres of the whooping crane migratory pathway with wind turbines will engage in ever more cockamamie schemes to make this useless crap work, and they claim, without consideration of the possibility of long wind droughts lasting weeks, or maybe months, that huge batteries will save the day.

This, finally, brings me to the subject of the paper referenced at the beginning of this post: the carbon cost (and other external costs) of batteries. I really enjoyed this paper because it looks at batteries at a deeper level than one usually sees, accounting for the types of batteries as represented by their chemistry, their geography, and - this is very important, I think - the purposes for which they are used. The fact that the first two sentences of this very fine paper are nonsensical statements of the popular imagination, urban myths rather than representations of reality, has no bearing in my mind on the paper's quality.

These two sentences are:

Renewable energy (RE) technologies, particularly wind turbines and solar photovoltaics (PV), play a key role in decarbonizing the electricity sector.1 They can substantially reduce the (lifecycle) carbon footprint of electricity generation, 2 while often providing electricity at low cost...3−5

Renewable energy technologies are not playing a "key role" in decarbonizing the electricity sector, which, by the way, is not being decarbonized. After half a century of highly inflated rhetoric about how wonderful so called "renewable energy" is, or more regularly could be, the solar, wind, geothermal, and wave industries combined produced, in 2017, 10.83 exajoules of energy, as compared to the 584.98 exajoules of energy generated and consumed by humanity. Between 2016 and 2017, the output of all the world's wind, solar, geothermal, and wave powered plants combined grew by 1.21 exajoules, compared to the growth in energy consumption (from all sources) of 8.88 exajoules. (Ref: IEA 2017 World Energy Outlook, Table 2.2, page 79, and the 2018 edition of the World Energy Outlook, Table 1.1, page 38. I have converted the MTOE of the original tables to the SI unit exajoules in this text.)
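
The exajoule figures just quoted can be turned into the percentages that matter:

```python
# Sketch using the IEA figures as quoted in the text: wind, solar,
# geothermal and wave output versus total world energy consumption,
# and each one's growth between 2016 and 2017.
renewables_2017 = 10.83   # EJ
world_2017 = 584.98       # EJ
renewables_growth = 1.21  # EJ, 2016 -> 2017
total_growth = 8.88       # EJ, 2016 -> 2017

print(round(100 * renewables_2017 / world_2017, 2))   # → 1.85 (% share)
print(round(100 * renewables_growth / total_growth))  # → 14 (% of new demand)
```

In other words, these sources supplied under 2% of world energy in 2017, and their growth covered only about a seventh of that single year's growth in demand.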

"Could" is a word used quite often, with obvious contempt for all future generations, by the advocates of and apologists for the grotesquely failed so called "renewable energy" industry. The use of the word "can" in the second sentence of this otherwise fine paper is dubious, since it has never been demonstrated that so called "renewable energy" can do anything to reduce the carbon footprint of energy. In fact, on inspection, the opposite is true: until the dawn of the 19th century, almost all of humanity survived exclusively on so called "renewable energy," but abandoned it, because most human beings, even more so than today, lived short, miserable lives of dire poverty.

The remainder of the first paragraph of the paper gets a little more realistic:

...However, wind turbines and PV are intermittent (i.e., they fluctuate with the energy source) and cannot easily provide some of the grid services that conventional technologies can provide. Hence, with growing RE shares, measures to counterbalance intermittency and provide grid stabilization are needed.6 In addition to other solutions, such as demandside management, energy storage technologies are an important technological measure.7,8 Electrochemical storage in the form of batteries is particularly interesting, as batteries can be used in several different applications across the electricity supply chain (from generation, via the electricity grid, to “behind the meter”).9−11...

The next paragraph includes reference to some, but not all, of the criticisms that I make of people who want to bet the planetary atmosphere on this cockamamie scheme to convert all of our ecosystems into industrial parks producing so called "renewable energy," as well as a statement of the raison d’être of the paper:

However, it has been argued that the additional economic cost of employing batteries can make electricity-sector decarbonization based on high RE shares economically less attractive.12 Similarly, emissions occurring during the manufacturing phase of batteries, as well as emissions during the use phase stemming from losses due to inefficiencies in charging and discharging and from potentially displacing cleaner generation, can deteriorate the life cycle emissions balance of RE-intensive electricity systems.13−16 The situation is complicated further by the wide range of available battery technologies, each of which possesses different advantages and disadvantages across the various stationary applications.17−20 In order to determine whether using batteries adds substantial emissions and costs to electricity systems and whether there are trade-offs between these two dimensions, two important variables need to be analyzed: the contribution of using batteries to lifecycle greenhouse gas emissions (LCE) and batteries’ lifecycle cost (LCC). Importantly, to enable a fair comparison of the contributions of different battery technologies to lifecycle cost and emissions, both indicators should be calculated for all relevant applications and based on consistent inputs of technical parameters. Furthermore, by translating the two dimensions into the same meaningful dimension (e.g., cost) potential trade-offs and their size can be identified using a single value.21 Trade-offs might occur if technologies exhibit good performance on one dimension but relatively weak performance on the other.

With reference to the phrase "potentially displacing cleaner generation": many advocates of and apologists for the so called "renewable energy" industry attack nuclear energy, as opposed to dangerous fossil fuels, even though dangerous fossil fuel waste kills millions upon millions of people continuously and is the major cause of climate change, and nuclear energy, um, does neither. The so called "renewable energy" industry will never be as clean, as sustainable, or as safe as the nuclear industry. This is not to say that nuclear energy is risk free; clearly it isn't. As I often say, however, the nuclear industry does not need to be risk free to be vastly superior to all other alternatives. It only has to be vastly superior to all other alternatives, which, by the way, it is. In this country, because of these appeals to selective attention, wishful thinking, and pure ignorance, a process of replacing nuclear plants with dangerous natural gas plants is well underway, with this selfish generation dumping all the external costs on all future generations.

More raison d’être for the paper:

Most existing studies have looked at either LCC22−25 or LCE26−29 (with the majority of analyses focusing on electric vehicles30−36), using very different technical and economic assumptions concerning the analyzed technologies and applications. In addition, the system boundaries are not consistent (compare Figure 1): Some studies have analyzed the total LCC or LCE of stored electricity (cf. the red dotted line in Figure 1), that is, the cost or emissions stemming from the charged electricity plus the emissions or cost embedded in the battery’s material and manufacturing per unit of energy delivered by the battery.37 Other studies have analyzed the additional LCC or LCE stemming from storing electricity22 (c.f., the blue dashed line in Figure 1), that is, the emissions or cost from the embodied material plus the emissions from the charged electricity that is lost due to round-trip inefficiencies per unit of electricity delivered by the battery, but excluding the cost or emissions stemming from the charged electricity that is simply cycled through the battery.

Figure 1, on system boundaries:

The caption:

Figure 1. Schematic of alternative system boundaries for analyzing electricity storage systems’ LCE and LCC. Arrows refer to both cost and emissions. ESS stands for electricity storage system.

This fine paper contains lots of abbreviations defined internally. Here is a description giving the abbreviations for types of battery chemistry explored in the paper:

In this study, we address this gap by performing a consistent analysis of the additional LCE and LCC that stem from storing electricity in batteries (illustrated by the blue boundary in Figure 1) in five different applications. We include three major types of battery systems, namely vanadium redox flow (VRF), valve-regulated lead-acid (RLA), and lithium-ion batteries. Among lithium-ion batteries, we differentiate four chemistries: lithium iron phosphate (LFP), lithium nickel manganese cobalt oxide (NMC), lithium nickel cobalt aluminum oxide (NCA) as cathode material and graphite as anode material, and lithium titanium oxide (LTO) as anode material with NCA as cathode material. We compute their performance in three exemplary European countries representing different electricity prices and GHG emission intensities: Switzerland (high wholesale electricity prices and low carbon intensity), Germany (medium wholesale prices and carbon intensity), and Poland (low wholesale electricity prices and high carbon intensity). In total, we calculate 90 different combinations of technologies, applications, and geographies.

The "gap" to which this text refers is that most papers focus exclusively on carbon dioxide external costs while not considering economics and the effects of geography. As should be obvious from the fact stated above about the higher than usual capacity utilization of wind turbines in Oklahoma, more than 10% higher than that in Denmark, the wind does not blow with the same intensity or regularity everywhere; nor, although solar is even more trivial than the trivial wind industry, is the insolation the same everywhere: in some places it rains or snows often; in others this is less of a factor.

Other abbreviations in the paper refer to how the batteries would be, or are, used, which is also a very different factor. An interested reader, if there is an interested reader, can access the open access supporting information of the paper to get a detailed description of the types of use abbreviated. That supplementary material is here: Supporting Information

The abbreviations for the types of use are these:

WA: wholesale arbitrage
AF: area and frequency regulation
TD: transmission and distribution (T&D) upgrade deferral
PS: demand peak shaving (commercial and industrial)
SC: increase of self-consumption (residential end consumer)

Again, the details of these usage types are in the supporting information.

Science is a mathematical enterprise, and here are some types of simple calculations that the paper utilizes:

Battery System Sizing. In order to ensure the abovementioned application specifications throughout the entire lifetime, the battery size should account for the depth of discharge, discharge efficiency, and reduction in useable capacity (i.e., the capacity that can actually be delivered) until the end-of-life criterion of the battery is reached. In order to ensure comparability, we account for these factors based on the necessary energy storage capacity for each application:


As shown in the formula used to compute LCE for storing 1 kWh of electricity in battery systems below, the GHG emissions can be split into two parts: emissions from manufacturing of battery systems and emissions from electricity loss during charging and discharging. Emissions for battery systems are allocated to each kWh of electricity delivered from the battery systems during their lifetimes.

where LCEbat is the manufacturing phase emissions for battery production, including all necessary replacements during a 20-year lifetime (CO2e/battery system), LCEel,charged is the lifecycle emissions of charged electricity (CO2e/kWh), kWhapp is the annual electricity delivered from the battery (kWh/year), lifetime denotes the lifetime of the battery system (years), held constant at 20 years, and η is the round-trip efficiency of the battery system (%). Please note that emissions related to the end-of-life phase are not considered in eq 2 because our analysis does not cover them, given the lack of available data.
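
The formula itself appeared as an image and did not survive the copy. From the definitions in the quoted text, the additional lifecycle emissions of storing 1 kWh can be reconstructed (my reconstruction, not the paper's own typesetting) as:

```latex
\mathrm{LCE} \;=\;
\underbrace{\frac{\mathrm{LCE}_{\mathrm{bat}}}
                 {\mathrm{kWh}_{\mathrm{app}} \cdot \mathrm{lifetime}}}_{\text{manufacturing, per kWh delivered}}
\;+\;
\underbrace{\mathrm{LCE}_{\mathrm{el,charged}} \cdot \left(\frac{1}{\eta} - 1\right)}_{\text{round-trip losses}}
```

The first term spreads the embodied manufacturing emissions over all electricity delivered during the 20-year lifetime; the second charges the battery only for the electricity lost to round-trip inefficiency, consistent with the "blue dashed" system boundary of Figure 1.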

The sections that I have put in bold - bolding mine, not in the original paper - refer to two points that I often make:

The first point is that the second law of thermodynamics - a law not subject to repeal by Edward Markey's so called "Green New Deal," which is not even close to being "green," since so called "renewable energy" is neither sustainable nor capable of addressing climate change (I say this as a lifelong Democrat) - dictates that storing energy wastes energy, irrespective of the fact that advocates and apologists for so called "renewable energy" also prattle on about energy efficiency with absolutely no appreciation of Jevons' paradox. Jevons described his paradox, by the way, at the time that the world was abandoning "renewable energy" to displace it with the "new" wonder fuel he was contemplating: coal.

The second point is that the vast amounts of materials that support the so called "renewable energy" fantasy will become waste when the renewable energy facilities are no longer workable. Their observed and calculated lifetimes are incredibly short compared to those of nuclear plants, even those built in the 1950s, like Britain's Calder Hall reactor, the first commercial nuclear reactor in the Western world, a reactor built to put the "fear of God" into then regularly striking British coal miners. Appeal to the comprehensive Danish database of existing and decommissioned wind turbines gives a mean lifetime for this metal and concrete intensive technology of less than 18 years. Calder Hall ran for close to 50 years. Advocates of and apologists for the so called "renewable energy" industry engage in a lot of hand waving about recycling these vast quantities of matter, without even a remote understanding of the chemistry and environmental impact of recycling complex materials, of which batteries are merely a subset.

The current issue of Environmental Science and Technology from which this paper comes has two related papers, one on recycling batteries, and the other on the general limitations of LCA calculations, which do not include intangibles, like the existence of Whooping Cranes.

These are the two papers in question:

Upcycling of Spent Lithium Cobalt Oxide Cathodes from Discarded Lithium-Ion Batteries as Solid Lubricant Additive (Pol et al, Environ. Sci. Technol., 2019, 53 (7), pp 3757–3763)


Uncertainty Implications of Hybrid Approach in LCA: Precision versus Accuracy (Jessica Perkins and Sangwon Suh, Environ. Sci. Technol., 2019, 53 (7), pp 3681–3688)

From the first of these two papers, some text:

As the demand for LIBs is rapidly increasing due to various electronic applications and the emerging trend of electric cars, the waste generated after its useful life is also increasing. As of today, only 5% of LIBs are recycled in the U.S.A. and the rest becomes landfill waste...4

... Zhang5,6 X. et al. showed that trichloroacetic acid (TCA) and trifluoroacetic acid (TFA) along with hydrogen peroxide reductant can leach lithium and cobalt up to 90%. Yao L. et al. used D,L malic acid7 as a leaching agent as well as a chelating agent to recycle LiNi1/3Co1/3Mn1/3O2 (LNCMO, different type of cathode material). Santana I. L. et al. used citric acid8 as a leaching agent with enhanced recovery. To achieve higher metal recovery, Jie G. et al. used iron powder9 to reduce LCO first followed by acid leaching, which eliminated the use of peroxide. Furthermore, they enhanced the yield of valuable metals by optimizing the ball milling parameters...10

...Though these methods show fascinating results, it is very difficult to implement such ideas commercially, especially due to the high cost of leaching agents, waste generated, and purification requirements after use. Additionally, the secondary product, i.e., recovered LCO, requires enrichment of lithium to achieve a comparable battery performance as that of pristine LCO...

By the way, the moral cost of cobalt is not included in LCA calculations. My son bought me an interesting but painful to read book on the subject of human slavery in the United States, this one: The Half Has Never Been Told, which argues that modern American wealth, as well as historical American wealth, derives from the country's unbelievably sordid history of human slavery. It's a compelling thesis. Modern lithium batteries also depend on slavery, modern African slavery in the Congo, because the largest fraction of the world's cobalt is obtained by using enslaved children as miners, an issue akin to the Congo's better known coltan crisis.

And no, recycling will not make that moral cost go away. Cobalt is a monoisotopic element. There is no way to trace its source, and in any case, for things like Elon Musk's magical car for billionaires and millionaires, we require more cobalt than that which has already been mined.

Some more math, touching on the non-inclusion of magical recycling techniques:

2.2.2. LCC Model. To calculate the LCC of each battery system, we include capital expenditures (CAPEX) and operation expenditures (OPEX). CAPEX refer to the investment in and replacement of equipment. OPEX are necessary to keep the system up and running for the expected system lifetime. In addition, we incorporate the energy delivered by the system and the financing cost. We apply the concept of levelised cost of electricity for storage applications:22,54
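The equation itself (an image in the original paper) did not survive the copy here; my reconstruction of the standard levelised-cost-of-storage expression, matching the variable definitions the authors give immediately below, is:

```latex
\mathrm{LCC} \;=\;
\frac{\mathrm{CAPEX} + \displaystyle\sum_{t=1}^{T} \frac{\mathrm{OPEX}_t}{(1+r)^t}}
     {\displaystyle\sum_{t=1}^{T} \frac{\mathrm{kWh}_{\mathrm{app},t}}{(1+r)^t}}
```

That is, all discounted expenditures over the system lifetime divided by the discounted electricity delivered, giving a cost per kWh stored and delivered.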

where CAPEX is the capital expenditures (EUR), OPEX is the operation and maintenance expenditures (EUR), kWhapp is the electricity delivered as defined by application (kWh), r is the financing cost (%), and T is the battery system lifetime (years). Here, we neither consider end-of-life cost (e.g., for recycling) nor any scrap value.

It is important to note that simply comparing LCC does not take into account that the value of storage differs among applications. Therefore, LCC should not serve as the only metric for making investment decisions...

The all-important thermodynamics of energy storage, which would make so called "renewable energy" even less efficient than its already pathetic efficiency and production:

Another important factor is the cost incurred from efficiency losses of the battery system, which is part of the OPEX and is calculated as follows:
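Again the equation image is missing; from the definitions that follow, it is presumably the cost of the extra electricity that must be purchased to cover roundtrip losses - to deliver kWh_app, one must charge kWh_app/η:

```latex
c_{\mathrm{eff}} \;=\; \mathrm{kWh}_{\mathrm{app}} \cdot \left(\frac{1}{\eta} - 1\right) \cdot p_{\mathrm{el}}
```

A battery with 85% roundtrip efficiency delivering 1 kWh thus wastes about 0.18 kWh, purchased at the charging price.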

where ceff is costs incurred from efficiency losses, kWhapp is electricity delivered as defined by application (kWh), η is roundtrip efficiency (%), and pel is price of electricity for charging the battery system (EUR/kWh). Total OPEX are derived by summing up fixed annual OPEX (see SI Table S4) and ceff.

At this point, it's probably worth it to look at the pictures. Before doing so, for reference, it is useful to keep figures in one's head for the carbon cost of the major forms of energy on this planet used to generate electricity: the three dangerous fossil fuels (coal, gas, and petroleum), nuclear, and hydroelectricity. All of these generate more than 20 exajoules of energy per year. In this context, it is also worthwhile to consider the working figures for trivial forms of electrical energy generation, solar and wind. The figures vary widely, depending on where you read, but I'm going to use the figures from a paper written some time ago, in 2005, by Paul Denholm, now at the National Renewable Energy Laboratory, and coauthors - I don't expect the figures will have changed much in the intervening years with respect to carbon dioxide. For coal he claims 900-1100 grams CO2 per kWh; for combined cycle gas, 400-500 grams CO2 per kWh - he doesn't give figures for pure thermal gas, which are undoubtedly higher; for nuclear, 10-25 g CO2 per kWh; and for wind (no storage), 5-25 grams/kWh.
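These ranges are easy to play with; a minimal Python sketch (my own, using only the midpoints of the ranges quoted above) shows the scale of the gap between coal and nuclear:

```python
# Illustrative only: midpoints of the lifecycle-emission ranges quoted
# above (Denholm et al., 2005), in grams CO2 per kWh delivered.
intensities_g_per_kwh = {
    "coal": (900, 1100),
    "combined_cycle_gas": (400, 500),
    "nuclear": (10, 25),
    "wind_no_storage": (5, 25),
}

def midpoint(lo, hi):
    """Midpoint of a (low, high) range."""
    return (lo + hi) / 2

# Ratio of coal to nuclear at the midpoints of each range.
coal_mid = midpoint(*intensities_g_per_kwh["coal"])        # 1000.0
nuclear_mid = midpoint(*intensities_g_per_kwh["nuclear"])  # 17.5
print(round(coal_mid / nuclear_mid, 1))  # roughly a factor of 57
```

Using the range endpoints instead of midpoints, the coal/nuclear ratio runs anywhere from 36 (900/25) to 110 (1100/10), so "well over an order of magnitude" is robust to the uncertainty.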

Denholm's paper is here: Emissions and Energy Efficiency Assessment of Baseload Wind Energy Systems (Denholm et al., Environ. Sci. Technol., 2005, 39 (6), pp 1903–1911).

With the exception of wind energy, which obviously depends on where the wind facility is located - a facility destroying thousands of acres in windy Oklahoma will be very different from an industrial wind park trashing the continental shelf offshore in New Jersey - these figures seem consistent with the many, many, many LCA papers I've come across. Denholm was writing about the environmental cost of storing wind energy using compressed air, assisted by dangerous natural gas to recover thermal losses from the cooling from gas expansion; he considered that it would raise the carbon dioxide cost of wind energy to somewhere between 69-102 g CO2/kWh for the 5 types of turbines he and his coauthors considered.

In Denholm's paper, he was assuming that the wind turbines were in the American Midwest, and Oklahoma, with its unusually high capacity utilization, probably qualifies for what he may have been thinking. By the way, in the 15 years since his paper was written, the number of huge scale air compression energy storage facilities powered by the wind is essentially zero.

The carbon cost of solar facilities is also highly dependent on location, for example, if the solar cells are located on a roof in Phoenix, or if they are covered for weeks or months at a time by snow in Vermont. I don't think that people really appreciate the metal intensity of so called "renewable energy," even solar energy.

An overview of metals used in the solar industry - the other form of energy that is supposed to charge millions and millions of large scale batteries - can be found in a paper by Vasilis Fthenakis, a scientist at the PV Environmental Research Center at Brookhaven National Laboratory who writes quite a bit on the external costs of solar energy.

It is here: Life cycle inventory analysis of the production of metals used in photovoltaics (Fthenakis, Wang and Kim, Renewable and Sustainable Energy Reviews 13 (2009) 493–517)

For our 100% renewable "by 2050" or "by 2035" or "by 2100" and "by [insert year after you'll be dead]" I offer the following text excerpts from that paper:

...The feed material for producing cadmium consists mainly of residues from the electrolytic production of zinc, and of fume and dust collected in baghouses from emissions during pyrometallurgical processing of zinc and lead smelting. The cadmium sponge, a purification product from precipitating zinc sulfate solution with zinc dust at the zinc smelter, is 99.5% pure cadmium. This sponge is transferred to a cadmium recovery facility and is oxidized in steam for 2 days or so. The product, cadmium oxide, along with particulates collected in baghouses, is leached with spent cadmium electrolyte and sulfuric acid to produce a new recharged electrolyte. Impurities are precipitated with a strong oxidizing agent. The wastes are refined for other uses or stockpiled, until a use can be found for them. Non-corrosive anodes are used during electrowinning. Additives (often animal glue) are used to enhance the smoothness of the resulting cadmium cathode. The cathodes are removed about every 24 h and are rinsed and stripped. The stripped cadmium is melted under flux or resin and cast into shapes...

...Teck Cominco Ltd. is one of the world’s largest indium producers, generating approximately 36 tonnes of high-purity (99.998% and 99.9999%) indium per year. It recovers indium from gaseous streams at its integrated zinc and lead smelting operations in Trail, British Columbia, Canada. Fumes and other particulates from the lead smelter are transferred to the zinc facilities for hydrometallurgical separation and from there to the high-purity indium plant. Fumes generally contain only 0.05–0.2% In or Ge. The plants then leach the fumes to extract In and Ge into solution (along with Zn and Cd) to separate them from the lead sulfate residue. After a first leaching, slurry is settled to remove a lead oxide residue which is pumped back into the lead smelter, and the clear solution is passed on to a second leach. There, the slurry is partially neutralized with direct fume addition and ferric iron to precipitate germanium, indium, arsenic and antimony. This precipitate is the feed for the indium/germanium recovery plant. The residues from the oxide leaching plant second leach are re-leached with sulfuric acid to dissolve the contained germanium and indium. After filtration, the clear solution is processed in a solvent extraction (SX) unit where both metals are recovered and subsequently reprecipitated to a product for further purification...

Two graphics from Renewable and Sustainable Energy Reviews 13 (2009) 493–517:

The caption:

Fig. 5. The recovery of indium and cadmium from zinc processing at Kidd Creek, Canada [21].

The caption:

Fig. 6. Cd Flows from Cd concentrates to CdTe

I'm sure all of these pyrometallurgical retorts in Canada will do just fine, powered by the wind, perhaps with compressed air storage, or maybe not.

Nevertheless, even with some knowledge of the steel and aluminum requirements for wind turbines, I will buy the 10-25 g CO2/kWh figure for wind energy, and include solar energy along with it, at least when they operate. The elephant on the table is that these forms of energy require back up, and an honest appraisal would assess the economic and environmental costs of back up and include them in their real costs.

My opinion is that there is very little honesty among those who have bet the atmosphere on this cockamamie scheme that was abandoned close to 200 years ago, making energy dependent on the weather, but again, that's my opinion, and one is free to question whether I know as much as someone who, for example, cruises university press releases to announce "breakthroughs" that assure us that it was the right thing to bet the planet, and much of the life on it, on the success of so called "renewable energy."

Now let's return to the original paper cited at the outset here which discusses the added costs in CO2 and money for storage of energy using batteries.

In the following graphics showing the results of the battery analysis, note that the units on the ordinate for carbon dioxide are in kg/kWh and need to be multiplied by 1000 to convert to grams/kWh as cited above and in many other LCA papers.

The caption:

Figure 2. Lifecycle emissions (LCE) and cost (LCC) by battery technology and application.

In all usage cases, the lithium based batteries perform better than the lead based batteries and the vanadium based batteries; in Poland, these two latter types of batteries in some use cases make the carbon dioxide cost actually greater than burning dangerous natural gas, even approaching the case of coal. On the other hand, the arbitrage case in Switzerland - the "buy low, sell high" strategy of storing energy purchased at low busbar prices to sell it when busbar prices are high - shows the best performance overall for carbon dioxide, nearly approaching the low carbon cost of nuclear plants, especially if one is willing to ignore the subsidy paid by slave laborers digging cobalt in the "Democratic" "Republic" of the Congo for three of the four lithium battery types.

I shouldn't, I know, keep dragging this slave labor thing into it all - Elon Musk is a hero - but well, I can't help it you see, because it bothers me just a little. Slave laborers in the "Democratic" "Republic" of the Congo aren't "green" like us; they're black. The Half Has Never Been Told.

All of the slave subsidized batteries perform pretty well in all scenarios, ranging (in exactly one case, in Switzerland for arbitrage, and in nearly all cases for the SC, "off grid" cases) from nearly nuclear equivalent to "only" 15X (1500% in "percent talk") higher than nuclear for carbon emissions, at least if one only looks at the batteries and not the intrinsic costs of the wind turbines and solar cells themselves. In the latter case, if one does include these costs, then they're only twice as high as nuclear.

To evaluate the consumer cost, whether the consumer is a large scale utility or some "off grid" homeowner, one should note that as of this writing, the Euro is worth about $1.12, and that US Electricity Prices, as of March 29, 2019, range from 8.8 cents per kWh in Oklahoma "where the wind comes sweeping down the plains" to 32.09 cents/kWh in Hawaii, with the US average being 12.47 cents/kWh, or about 11.13 euro cents/kWh.
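For readers who want to run the conversion themselves, a one-line Python sketch (assuming, as above, that one euro is worth $1.12; the function name is mine):

```python
# Convert a US electricity price in cents/kWh to euro cents/kWh,
# assuming 1 EUR = 1.12 USD as quoted above.
EUR_PER_USD = 1 / 1.12

def usd_cents_to_eur_cents(usd_cents: float) -> float:
    """Dollars buy fewer euros, so divide by the USD price of a euro."""
    return usd_cents * EUR_PER_USD

print(round(usd_cents_to_eur_cents(12.47), 2))  # about 11.13 euro cents/kWh
```

The same function applied to the Hawaiian figure of 32.09 cents/kWh gives roughly 28.7 euro cents/kWh, a useful scale when reading the LCC charts denominated in EUR cents.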

For the arbitrage people this doesn't matter, since they're buying electricity when it's cheaper than average, and selling it when it's higher than average. The "off grid" people are paying more than 300% more than US consumers, but they're very noble, at least if you don't count the slave labor subsidy, which I should stop bringing up, except "The Half Has Never Been Told."

This is probably the graphic that needed the most attention, I think, since it says the most. This is into what we are sinking the future of the world, and even if it's not working and won't work, well either read it and weep, or cheer, it matters not.

The breakdown of cost of manufacturing batteries in emissions only, slaves and resource depletion notwithstanding, and the losses of energy to the second law of thermodynamics:

The caption:

Figure 3. Contributions of the manufacturing and use phases to batteries’ lifecycle emissions (LCE) and cost (LCC). Absolute values in kgCO2e/kWhdelivered (LCE) and EURcents/kWhdelivered (LCC) are provided next to each bar.

Component based and process based carbon dioxide costs:

The caption:

Figure 4. Contribution of manufacturing-related emissions to LCE of storing 1 kWh of electricity.

A graphic designed as a summary.

The caption:

Figure 5. Comparison of GHG emissions cost and lifecycle costs of storing one kWh of electricity in battery systems under different social cost of carbon assumptions. Each dot represents one country-technology-application combination. The sloped gray lines represent different ratios of GHG emission cost over LCC. The main graph represents the medium social cost of carbon (SCC) assumption of 70 EUR/tonCO2e, the top right graph the lower (35 EUR/tonCO2e), and the bottom right graph the high (180 EUR/tonCO2e) assumptions. Note that the scale of the y-axis changes between the charts.

Don't worry, be happy. Even though batteries suck, we can always read press releases about how batteries can someday be better and put our hope in hope.

The caption:

Figure 6. Conceptual framework for potential LCE and LCC improvements.

Where the authors think we should focus our efforts in the "don't worry, be happy" case to realize these improvements, even though climate change is here and now:

The caption:

Figure 7. Impact of technical improvements on reduction of LCE and LCC.

From the author's conclusions:

To conclude, this paper is a first study to compare the additional social cost stemming from GHG gas emissions with the lifecycle cost of storing electricity in battery systems. As such, it needs to be complemented by further research. We propose six potential avenues for future research. First, while here we perform all of our calculations on a per-kWh basis, some applications (such as AF) are typically sized on a per-kW basis. The effect of these sizing differences on cost and life-cycle emissions should be analyzed in future research. Second, storing electricity in batteries (and other storage technologies) incurs environmental costs beyond GHG emissions (e.g., through release of pollutants throughout manufacturing chains). These further costs should be analyzed in future assessments, considering variations or sensitivities in the grid mix, for instance by varying the geographical context. Third, recent analyses(41,71,72) showed that combining applications in one battery can increase the economic attractiveness of batteries. In principle, combining applications should also reduce the additional LCE; however, to date the extent of the effect remains an open question. Fourth, the impact of supplying battery manufacturing plants with different grid mixes or self-generated solar power (as partly done in Tesla’s Gigafactory) should be analyzed. Fifth, new data on recycling of batteries ought to be collected in order to estimate the impact of recycling on both additional LCC and LCE. Sixth, an analysis taking into account expected technology developments seems worthwhile, especially because large-scale stationary battery deployment will only happen within a decade.

I have bolded what the authors concede they have not discussed about our dreams of that grand renewable nirvana we burn gas and coal to praise so loudly and so often. They also didn't mention moral costs, but that's OK, I did, I couldn't help myself. The Half Has Never Been Told.

Above I remarked that it's quite possible that no one will read what I say, and fewer will care. The value of writing this long post was, for me, to clarify my thinking. Clear thinking can make one weep.

I hope you've had a pleasant weekend thus far, and will have a pleasant Sunday evening.

Quantitative Study of Straw Bio-oil Hydrodeoxygenation over a Sulfided NiMo Catalyst

The paper I'll discuss in this post is this one: Quantitative Study of Straw Bio-oil Hydrodeoxygenation over a Sulfided NiMo Catalyst (Miloš Auersvald, Bogdan Shumeiko, Martin Staš, David Kubička, Josef Chudoba, and Pavel Šimáček, ACS Sustainable Chem. Eng., 2019, 7 (7), pp 7080–7093)

Recently in this space I noted the preparation of a carbon dioxide capture agent, a porous form of magnesium carbonate impregnated with PEI (polyethylenimine), that involved the use (in its preparation) of the dangerous fossil fuel derived solvent toluene.

I often argue that petroleum mining - irrespective of how the "peak oil" hullabaloo turned out in the short run - is unacceptable. As far as motor fuels - to the extent we really want them - are concerned, it is (to my mind) a no-brainer. Gasoline is neither a safe nor a clean fuel, but clean fuels are available, notably the wonder fuel dimethyl ether, which can effectively displace all of the world's diesel, gasoline, LPG, and dangerous natural gas wherever they are subject to combustion in heat engines of various types.

The question of replacement of petroleum for chemical feed stocks is a little more problematic, although syn gas can easily displace most aliphatic molecules. The pathway for most of these seems pretty clear to me from memory.

Aromatic compounds - benzene and its related compounds - at least those lacking oxygen substituents, are a little bit more problematic for me. I know routes exist to make them from biomass, but for some reason they don't stick in my mind. That's why this paper caught my eye in my general reading.

From the introductory text:

Fast pyrolysis is one of the simplest and most cost-effective options for the conversion of a lignocellulosic biomass into a bio-oil, achieving yields of up to 75 wt %.(1) Despite its undesirable properties (thermal lability, high acidity, high water/oxygen content),(2) the bio-oil has the potential to be used for the production of advanced biofuels. For this purpose, it is necessary to reduce the oxygen content of the bio-oil and to improve its properties in general. Despite its high operating expenditures, hydrotreatment is one of the most promising methods for bio-oil upgrading, producing higher yields of upgraded products with an acceptable quality.(3) Thus, it is desirable to optimize the bio-oil hydrotreatment process. For this purpose, a quantitative characterization of the chemical composition of the bio-oil and its upgraded products is crucial.(4)

A bio-oil is a complex mixture of hundreds to thousands of oxygenates.(5) Together with the fact that its chemical composition is strongly dependent on the original biomass,(2) this makes a detailed quantitative characterization very difficult and time-consuming. Probably, for this reason, some papers studying bio-oil hydrodeoxygenation (HDO) only focused on the characterization of the physicochemical properties of the HDO products.(6−8) To simplify the determination of the bio-oil composition, some researchers used the percentage of the total peak area obtained from GC-MS to estimate the content of the individual chemical compounds.(4,9−11) However, such an approach can be misleading due to the different response factors of the different oxygenates.

To our best knowledge, just three papers focused on the detailed quantification of the chemical changes occurring during the bio-oil HDO.(12−14) Routary et al.(12) used GC-FID and HPLC-RI to quantify oxygenates and a special GC-MS technique (nitric oxide ionization spectroscopy evaluation) to quantify the hydrocarbons formed. Sanna et al.(14) used GC-MS for the quantification of 28 different compounds and HPLC for the quantification of saccharides. Nevertheless, the most detailed study up to now was apparently carried out by Stankovikj et al.(13) from the National Renewable Energy Laboratory (NREL)...

...In our previous paper, we tested a sulfided NiMo/Al2O3 catalyst for the HDO of a straw bio-oil (as an alternative to the wood bio-oil generally used) from the ablative fast pyrolysis and analyzed the physicochemical properties of the resulting HDO products.(25) To provide a deeper understanding of the whole straw bio-oil HDO process over the sulfided catalysts, we have built upon our previous work and present what, to our best knowledge, is the first such detailed quantitative study of this process including analysis of both the aqueous and organic phases formed. Low-molecular compounds were quantified by GC-MS, 115 of them were quantified directly and the other more than 100 indirectly. The total concentrations of the carboxylic acids, carbonyls and phenols were quantified by the carboxylic acid number (CAN), Faix, and Folin–Ciocalteu methods, respectively. Thanks to the detailed analysis of the volatile compounds, we were able to consider the reactivity of the respective groups in the nonvolatile fractions of the samples...

The authors utilize GC-MS (gas chromatography with mass spec detection) to follow the catalytic chemistry they are performing.

Bio oils (and lignins, the non-cellulose portion of wood and straw) typically contain large amounts of phenols and polyphenolic compounds, aromatic compounds having -OH groups attached to them. These are subject to oxidation and side reactions which limit the amount of time that they can be utilized as fuels (or solvents) and also result in corrosion of metal and other surfaces.

Some quick pictures from the paper:

Lignin bio-oil composition:

The caption:

Figure 1. Total amount of oxygen in oxygenates determined by GC-MS vs the total amount of oxygen determined by elemental analysis in organic phase of bio-oil and products. The number on the right side of the red column is the share of the blue/red column.

The authors then treat the bio-oil with a nickel molybdenum catalyst as follows:

Before the hydrotreatment experiments, the bio-oil was first filtered to remove residual solids and then doped with 0.5 wt % dimethyl disulfide (Sigma-Aldrich, DMDS ≥ 99.0%) to maintain the catalyst activity as suggested by Yoshimura et al.(27) A commercial sulfided NiMo/Al2O3 catalyst (5.5 wt % of NiO and 28.3 wt % of MoO3) and hydrogen (SIAD, 99.9 vol %) were used in the continuous fixed-bed hydrotreatment experiments. Two sets of experiments were carried out. In the first one (further labeled as T/4), the temperature (T) was increased from 240 to 350 °C at a constant hydrogen pressure of 4 MPa. The second experiment set was labeled as T/P; the temperature (T) and hydrogen pressure (P) varied between 300–360 °C and 2–8 MPa, respectively. Compared to our previous paper,(25) the T/4 experiment was repeated with a greater emphasis on the reaction conditions around 300 °C and 4 MPa, where the density of the organic phase became lower than that of water for the first time. Therefore, the temperature change step between 280–330 °C was only 10 °C.


The caption:

Figure 2. Cumulative changes in wt % of the compounds representing each group of oxygenates and their distributions between the aqueous and organic phase.

Chemical pathways:

The caption:

Figure 3. Reaction scheme of the nonalkyl monocyclic compounds. Compounds: (1) syringol; (2) guaiacol; (3) pyrocatechol; (4) phenol; (5) cyclohexanol; (6) cyclohexanone; (7) benzene; (8) cyclohexene; (9) cyclohexane. Reactions: HYD, hydrogenation; DeMEOX, demethoxylation; DeMET, demethylation; HDO, hydrodeoxygenation; K-E, ketone/enol isomerization.

The caption:

Figure 4. Cumulative changes of the nonalkyl monocyclic compounds in the feed and all products.

The caption:

Figure 5. Content of 2-ethylphenol and 3-/4-ethylphenol in the feed and all the organic phases of products from the T/P experiment.

The caption:

Figure 6. Cumulative changes of the compounds with one ring substituted with propyl chain (propyl monocyclic compounds); the black lines separate the compounds that were transformed to 4-propylguaiacol (gray and light blue row) and 4-propylsyringol (brown and dark green row) through hydrogenation of the double bond in the propyl substituent.

Some more reaction pathways:

The caption:

Figure 7. Reaction scheme of the propyl monocyclic compounds: (1) 4-allyl-syringol, (2) 4-(1-propenyl)syringol, (3) isoeugenol, (4) eugenol, (5) 4-propylsyringol, (6) 4-propylguaiacol, (7) 4-propylpyrocatechol, (8) 4-propylphenol, (9) propylbenzene, (10) 1-propylcyclohexene, (11) propylcyclohexane. Reactions: HYD, hydrogenation; DeMEOX, demethoxylation; DeMET, demethylation; HDO, hydrodeoxygenation; HYLY, hydrogenolysis. The compounds whose concentration is affected by the decomposition (hydrolysis) of pyrolytic lignin followed by subsequent deoxygenation of aldehydes and free hydroxyl groups are marked by red arrows.

The caption:

Figure 8. Cumulative changes in wt % by hydrocarbon groups A–D for the T/P and E–F for the T/4 experiment. i-Alkanes represent the sum of C5–C9 i-alkanes.

The next several graphs refer to chemical speciation. It is important to note that crude oil is also highly speciated before refining and processing.

The caption:

Figure 9. Amount of carboxylic acids in the organic phase (GC-MS vs CAN). The number on the right side of the blue row is the ratio of the red/blue row.

The caption:

Figure 10. Amount of carbonyls in the organic phase (GC-MS vs Faix method). The number on the right side of the blue row is the ratio of the red/blue row.

The caption:

Figure 11. Amount of phenols in the organic phase (GC-MS vs Folin–Ciocalteu method). The number on the right side of the blue row is the ratio of the red/blue row.

From the conclusion:

We presented the first detailed quantitative study mapping the fate of the individual key oxygenates during the one-stage hydrotreatment of straw bio-oil over a sulfided catalyst in a wide range of reaction conditions. Using a complex analysis of the aqueous and organic phases based on the combination of GC-MS analysis with functional-group-specific analytical methods (i.e., carboxylic acids, carbonyls and phenols determined by the carboxylic acid number, Faix, and Folin–Ciocalteu methods, respectively), we obtained a comprehensive understanding of the formation and/or consumption of oxygenates and hydrocarbons as a function of the reaction conditions used. Among the tested reaction conditions, one-stage bio-oil upgrading at 340 °C and 4 MPa is to be preferred, as there was no significant saturation of the aromatic ring while a majority of the oxygenates was removed. Thus, a sustainable product with minimum hydrogen consumption suitable for the subsequent coprocessing with petroleum fractions in a refinery was obtained. Moreover, the biobased aromatics are very desired components of the gasoline and jet-fuel blending pools.

Note that I am personally not interested in gasoline or jet-fuels, but these nasty fuels do contain valuable chemicals.

To the extent that such chemicals are utilized to make materials, and to the extent to which these chemicals are obtained from biomass, they are sequestered from the atmosphere.

We need to pay attention to such things, or at least the future generations we have screwed will need to do so.

The heat for these reactions is available from nuclear energy, which, as I state often, is the only sustainable form of primary energy available in time to save what is left to be saved.

Have a pleasant evening.

Some more "by 2080" stuff, albeit less cheery.

For much of my adult life, quite possibly all of it - and I'm not young - I've been hearing this "by 2000" or "by 2020" or "by 2030" or "by 2050" happy talk, usually with superoptimistic "100% renewable" stuff.

As of 2017, or "by 2017" the entire wind and solar portion of so called "renewable energy" amounted to less than 2% of world energy demand.

2018 Edition of the World Energy Outlook Table 1.1 Page 38

The result is that we are seeing concentrations of the dangerous fossil fuel waste CO2 in the atmosphere approaching 412 ppm, with no end in sight.

Here's a somewhat more dire prediction, a "by 2080" prediction of what the betting of the atmosphere on so called "renewable energy" will produce if it continues as it has for the last half a century of wild cheering for it, first in theory and then, regrettably, in practice:

Nearly one billion people could face “their first exposure” to a host of mosquito-borne diseases by 2080

The full original paper behind this news item from Carbon Brief - to which I subscribe (and you can too, easily and for free) - is open sourced and is here: Global expansion and redistribution of Aedes-borne virus transmission risk with climate change (Ryan et al., PLOS Neglected Tropical Diseases, 2019)

I trust you're having a pleasant evening.

Impregnating magnesium carbonate with polyethyleneimine to capture carbon dioxide.

The paper I'll discuss in this thread is this one: Impregnation of PEI in Novel Porous MgCO3 for Carbon Dioxide Capture from Flue Gas (Xiao et al, Ind. Eng. Chem. Res., 2019, 58 (12), pp 4979–4987)

Despite the title of the paper I am discussing herein, I personally believe that the concept of the "flue" should be phased out as rapidly as possible. "Flues" are waste dumping devices; in almost every case, they are the equivalent of pipes dumping raw sewage into rivers and other bodies of water. Flues dump waste into what has become humanity's favorite waste dump, its planetary atmosphere, which is rapidly being destroyed by indifference and/or the inexplicable popular enthusiasm for technologies which don't work very well; here, as usual, I'm referring to the multi-trillion dollar investment in wind and solar energy which has done nothing, absolutely nothing, to arrest the acceleration of climate change. We are now at around 412 ppm of CO2 in the atmosphere; at the end of March, 1998, we were at 369 ppm.

Elon Musk. Tesla electric car. Megawatts Solar. Megawatt wind.

We are oblivious.

As we are oblivious, it will fall to future generations, from the immediate through the end of human time, to clean up our mess, and do so after we have robbed them of important resources. The clean up of the mess we've made of the planetary atmosphere, is an unimaginable engineering challenge which will require the generation of vast amounts of energy while using zero fossil fuels, almost all of which will have been oxidized and dumped in the atmosphere as even more waste to clean up.

After much study, I consider that this task is just over the line of feasibility; it might be accomplished, but only with a massive concerted effort of all of humanity, such a concerted effort being the most improbable feature of the effort among all features, including the technical features. We are making 1930's fascism look like small change, given the consequences of the environmental results of present day fascism (albeit disguised as "democracy").

While I oppose flues, I do consider that combustion ironically represents a part of the path to removing carbon dioxide waste from the atmosphere, at least in the case where the carbon dioxide is generated in an atmosphere of pure oxygen (this generated by nuclear heat) with the combustion of waste biomass. Under these circumstances a pure stream of carbon oxides (monoxide and dioxide) is generated; where steam is present, hydrogen and carbon dioxide - a form of "syn gas" that can essentially replace all materials now obtained from dangerous petroleum - can be generated. Similarly, "dry reforming" - heating biomass to high temperatures under an atmosphere of pure carbon dioxide - can generate carbon monoxide, which can be disproportionated into various allotropes of carbon and more carbon dioxide.

For various reasons, including the increase of energy efficiency under certain rather obscure but real circumstances, carbon capture technologies are of interest, even if the idea of "carbon sequestration" in waste dumps is a quixotic and useless exercise that will not work. Hence my interest in this paper.

My comments aside, the paper begins with a genuflection to the idea of "carbon capture & storage" ("CCS") as opposed to what I believe to be essential in order to give these processes any remote chance of being useful, sustainable and economic, "carbon capture and utilization" ("CCU"). It also refers, as it comes from a Chinese institution, to coal, a fuel I oppose along with the allegedly "green" dangerous fossil fuel, dangerous natural gas, and, of course, petroleum.

From the introduction:

Global warming and other consequential environmental problems resulting from the greenhouse effect have received a great deal of attention in recent years. Since CO2 is the major contributor to greenhouse gases, it is particularly important and urgent to reduce the amount of CO2 emitted into the atmosphere due to the utilization of fossil fuels.1 Considered to be a critical solution to global CO2 emission reduction, CO2 capture and storage (CCS) technology has been given an urgent requirement for its own development.2 Among the various CCS technologies, the chemical absorption using aqueous solutions of amines, such as monoethanolamine (MEA), methyldiethanolamine (MDEA), and diethanolamine (DEA), is the most mature and well-established one for CO2 capture.3,4 However, this process presents major drawbacks, such as high operating costs, evaporation of amine solution, and equipment corrosion,5 which lowered the production efficiency of coal-fired power plants by 10−12%.6 Thus, there is a growing demand on new energy-efficient CO2 capture techniques for CCS applications. The adsorption process with the use of solid adsorbents has been developed to overcome these drawbacks in chemical absorption and showed the advantages of high product purity, low energy consumption, low toxicity, and ease of adsorption and regeneration,7−9 which displayed a broad application prospect in adsorptive separation of CO2 from flue gas.10,11 During recent years, numerous studies have reported that the CO2 capture capacity of porous solid adsorbents could be greatly enhanced by amine modification.12,13 These amine-modified solid adsorbents can be simply obtained by physically impregnating the porous supports with amine,14 which showed a higher CO2 capture capacity and lower cost compared to the grafting methods.15 An excellent amine-modified adsorbent should have unobstructed pore structure for CO2 transfer16 and a high capture capacity of CO2...

Many of the well known examples of solid-phase carbon dioxide capture agents are challenging to synthesize on an industrial scale, a point the authors make referring to silica-based adsorbents, including the well known MCM-41:

Although amine-modified mesoporous silica-based materials exhibit excellent CO2 adsorption properties, the preparation of mesoporous silica is not cost-effective due to the use of expensive silica sources and surfactants in the synthesis, leading to difficulties with large scale manufacturing.32 Besides, it is an essential step to remove the organic surfactants after the synthesis of silica materials, which indeed involves the use of high temperature and chemicals that could increase the cost and the environmental burden.33 Therefore, the easily synthesized and environmentally friendly porous materials with superior performance and desired economics urgently need to be developed as the support of amine-modified adsorbents. Moreover, in addition to N2 and CO2, the flue gas also contains water vapor, SO2, and NOX, which may affect the performance of amine-modified adsorbents during CO2 capture.

What the authors propose is to synthesize a mesoporous form of magnesium carbonate, having the interesting property that its preparation is a case of CCU, inasmuch as the synthesis utilizes carbon dioxide as a reactant:

2.2. Synthesis of Adsorbents. The porous MgCO3 was prepared as the procedure reported previously.34 Briefly, MgO was mixed with methanol, after stirring under 3 bar CO2 pressure at 50 °C for 3 h, the mixture reacted under 1 bar CO2 pressure at 25 °C, followed by drying at 70 °C for 3 days, the dried product was calcined at 250 °C for 3 h with a 3 h ramp time. On the basis of this method, in order to select the best support, the following 5 samples (M1 to M5) were synthesized under different experimental conditions, which are shown in Table 1, respectively.

Methanol is readily available from syn gas. Table 1 lists the synthesis conditions. M4, the most discussed porous MgCO3 form, is prepared with methanol containing 33% toluene. Toluene is a product of the dangerous petroleum industry, although it is conceivable, albeit not current industrial practice, to obtain it from certain forms of biomass, for example by the reaction of butadiene (from cellulose-derived furan) or pentadiene (from methyl furan) with ethylene (from syn gas) or propylene (also from syn gas). M4 is prepared by stirring MgO in this solvent under a CO2 atmosphere for 4 days at room temperature.

Further aspects of the process are described, using ethanol, also available from syn gas and, of course, albeit as questionably as is the case with other so-called "renewable energy" schemes, from grain:

PEI-modified MgCO3 adsorbents were prepared via a wet impregnation method.35 The desired amount of PEI dissolved uniformly in ethanol was added to the sufficiently dried MgCO3. The resulting slurry was stirred and refluxed at 80 °C for 2 h. After completely evaporating the ethanol at 80 °C, the sample was dried at 100 °C for 2 h in an oven. The obtained adsorbent was denoted as xP-M, where x (x = 10, 20, 30) indicated the mass percentage of PEI. The synthetic process of porous MgCO3 and PEI-modified MgCO3 adsorbents is illustrated schematically in Figure 1.

The "x" in "xP-M" carries through the paper; for example, 20P-M is 20% PEI and 80% MgCO3.

Beginning with Figure 1, let's now just look at the pictures, a useful way to get a feel for a full paper before reading it in detail.

The caption:

Figure 1. A schematic diagram of the synthesis of porous MgCO3 and the impregnation process of PEI.

The testing apparatus for measuring its performance as an adsorbent:

The caption:

Figure 2. Diagram of experimental apparatus for CO2 adsorption.

Note that the authors imagine this material capturing carbon dioxide from the flue gas of the combustion of dangerous coal. In contrast to the combustion of biomass in a pure oxygen atmosphere, the air-fueled combustion of coal will yield flue gas containing considerable amounts of nitrogen. Hence the effect of nitrogen is considered important by the authors:

The caption:

Figure 3. N2 adsorption/desorption isotherms (a) and pore size distribution curves (b) of the prepared groups of MgCO3

It seems that the PEI loadings have a fairly large effect on gas availability in the pores, related to the extent to which pores in the magnesium carbonate are obstructed by the polymer.

The caption:

Figure 4. N2 adsorption/desorption isotherms (a) and pore size distribution curves (b) of M4 and adsorbents with different PEI loadings.

The caption:

Figure 5. FTIR spectra of M4 and adsorbents with different PEI loadings.

The caption:

Figure 6. SEM images of M4 (a), and adsorbents with different PEI loadings:10P-M (b), 20P-M (c), and 30P-M (d).

"Breakthrough" below refers to the point at which CO2 is detected after the flow has passed over the adsorbent.

The caption:

Figure 7. Breakthrough curves of CO2 of M4 and adsorbents with different PEI loadings at 25 °C (a), 40 °C (b), 60 °C (c), and 75 °C (d).
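For readers unfamiliar with these measurements, a capture capacity is typically extracted from breakthrough curves like those in Figure 7 by integrating the CO2 removed from the gas stream over time and dividing by the adsorbent mass. A sketch with made-up illustrative numbers, not the paper's data:

```python
import numpy as np

# Extracting a capture capacity from an (idealized, made-up) breakthrough
# curve: integrate flow * (inlet - outlet CO2 fraction) up to saturation,
# then divide by the adsorbent mass. Numbers are illustrative only.
t = np.linspace(0.0, 30.0, 301)              # time, min
c_in = 0.10                                  # inlet CO2 mole fraction (10%)
c_out = c_in / (1.0 + np.exp(-(t - 15.0)))   # idealized sigmoid breakthrough
flow = 2.0                                   # molar feed rate, mmol/min
mass = 1.0                                   # adsorbent mass, g

removed = flow * (c_in - c_out)              # mmol CO2 captured per minute
# trapezoidal integration over time
captured = float(np.sum(0.5 * (removed[1:] + removed[:-1]) * np.diff(t)))
capacity = captured / mass                   # mmol CO2 per g adsorbent
print(f"capacity ~ {capacity:.2f} mmol/g")
```

The earlier the outlet concentration rises toward the inlet value, the smaller the integrated area, which is why the PEI loading and temperature effects show up so clearly in the breakthrough curves.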

20P-M can capture carbon dioxide at fairly high temperatures:

The caption:

Figure 8. Effect of adsorption temperature on the CO2 capture
capacities of M4 and adsorbents with different PEI loadings.

The effect of trace gases on the absorption:

The caption:

Figure 9. Effects of H2O, NO, and SO2 on the breakthrough curves (a), (c), (e), and CO2 capture capacity (b),(d), and (f) of 20P-M at 75 °C.

It is important to note here that even when combusted in pure oxygen, biomass will yield limited amounts of these impurities, because biomass contains nitrogen (in proteins and nucleic acids) and sulfur (from the amino acids cysteine and methionine, and molecules for which they are biological precursors).

The material shows excellent recyclability when the carbon dioxide is removed at approximately 100 °C.

The caption:

Figure 10. CO2 capture capacity of 20P-M during 10 cycles of CO2 adsorption/desorption in dry and 10 vol % H2O contained flue gas.

An excerpt from the conclusion of the paper:

A variety of MgCO3 with different porous structures were successfully synthesized and characterized. The synthesis of MgCO3 was based on a facile and template-free method and utilized CO2 as reactant, allowing the porous MgCO3 to be new and promising CO2-storage materials. Meanwhile, the synthesis strategy developed is also beneficial to the potential utilization of CO2. Among those as-prepared MgCO3 materials, M4 with the optimal morphology was selected as support for CO2 adsorbent. A series of adsorbents with different PEI loadings were prepared by effective impregnation while the microstructure of the adsorbents was well maintained afterward. The capacity of CO2 capture in PEI-modified adsorbents was significantly increased, particularly for the adsorbent with 20% PEI loading (4 times higher than the one without PEI at 75 °C, up to 1.07 mmol/g). At low temperature (25 and 40 °C), because of the sterically hindered effect, adsorbents with relatively low PEI loading performed better than the highly loaded ones. On the contrary, the high PEI-loaded adsorbents were advantageous at higher temperature (60 and 75 °C) where the diffusion resistance was reduced.
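To put the quoted 1.07 mmol/g capacity in more familiar units (the molar mass of CO2 being 44.01 g/mol):

```python
# The quoted capacity of 1.07 mmol of CO2 per gram of adsorbent, converted
# to mass units.
capacity_mmol_per_g = 1.07
molar_mass_co2 = 44.01                            # g/mol
mg_per_g = capacity_mmol_per_g * molar_mass_co2   # mg CO2 per g adsorbent
# the same figure read at scale: kg of CO2 per metric ton of adsorbent
print(f"{mg_per_g:.1f} mg CO2/g, i.e. about {mg_per_g:.0f} kg per metric ton")
```

Roughly 47 kg of CO2 per metric ton of adsorbent per cycle, which gives a feel for how much adsorbent, and how many adsorption/desorption cycles, any industrially meaningful capture scheme would require.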

Whether we know it or not, we are in very, very, very bad shape with respect to the environment on which all life depends. This is true whether we spend our time obliviously picking lint out of our navels while glibly waxing enthusiastic for Elon Musk's stupid car and/or the endless series of "renewable energy breakthroughs," decade after decade, breakthroughs that fail even to slow the rise in the use of dangerous fossil fuels and the contamination of the atmosphere, or whether we recognize the need to change our attitudes and face the true magnitude of the problem.

Papers like this one allow, nevertheless, for a sliver of hope.

I trust you're having a pleasant Sunday afternoon.

South Korea accepts geothermal plant probably caused destructive quake.

I came across this news item in a recent issue of Nature.

Bricks and debris from damaged buildings lie on the ground in front of a damaged car in Pohang, South Korea
A 2017 earthquake in Pohang, South Korea has been linked to a geothermal plant.Credit: Yonhap/EPA-EFE/Shutterstock

A South Korean government panel has concluded that a magnitude-5.4 earthquake that struck the city of Pohang on 15 November 2017 was probably caused by an experimental geothermal power plant. The panel was convened under presidential orders and released its findings on 20 March.

Unlike conventional geothermal plants, which extract energy directly from hot underground water or rock, the Pohang power plant injected fluid at high pressure into the ground to fracture the rock and release heat — a technology known as an enhanced geothermal system. This pressure caused small earthquakes that affected nearby faults, and eventually triggered the bigger 2017 quake, the panel found.

The quake was the nation’s second strongest and its most destructive on modern record — it injured 135 people and caused an estimated 300 billion won (US$290 million) in damage...

...Earthquakes have been linked to geothermal power plants in other parts of the world. But the Pohang quake is by far the strongest ever tied to this kind of plant — 1,000 times mightier than a magnitude-3.4 quake triggered by a plant in Basel, Switzerland, in 2006.
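The "1,000 times mightier" figure follows directly from the standard Gutenberg-Richter energy scaling, in which radiated seismic energy goes as 10^(1.5 × magnitude); a two-line check:

```python
# Radiated seismic energy scales as 10**(1.5 * magnitude), so the ratio
# between two quakes depends only on the magnitude difference.
def energy_ratio(m1, m2):
    return 10.0 ** (1.5 * (m1 - m2))

ratio = energy_ratio(5.4, 3.4)  # Pohang (M5.4) vs Basel (M3.4)
print(f"energy ratio: {ratio:.0f}x")
```

A two-unit magnitude difference is a factor of 10^3 = 1,000 in radiated energy, exactly the figure Nature quotes.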

The full brief news item seems to be open sourced, since I didn't need to log in to read it:

Nature News 22Mar19.

Have a pleasant Sunday.

Refractory Ablative Heat Shields for Spacecraft: A Path to Addressing Climate Change?

The paper I will discuss in this post is this one: Zirconium-Doped Hybrid Composite Systems for Ultrahigh-Temperature Oxidation Applications: A Review (Giridhar Gudivada and Balasubramanian Kandasubramanian, Ind. Eng. Chem. Res., 2019, 58 (12), pp 4711–4731)

This paper itself is not about climate change, and the reason I am posting it here in the E&E section, rather than the Science group, where it may be equally appropriate if not more appropriate, is solely based on my own speculations, speculations connected with some insight into how superalloy turbines, wherein the surfaces are protected by thermal barrier coatings, work. These types of turbines are generally utilized in dangerous fossil fuel combustion systems such as combined cycle gas plants - and, far more rarely, combined cycle integrated gasification coal plants - and in dangerous petroleum fueled jet aircraft, but it is clear that they might well be adapted for use in cleaner and safer nuclear systems. One feasible avenue - not the only avenue, but certainly one likely to be important - is the high temperature, high pressure reformation of biomass. Some, but not all, of the energy invested in reforming the biomass can be recovered by allowing the resultant gases, likely to be a mixture of steam, hydrogen, and carbon dioxide (or, if the water has been consumed, carbon monoxide) to expand against a turbine. The hydrogen/carbon oxide mixtures (syn gas) will then be available to displace all of the current uses for dangerous fossil fuels, including those that represent sequestration in products. In this case, particularly in the case of extreme temperatures that are ideal for many reasons of efficiency, the velocity and temperature of the gases, particularly at critical points like nozzles, are likely to approximate those found on the surfaces of vehicles experiencing re-entry or launch at supersonic speeds.

Thus, the relevance of this materials science paper to climate change can be established.

The introduction to this review indicates what the subject is really about, which is not turbines, but high speed aircraft and spacecraft:

Ablative materials are degenerative composite systems which, by design, are processed to degrade at projected rates when exposed to high aerodynamic heat rates (∼10^5 BTU/ft^2) at high temperatures (∼8000 °C). Ablative materials have diverse applications1−5 in the fields of aerospace as a protective layer for leading edges6 of the control surface, in medicine for curing various diseases in form of ablating lasers and in space technology as thermal protecting systems at hyperthermal7−9 environments. In the field of medicine, ablation2 phenomenon is used to cure tumors and treat irregularities in heart pulse rates; by focusing high dosages of energy over a small volume, as in the case of ablative radiography or catheter ablation for atrial fibrillation; however, in the case of aerospace technology, heat energy is insolated upon a larger surface that is to be considered. The term “ablation” in medical terminology implies the complete removal of material from the host system, as in the case of tumors and in the case of atrial fibrillation, the paths of unnecessary impulses are cut down, whereas, for the field aerospace technology, only a part of the system is necessarily required to ablate at known uniform rates under stable operating conditions. A technical understanding of the phenomenon ablation, as early as 1983, states that,
“Ablation is a complex energy dissipative process whereby a material undergoes combined thermal, chemical, and mechanical degradation accompanied by a physical change or removal of surface material”.10
The degradation process has been a keen interest among the scientific community for many years and has evolved many techniques to converge upon a common idea, i.e., how an ablative material functions under severe aerodynamic conditions. Recently, the multiphase modified matrix technology, unlike simple single-phase matrix systems of two classes mentioned in the next section has offered a platform for yielding knowledge investment from a multidisciplinary background of science and engineering for design the ablative materials. Therefore, the performance of modern ablatives is tending toward euclidative application of ultrahightemperature ceramics, potentially with zirconium diboride, because of its quick and timely response to the cataclysmic reentry environments as witnessed by the thermokinetic approach and experimental procedures that are discussed in this article.

1.1. Re-entry Vehicle Structures. Re-entry11−14 vehicle structures are marvels of modern engineering that have made human space travel conceivable by guaranteed safe landings, surviving the extreme re-entry conditions that are discussed in the next section...

The main idea of these kinds of systems is twofold: they are designed to dissipate some heat by exploiting the very high heat of vaporization of very high melting (and vaporizing) materials, while protecting inner layers from heating beyond their melting points by containing materials that have extremely low thermal conductivity.

One of the best descriptions of this phenomenon is the paper published by the great Princeton University scientist Emily Carter on the occasion of her induction into the National Academy of Sciences: Atomic-scale insight and design principles for turbine engine thermal barrier coatings from theory (Kristen A. Marino, Berit Hinnemann, and Emily A. Carter, April 5, 2011 108 (14) 5480-5487) The full paper is available open sourced on line, but for convenience I reproduce an excerpt of the introduction here:

Aircraft and power plants share a common source of usable energy: Both employ turbine engines that combust fuel to either propel airplanes or produce electricity. At a time in which efficient use of energy is paramount, improving the efficiency of turbine engines is one means to contribute to this global challenge. Turbine engines operate via the Brayton cycle, which offers lower carbon dioxide emissions and lower cost for power generation than other possible alternatives. Their efficiency can be increased by increasing the inlet temperature...

...However, high-temperature operation, under oxidizing conditions, poses serious demands on the materials...

...Materials must be found that are robust under such harsh operating conditions. Engineers over the past few decades have improved greatly the thermomechanical properties of the metal alloy comprising, e.g., the turbine blades, and have created a multilayer coating for the blades that protects against both heat and corrosion, referred to as a thermal barrier coating (TBC). These materials advances, along with internal component cooling, have been astonishingly successful, allowing the gas temperature to exceed the melting point of the metal alloy from which the engine components are constructed!
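To give a numerical feel for why engineers chase higher operating conditions, here is the textbook ideal Brayton cycle efficiency as a function of pressure ratio. The figures are illustrative only; real engines, and the inlet-temperature limits Dr. Carter discusses, deviate from the ideal cycle, but higher allowable turbine inlet temperatures are precisely what permit more aggressive cycles:

```python
# Ideal Brayton cycle thermal efficiency: eta = 1 - r**(-(gamma-1)/gamma),
# where r is the compressor pressure ratio and gamma the heat capacity
# ratio of the working gas (~1.4 for air). Illustrative values only.
def brayton_efficiency(pressure_ratio, gamma=1.4):
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

for r in (10, 20, 30):
    print(f"pressure ratio {r}: ideal efficiency = {brayton_efficiency(r):.3f}")
```

Roughly 48% at a pressure ratio of 10, rising past 60% at 30, which is why every additional degree of allowable inlet temperature is fought for with exotic materials.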

The class of multilayered materials largely discussed in the review introduced in this post are ablative, designed to erode (slightly) in use, whereas the layered materials for turbines to which Dr. Carter alludes in her paper are not. Both papers however discuss the chemistry and material properties of zirconium: Dr. Carter's refers to "YSZ," yttria-stabilized zirconia, and predicts that a Hf analogue - hafnium is a (relatively rare) congener of zirconium and titanium - may be superior, based on in silico calculations. In the paper currently under discussion, however, the layered material is doped with zirconium boride, an extreme refractory.

The paper has a nice graphic showing the classes of refractory layered materials:

The caption:

Figure 1. Classification of materials for the thermal protection system.

Of particular interest are the ultra-high temperature ceramics which are described in the text as follows:

1.3. Ultrahigh-Temperature Ceramics (UHTC) for Ablative Applications. The UHTCs are the ceramics with melting points greater than 2700 °C.1 These materials possess properties like good oxidation resistance, ablation resistance, thermal expansion, and damage tolerance among other characteristic features which are discussed in later sections. The best contender among UHTCs for ablative is ZrB2; nevertheless, there are other ceramics like tantalum carbide (TaC), hafnium-diboride (HfB2), and hafnium carbide (HfC) with melting point temperatures higher than that of ZrB2 but there are other aspects, such as cost, ease of processing, availability, and temperature range of chemical activity (1500− 1800 °C). It is necessary for the ceramic to be used as a matrix modifier while ablation during the re-entry phase that they have to respond to the changes in the environment in the vicinity of the boundary layer.49−53

Technologies based on tantalum are best avoided, since tantalum is a fairly rare and easily depleted element, and - although it is widely used in cell phones - is a conflict metal. Small amounts of it are synthesized in certain types of nuclear reactors, generally shipborne reactors, in control rods, by neutron capture in hafnium; but hafnium, utilized in this fashion because of its high neutron capture cross section, is itself relatively rare, and is found as an impurity in all zirconium ores, from which it must be removed for nuclear applications.

Zirconium ceramics, of which zirconium boride is one example, have extremely high melting points, according to the review above 3000 °C; however, they are said to exhibit poor thermal shock resistance and low fracture toughness, and, as is true of many ceramic materials, they are brittle. Thus, as suggested by Dr. Carter's paper, they are utilized as composite materials.

This schematic cartoon, showing how ablative thermal shields work, gives a feel for how the layering works:

The caption:

Figure 3. Representation of the ongoing ablation process.

An issue with layered systems, however, is that the properties of the materials must be closely matched, specifically thermal expansion and factors like Young's modulus, or "stiffness."

Some of the mathematics connected with these considerations is described:

…mathematical evaluation of mechanical properties have been undertaken for fracture at high time rate of thermal loads based on certain assumptions which state that the model (Figure 4) is a two well-bonded plate, which does not consider the interface damage, there is no heat exchange between the UHTC plate and base plate, both the layers geometrically confirm with each other which make calculation easier, finally plate is continuous, isotropic, elastic, and is restricted in the domain of small deformation hypothesis.

The equations for the effective linear expansion in the ceramic layer then are given by eq 1:


where Δx1 is the elongation in the ceramic plate without external restrictions and Δxσ is the value of restricted elongation due to complementary compressive stress at the interface. From the above equation, it is evident that the net elongation of the system when assumed the changes in Young’s modulus (Yc) and Poisson’s ratio (μc) for ceramic material are devoid of temperature changes, could generate an internal stress σ in the ceramic plate plausibly at the interface, which was derived by Li et al.,87 as shown in eq 2:


Considering the above equation with the effects of temperature would be modified to eq 3.

Here, the pressure stress or internal compressive stress has been taken into account by considering Young’s modulus (Y) and the coefficient of thermal expansion (α):


The most effective way to mitigate failure by thermal shock is to increase the critical temperature difference of rupture (CTDR). According to Wang et al., the CTDR increases as the temperature of surroundings increase, up to a certain extent, and then decreases. The governing equation for CTDR by Wang et al.,82 is as shown in eq 4


where h is the heat-transfer coefficient, tS is the thickness of the ceramic plate, and R′ is a constant parameter called as second thermal shock resistance parameter and is given by eq 5:


Note that the physical properties of mechanical interest, such as the Young’s modulus and fracture stress, vary along with temperature as described by eq 6.

where B1, B2, B3, and Bo are material constants and Yo is Young’s modulus at ambient conditions. The fracture stress as a function of temperature is given according to Li et al.,87 as in eq 7. In order to understand the dependence of the fracture stress with temperature, it has to be inferred from the function of Young’s modulus that it is dependent on temperature in a transcendental fashion and so does the fracture strength.

It is mentioned that the term

belongs to a temperature-dependent fracture surface energy term82 and indicates that, as the operating temperature approaches the melting point, the ratio φ has a tendency to unity; as a result, the temperature-dependent fracture stress tends toward theoretical stress, leading to failure. The thermal shock resistance can be increased by incorporating microflaws into the ceramics, like crack, pores, grains, residual stress due to thermal expansion anisotropy, and, as such, eliminating the initial rupture temperature reaching the danger zone of temperature for thermal shock resistance as reported by Kou et al.,88 and Wang et al., for materials such as hafnium diboride and zirconium diboride, respectively, along with mathematical reasoning.82 These microflaws also cause deterioration in mechanical performance, as reported by Wang et al., which is testified by the following equations of fracture mechanics...
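To give a rough numerical feel for the thermal shock problem the excerpts describe, here is the standard constrained-plate thermoelastic relation, σ = YαΔT/(1 − μ), which is the usual form of the internal stress discussed above. The material property values are nominal, ZrB2-like numbers that I have assumed for illustration; they are not taken from the review:

```python
# Biaxial stress in a plate heated by dT with in-plane expansion blocked:
# sigma = Y * alpha * dT / (1 - mu). Property values below are assumed,
# nominal ZrB2-like figures for illustration only.
def thermal_stress(Y, alpha, dT, mu):
    """Constrained-plate thermal stress, Pa."""
    return Y * alpha * dT / (1.0 - mu)

Y = 500e9      # Young's modulus, Pa (assumed)
alpha = 6e-6   # coefficient of thermal expansion, 1/K (assumed)
mu = 0.11      # Poisson's ratio (assumed)
sigma = thermal_stress(Y, alpha, dT=200.0, mu=mu)
print(f"~{sigma / 1e6:.0f} MPa for a 200 K temperature jump")
```

Hundreds of megapascals from a modest 200 K excursion: this is why brittle ultrahigh-temperature ceramics fail by thermal shock, and why the review devotes so much attention to raising the critical temperature difference of rupture.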

There is a lot of similar information in this wonderful review, and it will not be possible to cover everything therein. Regrettably the paper is not open sourced, and must be accessed in a library.

It may be useful though, to look at the pictures.

The fine details of how these materials perform, for which the above text gives some feel, result in changes to the material in use, and in some cases these changes improve the material's performance.

This cartoon evokes as much:

The caption:

Figure 5. Schematic of ablation cycle of a material

These changes can actually enhance the properties of the material. For example, a zirconium boride carbon composite will become coated with ZrO2 in an oxidizing environment, which enhances temperature resistance, since ZrO2 has well known thermal barrier properties and, as discussed above, can be modified with yttrium to give the widely used "YSZ" material.

Silicon carbide is a well known and widely used refractory ceramic. When doped with zirconium boride, graphene or graphene oxide can form. The following graphic relates to a consequence of this structural rearrangement, which is that the material can be utilized as an oxygen reduction electrode in fuel cells in the presence of platinum dopants, thus further showing the utility and versatility of these materials. Note that if the carbon involved in the graphene and silicon carbide is obtained by air capture (by any means), the carbon is effectively sequestered.

The caption:

Figure 6. Combined effect of graphene and ZrB2 under the influence of an ionized platinum on oxidation properties at low temperatures.

While not explicitly described as such in this review, the graphic immediately above comes from the following open sourced paper, which, if interested, the reader can easily access and read merely by clicking on the link below.

Nano Conductive Ceramic Wedged Graphene Composites as Highly Efficient Metal Supports for Oxygen Reduction (Mu et al., Scientific Reports, volume 4, Article number: 3968 (2014))

In the case of a re-entry vehicle, the temperature-driven evolution of the material is evoked by the following cartoon:

The caption:

Figure 7. Illustration of ZrB2–SiC response in a typical re-entry environment.

A more detailed representation:

The caption:

Figure 8. Detailed illustration of highlighted area for the typical response of ZrB2–SiC to re-entry environment.

The addition of additional elements are being evaluated to improve the performance of these materials:

Many scientists have investigated the aforementioned studies and concluded that mechanical alloying with rare-earth elements forms a multilayer protective glass coating, yet each layer may still be multiphase. Tan et al.104 modified ZrB2 with samarium and thulium through two processes: first, chemical doping by CVI technique and second, by dry mixing in a ball mill, followed by compaction in a press. Furthermore, they reported that chemically doped ZrB2 best performs by enhanced surface emissivity, which is an ingenious technique to deal with ablating environment as radiation can transfer 90% of heat. It is required to recollect that the addition of one atom to another effects cation field strength and the addition of transitional metals to ZrB2 due to optimal cation field strength (eq 8), there would be immiscibility, which increases viscosity, as explained by the Einstein−Stokes equation (eq 9) of the melt at oxidation temperatures. As a result, oxygen transport into the material reduces in proportion to the increasing viscosity of the melt. In addition, mechanical mixing has not given many admirable results, when compared to chemically modified ZrB2, as reported by Monteverde et al.105

C = Z/r²  (eq 8)

D = KT/(6πηp)  (eq 9)
where C denotes cation field strength, Z denotes valency, r denotes ionic radius, D denotes diffusion rate, K denotes Boltzmann constant, η denotes viscosity, p denotes particle dimension, and T denotes absolute temperature…
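The Einstein-Stokes relation invoked in the excerpt can be sketched numerically. I have taken p as the particle radius, and the temperature and viscosity values below are arbitrary illustrations, not figures from the review:

```python
import math

# The Einstein-Stokes relation the excerpt invokes: D = kT / (6*pi*eta*p),
# taking p as the particle radius. Diffusion slows in direct proportion to
# melt viscosity, which is the mechanism by which a more viscous glassy
# melt throttles oxygen transport into the material. Illustrative values.
def stokes_einstein(T, eta, p):
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6.0 * math.pi * eta * p)

# doubling the melt viscosity halves the diffusion rate, all else equal
D1 = stokes_einstein(T=2000.0, eta=1.0, p=1e-10)
D2 = stokes_einstein(T=2000.0, eta=2.0, p=1e-10)
print(f"D(eta=1)/D(eta=2) = {D1 / D2:.1f}")
```

This inverse proportionality is the whole point of the rare-earth doping strategy: raise the viscosity of the oxide melt, and oxygen ingress falls accordingly.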

…Another innovative idea to form multilayer was reported by Zhang et al.,107 by doping zirconium diboride with tungsten carbide (WC), which lead to the formation of dual glass layer (Figure 10), the top layer was porous and depleted of tungsten oxide and appeared light in complexion, while the bottom layer was rich in WC and appeared dark and dense...

Like zirconium, samarium is a fission product, and thus, given the high energy-to-mass density of nuclear energy even when compared with dangerous fossil fuels, appreciable quantities may be available from the reprocessing of used nuclear fuels, especially when the timing of the reprocessing is chosen to minimize or maximize the residual radioactivity of particular isotopes of these elements. The higher lanthanides beyond europium, thulium for example, are not appreciably represented among fission products, although small amounts may be formed in a kind of earthbound aufbau process - the manner in which heavy elements are formed in stars via the s-process - in "breed and burn" nuclear reactors, the kind I personally favor. This would allow the heaviest lanthanide fission products, which have high neutron capture cross sections, to serve as neutron shields (and, in some cases, as heat sources to maintain metal coolants in liquid states during shutdown). In any case, not all of these strategies result in positive outcomes, and the matter should remain an area of materials science research.

Figure 10:

The caption:

Figure 10. Effect of modifying ZrB2 with WC on the formation of barrier coat.

Overall, these effects are summarized in the following graphic:

The temperature gradient that these materials generally experience is shown in this cartoon:

The caption:

Figure 12. Temperature profile of the ablative material.

There is a nice evocation of the thermodynamics of these systems - thermodynamics being the science most routinely ignored by the "efficiency will save us" and "batteries will save us" types who have led us to the horror of the dangerous fossil fuel waste carbon dioxide's concentration being permanently well above 410 ppm (and rapidly rising). The discussion includes some very beautiful and fun differential equations, as well as an evocation of the Arrhenius equation, Arrhenius being the guy who told us in the late 19th century that what is happening with respect to climate change would happen:

It has also been mentioned that there is a continuous thermal gradient that exists through the char region, reaction/ pyrolysis zone, and unaffected virgin material (Figure 12). An Arrhenius-type temperature-dependent reaction rate (eq 12) has been mentioned and explained as follows:

A significant work presented by Norman et al., where the temperature distribution was presented as a function of char depth and energy balance (eq 13).

The first term apparently is the rate of heat flow through the nonporous part of the material, calculated from Fourier’s principles for one-dimensional (1-D) heat flow, the second term excludes the conductive heat flow into the trapped gases inside the voids, which could otherwise flow away to the surface with the velocity υs, the fourth term is probably the heat rate exchanged between the hot entrapped gases and elemental material of depth dx, while the gases are expelled out of reaction zone of the material with no effectiveness of heat exchange taken into consideration; finally, the last term has been mentioned by Norman et al.,57 as a result of heat of decomposition. With the above analysis, where, for eq 13, ϵ is fractional void in the solid (for a unit length volume fraction and area fraction do not vary significantly), υs is a relative velocity between the material surface and incoming mass of air, ρs is the density of the material, cps stands for the specific heat of material, cpg represents the specific heat of gases, ṁg is the mass rate of gases, ks is the conductivity of the solid, ΔE is the activation energy of phenolic matrix (11 kcal mol−1), MW is a constant with the value of 10, and, finally, kg denotes the conductivity of gas...
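Since the excerpt quotes an activation energy of 11 kcal/mol for the phenolic matrix, the Arrhenius-type rate in eq 12 is easy to sketch numerically. A minimal Python sketch follows; the pre-exponential factor A is a placeholder of mine, not a value from the paper:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol·K)

def arrhenius_rate(T_kelvin, E_a=11.0, A=1.0):
    """Arrhenius-type rate constant k = A·exp(-E_a/(R·T)).

    E_a = 11 kcal/mol is the activation energy quoted for the
    phenolic matrix; A is a placeholder pre-exponential factor.
    """
    return A * math.exp(-E_a / (R * T_kelvin))

# The rate rises steeply across the char/pyrolysis temperature gradient:
for T in (500.0, 1000.0, 2000.0):
    print(f"T = {T:6.0f} K  k/A = {arrhenius_rate(T):.3e}")
```

The point of the exercise is just to see how sharply the pyrolysis rate turns on across the temperature profile of Figure 12: the relative rate climbs by several orders of magnitude between the virgin material and the char region.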

There is then a discussion of the preparation methods of these materials, a subject I personally find interesting because of my interest in printable nuclear reactor cores composed of ultrahigh temperature ceramics represented by actinide nitrides. I just have time for the cartoons.

The caption:

Figure 13. Depiction of the sol–gel process.

The caption:

Figure 15. Schematic of a typical CVI process.

There's a depiction of the test equipment:

The caption:

Figure 16. Schematic of the ablation test.

And a graphic illustrating the overall concepts:

The caption:

Figure 17. Design parameters of an ablative material.

Some phenolic resin chemistry of carbon relative to the building of these materials:

The caption:

Figure 18. Mechanism of coalescence of phenol rings during pyrolysis.

This post is, I'm sure, highly esoteric, even for my posts, many of which fit into the category of "esoteric."

I write them to fix concepts in my mind, and post them on the off chance that there are people interested in the practical scientific and engineering issues of addressing climate change, which, trust me, are way beyond anything being discussed politically and popularly. There are scientists working long and hard hours to build the intellectual infrastructure by which we may save what remains to be saved, and any attention they get improves whatever small chances remain for our planet.

Irrespective of your interest in the practical approaches to addressing and even reversing climate change, and the Herculean engineering tasks they represent, I trust you're having a nice weekend.

Pore Size and Shape & the Release of Radon Gas in Fractured Rocks in the Marcellus Shale Gas Fields.

The paper I'll discuss in this post is this one: Investigating Effects of Pore Size Distribution and Pore Shape on Radon Production in Marcellus Shale Gas Formation (Sondergeld et al, Energy Fuels, 2019, 33 (2), pp 700–707).

Although it garnered very little popular attention until the late 1930's, other than as a colorant for stained glass and as an orange glaze for ceramic cookware and serving dishes, uranium was of considerable scientific interest, and of some commercial interest. Industrially the ore was mined not for the metal itself, but rather for its decay product, radium, which was widely used in luminescent watch and clock dials. (I had one of these when I was a small kid. I thought it was great.) The discovery of radioactivity was also associated with uranium, and the element remained of scientific interest up until the late 1930's, when Lise Meitner discovered nuclear fission while interpreting the data from an experiment conducted in the laboratory of Otto Hahn.

It was not recognized until well after the discovery of nuclear fission that uranium is a very common element, about as common as tin. Because uranium has been present on the planet since its formation and is often fixed in ores, it has had time to come into "secular equilibrium" with all of its decay products except for the final product, non-radioactive lead. Except in the ocean, which contains a little under 5 billion tons of uranium, where the chemical distribution of decay products is driven by solubility and is thus subject to fractionation, the products of uranium decay generally remain in the ores, unless the ores are disturbed.
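"Secular equilibrium" simply means that each long-lived parent in the chain ends up supporting its shorter-lived daughters at equal activity. A minimal sketch of the Bateman solution for one parent-daughter step, using the textbook half-lives of Ra-226 (about 1600 years) feeding Rn-222 (3.8 days), shows how quickly the daughter activity catches up to the parent's:

```python
import math

def activity_ratio(t_days, t_half_parent_days, t_half_daughter_days):
    """Daughter/parent activity ratio for one step of a decay chain
    (Bateman solution), starting from a chemically pure parent sample."""
    lam1 = math.log(2) / t_half_parent_days
    lam2 = math.log(2) / t_half_daughter_days
    return (lam2 / (lam2 - lam1)) * (1.0 - math.exp(-(lam2 - lam1) * t_days))

# Ra-226 (half-life ~1600 years) feeding Rn-222 (3.8 days):
T_HALF_RA_DAYS = 1600 * 365.25
for t in (3.8, 19.0, 38.0):
    print(f"after {t:5.1f} d: A_Rn/A_Ra = {activity_ratio(t, T_HALF_RA_DAYS, 3.8):.3f}")
```

After roughly ten daughter half-lives, a bit over a month here, the ratio is essentially 1: the radon activity in an undisturbed ore tracks the radium activity, which in turn tracks the uranium. This is why disturbing the ore, as fracking does, matters.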

The Marcellus shale, which is a large producer of dangerous natural gas is, in fact, a low grade uranium ore, and throughout its geological history it has contained all of the decay products of uranium.

Here is the decay chart for U-238, which should be fairly familiar to people in high school science classes:

Radon-222 (Rn-222) is a noble gas. Where uranium is found in surface soils, it can accumulate in people's basements, and can represent a significant health hazard, in particular because its decay product, highly radioactive polonium-218, can lodge in people's lungs, go through several fast radioactive decays and remain, ultimately, lodged as lead-210, with a half-life of 22 years. (I have measurable radon in my basement, and probably have a few radioactive atoms in my lungs.)

The half-life of uranium-238 is approximately equal to the age of the earth, about 4.5 billion years. There is so much uranium on the surface and subsurface of the Earth that no technology can ever eliminate it.

Here, for completeness, is the decay chart for U-235, which is also found in natural uranium, although its shorter half-life, 703.8 million years, means that natural uranium is relatively depleted in this isotope. (About 1.8 billion years ago, the fraction of U-235 found in uranium ores was high enough that natural nuclear reactors operated, most famously at Oklo, in Gabon.)
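The Oklo remark can be checked with a few lines of arithmetic: running both half-lives backwards from today's isotopic abundance (about 0.72% U-235, a figure I am supplying, not one from the post) gives the enrichment at any past time:

```python
import math

T_HALF_U235 = 0.7038  # billion years
T_HALF_U238 = 4.468   # billion years

def u235_atom_fraction(gyr_ago, f235_today=0.0072):
    """Atom fraction of U-235 in natural uranium `gyr_ago` billion
    years in the past, extrapolated from today's abundance."""
    n235 = f235_today * 2.0 ** (gyr_ago / T_HALF_U235)
    n238 = (1.0 - f235_today) * 2.0 ** (gyr_ago / T_HALF_U238)
    return n235 / (n235 + n238)

print(f"U-235 fraction 1.8 Gyr ago: {u235_atom_fraction(1.8):.1%}")
```

The result comes out near 3%, which is roughly the enrichment of fuel in a modern light water reactor, and is exactly why the Oklo "reactors" could go critical with ordinary groundwater as a moderator.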

There is also a related decay series for thorium-232, itself a decay product from historic Pu-244 which has more or less gone extinct on earth.

A fourth decay series, the Cm-249/Np-237 series went extinct early in Earth's history.

Fracking has allowed for the release of radon gas from the Marcellus shale uranium ores which are not being mined for uranium, but for the dangerous natural gas that is mined in ever increasing amounts while we all wait for the grand so called "renewable energy" nirvana that never comes, as I often say, like Godot.

The paper cited here at the opening is about the mechanism of the release of radon from natural gas, and the fate of that radon as it's shipped to end users.

From the introduction:

Marcellus Shale in the Appalachian basin is a middle Devonian-age shale and lies between limestone (Tristates Group) and shale (Hamilton Group).1 Pennsylvania has become the second largest shale gas-producing state because of Marcellus Shale production.2 In order to economically produce natural gas from extremely low-permeable shale formation, operators rely on hydraulic fracturing to increase the reservoir contact area, creating high-permeable conduits for natural gas to flow.3

Radon gas associated with shale gas production has come under the scrutiny of medical and environmental societies because of its potential negative impacts on the public health and environment.4−6 Radon is the daughter product of radium. Its most stable isotope is 222Rn with a half-life of 3.8 days. Radon is commonly found in the gaseous phase, but it can also partition into the aqueous phase such as contaminated brine and flowback fluids from hydraulic fracturings.7−12 Epidemiological and toxicological surveys show that exposure of radioactive radon causes lung cancer.13,14 Considering radon’s hazard to the public, the EPA set the safe level of radon concentration at 4 pCi/L. Picocuries per liter is a unit of radioactivity. Radon production from the Marcellus Shale is particularly more severe than other shale gas reservoirs and it is worth more attention. First, Marcellus Shale contains highly concentrated uranium and radium, inferring possibly high concentration of radon. Uranium concentration in rock can reach about 8.9−83.7 ppm, which is much higher than other US shale formations.15 Laboratory test measured radium concentration in hydraulic fracturing flowback water to be 1.7 × 10^4 pCi/L.16 Kondash et al.17 also pointed out that flowback water from Marcellus Shale contained unusually high levels of radium. Secondly, field measurements confirmed the existence of radon at a wellsite4 and inside a natural gas pipeline.6 Both observations indicated the radon level was higher than the safe standard. Thirdly, Marcellus Shale is close to a highly populated residential area, which implies a short transportation time for radon to decay from wellsite to residential buildings. Consequently, residents would be at risk of being exposed to hazardous radon. Therefore, it is imperative to critically evaluate the potential danger of the produced radon from Marcellus Shale.

In this paper, the authors obtained some fracked rock from a well, and also used certain kinds of computational analysis to consider how the radon escapes into the gas stream and flowback water.

An important thing to understand is that a nuclear decay is a very energetic event. The decay of radium-226, which gives rise to radon-222, releases roughly 4.87 million electron volts. Much of this energy is carried by the helium nucleus (alpha particle) ejected from the nucleus, but the conservation of momentum requires that the recoiling radon atom also carry considerable energy, and it can in fact travel quite far even in a solid matrix.
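The split of that decay energy follows directly from momentum conservation: the two fragments share the decay energy Q in inverse proportion to their masses. A small sketch using the 4.87 MeV figure above (mass numbers stand in for atomic masses, which is fine at this precision):

```python
# Two-body alpha decay: momentum conservation splits the decay energy Q
# between the alpha particle and the recoiling daughter in inverse
# proportion to their masses.
Q_MEV = 4.87     # decay energy of Ra-226 -> Rn-222 (approximate)
M_ALPHA = 4.0    # mass numbers as a stand-in for atomic masses
M_RADON = 222.0

e_recoil = Q_MEV * M_ALPHA / (M_ALPHA + M_RADON)   # energy of the Rn atom
e_alpha = Q_MEV * M_RADON / (M_ALPHA + M_RADON)    # energy of the alpha

print(f"alpha particle: {e_alpha:.2f} MeV")
print(f"Rn-222 recoil:  {e_recoil * 1000:.0f} keV")
```

The recoiling radon atom comes out with tens of keV, enormous compared with chemical bond energies of a few eV, which is why the recoil ranges quoted from the paper run to tens of nanometers even in solid rock.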

From the text:

The radon atoms acquire kinetic energy after the alpha decay of radium. This energy defines a finite distance, known as the recoil range.27 The kinetic energy allows the radon atoms to travel inside materials. Once the atoms lose all the energy, they stop moving. This process is known as alpha recoil. The distance traveled is material-dependent. Usually, solid materials such as rock grain require more energy than air, for example, to travel equivalent distances. In other words, the radon recoil range is shorter in the material with higher density. Typically, the recoil range in rock, water, and gas is 36, 100, and 60 000 nm, respectively.27

The authors consider two sources: radium already in the pore or on the surface of the pore, and radon that recoils out of the rock grains during alpha decay.

Some remarks on the mathematical modeling of how the radium/radon system works in pores:

Some of the produced radon may stay in pore space while some may penetrate into the adjacent grains. On the other hand, for radon emanated from rock grains into pore space, the alpha recoil process is assumed as the primary mechanism. Given that radon’s half-life is 3.8 days and its low diffusivity (in range of 10−31 to 10−69 m2/s) in rock grains,28 diffusion contribution to radon emanation is negligible compared to recoil. Therefore, only radon produced within the distance of the recoil range to grain-pore surface has nonzero probability of escaping the grain. Equation 1 is modified from Hammond29 to estimate the radon concentration in pores contributed by radium in rock grains.


where, ARa is the radioactivity of radium and ARn is the radioactivity of radon, both in unit of pCi/L. Ve is the grain volume in which the radon generated from radium has nonzero possibility entering pore space, in unit of L^3. e is the emanation efficiency of recoil and Vp is pore volume in unit of L^3. Emanation efficiency e consists of two parts (eq 2). First, not all produced radon near the grain-pore surface will be emitted into pore space (fe). Some of the produced radon atoms remain inside the grain due to the inappropriate recoil direction. Second, radon atoms that enter pore space may maintain sufficient kinetic energy so that they could enter neighboring grains eventually (1 − f i). Both of these factors should be included in evaluating efficiency e


The slit pore shape is one commonly used pore geometry, defined by two parallel planes (grain surface).26 Andrews20 analytically calculated the radon release fraction from grains into pore space (fe) for slit pores. Fleischer21 further studied the fraction of radon atoms ejected from grains that are trapped in pores ( f i). Tian et al.22 investigated how much radon produced from radium in pore space will remain in slit pores after alpha recoil. Besides the slit pore shape, spherical pores also occur in shale, which require different formulas to calculate radon in situ concentration. Emanation efficiency, e, is defined in eq 2. The point O1 is the center of the spherical pore with the radius of R, as shown in Figure 1. The radium atom is initially located at O2. The radon recoil range inside fluid-filled pore space is Rf and the recoil range in solid material is Rs. The solid circle in Figure 1 represents the pore wall and, therefore, the inside of the circle is pore space. If the trajectory of radon after recoil is O2AB, it is helpful to convert the stopping power in fluid to solid.21 In other words, the distance b in the pore filled by fluid is modified to an equivalent distance bRs/Rf if the pore space is assumed to be filled by the solid. Radon particles could possibly be ejected and trapped into the pore if the following criteria are satisfied
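The bookkeeping of eq 1 is straightforward once the geometry factors are known. A minimal sketch, with the caveat that e = f_e · f_i is my reading of the paper's eq 2 (f_e being the fraction ejected from the grain, f_i the fraction of ejected atoms trapped in the pore), and the numbers in the example call are purely hypothetical:

```python
def radon_pore_activity(a_ra, v_e, v_p, f_e, f_i):
    """Eq 1-style estimate of radon activity in pore space.

    a_ra : radium activity in the grain shell of volume v_e (pCi/L)
    v_e  : grain volume lying within one recoil range of the pore surface
    v_p  : pore volume (same units as v_e)
    f_e  : fraction of radon born in that shell that is ejected into the pore
    f_i  : fraction of ejected atoms that stop (are trapped) in the pore
    The combination e = f_e * f_i is one reading of the paper's eq 2.
    """
    e = f_e * f_i
    return a_ra * (v_e / v_p) * e

# Hypothetical illustration, not the paper's numbers:
print(radon_pore_activity(a_ra=1.0e4, v_e=0.05, v_p=0.10, f_e=0.25, f_i=0.8))
```

The structure makes the physics legible: pore-space radon scales with the radium inventory within recoil range, the surface-to-volume ratio of the pores, and the two geometric escape/trapping fractions, which is why pore size and shape matter so much.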



The noble gases, including radon, are known to form clathrates with water, and water transport is an important feature. The fracturing in fracking is accomplished with water laced with a number of interesting chemicals, and this water, called "flowback" water, is brought back to the surface.

Radium was located in the rock grains and the formation water, as the source of radon. Because of the existence of radium, radon reached secular equilibrium,22 which indicates that the concentration of the radioactive atom remains constant as a result of the balance between the production rate and decay rate. The radium concentration in water was taken to be 1.73 × 10^4 pCi/L.16 The radium concentration in the solid phase was determined corresponding to radon in situ concentration. Radon was initially trapped in pore space but can partition between gas and water. The partitioning coefficient is described in eq 10.34

Once the shale reservoir development starts, radon escapes to the surface through conductive hydraulic fractures, being entrained in shale gas and formation water. The alpha decay of radium and radon in the reservoir was simulated by first-order chemical reaction because the decay rate was dependent on their concentrations (eq 11). During the simulation, fresh water was injected into the formation for 0.5 day to mimic the hydraulic fracturing process. The injected fracking fluid did not contain any radon or radium. The well was then brought back to production under a constant bottom-hole pressure after 0.5 day shut in. This work adapted model setup from Tian et al.22

where N is the concentration or radioactivity and λ is the exponential decay constant.

Some diagrams and graphics:

The caption:

Figure 1. Schematic cross-section view of the spherical pore shape. The radon generated from radium in grains (outside of the solid circle) may enter pore space (inside the solid circle). The O2A section has a length of a. The AB section has a length of b. The O2C section has a length of x. O2 represents the location of a radium molecule. After alpha decay, if the radon molecule could fall inside the solid circle, it is considered to be ejected into pore space.

Figure 2. Schematic cross-section view. Radon generated from radium in pore space (inside the solid circle) may remain in pore space. O2 represents the location of a radium molecule. After alpha decay, if the radon molecule could fall inside the solid circle, it is considered to be ejected into the adjacent grains.

Some other graphics:

The caption:

Figure 3. Synthetic model configuration. The horizontal well is located at the top. It is perforated at the hydraulic fracture at the left side. The stimulated reservoir is divided into two sections: the near-fracture zone and the far-formation zone.

The caption:

Figure 4. Backscattered SEM images for Marcellus Shale. (a) Organic and inorganic pores at 3 μm. The inorganic pores show the slit shape and the organic pores show the spherical shape. In (b), the image shows more slits and sheets of illite. Illite is the dominant matrix mineral and is more visible as sheets in (c,d), creating inorganic pores around the sample.

The caption:

Figure 5. Pore size distribution for Marcellus Shale. Case A and case B are calculated through DFT using our adsorption measurements. Case C is obtained from the literature.(30)

The caption:

Figure 6. Radon in situ concentration distribution for the three cases.

Figure 7. Wellhead radon concentration with multiple initial radon in situ concentrations. The wellhead radon concentration is directly related with the in situ concentration.

The caption:

Figure 8. Wellhead radon concentration to investigate heterogeneity impact. The near-fracture zone determines the early radon production.

The concern is that the radon will persist long enough to make it to consumers. I'm sure it does.

Transport time in surface facility from the wellhead to consumers could reduce the radon levels, but radon may still be dangerous to human health. For example, assuming it takes natural gas one week to be transported from the wellhead to users, radon will decay to approximately 25% of its original concentration, considering 3.8 days half-life. That is to say, the radon concentration that entered residential buildings would be in the range of 9−25 pCi/L (based on case A), which is far above the safe standard of 4 pCi/L. Therefore, radon monitoring and protection should be implemented during Marcellus Shale gas development.
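The week-of-transport figure in the excerpt is easy to check from the 3.8-day half-life; a one-line decay sketch:

```python
import math

T_HALF_RN222 = 3.8  # days

def surviving_fraction(days):
    """Fraction of radon-222 remaining after `days` of pipeline transport."""
    return math.exp(-math.log(2) * days / T_HALF_RN222)

frac = surviving_fraction(7.0)
print(f"after one week of transport, {frac:.0%} of the wellhead radon remains")
```

The calculation gives a bit under 30% after seven days, roughly the quarter the authors cite; the point stands either way, since the surviving concentration still exceeds the 4 pCi/L standard.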



Enjoy what's left of the evening.

On the Relationship Between Highly Organized Culture and Moralizing Gods.

The paper I'll discuss in this post is this one: Complex societies precede moralizing gods throughout world history (Savage et al, Nature, Published On Line March 20, 2019)

A few weeks back, I came across a commentary in my files that I never actually read, this one: Birth of the moralizing gods (Lizzie Wade, Science, Vol. 349, Issue 6251, pp. 918-922 (2015)).

I took a brief look through it - wondering a little bit about what had caused me to download it some years back - to find a discussion of the interesting thesis that in order for a highly organized culture to arise, it was necessary to have an organized religion in which a God (or Gods) punish or reward one for one's behavior, if in no other way than in a putative afterlife, where one is judged on the (defined) morality of one's earthly behavior. This idea of punishment and reward of course is an outline of what one might call "justice."

Religion in these times is a huge force, of course, and not always for good; one wonders about our fundamentalists in this country and their worship of Donald Trump, of all beasts, without contemplating whether, by appeal to their own Bible, this awful tiny-handed gnome might be the figure described in Revelation 13:1-18, a rather psychotic passage that reads like an acid trip, but one that warns against worshiping a perverted god who is not, in fact, a god.

That's their business, not mine, except inasmuch as they do ill and unethical things.

Dr. Wade's subtitle for her commentary was this: "A new theory aims to explain the success of world religions—but testing it remains a challenge."

The Nature paper linked at the outset claims to have tested this theory using certain kinds of scales, tests, and historical (often archaeological) evidence.

From the introductory text:

Supernatural agents that punish direct affronts to themselves (for example, failure to perform sacrifices or observe taboos) are commonly represented in global history, but rarely are such deities believed to punish moral violations in interactions between humans2. Recent millennia, however, have seen the rise and spread of several ‘prosocial religions’, which include either powerful ‘moralizing high gods’ (MHG; for example, the Abrahamic God) or more general ‘broad supernatural punishment’ (BSP) of moral transgressions (for example, karma in Buddhism)9,12,16,17,18. Such moralizing gods may have provided a crucial mechanism for overcoming the classic free-rider problem in large-scale societies11. The association between moralizing gods and complex societies has been supported by two forms of evidence: psychological experiments3,6,27,28 and cross-cultural comparative analyses7,11,14,15,16,17,18,20.

The contributions of theistic beliefs to cooperation, as well as the historical question of whether moralizing gods precede or follow the establishment of large-scale cooperation, have been much debated9,10,12,23,24. Three recent studies that explicitly model temporal causality have come to contrasting conclusions. One study, which applied phylogenetic comparative methods to infer historical changes in Austronesian religions, reported that moralizing gods (BSP but not MHG) preceded the evolution of complex societies16. The same conclusion was reached in an analysis of historical and archaeological data from Viking-age Scandinavia18. By contrast, another study of Eurasian empires has reported that moralizing gods followed—rather than preceded—the rise of complex, affluent societies20. However, all of these studies are restricted in geographical scope...

The authors claim to take a broader approach as described later in the paper:

To overcome these limitations, we used ‘Seshat: Global History Databank’29, a repository of standardized data on social structure, religion and other domains for hundreds of societies throughout world history. In contrast to other databases that attempt to model history using contemporary ethnographic data, Seshat directly samples over time as well as space. Seshat also includes estimates of expert disagreement and uncertainty, and uses more-detailed variables than many databases.

To test the moralizing gods hypothesis, we coded data on 55 variables from 414 polities (independent political units) that occupied 30 geographical regions from the beginning of the Neolithic period to the beginning of Industrial and/or colonial periods (Fig. 1 and Supplementary Data). We used a recently developed and validated measure of social complexity that condenses 51 social complexity variables (Extended Data Table 5) into a single principal component that captures three quarters of the observed variation, which we call ‘social complexity’8. The remaining four variables were selected to test the MHG and BSP subtypes of the moralizing gods hypothesis. The MHG variable was coded following the MHG variable used as standard in the literature on this topic11,14,15,16,17,30, which requires that a high god who created and/or governs the cosmos actively enforces human morality. Because the concept of morality is complex, multidimensional and in some respects culturally relative—and because not all moralizing gods are ‘high gods’—we also coded three different variables related to BSP that are specifically relevant to prosocial cooperation: reciprocity, fairness and in-group loyalty.
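The "single principal component" in the excerpt is ordinary PCA. The mechanics can be sketched on toy data; the numbers below are synthetic stand-ins of mine, not Seshat data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Seshat-style data: 200 "polities" scored on 6
# correlated complexity variables that all track one latent scale
# (population, hierarchy, infrastructure, ...) plus noise.
latent = rng.normal(size=(200, 1))
loadings = rng.uniform(0.7, 1.3, size=(1, 6))
X = latent @ loadings + 0.4 * rng.normal(size=(200, 6))

# First principal component via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

pc1_scores = Xc @ vt[0]  # one "social complexity" score per polity
print(f"PC1 explains {explained[0]:.0%} of the variance")
```

When the variables genuinely track one underlying scale, as the paper reports for its 51 social complexity variables, the first component soaks up most of the variance, which is what licenses collapsing them to a single "social complexity" number.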

The sampling regions are shown in a map:

The caption:

The area of each circle is proportional to social complexity of the earliest polity with moralizing gods to occupy the region or the latest precolonial polity for regions without precolonial moralizing gods. For regions with precolonial moralizing gods, the date of earliest evidence of such beliefs is displayed in thousands of years ago (ka), coloured by type of moralizing gods. The three transnational religious systems that represent the first appearance of moralizing gods in more than one region—Zoroastrianism, Abrahamic religions (Judaism, Islam and Christianity) and Buddhism—are coloured red, orange and blue, respectively, whereas other local religious systems with beliefs in MHG or BSP are coloured yellow and purple, respectively. See Extended Data Table 1 for further details.

A graphic describes their findings from this approach, addressing the "chicken and egg" argument about whether the concept of a moralizing god is necessary for the rise of complex societies, or whether complex societies develop these faiths in order to sustain themselves.

The caption:

a, Time series showing mean social complexity over time for 2,000 years before and after the appearance of moralizing gods. n = 12 regions with social complexity data for before and after moralizing gods. Social complexity has been scaled so that the society with the highest social complexity (Qing Dynasty, China, around AD 1900) has a value of 1 and the society with the lowest social complexity (Early Woodland, Illinois, USA, around 400 BC) has a value of 0. Vertical bands represent the period in which moralizing gods and doctrinal rituals first appeared. All errors represent 95% confidence intervals, with the exception of the vertical bar for moralizing gods, which represents the mean duration of the polity in which moralizing gods appeared (because times are normalized to the time of first evidence of moralizing gods, and there is thus no variance in this parameter). b, Histogram of the differences in rates of change in social complexity (SC) after minus before the appearance of moralizing gods. n = 200 time windows from the 12 regions. kyr, thousand years. The y axis represents the number of time windows out of 200. See Extended Data Fig. 1 for data for each of the 12 regions and Extended Data Fig. 2 for a version extending beyond 2,000 years before and after moralizing gods. The analyses in this figure treat the presence of either MHG or BSP as ‘moralizing gods’—see Extended Data Fig. 3 for an alternative analysis restricted only to the presence of MHG.

They write further:

In summary, although our analyses are consistent with previous studies that show an association between moralizing gods and complex societies7,11,14,15,16,17,18,30, we find that moralizing gods usually follow—rather than precede—the rise of social complexity. Notably, most societies that exceeded a certain social complexity threshold developed a conception of moralizing gods. Specifically, in 10 out of the 12 regions analysed, the transition to moralizing gods came within 100 years of exceeding a social complexity value of 0.6 (which we call a megasociety, as it corresponds roughly to a population in the order of one million; Extended Data Fig. 1). This megasociety threshold does not seem to correspond to the point at which societies develop writing, which might have suggested that moralizing gods were present earlier but were not preserved archaeologically. Although we cannot rule out this possibility, the fact that written records preceded the development of moralizing gods in 9 out of the 12 regions analysed (by an average period of 400 years; Supplementary Table 2)—combined with the fact that evidence for moralizing gods is lacking in the majority of non-literate societies2—suggests that such beliefs were not widespread before the invention of writing...

...Although our results do not support the view that moralizing gods were necessary for the rise of complex societies, they also do not support a leading alternative hypothesis that moralizing gods only emerged as a byproduct of a sudden increase in affluence during a first millennium BC ‘Axial Age’19,20,21,22. Instead, in three of our regions (Egypt, Mesopotamia and Anatolia), moralizing gods appeared before 1500 BC. We propose that the standardization of beliefs and practices via high-frequency repetition and enforcement by religious authorities enabled the unification of large populations for the first time, establishing common identities across states and empires25,26. Our data show that doctrinal rituals standardized by routinization (that is, those performed weekly or daily) or institutionalized policing (religions with multiple hierarchical levels) significantly predate moralizing gods, by an average of 1,100 years (t = 2.8, d.f. = 11, P = 0.018; Fig. 2a).

I'm not all that much into social science, but the role of religion in culture, for good and for bad, has always lingered in my consciousness, if only because religion was a very important part of my childhood, possibly the most important part of my childhood.

I personally know people who are highly ethical clearly because of their religion; and of course, we are all aware of - and I know several personal examples - people who excuse their lack of ethics by appeal to their religion.

I'm sure any sensible person would prefer the former, a type exemplified by both my mother and my stepmother, and by some people with whom I work closely; the latter is exemplified by my own brother, from whom I am estranged.

I'm not sure what all this may or may not mean, but in the time of awful people like Michael Pence and his ilk, the paper does inspire some interesting questions, as it is clear that under some circumstances, aggressive religious faith can serve to destabilize complex societies.

I wish you a pleasant Sunday.