

Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 25,693

Journal Archives

Solar ENERGY production in the United States in March 2018.

Often in this space we hear how "falling solar prices" are driving the coal industry out of business.

Because I find this rhetoric annoying and delusional to the point of toxicity, I decided to look up how much solar energy was produced in the United States in recent times. The data is reported at the EIA website and can be found here:

EIA US Electricity Browser (Accessed 06/09/18).

It shows the entire solar output of the entire United States, with the latest figure being for March of 2018.

In March of 2018, US solar, residential and utility scale combined, produced 7,513 thousand megawatt-hours (MWh), which translates to 0.027 exajoules of energy. World energy demand was, as of 2016 for the entire year, 576 exajoules, and will surely be higher this year.
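As a sanity check, here is a minimal Python sketch of the unit conversion, taking the post's figures at face value:

```python
# Convert EIA's March 2018 solar figure to exajoules.
# Assumes the reported total of 7,513 thousand MWh (utility-scale + residential).
GENERATION_THOUSAND_MWH = 7_513        # EIA, March 2018, all US solar
JOULES_PER_MWH = 3.6e9                 # 1 MWh = 3.6 billion joules

joules = GENERATION_THOUSAND_MWH * 1_000 * JOULES_PER_MWH
exajoules = joules / 1e18
print(f"{exajoules:.3f} EJ")           # ~0.027 EJ

WORLD_DEMAND_EJ_2016 = 576             # full-year world energy demand, 2016
print(f"{exajoules / WORLD_DEMAND_EJ_2016:.5%} of 2016 world demand")  # tiny fraction
```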

In terms of average continuous power, the entire solar industry in the entire United States produced roughly 10,100 MW of power - 7,513,000 MWh divided by the 744 hours in March.

The Navajo coal plant in Arizona has been killing Native Americans, with their active acquiescence and, indeed, enthusiasm, since 1974. Its power rating is 2,250 MW. The capacity utilization of coal plants is on the order of 80%, second only to cleaner and far more sustainable nuclear plants, which typically run at capacity utilizations greater than 90%. For generations, Native American miners have been laboring in the Kayenta coal mines to feed that plant. They're worried they're going to lose their jobs.

It follows from the above numbers - people buying into the so-called "renewable energy" scam are very, very, very, very, very poor at numbers - that if the Navajo coal plant operated at 80% capacity utilization, producing an average continuous power of 1,800 MW, then all of the solar plants in the entire United States combined produced less energy than six coal or gas plants the size of the Navajo plant.
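The plant-equivalence arithmetic can be sketched in a few lines of Python. The 2,250 MW rating and 80% capacity factor are taken from the text; treating March as exactly 744 hours is my assumption:

```python
# Average continuous power of all US solar in March 2018, and how many
# Navajo-sized coal plants (at 80% capacity factor) that equals.
MARCH_HOURS = 31 * 24                  # 744 hours in March
SOLAR_MWH = 7_513_000                  # total US solar generation, MWh
NAVAJO_RATING_MW = 2_250               # nameplate rating of the Navajo plant
CAPACITY_FACTOR = 0.80                 # typical for coal

avg_solar_mw = SOLAR_MWH / MARCH_HOURS              # ~10,100 MW average power
navajo_avg_mw = NAVAJO_RATING_MW * CAPACITY_FACTOR  # 1,800 MW
plants_equivalent = avg_solar_mw / navajo_avg_mw    # fewer than six plants
print(f"{avg_solar_mw:,.0f} MW, about {plants_equivalent:.1f} Navajo-sized plants")
```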

The United States has 844 plants capable of running on coal.

Actually, since solar energy requires the operation of redundant systems, it could be free and still not be "cheap," depending on the external and internal costs of the backup systems that are always required - since it is widely reported that an event called "night" happens once in every 24-hour period almost everywhere on the planet, with the possible exception of areas inside the polar circles. (Our habit is to ignore external costs - the costs paid in human and other species' flesh, and the cost of the destruction of the environment. This is precisely why solar energy is often described as "green.")

Boilers at coal plants cannot be thermally isolated, since boiling water by definition implies heat exchange. Thus if one shuts down a coal plant - or a gas plant of the increasingly prevalent combined cycle type - because the sun is shining, it follows that one must waste energy to bring the cooled water back up to its boiling point under pressure.

By the way, as I noted elsewhere, a great hullabaloo has been raised about the fates of the Dine (aka "Navajo") uranium miners. Many books have been written about them. By contrast, nobody gives a shit about black lung disease or the other health effects related to the jobs of the Native American miners at the Kayenta mines that supply the Navajo coal plant.

In the link immediately above, I analyzed the entire death toll of all Dine (Navajo) uranium miners from radiogenic cancers:

As I prepared this work, I took some time to wander around the stacks of the Firestone Library at Princeton University where, within a few minutes, without too much effort, I was able to assemble a small pile of books[50] on the terrible case of the Dine (Navajo) uranium miners who worked in the mid-20th century and suffered higher rates of lung cancer than the general population. The general theme of these books, if one leafs through them, is this: In the late 1940s, mysterious people - military syndics vaguely involved with secret US government activities - show up on the Dine (Navajo) Reservation in the "Four Corners" region of the United States, knowing that uranium is "dangerous" and/or "deadly," to convince naïve and uneducated Dine (Navajos) to dig the "dangerous ore" while concealing its true "deadly" nature. The uranium ends up killing many of the miners, thus furthering the long American history of genocide against the Native American peoples. There is a conspiratorial air to all of it; it begins, in these accounts, with the cold warrior American military drive to produce nuclear arms and then is enthusiastically taken up by the "evil" and "venal" conspirators who foist the "crime" of nuclear energy on an unsuspecting American public, this while killing even more innocent Native Americans.


I am an American. One of my side interests is a deep, if non-professional, reading of American History. Often we Americans present our history in triumphalist terms, but any serious and honest examination of our history reveals two imperishable stains on it that we cannot and should not deny. One, of course, is our long and violent history of officially endorsed racism, including 250 years of institutionalized human slavery. The related other stain is the stain of the open and official policy of genocide against Native Americans: There is no softer word than "genocide." Both episodes, each of which took place over a period on the scale of centuries, were policies with the open and "legal" sanction of the citizens of the United States and their "democratic" government, and were often justified by some of our most educated and influential leaders. I cannot reflect on my country without reflecting on these dire facts. I am not here to deny the role that genocide played in our history, and I note with some regret that the last people born within the borders of the United States to achieve full citizenship rights - this took place only in 1924 - were the descendants of the first human beings to walk here, our Native American brothers and sisters.

Still, one wonders, was hiring Dine/Navajo uranium miners yet another case of official deliberate racism as the pile of books in the Firestone library strongly implied?


A publication[51] in 2009 evaluated the causes of death among uranium miners on the Colorado Plateau and represented a follow-up to a study of the health of these miners - 4,137 of them, of whom 3,358 were "white" (Caucasian) and 779 of whom were "non-white." Of the 779 "non-white," we are told that 99% were "American Indians," i.e. Native Americans. We may also read that the median year of birth for these miners, white and Native American, was 1922, meaning that a miner born in the median year would have been 83 years old in 2005, the year through which the follow-up was conducted. (The oldest miner in the data set was born in 1913; the youngest was born in 1931.) Of the miners who were evaluated, 2,428 had died by the time the study was conducted, 826 of whom died after 1990, when the median subject would have been 68 years old.

Let’s ignore the “white” people; they are irrelevant in these accounts.

Of the Native American miners, 486 died before 1990, and 280 died in the period between 1991 and 2005, meaning that in 2005, only 13 survived. Of course, if none of the Native Americans had ever been in a mine of any kind, never mind uranium mines, this would not have rendered them immortal. (Let's be clear: no one writes pathos-inspiring books about the Native American miners in the Kayenta or Black Mesa coal mines, both of which were operated on Native American reservations in the same general area as the uranium mines.) Thirty-two of the Native American uranium miners died in car crashes, 8 were murdered, 8 committed suicide, and 10 died from things like falling into a hole, or collision with an "object." Fifty-four of the Native American uranium miners died from cancers that were not lung cancer. The "Standard Mortality Ratio," or SMR, for this number of cancer deaths that were not lung cancer was 0.85, with the 95% confidence interval extending from 0.64 to 1.11. The SMR is, of course, the ratio between the number of deaths observed in the study population (in this case Native American uranium miners) and the number of deaths that would have been expected in a control population. At an SMR of 0.85, the expected number of deaths is 54/0.85 ≈ 64, so the difference is 54 − 64 = −10: ten fewer Native American uranium miners died from "cancers other than lung cancer" than would have been expected in a population of that size. At the lower 95% confidence limit SMR, 0.64, the number would be about 30 fewer deaths from "cancers other than lung cancer," whereas at the higher limit SMR, 1.11, about 5 additional deaths would have been recorded, compared with the general population.

Lung cancer, of course, tells a very different story. Ninety-two Native American uranium miners died of lung cancer. Sixty-three of these died before 1990; twenty-nine died after 1990. The SMR for the former group was 3.18, and for the latter, 3.27. This means that the expected number of deaths was about 20 in the former case and about 9 in the latter. Thus the excess lung cancer deaths among Native American uranium miners were 92 − (20 + 9) = 63...
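The SMR arithmetic above can be checked with a short Python sketch; the observed counts and SMR values are those quoted from the study:

```python
# SMR arithmetic: expected deaths = observed / SMR, so the excess
# (or deficit) relative to a control population is observed - observed / SMR.
def excess_deaths(observed: int, smr: float) -> float:
    """Observed deaths minus the number expected in a control population."""
    return observed - observed / smr

# Non-lung-cancer deaths: point estimate and the 95% confidence bounds.
for smr in (0.85, 0.64, 1.11):
    print(f"SMR {smr}: {excess_deaths(54, smr):+.1f} vs. expected")

# Lung cancer deaths, split before/after 1990 as in the study.
excess_lung = excess_deaths(63, 3.18) + excess_deaths(29, 3.27)
print(f"excess lung cancer deaths: {excess_lung:.0f}")   # ~63
```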

...On the other hand, roughly 7 million people will die this year from air pollution.[52] Of these, about 3.3 million will die from “ambient particulate air pollution” – chiefly resulting from the combustion of dangerous coal and dangerous petroleum, although some will come from the combustion of “renewable” biofuels. Every single person living on the face of this planet and, in fact, practically every organism on this planet is continuously exposed to dangerous fossil fuel waste, and every person on this planet and practically every organism on this planet contains dangerous fossil fuel waste...

...Seen in this purely clinical way, this means that all of the Native American uranium miners dying from all cancers - 92 lung cancer deaths and 54 deaths from other cancers, measured over three or four decades - represent about 23 minutes of deaths taking place continuously, without let up, from dangerous fossil fuel pollution.
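The "23 minutes" equivalence can be sketched from the 3.3 million annual particulate-pollution deaths cited above; the per-minute rate is my derivation:

```python
# Express the uranium miners' total cancer deaths as an equivalent duration
# of deaths from ambient particulate air pollution.
PARTICULATE_DEATHS_PER_YEAR = 3.3e6    # cited figure for ambient particulates
MINUTES_PER_YEAR = 365.25 * 24 * 60

deaths_per_minute = PARTICULATE_DEATHS_PER_YEAR / MINUTES_PER_YEAR  # ~6.3/min
miner_cancer_deaths = 92 + 54          # lung cancer + other cancers
minutes = miner_cancer_deaths / deaths_per_minute
print(f"{minutes:.0f} minutes")        # ~23 minutes
```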

The modern equivalents of coal miners are the miners for the material-intensive so-called "renewable energy" industry. This mining is a toxicological nightmare, but we couldn't care less. Most of the miners who will suffer the health effects of this latest affectation are overseas, particularly in China. Most of the people who will die from recycling this garbage are also overseas, living in the poorest places on earth.

We couldn't care less.

I wish you a pleasant Saturday afternoon.

Electricity Carbon Intensity Viewed in Terms of Export and Import.

The paper I will discuss in this brief post comes from the most recent issue of Environmental Science and Technology, a scientific journal that is a publication of the American Chemical Society.

The paper is here:

Virtual CO2 Emission Flows in the Global Electricity Trade Network (Xu et al, Environ. Sci. Technol., 2018, 52 (11), pp 6666–6675)

Some introductory text:

Electric power generation contributes significantly to global greenhouse gas (GHG) emissions. In 2014, over 40% of the global carbon dioxide (CO2) emissions were from the electric power sector.1 Mitigation initiatives, strategies, and policies related to the power sector have taken place at various scales, including the national, regional, organizational, and even individual scales. Underpinning such effects is the accurate and fair accounting for GHG emissions, for emissions from both electricity generation and consumption. Consumption-based accounting is particularly relevant to initiatives and policies at regional, organizational, and individual levels. Converting grid electricity consumption (or purchased electricity) into emissions from power generation requires the measurement and use of the emission factor, which is defined as the emission generated due to unitary electricity consumption.

Current practices mostly use production-based emission factors for estimating emissions driven by electricity consumption...

...However, as electricity is purchased and consumed from interconnected grids, production-based emission factors lead to inaccurate measurements of emissions due to electricity consumption. In particular, power grids are connected and interdependent at the regional and even the global level. Indeed, global electricity trade has been steadily increasing in past decades. For example, electricity exports and imports of OECD countries have been growing by 4.5% and 4.3% annually from 1974, and reached 511 TWh and 510 TWh in 2015, respectively.5 Electricity trade brings about economic benefit, since it opens the opportunities to exploit region variations in natural resources, climate and load timing, reducing the surplus generation capacity needed.5,6 However, similar to the fact that globalized supply chains distance production and consumption and render environmental responsibilities more “invisible”,7,8 cross-border electricity trade furthers the separation between electricity generation and consumption...

In other words, if you live in a prominent middle European nation that makes a big deal about its Hoary Bat and raptor grinding wind turbines, but that imports lots of electricity - when the wind isn't blowing - from the neighboring country of Poland, where almost all power is generated by burning coal, that counts.

One would think that would be obvious, but somehow it's not.

The graphics in this paper can be a little challenging to look at, but here are two that are straightforward:

Figure 1. Global electricity trade network in 2014. Curves represent direct electricity transfers, which flow clockwise from origins to destinations. Curve widths indicate the amount of electricity transferred, some of which are labeled in TWh. Node size represents electricity generation of countries/regions. Countries/regions are labeled with their ISO codes. The insets of (A) and (B) zoom in on the African community and the European part of the Eurasian community, respectively. Map images are from the GADM database of Global Administrative Areas.30

The caption:

Figure 2. (A) Emission factors of electricity consumption of countries/regions. Shades of color represent the values of emission factors (ef_i^(C,network) for country/region i), while sizes of circles represent total electricity consumption. White areas are where data are incomplete. (B) Differences between various accounting methods for country/region-level emission factors of electricity consumption. Blue areas indicate that the emission factor for electricity consumption (ef_i^(C,network)) is smaller than that for generation (ef_i^(G)); red areas indicate the opposite. For hashed/meshed areas, enlarging the system boundary of accounting for electricity trade significantly changes estimates of emission factors, either downward (in hashed areas, where ef_i^(C,network)/ef_i^(C,direct adjust) < 95%) or upward (in meshed areas, where ef_i^(C,network)/ef_i^(C,direct adjust) > 105%). Map images are from the GADM database of Global Administrative Areas.30

Some more text:

When a country/region is a net virtual CO2 importer (or exporter) in the electricity trade network, the CO2 emission responsibility for its electricity consumption is greater (or less) than that for its electricity generation. In Europe, countries with the largest amounts of net virtual CO2 imports through electricity trade are Italy (11 Mt), Austria (7.7 Mt), Switzerland (5.5 Mt), and Hungary (5.1 Mt); and the most important net virtual CO2 exporters are Germany (32.4 Mt), Czech Republic (8.5 Mt) and Ukraine (5.4 Mt). In Africa, Mozambique and Botswana have the largest net virtual CO2 imports (5.5 Mt and 4.9 Mt, respectively), and South Africa is the most important net CO2 exporter (12.4 Mt).

While the paper itself is not open access, the supplementary information is. It is here: Supporting Info Environ. Sci. Technol., 2018, 52 (11), pp 6666–6675

One may refer to table S4 in the supporting info to see the carbon intensity of almost every country in the world, based on production and then adjusted for consumption of imported electricity.

France, for example - a country largely dependent on nuclear energy, although there is an idiotic quest to destroy that happy circumstance along with the entire avian ecosystem, as my son, who is spending the summer in France, reports - has a carbon intensity based on production of 41.0 grams CO2/kWh, adjusted for trade and consumption to 44.2 grams CO2/kWh. The offshore oil and gas drilling hellhole of Denmark, internationally worshiped for its hatred of hoary bats and seabirds, has a production carbon intensity of 255.1 grams CO2/kWh - about 620% of France's - but "only" about 514% of France's when adjusted for trade, at 227.2 grams CO2/kWh compared, again, to France's 44.2.

The raptor and bat hating country of Germany, which can't grind up its avian ecosystem fast enough to satisfy world cheering, has an electricity carbon intensity based on production of 474.0 grams CO2/kWh - in the "percent talk" so favored by people who love grinding up hoary bats and raptors, 1156% of France's - but when adjusted for trade is "only" 1026% of France's, at 453.6 grams CO2/kWh. This figure is very close to the standard figure for a dangerous natural gas fueled power plant, although Germany still produces lots of coal-based electricity along with its bird and bat grinding based electricity.
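The "percent talk" ratios above can be recomputed directly from the table S4 values as quoted; a minimal sketch:

```python
# Carbon intensities from table S4 as quoted in the text, in g CO2/kWh,
# expressed as ratios to France ("percent talk").
intensities = {                 # (production, consumption-adjusted)
    "France":  (41.0, 44.2),
    "Denmark": (255.1, 227.2),
    "Germany": (474.0, 453.6),
}
fr_prod, fr_cons = intensities["France"]
for country, (prod, cons) in intensities.items():
    print(f"{country}: {prod / fr_prod:.0%} of France (production), "
          f"{cons / fr_cons:.0%} (consumption-adjusted)")
```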

This year the dangerous fossil fuel waste carbon dioxide concentrations peaked in the planetary atmosphere at close to 412 ppm. No one now living will ever see concentrations below 400 ppm again. Twenty years ago they were around 370 ppm, and 20 years before that they were at 338 ppm.
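The three figures above imply that the growth rate is itself accelerating; a quick sketch of the average rates they imply:

```python
# Average CO2 growth rate per 20-year interval, from the peak concentrations
# quoted in the text (approximate values).
readings = {1978: 338, 1998: 370, 2018: 412}   # year -> peak ppm
years = sorted(readings)
rates = [(readings[b] - readings[a]) / (b - a) for a, b in zip(years, years[1:])]
for (a, b), r in zip(zip(years, years[1:]), rates):
    print(f"{a}-{b}: {r:.1f} ppm/year")        # ~1.6, then ~2.1 ppm/year
```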

We're doing great. We really know what we're doing. Even if we hate bats and raptors and every other damned creature that flies, we're practically breaking our arms patting ourselves on our backs for being "green."

Have a great weekend.

The Epigenetics of Fish in Warming Water.

Many years ago, on a website where I was ultimately banned for telling the truth, I made fun of the "science" of Joseph Stalin:

The Most Interesting Chemistry of Lenin's Dead Body.

In this post about the "real stable genius" Stalin, and his relationship to Ilya Zbarsky and Zbarsky's father, I referred to Stalin's belief in "Lysenkoism," which rejected natural selection and genetics and substituted a nonsense theory of the heritability of acquired characteristics, a system of beliefs harking back to the ideas of Lamarck.

Stalin's faith-based belief in the work of Lysenko set Soviet biological science back decades, resulting in, among other things, the collapse of Soviet grain harvests even years after Stalin had kicked off, much to the improvement of the world in general.

While it is true that natural selection is safe from political stupidity in most places - the rather communist style Republican Party notwithstanding - it is also true that a case can be made for the heritable transmission of environmental effects even without changes to DNA sequences.

This area, which has been burgeoning only in the last 20 years or so, is the fascinating subject of epigenetics.

A very nice lecture on this topic is available online: Professor Shirley Tilghman: The Wild and Wacky World of Epigenetics. I attended this lecture when it was given; it was really, really informative, and I recommend it highly.

Epigenetics involves the chemical transformation of nucleobases, often in the form of methylation of a cytosine, which is controlled by the proteins wrapping DNA, known as histones; these in turn are controlled by post-translational modifications of the protein sequence - for example, methylation or acetylation of the lysine residues that are prominent in histone sequences.

Epigenetics, which can also involve the bonding of far more complex molecules, including many pollutants, is thought to contribute to many diseases, notably cancer, as well as to somatic mutations. (One of my sons has a somatic mutation that resulted in a birth defect that proved to be minor, though it need not have been. It involved, apparently, the deamination of a guanine moiety during gestation.)

In the most recent issue of Nature Climate Change epigenetic changes to fish are reported.

Some of these modifications are, despite the deserved rejection of Lysenko/Lamarckian ideas or ideology, heritable.

The paper is here: The epigenetic landscape of transgenerational acclimation to ocean warming (Munday et al Nature Climate Change Volume 8, Pages 504–509 (2018))

The introduction:

Epigenetic inheritance is a potential mechanism by which the environment in one generation can influence the performance of future generations1. Rapid climate change threatens the survival of many organisms; however, recent studies show that some species can adjust to climate-related stress when both parents and their offspring experience the same environmental change2,3. Whether such transgenerational acclimation could have an epigenetic basis is unknown. Here, by sequencing the liver genome, methylomes and transcriptomes of the coral reef fish, Acanthochromis polyacanthus, exposed to current day (+ 0 °C) or future ocean temperatures (+ 3 °C) for one generation, two generations and incrementally across generations, we identified 2,467 differentially methylated regions (DMRs) and 1,870 associated genes that respond to higher temperatures within and between generations. Of these genes, 193 were significantly correlated to the transgenerationally acclimating phenotypic trait, aerobic scope, with functions in insulin response, energy homeostasis, mitochondrial activity, oxygen consumption and angiogenesis. These genes may therefore play a key role in restoring performance across generations in fish exposed to increased temperatures associated with climate change...

Some elaboration:

Recently, we have shown that the common reef fish, Acanthochromis polyacanthus, can fully acclimate its scope for oxygen consumption (net aerobic scope) when both parents and their offspring experience the same increase in water temperature2,9. They do this by changing their transcriptional regulation of metabolism, cytoprotection, immunity, growth and cellular organization. Furthermore, these fish that are transgenerationally exposed to 3 °C warmer water (transgenerational treatment) differentially express a similar suite of genes compared with fish that are exposed to elevated temperatures from early development for just one generation (developmental treatment), albeit with more genes and higher magnitude changes in expression9. Reproductive capacity, however, was impaired in developmental and transgenerational fish, and only improved when temperature was increased incrementally (step-wise treatment) across two generations10. Here, we investigate if genomic DNA methylation could be implicated in the observed transgenerational plasticity of A. polyacanthus in an ocean warming scenario.

Methylation at the 5 position of a cytosine often initiates DNA repair; however, under some conditions, the methylated cytosine can deaminate, converting it into thymine, in which case a permanent mutation results.

I don't have a lot of time tonight, so I'll just cut to the pictures, which are often enough to convey an understanding of a paper if one hasn't much time.

The experimental set up:

The caption:

Fig. 1 | Experimental design and summary of the DMRs. a, Design of the fish rearing experiment and summary statistics of the genome, methylomes and transcriptomes. b, The number of DMRs between treatments for three methylation contexts. Hyper and hypo indicate higher and lower methylation, respectively, in the treatment in the left of the Comparison column compared to the right. The Unique column represents the unique number of DMRs in each methylation context. c, DMR distribution across genomic elements for CpG and CHH contexts. The CHG context is not shown due to the low number of DMRs.

CpG is a cytosine-phosphate-guanine site. "CHH" and "CHG" refer to triplets in which H can be adenine, thymine, or another cytosine.

Some plots of the methylation patterns of fish in different groups.

The caption:

Fig. 2 | Differential methylation patterns. a,b, Heatmap of DMRs for CpG (a) and CHH (b) contexts. c,d, MDS plot of DMRs for CpG (c) and CHH (d) contexts. Each coloured circle represents one fish sample and treatment groups are denoted by different colours. C, D, S and T represent control, developmental, step and transgenerational treatments, respectively. Each ellipse represents a 95% confidence region from 1,000 bootstrapping of DMRs from each sample. Non-overlapping ellipses implies statistically significant differences among samples.

A heatmap of the genes involved in respiration and their differential methylation:

The caption:

Fig. 3 | Heatmaps of differentially methylated and net aerobic scope-correlated genes. a,b, Negatively (a) and positively (b) correlated differentially methylated genes (adjusted p < 0.05; > 25% difference in methylation between treatments) from A. polyacanthus, comparing control, developmental, transgenerational and step treatments. Genes described in the text are marked in bold. The colour scale indicates the per cent difference in methylation between two treatments.

Some more results:

The caption:

Fig. 4 | DNA methylation patterns for thermal acclimation. a–f, Density of methylcytosines from the CpG context are shown for selected genes: trpm2 (a), pctp (b), cidea (c), gab1 (d), igf2 (e) and ddx6 (f). Red rectangles represent differentially methylated regions. Genomic locations are indicated below the density plot. Gene models are shown for the corresponding coordinates.

The paper's conclusion:

In conclusion, our study indicates that the epigenome is altered following exposure to increased temperatures via DNA methylation of specific loci. Although our results are consistent with transgenerational epigenetic effects, we cannot exclude a role for developmental epigenetic effects during the gamete and embryonic stage because eggs experienced the parental conditions until hatching. To conclusively demonstrate transgenerational epigenetic inheritance, future experiments should test if differential methylation and gene expression is retained when fish exposed transgenerationally to high temperature are returned to ambient control conditions in both parental and offspring generations. Nevertheless, we identified 193 DMGs that correlate to aerobic performance, of which many play key roles in metabolic homeostasis, insulin sensitivity and improved oxygen delivery, thus suggesting that these are the core genes associated with physical acclimation to heat stress across generations. Our study shows that exposure to higher temperatures associated with climate change causes genome-wide changes in DNA methylation, demonstrating that epigenetic regulation is possible in a coral reef fish facing a warming ocean, and that DNA methylation could play a role in transgenerational acclimation.

In my position as an advocate of nuclear energy, I often hear all kinds of idiotic remarks about mutations. Of course, radiation can and does cause mutations, but so do many other things, including, apparently, doing exactly what we're doing about climate change, which is, um, nothing at all.

Have a pleasant day tomorrow.

I'm having a hard time because my little baby got his first summer job...

...and it's in Europe.

And the thing is that, sniff, he isn't a little baby any more, he's a man.

He'll be gone until August working under an NSF grant in France.

I'm proud of him of course, that he was selected for this job, but I miss him terribly already and he's only been gone for 24 hours.

It seems like only last evening he was eight years old and I was explaining the Fibonacci numbers to him, and now...and now...he's already over my head in so many places.

He called me this morning from Madrid, deliciously tolerant of my hover parenting but also being sure to let me know that he's a man, not a child.

He's a man...

Life moves so fast.

Climate reddening increases the chance of critical transitions.

In electrical engineering - and many other disciplines - "white noise" refers to random fluctuations in an output that can obscure or bury "real" signals. The most familiar form of white noise is static, and the elevation of signal over static is a very important feature to - for one example - audiophiles, who might pay many thousands of dollars to hear the minor scratch of a bow against a string in a recording of Benjamin Britten's "War Requiem".

In physics, one of the most famous examples of "white noise" is Brownian motion, the effect by which small particles observed under a microscope in solution seem to vibrate and move in a way that cannot be predicted. Its explanation by Albert Einstein in 1905 proved the reality of atomic theory, which at the time still had prominent doubters.

The "signal to noise ratio" is an important issue in many branches of science, and is, in fact, an important feature of many important international regulations wherever scientific expertise applies, for example, in drug development, aviation, and many areas of engineering.

It is common to think of "white noise" as having no meaning other than providing difficulties for instrument makers, but this is not exactly true; random fluctuations can generate very real and important effects on a macroscopic scale, a phenomenon related to the "Butterfly Effect" of chaos theory.

A famous electrical engineering paper proposed an example and some mathematics of how white noise can generate real signals:

A statistical model of flicker noise (Barnes and Allan, Proceedings of the IEEE, Vol. 54, Issue 2, Feb. 1966, pp. 176-178)

The effect they described has become known as "red noise": seemingly random effects generate low-frequency ("red," as opposed to "white") signals that persist for a long time, and can in fact result in permanent changes to the state of a system.
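The white-versus-red distinction can be illustrated with a minimal Python sketch. I use a simple AR(1) recursion - my choice of model for illustration, not the flicker-noise construction of the Barnes and Allan paper - which turns uncorrelated white noise into persistent, autocorrelated "red" noise:

```python
# White noise is uncorrelated from sample to sample; an AR(1) filter
# gives each new value "memory" of the last, producing red noise whose
# excursions persist at low frequencies.
import random

random.seed(42)
white = [random.gauss(0, 1) for _ in range(10_000)]

phi = 0.95                      # persistence: 0 would be pure white noise
red = [0.0]
for w in white:
    red.append(phi * red[-1] + w)   # each value remembers the previous one
red = red[1:]

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

print(f"white lag-1 autocorrelation: {lag1_autocorr(white):+.2f}")  # ~0
print(f"red   lag-1 autocorrelation: {lag1_autocorr(red):+.2f}")    # ~0.95
```

The persistence of the red series is exactly the property the climate paper discussed below is concerned with: autocorrelated forcing keeps a system near a tipping point for longer than uncorrelated noise would.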

The paper from which the title of this post is taken has just been published in the journal Nature Climate Change. It shows how the climate can be rapidly and permanently placed into an alternate and different state owing to seemingly random fluctuations in the weather (as distinct from climate, climate being the integration of individual weather events).

The paper is here: Nature Climate Change Volume 8, pages 478–484 (2018).

Some excerpts from the paper:

Although many systems respond gradually to climate change, some systems may have tipping points where a small change can trigger a large response that is not easily reversible1,2. Such critical transitions have been studied mostly in simple models3,4, where climate variability is either left out or modelled as white noise5,6, that is, noise that is uncorrelated in time. However, such uncorrelated noise is a mathematical idealization. In reality, the climate system involves slow processes, causing the power spectrum to have pronounced low frequencies (a red spectrum). As a result, climatic time series are often autocorrelated on timescales that correspond to the diurnal to decadal timescales of change that are also characteristic for key variables of ecosystems and society7. For instance, the state of the atmosphere is highly correlated from one day to the next, anomalies in surface ocean temperatures can persist for several months8,9 and there are modes of decadal variability10,11. Importantly, the autocorrelation in climatic variables may change over time12. For instance, the Pacific Decadal Oscillation and North Pacific sea surface temperatures (SSTs) have become more autocorrelated in the period from 1900 to 201513,14, and large changes in climate variability are to be expected in the Arctic where sea ice loss leads to larger persistence and smaller variance in temperature variability15,16...

...To explore how changes in climate variability may affect systems with tipping points, we first ask how the size and duration of single environmental perturbations affect these systems. Next, we examine systematically how the autocorrelation and variance of climate variability separately affect the likelihood of a critical transition and how the autocorrelation of climate variability affects the duration of extreme events using an established ecological model as an example. Subsequently, we discuss evidence from five systems in which the duration of anomalously warm or dry events has been shown to elevate the chance of critical transitions: forests, coral reefs, the poverty trap, human conflict and the West Antarctic Ice Sheet (WAIS).

Note that not all of these five transitions are physical in nature; specifically, two are social effects.
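The "red noise" idea itself is easy to demonstrate numerically. Below is a minimal sketch of my own (not from the paper) comparing white noise with an AR(1) "red" process; the persistence parameter phi is an arbitrary illustrative choice:

```python
import random

random.seed(0)

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: how strongly each value predicts the next."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

N = 20000
phi = 0.9  # persistence ("redness"); 0 would reduce to white noise

# White noise: uncorrelated draws.
white = [random.gauss(0, 1) for _ in range(N)]

# AR(1) red noise, scaled to the same stationary variance as the white series:
red = [0.0]
for _ in range(N - 1):
    red.append(phi * red[-1] + (1 - phi ** 2) ** 0.5 * random.gauss(0, 1))

print(round(lag1_autocorr(white), 2))  # near 0: no memory
print(round(lag1_autocorr(red), 2))    # near phi: anomalies persist
```

Although both series have the same variance, the red one wanders away from zero in long excursions, and that kind of persistent anomaly is exactly what the authors argue can push a bistable system over its tipping point.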

The authors continue:

Response to single perturbations
As a first step to see how changes in the dynamic regime of climatic drivers may affect the likelihood of critical transitions, consider the effect of an idealized single perturbation such as a temporary change in environmental conditions (Fig. 1, red arrow). Because it takes time for the system to respond to a change in conditions (Fig. 1, black arrows), the moment at which the conditions are reversed to the original (Fig. 1, green arrows) determines the fate of the system. If the conditions recover quickly, the system reverts to the original state (Fig. 1, trajectory 1→ 2→ 3→ 4→ 1). However, recovery of the conditions at a later moment can cause the system to settle in the alternative equilibrium (Fig. 1, trajectory 1→ 2→ 3’→ 4’→ 5).
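The trajectory logic quoted above can be sketched with a toy bistable model (my own minimal illustration, not the authors' actual model): dx/dt = x - x^3 has stable states near x = +1 and x = -1, and whether a fixed-size perturbation tips the system depends only on how long the perturbation lasts.

```python
def simulate(pulse_duration, forcing=-2.0, dt=0.01, t_end=20.0):
    """Euler-integrate dx/dt = x - x**3 + f(t), starting in the x = +1 state.
    A constant negative forcing pulse is applied for pulse_duration, then
    removed; the system then relaxes to whichever basin it finds itself in."""
    x, t = 1.0, 0.0
    while t < t_end:
        f = forcing if t < pulse_duration else 0.0
        x += dt * (x - x ** 3 + f)
        t += dt
    return x

print(round(simulate(0.3), 2))  # brief pulse: system recovers toward +1
print(round(simulate(1.0), 2))  # same-sized but longer pulse: tips toward -1
```

The short pulse ends while the state is still above the unstable boundary at x = 0, so the system relaxes back; the longer pulse carries it past the boundary and it settles in the alternative equilibrium, mirroring the 1→2→3→4→1 versus 1→2→3'→4'→5 trajectories in the paper's Fig. 1.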

One example, one close to my heart, since my liberalism involves not the worship of Elon Musk's stupid car for billionaires and millionaires, but concern for those who lack basic resources:

Poverty traps.

People whose livelihoods depend directly on natural resources and ecosystem services are particularly vulnerable to climate change and changes in weather variables61. For example, the herds of pastoralists in East Africa graze extensively and the growth of the forage mainly depends on rainfall. Consequently, drought can lead to substantial herd losses. The effect of drought on the livelihood of people depends mostly on its duration — in 1981 the seasonal rainfall totals in Brazil were slightly above average, but longer periods of drought resulted in yield losses that year62. Interhousehold differences in the capacity to deal with these losses can lead to variability in income and wealth63. The poverty trap is a critical minimal asset threshold, below which families are unable to build up stocks of assets over time64. When households are close to such a situation, losses as a result of weather variability can have permanent adverse consequences as they invoke a transition into a poverty trap. An example of the differential effects of a prolonged weather event on poverty is the three-year drought in Ethiopia in the late 1990s; wealthy households were able to rebuild their assets, while the adverse effects for the low-income households lasted longer65...

Some closing remarks from the paper:


The transitions we have reviewed illustrate the potentially wide-ranging implications of our theoretical prediction that a reddening of climate fluctuations may promote the likelihood of inducing self-sustained shifts into alternative stable states of climate-sensitive systems. Clearly, there is a huge gap between our limited understanding of such real-world cases and the simple models we have analysed. Our initial theoretical analysis captures the essence of how the duration of an event may affect the likelihood of invoking a shift to an alternative attractor. Subsequently, our red-noise simulations illustrate that this basic conclusion may indeed apply more broadly to include the effect of temporal correlation in regimes of permanent fluctuations. Clearly, we merely scratched the surface of the question of how the timescale and magnitude of fluctuations may affect the scenarios we outlined. In any of the discussed systems, reality is much more complex than the schematic representation in the deterministic and stochastic parts of our models...

An interesting paper, well worth a read.

We're playing with fire, and in saying this, I'm not merely referring to the rapidly increasing reliance on oxidative combustion to power the world, no matter what you may have read on those self declared "green" websites where they hype the failure of so called "renewable energy" as a grand success.

I trust you will have a pleasant Sunday.

Committed Carbon Releases for the $7.2 Trillion in New Power Plants To Be Built in the Next 10...


Many people seem to believe that we're doing something about climate change, the claim usually involving that offshore oil and gas drilling hellhole Denmark, or that coal and gas dependent hellhole Germany, or horseshit about California, where they plan to shut their single largest climate change gas free piece of infrastructure, the Diablo Canyon nuclear plant.

It's all nonsense. We are doing nothing but marketing fossil fuels with pretty pictures of (destructive) wind turbines and similar garbage, like the future toxicological nightmare represented by solar cells.

As I noted recently in this space, despite our delusion, the concentration of the dangerous fossil fuel waste carbon dioxide in the planetary atmosphere is just shy of 412 ppm, after being just shy of 388 ppm ten years ago.

The rate of accumulation of this dangerous fossil fuel waste is accelerating, not decelerating.

The following is from an open-access paper in the primary scientific literature, which I quote here for convenience.

The power sector is expected to invest about 7.2 trillion USD in power plants and transmission and distribution grids over the next decade (IEA 2016). The average expected lifetime of generators can range from 20–25 years for solar PV up to 70 years and longer for hydroelectric generators (EIA 2011, IEA 2016). Coal-, gas and oil-powered generators have a typical lifetime of between 35–40 years (Davis and Socolow 2014). These lifetimes probably represent only economic rather than technical lifetimes, however, since many power generators operate long beyond their expected end of life. The relatively long payback periods for such assets expose investments to the risk of future changes in economic and regulatory conditions. Changes in input prices, the competitive landscape, or regulation can have large impacts on the profitability and economic viability of such assets, before they have a chance to pay their investment back (Caldecott et al 2017). These long lifetimes mean that any investment made today in carbon dioxide (CO2) emitting infrastructure will have a considerable effect on the ability to achieve required CO2 emission reductions in the future—even if these desired reductions are many years away (Davis et al 2010, Rozenberg et al 2015).

The full paper, again open access, is here: Committed emissions from existing and planned power plants and asset stranding required to meet the Paris Agreement (Alexander Pfeiffer, Cameron Hepburn, Adrien Vogt-Schilb and Ben Caldecott, Environmental Research Letters, Volume 13, Number 5)

There's no reason for me to quote any more of it; if you're interested, you can read it yourself.

In 2018 dollars, the Oyster Creek Nuclear Reactor, near where I live in New Jersey and due to shut this year, 10 years before its license expires, cost $576 million to build (around $90 million in 1969 dollars). Ground was broken in 1965 and the plant came on line in 1969. Its rated power is 619 MW, and it was built by engineers who had very poor access to computing power compared to what we have today; much of its design certainly involved slide rules.

Its capacity utilization in 2017 was 102%, meaning that it produced 19.9 petajoules of energy last year, or 37.7% as much energy as the 9,452 commissioned and decommissioned wind turbines in that offshore oil and gas drilling hellhole Denmark produced, and it did so in a single small building.

If we were as smart and as competent as the nuclear engineers who designed the Oyster Creek Nuclear Reactor using technology from the 1950's and early 1960's, for 7.2 trillion dollars in the next ten years, we could build roughly 12,500 Oyster Creek Nuclear Reactors.

If they performed as the existing Oyster Creek Nuclear Reactor did in 2017, they would produce about 249 exajoules of energy, as compared to the 576 exajoules humanity was consuming annually as of 2016.

IEA 2017 World Energy Outlook, Table 2.2 page 79 (I have converted MTOE in the original table to the SI unit exajoules in this text.)
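The back-of-envelope arithmetic above can be checked directly, taking the post's own figures as inputs (the rounding differs slightly from the numbers in the text):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.156e7 s

# Oyster Creek, per the figures quoted above:
rated_w = 619e6          # 619 MW rated power
capacity_factor = 1.02   # 102% capacity utilization in 2017
energy_pj = rated_w * capacity_factor * SECONDS_PER_YEAR / 1e15
print(round(energy_pj, 1))  # about 19.9 PJ for the year

# The thought experiment: spend the projected decade of power-sector
# investment on plants like this one.
budget = 7.2e12      # USD (IEA projection cited above)
unit_cost = 576e6    # USD, Oyster Creek's cost in 2018 dollars
n_plants = budget / unit_cost
total_ej = n_plants * energy_pj / 1000
print(round(n_plants))   # about 12,500 plants
print(round(total_ej))   # about 249 EJ, against 576 EJ of world demand
```

Whatever the exact rounding, the conclusion is the same: for the money already committed to power infrastructure this decade, a fleet of small 1960s-vintage reactors would cover a large fraction of total world energy demand, not just electricity.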

In 2016, 81% of the 576 exajoules produced by humanity was provided by dangerous fossil fuels, up from the 80% of 420 exajoules humanity was consuming in 2000. Of course, avoiding the "percent talk" so popular among advocates of so called "renewable energy", the situation is even worse. In absolute terms, since 2000, world consumption of dangerous fossil fuels has increased (from the already unacceptable 337 exajoules) by an additional 130 exajoules.
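Translating the "percent talk" into absolute terms is a one-line calculation, using the percentages and totals from the IEA table cited above:

```python
fossil_2016 = 0.81 * 576  # EJ: 81% of 2016 world energy consumption
fossil_2000 = 0.80 * 420  # EJ: 80% of 2000 world energy consumption
print(round(fossil_2016))                # about 467 EJ
print(round(fossil_2000))                # about 336 EJ (the post's figure: 337)
print(round(fossil_2016 - fossil_2000))  # roughly 131 EJ of absolute growth
```

While the fossil fuel *share* barely moved (80% to 81%), the absolute quantity grew by roughly 130 exajoules, which is several times the entire output of so called "renewable energy."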

The Oyster Creek Nuclear Reactor saved lives that would have otherwise been lost to air pollution, but nobody gives a shit. It will shut in October, and people will die as a result.

My son is studying to be an engineer right now, and I know from my vicarious enjoyment of his work that he has far better tools than 1950's and 1960's engineers had. But somehow, though it is difficult to explain exactly, and despite having historically proved to be very affordable (my electricity rates here were very affordable until some assholes started hanging solar cells off telephone poles around here), nuclear energy has in recent decades been declared "too expensive." (Tomorrow my son will be flying to France to do research. France has electricity rates that are less than half those of Denmark, although there is a movement afoot to destroy French power generation infrastructure to make it as stupid as Denmark's.)

Go figure.

Go figure.

I imagine it is "cheaper" to completely destroy the planetary atmosphere by upping the concentration of the dangerous fossil fuel waste by more than 20 ppm every ten years.

We live in an interesting, if toxic and most likely doomed, world, the dooming being an outgrowth of cancerous stupidity. Too bad the cancer of ignorance apparently isn't one of the curable cancers.

Have a nice weekend.

It looks like we topped out at 411.86 ppm this year.

Every year the carbon dioxide concentration in the planetary atmosphere peaks in May and generally falls until September. It may be thought of as a sinusoidal wave superimposed on a rising, almost linear baseline, as the Mauna Loa data page shows.

Actually the baseline is not strictly linear; the rate of increase (the first derivative), irrespective of year to year fluctuations, has itself been increasing (a positive second derivative) since records began being kept in 1958.

The week ending May 13, 2018 came in at 411.85 ppm, compared with 388.88 ppm just 10 years ago, despite a wasted two trillion dollar, oxymoronically defined "investment" in so called "renewable energy" over the last ten years.

This year, compared to the record setting years of 2015 and 2016, is relatively mild, because it is a post-El Niño year, but overall we are now at a growth rate that approximates 2.2 ppm per year, as opposed to 1.5 ppm per year in the 20th century.
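The decade-averaged growth rate implied by the two weekly Mauna Loa readings quoted above is a one-line check:

```python
ppm_2018 = 411.85  # week ending May 13, 2018
ppm_2008 = 388.88  # same week, ten years earlier
rate = (ppm_2018 - ppm_2008) / 10
print(round(rate, 2))  # about 2.3 ppm per year, averaged over the decade
```

That decade average of roughly 2.3 ppm per year is consistent with, indeed slightly worse than, the roughly 2.2 ppm per year figure for recent years, and well above the 1.5 ppm per year typical of the late 20th century.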

CO2 growth rates at Mauna Loa

No one alive on this planet will ever again see a reading below 400 ppm.

Congrats to that bourgeois asshole, Bill McKibben at "350.org" who wants to tear up every bit of ground on the planet to embrace his idiotic and unworkable 100% (so called) "renewable energy" scheme.

It didn't work; it isn't working and it won't work.

Bill, however, is far too cowardly to take a break from journalism to open a science book or to say the word "nuclear." Like many in the fairy tale zone of pretending to care about climate change, he doesn't give a shit about what works, but only about what he fantasizes will work.

It scares him, nuclear, far more than climate change itself does, which, of course, is all you need to know about how little he actually knows and about how little he actually cares.

If I sound bitter, I am. History will not forgive us, nor should it.

I hope you're having a pleasant Memorial Day weekend.

Defluorinating Branched Perfluoroalkanes with Cobalt Catalysts.

Recently in this space I decried what I called "The Worst Idea In Energy" with the caveat that possibly, based on the likelihood of it actually happening, the idea of lead perovskite solar cells was potentially worse.

My remarks were based on this paper:

Comparison of Linear and Branched Molecular Structures of Two Fluorocarbon Organosilane Surfactants for the Alteration of Sandstone Wettability (Ivan Moncayo-Riascos* and Bibian A. Hoyos, Energy Fuels, 2018, 32 (5), pp 5701–5710)

The paper was all about using recalcitrant perfluoroalkane species as a surfactant in "fracking" operations.

Not two days later, I came across a paper that suggested that people are already doing something very much like it. It's this paper:

Reductive Defluorination of Branched Per- and Polyfluoroalkyl Substances with Cobalt Complex Catalysts (Timothy J. Strathmann et al, Environ. Sci. Technol. Lett., 2018, 5 (5), pp 289–294)

From the text:

Since their original development in the 1940s, per- and polyfluoroalkyl substances (PFASs) have been widely used in industrial and consumer products.1,2 Detection of PFASs in the global environment has been extensively documented.2−6 Substantial research efforts aimed at the two C8 legacy compounds, perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS), have confirmed a variety of adverse health effects,7 leading to the phase-out of ≥C8 PFASs in North America and Europe8 and the U.S. Environmental Protection Agency’s recent issuance of drinking water health advisory levels for PFOA and PFOS.9,10 New alternative PFASs (e.g., shorter chain carboxylic and sulfonic acids, perfluoroalkyl ether carboxylic acids, and other novel structures) have already been detected in aquatic environments and are considered recalcitrant.11−15 Recent studies have also indicated that the emerging PFASs exhibit variable toxicities and environmental mobilities.11,16−20 Still, knowledge of emerging PFASs remains limited.11,21

Despite having received much less attention than linear PFASs have received, branched PFASs (Figure 1) have also been extensively applied and detected in the environment. For example, perfluoro-3,7-dimethyloctanoic acid (PFMe2OA) serves, along with linear PFASs, as an ingredient of well treatment fluids.22 This compound is also on the “STANDARD 100 by OEKO-TEX” list, indicating its wide application in textile production,23 and it has been detected in European water bodies.24 Perfluoroethylcyclohexanesulfonate (PFECHS)25 also contains two branched carbons on the cyclic structure, and its detection in Canadian Arctic lakes has been attributed to its use in aircraft anti-erosion fluid.26 Industrial PFOS products often contain variable fractions of branched isomers, which have been detected in both environmental waters and human tissues.27,28

The added bold is mine.

The authors propose a cobalt catalyst - a porphyrin catalyst - to defluorinate these recalcitrant and possibly toxic compounds.

The structure of the catalyst is shown in the cartoon in the header of the paper:

Here are the structures of the target compounds they seek to address:

Biochemists will note that the catalyst has certain similarities to the structure of vitamin B12, which the authors also note, pointing to it in another graphic:

The caption:

Figure 2. Degradation and defluorination for each PFAS with cobalt catalysts shown in panel e. Branches that are effective and ineffective in promoting defluorination are colored green and red, respectively. Reaction conditions: PFAS (0.1 mM), Co catalyst (0.25 mM), Ti(III) citrate (∼36 mM), and carbonate buffer (∼40 mM) in water at pH 9.0 and 70 °C.

It appears from the paper that vitamin B12, as is explored in the text as well as in the graphic, can also carry out this defluorination reaction, albeit on a very long time scale, which is displayed in the graphic as well. This is somewhat good news, although I would suspect that each fluorine removal results in an alternate toxicology. Note that the timeline for defluorination is not short: on the order of days, not minutes.

Another graphic gives the bond energies of some of these compounds calculated by DFT:

The caption:

Figure 3. Calculated bond dissociation energies (BDEs, in kilojoules per mole) at the B3LYP/6-311+G(2d,2p)/SMD level of theory of C−F bonds in the PFASs shown in Figure 2. The displayed terminal group with two C−O bonds represents the charge-delocalized −COO− anion.

These are very strong bonds, which accounts for the persistence of these molecules in the environment and in human, animal and plant flesh, as well as in water supplies.


Cobalt, as I note when criticizing the less than satisfying hype surrounding Elon Musk, hero of electric car cultists, is a conflict metal, mined under appalling conditions in many places on earth. Its role in vitamin B12 also makes it an essential element for human health, albeit in tiny amounts.

Preparation of the cobalt species is not given in the paper, but I'd imagine an industrial scale synthesis would be, um, interesting and expensive.

When the best tool with which one is familiar is a hammer, everything looks like a nail. Radiolysis with gamma rays would do this job much better and much more completely, but well, I think that what will be done about these wells is nothing.

Have a nice "hump day."

Must have been a hell of a place in the 1940's, that Cambridge...

Dirac, Hoyle, Lennard-Jones, Wittgenstein...

Cambridge in 1947 had greatly changed since 1943. The university was crowded with students in their late twenties who had spent many years away at the war. In addition, the lectures were given by the younger generation who had also been away on research projects. There was a general air of excitement as these people turned their attention to new scientific challenges. I remained as a mathematics student but spent the academic year 1947-8 taking courses in as many branches of theoretical science as I could manage. These included quantum mechanics (taught in part by Dirac), fluid dynamics, cosmology and statistical mechanics. Most of the class opted for research in fundamental areas of physics such as quantum electrodynamics which was an active field at the time. I felt that challenging the likes of Einstein and Dirac was overambitious and decided to seek a less crowded (and possibly easier) branch of science. I developed an interest in the theory of liquids, particularly as the statistical mechanics of this phase had received relatively little attention, compared with solids and gases. I approached Fred Hoyle, who was giving the statistical mechanics lectures (following the death of R.H. Fowler). However, his current interests were in the fields of astrophysics and cosmology, which I found rather remote from everyday experience. I next approached Sir John Lennard-Jones (LJ), who had published important papers on a theory of liquids in 1937. He held the chair of theoretical chemistry at Cambridge and was lecturing on molecular orbital theory at the time. When I approached him, he told me that his interests were currently in electronic structure but he would very possibly return to liquid theory at some time. On this basis, we agreed that I would become a research student with him for the following year. Thus, after the examinations in June 1948, I began my career in theoretical chemistry at the beginning of July. 
I had almost no chemical background, having last taken a chemistry course at BGS at the age of fifteen. Other important events took place in my life at this time. In late 1947, I was attempting to learn to play the piano and rented an instrument for the attic in which I lived in the most remote part of Trinity College. The neighbouring room was occupied by the philosopher Ludwig Wittgenstein, who had retired to live in primitive and undisturbed conditions in the same attic area. There is some evidence that my musical efforts distracted him so much that he left Cambridge shortly thereafter. In the following year, I sought out a professional teacher. The young lady I contacted, Joy Bowers, subsequently became my wife. We were married in Great St. Mary's Church, Cambridge in 1952, after a long courtship. Like many other Laureates, I have benefited immeasurably from the love and support of my wife and children. Life with a scientist who is often changing jobs and is frequently away at meetings and on lecture tours is not easy. Without a secure home base, I could not have made much progress. The next ten years (1948-1958) were spent in Cambridge. I was a research student until 1951, then a research fellow at Trinity College and finally a lecturer on the Mathematics Faculty from 1954 to 1958. Cambridge was an extraordinarily active place during that decade. I was a close observer of the remarkable developments in molecular biology, leading up to the double helix papers of Watson and Crick. At the same time, the X-ray group of Perutz and Kendrew (introduced to the Cavendish Laboratory by Lawrence Bragg) were achieving the first definitive structures of proteins. Elsewhere, Hoyle, Bondi and Gold were arguing their case for a cosmology of continuous creation, ultimately disproved but vigorously presented.
Looking through the list of earlier Nobel laureates, I note a large number with whom I became acquainted and with whom I interacted during those years as they passed through Cambridge.

From the Nobel Lecture of John Pople

The demise of the ICE is another one of those popular beliefs that is not connected with reality.

I certainly don't want to place myself in the position of endorsing the car CULTure.

This said, certain kinds of self propelled vehicles would be required in a civilized world as opposed to the one in which we live.

Examples would be tractors, delivery trucks (where trains are not available), emergency vehicles etc...

As it happens, electric vehicles are often less clean than gasoline ICE vehicles. I often point to this paper, which shows that air pollution mortality in China is higher for electric cars (but not electric scooters) than for gasoline powered cars:

Electric Vehicles in China: Emissions and Health Impacts (Cherry et al, Environ. Sci. Technol., 2012, 46 (4), pp 2018–2024).

I discussed this paper at length elsewhere: China Already Has 100 Million Electric Vehicles

The fantasy behind electric cars is that electricity is inherently clean, and this is not remotely true. Almost all of the electricity on this planet is generated from dangerous fossil fuels, and the fraction of world energy so produced is increasing, not decreasing. Given the number of energy conversions involved in making an electric car run, along with the second law of thermodynamics, an electric car under many circumstances wastes energy and thus is very capable of being worse than a gasoline car.
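The point about conversion losses can be illustrated with a crude well-to-wheel efficiency chain. Every number below is an assumed round figure of my own for illustration only, not from the cited paper; the point is simply that an EV charged from a fossil-heavy grid loses much of its apparent advantage:

```python
def chain(*etas):
    """Multiply a sequence of conversion efficiencies."""
    p = 1.0
    for e in etas:
        p *= e
    return p

# Assumed, illustrative efficiencies only (not from Cherry et al.):
ev_from_coal = chain(
    0.35,  # coal plant, thermal to electric
    0.93,  # transmission and distribution
    0.85,  # charger
    0.90,  # battery round trip
    0.90,  # motor and drivetrain
)
gasoline_ice = chain(
    0.85,  # refining and distribution
    0.25,  # engine and drivetrain
)
print(round(ev_from_coal, 2))  # roughly 0.22 well-to-wheel
print(round(gasoline_ice, 2))  # roughly 0.21 well-to-wheel
```

Under these assumptions the two chains end up roughly comparable, so the EV's cleanliness rests entirely on what feeds the grid, which is consistent with the direction of the mortality result in the Cherry paper for China's coal-heavy grid.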

To the extent we need self-propelled vehicles, I think the paper by the late Nobel Laureate George Olah is the best description of the path to reducing the unacceptably high external costs of the car CULTure, not that the car CULTure as structured now can ever be sustainable.

The highly cited paper is here: Anthropogenic Chemical Carbon Cycle for a Sustainable Future (George A. Olah*, G. K. Surya Prakash, and Alain Goeppert, J. Am. Chem. Soc., 2011, 133 (33), pp 12881–12898)

Of the two fluid fuels Olah proposes, methanol and DME, DME is superior by far, as it is non-toxic and has a very short atmospheric half life, about 5 days.

DME used in a diesel engine, appropriately modified to account for lubricity and fitted with DME-compatible seals and injectors, would be far cleaner than any electric car ever could be.

DME is also a replacement for any type of device running on natural gas or LPG, and possibly for spark ignition engines. It is also an excellent refrigerant, a heat storage medium (as a supercritical fluid), and a very useful chemical solvent that is easy to remove simply by pressure release. It is easy to remove from water and, again, has very low toxicity.

It can be made by direct hydrogenation of carbon dioxide, either directly or indirectly (via methanol), depending on the nature of the catalyst.

Hydrogen, which is useless as a consumer fuel but very valuable as a captive intermediate, can be made either by thermochemical water splitting cycles or by thermochemical carbon dioxide splitting cycles, the latter owing to the water gas shift reaction. Many such cycles are known. My personal favorite is a variant of the ZnO cycle, one producing at one step an equimolar mixture of carbon dioxide and oxygen, which would be very convenient for the closed (smokestack free) combustion of waste materials and biomass, thus affording concentrated carbon monoxide as a useful intermediate for fixing carbon dioxide from the atmosphere to make things like polymers and carbon fiber type materials.

These thermochemical cycles, although often proposed for useless and unworkable solar thermal schemes, are easily adaptable to nuclear reactors, which is why, whenever I dream of nuclear reactors, I am thinking of ones that operate at much higher temperatures than those currently in use, in some cases "pre-melted" reactors.

These very high temperature reactors would have very high thermodynamic efficiency and would in fact produce electricity as a side product and not as the primary end product.

Thanks for asking. Have a nice evening.
