DU Home » Latest Threads » NNadir » Journal


Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 23,486

Journal Archives

Bacterial biodiversity drives the evolution of CRISPR-based phage resistance

The paper I'll discuss in this brief post is this one: Bacterial biodiversity drives the evolution of CRISPR-based phage resistance (Ellinor O. Alseth, Elizabeth Pursey, Adela M. Luján, Isobel McLeod, Clare Rollie & Edze R. Westra, Nature 574, 549–552 (2019))

(Most of the authors appear to be women, which is nice to see.)

CRISPR is very much in the scientific news these days, both as a research tool and as a possible therapeutic agent for a host of genetic diseases, including (but hardly limited to) cancer, which may be thought of as a somatic genetic disease. With respect to its use as a research tool, a few days back I posted in this space a report utilizing CRISPR to interrogate the toxin resistance observed in the monarch butterfly: Genome editing retraces the evolution of toxin resistance in the monarch butterfly.

I hadn't really paid much attention to the nuts and bolts of CRISPR technology, at least until a chance conversation at a science oriented social event stimulated me to do so. I was, of course, aware of its role in gene therapy, but not of the basic science underlying it. The appearance of two papers in two successive issues of Nature stimulated more interest in the origins and use of this technology.

As many people know, among the many things we are leaving for future generations, besides an atmosphere destroyed by appeals to denial, mysticism, fear and ignorance, and the effective depletion of many of the elements in the periodic table, is a plethora of dangerous antibiotic resistant bacteria. One avenue for addressing these resistant bacteria is to appeal to an old idea, viral antibiotics: inoculating people with viruses that are known to attack and kill bacteria, viruses known as phages. (Phages are also widely used as research and production tools, particularly for the insertion of genes into organisms in the biotech industry.) This old idea is worth a look, given that our understanding of molecular biology has entered a golden age which, one hopes, will be maintained despite the rise of anti-intellectualism on both political extremes.


The CRISPR/Cas9 system is actually an immune system for bacteria, and this paper is about how that system is utilized to develop resistance to phages.

From the abstract of the paper:

About half of all bacteria carry genes for CRISPR–Cas adaptive immune systems1, which provide immunological memory by inserting short DNA sequences from phage and other parasitic DNA elements into CRISPR loci on the host genome2. Whereas CRISPR loci evolve rapidly in natural environments3,4, bacterial species typically evolve phage resistance by the mutation or loss of phage receptors under laboratory conditions5,6. Here we report how this discrepancy may in part be explained by differences in the biotic complexity of in vitro and natural environments7,8. Specifically, by using the opportunistic pathogen Pseudomonas aeruginosa and its phage DMS3vir, we show that coexistence with other human pathogens amplifies the fitness trade-offs associated with the mutation of phage receptors, and therefore tips the balance in favour of the evolution of CRISPR-based resistance. We also demonstrate that this has important knock-on effects for the virulence of P. aeruginosa, which became attenuated only if the bacteria evolved surface-based resistance. Our data reveal that the biotic complexity of microbial communities in natural environments is an important driver of the evolution of CRISPR–Cas adaptive immunity, with key implications for bacterial fitness and virulence.

From the introduction:

P. aeruginosa is a widespread opportunistic pathogen that thrives in a range of different environments, including hospitals, where it is a common source of nosocomial infections. In particular, it frequently colonizes the lungs of patients with cystic fibrosis, in whom it is the leading cause of morbidity and mortality9. In part fuelled by a renewed interest in the therapeutic use of bacteriophages as antimicrobials (phage therapy)10,11, many studies have examined whether and how P. aeruginosa evolves resistance to phage (reviewed in ref. 12). The clinical isolate P. aeruginosa strain PA14 has been reported to predominantly evolve resistance against its phage DMS3vir by the modification or complete loss of the phage receptor (type IV pilus) when grown in nutrient-rich medium5, despite carrying an active CRISPR–Cas adaptive immune system. By contrast, under nutrient-limited conditions, the same strain relies on CRISPR–Cas to acquire phage resistance5. These differences are due to higher phage densities during infections in nutrient-rich compared with nutrient-limited conditions, which in turn determines whether surface-based resistance (with a fixed cost of resistance) or CRISPR-based resistance (infection-induced cost) is favoured by natural selection5,13. Although these observations suggest abiotic factors are crucial determinants of the evolution of phage resistance strategies, the role of biotic factors has remained unclear, even though P. aeruginosa commonly co-exists with a range of other bacterial species in both natural and clinical settings14,15. We proposed that the presence of a bacterial community could drive increased levels of CRISPR-based resistance evolution for two main reasons. First, reduced densities of P. aeruginosa in the presence of competitors may limit phage amplification, and favour CRISPR-based resistance5. Second, pleiotropic costs associated with the mutation of phage receptors may be amplified during interspecific competition.
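The cost structure described in the introduction (a fixed cost for surface-based resistance versus an infection-induced cost for CRISPR-based resistance) can be illustrated with a toy calculation. To be clear, this is my own sketch, and every parameter value in it is an illustrative assumption, not a measurement from the paper:

```python
# Toy model (my own construction, not from the paper): surface
# modification (SM) pays a fixed fitness cost, while CRISPR-based
# resistance pays a cost only when infections occur. All parameter
# values are illustrative assumptions.

def relative_fitness_sm(c_fixed=0.10):
    """SM pays its cost regardless of how much phage is around."""
    return 1.0 - c_fixed

def relative_fitness_crispr(phage_density, c_per_infection=0.02):
    """CRISPR's cost scales with how often cells encounter phage."""
    return 1.0 - c_per_infection * phage_density

# Competitors that suppress P. aeruginosa density also suppress phage
# amplification, pushing the system toward the low-density regime.
for density in (1, 4, 10):
    sm = relative_fitness_sm()
    cr = relative_fitness_crispr(density)
    winner = "CRISPR" if cr > sm else "surface modification"
    print(f"phage density {density:2d}: SM={sm:.2f}  CRISPR={cr:.2f}  -> {winner}")
```

At low phage density the induced cost is rarely paid and CRISPR-based resistance wins; at high density the fixed SM cost becomes the cheaper option, which is the qualitative argument the authors make about why biotic complexity tips the balance.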

The authors used in their experiments a streptomycin resistant strain (PA14) of the pathogen P. aeruginosa and cultured it in the presence of three other pathogenic bacterial species that are not infected by the DMS3vir phage: Staphylococcus aureus, Burkholderia cenocepacia and Acinetobacter baumannii.

Some figures from the paper illustrating the results:

The caption:

a, Proportion of P. aeruginosa that acquired surface modification (SM) or CRISPR-based resistance, or remained sensitive at 3 d.p.i. with phage DMS3vir when grown in monoculture or polycultures, or with an isogenic surface mutant (6 replicates per treatment, with 24 colonies per replicate, n = 36 biologically independent replicates). Data are mean ± s.e.m. b, Microbial community composition over time for the mixed-species infection experiments. AB, A. baumannii; BC, B. cenocepacia; PA14, P. aeruginosa; SA, S. aureus.

The authors also evaluated the potential pathogenic implications of this result by growing cultures in synthetic sputum.

The rise of CRISPR-based resistance in P. aeruginosa in the presence of bacterial diversity, as shown in the previous graphic, is clearly observed.

Another graphic:

The caption:

a, Relative fitness of a P. aeruginosa clone with CRISPR-based resistance after competing for 24 h against a surface-modification clone at varying titres of phage DMS3vir in the presence or absence of a mixed microbial community. Regression slopes with shaded areas corresponding to 95% confidence interval (n = 144 biologically independent samples). b, Relative fitness after competition in the absence of phage, but in the presence of other bacterial species individually or as a mixture. Data are mean and 95% confidence intervals (n = 144 biologically independent samples).

Another graphic touching on virulence:

a, Time until death (given as the median ± one standard error) after infection with PA14 clones that evolved phage resistance in either the presence or the absence of a mixed microbial community (n = 376 biologically independent samples, analysed using a Cox proportional-hazards model with Tukey contrasts). LT50, median lethal time. b, The effect of the type of evolved phage resistance (CRISPR-based or surface-modification-based) on bacterial motility (n = 981 biologically independent samples). Box plots show the median with the upper and lower twenty-fifth and seventy-fifth percentiles, the interquartile range, and outliers shown as dots. c, The effect of the type of resistance on in vivo virulence (time until death, given as the median ± one standard error; n = 981, analysed using a Cox proportional-hazards model with Tukey contrasts).

Some excerpts from the concluding discussion:

We have shown that the evolutionary outcome of bacteria–phage interactions can be fundamentally altered by the microbial community context. Although conventionally studied in isolation, these interactions are usually embedded in complex biotic networks of interacting species, and it is becoming increasingly clear that this can have key implications for the evolutionary epidemiology of infectious disease24,25,26,27,28. Our work shows that the community context can shape the evolution of different host-resistance strategies. Specifically, we find that the interspecific interactions between four bacterial species in a synthetic microbial community can have a large effect on the evolution of phage-resistance mechanisms by amplifying the constitutive fitness cost of surface-based resistance5. The finding that biotic complexity matters complements previous work on the effect of abiotic variables and force of infection on the evolution of phage resistance5...

...Primarily, the absence of detectable trade-offs between CRISPR-based resistance and virulence, as opposed to when bacteria evolve surface-based resistance, suggests that the evolution of CRISPR-based resistance can ultimately influence the severity of disease. Moreover, the evolution of CRISPR-based resistance can drive more rapid phage extinction29, and may in a multi-phage environment result in altered patterns of cross-resistance evolution compared with surface-based resistance30. The identification of the drivers and consequences of CRISPR-resistance evolution might help to improve our ability to predict and manipulate the outcome of bacteria–phage interactions in both natural and clinical settings.

Interesting, I think, and important.

Have a nice day tomorrow.

Genome editing retraces the evolution of toxin resistance in the monarch butterfly.

The paper I'll discuss in this post is this one: Genome editing retraces the evolution of toxin resistance in the monarch butterfly. (Whiteman et al, Nature 574, 409–412 (2019))

One of the happiest memories of my sons' childhood was when we went to a local park, the New Jersey side of Washington's Crossing Park, where the ranger showed us how to collect monarch butterfly eggs, which are laid on the underside of milkweed plants, poisonous plants that grow wild all around here. We took the leaves home, put them in a butterfly cage, and ultimately the eggs hatched, and caterpillars began munching the leaves. We kept collecting leaves from the fields around here until the caterpillars formed chrysalises (pupae) and finally emerged as butterflies, which we released.

It was a beautiful, wonderful experience, and probably one I would have never had were I not a father.

Monarchs don't really "migrate" as individuals; each year several generations make their way across North America from Mexico, breeding repeatedly and happily munching toxic milkweed all across America.

A truly wondrous life form!

CRISPR/Cas9 is a gene editing tool developed by Jennifer Doudna and Emmanuelle Charpentier that utilizes a bacterial protein, known as Cas9, which is in a sense an "immune defense" for prokaryotic organisms. It operates in conjunction with a guide RNA sequence that acts much like "interfering RNA" (RNAi), relying on base-pairing complementarity. By appropriate editing of the guide RNA sequence, this system can be adapted for gene editing, both for research purposes and, perhaps, for therapeutic modalities.
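That base-pairing complementarity is what makes Cas9 target sites easy to locate computationally: the enzyme requires a roughly 20-nucleotide protospacer followed by an "NGG" PAM motif. A minimal sketch of the search, with entirely invented sequences (a real design tool would also scan the reverse strand and score off-target matches):

```python
# Sketch of locating a Cas9 target site: a 20-nt protospacer followed
# by an "NGG" PAM on the same strand. Sequences below are invented for
# illustration only.

def find_cas9_sites(dna, spacer):
    """Return 0-based positions where `spacer` occurs in `dna`
    immediately followed by an NGG PAM (forward strand only)."""
    n = len(spacer)
    hits = []
    for i in range(len(dna) - n - 2):
        if dna[i:i + n] == spacer and dna[i + n + 1:i + n + 3] == "GG":
            hits.append(i)
    return hits

spacer = "GATTACAGATTACAGATTAC"       # 20 nt, hypothetical protospacer
guide = spacer.replace("T", "U")      # sgRNA spacer: same sequence, as RNA
dna = "TT" + spacer + "TGG" + "ACGT"  # protospacer followed by a "TGG" PAM
print(find_cas9_sites(dna, spacer))   # -> [2]
```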

(Having participated during my career in a number of "really hot" biomedical fads, I tend to be more "wait and see" than over-the-top enthusiastic about such sweeping claims.)

The paper cited above uses CRISPR/Cas9 as a research tool to discover how the monarch butterfly became immune to the plant toxins in milkweed. This toxicity, by the way, protects the monarch from predators, since the butterflies themselves are toxic.

From the abstract:

Genome editing retraces the evolution of toxin resistance in the monarch butterfly
Marianthi Karageorgi, Simon C. Groen, Fidan Sumbul, Julianne N. Pelaez, Kirsten I. Verster, Jessica M. Aguilar, Amy P. Hastings, Susan L. Bernstein, Teruyuki Matsunaga, Michael Astourian, Geno Guerra, Felix Rico, Susanne Dobler, Anurag A. Agrawal & Noah K. Whiteman
Nature 574, 409–412 (2019); published 2 October 2019

Identifying the genetic mechanisms of adaptation requires the elucidation of links between the evolution of DNA sequence, phenotype, and fitness1. Convergent evolution can be used as a guide to identify candidate mutations that underlie adaptive traits2,3,4, and new genome editing technology is facilitating functional validation of these mutations in whole organisms1,5. We combined these approaches to study a classic case of convergence in insects from six orders, including the monarch butterfly (Danaus plexippus), that have independently evolved to colonize plants that produce cardiac glycoside toxins6,7,8,9,10,11. Many of these insects evolved parallel amino acid substitutions in the α-subunit (ATPα ) of the sodium pump (Na+/K+-ATPase)7,8,9,10,11, the physiological target of cardiac glycosides12. Here we describe mutational paths involving three repeatedly changing amino acid sites (111, 119 and 122) in ATPα that are associated with cardiac glycoside specialization13,14. We then performed CRISPR–Cas9 base editing on the native Atpα gene in Drosophila melanogaster flies and retraced the mutational path taken across the monarch lineage11,15. We show in vivo, in vitro and in silico that the path conferred resistance and target-site insensitivity to cardiac glycosides16, culminating in triple mutant ‘monarch flies’ that were as insensitive to cardiac glycosides as monarch butterflies. ‘Monarch flies’ retained small amounts of cardiac glycosides through metamorphosis, a trait that has been optimized in monarch butterflies to deter predators17,18,19. The order in which the substitutions evolved was explained by amelioration of antagonistic pleiotropy through epistasis13,14,20,21,22. Our study illuminates how the monarch butterfly evolved resistance to a class of plant toxins, eventually becoming unpalatable, and changing the nature of species interactions within ecological communities2,6,7,8,9,10,11,15,17,18,19.

An excerpt of the introductory text:

Convergently evolved substitutions in ATPα have been hypothesized to contribute to cardiac glycoside resistance in the monarch butterfly and other specialized insects via target-site insensitivity (TSI) in the sodium pump6,7,8,9,10,11. However, it is unclear whether the changes are sufficient for resistance in whole organisms6,7,8,9,10,11,15,18,23 or are ‘molecular spandrels’—candidate adaptive alleles that do not confer a fitness advantage when tested more rigorously1,5. In addition, the evolutionary order of substitutions suggests a constrained adaptive walk11,13,14,20,21,22,24, but an in vivo genetic dissection has not been conducted, so it is not possible to draw conclusions about the adaptive role of these substitutions1,2,3,4,5,15.

We have identified a core set of amino acid substitutions in cardiac glycoside-specialized insects that define potential mutational paths to resistance and TSI. We focused on the first extracellular loop (H1–H2) of ATPα, where most candidate TSI-conferring substitutions occur7,8,9,10,11 (Fig. 1a). We used maximum likelihood to reconstruct ancestral states for cardiac glycoside specialization (feeding and sequestering) and amino acids within the H1–H2 loop of ATPα across a species phylogeny...

The authors identified a series of known amino acid substitutions in ATPα in the monarch, at residues 111, 119, and 122.
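The combinatorics of such an adaptive walk are small enough to enumerate in a few lines. As a simplified sketch (simplified because the actual monarch path involved four steps, with site 111 changing twice, Q→L→V), here are the 3! = 6 possible orders in which single substitutions at the three sites could accumulate:

```python
# Illustrative sketch (simplified relative to the paper): enumerate the
# 3! = 6 possible orders in which single substitutions at sites 111,
# 119 and 122 could accumulate, from the ancestral QAN genotype to the
# monarch's VSH genotype.
from itertools import permutations

ANCESTRAL = {111: "Q", 119: "A", 122: "N"}
DERIVED   = {111: "V", 119: "S", 122: "H"}   # monarch (VSH) end state

def walk(order):
    """Apply substitutions in `order`, returning every intermediate genotype."""
    state = dict(ANCESTRAL)
    path = ["".join(state[s] for s in (111, 119, 122))]
    for site in order:
        state[site] = DERIVED[site]
        path.append("".join(state[s] for s in (111, 119, 122)))
    return path

paths = [walk(order) for order in permutations((111, 119, 122))]
print(len(paths))   # -> 6
print(paths[0])     # -> ['QAN', 'VAN', 'VSN', 'VSH']
```

The paper's point is that these orderings are not equivalent: epistasis and pleiotropic costs make some intermediate genotypes much fitter than others, which constrains which of the possible walks evolution actually takes.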

A figure from the text:

The caption:

a, Protein homology model of Drosophila melanogaster ATPα (navy) superimposed on a Sus scrofa ATPα crystal structure (light grey) with ouabain (yellow) in the binding pocket. Residues 111, 119 and 122 (sticks) within the H1–H2 extracellular loop are associated with feeding on cardiac glycoside-producing plants and toxin sequestration. b, Maximum likelihood phylogeny based on 4,890 bp from Atpα and coi, with maximum likelihood ancestral state reconstruction (ASR) of feeding and sequestering states, estimated from the states of extant species (inner band of squares). Reconstructions are shown as nodal pie graphs (white, neither feeding nor sequestering; green, feeding; purple, feeding and sequestering), and the number of substituted sites at positions 111, 119 and 122 along branches in grey-scale (light grey 0, medium grey 1, dark grey 2, black 3), based on maximum likelihood ASR of H1–H2 loop amino acid sequences. Black asterisks indicate the Atpα copy number for species with multiple paralogues. c, ATPα substitutions inferred from ASR at positions 111 (blue), 119 (yellow) and 122 (red) in 21 lineages where specialization occurred independently. d, P value distribution from a set of randomized tests to determine the reproducibility of substitutions observed along mutational paths among sub-sampled groups compared to randomly permuted substitutions. On average, 4.9% (considering all mutational steps) of randomly permuted trajectories demonstrate a degree of ordering equal to or greater than observed mutational paths.

The DNA of the Drosophila fly was edited to introduce the corresponding substitutions, allowing the organisms to feed on cardiac glycosides as well.

Another figure from the paper:

a, The monarch butterfly lineage with the substitutions observed in the H1–H2 loop of ATPα (adapted from Petschenka et al.)11. b, Non-synonymous point mutations in the edited DNA sequence of the native Atpα in Drosophila knock-in lines code for the substitutions at sites 111, 119 and 122. Codons are underlined. c, d, Larval–adult survival (c) and adult survival (d) of flies reared on diets with ouabain were different between monarch lineage knock-in lines and control lines (QAN = engineered control; QAN* = w1118 wild type). Symbols represent the mean ± s.e.m. of 3–6 biological replicates (50 larvae and 10 females per replicate in c and d, respectively). Curves were fit using a logistic regression model for each line. Pairwise differences in survivorship trajectories between lines were evaluated with a likelihood ratio test on the significance of the interaction term between genotype (line) and ouabain concentration in a logistic regression for each pair of lines (letters). e, Egg–adult survival on diet supplemented with Asclepias curassavica leaves relative to control diets (n = 3–4; 100–200 eggs per replicate, see Methods; mean ± s.e.m.) was different between monarch lineage knock-in lines and QAN* (one-way ANOVA, P = 0.0035 followed by post hoc Tukey’s tests (letters)). f, Ouabain concentrations in diet versus adult fly bodies among monarch lineage knock-in lines (n = 2–4 biological replicates per group). Adult flies had not fed since eclosion. Genotype and dietary ouabain concentration influenced the probability of detecting ouabain in post-eclosion flies (logistic regression and likelihood ratio test, genotype two-sided P = 0.024, dietary ouabain concentration two-sided P = 6.344 × 10−5). Further information on experimental design and statistical test results is found in the Source Data.

Some observations:

We obtained in vivo evidence for adaptation in monarch lineage Atpα through larval–adult and adult survival experiments. Knock-in fly lines were reared on yeast medium with increasing concentrations of ouabain, a hydrophilic cardiac glycoside6 (Fig. 2c, d, Extended Data Figs. 4, 5). LAN, the first genotype to evolve, increased larval–adult survival at lower ouabain concentrations, but survival declined sharply as concentrations increased. LAN also increased adult survival at lower ouabain concentrations. LSN, the second genotype to evolve, increased larval–adult survival at the highest ouabain concentrations. The next step, VSN, provided the same larval–adult and adult survival benefit as LSN. Finally, the survival of ‘monarch flies’ carrying the monarch butterfly genotype (VSH) was unaffected by even the highest levels of ouabain in larvae and adults6,9,11,18 (Fig. 2c, d), which was not due to reductions in feeding rate or toxin ingestion (Extended Data Fig. 6).

When knock-in line eggs were placed on medium containing the suite of cardiac glycosides found in the leaves of the milkweed species Asclepias curassavica and A. fascicularis6, monarch lineage fly genotypes generally showed increased egg–pupal and egg–adult survival rates (Fig. 2e, Extended Data Fig. 7), although not always for VSN (Extended Data Figs. 3, 7). The LSN, VSN and VSH genotypes may enable insects to cope with the complex milieu of cardiac glycosides encountered during host shifts to these plants.

The monarch butterfly ATPα substitutions at positions 111, 119 and 122 may unlock a passive evolutionary route to cardiac glycoside sequestration, as we found small amounts of ouabain in newly emerged adult ‘monarch flies’ reared as larvae on a diet containing ouabain (Fig. 2f). However, toxin concentrations were far lower than in monarch butterflies, and the location of ouabain in flies is unclear6,17,18.

Another graphic:

The caption:

a, In vitro ouabain sensitivity of Na+/K+-ATPase activity in extracts of monarch lineage knock-in and control line fly heads (solid lines; QAN, engineered control; QAN*, w1118 wild type), against activity in extracts of monarch butterfly and pig nervous tissue (positive and negative control, dashed red and black line, respectively). Symbols represent the mean ± s.e.m. of 3–7 biological replicates. log10[IC50] (half-maximum inhibitory concentration) values for the Na+/K+-ATPases were estimated after fitting four-parameter logistic regression curves, and were different between genotypes (one-way ANOVA (P < 0.0001) with post hoc Tukey’s tests (letters)). b, Mean docking scores (± s.e.m. of five replicate calculations) from molecular simulations of ouabain binding to the Na+/K+-ATPases found along the monarch lineage showed differences between genotypes (one-way ANOVA (P = 0.0001) with post hoc Tukey’s tests (letters)). c, Effects of the substitutions Q111L, A119S and their combination on larval–adult survival on diets with 30 mM ouabain. Symbols represent the mean ± s.e.m. of three biological replicates (50 larvae each). The effect of mutations A119S and Q111L together was nearly threefold greater than the combined individual effects on survivorship (logistic regression, interaction effect between mutations: ***P = 2.36 × 10−15), indicating positive epistasis. d, Duration of paralysis following mechanical shocks (that is, bang sensitivity; n = 60 five-to-six-day-old adult flies). Bang sensitivity was affected by genotype (Kruskal–Wallis test (P < 0.0001) with post hoc Dunn’s multiple comparisons tests (letters); medians with 95% confidence intervals), and was higher for QAH than for all other genotypes (P < 0.05), except for LAN, which showed higher bang sensitivity than LSN (P = 0.0134). Further information on experimental design and statistical test results can be found in the Source Data.
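The "four-parameter logistic regression curves" mentioned in the caption are the standard dose-response model used to estimate IC50 values. A minimal sketch with invented parameters (not values from the paper), recovering the IC50 numerically from the curve:

```python
# Sketch of the dose-response model in the caption: a four-parameter
# logistic (4PL) curve, with the IC50 recovered by bisection on a log
# scale. All parameter values here are invented for illustration.
import math

def four_pl(conc, bottom=0.0, top=100.0, log10_ic50=-6.0, hill=1.0):
    """Percent enzyme activity remaining at inhibitor concentration `conc` (M)."""
    return bottom + (top - bottom) / (
        1.0 + 10 ** (hill * (math.log10(conc) - log10_ic50)))

def estimate_ic50(lo=1e-12, hi=1e-2, tol=1e-12, **params):
    """Find the concentration giving half-maximal activity."""
    target = (params.get("top", 100.0) + params.get("bottom", 0.0)) / 2.0
    while hi - lo > tol:
        mid = math.sqrt(lo * hi)          # bisect on a log scale
        if four_pl(mid, **params) > target:
            lo = mid                      # still above half-maximum: go higher
        else:
            hi = mid
    return math.sqrt(lo * hi)

ic50 = estimate_ic50(log10_ic50=-6.0)
print(f"recovered IC50 ~ {ic50:.3g} M")   # ~1e-06 M, matching log10_ic50 = -6
```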

The full paper makes comments about the evolutionary pathway by which the TSI, target-site insensitivity, to cardiac glycosides evolved.

(The old drug digitalis and the related digoxin fall into this class of compounds.)

Some concluding remarks:

Substitutions at three amino acid sites in ATPα are sufficient together, but not alone, to explain the evolution of resistance and TSI to cardiac glycosides achieved by the monarch butterfly at organismal, physiological and biochemical levels. The adaptive walk follows theoretical predictions on the length of such walks2,3,4,13,14, involves epistasis13,14,20,22, and minimizes pleiotropic fitness costs3,4,13,14,21, and variations of it convergently re-appeared across lineages that diverged more than three hundred million years ago7,8,9,10,11. Genome editing technology facilitates functional tests of adaptation across levels of biological organization5,25,26. Although mutational paths to adaptive peaks have been identified in microorganisms2,3,4,13,14,22, this is, to our knowledge, the first in vivo validation of a multi-step adaptive walk in a multicellular organism, and illustrates how complex organismal traits can evolve by following simple rules.

This technology is very powerful.

Like all powerful technologies, it has a potential to do good and great things, and likewise, bad and terrible things. The choice is moral. As old and as cynical as I am, I still believe, in spite of it all, in the capacity for humanity to come down on the side of good and great.

Have a nice day tomorrow.

Origin of an Upbeat Phrase in Dark Times: "We Cannot Predict the Future, But We Can Invent It."

I thought it was attributed to Lincoln.

It's not, apparently:

The Quote Investigator, Investigates

Nevertheless, in these times, with our democracy in such danger, the thought somehow thrills me.

I hope the young people live this way.

Go Millennials! Take the World! Do better than us! You can't possibly do worse!

An Economic, Environmental, and Technical Analysis of Biomass Sourced Jet Fuel.

The paper I'll discuss in this post is this one: Comprehensive Life Cycle Evaluation of Jet Fuel from Biomass Gasification and Fischer–Tropsch Synthesis Based on Environmental and Economic Performances (Xiao et al, Ind. Eng. Chem. Res. 2019, 58, 19179−19188)

I have very little use for Bill McKibben of 350.org, because although he "cares" loudly about climate change, he is nothing more than a journalist, and a cowardly one at that, since it is increasingly obvious that his prescribed solution, so called "renewable energy," has clearly not worked, is not working, and won't work. I often joke that one can only get a degree in journalism these days if one has not passed a college level science course.

No one now living will ever see an atmospheric concentration of the dangerous fossil fuel waste carbon dioxide measuring under 400 ppm again, never mind "350." Next year I'm certain I'll be able to say - if still alive - "under 410 ppm again." The blind, and frankly ignorant, faith in so called "renewable energy" is one reason why this is so. The more than 2 trillion dollars spent in the last ten years alone on this scheme has caused climate change to accelerate, not decline. We are now seeing increases of 2.4 ppm/year, an unprecedented rate.

I call McKibben a "coward" because it takes courage to say "I am wrong" or "I was wrong" and he clearly lacks this ability, since the only way to be serious about climate change is to embrace science and engineering, as opposed to driving one's Prius (or Tesla electric) car to protests chanting "We want 'renewable energy now!' and carrying signs that the bearers consider witty. Over the last several hours I've been studying lignins, a component of wood and the stalks of many plants, and as a result have been studying the environmentally dubious Kraft process for wood pulping, which is utilized to make paper for signs people can carry to their protests stating how much they care about the climate.

Bill McKibben lacks both the courage and the intellectual insight and education to be able to say the word "nuclear."

If one respects science, one considers how scientists work. We have theories or hypotheses which must be tested by experiment. If the experimental results invalidate the theory, the theory goes, not the experimental result. We don't make Trumpian scale excuses for the experimental result in order to save a precious theory, which by being precious translates into blind faith. The experimental results of the multitrillion dollar "renewable energy will save us" theory are in; climate change is accelerating, not being ameliorated. It's time for the theory to be rejected. Denial and excuses for the experimental result are meaningless. No one now living will ever see an atmospheric concentration of the dangerous fossil fuel waste carbon dioxide measuring under 400 ppm again. The so called "renewable energy" experiment did not work; it is not working; it won't work.

The purpose of this riff on McKibben, whom I obviously hold in low regard, is a bit of "Gotcha," which has come to permeate our culture of anti-thinking, the age of twits posting Twitter witticisms, all of which are making the world worse, not better.

To avoid "Gotcha" statements, the young climate activist Greta Thunberg took a sailboat across the ocean to address the UN on climate change. She declined to fly, since flying requires the consumption of rather large amounts of fuels based on dangerous petroleum. This reminds me of a statement attributed to Mahatma Gandhi, in which he remarked that his advisers complained it was very expensive to keep him in his vow of poverty.

By the way, I have enormous respect for Greta Thunberg, because I think she is right to ask us "How DARE you?!!!" about what we in my generation have done to hers.

History will not forgive us; nor should it.

It's OK for Greta Thunberg to not know anything about engineering by the way; she's sixteen. (Bill McKibben is 58.)


In general, as I've just made clear, I am hostile to so called "renewable energy," not because it's slightly better than dangerous petroleum, dangerous coal and dangerous natural gas when it functions, but because it requires dangerous petroleum, dangerous coal and dangerous natural gas to back it up when it's not working, which is often. This is why it is not working and won't work, and why Germany and Denmark have the highest electricity prices in the OECD: a system that requires redundancy is obviously more expensive than one that doesn't, and not only that, it is worse from an environmental standpoint. (We hit 415 ppm of CO2 this year.)

Still, despite my hostility to so called "renewable energy," I am flexible enough to be intrigued by what is, by far, the largest source of it: biomass. As practiced now, biomass is a health and environmental disaster: slightly less than half of the world's 7 million air pollution deaths each year derive from it; the Mississippi River Delta system, along with other bodies of water, has been destroyed by agricultural fertilizer runoff to make corn ethanol; and the Indonesian and Malaysian rain forests are being rototilled to make biodiesel to meet German "renewable portfolio standards."

Nevertheless, biomass relatively efficiently captures carbon dioxide from the air, and this is a non-trivial task that we leave for Greta Thunberg's generation to accomplish with depleted resources and a degraded planet. Biomass, especially algal biomass, is fast growing, self-replicating, and capable of covering the large surface area required to overcome the entropy of mixing that makes cleaning up the dangerous fossil fuel waste carbon dioxide so difficult. Thus it cannot be ignored.
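That entropy-of-mixing penalty can be put on a rough quantitative footing: for an ideal gas, the minimum thermodynamic work to extract one mole of CO2 from a large, well-mixed reservoir at mole fraction x is about RT·ln(1/x), which is why capture from dilute air is so much harder than capture from a concentrated stream. A back-of-envelope sketch:

```python
# Back-of-envelope (ideal-gas) minimum thermodynamic work to pull one
# mole of CO2 out of a well-mixed gas reservoir: W = R*T*ln(1/x).
# This is the thermodynamic floor; real capture processes need
# several times this amount.
import math

R = 8.314     # J/(mol*K), molar gas constant
T = 298.15    # K, ambient temperature

def min_separation_work(x_co2):
    """Ideal minimum work (kJ per mol CO2) to extract CO2 from a large
    reservoir at mole fraction x_co2 (dilute-limit approximation)."""
    return R * T * math.log(1.0 / x_co2) / 1000.0

for ppm in (415, 100_000, 1_000_000):   # air, ~flue gas, pure CO2
    x = ppm / 1_000_000
    print(f"{ppm:>9} ppm: {min_separation_work(x):5.1f} kJ/mol")
```

At 415 ppm the floor is roughly 19 kJ per mole of CO2, several times what it is for a concentrated flue-gas stream, and zero for pure CO2; a growing plant or alga pays this bill with sunlight over a large collection area.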

This brings me to the paper cited at the outset. This is one way to make jet fuel so that Greta Thunberg can feel safe to fly someday, but there are others, one of my personal favorites being that proposed by the US Naval scientist Heather Willauer, although in truth it's less than ideal, since it requires an electricity intermediate and is thus thermodynamically questionable.

The best way to deal with biomass, in my opinion, is heat-driven gasification, which is what the paper cited at the opening of this post is about.

The cartoon graphic introducing the paper:

From the introduction:

With increased aviation travel and limited substitutes in this area, jet fuel demand has increased significantly. The traditional jet fuel consumes huge fossil energy and leads to serious environmental pollution. With the global warming effect, biomass, as a renewable resource to produce jet fuel, has attracted progressively more attention at the global scale. In recent years, the conversion routes of jet fuel derived from biomass mainly include catalytic cracking-olefin oligomerization, hydroprocessed esters, and fatty acids, Fischer–Tropsch (FT) synthesis, hydrothermal liquefaction, and fermentation alcohol synthesis.(1−10) However, the environment, resource, and economic performances of biomass-based jet fuel need to be evaluated and compared for seeking beneficial technical pathways.

The life cycle assessment (LCA) is a method for evaluating the environmental impact of a product throughout its life cycle. In order to compare the influence of different processes of biomass-based jet fuel on the environment and resources, some literature studies carried out a variety of life cycle evaluations of the abovementioned conversion processes. These studies mainly focused on the contribution of biomass-based liquid fuel to mitigate the greenhouse effect. Moreover, some comprehensive evaluations were based on the fuzzy mathematics method, such as the analytic hierarchy process (AHP).
Several researchers(7−9) performed the LCA of biomass-based jet fuel derived from hydrothermal liquefaction (HTL) of microalgae. Two HTL processes of algal jet fuel based on the different circumstances were analyzed, and Monte Carlo simulation and sensitivity analysis were completed. The results showed that the transportation of microalgae led to the increase in the life cycle climate change impacts, and compared to the process of petroleum-based jet fuel, greenhouse gas emissions could be reduced by 76.0% based on the optimized process of algal jet fuel.

Klein et al.(3,4) compared different routes for renewable jet fuel (RJF) production integrated with sugarcane biorefineries in Brazil based on the technoeconomic and environmental assessments. They concluded that hydroprocessed esters and fatty acids exhibited the highest production potential and FT synthesis showed the best economic performance among the studied scenarios of RJF. Moreover, all conversion technologies of RJF could reduce greenhouse gas emissions by more than 70% compared to the process of petroleum-based jet fuel...(10)

...Moreover, many researchers have integrated the AHP into LCA to evaluate the comprehensive performance of products.(14−16) Tao et al.(6) obtained a resource-environment-economic comprehensive performance evaluation model of biomass-based jet fuel from biomass gasification and FT synthesis based on AHP. They showed that the case of biomass-based jet fuel combined with waste heat for power generation exhibited a lower environmental impact than that combined with heat supply directly and the reduction of environmental impact indicators was in the range of 11.7–40.8%. Compared to petroleum-based jet fuel, the global warming potential (GWP) of biomass-based jet fuel reduced by 52.6–71.9% and the nonrenewable resource consumption reduced by 84.4–93.6%. Different environmental impact distribution methods, such as based on economic value distribution, energy distribution, and mass distribution, used in the biomass growth stage led to significant changes in the environmental evaluation, in particular, for GWP and eutrophication potential (EP). It could also be found that the comprehensive performance of biomass-based jet fuel is the most sensitive to feedstock consumption...

...The method of monetization is more objective and rational, which has the same criteria for weighting economic performance, resource performance, and environmental performance. Therefore, the comprehensive evaluation obtained is fairer to the entire society, and its decision-making meaning is more perfect. This study not only employed the monetization method to reflect economic benefits but also completed the comprehensive analysis through the monetization method on resource and environment, to avoid the subjective factors in comprehensive evaluation.

Some graphics from the text beginning with a process flow sheet diagram:

Figure 1. Process of jet fuel from biomass gasification and Fischer–Tropsch synthesis

It is important to note that this analysis relies on combustion heat, and not nuclear heat, and therefore can be improved upon. Specifically, in this diagram the heat is generated by the combustion of biomass, reducing the amount that can be recovered as a biofuel. However, I very much like the FT approach and the heat exchange networks.

Two cases are considered:
Considering petroleum-based jet fuel as a reference, the performances of economy, resource, and environment were reflected by relative economic benefits (REBs), nonrenewable resources saving benefits (NRSBs), and pollution mitigation benefits (PMBs). These indicators are defined in the subsequent sections. Each alkane mixture is separated by distillation, and then the final product jet fuel (C8−C16), gasoline (C5−C7), and diesel (C17−C20) are obtained, in addition to by-product wax. The steam generated by the waste heat is supplied for two cases, that is, heat directly (Bio-FTJ-1) and power generation (Bio-FTJ-2) cases.
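As a toy illustration of the distillation cuts quoted above (the function and its name are mine; only the carbon-number ranges come from the excerpt):

```python
# Hypothetical helper (not from the paper): bucket a Fischer-Tropsch n-alkane
# into the product cuts quoted above, by carbon number.
def product_cut(n_carbons: int) -> str:
    """Assign an n-alkane to a distillation cut per the ranges in the text."""
    if 5 <= n_carbons <= 7:
        return "gasoline"      # C5-C7
    if 8 <= n_carbons <= 16:
        return "jet fuel"      # C8-C16
    if 17 <= n_carbons <= 20:
        return "diesel"        # C17-C20
    if n_carbons > 20:
        return "wax"           # by-product
    return "light gas"         # C1-C4, not one of the quoted cuts

print(product_cut(12))  # jet fuel
```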

Here is the basis for the LCA analysis; note the presence of fertilizers and pesticides. These may not be necessary if the water utilized to grow the biomass is municipal waste water or agricultural run-off water, since these are potential media for algae growth. The big problems with algae growth are dewatering and transfer, both of which can be addressed to improve the process: dewatering by the use of waste heat, transport by direct flow into reactors. (This would also have the added advantage of recovering phosphorus, the depletion of which is another very, very, very, very serious matter we are dumping, with contempt, on Greta Thunberg's generation. How DARE we?)

Figure 2. Scope of LCA of Bio-FTJ systems.

An issue often ignored is the material cost of so called "renewable energy," which calls into question how "renewable" it is. This issue - this is a serious paper, not hand waving - is not ignored here:

Table 2: from the paper:

Costs of this process, again analyzed in the absence of nuclear heat:

It is important to note that in the case of dangerous petroleum fuels, the economic costs of the destruction they cause - the costs of deaths and disease from air pollution, and the cost of climate change, i.e. "external costs" - are not included. If they were, petroleum would be too expensive to use, inspiring idiots like Jim Kunstler to carry on about how we'll all die without oil, that "peak oil" nonsense. These external costs are not included in the analysis of the cost of petroleum jet fuel in Table 4, although they should be in an LCA paper. I do not mean to criticize the authors or their fine work here, but they are buying into the fact that we blindly accept these enormous dangerous fossil fuel costs by habit, while we all wait breathlessly for the grand renewable nirvana that never comes, and not because it is morally or intellectually justifiable.

Table 4:

For the next few graphics, there is a parameter called "ICP" for Indicator of Comprehensive Performance. There are also parameters associated with the weighting of these indicators, described in the text as follows:

Considering petroleum-based jet fuel as a reference, the performances of economy, resource, and environment were reflected by relative economic benefits (REBs), nonrenewable resources saving benefits (NRSBs), and pollution mitigation benefits (PMBs). These indicators are defined in the subsequent sections.

The weighting factors utilized in the analysis of these are assigned in the graphic below, where the weighting factors are described thusly:

α, β, and γ represent the weighting coefficients of REB, NRSB, and PMB, respectively.
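The comprehensive indicator appears to be a weighted sum of the three benefits. A minimal sketch, assuming the linear form ICP = α·REB + β·NRSB + γ·PMB (my reading of the text, not necessarily the paper's exact formula), with made-up placeholder benefit values for the two cases:

```python
# Hedged sketch of the weighted-sum comprehensive indicator described above.
# The linear form and all numeric values here are illustrative assumptions.
def icp(reb: float, nrsb: float, pmb: float,
        alpha: float, beta: float, gamma: float) -> float:
    assert abs(alpha + beta + gamma - 1.0) < 1e-9, "weights should sum to 1"
    return alpha * reb + beta * nrsb + gamma * pmb

# Sweep the weighting coefficients to see how the ranking of the two cases
# depends on them, in the spirit of Figure 3 (benefit numbers are made up).
case_1 = dict(reb=0.8, nrsb=0.4, pmb=0.5)   # hypothetical Bio-FTJ-1
case_2 = dict(reb=0.2, nrsb=0.7, pmb=0.8)   # hypothetical Bio-FTJ-2
for alpha in (0.2, 0.5, 0.8):
    beta = gamma = (1.0 - alpha) / 2.0
    print(alpha,
          icp(**case_1, alpha=alpha, beta=beta, gamma=gamma),
          icp(**case_2, alpha=alpha, beta=beta, gamma=gamma))
```

With these placeholder numbers, weighting economics heavily (large α) favors Bio-FTJ-1, while weighting resources and pollution favors Bio-FTJ-2, which is the qualitative tension the paper's Figure 3 explores.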

Figure 3. ICP with different weighting coefficients.

The next graphic, on the sensitivity of benefits to the price of oil, depends on the dubious assumption, with which we all live, that dangerous fossil fuel interests are allowed to dump the dangerous fossil fuel waste without charge.

Figure 4. Sensitivity of ICP to different prices.

A genuflection to the fact that dangerous fossil fuel wastes can be dumped without charges accruing to users and dangerous fossil fuel companies:

Figure 5. Sensitivity of ICP to resource consumption and pollutant emission.

Figure 6. Influence of into-factory price of stalks on performance.

The next graphic refers to the price of stalks delivered to the plant; this is not an algae based process.

Figure 7. Influence of stalk consumption on performance.

And the final figure refers to the influence of the cost of oil, which is subsidized by lung tissue, the destruction of habitats, and the destruction of the future of Greta Thunberg's generation and all generations after hers.

Figure 8. Influence of the price of crude oil on REB.

Some conclusions to the paper:

Compared to Bio-FTJ-1, Bio-FTJ-2 can achieve greater benefits in saving nonrenewable resource and can emit less CO2 and other pollutants because it significantly reduces the consumption of external power input. However, owing to the high production cost of Bio-FTJ-2, its economic benefit is very low. Therefore, ICP of Bio-FTJ-2 is lower than that of Bio-FTJ-1.

According to the sensitivity analysis, the comprehensive performance of the two processes is highly sensitive to the price of crude oil and stalk consumption and the Bio-FTJ-1 is highly sensitive to electricity consumption. The higher the price of crude oil is, the better the comprehensive performance of the Bio-FTJ is. The results of this study indicate that the comprehensive performance of Bio-FTJ can be improved significantly by the reduction of the consumption of stalks and external power input in the production.

I trust you're having a nice day.

Photochemical Reduction of the Soluble Radioactive Pertechnetate Ion to Insoluble TcO2.

The paper I'll discuss in this post is this one: Efficient Photocatalytic Reduction of Aqueous Perrhenate and Pertechnetate (Shi et al, Environ. Sci. Technol. 2019, 53, 18, 10917-10925)

Technetium is a synthetic element - the element in the periodic table with the lowest atomic number for which no stable isotopes exist - that is often regarded as so called "nuclear waste," as it is in the paper I'm about to discuss. (I personally argue that there is no such thing as "nuclear waste" in the absence of stupidity, fear and ignorance, but that's my opinion. Fear and ignorance are far more popular and far more powerful than any of my opinions will ever be.)

The most common use for technetium is in medicine; its short-lived nuclear isomer Tc-99m is the workhorse of medical imaging as well as some treatment modalities. It decays to the same isotope as is found in used nuclear fuel, Tc-99. People who have undergone medical testing and treatment with Tc-99m generally piss the resultant Tc-99 decay product away, because in general it is in the form of the highly soluble TcO4- anion, known as the pertechnetate ion. In addition, unlike other soluble radioactive fission products such as isotopes of cesium and strontium (although strontium sulfate and carbonate are insoluble, its nitrate is quite soluble), the pertechnetate ion has a fairly low affinity for adhesion to minerals. It migrates quite readily.
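As a back-of-envelope aside (my arithmetic, not the paper's): the specific activity of Tc-99 follows directly from its roughly 2.13 × 10^5 year half-life, and shows why it is more a persistence problem than an intense radiological one:

```python
import math

# Specific activity of Tc-99 from its half-life: A = (ln 2 / t_half) * N_A / M.
N_A = 6.02214076e23                       # Avogadro constant, 1/mol
M = 99.0                                  # approximate molar mass of Tc-99, g/mol
t_half_s = 2.13e5 * 365.25 * 24 * 3600    # half-life in seconds
lam = math.log(2) / t_half_s              # decay constant, 1/s
activity_per_gram = lam * N_A / M         # Bq/g
print(f"{activity_per_gram:.2e} Bq/g")
```

This works out to roughly 0.6 GBq (about 17 mCi) per gram, which is quite modest as fission products go.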

Historically fission product technetium from commercial nuclear reprocessing has been dumped into the ocean. This was true at both Sellafield in the UK and at La Hague in France, which is unfortunate, not because there is an incredible risk to the environment because of this practice, but because the potentially valuable element was not recovered.

Technetium metal has many interesting properties, both as a surrogate for and a potential replacement of the relatively rare and expensive element rhenium, which is essential to modern technology. In some ways it is actually superior to rhenium, for example in dehydrogenation reactions of alcohols, chemistry which conceivably might play a role in eliminating the mining of dangerous petroleum - with all the observed tragedy that represents - for the production of polymers: (cf. Theoretical design of a technetium-like alloy and its catalytic properties, Koyama and Xie, Chem. Sci., 2019, 10, 5461-5469. The authors of this paper claim, without much justification, that technetium is "too dangerous" to use and therefore attempt to duplicate its electronic structure by alloying other metals.)

The pertechnetate ion is an excellent corrosion inhibitor, and personally I have been extremely interested in technetium alloys, some of which have extremely valuable properties. The hardness of technetium tetraboride is exceeded only by its rhenium analogue.

I'm not necessarily a big fan of nitric acid dissolution of used nuclear fuels - I think there are better approaches to performing this essential task - but the reality is that this has historically been, and probably still is, the most prevalent way the valuable materials in them are recovered. In nitric acid type dissolutions, the chemical form of technetium is generally the pertechnetate ion. This is, for example, how it is found in the Hanford tanks that dumb anti-nukes always carry on about, even though they are spectacularly uninterested in the 7 million air pollution deaths that occur each year because we don't have more technetium.

The recovery of technetium for the exploitation of its many useful properties, now that it is available to humanity, will therefore require facile methods for its removal from aqueous solutions of pertechnetate, which is why this paper caught my eye.

From the introduction, covering some of what I've just said and some things I didn't say:

Technetium-99 (99Tc), a β-emitting isotope (βmax = 293.7 keV), is generated from thermal-neutron-induced fission of uranium-235 (235U) and spontaneous fission of 238U in the earth’s crust.(1,2) 99Tc is also formed from the decay of the medical radioisotope 99mTc with a half-life of only 6.0 h.(3) The most common chemical form is pertechnetate 99TcO4–, which is of particular environmental concern due to the long half-life of 99Tc (2.13 × 10^5 years)(1) and the resistance to adsorption on mineral surfaces and sediments that results in migration with potential ecosystem risks.(4−7)

Because all technetium isotopic species are radioactive, research progress is challenging. As a result, rhenium (Re) is often used as a nonradioactive chemical analogue of 99Tc.(8−11) One of the various methods used for removal of 99TcO4–/ReO4– from aqueous solution is conventional solvent extraction.(12,13) Nevertheless, there remain shortcomings, such as utilization of large amounts of toxic and volatile organic reagents, resulting in production of secondary wastes. Alternative ion exchange methods(14−16) require high quality of raw liquid to avoid column blockage. Despite a recent breakthrough toward TcO4– elimination via molecular recognition,(17) long-term storage stability of Tc-containing materials requires further attention, and large-scale practical applications have not been demonstrated.(18) Solid waste forms for 99Tc immobilization include metals such as Tc-Zr alloys(19) and borosilicate glasses.(20) Disadvantages of the latter are oxidation and release of volatile Tc molecules during high-temperature vitrification.(1)

An appealing method to immobilize 99Tc is reduction of soluble Tc(VII) to sparingly soluble Tc(IV) with removal from aqueous solution as 99TcO2·nH2O species,(8,21) which can be separated by physical filtration and then converted to metal or other waste forms for long-term disposal.(19,20)

Common reducing agents such as SO32–, Sn2+, Fe2+,(9,22,23) and biomass(24,25) are exhausted in one cycle and not readily reused. Using Fe(0)/Fe(II) as the reductant couple, 99Tc/Re was sequestrated using a simultaneous adsorption–reduction strategy.(21,26−28) Electrochemical methods(29−31) involve toxic chemicals, and furthermore, the presence of SO42– suppressed Re(VII) reduction in aqueous solution. Although γ-radiation-induced reduction(32) via hydrated electrons might efficiently reduce and separate Re(VII), the conditions are impractical. Photochemical-induced reduction(31,32) of Re(VII) using broadband UV or laser irradiation over 6 h afforded 94.7% recovery of Re; unfortunately, the high molar absorptivity of Re(VII) limits the practical concentration of Re(VII).

Heterogeneous semiconductor-based photocatalytic reduction of heavy metal ions such as Cu2+, Hg2+, Ag+, U(VI), and Cr(VI)(33−37) has been proposed. Many photocatalysts are regarded as environmentally friendly materials because of their chemical inertness and biological compatibility in natural systems. For example, titanium dioxide (TiO2) is a good prospect for photocatalytic reduction and removal of metal ions due to its high resistance to photocorrosion, nontoxicity,(38) low environmental pollution, regeneration ability, low cost, and convenient operations.(38,39) Evans et al.(40) reported selective removal (98%) of uranium from waste liquid containing strong complexing agents using TiO2 as a photocatalyst. Wang et al.(41) prepared a TiO2/g-C3N4 heterojunction composite that facilitated rapid separation and transfer of photogenerated electrons, thus achieving efficient reduction and fixation of uranium...

...The objective of this study was to provide fundamental understanding of photocatalytic 99Tc/Re reduction and removal using TiO2 nanoparticles in the presence of HCOOH. Most of this work was still conducted using nonradioactive ReO4– as a surrogate for 99TcO4–.(8,42) Anyway, the reported 99Tc(VII/IV) redox potential (E0 = +0.74 V) is somewhat more positive than that for Re(VII/IV) (E0 = +0.51 V), which means that photocatalytic reduction of Tc(VII) should be more energetically favorable. In addition, the reduction/removal mechanism was elucidated by photoelectrochemical measurements, electron paramagnetic resonance spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy. These results suggest an environmentally friendly photocatalytic approach for 99TcO4–/ReO4– removal and sequestration from aqueous solution.
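A quick sanity check of the quoted claim that Tc(VII) reduction is "more energetically favorable" than Re(VII): converting the quoted standard potentials to Gibbs free energies via ΔG° = −nFE° for the three-electron M(VII) → M(IV) reduction (my arithmetic, not the paper's):

```python
# Convert the quoted standard potentials to standard Gibbs free energies,
# dG0 = -n*F*E0, for the three-electron M(VII) -> M(IV) reduction.
F = 96485.0   # Faraday constant, C/mol
n = 3         # electrons transferred in the VII -> IV reduction
dg0 = {}      # results in kJ/mol
for metal, e0_volts in [("Tc", 0.74), ("Re", 0.51)]:
    dg0[metal] = -n * F * e0_volts / 1000.0
    print(f"{metal}(VII) -> {metal}(IV): dG0 = {dg0[metal]:.0f} kJ/mol")
# Tc comes out near -214 kJ/mol versus about -148 kJ/mol for Re, so the
# technetium reduction is indeed the more thermodynamically favorable one.
```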

Titanium dioxide is a very cool photocatalyst in general, love it!

The experimental light source here is in the UV range, 320 nm, which means we can't apply, in a verified way, the magic word on which we've bet the planetary atmosphere with poor results, "solar." The authors are happy to apply this word, although the experiments, using a xenon lamp, were conducted with UVA radiation.

UV radiation is continuously available by down-converting X-rays and gamma rays from fission products with barium fluoride, so this should not represent much of a problem in a putative industrial reprocessing plant.

Most of the work was performed using a rhenium surrogate for technetium, although ultimately technetium was directly utilized:

Tc Removal

99Tc was obtained as a 2% HNO3 stock solution of potassium pertechnetate (KTcO4) from China Institute of Atomic Energy. The 99Tc experiments were performed in a special radiological laboratory. In accordance with the above experimental protocol for Re, the corresponding 99TcO4– solution was illuminated for 150 min under the identified optimal Re(VII) reduction/removal conditions. Residual concentration of 99Tc was analyzed by a liquid scintillation counter (Tri-Carb, PerkinElmer). Aliquots of 0.5 mL were periodically collected during light irradiation and filtered through 0.2 μm Millipore membranes before analysis. 0.2 mL of the filtrate was then mixed with 5 mL of liquid scintillation cocktail (ULTIMA Gold, PerkinElmer) and held in a 6 mL plastic scintillation vial for measurements. The reacted suspension was stirred in air to observe the reoxidation and release of reduced Tc.

Some pictures from the text:

Figure 1. (A) Removal of Re(VII), for no TiO2 and 0.4 g L–1 TiO2 in different conditions; pH = 3, [HCOOH] = 1%, [Re(VII)] = 5 mg L–1. Removal of Re(VII) for different concentrations of HCOOH, for (B) no light and (C) UV–visible irradiation; pH = 2, [Re(VII)] = 10 mg L–1. (D) Removal of Re(VII) with different organic additives under light irradiation; pH = 3, [organic additive] = 1%, [Re(VII)] = 5 mg L–1. V = 50 mL, T = 298 K throughout.

Figure 2. (A) First-derivative EPR spectra of DMPO spin adducts. In the dark: TiO2, HCOOH, and TiO2/HCOOH/Re(VII); under light: TiO2, HCOOH, TiO2/HCOOH, and TiO2/HCOOH/Re(VII). (B) TiO2 current–potential measurements: (black ■) Idark with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH + 5 mg L–1 Re(VII); (blue ●) Iphoto with 0.1 mol L–1 Na2SO4; (red ▲) Iphoto with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH; (green ▼) Idark with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH + 5 mg L–1 Re(VII).

Figure 3. Time profiles of Re(VII) reduction during the irradiation of TiO2 suspensions with N2 bubbling, V = 50 mL, T = 298 K. (A) Various dosages of TiO2, [HCOOH] = 1%, [Re(VII)] = 10 mg L–1, pH = 2. (B) Effects of initial Re(VII) concentration, [HCOOH] = 1%, 0.2 g L–1 TiO2, pH = 2. (C) Influence of NO3– concentration, [HCOOH] = 1%, [Re(VII)] = 10 mg L–1, 0.2 g L–1 TiO2, pH = 2. (D) Solution pH values, [HCOOH] = 0.2%, [Re(VII)] = 10 mg L–1, 0.4 g L–1 TiO2.

Figure 4. (A) Cycling runs of TiO2 for photocatalytic reduction of Re(VII). Time profiles of Re(VII) reduction during the irradiation of 0.4 g L–1 TiO2 suspensions at pH = 3, with N2 bubbling, [HCOOH] = 1%, [Re(VII)] = 5 mg L–1, V = 50 mL, T = 298 K. (B) Color change of both solid and solution before and after photocatalysis.

Figure 5. Time profiles of 99Tc(VII) and Re(VII) reduction during the irradiation of 0.4 g L–1 aqueous TiO2 suspensions at pH = 3, with N2 bubbling, [HCOOH] = 1%, [99Tc(VII)] or [Re(VII)] = 0.05 mmol L–1, [NO3–] = 20 mmol L–1, V = 50 mL, T = 298 K.

I'm not convinced this process is necessarily worthy of industrialization. The text suggests that nitrate is a problem.

I think it's time to move past the workhorse Purex-type solvent extraction process, and there are many other approaches to the recovery of technetium for use, but one can imagine this process being of some utility in some places, for example, in extant situations where pertechnetate is migrating in the environment.

I trust you're having a nice afternoon.

How evolution builds genes from scratch.

The news item I'll discuss in this post is this one: How evolution builds genes from scratch

I don't think I logged into Nature when I saw it, so I think it's open access.

A lot of my day-to-day work involves proteomics, either directly or indirectly. I am therefore often required to think about protein isoforms, many of which arise from genetic differences in people and related organisms; there is little more fascinating than seeing forms highly conserved throughout evolution in comparison to variable, and indeed vestigial, proteins and sequences.

A surprise of the automated gene sequencing that produced the human genome sequence, as well as the subsequent genome mapping of many other species, is how much "junk DNA" there is, some of which consists of artifacts of ancient viral infections in ancestors or ancestral organisms.

This news article suggests that new genes can sometimes arise from turning on "junk DNA."

Some excerpts:

Some cod species have a newly minted gene involved in preventing freezing. Credit: Paul Nicklen/NG Image Collection

In the depths of winter, water temperatures in the ice-covered Arctic Ocean can sink below zero. That’s cold enough to freeze many fish, but the conditions don’t trouble the cod. A protein in its blood and tissues binds to tiny ice crystals and stops them from growing.

Where codfish got this talent was a puzzle that evolutionary biologist Helle Tessand Baalsrud wanted to solve. She and her team at the University of Oslo searched the genomes of the Atlantic cod (Gadus morhua) and several of its closest relatives, thinking they would track down the cousins of the antifreeze gene. None showed up. Baalsrud, who at the time was a new parent, worried that her lack of sleep was causing her to miss something obvious.

But then she stumbled on studies suggesting that genes do not always evolve from existing ones, as biologists long supposed. Instead, some are fashioned from desolate stretches of the genome that do not code for any functional molecules. When she looked back at the fish genomes, she saw hints this might be the case: the antifreeze protein — essential to the cod’s survival — had seemingly been built from scratch1.

The cod is in good company. In the past five years, researchers have found numerous signs of these newly minted ‘de novo’ genes in every lineage they have surveyed. These include model organisms such as fruit flies and mice, important crop plants and humans; some of the genes are expressed in brain and testicular tissue, others in various cancers...

...Back in the 1970s, geneticists saw evolution as a rather conservative process. When Susumu Ohno laid out the hypothesis that most genes evolved through duplication2, he wrote that “In a strict sense, nothing in evolution is created de novo. Each new gene must have arisen from an already existing gene.”

Gene duplication occurs when errors in the DNA-replication process produce multiple instances of a gene. Over generations, the versions accrue mutations and diverge, so that they eventually encode different molecules, each with their own function. Since the 1970s, researchers have found a raft of other examples of how evolution tinkers with genes — existing genes can be broken up or ‘laterally transferred’ between species. All these processes have something in common: their main ingredient is existing code from a well-oiled molecular machine...

...But genomes contain much more than just genes: in fact, only a few per cent of the human genome, for example, actually encodes genes. Alongside are substantial stretches of DNA — often labelled ‘junk DNA’ — that seem to lack any function. Some of these stretches share features with protein-coding genes without actually being genes themselves: for instance, they are littered with three-letter codons that could, in theory, tell the cell to translate the code into a protein.

It wasn’t until the twenty-first century that scientists began to see hints that non-coding sections of DNA could lead to new functional codes for proteins. As genetic sequencing advanced to the point that researchers could compare entire genomes of close relatives, they began to find evidence that genes could disappear rather quickly during evolution...

...Some of these genes-in-waiting, or what Carvunis and her colleagues called proto-genes, were more gene-like than others, with longer sequences and more of the instructions necessary for turning the DNA into proteins. The proto-genes could provide a fertile testing ground for evolution to convert non-coding material into true genes. “It’s like a beta launch,” suggests Aoife McLysaght, who works on molecular evolution at Trinity College Dublin...
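The minimal "gene-like" feature a stretch of junk DNA can acquire is an open reading frame: an ATG start codon followed, in frame, by a stop codon. A toy scan (my own illustration, not from the article) makes the idea concrete:

```python
# Toy ORF scanner: find substrings that begin with an ATG start codon and end
# at the first in-frame stop codon, the skeleton of a protein-coding gene.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_codons: int = 2) -> list[str]:
    """Return candidate ORFs of at least min_codons codons before the stop."""
    orfs = []
    for start in range(len(dna) - 2):
        if dna[start:start + 3] != "ATG":
            continue
        # Walk forward in steps of one codon until an in-frame stop appears.
        for pos in range(start + 3, len(dna) - 2, 3):
            if dna[pos:pos + 3] in STOPS:
                if (pos - start) // 3 >= min_codons:
                    orfs.append(dna[start:pos + 3])
                break
    return orfs

print(find_orfs("GGATGAAATTTTAGCC"))  # ['ATGAAATTTTAG']
```

A random stretch of non-coding DNA will occasionally contain such frames by chance, which is exactly why "proto-genes" are plausible raw material for evolution.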

The nice cartoon in the news article:

Interesting I think.

From bomb to Moon: a Nobel laureate of principles

When I was a kid, I used to spend a lot of time at Urey Hall on the UCSD campus. Urey, of course, is the discoverer of deuterium.

At that time of my life I didn't think all that much about Urey, which was my loss. (I was a stupid kid and am now a somewhat less stupid adult.)

Nature has a review of a biography of Urey, and I'm going to put it on my list of "need to read someday."

From bomb to Moon: a Nobel laureate of principles

I think the review is open access, but if it isn't, some excerpts:

The Life and Science of Harold C. Urey Matthew Shindell University of Chicago Press (2019)

After witnessing the 1945 Trinity atomic-bomb test, the theoretical physicist J. Robert Oppenheimer recalled Hindu scripture: “Now I am become Death, the destroyer of worlds.” Although this is often interpreted as admitting moral culpability on the part of the Manhattan Project’s scientific director, Oppenheimer remained a central player in the nuclear-weapons establishment until he lost his security clearance in the mid-1950s.

Harold Urey also worked for the Manhattan Project. But by contrast, the Nobel-prizewinning chemist distanced himself from nuclear weapons development after the war. His search for science beyond defence work prompted a shift into studying the origins of life and lunar geology. Now, the absorbing biography The Life and Science of Harold C. Urey by science historian Matthew Shindell uses the researcher’s life to show how a conscientious chemist navigated the cold war.

Shindell argues that Urey’s pious upbringing underpinned his convictions about the dangers of a nuclear arms race, and his commitment to research integrity. Urey grew up a minister’s son in a poor Indiana farming family belonging to a plain-living Protestant sect, the Church of the Brethren. Progressing through increasingly diverse educational environments, culminating in a PhD at the University of California, Berkeley, Urey became self-conscious about the zealousness of his family’s faith. He also found the path to a cosmopolitan, middle-class life.

In the 1920s, Urey was among a small group of chemists who collaborated closely with physicists. Working at Niels Bohr’s Institute for Theoretical Physics at the University of Copenhagen, he kept abreast of developments in quantum mechanics. There, and on travels in Germany, he met the likes of Werner Heisenberg, Wolfgang Pauli and Albert Einstein. But Urey decided he lacked the mathematical skills to make theoretical advances in quantum chemistry. Moving back to the United States, he started both a family and an academic career.

At Johns Hopkins University in Baltimore, Maryland, and later at Columbia University in New York City, Urey taught quantum mechanics to chemists, while setting out on the trail that led him to deuterium. In 1931, he discovered this isotope of hydrogen. Predicted on the basis of work by Bohr, Frederick Soddy, and J. J. Thomson, its existence had been doubted by many chemists and physicists. Urey’s identification won him the Nobel three years later. By this time, he had also co-authored one of the first texts in English on quantum mechanics as applied to molecular systems, the 1930 Atoms, Quanta and Molecules.

Urey’s continuing work on stable isotopes of other chemical elements, such as nitrogen and oxygen, led to important applications in biochemistry and geochemistry, including the pioneering use of isotopic labels to study metabolic pathways. Living in New York also led Urey to political liberalism. He became aware of the anti-Semitism affecting Jewish scientists, and the lack of opportunities for women scientists. A generous mentor, he shared his Nobel prize money with two collaborators, and split a grant he had been awarded with the young Isidor Rabi (who later discovered nuclear magnetic resonance)...

...The Second World War changed Urey’s life, as it did those of most physical scientists and researchers in many countries. His expertise in isotopes made him valuable to the Manhattan Project. Here, he eventually headed a massive team of scientists and engineers working on the separation of uranium isotopes using gaseous diffusion methods. However, he was ill-suited to the pressure of managing this technologically complex and cumbersome project, and Leslie Groves — the project’s overall director — regarded him with suspicion. Even before the war’s end, Urey became deeply disenchanted with working for the military...

...After the war, Urey used his laureate status to voice alarm about the prospect of nuclear warfare. He backed international control through world government as a way to control the military future of atomic energy. This was not a radical view in 1946...

...Over this harrowing period, Urey lost faith in the ability of modern secular society to manage the new threats of the atomic age. Although he had long abandoned his parents’ religion, he began to argue that Judaeo-Christianity was key to democracy. He attributed the success of science itself, with its commitments to honesty and credit, to religious ethics...

...In the late 1940s, Urey used his expertise in mass spectrometry to begin work in geochemistry, and then in planetary science. It was a way to escape the orbit of the nuclear weapons establishment (although he still advised the US Atomic Energy Commission). With chemistry graduate Stanley Miller, he tested hypotheses on the origins of life by Soviet biochemist Alexander Oparin and biologist J. B. S. Haldane, and successfully produced amino acids by sparking a solution of water, methane, ammonia and hydrogen. In 1952, Urey published The Planets, a chemical treatise on the formation of the Solar System...

...Urey became influential during the early days of NASA, formed after the 1957 launch of the Soviet satellite Sputnik, offering the agency persuasive reasons to prioritize exploration of the Moon over other bodies. In 1969, he analysed lunar rocks collected during the Apollo 11 mission, which supported his theory of the Moon’s common origin with Earth. He wanted the well-funded agency to test theories about the origins of the Solar System — experimentation beyond the reach of individual university scientists. Despite his influence, he was disappointed in this: NASA focused on crewed space exploration over questions of cosmogony.

Sounds like a cool book about a cool life, no?

Have a nice weekend.

The terrible day of the wisecrack.

From the Wikipedia entry on the life of Dorothy Parker:

Following Campbell's death, Parker returned to New York City and the Volney Residential hotel. In her later years, she denigrated the Algonquin Round Table, although it had brought her such early notoriety:

These were no giants. Think who was writing in those days—Lardner, Fitzgerald, Faulkner and Hemingway. Those were the real giants. The Round Table was just a lot of people telling jokes and telling each other how good they were. Just a bunch of loudmouths showing off, saving their gags for days, waiting for a chance to spring them... There was no truth in anything they said. It was the terrible day of the wisecrack, so there didn't have to be any truth...[61]

Dorothy Parker

Of course, things are much worse in the age of Twitter, the age of anti-thinking.

We are all in the Algonquin Round Table, and that is not a good thing.

Mapping 123 million neonatal, infant and child deaths between 2000 and 2017

The paper I'll discuss in this post is this one: Mapping 123 million neonatal, infant and child deaths between 2000 and 2017.

This paper is open access, and anyone who cares can read it in its entirety.

The argument is often made - and it's a very good one - that the carrying capacity of the planet for human beings has been exceeded now for many decades. Thus it might seem that an argument for saving the lives of children under the age of five, while consistent with human ethics, may conflict with environmental ethics.

I have long argued that this conflict is actually invalid. The countries with the lowest birth rates are precisely those where people are secure in their homes, have sufficient health care, shelter, and food, and where the rights of women in particular are most actively supported. The problem of exploding populations is therefore, in my opinion, actually an issue of poverty and human development.

This is precisely why I personally focus the overwhelming portion of my private scientific interests on clean energy, because without clean energy, we cannot eliminate poverty and advance human development.

From article 25 of the Universal Declaration of Human Rights, approved by the United Nations in 1948, but since honored more in the breach than in practice:

Article 25.

(1) Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.

Universal Declaration of Human Rights

Note that Article 25 is not about electric cars and McMansions with solar cells on the roofs. Modern liberalism differs from 1948 liberalism; I personally prefer the latter.

From the abstract:

Since 2000, many countries have achieved considerable success in improving child survival, but localized progress remains unclear. To inform efforts towards United Nations Sustainable Development Goal 3.2—to end preventable child deaths by 2030—we need consistently estimated data at the subnational level regarding child mortality rates and trends. Here we quantified, for the period 2000–2017, the subnational variation in mortality rates and number of deaths of neonates, infants and children under 5 years of age within 99 low- and middle-income countries using a geostatistical survival model. We estimated that 32% of children under 5 in these countries lived in districts that had attained rates of 25 or fewer child deaths per 1,000 live births by 2017, and that 58% of child deaths between 2000 and 2017 in these countries could have been averted in the absence of geographical inequality. This study enables the identification of high-mortality clusters, patterns of progress and geographical inequalities to inform appropriate investments and implementations that will help to improve the health of all populations.

From the introduction:

Gains in child survival have long served as an important proxy measure for improvements in overall population health and development1,2. Global progress in reducing child deaths has been heralded as one of the greatest success stories of global health3. The annual global number of deaths of children under 5 years of age (under 5)4 has declined from 19.6 million in 1950 to 5.4 million in 2017. Nevertheless, these advances in child survival have been far from universally achieved, particularly in low- and middle-income countries (LMICs)4. Previous subnational child mortality assessments at the first (that is, states or provinces) or second (that is, districts or counties) administrative level indicate that extensive geographical inequalities persist5,6,7.

Progress in child survival also diverges across age groups4. Global reductions in mortality rates of children under 5—that is, the under-5 mortality rate (U5MR)—among post-neonatal age groups are greater than those for mortality of neonates (0–28 days)4,8. It is relatively unclear how these age patterns are shifting at a more local scale, posing challenges to ensuring child survival. To pursue the ambitious Sustainable Development Goal (SDG) of the United Nations9 to “end preventable deaths of newborns and children under 5” by 2030, it is vital for decision-makers at all levels to better understand where, and at what ages, child survival remains most tenuous.

A map:

The caption:

a, U5MR at the second administrative level in 2000. b, U5MR at the second administrative level in 2017. c, Modelled posterior exceedance probability that a given second administrative unit had achieved the SDG 3.2 target of 25 deaths per 1,000 live births for children under 5 in 2017. d, Proportion of mortality of children under 5 occurring in the neonatal (0–28 days) group at the second administrative level in 2017.
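For what it's worth, the "posterior exceedance probability" in panel c is conceptually simple: given posterior draws of a district's under-5 mortality rate from the geostatistical model, it is just the fraction of draws at or below the SDG 3.2 target of 25 deaths per 1,000 live births. A minimal sketch, with entirely made-up posterior samples standing in for the model's output (the district names and numbers here are hypothetical, not from the paper):

```python
import random

SDG_TARGET = 25.0  # deaths per 1,000 live births (SDG 3.2)

def exceedance_probability(posterior_draws, target=SDG_TARGET):
    """Fraction of posterior U5MR draws at or below the target."""
    return sum(d <= target for d in posterior_draws) / len(posterior_draws)

# Hypothetical posterior draws of U5MR for two imaginary districts
rng = random.Random(0)
district_a = [rng.gauss(20.0, 5.0) for _ in range(10_000)]   # probably met the target
district_b = [rng.gauss(60.0, 10.0) for _ in range(10_000)]  # almost certainly did not

print(exceedance_probability(district_a))  # close to 0.84 (the mean is 1 sd below the target)
print(exceedance_probability(district_b))  # close to 0
```

The map in panel c is, in effect, this calculation repeated for every second-administrative-level district in the 99 countries.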

We live in a country where children are kept in cages for no "crime" other than their race. We are beneath contempt.

This is probably one of the most important scientific papers in terms of ethical import I've read in a long time, and I read a lot of papers.

Have a nice weekend.

Crosslinking ionic oligomers as conformable precursors to calcium carbonate

The paper I'll discuss in this post is this one: Crosslinking ionic oligomers as conformable precursors to calcium carbonate (Tang et al, Nature 574, 394–398 (2019))

The fastest growing contributor to climate change on this planet in the 21st century has been dangerous coal, followed by petroleum, which is likely to be exceeded in the next decade by dangerous natural gas. The next largest contributor, also accelerating, is land use changes. Following these closely is concrete. (Much of the climate change cost of concrete is connected to heat, almost always generated by the use of dangerous fossil fuels. In theory, if not in wide practice, it is possible for the use of concrete to be carbon negative, and some major advances along this line have been made, for instance Riman Concrete, but even Riman concrete requires heat to make.) Nuclear heat is actually the only practical way to make concrete without dangerous fossil fuels, despite whatever cartoons you've read or even written about solar thermal plants. Solar thermal plants didn't work, they aren't working, and they won't work to address climate change, and they will never work to make concrete.

In the past several years, my vicarious interest in my son's education has led me to consider a concept called "polymer derived ceramics," which is just what it sounds like: a polymer is converted, via process engineering (and generally heat), into a highly structured ceramic. This paper touches on that concept, at least in a loose way. Beautiful things, these, with all sorts of fabulous potential applications.

The abstract, which is open access:

Inorganic materials have essential roles in society, including in building construction, optical devices, mechanical engineering and as biomaterials1,2,3,4. However, the manufacture of inorganic materials is limited by classical crystallization5, which often produces powders rather than monoliths with continuous structures. Several precursors that enable non-classical crystallization—such as pre-nucleation clusters6,7,8, dense liquid droplets9,10, polymer-induced liquid precursor phases11,12,13 and nanoparticles14—have been proposed to improve the construction of inorganic materials, but the large-scale application of these precursors in monolith preparations is limited by availability and by practical considerations. Inspired by the processability of polymeric materials that can be manufactured by crosslinking monomers or oligomers15, here we demonstrate the construction of continuously structured inorganic materials by crosslinking ionic oligomers. Using calcium carbonate as a model, we obtain a large quantity of its oligomers (CaCO3)n with controllable molecular weights, in which triethylamine acts as a capping agent to stabilize the oligomers. The removal of triethylamine initiates crosslinking of the (CaCO3)n oligomers, and thus the rapid construction of pure monolithic calcium carbonate and even single crystals with a continuous internal structure. The fluid-like behaviour of the oligomer precursor enables it to be readily processed or moulded into shapes, even for materials with structural complexity and variable morphologies. The material construction strategy that we introduce here arises from a fusion of classic inorganic and polymer chemistry, and uses the same cross-linking process for the manufacture the materials.

The full citation and date: Zhaoming Liu, Changyu Shao, Biao Jin, Zhisen Zhang, Yueqi Zhao, Xurong Xu & Ruikang Tang, Crosslinking ionic oligomers as conformable precursors to calcium carbonate, Nature 574, 394–398 (2019), published 16 October 2019.

An excerpt from the introduction:

Many materials are consolidated from their crystallized powders16, but their resulting discontinuous internal structures render them brittle with a poor ability to resist fracture17,18. By contrast, polymeric materials are ubiquitous in modern society, due not only to their varied properties but also to their ease of fabrication15,19. The polymerization strategy is superior to crystallization because of its efficiency and controllability. In polymer chemistry, covalent bonds have an important role in ensuring the linkage of small units. Although a few covalent-bond-based inorganic materials (for example silicone and silica)20,21 can be obtained as polymers, there is no general method for the preparation of such materials by crosslinking owing to the lack of investigation into ionic monomers or oligomers for this purpose. In the control of polymerization reactions, a capping agent is key22: capping can stabilize precursors, whereas de-capping can initiate polymerization. Analogously, we proposed that ionic oligomers could be stabilized by an appropriate capping agent. Capping based on hydrogen bonding was thought to be suitable, because most inorganic complexes contain oxygen. For example, triethylamine (TEA) can form a hydrogen bond with a protonated carbonate through its tertiary amine group. More importantly, TEA is a small molecule that can be volatilized at room temperature, and it was expected that this could initiate an expected crosslinking reaction.

The authors use triethylamine in a solvent, in this case ethanol (which, unless the ethanol is recovered, makes industrialization questionable). The calcium carbonate is made by bubbling carbon dioxide through an ethanolic solution of calcium chloride. Mass spectrometry demonstrated the existence of calcium carbonate oligomers from trimers to undecamers, with, for some reason, nonamers excluded. The structures were also studied by 13C NMR.
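As a rough sanity check on what such spectra would show, the nominal mass of a (CaCO3)n oligomer simply scales with the roughly 100 g/mol formula mass of calcium carbonate. A trivial back-of-the-envelope sketch (this ignores the adduct and charge-state details of the actual measured spectra, which I'm not reproducing here):

```python
# Formula mass of CaCO3 from standard atomic weights (g/mol)
CA, C, O = 40.078, 12.011, 15.999
CACO3 = CA + C + 3 * O  # about 100.09 g/mol

# Oligomers reported: trimers (n=3) through undecamers (n=11), nonamers absent
for n in range(3, 12):
    if n == 9:
        continue  # nonamers were not observed in the reported spectra
    print(f"(CaCO3)_{n}: ~{n * CACO3:.1f} g/mol")
```

So the observed oligomer series spans nominal masses from roughly 300 to 1,100 g/mol.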

A figure from the paper:

The caption:

a, Left, scheme of the capping strategy and reaction conditions for producing (CaCO3)n oligomers; right, a photograph of gel-like (CaCO3)n oligomers. b, Mass spectra of (CaCO3)n oligomers with different Ca:TEA molar ratios. c, Liquid-state 13C NMR spectra of CO2 or the carbonates of (CaCO3)n oligomers with different Ca:TEA molar ratios in ethanol. d, Scattering plots of (CaCO3)n measured by SAXS. The red curve is the fitting result obtained using DAMMIF. I, scattering intensity; q, scattering vector. The error bar represents the standard deviation of twenty measurements. e, Pair–distance distribution function (P(r)) of the (CaCO3)n oligomers. The inset shows the shape simulation of the oligomer. Error bars represent one standard deviation, n = 20.

The fate of the chloride ions is not reported. This seems to me to be an important consideration; nevertheless, this is an interesting paper.

Crosslinking of the oligomers is achieved by evaporating the ethanol and the triethylamine.

Another graphic:

The caption:

a, b, Molecular dynamics simulation of the evolution of the Ca–O (from carbonate) coordination number (a) and the average cluster size (b) from ions (Ca2+ and CO32−) in the absence (black) or presence (blue) of TEA. c, A typical simulated CaCO3 cluster capped with TEA (an oligomer). d, In situ FTIR spectra during the drying of (CaCO3)n oligomers. e, The change in the coordination number of Ca–O during crosslinking. Owing to the uncertainty in the exact density during measurements, the blue and red lines are shown to represent the maximum and minimum coordination number of Ca–O, respectively. The black line shows the average coordination number. f, High-resolution TEM images of (CaCO3)n oligomers grown at different Ca:TEA ratios from 1:100 to 1:2. g, TEM images depicting the transformation of (CaCO3)n oligomers to larger structures during condensation.

Some text about the results:

Centimetre-sized monolithic CaCO3 materials were obtained by the crosslinking of oligomers. The resulting bulk maintained the original morphology of the compact gel precursor (Fig. 3a). FTIR spectroscopy and thermal gravimetric analysis indicated the formation of pure ACC without organic residue (Extended Data Fig. 7a, b). Scanning electron microscopy (SEM) and TEM showed the structural continuity in the bulk (Fig. 3b–e), and the internal continuous and integral textures were confirmed by artificially creating a crack (Fig. 3c). At scales from nano- to micrometres, the fabricated material was fully dense and smooth with no porosity or cracks (Fig. 3d, e). A nanoindentation test revealed that the ACC sample (Fig. 3l, blue circle in Fig. 3m) had a Young’s modulus of 8.0 ± 1.6 GPa and a hardness of 0.33 ± 0.07 GPa; these values are greater than those of most plastic materials28.

Another picture:

The caption:

a, Photograph of monolithic ACC prepared from (CaCO3)n oligomers. b–e, SEM (b, c) and TEM (d, e) images indicating the continuous solid phase of the prepared monolithic ACC. The inset of e is the fast-Fourier-transform image of the sample. Typically, the image of a crack in monolithic ACC exhibits continuity from the surface to the bulk (c). f, Snapshot of monolithic calcite prepared from monolithic ACC. g, Polarized-light optical microscopy (POM) image of the prepared monolithic calcite. h, SEM image of a surface on crystallized monolithic CaCO3. i, j, TEM images of the inner bulk of crystallized monolithic CaCO3. The inset of j is the fast-Fourier-transform image of the sample. k, XRD pattern of calcite powder, geological single-crystalline calcite and the calcite sample produced from (CaCO3)n oligomers. l, Load–displacement curves of the ACC sample, calcite sample and geological single-crystalline calcite sample measured by nanoindentation. m, Ashby plot of hardness (H) against Young’s modulus (E) for the prepared CaCO3 (including ACC and calcite) and other materials. The upper left inset is an exemplary residual indent of the Berkovich diamond tip on the crystallized CaCO3. Ē, plane strain modulus.

ACC is amorphous calcium carbonate.

Some discussion of possible applications:

A considerable advantage of the crosslinking of ionic oligomers is that the oligomeric precursors can be moulded into shapes to enable continuously structured construction (Fig. 4a–c). This in turn enables the engineering of single-crystalline materials, including additive manufacturing. The construction of calcite rod arrays by oligomer crosslinking demonstrates the practicality of the preparation of single-crystal materials with structural complexity (Fig. 4d, e). This method can even be extended to repair damaged single crystals. Calcite single-crystal surfaces in optical devices30 can be damaged by mechanical crashing, scratching or corrosion, which reduces their functional performance—in particular transmittance. However, (CaCO3)n oligomers can generate oriented calcite within nano- and micro-sized pits or ditches of the damaged calcites in order to recover their smooth surface (inset of Fig. 4d, f, g). The repaired region (Fig. 4h, i) had exactly the same crystalline phase and orientation as the bulk beneath (Fig. 4j). The images of the high-resolution lattice fringes from the calcite bulk to the repaired front (Fig. 4k) demonstrate continuous (104) facets without any break, confirming that the same crystalline structure was reproduced exactly

A graphic on engineering utilizing this technique:

a, Moulded CaCO3 with different dimensions and morphologies. b, c, Moulded CaCO3 with different patterns. The inset of c shows a single CaCO3 rod. d, Schemes for pattern construction on single-crystalline calcite (top path), and the repair of rough single-crystalline calcite to smooth calcite (bottom path). The insets show optical microscopy images of the calcite surface at different stages: native, corroded, and repaired. e, POM images of the patterned calcite rotated at different angles. f, g, SEM images of the repaired calcite (surface and cross-section, respectively). h, TEM image of a cross-sectional view of the repaired calcite. The different layers labelled 1, 2, 3 and 4 were characterized by selected area electron diffraction and high-resolution lattice fringes in j and k. i, EDS mapping of the repaired calcite in h, showing the repaired CaCO3 as well as gold nanolabels. j, Selected area electron diffraction patterns of different layers (1–3) of h with an aperture of around 170 nm in diameter, showing the same patterns from the bulk to the repaired surface. The red dots in 1 are the simulated diffraction pattern viewed along the <−4, 4, 1> zone axis. k, High-resolution lattice fringes at the different layers (1–4) of h, exhibiting the facets of (104) with exactly the same orientation from the bulk to the repaired surface.

Interesting paper, I think; I thought I'd share it.

Have a great Friday.

Go to Page: « Prev 1 2 3 4 5 6 7 ... 67 Next »