
NNadir

Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 23,816

Journal Archives

So my kid is taking a Chemical Engineering course called Unit Operations and the Professor asked...

...the class to contemplate a case in which a customer who had been purchasing hydrochloric acid, a side product of the plant's process, went out of business. The professor noted that environmental regulations prevented release of the material and that the cost of disposing of it as waste was high.

"What do you do?" the professor asked.

Someone raised his hand and said, "Pay a lobbyist to change the environmental regulations!"

Everybody laughed, including the professor.

(My son says that his classmates have a great sense of humor.)

Everybody laughed...everybody laughed...everybody laughed...

I explained to my son that this is a very real problem. Most of the world's waste hydrochloric acid is deep welled.

"Wouldn't that dissolve the rock and cause problems?" my son asked. "Couldn't that cause collapse?"

"Um...um...um..." I said, and went on to apologize on behalf of my generation to his. "These are the sorts of problems you are tasked to solve," I said. I apologized again.

This, by the way, is the chemical reaction that produces polysilicon:

HSiCl3 + H2 → Si + 3 HCl

The solar industry, which is a trivial industry, producing trivial amounts of energy despite more than half a century of mindless cheering for it, is a huge consumer of the explosive and corrosive gas that is the reactant in this chemistry, trichlorosilane.

Have a nice "hump day" tomorrow.

A Review of the Problem of Brine Disposal Connected With Desalination.

The scientific paper I will discuss in this post is this one: The state of desalination and brine production: A global outlook (Jones et al, Science of the Total Environment, 657 (2019) 1343–1356). It is a review article, with over 100 references to the primary scientific literature.

It is very clear to me that all efforts to address climate change have failed miserably. This failure is compounded by worldwide political and social outcomes that are actually accelerating the problem - not limited to the likes of Trump and Bolsonaro, but also including those who think that so called "renewable energy" is "green" energy and a serious way to address climate change. It isn't. When one looks at the data, one can see that the attempt to displace dangerous fossil fuels with so called "renewable energy" is rather the equivalent of announcing that the best way to deal with the floods associated with Hurricane Katrina would have been to form a line of Louisianans stretching to New Mexico passing water filled Dixie cups to one another. The metaphor is entirely appropriate.

It's that bad.

Coupled with denial, our insistence on having faith in technologies that will not work - and that have experimentally demonstrated as much - leaves the task of cleaning up our garbage to future generations, generations we have already robbed, and thus impoverished, by the consumption (and dilution) of irreplaceable resources.

As, over my long lifetime, I've grown increasingly appalled at this now inevitable environmental train wreck - and as the father of two young men well along in establishing their careers - I've begun to focus my attention on the scientific investigation of technologies that may allow for the reversal of the entropy associated with the accumulation of carbon dioxide in the atmosphere. With due (and great) respect to the marvelous science of people like Christopher W. Jones (I hope to discuss the linked paper in a future post on this site), I personally believe that the key to removing carbon dioxide from the atmosphere will be to use the oceans as an extraction device.

This technology will only be economically viable where the ocean water is also processed for other purposes, the most obvious being desalination, since an immediate effect of climate change - already being observed around the world - will be to destabilize fresh water supplies.

One should not, however, regard this technology as a "green" panacea, as people still - in spite of its obvious failure - regard so called "renewable energy" technologies like wind and solar.

The problem with desalination, as the paper referenced at the outset makes clear, is the disposal of brine.

According to the paper, there are 15,906 desalination plants operating around the world, a large fraction of them being located in the Middle East. The paper gives a nice overview of the types of technology that are employed to desalinate water, and the common abbreviations (which I will also use hereafter in this post) used to denote them:

Desalination technology was separated into seven categories: 1) Reverse Osmosis (RO); 2) Multi-Stage Flash (MSF); 3) Multi-Effect Distillation (MED); 4) Nanofiltration (NF); 5) Electrodialysis/Electrodialysis Reversal (ED); 6) Electrodeionization (EDI); and 7) Other. ‘Other’ included a variety of technologies such as 1) Forward Osmosis (FO); 2) Hybrid (HYB); 3) Membrane distillation (MD); 4) Vapour compression (VP); and 5) Unknown. As the technologies grouped together under the ‘Other’ category contribute a total of <1% of the total desalinated water produced, these technologies were not considered individually.


All of these technologies require energy to operate.

For perspective: according to the paper, these 15,906 plants produce roughly 95 million cubic meters of fresh water per day, which works out to 34.7 billion cubic meters per year. According to a public policy website in California, the State of California, in a "wet" year, uses about 104 million acre-feet of water:



This translates to 128 billion cubic meters, meaning that all of the world's existing desalination plants produce - in the "percent talk" so loved by pixilated "renewable energy" advocates - about 27% of the water consumed in California in a flush year.
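The arithmetic behind that comparison is easy to check. A minimal sketch, using the figures quoted above and the standard conversion of 1,233.48 cubic meters per acre-foot:

```python
# Compare global desalination output with California's water use in a "wet" year.
ACRE_FOOT_M3 = 1233.48          # cubic meters in one acre-foot

desal_m3_per_day = 95e6         # global desalinated water, m^3/day (Jones et al.)
desal_m3_per_year = desal_m3_per_day * 365

california_acre_feet = 104e6    # California use in a wet year, acre-feet
california_m3 = california_acre_feet * ACRE_FOOT_M3

print(f"Global desalination: {desal_m3_per_year / 1e9:.1f} billion m^3/yr")
print(f"California wet year: {california_m3 / 1e9:.0f} billion m^3/yr")
print(f"Ratio: {100 * desal_m3_per_year / california_m3:.0f}%")   # about 27%
```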


My own opinion is that from a purely environmental standpoint - albeit with several very important materials science issues in need of consideration - those technologies driven by heat, MSF and MED in the list above, are superior, particularly if coupled with two technologies considered in the "other" category: VP and, for addressing the brine issues that the review paper discusses, FO.

It is worth noting - it seems to me at least - that certain electrodialysis processes can actually recover some of the energy utilized for desalination, since an electrical current can be generated using certain kinds of membranes separating two solutions with a salinity gradient. This might prove worthwhile at, say, oceanic outfall pipes for municipal waste water, simultaneously cleaning the water and recovering energy. I encounter papers along these lines from time to time in the journals I read, and sometimes briefly scan them, but will not discuss them further since I clearly have no serious expertise in this area.

My opinion on what might be environmentally sustainable technologies for desalination is informed by my frequently stated opinion that nuclear energy - whether fission or, in some far off future, fusion - is the only sustainable technology available to address climate change, and that high temperature nuclear reactors are the best approach to the utilization of nuclear energy. High temperature reactors offer the capability of thermal desalination as a side product of isolating carbon based materials from air or seawater, a viable form of sequestration, as well as of manufacturing chemical fuels where needed, thus closing the carbon cycle.

Still, the paper makes clear there is a problem with the brine produced by desalination, not all of which comes from the desalination of seawater.

The paper provides a map of desalination plants around the world, and one should immediately note that many are far from the ocean:



The caption:

Fig. 4. Global distribution of operational desalination facilities and capacities (>1000 m^3/day) by sector user of produced water.


This obviously implies that there are great differences in the types of water subject to "desalination" or in some cases, re-purification.

The authors provide a list, along with some useful brief comments, similar to that of the technologies in use for these feed water types:

Feedwater type is separated into six categories in DesalData (2018) expressed in ppm Total Dissolved Solids (TDS): 1) Seawater (SW) [20,000–50,000 ppm TDS]; 2) Brackish water (BW) [3000–20,000 ppm TDS]; 3) River water (RW) [500–3000 ppm TDS]; 4) Pure water (PW) [<500 ppm TDS]; 5) Brine (BR) [>50,000 ppm TDS]; and 6) Wastewater (WW). Despite having a typically high base quality (low salinity), desalination of RW is practiced for a range of different sectoral uses (e.g. drinking water, irrigation) to reduce water salinity below specific sectoral thresholds. PW as a feedwater source is typically used for industrial applications which require very high quality (low salinity) water, such as the pharmaceutical and food production industries.
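As a quick illustration of those categories, here is a sketch of a classifier using the TDS thresholds quoted above. The function name and the handling of boundary values are my own; note that the WW (wastewater) category is not defined by TDS alone, so it is omitted:

```python
def classify_feedwater(tds_ppm):
    """Rough feedwater category from Total Dissolved Solids (ppm),
    per the thresholds quoted from DesalData (2018)."""
    if tds_ppm > 50_000:
        return "BR"   # brine
    if tds_ppm >= 20_000:
        return "SW"   # seawater
    if tds_ppm >= 3_000:
        return "BW"   # brackish water
    if tds_ppm >= 500:
        return "RW"   # river water
    return "PW"       # pure water

print(classify_feedwater(35_000))   # typical open-ocean seawater -> SW
print(classify_feedwater(80_000))   # reject brine -> BR
```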


The authors also discuss the level of scientific attention being paid to desalination technologies, some of which are clearly not mainstream but very worthy of deeper consideration. They do this by listing the number of scientific papers devoted to each type of technology.



The caption:

Fig. 2. Number of publications by type of desalination technology (Reverse Osmosis [RO], Multi-Effect Distillation [MED], Multi-Stage Flash [MSF], Electrodialysis [ED]), emerging technologies (Nanofiltration [NF], Forward Osmosis [FO] and Membrane Distillation [MD]) and other (Humidification-Dehumidification [HDH], Solar Stills [SS] and Vapour Compression [VC]).


While it may seem that in the case of MED and MSF - the technologies in which I personally have much hope - not much is left to be said, serious issues remain to be addressed, in particular scaling (fouling), which can have effects on both heat transfer and corrosion. These are materials science questions, and I personally support deeper research into them.

The focus of scientific papers discussing desalination is also graphed:




The caption:

Fig. 1. Number of desalination publications by categorisation (total, technical, social, environment, energy & economic).


Is anyone surprised at the relative position of "environmental" in this graphic? It must be said, though, that the combined category of energy and economic has definite implications for environmental factors.

The number and type of desalination plants around the world are also graphed:



The caption:

Fig. 3. Trends in global desalination by (a) number and capacity of total and operational desalination facilities and (b) operational capacity by desalination technology.


An alternative graphic, including information on feedwater type, is also provided.




The caption:

Fig. 5. Number and capacity of operational desalination facilities by (a) technology and (b) feedwater type.


Nevertheless, irrespective of the technology and feedwater type, a concentrated "waste" flow is produced - what the authors refer to in this paper as "brine" - although the removed impurities may not be strictly limited to salts.

The authors offer a geographical graphic on the magnitude of this brine issue:



The caption:

Fig. 7. Volume of brine produced per country at a distance of a) <10 km and b) >50 km from the coastline.


I will plainly confess that I have not read the full paper, much less accessed the many interesting references therein, but it's certainly worth spending some time on this important issue.

My personal environmental philosophy is that there should be no such thing as "waste" of any type, or at least, to the extent possible, it should be minimized. The authors briefly suggest some approaches to utilizing "brine:"

Other potential economic opportunities associated with brine production have also sparked a wave in innovation in brine management that seeks to turn an environmental problem into an economic opportunity (Sánchez et al., 2015). For example, Blackwell et al. (2005) identified sequential biological concentration (SBC) of saline drainage streams creating a number of financial opportunities, whilst concentrating the waste stream into a manageable volume. Qadir et al. (2015) suggested that integrating agriculture and aquaculture systems based on the SBC system using saline drainage water sequentially has the potential for commercial, social and environmental gains. Reject brine has been used for aquaculture, with increases in fish biomass of 300% achieved (ICBA, 2018). Reject brine has also been successfully used for Spirulina cultivation and the irrigation of halophytic forage shrubs and crops although this method was unable to prevent progressive land salinisation (Sánchez et al., 2015).


Good stuff, probably not all that significant given the scale of the problem, but good stuff all the same.

Seawater contains a lot of valuable resources, obviously NaCl itself, but also considerable amounts of other minerals, notably magnesium, which can be a key reagent for the control of carbon dioxide and carbonates, as well as an important material for many other applications. I often note that I favor the utilization of seawater's ability to extract uranium from rock and magma, which makes nuclear fuel inexhaustible. And, most importantly, seawater contains the bulk of the free carbon dioxide on earth, both as solvated gas and in the form of carbonate and bicarbonate ions and salts.

None of this is a panacea, of course, and any such utilization needs to be conducted with careful attention to environmental issues, which are profound. Nevertheless, as said, it's a worthy consideration.

I wish you a pleasant Sunday afternoon.

An extraordinarily high neutron capture cross section has been discovered in a zirconium isotope.

The paper I'll discuss in this post is this one: The surprisingly large neutron capture cross-section of 88Zr (Jennifer A. Shusterman et al., Nature 565, 328–330 (2019)).

(It is nice to note that 4 of the 10 authors of this paper, including the lead author, are women scientists.)

The first step I personally took on my path from being a poorly educated anti-nuke to believing - as I do now - that for the foreseeable future nuclear fission energy is the only environmentally acceptable and only sustainable form of energy there is came when I encountered a parameter called the "neutron capture cross section" in a table of nuclides in a book that is seldom necessary to own these days (but was critical in former times), the CRC Handbook of Chemistry and Physics. I was looking at these tables around the time that Chernobyl blew up, trying to familiarize myself with the half-lives of some of the fission products then in the news, thinking - as proved not to be true - that these isotopes would kill hundreds of thousands of people.

If I recall correctly, the existence of this parameter, which is measured in units called "barns," immediately suggested to me at the time that it should be possible to transmute radioactive materials I then thought of as "nuclear wastes" into non-radioactive materials. I was so stupid and so ignorant that I actually thought people were ignoring this, despite the fact that the neutron capture cross section and related cross sections - fission, scattering, (n,2n), etc. - are fundamental considerations that any competent nuclear engineer must understand completely.

(The "barn" was an originally whimsical term for the apparent "target" area that an atomic nucleus presents to a neutron setting out to run into it, and probably comes from the idiom "couldn't hit the side of a barn." Its units are those of area; a barn is 10^(-24) square centimeters.)

The most famous, I think, of so called "nuclear wastes" is the element cesium, in particular the Cs-137 isotope, which has a half-life of 30.08 years. It occurred to me - fairly naively, it turns out, since at the time I was as much a moron as, say, the badly educated, arrogant, and ignorant anti-nuke Harvey Wasserman - that if Cs-137 captured a neutron, as the existence of the parameter implied it could do, it would be transformed into Cs-138, which has a half-life of 33.41 minutes, decaying rapidly into the stable isotope barium-138.
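For a sense of what a 30.08 year half-life means in practice, simple exponential decay shows that about 90% of a sample of Cs-137 is gone after a century:

```python
import math

CS137_HALF_LIFE_Y = 30.08   # half-life of Cs-137 in years

def fraction_remaining(t_years, half_life=CS137_HALF_LIFE_Y):
    """Fraction of a radioactive sample surviving after t_years,
    from the standard exponential decay law N(t) = N0 * exp(-ln(2) t / T_half)."""
    return math.exp(-math.log(2) * t_years / half_life)

print(f"After 100 years, {fraction_remaining(100):.1%} of the Cs-137 remains")
```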

This raised the question one often hears from people who think they're pretty smart but actually know nothing at all, which is "Why don't 'they' just (do x, y, or z)" as in "why don't 'they' just desalinate the ocean" (often heard in droughts) or "why don't 'they' just go solar" or "why don't 'they' just run cars on hydrogen" and so on and on...

My form was, "Why don't 'they' just have cesium-137 capture a neutron and rapidly become non-radioactive barium-138."

Now, older and wiser, I realize I should have paused to consider who "they" might be, and in consideration of this, I have been studying the work that "they" - in this case nuclear engineers and nuclear scientists - have been doing, and trust me, they have considered all the points that ignoramuses staring at the Table of Nuclides might have considered that "they" should "just" consider doing.

(Speaking of "they," this reminds me of a cute exchange I had with a dumb anti-nuke over at Daily Kos - one of many people on the left who pretend to give a shit about climate change without understanding a damned thing about the practical aspects of actually addressing it, as opposed to drooling over consumer junk like Tesla cars - who arrogantly told me, with a patina of schadenfreude, after the Fukushima event, "'They' said this could never happen!" We may take this as evidence that anti-nukes, even those who write bathos inspiring newspaper articles about historic "Navajo" (Dine) uranium miners - without considering how many human beings die every day from air pollution - seldom open a science book or a scientific paper that "they" have written. By contrast with this dumb guy, who happens to be a journalist, I read scientific papers about nuclear energy all the time, and there are many thousands of papers written about all sorts of bad things that could happen, but only do so rarely. In fact, nuclear energy is the only form of energy that was investigated for worst cases before it was constructed. We may contrast this with dangerous coal, dangerous petroleum, and dangerous natural gas - especially, with respect to the latter two, "fracking" - all of which were built without consideration of possible consequences, and indeed continue to operate without consideration of their dire observed consequences, the worst of which is climate change.

Happily for both sides, I was banned (or liberated) from Daily Kos, as I like to say, for telling the truth. This may come under the general rubric, for those familiar with Christian mythology, of "Forgive them, for they know not what they do" - not about me, since I'm hardly even close to being Jesus - but about the unbelievable stupidity of opposing nuclear energy. Ignorance kills. It kills people, since nuclear energy saves lives. This is a fact. Facts matter. There is no such thing as an "alternate fact.")

Anyway, it turns out that it is not really practical to transmute cesium-137 into barium-138 in any significant or worthwhile way, but it really doesn't matter, because it is easy (and quite possibly critical) to find uses for this wonderful isotope - more uses than the few that currently exist. Although it will always be available only in limited supply, regrettably, because of a physical limitation known as secular equilibrium (described by the Bateman equations), this isotope can do some pretty wonderful things connected with cleaning things up, particularly some dangerous chemical things, should we ever be serious about doing so, not that there is any evidence that we will ever be so.

The range of values for known neutron capture cross sections for the thousands of nuclides in the Table of Nuclides runs from essentially zero barns (for helium-4) to 2,600,000 barns (for xenon-135). Until the paper cited at the outset, the second highest known neutron capture cross section was about 250,000 barns (for gadolinium-157). All of these isotopes play a role in nuclear technology: helium and gadolinium in some reactors, xenon-135 in all reactors. The first, helium, has been used as a coolant in gas cooled reactors; the second has been used either as a "burnable poison" in fuel or in control rods. The last, xenon-135, is a radioactive fission product, generally produced as a result of the decay of iodine-135, also a fission product. Its neutron capture cross section is so high that it is necessary to follow its accumulation - to be aware of it - and add reactivity to the core to account for it. It was first discovered with the operation of the earliest nuclear reactors utilized during the Manhattan Project, and it is a credit to the genius of the early reactor designers, notably Enrico Fermi, that its effects were quickly recognized and accounted for.

All competent nuclear engineers know all about "xenon poisoning" - the effect wherein xenon-135 can cause a reactor to shut down. It can also delay the restart of a reactor after it shuts down. Xenon-135 is usually not formed directly in nuclear fission but is a decay product of another radioactive isotope that forms in the reactor, iodine-135, which has a half-life of 6.57 hours. During normal reactor operations both I-135 and Xe-135 reach secular equilibrium, the point at which they are being destroyed as quickly as they are formed: I-135 largely by β- decay, and Xe-135 by a combination of neutron capture, by which it is transformed into the stable (and valuable) isotope Xe-136, and β- decay - its half-life is about 9.2 hours - by which it is transformed into the radioactive (but short lived, roughly 13 day half-life) isotope Cs-136, which itself decays into stable Ba-136.

When a reactor shuts down and fission (except for spontaneous fission) stops, iodine-135 is no longer being formed and the equilibrium is no longer maintained, while xenon-135 is no longer being consumed by neutron capture. As the iodine-135 decays away from its steady state concentration, the amount of xenon-135 first increases, until it too decays away. Because of the neutron absorption of these higher concentrations of xenon-135, the reactor cannot be restarted for several hours - again, as all competent nuclear engineers know.
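The transient described above follows directly from the Bateman equations for the I-135 → Xe-135 chain with the flux set to zero. A minimal sketch - assuming, purely for illustration, that the iodine inventory at shutdown is twice the xenon inventory (the real ratio depends on the flux and fission yields):

```python
import math

LAMBDA_I = math.log(2) / 6.57    # I-135 decay constant, 1/hour (6.57 h half-life)
LAMBDA_XE = math.log(2) / 9.2    # Xe-135 decay constant, 1/hour (~9.2 h half-life)

def xenon_after_shutdown(t_hours, i0=2.0, xe0=1.0):
    """Bateman solution for Xe-135 after shutdown: no fission production,
    no burnout by capture (flux is zero) - only I-135 decaying into Xe-135,
    and Xe-135 itself decaying away. i0 = 2*xe0 is an assumed initial ratio."""
    growth = LAMBDA_I * i0 / (LAMBDA_I - LAMBDA_XE)
    return (xe0 * math.exp(-LAMBDA_XE * t_hours)
            + growth * (math.exp(-LAMBDA_XE * t_hours)
                        - math.exp(-LAMBDA_I * t_hours)))

# Scan 0-30 hours after shutdown for the xenon peak
times = [i * 0.01 for i in range(3001)]
peak_t = max(times, key=xenon_after_shutdown)
peak_xe = xenon_after_shutdown(peak_t)
print(f"Xe-135 peaks ~{peak_t:.1f} h after shutdown, "
      f"at {peak_xe:.2f}x its shutdown level")
```

With these assumed inputs the xenon inventory peaks several hours after shutdown before decaying away, which is exactly the restart-blocking transient in the figure below.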

This graph shows the effect:



The caption:

Fig. 2: Xe-135 concentration in the reactor and neutron reactivity. (Source: Wikimedia Commons)


Incompetent nuclear engineers apparently also know about "xenon poisoning," and did something very stupid to overcome it, after disabling all of a reactor's safety systems. To wit:

The Chernobyl accident occurred in one of four RBMK1000 reactors at the Chernobyl site 100 miles north of Kiev. The operators were preparing an experiment in which the energy of rotation of the turbine during shut down should produce emergency electrical power for the support of the diesel generators. Unexpectedly the experiment had to be interrupted for some time to comply with electricity supply which led to the buildup of the fission product Xe-135 (neutron poison). When the experiment could be continued the power level dropped to about 30 MW(th) because of operator error. This led to additional buildup of Xe-135 (neutron poison). As a consequence the operators had to withdraw the control rods manually to their upper limits after they had shut off the automatic control system. The RBMK1000 was known to have a positive coolant temperature coefficient. This gave rise to instabilities in power production, coolant flow and temperatures in the low power range.

Then the experiment began at the power level of 200 MW(th). Steam to the turbine was shut off. The diesel generators started and picked up loads. The primary coolant pumps also run down. However this led to increased steam formation as the coolant temperature was close to its boiling temperature. With its positive coolant temperature coefficient the RBMK1000 reactor now was on its way to power runaway. When the SCRAM button was pushed the control elements started to run down into the reactor core. However, due to a wrong design of the lower part of the control elements (graphite sections) the displacement of the water by graphite led to an increase of criticality. A steep power increase occurred, the core overheated causing the fuel rods to burst, leading to a large scale steam explosion and hydrogen formation...


The Severe Reactor Accidents of Three Mile Island, Chernobyl, and Fukushima

Interestingly, many people cite this event as "proof" that nuclear energy is unsafe, even though they don't announce that aircraft crashes prove that flying is unsafe, or that automotive crashes prove that cars are unsafe, or that natural gas explosions prove that dangerous natural gas is unsafe, or most interestingly, the deaths of more than 225 million people from air pollution since 1986 prove that dangerous fossil fuels are unsafe.

Selective attention I guess.

In the next 24 hours, more than 19,000 people will die from air pollution.

Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015 (Lancet 2016; 388: 1659–724)

We couldn't care less.

Anyway, let’s leave those implications aside and go to the physics. A neutron capture cross section higher than that of gadolinium-157 has been discovered in a nuclide, Zr-88.

From the introductory text of the paper cited at the outset:

The neutron capture reaction cross-sections for the vast majority of radioactive nuclei are poorly known, despite the importance of this information to a range of topics in both fundamental and applied nuclear science. Essentially all the elements that are heavier than iron were created via successive neutron capture reactions and β decays (which convert neutrons to protons within the nucleus) in celestial environments, such as asymptotic giant branch stars, core-collapse supernovae and neutron star mergers. Understanding the origin of the elements in the cosmos is one of the most important overarching challenges in nuclear science and requires neutron capture cross-sections for radioactive nuclei produced along the nucleosynthesis pathways. Over the last century, nuclear reactors and weapons have exploited neutron-induced reactions to harness enormous amounts of energy, relying upon a detailed neutron inventory for predictable performance. In a nuclear reactor, nuclides with large neutron capture cross-sections act as poisons in the fuel and diminish performance, or can be introduced intentionally to control fuel reactivity. The United States’ Science-Based Stockpile Stewardship Program, which is used to maintain high confidence in the safety, security, reliability and effectiveness of the nuclear stockpile in the absence of nuclear testing, relies in part on cross-sections for radioactive isotopes to interpret archival data from underground tests of nuclear devices. The transmutation of stable Y and Zr detector material used in underground tests produced radioactive isotopes, such as 88Zr (half-life t1/2 = 83.4 d), that served as important diagnostics sensitive to neutron and charged-particle fluences...


88Zr is a neutron deficient nucleus, unlike the majority of fission products, which are generally neutron rich. However, neutron poor nuclei can be formed in high energy neutron fluxes by neutron knockout reactions - for example, for the stable isotope 90Zr, the 90Zr(n,3n)88Zr reaction - which, according to the paper, was known from underground nuclear weapons tests, in which 88Zr formed.

It should be pointed out, however, that in a nuclear explosion - a prompt critical event - the neutron flux is extremely high. By contrast, in a nuclear reactor one would not expect 88Zr to form. Were it to form, however, it apparently wouldn't survive very long because, as the title indicates, its neutron capture cross section is huge. Just as the formation of 135Cs is suppressed by the enormously high capture cross section of 135Xe - which is normally destroyed by capture at a rate dwarfing that of decay through its (short) half-life - so would any 88Zr formed be eliminated.
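That suppression argument can be made quantitative: for a nuclide sitting in a neutron flux, the fraction destroyed by capture rather than by decay is σφ/(σφ + λ). Taking Xe-135's 2,600,000 barn cross section and an assumed typical thermal flux of 10^14 n/cm²/s (an illustrative value; real fluxes vary by reactor and position):

```python
import math

BARN_CM2 = 1e-24                      # one barn in cm^2

sigma = 2.6e6 * BARN_CM2              # Xe-135 thermal capture cross section, cm^2
phi = 1e14                            # assumed thermal flux, n/cm^2/s
lam = math.log(2) / (9.2 * 3600)      # Xe-135 decay constant, 1/s (~9.2 h half-life)

# Competition between neutron capture (rate sigma*phi per atom)
# and beta decay (rate lambda per atom)
capture_fraction = sigma * phi / (sigma * phi + lam)
print(f"Fraction of Xe-135 destroyed by capture: {capture_fraction:.0%}")
```

At this assumed flux, most of the Xe-135 is burned to stable Xe-136 before it can decay toward Cs-135, which is the suppression described above.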

For the experiment, 88Zr was not made by neutron induced reactions, but rather in a cyclotron by proton bombardment:


In this work, the 88Zr(n,γ)89Zr cross-section was measured by producing and chemically separating multiple 88Zr samples, irradiating them in a high thermal-neutron flux of (6.7–8.7) × 10^13 n cm^−2 s^−1 and determining the quantities of 88Zr and of the reaction product 89Zr using γ-ray spectroscopy. The 88Zr target material was produced via the 89Y(p,2n)88Zr reaction using a proton (p) beam from the University of Alabama at Birmingham Cyclotron Facility. 88Zr was chemically purified using anion-exchange chromatography and assayed before encapsulation as a salt residue in high-purity quartz tubes. The 37-kBq 88Zr samples and accompanying quartz-encapsulated natural-metal foils (Fe, Zr, Mo and Y), which served as flux monitors, were irradiated for 5 min–50 h in a primarily thermal-neutron flux in the graphite reflector of the University of Missouri Research Reactor (MURR). The neutron flux was determined with precision of 7%–11% from reactions in the monitor foils (Extended Data Table 1), which have well established cross-sections, together with detailed MCNP5 (Monte Carlo N-Particle code, version 5) modelling of the neutron flux at the irradiation position to provide the neutron energy distribution (Extended Data Fig. 1).


Cool.
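One way to see why the measured cross section counts as "surprisingly large" is the burnout rate it implies at the quoted flux. Here I assume, purely for illustration, a cross section on the order of 10^6 barns - the order of magnitude the title implies - and the midpoint of the flux range quoted above:

```python
import math

BARN_CM2 = 1e-24

sigma = 1e6 * BARN_CM2     # assumed order-of-magnitude 88Zr capture cross section
phi = 7.7e13               # midpoint of the quoted flux range, n/cm^2/s

burn_rate = sigma * phi                              # per-atom capture rate, 1/s
effective_half_life_h = math.log(2) / burn_rate / 3600
surviving_after_50h = math.exp(-burn_rate * 50 * 3600)

print(f"Effective 'capture half-life' in this flux: {effective_half_life_h:.1f} h")
print(f"Fraction surviving a 50 h irradiation: {surviving_after_50h:.1e}")
```

In other words, although 88Zr has an 83.4 day radioactive half-life, in such a flux an atom of it survives only hours before capturing a neutron - which is what makes the depletion measurable across a 5 min–50 h irradiation series.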

Figure 1:



The caption:

The spectra, which were collected with HPGe detectors immediately upon receipt at LLNL, are normalized on the basis of the live time of the measurement, initial target atoms and neutron flux. No decay or detector efficiency corrections have been applied. The unlabelled peaks between 400 and 800 keV, aside from the 511-keV peak, are from 187W and 82Br (activation products of residual impurities in the sample). The 511-keV pair-annihilation peak is primarily due to the positron emission from the decay of 89Zr and follows a trend nearly identical to that of the 909-keV peak.


From the flux calculated for the University of Missouri Research Reactor, the neutron capture cross section was derived from the following curves:



The caption:

Measured 88Zr atoms (blue squares) and 89Zr atoms (red circles) present in the samples following irradiation, as well as 88Zr atoms lost (black triangles), are normalized by the initial number of 88Zr atoms in each sample. The blue solid, red solid and black dashed lines show the corresponding fitting curves. The Zr populations have been corrected for decay between the beginning of irradiation and the measurements performed after irradiation. The error bars (1σ uncertainties) represent the summed correlated and uncorrelated contributions.


A graphic of the determined neutron capture cross sections, with an inset showing the known cross sections for "normal" nuclides:




The caption:

The main plot shows all the existing data on a linear scale, and the inset displays the same data on a logarithmic scale. The vertical lines indicate the neutron-shell closures, which occur for nuclei with 2, 8, 20, 28, 50, 82 and 126 neutrons. The three isotopes with cross-sections of more than 10^5 b are labelled along with the year of the measurement.


It is interesting to note that many of the highest neutron capture cross sections occur in nuclei having neutron numbers that correspond roughly to the lanthanide elements. Some of these, at least the lighter lanthanides, are prominent fission products. In fact, in solid fueled nuclear reactors the fuel stops functioning well before the fissionable nuclei in it are consumed, because of the accumulation of highly neutron absorbing isotopes of elements like samarium and, to a lesser extent, europium and even promethium (as well as some other elements). This suggests that in a sensible world where people gave a rat's ass about climate change - which is clearly not the world in which we live - elements recovered from used nuclear fuel might displace the elements now utilized in control rods to adjust reactivity in fuel cores, since, despite what you may have heard, nuclear energy is the only form of energy that is scalable enough, sustainable enough, and safe enough to address climate change.

Another interesting point suggested by the information above is a comment on the "reality" of concepts like "area" in the case of atomic nuclei. Above I indicated that a "barn" is a unit of area equal to 10^(-24) cm^2 (10^(-28) m^2). If one does some simple but naive calculations, this suggests that a mole of xenon-135 (roughly 135 grams) should have, as a consequence of its neutron capture cross section of 2,600,000 barns, a combined nuclear area of around 156 square meters. Of course this is not observed. Although most scientists - certainly among chemists - tend to think of quantum effects involving electrons, which are fermions best described by wave functions subject to the Pauli exclusion principle...
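The naive arithmetic goes like this:

```python
# Naive "classical area" of a mole of Xe-135 nuclei, treating the
# 2,600,000-barn thermal neutron capture cross section as a literal area.
N_A = 6.02214076e23          # Avogadro's number, 1/mol
sigma_barns = 2.6e6          # thermal capture cross section of Xe-135, barns
barn_m2 = 1e-28              # 1 barn = 1e-28 m^2

area_m2 = N_A * sigma_barns * barn_m2
print(f"{area_m2:.1f} m^2")  # ~156 m^2 -- clearly not a physical area
```

The absurdity of the result is the point: the "cross section" is a probability expressed in units of area, not a geometric footprint.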

(This text, excerpted immediately below from the original form of this post, is wrong, as was pointed out in comments below by a correspondent. Both neutrons and protons are fermions, and the Pauli Exclusion Principle applies. On reflection, this should have been obvious to me, given that transitions between nuclear isomers are usually "monochromatic." For example, the decay of radioactive Ba-137m to stable Ba-137, a feature of the decay series of Cs-137 - and accounting for the gamma output of this series - always releases radiation with an energy of 661.659 keV. This would not be the case for a Bose-Einstein system. I thank the correspondent for the correction, from which I learned two things: the fact of the matter, and not to rely too much on memory. The correspondent also pointed out that Wigner won his Nobel for the nuclear shell model by which this system operates.)

neutrons and protons are bosons, and thus fill nuclei under a different kind of statistics, Bose-Einstein statistics, according to the Breit-Wigner distribution; for orbitals of neutrons and protons, the Pauli exclusion principle does not apply.


Nevertheless, like all quantum phenomena, a nucleus has a wave function and should not be thought of as strictly particulate in nature. In fact, the neutron capture cross section is best described in a "center of mass" frame and not a laboratory frame, and a neutron about to collide with a nucleus is in fact a system, as opposed to two "particles" colliding. This wave function is thus a function of the energy of the system: the apparent "size" a neutron "sees" a nucleus as having depends on the energy (the velocity) the neutron possesses with respect to the target nuclei. The 2,600,000 barn figure for Xe-135 is only true for "thermal" neutrons, generally taken to be neutrons with an average energy of 0.0253 eV. Despite the units of area, a nucleus is neither a thing nor a wave, but both.
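That 0.0253 eV thermal energy corresponds to the conventional 2200 m/s neutron speed, which follows from the classical kinetic energy relation:

```python
import math

# Speed of a "thermal" neutron with E = 0.0253 eV, via v = sqrt(2E/m).
E_eV = 0.0253                 # conventional thermal neutron energy, eV
eV_to_J = 1.602176634e-19     # joules per electron-volt
m_n = 1.67492749804e-27       # neutron rest mass, kg

v = math.sqrt(2.0 * E_eV * eV_to_J / m_n)
print(f"{v:.0f} m/s")         # ~2200 m/s, the standard thermal reference speed
```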

(Eugene Wigner, co-developer of the formula above for neutron capture resonances, and also a Nobel Laureate, co-wrote the first nuclear engineering textbook with Alvin Weinberg in 1956. I believe it's still in print. Most of the world's operating reactors are based on technology known in the 1950s and 1960s.)

The ideas around neutron cross sections have been floating in my brain for several decades now, and I was intrigued by this interesting paper which showed up just recently in Nature. Papers on this topic always grab my attention when I see them.

I wish you a pleasant Sunday.

Wouldn't you agree that it's time we had a smooth function...

...that we use "to switch between the low pressure Sips and the high pressure Langmuir–Henry isotherms"?

Well here it is:



Beautiful little thing, isn't it?

Comes from a paper by Christopher Jones at GA Tech, a man working to leave future generations something by which they may save themselves from what we have done to them.

Sorry kids, but at least you have this:

Moving Beyond Adsorption Capacity in Design of Adsorbents for CO2 Capture from Ultradilute Feeds: Kinetics of CO2 Adsorption in Materials with Stepped Isotherms

I love looking at equations that look like that little parameter.

It's a beautiful thing, esoteric, but very beautiful all the same.

Trust me. It is.
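Since the image of the function hasn't survived here, a generic smooth blend between a low-pressure Sips isotherm and a high-pressure Langmuir-Henry form can be sketched with a logistic weight. To be clear, this is my own illustrative sketch, not the actual function from the Jones paper, and every parameter value below is hypothetical:

```python
import math

# Hypothetical parameters -- NOT values from the Jones paper.
q_max, K_s, n = 2.0, 0.5, 0.7      # Sips parameters
K_L, K_H = 0.3, 0.05               # Langmuir and Henry constants
p0, s = 10.0, 2.0                  # switch midpoint and width (pressure units)

def sips(p):
    """Sips isotherm, dominant at low pressure."""
    x = (K_s * p) ** n
    return q_max * x / (1.0 + x)

def langmuir_henry(p):
    """Langmuir term plus a linear Henry contribution, for high pressure."""
    return q_max * K_L * p / (1.0 + K_L * p) + K_H * p

def blended(p):
    """Logistic weight w(p) moves smoothly from 1 (Sips) to 0 (Langmuir-Henry)."""
    w = 1.0 / (1.0 + math.exp((p - p0) / s))
    return w * sips(p) + (1.0 - w) * langmuir_henry(p)
```

The logistic weight guarantees the composite isotherm is smooth everywhere, which is exactly the property a switching function of this kind needs.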

I had no idea that Trump had British relatives, and that these relatives visited New Zealand.

Tourists From Hell Visit New Zealand And The Whole Country Unites Against Them

Two nice boys playing Cello.

Government for the people, BY the people, FOR the people, OF the people, HAS vanished.

What we can say about James Buchanan, who is generally considered the worst President in US history, is at least that the crisis he failed miserably to address, the horror of human slavery, existed before he took office.

There had been many threats to break up the Union because Southern racists hated and exploited African Americans with more enthusiasm than they loved their country, just as Trump today hates Mexicans more than he loves Americans. (He loves nothing, save his withered self.)

It was fortunate that the United States had a man like Lincoln - from whose famous Gettysburg Address the title here, of course, is modified - to succeed Buchanan.

Lincoln, of course, cared what history would think of him, and Lincoln had something missing today in the party he helped found: intelligence, integrity, and patriotism.


The two new candidates to succeed Buchanan in the minds of future historians, should historians exist - Trump and Bush Jr. - manufactured their crises: Bush in Iraq, and Trump, even more mindlessly, at the Mexican border.

When Trump is done, I should not be surprised if the Mexican army could kick down the walls, take back Texas, New Mexico, Arizona and California, he is so weak, so ignorant, so vicious.

He is destroying this government, with the complicity of many of those in it. It is interesting that the same general region responsible for the Civil War is also responsible for the destruction of the American government 150 years after the fact.

Carbon Dioxide, Oxygen Depletion, and the Mass Extinction at the End of the Permian Period.

The paper I'll discuss in this post is this one: Temperature-dependent hypoxia explains biogeography and severity of end-Permian marine mass extinction (Penn et al, Science, (2018) Vol. 362, Issue 6419, eaat1327).

This paper is the source material for a news article which came to my attention by a post here: Stanford Study: We Will Be 20% Of The Way To Permian Extinction 2.0 By 2100 With Business As Usual

From the introduction:

Volcanic greenhouse gas release is widely hypothesized to have been the geological trigger for the largest mass extinction event in Earth’s history at the end of the Permian Period [~252 million years (Ma) ago] (1, 2). At least two-thirds of marine animal genera and a comparable proportion of their terrestrial counterparts were eliminated, but the mechanisms connecting environmental change to biodiversity collapse remain strongly debated. Geological and geochemical evidence points to high temperatures in the shallow tropical ocean (3, 4), an expansion of anoxic waters (5–8), ocean acidification (9–12), changes in primary productivity (13, 14), and metal (15) or sulfide (16, 17) poisoning as potential culprits. However, a quantitative, mechanistic framework connecting climate stressors to biological tolerance is needed to assess and differentiate among proposed proximal causes.

In this study, we tested whether rapid greenhouse warming and the accompanying loss of ocean O2—the two best-supported aspects of end-Permian environmental change—can together account for the magnitude and biogeographic selectivity of end-Permian mass extinction in the oceans. Specifically, we simulated global warming across the Permian/Triassic (P/Tr) transition using a model of Earth’s climate and coupled biogeochemical cycles, validated with geochemical data.


This is an in silico evaluation, since the experimental loading of the entire atmosphere with excess carbon dioxide, while well underway, has not been completed, although some preliminary intermediate results are currently being observed. The experimental portion of the work described herein - other than burning all of the world's fossil fuels and dumping the waste in the atmosphere just described - is limited to observing the metabolic effects of oxygen depletion on extant species. (Trilobites were not available for testing.) The in silico data are also compared with the fossil record, including oxygen isotope ratios in fossil conodonts, eel-like animals that lived in those times, generally known from fossils of their teeth.

The following graphic from the paper touches on that point:


The caption:

• Fig. 1 Permian/Triassic ocean temperature and O2.
(A) Map of near-surface (0 to 70 m) ocean warming across the Permian/Triassic (P/Tr) transition simulated in the Community Earth System Model. The region in gray represents the supercontinent Pangaea. (B) Simulated near-surface ocean temperatures (red circles) in the eastern Paleo-Tethys (5°S to 20°N) and reconstructed from conodont δ18O apatite measurements (black circles) (4). The time scale of the δ18O apatite data (circles) has been shifted by 700,000 years to align it with δ18O apatite calibrated by U-Pb zircon dates (open triangles) (1), which also define the extinction interval (gray band). Error bars are 1°C. (C) Simulated zonal mean ocean warming (°C) across the P/Tr transition. (D) Map of seafloor oxygen levels in the Triassic simulation. Hatching indicates anoxic regions (O2 < 5 mmol/m^3). (E) Simulated seafloor anoxic fraction ƒanox (red circles). Simulated values are used to drive a published one-box ocean model of the ocean’s uranium cycle (8) and are compared to δ238U isotope measurements of marine carbonates formed in the Paleo-Tethys (black circles). Error bars are 0.1‰. (F) Same as in (C) but for simulated changes in O2 concentrations (mmol/m^3).


The test animal used to model metabolism is Cancer irroratus, the common rock crab found along the East Coast of North America. Crustaceans, like the trilobites, which inhabited the oceans for 280 million years before their extinction in this event, are members of the phylum Euarthropoda (arthropods) and, like the trilobites, feature an exoskeleton that probably was fairly acid sensitive. It is not clear whether the extinction of the trilobites was a function of increased acidity owing to the carbon dioxide content of the oceans, or whether it derived from oxygen depletion, or perhaps both. The authors discuss this briefly in the discussion, but in a rather general and somewhat speculative way.

With this editor and the type of text used by Science I cannot produce the equation for the "metabolic index" used here, but for those with a modicum of a science background, this index is proportional to the partial pressure of oxygen divided by a term that looks very much like an Arrhenius term: an exponential operator on the negative value of an energy (here measured in electron-volts) divided by the Boltzmann constant (R/N0) times the difference between reciprocal temperatures. The proportionality constant has units of inverse pressure, and therefore the metabolic index, Φ, is dimensionless. This metabolic index (which differs from what your fitbit might put out or what you can see on a "lose your fat and look good" website) is described here: Climate change tightens a metabolic constraint on marine habitats, which seems to be along the same lines as the paper under discussion.

A graphic about the metabolic index:


The caption:

• Fig. 2 Physiological and ecological traits of the Metabolic Index (Φ) and its end-Permian distribution.
(A) The critical O2 pressure (pO2crit) needed to sustain resting metabolic rates in laboratory experiments (red circles, Cancer irroratus) vary with temperature with a slope proportional to Eo from a value of 1/Ao at a reference temperature (Tref), as estimated by linear regression when Φ = 1 (19). Energetic demands for ecological activity increase hypoxic thresholds by a factor Φcrit above the resting state, a value estimated from the Metabolic Index at a species’ observed habitat range limit. (B) Zonal mean distribution of Φ in the Permian simulation for ecophysiotypes with average 1/Ao and Eo (~4.5 kPa and 0.4 eV, respectively). (C and D) Variations in Φ for an ecophysiotype with weak (C) and strong (D) temperature sensitivities (Eo = 0 eV and 1.0 eV, respectively), both with 1/Ao ~ 4.5 kPa. Example values of Φcrit (black lines) outline different distributions of available aerobic habitat for a given combination of 1/Ao and Eo.


Text touching on the metabolic index is this paper:

pO2 and T are the O2 partial pressure and temperature of ambient water, respectively; kB is Boltzmann’s constant; and the parameters Ao (kPa^(−1)) and Eo (eV) represent fundamental physiological traits of a species. The inverse of Ao (i.e., 1/Ao, in kPa) is the minimum pO2 that can sustain the resting metabolic rate (i.e., the “hypoxic threshold”) at a reference temperature (Tref), and Eo is the temperature sensitivity of that threshold (Fig. 2A). The Metabolic Index measures the capacity of an environment to support aerobic activity by a factor of Φ above an organism’s minimum requirement in a complete resting state (Φ = 1). For both marine and terrestrial animals, the energy required for sustained activity (e.g., feeding, reproduction, defense) is elevated by a factor of ~1.5 to 7 above resting metabolic demand (18, 25) and represents an ecological trait, termed Φcrit. If climate warming and O2 loss reduce the Metabolic Index for an organism below its species-specific Φcrit, the environment would no longer have the capacity to support active aerobic metabolism and, by extension, long-term population persistence.
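The relation described above can be sketched numerically. The values 1/Ao ≈ 4.5 kPa and Eo ≈ 0.4 eV are the "average ecophysiotype" traits quoted in the figure captions; the reference temperature of 15 °C is my own assumption for illustration, not a value from the paper:

```python
import math

k_B = 8.617333262e-5   # Boltzmann constant, eV/K

def metabolic_index(pO2_kPa, T_K, A_o=1/4.5, E_o=0.4, T_ref_K=288.15):
    """Phi = A_o * pO2 * exp((E_o/k_B) * (1/T - 1/T_ref)).

    At pO2 = 1/A_o and T = T_ref, Phi = 1: the resting hypoxic threshold.
    Warming (larger T) shrinks the exponential, so Phi falls, as the
    paper's argument requires.
    """
    return A_o * pO2_kPa * math.exp((E_o / k_B) * (1.0 / T_K - 1.0 / T_ref_K))
```

A quick check of the behavior: at the hypoxic threshold the index is exactly 1, and for fixed oxygen, a warmer ocean yields a smaller Φ, i.e. less aerobic headroom.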


The graphic immediately following the one above:


The caption:

• Fig. 3 Aerobic habitat during the end-Permian and its change under warming and O2 loss.
(A) Percentage of ocean volume in the upper 1000 m that is viable aerobic habitat (Φ ≥ Φcrit) in the Permian for ecophysiotypes with different hypoxic threshold parameters 1/Ao and temperature sensitivities Eo. (B) Relative (percent) change in Permian aerobic habitat volume (ΔVi, where i is an index of ecophysiotype) under Triassic warming and O2 loss. Colored contours are for ecophysiotypes with Φcrit = 3. Measured values of 1/Ao and Eo in modern species are shown as black symbols, but in (B) these are colored according to habitat changes at a species’ specific Φcrit where an estimate of this parameter is available. The gray region at upper left indicates trait combinations for which no habitat is available in the Permian simulation.


Some information about the distribution of oxygen depletion in the oceans:


Fig. 4. Global and regional extinction at the end of the Permian. (A) Global extinction versus latitude, as predicted for model ecophysiotypes and observed in marine genera from end-Permian fossil occurrences in the Paleobiology Database (PBDB). Model extinction is calculated from the simulated changes in Permian global aerobic habitat volume (ΔVi) under Triassic warming and O2 loss (19). The maximum depth of initial habitat and fractional loss of habitat resulting in extinction (Vcrit) are varied from 500 to 4000 m (colors) and from 40 to 95% (right-axis labels), respectively. The observed extinction of genera combines occurrences from all phyla in the PBDB (points). Error bars are the range of genera extinction across two taxonomic groupings: phyla multiply sampled in the modern physiology data (arthropods, chordates, and mollusks) and all other phyla. Latitude bands with fewer than five Permian fossil collections are excluded. The average range is used for latitude bands missing extinction estimates from both taxonomic groupings (i.e., 80°S, 30°S, and 40°N). The main latitudinal trend—increased extinction away from the tropics—is found when including all data together and when restricting to the best-sampled latitude bands (fig. S14). In all panels, model values are averaged across longitude and above 500 m. (B) Average hypoxic threshold and Φcrit across ecophysiotypes versus latitude in the Permian. In (B) to (D), shading represents the 1σ standard deviation at each latitude. (C) Regional extinction (i.e., extirpation) versus latitude for model ecophysiotypes, with individual contributions from warming and the loss of seawater O2 concentration. Extirpation occurs in locations where the Metabolic Index meets the active demand of an ecophysiotype in the Permian (Φ ≥ Φcrit) but falls below this threshold in the Triassic (Φ < Φcrit).
(D) Same as (C) but including globally extinct ecophysiotypes (using a maximum habitat depth of 1000 m and Vcrit = 80%), and as observed in marine genera from end-Permian and early Triassic fossil occurrences of all phyla in the PBDB. Observed extirpation magnitudes are averaged across tropical and extratropical latitude bands (red points and horizontal lines). Regional 1σ standard deviations are shown as vertical lines.


The authors conclude with somewhat obvious remarks on the relevance of this study to the present times:

The end-Permian mass extinction resulted in the largest loss of animal diversity in Earth’s history, and its proposed geologic trigger—volcanic greenhouse gas release—is analogous to anthropogenic climate forcing. Predicted patterns of future ocean O2 loss under climate change (30, 31) are broadly similar to those simulated here for the P/Tr boundary. Moreover, greenhouse gas emission scenarios projected for the coming centuries (32) predict a magnitude of upper ocean warming by 2300 CE that is ~35 to 50% of that required to account for most of the end-Permian extinction intensity. Given the fundamental nature of metabolic constraints from temperature-dependent hypoxia in marine biota, these projections highlight the potential for a future mass extinction arising from depletion of the ocean’s aerobic capacity that is already under way.


But you already knew that, didn't you?

To be clear, this paper refers to oxygen in the oceans, and not the atmosphere. Almost all of the oxygen now on earth originates in the oceans, but it's not clear how it partitions between the oceans and the air. In general, gases are less soluble in hot water than in cold water, as is clear to anyone who's messed around with carbonated beverages, but I'm not aware in any quantitative sense of how these solubility relations relate to oxygen as compared with carbon dioxide. (The latter is controlled, in water, by the equilibrium between solvated CO2, its water adduct, carbonic acid, bicarbonate and carbonate, all of which are present.) It is quite possible that the warm surface layers, rich with algae or other photosynthetic species, cranked out lots of oxygen after the Permian extinction, but that it all went into the air and did not remain in the ocean.
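For a rough quantitative sense of the temperature effect, Henry's-law solubilities with a van 't Hoff temperature correction can be sketched. The constants below are approximate order-of-magnitude values of the sort found in standard compilations (treat them as illustrative, not exact), and the CO2 number deliberately ignores the carbonate equilibrium just mentioned:

```python
import math

# Approximate Henry solubilities Hcp at 298.15 K, mol/(m^3 Pa), and
# van 't Hoff temperature coefficients -d(ln Hcp)/d(1/T), K.
# Illustrative values only; real compilations give ranges.
GASES = {
    "O2":  (1.3e-5, 1500.0),
    "CO2": (3.3e-4, 2400.0),
}

def solubility(gas, T_K, T0=298.15):
    """Henry solubility at temperature T via the van 't Hoff relation.
    Colder water (T < T0) gives a positive exponent, hence more dissolved gas."""
    H0, coeff = GASES[gas]
    return H0 * math.exp(coeff * (1.0 / T_K - 1.0 / T0))
```

Two things fall out of even this crude sketch: CO2 is far more soluble than O2 at any given temperature, and its larger temperature coefficient means warming strips proportionally more of it from solution.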

(From the text of the paper, one factor seems to have been the circulation patterns of oceanic water, which were arrested by the heating.)

I didn't mean to divert your attention from all the hoopla surrounding the orange fool, but frankly, he doesn't matter and has never mattered, and his ultimate significance will prove to be that of Caligula, so much as Caligula matters today - he doesn't - except for the amusing historical fact that Caligula put a horse in the Senate and the orange idiot has a turtle in the Senate.

Same difference.

Have a nice day tomorrow.

Metal Free Thermochemical Water Splitting at Unusually Mild Conditions.

The paper I'll discuss in this post is this one: Phosphorus-Doped Graphene as a Metal-Free Material for Thermochemical Water Reforming at Unusually Mild Conditions (Garcia et al., ACS Sustainable Chem. Eng., 2019, 7 (1), pp 838–846).

Recently in this space I discussed the thermochemical splitting of carbon dioxide (into CO and O2 gases) using a cerium oxide based catalyst in which the oxygen evolution reaction took place at 1400°C, showing that there is - as currently operated using "simulated solar energy" - not enough cerium on earth to split one billion tons of carbon dioxide, using either solar thermal or nuclear energy (although nuclear is considerably less onerous in terms of putative cerium demands). One billion tons of carbon dioxide is about 3% of what we currently dump each year into the planetary atmosphere.
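The arithmetic behind that percentage is simple. The ~37 billion ton annual emissions figure used below is an approximate, commonly cited value, not a number taken from the cerium paper:

```python
# Rough scale check: one billion tonnes of CO2 against annual emissions.
M_CO2 = 44.01                      # molar mass of CO2, g/mol
mass_g = 1.0e9 * 1.0e6             # one billion tonnes, in grams
moles = mass_g / M_CO2             # ~2.3e13 mol of CO2 to split

annual_Gt = 37.0                   # approximate current annual CO2 emissions, Gt
fraction = 1.0 / annual_Gt         # ~2.7%, i.e. "about 3%"
print(f"{moles:.2e} mol, {fraction:.1%} of annual emissions")
```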

Here's that post: Cerium Requirements to Split One Billion Tons of Carbon Dioxide, the Nuclear v Solar Thermal cases

From my perspective, the thermochemical splitting of either carbon dioxide or water is probably the only serious manner in which climate change can be reversed, and even if taken seriously - there are few people on this planet left or right who are serious about addressing climate change - it would still be a long shot, but, as the only shot with a modicum of probable success, one worth taking.

Scientists however, continue to work on the problem.

I have spent many years considering thermochemical cycles for splitting either water or carbon dioxide using nuclear energy (or, less seriously, solar thermal energy). Most, with a few exceptions, involve metals - the main exception being the famous sulfur-iodine cycle (which has metal-based modifications, however) - my personal favorite being the zinc oxide cycle, for reasons I won't go into here. The one I'll discuss here - it's really a half reaction, not a full cyclic reaction - is new to me, I must admit. It clearly is not scalable, or even worthy of consideration at scale, but the research is extremely interesting and certainly comes under the rubric of "a good lead," particularly since the required temperatures for hydrogen evolution are unusually low, about 900°C.

This involves an interesting material, graphene, which has been the subject of huge amounts of research in materials science.

From the introduction:

Among the most general ways to obtain graphene-related materials, the one starting with graphite that is submitted to deep chemical oxidation to graphite oxide, followed by subsequent exfoliation to graphene oxide (GO), and final chemical reduction provides a graphene material denoted as reduced graphene oxide (r-GO). r-GO is among the most widely studied graphene materials because it can be prepared in a reliable way in gram scale (Scheme 1).(1,2)

Scheme 1. Process of Preparation of r-GO from Graphite Involving Oxidation to Graphite Oxide and Exfoliation to GO

(i) Chemical oxidation, (ii) exfoliation, and (iii) chemical reduction.


The above process to perform graphite exfoliation by conversion of graphene (G) into GO is based on the possibility of carrying out the oxidation and reduction of G/GO, increasing the oxygen content to above 50 wt % from G to GO, with a certain degree of control, and then, subsequently decreasing this oxygen content from 50 to about 10 wt %, which is characteristic of r-GO. This ability to increase and decrease the oxygen content on G sheets is reminiscent of the so-called Mars van Krevelen oxidation/reduction of nonstoichiometric transition metal oxides, in where the oxygen content of the inorganic oxide can be varied to a certain extent, generally much lower than the one commented in the case of G/GO/r-GO.(3) This Mars van Krevelen mechanism has been, however, advantageously used to promote catalytic oxidations/reductions, and more related to the present work, this swing between the two related materials with different oxygen contents is at the base of the thermochemical cycles for water splitting or steam reforming.

In steam reforming, a substrate (S) promotes the reduction of water, resulting in the generation of hydrogen (eq 1) and substrate oxidation. If the oxidized form of the substrate, most frequently inorganic oxides (for instance ceria, perovskites, or spinel ferrites) due to the required thermal stability (T = 1300–1500 °C), can subsequently be thermally reduced by oxygen evolution (eq 2), then the two steps can serve to perform cyclically the overall water splitting.(4,5) It has been reported, that one of the main challenges in thermochemical water reforming is the development of materials able to promote efficiently thermochemical transformations at low temperatures (<1100 °C), especially for large scale production.(5−7)


Graphene is a form of carbon in which all of the carbon atoms are bonded together in a plane, which is also characteristic of graphite, but unlike graphite, the graphene is exactly one atom thick. The layers are not connected.

What is interesting here is that the carbon source for the graphene is biomass, as opposed to a dangerous fossil fuel source, meaning that it is possible that this approach is sustainable, at least on a moderate scale.

One source is alginic acid, which is obtained from brown algae, many species of which are believed to be excellent tools for carbon capture from the atmosphere. The other is phytic acid, which is per-phosphorylated inositol, found in beans and notably in manure, where it is responsible for environmentally problematic concentrations of phosphorus.

Graphene in the presence of steam is reformed normally, yielding carbon dioxide and hydrogen - and the reforming of biomass is probably an excellent approach to carbon capture as well as thermochemical splitting - however, there are certain mineral considerations that represent significant hurdles.
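The overall reforming reaction implied there, C + 2 H2O → CO2 + 2 H2, gives a quick sense of scale. Here's a back-of-envelope hydrogen yield per kilogram of graphenic carbon, assuming complete reforming:

```python
# Hydrogen yield from full steam reforming of graphenic carbon:
#   C + 2 H2O -> CO2 + 2 H2  (stoichiometry check only)
M_C, M_H2 = 12.011, 2.016          # molar masses, g/mol
V_m = 22.414                       # molar volume of ideal gas at STP, L/mol

mol_C = 1000.0 / M_C               # moles of carbon in 1 kg
mol_H2 = 2.0 * mol_C               # 2 mol H2 evolved per mol C consumed
mass_H2_kg = mol_H2 * M_H2 / 1000.0
vol_H2_m3 = mol_H2 * V_m / 1000.0
print(f"{mass_H2_kg:.2f} kg H2, {vol_H2_m3:.1f} m^3 at STP")
```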

In order to prevent the reformation of graphene, the authors here have phosphorylated graphene oxide.

Some pictures from the paper, first the synthesis of the graphene (and its oxide):



The caption:


Scheme 1. Process of Preparation of r-GO from Graphite Involving Oxidation to Graphite Oxide and Exfoliation to GO


(i) Chemical oxidation, (ii) exfoliation, and (iii) chemical reduction.



Next, the xray photoelectron spectrum (XPS) of the graphene:



The caption:

Figure 1. XPS survey spectrum (a) and C 1s (b), O 1s (c), and P 2p (d) high-resolution peaks recorded for Phy-G and their corresponding best deconvolution fits.


The chemical nature of the phosphorus attached to the graphene can be discerned from nuclear magnetic resonance (NMR) spectroscopy, since the only naturally occurring isotope of phosphorus, 31P, is magnetically active. The 31P spectrum:



The caption:

Figure 2. Solid state 31P NMR spectrum of Phy-G, with indication of the assignment based on the literature.(31−34)


"Phy-G" is phosphorous doped graphene.

High resolution transmission electron microscope (HRTEM) images:



Atomic force microscope images:

The caption:

Figure 4. AFM images of Phy-G samples. (a) General wide-field image of Phy-G samples showing a 2D sheet on which smaller particles are supported. (b) 3D image of a wide-field region of the same Phy-G sample. (c) Image corresponding to a part of a 2D sheet, where the blue, green, and red lines indicate the height measurements. (d) Height measurement along the lines indicated with the same colors in image (c).


The hydrogen evolution over 21 cycles:

The caption:

Figure 6. H2 evolution upon 21 consecutive activation-oxidation cycles (red). The temperature cycles have been included in blue.


The authors do some in silico calculations. Here's some fun details of their approach:

The potential energy calculations were performed using spin polarized DFT with the VASP 5.4.1 code (Vienna ab initio simulation program) developed at the Fakultät für Physik of the Universität Wien.(20) We used the projector augmented wave (PAW) scheme(21) with the Perdew–Burke–Ernzerhof (PBE)(22) exchange and correlation (xc)-functional and a plane-wave energy cutoff of 400 eV. The system was modeled by a hexagonal 5 × 5 unit cell containing 50 atoms with a P atom substituting a C atom (2% doping),(23) with an optimized C–C bond separation of 1.429 Å and a 14 Å separation between graphene sheets. Γ-point sampling of the reciprocal space was used in the optimizations and the nudged elastic band (NEB)(24) method calculations.


Here's what they found:



The caption:

Figure 8. (a) Calculated PBE free energy profile (kcal/mol) at 650 °C for the stepwise thermochemical water splitting reaction on P-doped graphene (2%). The approximate transition structures TS1 and TS2 are the highest points on the NEB profiles (see Computational Details section). The structures include the most significant bond lengths in Å and angles in °. (b) Calculated PBE free energy in kcal/mol (relative to the R structure) for the intermediates formed in three successive hydrolysis steps (addition of a H2O molecule and cleavage of a P–C bond at every step) resulting in formation of phosphoric acid. Note that, in both figures, only the carbon atoms of the unit cell in the vicinity of the catalytic center are displayed.


Of course, the main problem with this system is that oxygen is not evolved: the reduction of water to hydrogen is accomplished first by the oxidation of phosphorus and finally, after a number of cycles, by the oxidation of the graphene itself, that is, its ultimate reforming.

The authors write:

Lack of O2 Evolution
It is worth noting, that evolution of O2 was not detected in any step in these experiments, either using Phy-G or G, indicating that eq 2 does not take place. However, since H2 evolves in the hydrolysis steps, it is clear that the O atoms present in H2O must remain attached in the Phy-G catalyst or could promote some decomposition. In order to address the nature of the oxygenated groups being formed on Phy-G, Raman spectroscopy and XPS analysis of the Phy-G catalyst after extensive use in the thermochemical H2O reactions were carried out.

The XPS P 2p peaks of Phy-G, after its use in steam reforming and its best deconvolution fit, are presented as Figure 7, which also provides a comparison with the P 2p peak of the fresh sample. The first information provided by XPS was a decrease in the proportion of P quantified by the decrease of the P/C atomic ratio from the initial 0.072 value for the fresh Phy-G material to the 0.021 ratio determined for the Phy-G sample after its use in the thermochemical H2 generation from H2O. Comparison of P 2p spectra of fresh and used Phy-G confirms a shift in the P 2p peak of the used Phy-G toward higher binding energies, indicating the increasing presence of oxidized P in the catalyst composition. In addition, as it can be observed in Figure 7, the P 2p peak of Phy-G after the reaction presents only two main components instead of three. In this case, the component at 132 eV, related to the P–C bond, is no longer present, while components at 134 and 136 eV in relative percentages of 74.5 and 25.5%, respectively, are related to the formation of the P–O bonds...

...The solid-state 31P NMR spectra of fresh and used Phy-G have been similarly recorded, and they are compared in Figure S7. As it can be seen there, the contribution of peaks corresponding to triphenylphosphine and triphenylphosphine oxide has considerably decreased, while the peaks attributed to phosphate and other P oxide groups have undergone a notable increase in good agreement with the information provided by XP and Raman spectroscopies. Therefore, the incorporation of O atoms in P-doped G as phosphate groups is confirmed by three different techniques, and thus, the lack of O2 gas in the stream can be attributed to the oxophilic nature of P and also, to some degree, of graphenic C oxidation during reaction. Observation of CH4 and CO in the thermochemical cycles clearly indicates this gradual oxidation of G, since the most likely origin of CH4 is methanation of CO2.


Nevertheless, a cool paper, and quite interesting for the development of future catalytic systems.

An excerpt of the paper's conclusion:

It has been found experimentally that defective G obtained from biomass pyrolysis undergoes steam reforming at temperatures above 400 °C forming H2 and CO2. Grafting of P atoms on the G sheet increases considerably its stability under conditions of steam reforming. A graphenic material doped with P was obtained by pyrolysis of phytic acid. Characterization of this material shows that together with the expected P-doped G, the other nanoparticulated component is also present in much lesser proportions. Although the stability of Phy-G is notably higher than that of G, and H2 evolution is observed, no oxygen evolution could be achieved under the conditions tested. It seems that oxygen becomes too strongly attached to P atoms and also some degree of oxidation of the graphenic material to CO and CO2 (converted to CH4) is occurring...


Have a nice day tomorrow.

Reaching the end of a job interview, the Human Resources Manager asked the young engineer...

...fresh out of the university, "And what starting salary were you looking for?"

The engineer said, "In the neighborhood of $100,000 a year, depending on the benefits package."

The HR Manager said, "Well, what would you say to a package of $200,000 a year, 5 weeks vacation, 14 paid holidays, full medical and dental, company matching retirement fund to 50% of salary, and a company car leased every 2 years - say, a red Mercedes?"

The engineer sat up straight and said, "Wow!!! Are you joking?"

And the HR Manager said, "Of course, ...but you started it."