Philosoraptor [he/him, comrade/them]

  • 22 Posts
  • 254 Comments
Joined 5 years ago
Cake day: August 3rd, 2020

  • Yeah, this is a much more serious issue. In particular, we have a lot of very good reasons to think that the impact on precipitation levels and distribution would be significant even at the levels necessary to effect a rather minor temperature reduction. The eruption of Mt. Pinatubo in the early 1990s dropped the global average temperature by about half a degree C, but also seems to have caused both severe droughts and severe flooding in various places. The precipitation disruption signal appeared and disappeared at essentially the same time as the temperature reduction signal, so we’re pretty confident that the sulfur compounds Pinatubo put into the atmosphere caused both. The overall land-based precipitation on the planet decreased significantly:

    Just as worrying (or maybe even more worrying) was the change in how the remaining precipitation was distributed. This is a heat map of the Palmer Drought Severity Index values over the relevant time period. Warmer colors represent anomalously low precipitation levels, and cooler colors represent anomalously high precipitation levels:

    As you can see, to a very great extent the precipitation patterns are almost exactly inverted. Places that tend to be dry–the Amerikkkan southwest, parts of Africa, parts of Australia, etc.–were unusually wet. Places that tend to be wet–the Amazon, equatorial Africa, and southeast Asia in particular–were spectacularly dry. Both of those are bad: places that rely on the monsoons didn’t really get them, and places that aren’t used to large amounts of rain were flooded. We think now that this is largely attributable to complex changes in evaporation patterns as a result of the decreased solar intensity, and that was with only enough albedo modification to reduce the global temperature by half a degree. If we were to pursue this policy, we’d probably be looking at reductions at least two or three times as intense as that, which would almost certainly be associated with similarly increased precipitation disruptions. That might end up being more damaging than the warming itself.



  • Peak emission for the surface is ~10μm, so not quite near infrared, but not super far infrared either. This is the whole pickle with the greenhouse effect: the atmosphere is basically transparent at the peak wavelength for incoming solar radiation, but close to opaque for outgoing infrared. The idea with albedo modification geoengineering is to sidestep that problem by just cutting down on the amount of energy coming into the system before it even has a chance to pass through the atmosphere. It would definitely work at its primary task of cooling the atmosphere, but both theoretical models and real-world analogues (like eruptions of high-sulfur-content volcanoes) show it’s also very likely to significantly disrupt other parts of the climate system.


  • No, you wouldn’t want to increase the albedo in the infrared, because the peak emissivity of the sun is in the visible spectrum, but the peak emissivity of the Earth is in the infrared. You’d end up blocking more of the outgoing radiation than the incoming radiation. Stratospheric aerosol injection plans also would not meaningfully impact either solar panels or photosynthesis, though. The plans on the table would adjust the global albedo by low single-digit watts per square meter. Neither plants nor (especially) solar panels are operating so close to the edge of full efficiency that such a small adjustment would meaningfully impair their performance. The big problem with this proposal is that it’s likely to have all sorts of knock-on climatological effects, especially on precipitation levels and distribution patterns.
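    To put “a few watts per square meter” in perspective, here’s a back-of-the-envelope sketch (the forcing values are illustrative, not from any specific SAI proposal):

```python
# How big is a few W/m^2 of albedo forcing relative to incoming sunlight?
# The forcing values below are illustrative, not from any specific proposal.
SOLAR_CONSTANT = 1361.0               # W/m^2 at top of atmosphere
MEAN_INSOLATION = SOLAR_CONSTANT / 4  # ~340 W/m^2 averaged over the sphere

for forcing in (1.0, 2.0, 4.0):       # "low single-digit" adjustments
    pct = 100.0 * forcing / MEAN_INSOLATION
    print(f"{forcing:.0f} W/m^2 -> {pct:.2f}% of mean insolation")
```

    Even the top of that range is only about a 1% dimming of average incoming sunlight, which is why neither solar panels nor photosynthesis would take a meaningful hit.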

  • What do you think of the assumptions mainstream climate science makes in their models compared with other numbers, like the amount of warming caused by CO2 doubling? I’ve seen James Hansen put forward that it’s much higher than mainstream models, closer to 4.5°C. I forget the exact range both put forward.

    There’s a lot of uncertainty surrounding this number (it’s called the “climate sensitivity”) and a pretty big range of estimates. Hansen’s number is on the higher end of the range, though not the highest you can find in the literature. I haven’t done a systematic survey of the literature lately, but my anecdotal impression is that the average estimate is shifting higher, especially in the last few years. As you say, a lot of the uncertainty comes from whether (and how much) a particular model includes the presence of various positive feedback mechanisms like permafrost melt-associated methane release or thermohaline shutdown. These sorts of things are, by their very nature, extremely hard to predict with a high degree of certainty, so the best we can do is assign a probability distribution over the relevant values. The overall predictions of the models depend pretty sensitively on the exact shape of those probability distributions, which in turn depend on the value of various other parameters.

    In the biz, we call those kinds of things “highly tunable parameters.” They’re processes that the model isn’t resolving explicitly in its physics (we’re not directly simulating the melting of permafrost, the location of methane deposits, and the associated release of GHGs, for instance) that also have a rather large range of “physically plausible” values. The classic example of a highly tunable parameter is cloud formation. Because of computational limitations, our most detailed models run on grids with squares that are on the order of ~150km to a side. That means that anything happening at a spatial scale smaller than the grid spacing is invisible to the model, since it can’t explicitly resolve sub-grid dynamics. Most clouds are significantly smaller than that, so we can’t really model cloud formation directly (in the sense of just having the physics engine do it), but clouds are (obviously) really important to the state of the climate for a whole bunch of reasons. The way we get around that is by parameterizing cloud formation in terms of stuff that the model can resolve. Basically, this means looking at each grid square and having the model figure out what percentage of the square is likely to be covered by clouds (and at what elevation) based on various values that we know are physically relevant (humidity, temperature, pressure, etc.) in that square. This is imperfect, but it does a pretty good job and lets the model work with stuff that it can’t directly simulate.
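    A minimal sketch of what a scheme like that can look like, in the style of a Sundqvist-type relative-humidity parameterization (the critical-RH threshold of 0.8 is illustrative, and is exactly the kind of highly tunable knob I mean):

```python
import numpy as np

# Toy Sundqvist-style scheme: diagnose the cloudy fraction of a grid
# cell from a variable the model *can* resolve (relative humidity).
# RH_CRIT is a highly tunable parameter; 0.8 is illustrative only.
RH_CRIT = 0.8

def cloud_fraction(rh):
    """Cloud fraction rises from 0 at RH_CRIT to 1 at saturation."""
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - RH_CRIT))
    return np.clip(frac, 0.0, 1.0)

# One model column, dry to saturated: cells below the threshold stay
# clear, saturated cells are fully overcast.
print(cloud_fraction(np.array([0.5, 0.8, 0.9, 1.0])))
```

    Change RH_CRIT to 0.7 and the same column gets noticeably cloudier; that’s the tuning problem in miniature.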

    Lots of feedback mechanisms are like this too. For one reason or another, many of them are not things that we’re simulating directly–sometimes because we don’t have a good enough theoretical understanding, sometimes because we don’t have the relevant data, and sometimes because (like clouds) they operate at spatial scales below what the model can “see.” But we all know that those things are important, so they’re incorporated as parameterizations. The problem is that each abstraction step here introduces another layer of uncertainty: the relevant parameters are often highly tunable so there’s uncertainty there, we’re not sure exactly how strong the coupling constants are so there’s uncertainty there, and we’re not sure we have all the relevant processes parameterized. That’s a big part of what explains the range of estimates: depending on your preferred values for all those things, you can get a climate sensitivity as low as 2 or 3 degrees C and as high as 6 or 7.
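    You can see how that plays out with a toy version of the exercise: a zero-dimensional energy balance where sensitivity is S = F_2x / (lambda_Planck - total feedback), swept over a grid of feedback strengths. All the numbers are ballpark-plausible but illustrative, not values from any real model:

```python
import itertools

# Toy parameter sweep: equilibrium climate sensitivity from a
# zero-dimensional energy balance, S = F_2X / (LAMBDA_PLANCK - feedbacks).
# All ranges are illustrative ballpark values, not from any real model.
F_2X = 3.7           # W/m^2 forcing from a CO2 doubling
LAMBDA_PLANCK = 3.2  # W/m^2/K baseline radiative response

water_vapor = [1.0, 1.4, 1.8]    # W/m^2/K, positive feedback
cloud       = [0.0, 0.3, 0.65]   # the classic uncertain one
other       = [0.05, 0.15]       # lumped remaining feedbacks

sensitivities = [
    F_2X / (LAMBDA_PLANCK - (wv + cl + ot))
    for wv, cl, ot in itertools.product(water_vapor, cloud, other)
]
print(f"{min(sensitivities):.1f} C to {max(sensitivities):.1f} C")
# -> 1.7 C to 6.2 C
```

    Nothing in that sweep is exotic: every combination is “physically plausible,” but stacking the strong ends of the feedback ranges more than triples the answer.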

    Part of how we deal with that problem is through the use of ensemble modeling. The big “grand ensemble” project I mentioned (CMIP, which stands for “Coupled Model Intercomparison Project”) involves many different institutions and labs running a standardized series of virtual experiments with a uniform set of initial and boundary conditions. Every few years, scientists will get together and hammer out a set of questions to answer, turn those into model experiments, and then go home and run the same simulations on each of their own home institution’s in-house model. As part of that, those multi-model ensembles will incorporate what are called “perturbed physics ensembles,” which involve holding initial conditions constant and exploring how systematically varying the values of different parameters changes the final output. This helps us explore the “value space” for these highly tunable parameters and see which things look to be sensitively dependent on which other things. The final consensus predictions (that you see, for instance, in the IPCC reports) are the result of integrating the results from all of these different ensemble runs that varied the underlying model physics (multi-model ensembles), initial conditions (initial condition ensembles), and parameter values (perturbed physics ensembles). That’s why the official numbers tend to be in the moderate range: the ensemble approach “smooths out” the more pessimistic and optimistic predictions.

    Is that a guarantee that the consensus result is more accurate? No, not really, but it’s hard to see how we could do it any better. In particular, if there are systematic biases that infect all the major models (because, for instance, they’re all descended from a small number of early ancestor models that made some bad assumptions), ensemble modeling won’t fix that. Some models will also incorporate processes or parameters that others just ignore. If those models are “more right,” then their predictions are probably closer to reality. That’s very hard to see in advance, though. Hansen’s predictions are more pessimistic than most partially because he leans toward parameter values that ascribe a stronger (and less self-limiting) role to positive feedbacks, and it’s looking increasingly like reality is bearing that out. There have also been some big surprises recently that almost nobody saw coming, like the collapse of land-based carbon sinks starting in 2023. Those sorts of long-tail processes are very, very hard to incorporate into models until after the fact because they represent “unknown unknowns,” but the general trend has been toward these “surprises” being pretty much uniformly bad; very rarely does something happen that makes warming run more slowly than the models suggested. Some people are trying to incorporate that into the models by over-sampling the more pessimistic end of parameter values when doing ensemble modeling. That’s controversial.


  • We’re not actually very good at weather manipulation, either in theory or in practice. Maybe counterintuitively, we have a much better handle on what we’d need to do for climate manipulation (especially via aerosol injection), and there’s definitely a robust research program investigating that, though it’s relatively new. We started systematically studying geoengineering proposals as part of CMIP6 in 2015 (I was actually part of the inaugural working group!) and it’s a pretty significant part of the overall effort now.

    Our understanding of weather (as opposed to climate) manipulation is much shakier. The highest-profile attempt to engage in it was probably in China before the 2008 Beijing Summer Olympics, and there isn’t even a widespread consensus on whether it was successful. They attempted some significant cloud seeding to try to keep it from raining on the games, and it didn’t rain, but we’re not very confident that it was the cloud seeding that did it. Part of the reason to prefer physically grounded models, though, is that it’s relatively easy to incorporate this stuff: the model doesn’t care whether cloud condensation nuclei are injected or naturally occurring. Would this have long-term butterfly-type effects? Yeah, definitely, but the weather system is so chaotic that it honestly wouldn’t really matter much. There’s already a pretty hard time horizon of about two weeks beyond which we might as well just be throwing darts to make predictions, and forecasts are really only reliable 5-7 days out. Introducing deliberate weather manipulation wouldn’t put us in a significantly worse position, and there are hard-to-overcome mathematical reasons why it’s challenging to improve forecasts beyond that timeframe anyway (at least for weather–obviously climate forecasting is different).
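    That hard horizon comes from sensitive dependence on initial conditions, and you can watch it happen in miniature with the Lorenz-63 system, the classic toy model for atmospheric chaos (the step size and run length here are just illustrative choices):

```python
import numpy as np

# Lorenz-63: the classic demonstration of why weather has a hard
# forecast horizon. Two trajectories start almost identically; the
# tiny difference grows until it's as big as the attractor itself.
def step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # butterfly-sized perturbation

max_sep = 0.0
for _ in range(40_000):              # 40 model time units
    a, b = step(a), step(b)
    max_sep = max(max_sep, float(np.linalg.norm(a - b)))

# The separation grows from 1e-8 to the size of the attractor.
print(f"max separation: {max_sep:.2f}")
```

    The perturbation grows roughly exponentially until it saturates at the scale of the attractor itself, which is exactly why injected cloud condensation nuclei wouldn’t make forecasting meaningfully harder than it already is.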