The article gives no indication of how the model works (other than that it's a neural network), so it could be a fluke, especially since it is not physics-based.

The Ars Technica piece has this at the bottom:

“It’s not immediately clear why the GFS performed so poorly this hurricane season,” Lowry wrote. “Some have speculated the lapse in data collection from DOGE-related government cuts this year could have been a contributing factor, but presumably such a factor would have affected other global physics-based models as well, not just the American GFS.”

  • BountifulEggnog [she/her]@hexbear.net · 20 days ago

    This is completely off topic and only related because of your work and my interest in climate change. What do you think of the assumptions mainstream climate science makes in its models compared with other numbers, like the amount of warming caused by CO2 doubling? I’ve seen James Hansen put forward that it’s much higher than mainstream models suggest, closer to 4.5 °C. I forget the exact range both put forward.

    Also, do models include things like Arctic methane being released, boreal forests burning, etc.? I again don’t have numbers in front of me, but those are fairly significant amounts of GHGs, right?

    Would love to hear more about your work in general, climate change might be the single most important issue to me.

    • What do you think of the assumptions mainstream climate science makes in its models compared with other numbers, like the amount of warming caused by CO2 doubling? I’ve seen James Hansen put forward that it’s much higher than mainstream models suggest, closer to 4.5 °C. I forget the exact range both put forward.

      There’s a lot of uncertainty surrounding this number (it’s called the “climate sensitivity”) and a pretty big range of estimates. Hansen’s number is on the higher end of the range, though not the highest you can find in the literature. I haven’t done a systematic survey of the literature lately, but my anecdotal impression is that the average estimate is shifting higher, especially in the last few years. As you say, a lot of the uncertainty comes from whether (and how much) a particular model includes the presence of various positive feedback mechanisms like permafrost melt-associated methane release or thermohaline shutdown. These sorts of things are, by their very nature, extremely hard to predict with a high degree of certainty, so the best we can do is assign a probability distribution over the relevant values. The overall predictions of the models depend pretty sensitively on the exact shape of those probability distributions, which in turn depend on the value of various other parameters.

      In the biz, we call those kinds of things “highly tunable parameters.” They’re processes that the model isn’t resolving explicitly in the model physics (we’re not directly simulating the melting of permafrost, the location of methane deposits, and the associated release of GHGs, for instance) that also have a rather large range of “physically plausible” values. The classic example of a highly tunable parameter is cloud formation. Because of computational limitations, our most detailed models run on grids with squares that are on the order of ~150 km to a side. That means that anything that happens at a spatial scale smaller than the grid spacing (roughly 100–150 km) is invisible to the model, since it can’t explicitly resolve sub-grid dynamics. Most clouds are significantly smaller than that, so we can’t really model cloud formation directly (in the sense of just having the physics engine do it), but clouds are (obviously) really important to the state of the climate for a whole bunch of reasons. The way we get around that is by parameterizing cloud formation in terms of stuff that the model can resolve. Basically, this means looking at each grid square and having the model figure out what percentage of the square is likely to be covered by clouds (and at what elevation) based on various values that we know are physically relevant (humidity, temperature, pressure, etc.) in that square. This is imperfect, but it does a pretty good job and lets the model work with stuff that it can’t directly simulate.
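      To make the cloud-parameterization idea concrete, here’s a toy, Sundqvist-style diagnostic scheme in Python. This is a sketch only: real schemes take many more inputs, and the critical relative humidity `rh_crit` here is an illustrative stand-in for exactly the kind of tunable knob described above.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Toy Sundqvist-style diagnostic cloud scheme: cloud fraction in a
    grid cell rises from 0 at a critical relative humidity to 1 at
    saturation. rh_crit is a tunable parameter (illustrative value)."""
    rh = np.clip(rh, 0.0, 1.0)
    # Below rh_crit the ratio exceeds 1 and is clipped, giving zero cover.
    frac = 1.0 - np.sqrt(np.clip((1.0 - rh) / (1.0 - rh_crit), 0.0, 1.0))
    return np.where(rh < rh_crit, 0.0, frac)

# One model level on a coarse grid: grid-mean relative humidity per cell.
rh_grid = np.array([[0.55, 0.82], [0.90, 1.00]])
print(cloud_fraction(rh_grid))
```

      Nudging `rh_crit` up or down shifts cloud cover everywhere at once, which is why a single tunable parameter like this can move the whole model’s energy balance.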

      Lots of feedback mechanisms are like this also. For one reason or another, many of them are not things that we’re simulating directly–sometimes because we don’t have a good enough theoretical understanding, sometimes because we don’t have the relevant data, and sometimes because (like clouds) they’re operating at spatial scales below what the model can “see.” But we all know that those things are important, so they’re incorporated as parameterizations. The problem is that each abstraction step here introduces another layer of uncertainty: the relevant parameters are often highly tunable so there’s uncertainty there, we’re not sure exactly how strong the coupling constants are so there’s uncertainty there, and we’re not sure we have all the relevant processes parameterized. That’s a big part of what explains the range of value estimates: depending on your preferred values for all those things, you can get a climate sensitivity as low as 2 or 3 degrees C and as high as 6 or 7.
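      As a cartoon of how parameter uncertainty turns into that wide sensitivity range, here’s a little Monte Carlo sketch using the standard feedback-amplification relation S = S_ref / (1 − f). All the numbers are illustrative placeholders, not values drawn from any actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference (no-feedback) sensitivity, roughly the Planck-only warming per
# CO2 doubling; f is the net feedback factor summed over uncertain
# feedbacks (water vapour, clouds, ice albedo, ...). Illustrative numbers.
S_REF = 1.2
f = rng.normal(loc=0.65, scale=0.13, size=100_000)
f = f[f < 0.95]                   # discard unphysical runaway samples

sensitivity = S_REF / (1.0 - f)   # amplification by net positive feedback
lo, med, hi = np.percentile(sensitivity, [5, 50, 95])
print(f"5th/50th/95th percentile sensitivity: {lo:.1f}/{med:.1f}/{hi:.1f} C")
```

      The point of the sketch: a modest, symmetric uncertainty in the feedback strength produces a strongly right-skewed distribution over sensitivity, which is why the literature can span everything from ~2 °C to ~7 °C without anyone being obviously wrong.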

      Part of how we deal with that problem is through the use of ensemble modeling. The big “grand ensemble” project I mentioned (CMIP, which stands for “Coupled Model Intercomparison Project”) involves many different institutions and labs running a standardized series of virtual experiments with a uniform set of initial and boundary conditions. Every few years, scientists will get together and hammer out a set of questions to answer, turn those into model experiments, and then go home and run the same simulations on each of their own home institution’s in-house model. As part of that, those multi-model ensembles will incorporate what are called “perturbed physics ensembles,” which involve holding initial conditions constant and exploring how systematically varying the values of different parameters changes the final output. This helps us explore the “value space” for these highly tunable parameters and see which things look to be sensitively dependent on which other things. The final consensus predictions (that you see, for instance, in the IPCC reports) are the result of integrating the results from all of these different ensemble runs that varied the underlying model physics (multi-model ensembles), initial conditions (initial condition ensembles), and parameter values (perturbed physics ensembles). That’s why the official numbers tend to be in the moderate range: the ensemble approach “smooths out” the more pessimistic and optimistic predictions.
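      A perturbed physics ensemble can be sketched in a few lines with a zero-dimensional energy-balance model: hold the forcing and initial state fixed, sweep the tunable parameters, and look at the spread of outcomes. Again, all parameter values here are made up for illustration, not taken from any real ensemble:

```python
import numpy as np
from itertools import product

def toy_energy_balance(forcing, lam, c=8.0, years=200, dt=0.1):
    """Minimal zero-dimensional energy-balance model, C dT/dt = F - lam*T.
    Equilibrium warming is F / lam; c only sets how fast we get there."""
    t_anom = 0.0                  # fixed initial condition for every member
    for _ in range(int(years / dt)):
        t_anom += dt * (forcing - lam * t_anom) / c
    return t_anom

# Perturbed-physics ensemble: fixed forcing and initial state, sweep the
# feedback parameter lam and effective heat capacity c (illustrative values).
lams = [0.8, 1.0, 1.2, 1.5]       # W m^-2 K^-1
caps = [6.0, 8.0, 10.0]           # effective ocean heat capacity
forcing = 3.7                     # roughly CO2-doubling forcing, W m^-2

results = [toy_energy_balance(forcing, lam, c) for lam, c in product(lams, caps)]
print(f"ensemble warming: min {min(results):.2f} C, "
      f"mean {np.mean(results):.2f} C, max {max(results):.2f} C")
```

      Real perturbed physics ensembles do the same thing with full GCMs and dozens of parameters, which is why they’re so computationally expensive and why the parameter “value space” can only ever be sampled, not exhausted.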

      Is that a guarantee that the consensus result is more accurate? No, not really, but it’s hard to see how we could do it any better. In particular, if there are systematic biases that infect all the major models (because, for instance, they’re all descended from a small number of early ancestor models that made some bad assumptions), ensemble modeling won’t fix that. Some models will also incorporate processes or parameters that others just ignore. If those models are “more right,” then their predictions are probably closer to reality. That’s very hard to see in advance, though. Hansen’s predictions are more pessimistic than many others partially because he leans toward parameter values that ascribe a stronger (and less self-limiting) role to positive feedbacks than many others do; it’s looking increasingly like reality is bearing that out. There have also been some big surprises recently that almost nobody saw coming, like the collapse of land-based carbon sinks starting in 2023. Those sorts of long-tail processes are very, very hard to incorporate into models until after the fact because they represent “unknown unknowns,” but the general trend has been toward these “surprises” being pretty much uniformly bad; very rarely does something happen that makes warming run more slowly than the models suggested. Some people are trying to incorporate that into the models by over-sampling the more pessimistic end of parameter values when doing ensemble modeling. That’s controversial.