In a case of refreshing transparency, a journal has published a detailed list of corrections it requested from authors of a paper on the costs of climate change, even though the authors declined to make most of them.
Earlier this year, the journal Ecological Economics published a paper that cast some doubt on the FUND model, which, as the article explains:
The FUND model of climate economics, developed by Richard Tol and David Anthoff, is widely used, both in research and in the development of policy proposals. It was one of three models used by the U.S. government’s Interagency Working Group on the Social Cost of Carbon in 2009 (Interagency Working Group on Social Cost of Carbon, 2010). The Working Group’s “central estimate” of the social cost of carbon (SCC), i.e. the monetary value of the incremental damages from greenhouse gas emissions, was $21 per ton of CO2.
The paper concluded:
In FUND’s agricultural modeling, the temperature-yield equation comes close to dividing by zero for high-probability values of a Monte Carlo parameter. The range of variation of the optimal temperature exceeds physically plausible limits, with 95% confidence intervals extending to 17 °C above and below current temperatures. Moreover, FUND’s agricultural estimates are calibrated to research published in 1996 or earlier.
Use of estimates from such models is arguably inappropriate for setting public policy. But as long as such models are being used in the policymaking process, an update to reflect newer research and correct modeling errors is needed before FUND’s damage estimates can be relied on.
In other (simplified) words, FUND needs some work if we’re going to use it to make policy.
Tol, a professor of economics at the University of Sussex, and Anthoff were happy to have constructive criticism, but they also thought the paper got a number of things wrong. And they were a bit baffled by some of those errors, given that they had helped the authors through their analysis.
So they wrote a letter to the editor of the journal, as David Stern notes in a letter published online in the journal last week. Elsevier asked Stern, a professor at the Crawford School of Public Policy, part of Australian National University, Canberra, to adjudicate because the journal’s editor in chief, Richard Howarth, had published with Ackerman.
Along with the Stern letter, the journal has now published a commentary from Anthoff and Tol, and a response from Frank Ackerman and Charles Munitz. From Stern’s letter:
The main point of contention is around Section 4.1 of the paper, which claims that the results of the FUND model could be affected by a division by zero problem. In my investigation, I had access to correspondence between Anthoff and Tol and Frank Ackerman prior to publication of the paper. In this exchange, Anthoff and Tol had told Frank Ackerman that the apparent division by zero problem was in fact addressed by the FUND model and the results were not substantially affected by it. I also relayed Tol’s concerns to Ackerman and received a reply from him. Based on the responses I received and the previous correspondence, I determined that some statements in the paper were problematic and that Ackerman and Munitz did not report in their paper the information they had received from the model developers about the division by zero issue.
Richard Tol stated that the minimum set of corrections that would address his concerns is the following:
In the abstract:
[1] In place of: “We examine the treatment of climate damages in the FUND model.” substitute: “We examine the treatment of climate damages in a modified version of the FUND model.”
In section 2:
[2] In place of: “The analysis described here begins with the Working Group’s modified version of FUND (fn 2).” substitute: “The analysis described here begins with the Working Group’s modified version of FUND, to which further changes were made (fn 2).” and add the following to footnote 2:
[3] “We made changes to the FUND model code as described in this paper. These changes were not validated by the model developers, David Anthoff and Richard Tol, and they did not vet the results. David Anthoff and Richard Tol are, therefore, not responsible for any of the model results presented below.”
In section 4:
[4] In place of: “4.1. Risk of division by zero” substitute: “4.1. Apparent risk of division by zero”
[5] In place of: “A fix for the optimum temperature equation bug is planned for the next version of FUND.” substitute: “Changes to the optimum temperature equation are planned for the next version of FUND.”
Ackerman and Munitz were willing to accept [5] and a modified version of [3] but were not willing to accept the other changes. Given this, Ecological Economics could not publish a formal correction to the article. Therefore, I decided to include the full set of requested corrections in this Editor’s note, along with the commentary of Anthoff and Tol, and the response from Ackerman and Munitz.
I trust that with the publication of both the commentary and response, along with this note, the journal has provided all parties the opportunity to express their concerns and opinions.
For their part, Tol and Anthoff conclude their own letter, published online today:
We have been in repeated contact with AM on this matter. We helped with configuring the code to run on their machines. Mr Ackerman then contacted us because he thought he had found a division-by-zero error. We explained why his tests are inconclusive; AM’s “not an appropriate way” (p. 222) echoes this. Well before the AM paper was submitted, we shared the results of our standard diagnostic test and the one specifically tailored to the alleged problem. Neither test reveals a problem.
We are surprised that AM nevertheless chose to publish their division-by-zero claim, while remaining silent on the results of these diagnostic tests.
This all seems like a good example of post-publication peer-review.
The letter from Ackerman and Munitz doesn’t appear to be online yet, but we’ll keep an eye out for it and update.
Update, 6:15 p.m. Eastern, 7/9/12: Please see Frank Ackerman’s comment below, which includes the text of his and Munitz’s letter.
I’ve pasted in our response, as it will appear in Ecological Economics, below these comments.
The requested “corrections” to our article did not originate with the journal, but with authors whose work we discussed in our article (Tol and Anthoff). The journal relayed them to us, and later reprinted them, without taking a position on them. The “corrections” appeared to us to be attempts to change the tone of voice of our article, not to correct any specific, identifiable errors of fact. No such errors have been identified in our work, which was properly peer-reviewed before publication.
The journal in question, Ecological Economics, received strongly worded calls for retraction, or correction, of our article. In the end, it decided that neither retraction nor correction was warranted. Its editor’s letter publishes the corrections requested by Tol, but does not endorse (or reject) them.
When we first drafted our analysis, we showed it to Anthoff and Tol before circulating it to others; our article acknowledges their response. When the strength of Tol’s reaction became clear, we offered to organize an on-line debate giving equal space to each side; he repeatedly declined.
This is a case of an intense academic disagreement, with some implications for public policy. It is not, however, a case that involves retraction of any sort.
Reply to Anthoff and Tol
Frank Ackerman and Charles Munitz
We would like to comment on two issues: the message of our article and the remedies to the division-by-zero problem.
What we found
The FUND model, as used by the U.S. government’s Interagency Working Group, produces a very low estimate of the social cost of carbon, in part because it projects a large net benefit of climate change in agriculture.
We identified two defects in the FUND equation for crop yields as a function of temperature. First, the yield-maximizing temperature for each region is a Monte Carlo parameter that varies over implausibly wide ranges: the 95 percent confidence intervals stretch from well below the temperatures of the last ice age, up to temperatures that human beings cannot survive. Second, the crop yield equation includes fractions with denominators that would be equal to zero for a particular value of the Monte Carlo parameter.
Both defects would be expected to produce an excessively wide range of results on successive iterations of the Monte Carlo analysis – some with extremely large net benefits, others with extremely large net damages. (As the key parameter approaches the divide-by-zero point from one side, the fraction tends toward positive infinity; on the other side, it tends toward negative infinity.) FUND has a built-in limit on the size of agricultural damages but no corresponding limit on the size of agricultural benefits, as explained in a footnote in our article. This asymmetric limitation screens out the excessive damages but leaves the excessive benefits intact, making the average outcome artificially positive. That is, the defects we identified both tend to exaggerate the benefits of climate change for agriculture.
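The asymmetry Ackerman and Munitz describe can be illustrated with a minimal, hypothetical sketch. Nothing here is FUND’s actual equation: the parameter distribution, the singular point, and the cap value are all invented for illustration. The sketch only shows the general mechanism, that flooring losses near a singular denominator while leaving gains uncapped pushes the Monte Carlo average upward.

```python
import random

random.seed(0)

SINGULAR_POINT = 0.0  # hypothetical value where the denominator hits zero
DAMAGE_CAP = -50.0    # hypothetical floor on damages; benefits left uncapped

def outcome(cap_benefits=False):
    # Monte Carlo parameter whose range straddles the singular value
    p = random.gauss(0.0, 1.0)
    v = 1.0 / (p - SINGULAR_POINT)   # blows up as p approaches the singular point
    v = max(v, DAMAGE_CAP)           # damages are truncated...
    if cap_benefits:
        v = min(v, -DAMAGE_CAP)      # ...and, optionally, benefits too
    return v

N = 100_000
asymmetric = sum(outcome() for _ in range(N)) / N
symmetric = sum(outcome(cap_benefits=True) for _ in range(N)) / N
# With only damages capped, occasional huge "benefit" draws near the
# singularity survive and inflate the average relative to a symmetric cap.
print(asymmetric, symmetric)
```

With a symmetric cap the extreme draws on both sides are screened out and the average stays near zero; removing the cap on one side only is enough to bias the mean toward that side.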
We then made two simple changes in the structure of the crop yield equation, leaving everything else unchanged. Each of these changes more than doubled the estimate of the social cost of carbon. As we said in the article,
“These changes are introduced solely to explore the sensitivity of FUND outputs to the structure of [the crop yield equation], not as recommendations for a corrected model structure; the authors of FUND have, quite reasonably, responded that this simple tinkering with one equation is not an appropriate way to revise the model.”
Anthoff and Tol’s response
When we first drafted our analysis, we sent it to David Anthoff for comment before releasing it to anyone else. His thoughtful response is acknowledged in several places in our article, although we continue to disagree about a number of issues.
To date we have received no response on the question of FUND’s use of implausible temperature ranges. On the divide-by-zero problem, David Anthoff and Richard Tol have responded that the results of individual Monte Carlo runs are always screened after the fact, and any extreme values are removed; in the runs used by the Interagency Working Group, removal of an unspecified number of runs that came closest to the divide-by-zero point reportedly had little effect on the results.
This manual screening process appears to be an essential but undocumented aspect of Anthoff and Tol’s use of the FUND model. When Anthoff told us about this procedure, he was unsure about the number of extreme values that are removed. Since the FUND model is available for downloading with no description of the need for manual screening of results, the procedure cannot be considered part of the model apparatus per se.
It is possible to run a model with a known algebraic defect, and then manually remove any distortions caused by the defect – but it does not seem to us like an ideal modeling methodology. Since the appearance of our critique of FUND version 3.5, software (but no documentation) for FUND version 3.6 has been released. In version 3.6, the crop yield equation has been rewritten to remove the risk of division by zero.
In our comment, we show that the tests by Ackerman & Munitz are inconclusive.
The comment by David Stern makes it clear that the Ackerman & Munitz paper is wrong in five distinct places, revolving around two points:
1. Is the work presented by Anthoff & Tol or by Ackerman & Munitz? The original paper was ambiguous. With the corrections, the paper makes clear that the work is by Ackerman & Munitz, who downloaded code and did God knows what with it.
2. Are there errors in the FUND code as used for the work of the US Interagency Working Group? The original paper claims that this is the case. With the corrections, it is clear that this claim is false.
These corrections go beyond a change in the tone of voice.
If only the tone had changed, why would the authors have refused to make these corrections?
In his comment, Stern adds that Ackerman & Munitz suppressed relevant information that contradicts their conclusion.
I shall re-read it all in the cold light of day. It is somewhat baffling to see a disagreement built up around a division by zero. Stern himself seems not to have analyzed it personally.
Division-by-zero is an elementary error. If you can stick that to someone, you severely damage a reputation.
Division-by-zero errors also manifest themselves quickly and loudly. We would have noticed had we made one, and we ran diagnostic tests just to make sure.
This is probably a lot more complicated as well. Prof. Tol is well known but also controversial in some circles, and some of his analyses are highly favored by climate skeptics. Also, the work of this blog was featured prominently in a recent post on Watts Up With That, a noted climate denier (or skeptic, what have you) site. The tone of the post was essentially that all science is corrupt and these retractions are evidence of that.
Grab your torch! Grab your pitchfork! RW is endangering the sanctity of climate science!
Wattswrongwiththis maintains all science is corrupt. Science has to be corrupt. It keeps giving the wrong answers. Grab your pitchforks and poke a climate scientist today.
An interesting debate. Discussing the limitations of a model’s applicability & questioning the assumptions behind it is always useful (even for the best & most useful models!).
Ackerman’s response seems very reasonable by itself; Tol’s is a bit shrill in tone, but that does not yet mean that he is wrong. I might just care enough to read the paper & Stern’s commentary to decide for myself…
In 2007, I received an email from Mr Ackerman. He could not reproduce our results on heat stress. I checked his math. He had dropped the urbanization equation. I wrote back. He acknowledged receipt.
In 2008, Mr Ackerman published a paper in which he claimed that he could not reproduce our results on heat stress.
See Bosello, F., R.Roson, and R.S.J.Tol (2008), ‘Economy-wide Estimates of the Implications of Climate Change — A Rejoinder’, Ecological Economics, 66, 14-15.
Mr Tol:
1. The (alleged) shortcomings of a particular model should be evaluated strictly based on the scientific evidence. For an objective evaluation, it is completely irrelevant whether or not there is a recurring pattern of disagreements (or even rivalry/animosity) between the scientists on both sides of the debate. Do you really expect everyone reading your papers (or using your code) to study the detailed history of your relations/email exchanges with your opponents?
2. If a publicly released software model implementation allows for a simple/likely misinterpretation of simulation results, one has to wonder about the usefulness of having that software publicly available.
The following passage from Mr Ackerman’s response seems particularly meaningful & fair. Could you perhaps comment on the accuracy of his account? Thanks in advance — I am sure many readers of this blog would appreciate it.
———————————————–
“On the divide-by-zero problem, David Anthoff and Richard Tol have responded that the results of individual Monte Carlo runs are always screened after the fact, and any extreme values are removed; in the runs used by the Interagency Working Group, removal of an unspecified number of runs that came closest to the divide-by-zero point reportedly had little effect on the results.
This manual screening process appears to be an essential but undocumented aspect of Anthoff and Tol’s use of the FUND model. When Anthoff told us about this procedure, he was unsure about the number of extreme values that are removed. Since the FUND model is available for downloading with no description of the need for manual screening of results, the procedure cannot be considered part of the model apparatus per se.
It is possible to run a model with a known algebraic defect, and then manually remove any distortions caused by the defect – but it does not seem to us like an ideal modeling methodology. Since the appearance of our critique of FUND version 3.5, software (but no documentation) for FUND version 3.6 has been released. In version 3.6, the crop yield equation has been rewritten to remove the risk of division by zero.”
Mr. Tol’s comment is a complete misrepresentation of our communication around that article. Interested parties should read both sides of the published exchange and judge for themselves. Our comments (by Frank Ackerman and Elizabeth A. Stanton) can be found at Ecological Economics, Volume 66, Issue 1, 15 May 2008, Pages 8-13, doi:10.1016/j.ecolecon.2007.10.006.
@Someone somewhere
You wondered about the shrillness of my tone. One reason is that Mr Ackerman is a serial offender.
As to your other points, the code is in the public domain because of reproducibility.
Any complex code can be misread. The safe alternative is not to do research.
The academic convention is that if you discover what looks like an error in someone else’s work, you contact that person and discuss.
Mr Ackerman indeed contacted us. We showed him all the evidence that the alleged error is imaginary. Yet, he published his claim.
Twice.
His main claim above is incorrect. We compare the mean to the trimmed mean to test for outliers. The published results are based on the mean.
It is true that we always screen our results before we publish them.
Mr Ackerman is not privy to our schedule of model revisions. His point about versioning is speculation. It is wrong too. The real reason is shrinkage.
@Mr Ackerman
I indeed omitted the bit where you argued that the heat stress model is misspecified. The English language distinguishes between “misspecification” and “irreproducible”. I also omitted the bit where I showed that the model is specified according to the then and now best epidemiological knowledge.
Mr. Tol: Thanks for clarifying. I am not taking any sides; just honestly trying to understand.
If you don’t mind, several follow up questions:
1. That screening you conducted (“with trimmed mean”, etc) — is it actually described in your papers and/or the software documentation? With all the trimming parameters? And the information on the percentage of “extreme values”/outliers omitted to compute the “trimmed mean” in those experiments — was it also reported?
2. If the answer is “no” or “partly” — why do you believe that your results are reproducible? Anyone using your publicly released software (v. 3.5) would also need to know the trimming parameters to reproduce the screening results — or am I missing something?
3. Did IWGSCC use similar screening/testing with your model? With the same parameters? What percentage of individual Monte-Carlo runs did they have to remove as producing “extreme values”?
4. In your opinion, is the following statement by Mr Ackerman actually correct (as of v. 3.5)?
“FUND has a built-in limit on the size of agricultural damages but no corresponding limit on the size of agricultural benefits, as explained in a footnote in our article. This asymmetric limitation screens out the excessive damages but leaves the excessive benefits intact, making the average outcome artificially positive.”
5. If the above (part 4) is mostly correct, then how do you know that your screening/trimming is sufficiently aggressive to remove the effects of this asymmetry?
6. When the outlier effects are highly asymmetric, but the trimmed mean and the original mean are roughly the same, that typically means that the outliers are _very_ uncommon. If so, a simple test showing how (un)common such asymmetric “excessive values” are [based on your other M-C parameters] would go a long way toward settling the issue. Is there a reason why this information is not provided?
Thanks again & best regards.
1. The trimmed results are not published and therefore the trimming method is not described in the papers.
2. See 1.
3. No.
4. This is a theoretical possibility, but I am not aware of any realization that shows this particular problem.
5. In principle yes, but in practice not needed.
6. In the papers under discussion, the focus is on the central part of the distribution. All our tests show that the estimates are computationally valid. Those results are not typically made available in accordance with common practice in this literature. In another paper, we focus on the tails of the distribution. The tests indicate that our estimate of the 99%ile is reliable.
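The mean-versus-trimmed-mean screening discussed in this exchange can be sketched generically. This is only an illustration of the diagnostic idea, not FUND’s actual test: the trimming fraction and the sample values are invented, and the real procedure’s parameters were, as noted above, not published.

```python
def trimmed_mean(values, trim_fraction=0.01):
    """Mean after discarding the most extreme trim_fraction of draws at each tail."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

def outlier_gap(values, trim_fraction=0.01):
    """Difference between the plain mean and the trimmed mean.

    A large gap suggests the estimate is driven by a handful of extreme draws.
    """
    return sum(values) / len(values) - trimmed_mean(values, trim_fraction)

tame = [0.9, 1.0, 1.1] * 100    # well-behaved Monte Carlo draws
spiked = tame + [10_000.0]      # same draws plus one extreme value

print(abs(outlier_gap(tame)))   # ~0: trimming barely moves the mean
print(outlier_gap(spiked))      # large: the single spike dominates the plain mean
```

The point of contention is visible in the sketch itself: the diagnostic’s verdict depends on the trimming fraction chosen, which is why commenters here press for those parameters to be documented.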
Having read Ackerman & Munitz paper, David Stern’s letter, and helpful clarifications by both sides of the debate offered in the comments above, I now have a slightly better understanding of the issue.
[Disclaimer: I am certainly not an expert in ecological economics; merely someone with a general interest in mathematical modeling, who took time to study these materials.]
Here is a summary of how I see it (just in case other regular readers care):
1. T&A have introduced a popular FUND model of climate economics, whose software implementation (available for download) was used by the U.S. government’s IWGSCC.
2. A&M wrote a paper about some potential serious problems in the model & claimed that its (then current) version 3.5 was not suitable for policy determination. About half of the criticism boils down to an (apparent) danger of near-zero denominators in their formula (2), which (combined with other asymmetries built into FUND) leads to exaggerated positive returns.
3. Read by itself, A&M’s criticism seems reasonably convincing — particularly the discussion in the first half of section 4.1 (up to the paragraph starting with “Two simple ways of removing the problem…”). Up to that point their analysis seems equally applicable to a wide class of random variables (including those used in FUND) & explains why A&M expect that FUND-based predictions (produced by IWGSCC) are suspect.
4. The second half of that section deals with two simple alternative formulations that would avoid the singularity problem. A&M explicitly acknowledge that these simple fixes are not “appropriate revisions of the model” but are merely used to test the sensitivity of FUND to “the structure” of formula (2).
They observe that both of their “different ways of eliminating the problem” produce significantly different predictions than the original FUND & then claim that “This result suggests that the FUND estimate of the SCC is significantly affected by the Monte Carlo iterations” in which random variable realizations are close to the singular value.
5. This last conclusion seems at least partly overreaching — after all, FUND might be sensitive to _some_ changes in structure of formula (2) even if it’s not sensitive to (rare/screened out/trimmed) occurrences of near-zero denominators in formula (2). Much more convincing evidence could be found by using the original (unmodified) FUND and showing that the variance of predictions grows as the number of iterations increases (similar to what A&M showed in their Figure 5).
3. In the private pre-submission email exchange T&A “had told” A&M that the alleged problem “was in fact addressed by the FUND model and the results were not substantially affected by it” (quoting Stern’s letter). This is certainly not reflected by the text of A&M’s paper and Stern concludes that “some statements in the paper were problematic and that Ackerman and Munitz did not report in their paper the information they had received from the model developers”.
4. I tend to agree with Stern’s conclusion, but am sincerely puzzled about the expected/appropriate behavior in this situation. After all, T&A’s papers & FUND software documentation don’t acknowledge any possibility of such issues. Instead, the developers of FUND have tested it internally & concluded that the issue does not significantly alter the predictions (based on Mr. Tol’s responses above & on their letter to the journal). The details of these tests (trimming/screening parameters, etc) were not released publicly, though T&A apparently sent some test results to A&M. I wonder if T&A would still object if the authors included a footnote “FUND model developers have notified us that their internal tests did not find any issues related to this singularity. We cannot confirm or deny these claims since the tests & the parameters used in them are not publicly available.”
5. T&A’s objections (in the letter to the journal) deal only with the second half of section 4.1. They completely ignore the first half (& if their diagnostic tests are correct, it would appear that they are in contradiction with what is reported there — I am not sure how they square it off). In addition, Mr. Tol’s above responses to my questions 4 & 5 don’t exactly inspire confidence. I really hope that his papers provide a better justification for his confidence in FUND’s predictions.
6. In view of the above, I believe that many changes/corrections demanded by T&A are unreasonable & overreaching. I understand why the authors refused to accept most of them. (A careful reading of Stern’s letter shows that this Editor did not endorse their demands himself even though he decided to publish them — RW’s note about this incident gives a somewhat misleading impression.)
7. I think that this entire controversy would have been easily avoidable had T&A found a more transparent way to publicize their internal/still-unpublished tests. Given the above, it is not clear whether a naive use of FUND v.3.5 [without very careful manual screening to remove the outliers] could in fact produce rather different predictions. Without explicitly describing the required screening algorithm/parameters (& the tests comparing trimmed means with regular means), it is not clear why the model developers believe that they have attained “reproducibility” by simply releasing the implementation code. If IWGSCC & others continue using the FUND, its developers might want to address such issues in future releases.
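The variance-growth check suggested above (showing that the spread of predictions keeps growing with the number of samples when a denominator can approach zero) can be sketched in a reproducible way. This is a generic illustration, not FUND’s equation: the singular value and the interval are invented, and a deterministic grid stands in for Monte Carlo draws so the result is exactly repeatable.

```python
C = 0.5  # hypothetical singular value of the parameter

def grid_std(n):
    """Standard deviation of 1/(u - C) sampled on an n-point grid over [-3, 3].

    As n grows, some grid point lands ever closer to C, so the largest
    sampled value grows roughly like n and the spread never settles down.
    The grid offsets are chosen so no point lands exactly on C.
    """
    us = [-3.0 + 6.0 * (i + 0.5) / n for i in range(n)]
    vals = [1.0 / (u - C) for u in us]
    m = sum(vals) / n
    return (sum((v - m) ** 2 for v in vals) / n) ** 0.5

spreads = [grid_std(n) for n in (1_000, 10_000, 100_000)]
print(spreads)  # keeps growing with sample size instead of converging
```

For a well-behaved integrand the spread would stabilize as n grows; persistent growth like this is the signature of a non-integrable singularity inside the sampled range.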
Thanks for the detailed analysis — very illuminating. I wanted to respond to point 6. Frank Ackerman and I had an email exchange after this post went live, and I asked about what seemed to be a contradiction between a comment Ackerman left here, and Stern’s editorial:
Ackerman responded:
Thanks for clarifying. This corresponds to a rather careful wording in Stern’s letter (“Richard Tol stated that the minimum set of corrections that would address his concerns is the following:…”)
I’d be also curious to know what those “problematic statements” actually are and whether Mr. Ackerman would agree with my criticism of the second half of their section 4.1
@Someone somewhere
There are four issues with the A&M paper.
1. They present as ours work that is not ours.
2. They claim that there is an error, knowing that there is not.
These issues are dealt with in Stern’s letter.
3. They repeat, in a superficial way, the analysis in two papers by us, without referencing our work.
4. Their tests are inconclusive.
These issues are dealt with in a comment by us.
As to your point 7, we had not counted on someone wanting to write a paper with the theme “there appears to be an error but there really is not”.
It will not happen again. The code has been upgraded so that people can no longer make changes without revealing these.
without referencing, even
Thanks for summarizing. As to the points you’ve enumerated:
1. My impression from reading their paper is that it is very clear where they stop talking about the “original” FUND and where they start talking about the “modified” FUND. It is very hard to see how anyone would be confused about this. They then use the results produced by the latter to generate (indirect) evidence about the properties of the former — that’s an overreach & I agree that they should have been more careful in their conclusions.
2. It is hard to know if “they know” that there is no error in the original/unmodified FUND. It is clear that they knew about your claim that the error is not there. (I agree that they should have acknowledged this fact.) It is possible that they found the evidence you offered to be unconvincing. Perhaps the optimal way to deal with this would be for you to make that evidence public — your letter/commentary published in the journal is insufficient for the readers to form their own informed opinion.
3. I don’t have enough knowledge to judge how much of this paper overlaps with your prior publications. (I did find it rather odd that no references to your papers were included in the bibliography.)
4. When discussing validity of a model, the responsibility for providing “conclusive evidence” lies with the author or with anyone claiming to prove that the model is plainly wrong. Anyone having doubts about the model might legitimately use indirect/inconclusive tests & that in itself should not prevent the publication — provided they don’t overstate their claims.
Moreover, your letter to the journal offers no evidence that “their tests are inconclusive” beyond explaining that the second half of section 4.1 deals with _modified_ FUND. But there are also tests and general analysis in the first half of section 4.1 (showing growth in experimentally measured variance for a random variable with a simple singularity — similar to the singularity present in original FUND v.3.5). Your comments up to this point did not address them in any way. Could you please explain how/why your diagnostic tests produce results seemingly inconsistent with this?
1. It may be clear to you. We have been asked repeatedly about exactly this point.
2. They knew the results of our tests, and they admit that their own tests are inconclusive. So, strictly, they knew they had no evidence for their claim that there is an error.
3. Odd, indeed. The paper does not refer to any paper about this particular model.
4. The conclusions of a paper should follow from the analysis. That is not the case here.
5. The model that underlies Figure 5 has a different structure and different parameters than the problem at hand. It is therefore irrelevant.
I appreciate the thoughtfulness and engagement with the substantive issues in recent posts (and Ivan Oransky’s presentation of my comments above). You’d have to ask David Stern exactly what he meant by “problematic statements”, and whether a footnote disclaimer such as the one suggested by “someone_somewhere” (point 4 of the last major post above) would have resolved the problem. That suggested footnote would have been entirely accurate: they told us their tests showed there was no problem, but never, to my knowledge, published any results of or detailed specifications for those tests – so, as the suggested footnote says, we could not confirm or deny those claims.
I’d like to step back from the intensity of disagreement over this single point to mention again the broader issue that we raise: while there is considerable scientific research suggesting serious threats of irreversible damages from climate change, models such as FUND (among others) suggest that the problem is quite small in economic terms. The dissonance between these two ways of framing the climate problem should give rise to serious inquiry about what’s missing or misstated in the economic analyses.
Our article did find a risk of division by zero (never a good thing in a quantitative model) in FUND. It also found that FUND’s very low estimate of climate damages, and hence the social cost of carbon, stems from very small damages in many areas, partially offset by sizeable estimated net global benefits of warming in agriculture. In the version of FUND run by the US government’s interagency task force, the projected increase in the cost of air conditioning is the largest cost of climate change; excluding air conditioning costs, the model implies that climate change would be a net benefit to the world.
We looked in more detail at the agriculture estimates, to understand the source of the estimated global benefit in that area. In addition to the division by zero issue, we found that FUND allows physically implausible variation in its assumed temperature that maximizes crop yields – from well below the temperature of the last ice age, to well above temperatures that human physiology can survive. This seems difficult to believe, as a model of climate and agriculture. It is supported exclusively (according to the FUND documentation) by research done in 1996 or earlier – a long time ago in a fast-moving area of research such as this. Newer research suggests a very different relationship between temperature and crop yields, implying significant losses from the first few degrees of warming.
We never meant to start a debate exclusively about the merits of dividing by zero, an issue that we assume needs little discussion. Rather, the important issue is how to model the economics of climate change in a manner that is consistent with the increasingly ominous scientific findings on the subject. This is a work in progress – but it is the topic that should be getting attention.
Thanks again for taking this so seriously and looking at it so carefully – rare qualities in on-line debate!
Having also read the Ackerman and Munitz (A&M) paper and some of the documentation in light of the comments on this blog page, it seems to me that the A&M paper is a justified critique, which is likely to be useful as a cautionary note in applying the FUND model without full consideration of its parameterization. Although much of the discussion has been around the “division by zero” problem, the other main critique that A&M raise is potentially equally problematic and hasn’t been addressed by Anthoff & Tol either in their “in press” response on the Ecological Economics site or here on this blog.
A&M question FUND’s treatment of the carbon fertilization effect on agricultural yield as a function of increasing atmospheric CO2 concentrations, and make what seems to be a reasonable critique of this, namely that (i) the temperature ranges encompassing optimal temperature (for agricultural output) at the 95% confidence level are unrealistically large, (ii) field studies don’t support a relationship in which agricultural output increases progressively (logarithmically) with increasing [CO2], (iii) FUND has a lower bound on agricultural damage but no upper bound on agricultural benefit, and (iv) the parameterization of the agricultural output–[CO2] relationships is based on data that neglects abundant research of the last 15 years indicating that the response of agricultural output is unlikely to be as parameterized in FUND.
Do you have a scientific response to these critiques, Dr Tol? In your Anthoff/Tol response in Ecological Economics you simply dismiss them out of hand:
“AM also take issue with the range of uncertainty about the impacts of climate change reported in the peer-reviewed literature, an issue that is outside our control”
when in fact the parameterization of the relationships between agricultural output and [CO2]/temperature, in the light of up-to-date research, does seem to be under your control.
In fact the optimal use of a programme like FUND would seem to be as a framework within which parameterization of elements of the model (like agricultural output/CO2/temperature relationships) can be tested and updated in the light of developing science.
1. That’s what the literature says.
2. A&M confuse the impact of CO2 fertilization with and without N&P limits.
3. This asymmetry exists in principle, but not in practice. Indeed, our PDF is right-skewed rather than left-skewed.
4. A&M misread the documentation: There is a standardized procedure (described in Tol 2002) to bring in new evidence. We in fact omit only one recent study (Hertel and co), but its conclusions are broadly in line with previous work, so there is no reason to believe that the parameters would be much different.
Don’t get me wrong. There are real issues in the literature on the economic impacts of climate change. But these are not among them.
Might be worth noting here that others have taken issue with Mr. Tol’s FUND model, in this case for ranking the value of life/death differently. Sorry for self-linking; the underlying article seems to have disappeared:
http://bigcitylib.blogspot.ca/2011/01/tolgate.html
That issue was solved in
Fankhauser, S., R.S.J. Tol and D.W. Pearce (1997), ‘The Aggregation of Climate Change Damages: A Welfare Theoretic Approach’, Environmental and Resource Economics, 10, 249-266.
Fankhauser, S., R.S.J. Tol and D.W. Pearce (1998), ‘Extensions and Alternatives to Climate Change Impact Valuation: On the Critique on IPCC WG3’s Impact Estimates’, Environment and Development Economics, 3, 59-81.
The solution introduced other undesirable elements, which are solved in
Anthoff, D., C. Hepburn and R.S.J. Tol (2009), ‘Equity Weighing and the Marginal Damage Costs of Climate Change’, Ecological Economics, 68, 836-849.
Anthoff, D. and R.S.J. Tol (2010), ‘On International Equity Weights and National Decision Making on Climate Change’, Journal of Environmental Economics and Management, 60, 14-20.
Mr. Tol: Sorry to be so persistent, but I am still trying to understand.
“The model that underlies Figure 5 has a different structure and different parameters than the problem at hand. It is therefore irrelevant.”
Are you saying that, if one were to similarly measure the SD of a random variable satisfying equation (2) in the A&M paper, the resulting picture would be qualitatively different from their Figure 5 ?
If so, this should be very easy to check. Could you please provide the “right” values of T & A, plus the distribution to use for T^{opt}?
A&M show the behaviour of 1/X, where X is normally distributed.
Our function is Y/(1-X)-YX/(1-X), where X and Y are normally distributed.
Therefore, Fig 5 is an illustration of how a potential division by zero could manifest itself, but Fig 5 is otherwise irrelevant to the argument.
The model parameters can be found at http://www.fund-model.org/
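To see why a normally distributed quantity in a denominator is dangerous, here is a minimal Monte Carlo sketch. The parameters below are illustrative only, not FUND’s actual ones; the point is simply that when the denominator’s distribution puts non-trivial mass near its zero, a small but persistent share of draws explodes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical draws: X ~ Normal(mu, sigma). The factor 1/(1 - X)
# (the form Tol describes above) blows up as draws approach X = 1.
mu, sigma = 0.5, 0.25
x = rng.normal(mu, sigma, size=1_000_000)
ratio = 1.0 / (1.0 - x)

# Share of draws landing within 0.01 of the singularity at X = 1:
near_singular = np.mean(np.abs(1.0 - x) < 0.01)
print(f"share of draws within 0.01 of X=1: {near_singular:.4f}")
print(f"max |1/(1-X)| observed: {np.max(np.abs(ratio)):.1f}")
```

With these made-up parameters, roughly 0.4% of a million draws land within 0.01 of the singularity, and the largest sampled ratios are several orders of magnitude beyond any physically meaningful value — which is the qualitative behavior A&M’s Figure 5 illustrates.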
The above exchanges between Tol and Ackerman are fascinating, but need to be settled empirically. AM seem to ignore all evidence showing higher yields for most crops with both elevated CO2 and warmer temperatures (e.g. FACE results, and the Crimp et al. study commissioned from CSIRO by the Garnaut Review 2008, Table 6.5; that Table shows higher yields for wheat almost everywhere in Australia with CO2 at 550 ppm by 2100 plus concomitant higher temperatures).
I agree that the effect of climate on agriculture is an empirical question. There are a number of studies finding that FACE experiments show lower CO2 fertilization effects than earlier (indoor) research – and there is virtually no CO2 fertilization effect on maize, sorghum, and sugar cane, and a strongly negative effect on cassava yields. Meanwhile, recent work by Schlenker, Lobell, and others shows sharp declines in yield for many crops above a temperature threshold; numerous articles in Nature Climate Change within the last year have documented this relationship for different crops and regions. I’ll be writing more about this in the near future.
Many thanks Frank, and especially for the mention of Schlenker; I was not aware of his very impressive work in this area. I remain to be convinced by his use of temperature data, however. For example, there appears to be no evidence (such as Durbin–Watson statistics) in his paper with Roberts on non-linear effects that they checked for spurious correlations resulting from autocorrelation. The very high t-statistics in the final column of their Table 4 suggest that could well be a problem.
Ever immodest, I’d be interested in any comments you might have on my own paper “Climate Change and Food Production” (2009); it’s at my website http://www.timcurtin.com, where I do check for spurious correlations.
Schlenker’s paper with Lobell on CC and crop yields in Africa probably suffers from the same defect. But possibly even worse is that the two worst performing countries with respect to CC effects on crop yields are said by Schlenker & Lobell to be South Africa and Zimbabwe. In the former around 2,000 well capitalised farmers (white) have been murdered since 1992, and many others expropriated, with their farms taken over by non-white squatters who do not of course have title and thereby no access to capital. Similarly in Zimbabwe since 2000 virtually all its 5,000 well capitalised (white) farmers with title have been expropriated and replaced by untitled squatters (I have direct personal knowledge, see ref. at my website to my co-authored book with David Lea on Land Titling in Papua New Guinea which includes my Appendix on the Land Reform in Zimbabwe). Are yields of dead and expropriated farmers likely to show a rising trend when correlated with AGW?
I spent some of my earlier life working on or financing agriculture in Zimbabwe, Zambia, Sudan, Egypt, and Nigeria. Climate variability could be a problem, yet it was always associated with ENSO, but ongoing secular change, NEVER, no such thing!
I have just posted a comment about this discussion on Judith Curry’s web site: http://judithcurry.com/2012/07/18/climate-models-at-their-limit/#comment-220434
I’ll post my comment in full below for convenience of readers here:
Exchange between Ackerman and Tol
For the past few days there has been an interesting exchange between Frank Ackerman and Richard Tol: http://www.retractionwatch.com/2012/07/09/noteworthy-journal-posts-all-the-corrections-it-wanted-in-a-climate-change-paper-after-authors-refuse-most/#comments . Ackerman says:
My thoughts on this exchange:
1. The exchange was well adjudicated by Professor David Stern.
2. Ackerman’s assertions seem reasonable (but some of his comments suggest an alarmist bias).
3. I also understand why Tol has reacted as he has.
4. Excellent questions and summaries by ‘someone_somewhere’. He appears to be unbiased and expert in the subject area of model validation but not in the specific area of science that this model addresses; his summaries are informative and focused on what is important. He states: “Disclaimer: I am certainly not an expert in ecological economics; merely someone with a general interest in mathematical modeling, who took time to study these materials.”
However, the really important points I take from this debate are:
1. I get the impression Ackerman is an alarmist and he is looking to find faults that make the FUND model give too high benefits and too low costs.
2. He is not looking hard to find faults or errors that would cause the model to produce too high costs and too low benefits.
3. In fact, no one in the Climate Science community is putting much effort into looking at models like this to try to find faults or errors that would cause the model to produce too high costs and too low benefits.
4. So they are being missed or, if identified, not thoroughly investigated with the same enthusiasm to find fault.
5. That is because of the inbuilt bias caused by the massive government funding and group think which supports “the consensus” view of climate science.
The focus of a number of the comments on this thread seems to be that the damage costs are being understated; there is nothing to give a sense that researchers are checking as thoroughly to see if the damage costs may be overstated. Examples in this discussion that illustrate my point:
Pinko Punko @ July 9, 2012 at 8:12 pm
Ad hom. comment. Clearly from a man on a mission; an advocate for “The Cause”.
John Havery Samuel @ July 16, 2012 at 3:13 am
Ad hom. comment. Clearly biased towards alarmism.
bigcitylib @ July 12, 2012 at 7:21 pm
Another ad hom. comment clearly from an Alarmist. Tol addressed this comment.
Frank Ackerman @ July 12, 2012 at 4:26 pm [my highlights]:
This all seems very reasonable at first blush. [Tol responds to some of the key criticisms in several comments] However, my concern is: who is checking from the opposite perspective? Who is checking that the damages are not being overstated in the models? What competent groups are being funded to check that the damage estimates are not being overstated and the costs understated?
Frank Ackerman @ July 16, 2012 at 10:34 am
All these examples point to research that shows some crops will be less productive under high CO2. But is this a balanced comment, or is it a sign of an alarmist bias? What about the main crops that feed the world like wheat and rice? What about our demonstrated ability to improve crop yields to suit conditions? What about the fact that the planet is far more productive when warmer according to the paleoclimate record (IPCC AR4 WG1 Chapter 6). It strains credulity to believe that, while the planet is far better for life when warmer (life thrives when the planet is warmer than now and struggles when colder), that would not be the case if the planet warms due to AGW.
I don’t get the sense from his comments that Ackerman is impartial. I get the impression he is a man on a mission – an advocate for The Cause.
Hmm. I accept my reply was ad hom. But my reply was to omnologos’
“Grab your torch! Grab your pitchfork! RW is endangering the sanctity of climate science!” Perhaps you’re not as studiously neutral as you set out. 🙂
JHS – and I was replying to a “candid” soul offended because Tol’s work is mentioned by…the WRONG PEOPLE…
Excellent discussion, very interesting to read through. I had nothing to add until I read Peter Lang comments. Allow me:
Mr. Lang:
”5. That is because of the inbuilt bias caused by the massive government funding and group think which supports “the consensus” view of climate science.”
It is Earth Observation evidence that is compelling “the consensus.”
Pay attention to the increasing weather extremes and the weirding of the northern Jet Stream we’ve been witnessing these past decades.
Re: “Alarmist”: You know, if my home is on fire I want some alarmists around who are willing to do some shouting and take some action. Much rather them than the impartial witness sitting across the street, cozy in their comfort, drinking beer and watching what’s going to happen, rather than getting involved. ;-)
~ ~ ~ ~ ~ ~ ~
Mr. Lang writes:
“All these examples point to research that shows some crops will be less productive under high CO2. But is this a balanced comment, or is it a sign of an alarmist bias? What about the main crops that feed the world like wheat and rice? What about our demonstrated ability to improve crop yields to suit conditions? What about the fact that the planet is far more productive when warmer according to the paleoclimate record (IPCC AR4 WG1 Chapter 6). It strains credulity to believe that, while the planet is far better for life when warmer (life thrives when the planet is warmer than now and struggles when colder), that would not be the case if the planet warms due to AGW.”
~ ~ ~
It strains credulity to compare today’s intense cultivation with the rampant jungles of previous eons.
Further it strains credulity to so focus on controlled experiments with CO2 and plants where all other factors are controlled and optimized.
What about the increasingly severe incidents of monster heat waves, droughts and torrential downpours? What about the disruption of age-old patterns of synchronized timing between seasonal temperatures, pollinators, and rainfall? What about the social disruptions that will have their rippling impacts on intensively managed farms?
It strains credulity that folks think massive climate shifts can occur without massive disruption for all species that have established themselves under a completely different climate regime.
Nordhaus explains why the post-processing trimming does not fix the div0 issue:
http://frankackerman.com/Tol/Nordhaus_comment_on_Tol.pdf
It really was a very serious bug.
Thank you for posting this link! It confirms that my initial suspicions about their model were valid.
The EPA has recalculated the Social Cost Of Carbon estimates using the DICE, FUND and PAGES IAM models.
This is from the May 2013 (revised November 2013) “Technical Support Document: Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866”, by the Interagency Working Group on Social Cost of Carbon, United States Government.
Under the section “Summary of Model Updates” which canvassed changes to the IAM models since the 2010 calculation we have the following.
“In FUND, the damages associated with the agricultural sector are measured as proportional to the sector’s value. The fraction is bounded from above by one and is made up of three additive components that represent the effects from carbon fertilization, the rate of temperature change, and the level of the temperature anomaly. In both FUND 3.5 and FUND 3.8, the fraction of the sector’s value lost due to the level of the temperature anomaly is modeled as a quadratic function with an intercept of zero. In FUND 3.5, the coefficients of this loss function are modeled as the ratio of two random normal variables. This specification had the potential for unintended extreme behavior as draws from the parameter in the denominator approached zero or went negative. In FUND 3.8, the coefficients are drawn directly from truncated normal distributions so that they remain in the range [0, ∞) and (−∞, 0], respectively, ensuring the correct sign and eliminating the potential for divide by zero errors. The means for the new distributions are set equal to the ratio of the means from the normal distributions used in the previous version. In general the impact of this change has been to decrease the range of the distribution while spreading out the distributions’ mass over the remaining range relative to the previous version. The net effect of this change on the SCC estimates is difficult to predict.”
http://www.whitehouse.gov/sites/default/files/omb/assets/inforeg/technical-update-social-cost-of-carbon-for-regulator-impact-analysis.pdf
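The fix described in the TSD quote above can be sketched in a few lines. This is an illustrative toy, not the actual FUND 3.8 code or its parameters: the old scheme draws a coefficient as the ratio of two normals (which can explode as the denominator nears zero), while the new scheme draws it directly from a sign-constrained truncated normal whose mean equals the ratio of the old means.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(mu, sigma, lo, hi, size, rng):
    """Sample Normal(mu, sigma) truncated to [lo, hi] by rejection.
    A simple sketch; production code would use e.g. scipy.stats.truncnorm."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(mu, sigma, size=size)
        keep = draw[(draw >= lo) & (draw <= hi)]
        n = min(keep.size, size - filled)
        out[filled:filled + n] = keep[:n]
        filled += n
    return out

# Made-up parameters for illustration only.
# Old scheme (FUND 3.5 style): coefficient = ratio of two normals.
num = rng.normal(2.0, 0.5, 100_000)
den = rng.normal(1.0, 0.4, 100_000)
old_coef = num / den                  # explodes as den -> 0, can flip sign

# New scheme (FUND 3.8 style): draw directly from a truncated normal on
# [0, inf), mean set to the ratio of the old means (2.0 / 1.0 = 2.0).
new_coef = truncated_normal(2.0, 1.0, 0.0, np.inf, 100_000, rng)

print(f"old: max |coef| = {np.max(np.abs(old_coef)):.1f}")
print(f"new: max coef = {np.max(new_coef):.2f}, min coef = {np.min(new_coef):.2f}")
```

The truncated draws keep the correct sign and a bounded, well-behaved range, at the cost of a slightly reshaped distribution — consistent with the TSD’s remark that the net effect on the SCC estimates is hard to predict.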
Once again, while all of this controversy is fascinating from a mathematical and statistical point of view, it is absolutely irrelevant to settling the foundational question of climate change study: that is, What percentage of climate change is being caused by humans and therefore realistically reversible by humans? This is the main sticking point of controversy between climate change “believers” and “deniers”.
Virtually no reasonable person denies that climate change is taking place. Of course it is, as it always has been.
The crux of the controversy, what remains unproven by either side, is what is causing it? If the “believers” could provide incontrovertible evidence it is predominantly or even significantly man-caused, then perhaps we could agree that ongoing research dollars should be focused on how we can reverse this man-caused contingent; or at least we could agree on studying whether the effects outweigh the benefits, which is the center of this posting’s controversy. This is the only scenario in which it makes sense to spend precious assets on studying the costs of climate change; i.e., if you can’t realistically stop it or slow it down significantly, who cares how much it costs? It is what it is.
Similarly, if “deniers” could provide incontrovertible evidence that the proportion of climate change that is truly man-caused is virtually insignificant, then perhaps both sides could agree that future research efforts should be focused on how best to deal with the coming (likely) changes. Because if this is the case, as many intelligent and knowledgeable people believe, then we may as well abandon efforts at trying to turn it around.
In the absence of irrefutable scientific proof, both sides currently reside firmly in the area of faith. The faith of the “believers” is, I believe, inherently rooted in humanism. The belief goes something like this: If people put their best thinking and efforts together, there is nothing we can’t accomplish. Therefore, it doesn’t truly matter whether or not climate change is primarily man-caused. If we spend enough money on it, we can reverse its present course, or even its presumptive future course.
The faith of the “deniers” is implicitly rooted in the idea that there is a greater power (or powers) at work in the universe than all of man’s powers combined. Deniers are thus naturally skeptical that the activities of man can be primarily at fault for climate change. They tend to believe that to the degree climate change may be occurring, it is more than likely naturally-caused and therefore virtually irreversible despite the best efforts of men. They therefore take the approach that if you are going to spend precious assets, assets belonging to the People, in trying to reverse it, you must first prove that it is man-caused and thus at least theoretically man-reversible.
Those deniers who have studied climate change, at least on the surface of it, see a remarkable correlation between sunspot activity and evidence of past climate changes. They deem this too great a coincidence to ignore. They also notice that while studies of ice core samples and tree rings, etc., seem to demonstrate a correlation between atmospheric CO2 concentrations and temperatures, the periods of high temperatures seem to precede the periods of high CO2 concentrations. Which, of course, is the reverse of what the “believers” hoped to prove with said studies.
So what deniers are essentially saying to believers is that if you want us to jump into high alarmist mode, as we think you are in, you must PROVE to us that there is some benefit to our doing so. Prove to us that you aren’t simply a lot like Chicken Little, or the Boy Who Cried “Wolf”. Prove to us that the Climate Accords are not simply a scheme to redistribute wealth from industrialized 1st World nations to developing 2nd and 3rd World countries.
And no, statements like, “It’s settled science” aren’t going to solve this dilemma, because as hopefully you can now see, they are akin to statements like, “Just take our word for it, because we really know better than you,” and “You really need to just take it on faith.” For as long as “science” continues to be overrun by outright fraud, as these retractions are proving, it isn’t going to engender a whole lot of faith from “deniers”.