
17 - Uncertainty

from Part IV - Processes

Published online by Cambridge University Press:  08 December 2022

Edited by Kari De Pryck, Université de Genève, and Mike Hulme, University of Cambridge

Book: A Critical Assessment of the Intergovernmental Panel on Climate Change
Chapter DOI: https://doi.org/10.1017/9781009082099.022


Publisher: Cambridge University Press
Print publication year: 2022

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

Overview

In reports of the Intergovernmental Panel on Climate Change (IPCC), calibrated language is used to communicate confidence and/or agreement in claims. This language is highly specialised and has developed over time to account for diverse sources of knowledge and types of agreement. Currently, the IPCC uses two typologies for calibrated language – a qualitative confidence scale that assesses the amount of evidence and the degree of expert agreement about that evidence; and a more quantitative scale that measures and expresses uncertainty. IPCC leadership intends for calibrated language to help make their reports scientifically clearer, although the resulting stylised language raises readability challenges. Calibrated IPCC language is also used, cynically, as a diplomatic tool during the report adoption plenaries of the Panel, as government delegates raise questions about the characterisation of climate facts. Uncertainty language in the IPCC, then, signifies both technical advancement in the characterisation of uncertainty and the challenges of communicating climate science in diverse contexts.

17.1 Introduction

There is no uncertainty here, or very little. It is at most an alibi.

Jean-Pierre Dupuy (2012: 586).

In his article, beautifully titled ‘The precautionary principle and enlightened doomsaying’, French philosopher Jean-Pierre Dupuy gets right to the heart of why IPCC authors spend countless hours of volunteer labour poring over their uncertainty language, carefully calibrated with their chapter group of expert assessors, ensuring that the terms align with the research at hand and the guidance provided by the IPCC. It is work of care and standardisation: precise depictions of what is known and what isn’t, what has been fully investigated and what is emergent as a topic of research, and where and how experts agree about climate science. Uncertainty, in IPCC documents, emerges from managed, calibrated epistemic and authorial processes – processes that the IPCC has taken up with incredible technocratic enthusiasm.

Calibrated language in IPCC reports, specifically in their Summaries for Policymakers (SPMs), is intended to provide scientific clarity and precision to the text. However, this is often perceived to be at the expense of readability, particularly among lay people or non-expert decision makers, as the IPCC seeks to expand its audience beyond environmental ministries (Barkemeyer et al., 2016). The highly stylised language, requiring specific knowledge to comprehend, is a barrier to accessing climate information in IPCC reports.

Uncertainty is also an alibi. Using uncertainty language offers an alternative to providing statements of fact, which may take scientists decades or even centuries (if ever) to come to agreement upon. Formal uncertainty language helps shade in details about knowledge that always comes in the form of ranges of possible future outcomes – like climate modelling – or knowledge that is partial, underway, incomplete or currently in a state of some expert disagreement. Uncertainty allows for plausible hedging. It is protective and, like many scientific dispositions, it is conservative, offering ranges that may narrow or widen as more becomes known.

Uncertainty is also used cynically. Dupuy contrasts epistemic uncertainty with the uncertainty of random variables in life. Scientists know this well, characterising different types of uncertainty according to how it is generated – through computer models or through conflicts in expert agreement, to name two examples. But climate contrarians have regularly taken scientists’ deployment of uncertainty to underscore what is not known, suggesting that action cannot be taken until knowledge is complete. This approach does not hold up well in the face of the overwhelming evidence of human-caused climate change.

Instead, the cynical approach to uncertainty is used in increasingly sophisticated ways, including using the careful process of calibrated language in IPCC reports as a tactic for stalling and derailing the adoption of these reports at a political level, as described later. But first, this short chapter will provide an overview of the history and typology of uncertainty language before examining a case study of political re-interpretations of IPCC uncertainty guidance.

17.2 A Brief History and Typology of Calibrated Language in the IPCC

Scientific uncertainty is a means for communicating precision in ranges of outcomes. There are two main types of uncertainty – model uncertainty and socially derived uncertainty – the latter further encompassing conflict uncertainty, judgement uncertainty and ethical uncertainty. Model uncertainty can reflect parameters of climate models or the structural uncertainty inherent in making decisions about the code (Funtowicz & Ravetz, 1990; Draper, 1995; Patt, 2007). Conflict uncertainty (Patt, 2007) is generated by subjective, expert disagreement relating to how to interpret evidence. Judgement uncertainty (O’Reilly et al., 2011), like conflict uncertainty, is socially derived and is generated through the cultural specificity of the group of experts charged with assessing information. The IPCC is just beginning to consider the ethical implications of model choices as Integrated Assessment Models (IAMs) continue to gain power as epistemic and political tools (see Chapter 15). The IPCC communicates model uncertainty and, to an extent, conflict uncertainty. The social act of performing the assessment creates judgement uncertainty, which the IPCC generally does not assess.

The history of uncertainty treatment in the IPCC originates with attempts to standardise the communication of model uncertainty and, over time, develops into more elaborate devices to calibrate socially derived uncertainty (see also Swart et al., 2009). In the First Assessment Report (AR1), only Working Group I (WGI) used uncertainty language, and this hewed closely to quantitative, probabilistic statements familiar to earth systems modellers. In the Second Assessment Report (AR2), WGII included qualitative confidence statements in their Executive Summaries as well (Mastrandrea & Mach, 2011: 661). AR3 provided the first attempt to standardise IPCC approaches to assessing and communicating uncertainty, although this was only picked up by WGI and WGII (for an insider’s account, see Petersen, [2006] 2012). Moss and Schneider (2000) wrote the guidance document, a wide-ranging article that offered advice on how to match a style of uncertainty communication with the type of uncertainty being assessed. This guidance was applied interpretively, chapter by chapter, as the expert authors iterated on the guidance to suit the publications they assessed. While this makes intuitive sense from a scholarly perspective, it did not help the report’s readers more clearly understand the information assessed. AR4 leadership worked towards a more systematised approach, at least within – and, for the first time, across – all WGs.

AR4 was written with a four-page guidance document for calibrating uncertainty (IPCC, 2005). This note built upon Moss and Schneider’s advice, along with the substantial proceedings of a 2004 IPCC workshop titled ‘Describing Scientific Uncertainties in Climate Change to Support Analysis of Risk and of Options’ (Manning et al., 2004). Each of the three WGs could, in essence, choose one of several approaches to calibrating their confidence language depending on the epistemic traditions of their WG, including how best to communicate uncertainty for the type of literature generally assessed (IPCC, 2005). The InterAcademy Council (IAC) review, an independent assessment of the IPCC, took place after the plaudits and scandals emerging from AR4 (see Chapters 3 and 6). An entire chapter of its report is devoted to ‘IPCC’s evaluation of evidence and treatment of uncertainty’ (IAC, 2010: 27). After analysing the three different WG uncertainty standards, the IAC review authors found the WGIII approach – using a qualitative level-of-understanding scale describing the amount of evidence available and the degree of agreement among experts – ‘convenient’, and recommended that it become the standard across all three WGs, supplemented with quantitative uncertainty judgements when possible (IAC, 2010: xiv–xv).

The IPCC took this advice into account for AR5 and AR6, both of which used the AR5 guidance note on uncertainty, although they have elaborated their approach slightly. This is because, while ‘level-of-understanding’ language can help readers understand the knowledge basis that guides the authors’ judgement, this language does not communicate their likelihood assessment. Authors communicate their qualitative level-of-understanding and then – depending on the type of knowledge being assessed – calibrate their assessment with formalised, qualitative confidence language or quantified uncertainty language. Specifically, the guidance note (Mastrandrea et al., 2010) instructs authors to (a schematic sketch of these steps follows the list):

  1. ‘evaluate the validity of a finding’: type, amount, quality, consistency of evidence

  2. If high agreement and robust evidence, do one of the following:

    a. Qualitative level of confidence based on author judgement (very low, low, medium, high, very high) (Mastrandrea et al., 2010: 2)

    b. Quantitative measure of uncertainty (virtually certain, very likely, likely, about as likely as not, unlikely, very unlikely, exceptionally unlikely) (Mastrandrea et al., 2010: 2)

      i. Statistical analysis to determine probability distribution

      ii. Alternately, a formal, quantitative survey of expert views can determine probability distribution (Mastrandrea et al., 2010: 4)
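
Rendered schematically, the decision flow above reduces to a simple branching rule. The following Python sketch is purely illustrative: the function name, input vocabularies and return strings are assumptions of this rendering, not part of the guidance note, and ‘probability’ stands in for a distribution obtained via statistical analysis or a formal expert survey.

```python
from typing import Optional

def calibration_track(evidence: str, agreement: str,
                      probability: Optional[float] = None) -> str:
    """Suggest which calibrated-language track applies to a finding.

    evidence:  'limited' | 'medium' | 'robust'
    agreement: 'low' | 'medium' | 'high'
    """
    if evidence == 'robust' and agreement == 'high' and probability is not None:
        # Step 2b: quantified likelihood language is available when agreement
        # is high, evidence is robust and a probability can be defended.
        return f'likelihood language (e.g. p = {probability:.2f})'
    # Step 2a, and the default below that threshold: qualitative confidence
    # language based on author judgement (very low .. very high).
    return 'confidence language'

# For example: calibration_track('robust', 'high', probability=0.93)
# -> "likelihood language (e.g. p = 0.93)"
```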

The AR5 (and AR6) uncertainty guidance included a figure and a table to help visualise the ranges of uncertainty, along with the appropriate calibrated language. The qualitative ‘confidence scale’ combines the level-of-understanding along axes of evidence and agreement (Figure 17.1). Confidence, because it is a collective judgement by the authors about the state of the literature being assessed, can be evaluated even when evidence is limited if existing literature is generally in agreement (Mastrandrea et al., 2011: 679).

Figure 17.1 Confidence scale comparing evidence and agreement.

Adapted from Figure 1 in Mastrandrea et al., 2010: 3
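
Figure 17.1’s two axes invite a lookup-table reading. The sketch below is a hedged approximation: the published figure depicts confidence as a gradient increasing towards high agreement and robust evidence, so the fixed cell values here are assumptions of this illustration rather than the IPCC’s own mapping.

```python
# (evidence, agreement) -> an assumed confidence level; illustrative only.
CONFIDENCE_MATRIX = {
    ('limited', 'low'): 'very low',   ('limited', 'medium'): 'low',
    ('limited', 'high'): 'medium',
    ('medium',  'low'): 'low',        ('medium',  'medium'): 'medium',
    ('medium',  'high'): 'high',
    ('robust',  'low'): 'medium',     ('robust',  'medium'): 'high',
    ('robust',  'high'): 'very high',
}

def confidence(evidence: str, agreement: str) -> str:
    """Look up an (assumed) confidence level for an evidence/agreement pair."""
    return CONFIDENCE_MATRIX[(evidence, agreement)]

# e.g. confidence('limited', 'high') -> 'medium', echoing the point above that
# confidence can be evaluated even when evidence is limited.
```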

When the information at hand allows assessors to make quantitative judgements about uncertainty, another scale of language is used to describe likelihood. Figure 17.2 helps authors align their probabilistic assessment with likelihood language, articulating the numerical range behind the prose. The table’s footnote suggests that some additional likelihood terms from AR4 can be carried forward if that probabilistic estimate is more appropriate for the assessment.

Figure 17.2 Likelihood scale matching terms to probability ranges.

Adapted from Table 1 in Mastrandrea et al., 2010
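
The likelihood terms and their probability ranges (per Table 1 in Mastrandrea et al., 2010) can be expressed as a small lookup. Because the ranges deliberately overlap – ‘very likely’ spans 90–100% while ‘likely’ spans 66–100% – this sketch returns the narrowest applicable term; that tie-breaking rule is an assumption of the illustration, not the guidance note.

```python
# AR5 likelihood scale: (term, lower bound, upper bound), probabilities in [0, 1].
LIKELIHOOD_SCALE = [
    ('virtually certain',      0.99, 1.00),
    ('very likely',            0.90, 1.00),
    ('likely',                 0.66, 1.00),
    ('about as likely as not', 0.33, 0.66),
    ('unlikely',               0.00, 0.33),
    ('very unlikely',          0.00, 0.10),
    ('exceptionally unlikely', 0.00, 0.01),
]

def likelihood_term(p: float) -> str:
    """Return the narrowest AR5 likelihood term whose range contains p."""
    candidates = [(hi - lo, term) for term, lo, hi in LIKELIHOOD_SCALE
                  if lo <= p <= hi]
    return min(candidates)[1]

# e.g. likelihood_term(0.95) -> 'very likely'
#      likelihood_term(0.50) -> 'about as likely as not'
```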

Figure 17.1 and the table in Figure 17.2 function as devices for IPCC authors to align their assessment with the calibrated language expected in the full assessment reports. Along with these reference tools, IPCC authors working on internal IPCC documents – such as uncertainty guidance – often publish more conceptual versions of their IPCC-adjacent work in peer-reviewed journals. This allows for additional scrutiny, as well as ensuring that their work is attributable (read: citable) to a broader audience than the universe of IPCC authors (see Manning et al., 2004 and Mastrandrea et al., 2011 for key examples related to IPCC uncertainty). For AR5, the IPCC guidance note – with its clear, stepwise, user-guide style – contrasts with the scholarly style of the lengthier concept paper, although the content remains consistent. Significantly, the peer-reviewed concept paper lays out the importance of creating a ‘traceable account’ of all uncertainty statements in the IPCC, moving from individual chapters through to the SPM and Technical Summary. Such traceable accounts are important for rigour and precision, as well as for assisting those representing the IPCC at the approval plenaries (Mastrandrea et al., 2011). Additionally, these publications provide guidance for assessing uncertainty related to ‘key findings’, which suggests that key findings should be those with robust evidence and agreement along with relatively high levels of confidence or likelihood.
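
To make the ‘traceable account’ idea concrete, one might imagine each calibrated SPM statement carrying pointers back to its underlying assessment. The data structure below is an illustrative assumption, not an IPCC format; the example statement paraphrases a finding discussed later in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CalibratedStatement:
    text: str                 # the finding as worded in the SPM
    term: str                 # e.g. 'virtually certain' or 'high confidence'
    source_section: str       # where the full, traceable assessment lives
    evidence: List[str] = field(default_factory=list)  # key assessed material

statement = CalibratedStatement(
    text='The land surface will continue to warm more than the ocean surface.',
    term='virtually certain',
    source_section='WGI chapter assessment',   # assumed pointer, illustrative
    evidence=['model projections', 'observed warming trends'],
)
```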

This process has become increasingly legible, transparent and standardised. But the fact that an IPCC ‘key finding’ must be adorned with varying linguistic levels of uncertainty further relegates knowledge that cannot be treated in this way to findings that are not, by default, ‘key’. Adler and Hirsch Hadorn (2014) note several critiques about scholarship that is either difficult or impossible to calibrate. This includes scholarship coming from models of linear expertise (Beck, 2011a), the small-scale, holistic studies that are the hallmark of anthropology (Bjurström & Polk, 2011), and the interpretive social sciences, which are largely excluded (Hulme & Mahony, 2010). The totalising demands of the IPCC’s uncertainty language marginalise entire forms of knowledge and sets of disciplinary expertise, while at the same time painting over the knowledge that is assessed with a veneer of completeness and authority.

What is thereby left out of the IPCC’s ‘key findings’ may well be knowledge that is essential to understanding how to survive the climate crisis. For example, this might be knowledge from the interpretive social sciences that reveals the possibilities or barriers to behavioural, political and cultural change in different contexts, or Indigenous knowledge or otherwise marginalised knowledge (see Chapter 13). What is left in the text is often conceptually vague, either because of slippage in the use of formal uncertainty language or else due to more fundamental misunderstandings. Aven and Renn (2015) note that the conceptual and theoretical underpinnings of risk and uncertainty in IPCC reports remain unclear, even as guidance over the assessment process has become more directed and authors attempt to take a more standardised approach to calibrated language.

The requirement for deploying rigid uncertainty language raises another concern about what is left out of IPCC texts. This is the way the IPCC communicates low-probability, high-risk events, such as the rapid disintegration of the polar ice sheets (O’Reilly et al., 2012). In the case of the ice sheets, AR4 authors did not include assessments of ‘rapid dynamical flow’, although they noted the exclusion – a wispy, flagging gesture towards a serious conflict in this part of the report (see Chapter 12 and Box 12.1). For AR5, uncertainty guidance included encouragement to consider such events, exhorting author teams to ‘provide information of the range of possible outcomes, including the tails of distributions of variables relevant to decision-making and risk management’ (Mastrandrea et al., 2011: 681). Building levees that address the middle-range projections for sea-level rise is very different from building levees that account for the higher-end projections.
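
A toy calculation shows why the tails matter for decisions like levee design. The distribution below is entirely assumed for illustration – these are not IPCC projections – but the gap between the median and the 95th percentile is the gap between the two levees just described.

```python
from statistics import NormalDist

# Assumed, illustrative distribution of sea-level rise by 2100 (metres).
rise = NormalDist(mu=0.6, sigma=0.25)

median   = rise.inv_cdf(0.50)   # middle-range projection: 0.60 m
high_end = rise.inv_cdf(0.95)   # higher-end (95th percentile): ~1.01 m

print(f'design to median: {median:.2f} m; design to tail: {high_end:.2f} m')
```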

The users of the IPCC reports – who are sometimes framed as decision-makers, other times as ‘consumers’ of the products of the reports – also have diverse lenses through which they read these apparently clear words. In their literature review categorising IPCC approaches to and critiques of uncertainty, Adler and Hirsch Hadorn (2014: 669) included a box about end users titled ‘pluralism of epistemic standards and values of users’. These different standards and values become apparent in the example offered in the following section, as the WGI AR6 report travelled to the report adoption IPCC plenary in July and August 2021.

17.3 When Calibration Veers Off Course: Political Re-interpretations of Uncertainty

Comparing the AR4 and AR5 reports, the use of calibrated language in IPCC texts increased in both frequency and diversity across the three scales: evidence/agreement, confidence and likelihood (Mach et al., 2017). Janzwood (2020) extended this analysis to note further increases in calibrated uncertainty language use in the Special Reports of the AR6 cycle. Mach et al. – a team of experts who have held roles as IPCC authors, co-chairs, advisors, and Technical Support Unit staff – are clear about the goals underpinning the more sophisticated set of AR5 uncertainty guidance: ‘a harmonised, more broadly applicable approach, enabling consistent treatment of uncertainties in communicating the assessed state of knowledge’ (Mach et al., 2017: 3). However, I additionally suggest that the increase in confidence language is not just about institutional decision-making by the IPCC and increased maturity in uncertainty guidance for the authors. It is also in anticipation of – or in direct response to – governmental requests for clarification at the approval plenary stage.

IPCC reports undergo several rounds of expert and government review before taking a final step at the report adoption plenary (see Chapter 11). At this meeting, the governments that form the Panel approve the SPM, sentence by sentence. Delegates come to the meetings ready to intervene on matters of style and substance. Unsurprisingly, these interventions often take up matters of national interest. These interests range from assuring the correct scientific representation of climate change, to raising concerns about due process, to acting as an upholder of particular scientific values, to ensuring that scientific understanding accelerates the energy transition from petroleum to renewables – or not (see De Pryck, 2021a).

Over the course of the AR6 WGI report adoption plenary – conducted virtually from 26 July to 6 August 2021 – particular delegates offered interventions that were important for their countries to communicate. These interventions became dependable and predictable in their repetition. One illustrative example was the Saudi Arabian delegate’s interventions on calibrated language, the characterisation of statements of facts, and ensuring that textual statements could be associated with quantification. In their opening statement, as reported by the Earth Negotiations Bulletin (ENB), ‘SAUDI ARABIA pointed to instances in the report where non-calibrated language is used, and called for clarifying uncertainties relating to the use of models and projections’ (ENB, 2021: 3). Additional, representative interventions along these lines (and the outcomes) in the ENB report include:

SAUDI ARABIA objected to “unequivocally.” Delegates noted compromise on explicit reference to warming of “atmosphere, ocean, and land,” rather than “climate system,” as these are unequivocally associated with human influence. INDIA opined that human influence has varying levels of confidence and likelihood across the three. Co-Chair Masson-Delmotte said this is a statement of fact and the authors concurred. After some discussion, SAUDI ARABIA accepted the compromise formulation with a small editorial change and the Headline Statement was approved.

(ENB, 2021: 4, regarding A.1)

SAUDI ARABIA preferred to keep “main,” arguing “dominant” is not IPCC-calibrated language and that “more than 50%” refers to something being “likely” rather than “very likely.” The paragraph was approved with no further amendments.

(ENB, 2021: 5, regarding A.1.3)

Saudi Arabia was not the only delegation at the approval Plenary to intervene about calibrated language and quantification, but they were the most persistent. Nor was this topic the only point that they brought into the Plenary. Their delegation used quantification – and the IPCC’s turn towards increasing quantification – as a means for slowing down the proceedings, as well as for raising doubts about the validity of statements if they were not easily translated into quantifiable – or quasi-quantifiable – prose. As evident from the statements above, sometimes the suggested changes were incorporated into the SPM, sometimes the authors conferred and made small edits, and sometimes the authors explained their rationale and the original wording was accepted. And sometimes others – like WGI co-chair Dr. Masson-Delmotte – benchmarked the language against past practice or against the broader narrative of the report.

While the Saudi Arabian delegation regularly urged deletion of text if their concerns could not be quantified, or at least clarified, the German delegation worked on calibrated language from the other end of the spectrum. That is, several times a German representative noted that statements of fact did not need calibrated language attached. For example:

GERMANY asked why the first sentence states it is “virtually certain” that the land surface will continue to warm more than the ocean surface rather than a statement of fact. The authors clarified it is not a statement of fact because the assessment concludes that, in the near term or for low levels of global warming, internal variability can be high and temporarily mask warming.

(ENB, 2021: 12, regarding B.2.1)

The push-and-pull over uncertainty language at the low- and high-uncertainty ranges shows that the edges of uncertainty continue to matter as points of political and scientific import. The removal of calibrated language represents the point at which a claim becomes fact. The inclusion of low-likelihood, but high-impact, information becomes a point of policy relevance, even policy demand, even as the scientific information at hand remains unresolved. Janzwood (2020) notes that authors know that these critiques are imminent at the adoption Plenary and may consider leaving out information in anticipation of the debate that might ensue at the political level. In an interview Janzwood conducted with an IPCC author, this author noted that being made to consider levels of confidence when elevating statements to the level of the SPM constitutes a ‘reality check’ (Janzwood, 2020: 1666). Authors decide in advance that some matters are too politically contentious to withstand the scrutiny and the slowing of the approval process, choosing some statements to defend and others to remain only in the main report or the Technical Summary, both of which receive less scrutiny.

17.4 Achievements and Challenges

As an institution, the IPCC has demonstrated enthusiasm for creating and implementing increasingly sophisticated means for calibrating uncertainty language. Most importantly, the move towards systematising qualitative information has encouraged trust and comparison between the quantifiable and probabilistic findings typical of the natural and physical sciences and forms of knowledge coming from different disciplinary traditions. In AR6, the concept of risk was also scaled up into rubrics, decision matrices and standardised, calibrated language, building on the apparent success of IPCC uncertainty guidance (Reisinger et al., 2020). IPCC authors, leadership and staff regularly convey that this elaboration of uncertainty guidance reflects community values within the IPCC – values including traceability, transparency, professionalism, rigour and care.

However, this trajectory has some sticking points, both epistemic and political. In terms of knowledge, the process of standardising calibrated language, even as it seeks to be more inclusive of diverse methodological and disciplinary traditions, excludes some forms of knowledge that don’t easily adhere to formal uncertainty calibration due to their descriptive or interpretive nature. As an in-language requiring initiation, frequent use of formal uncertainty calibration can also alienate the audience that the IPCC hopes to engage. And in diplomatic spaces, the rhetoric of precision can be cynically deployed to slow the approval process, obfuscate or remove findings that cannot clear the language bar, or else engender debates about the nature of scientific facts.

References

Three Key Readings

Adler, C. E. and Hirsch Hadorn, G. (2014). The IPCC and treatment of uncertainties: topics and sources of dissensus. Wiley Interdisciplinary Reviews: Climate Change, 5(5): 663–676. http://doi.org/10.1002/wcc.297. This article lends critical purchase to the epistemic commitments and absences in IPCC uncertainty communication practices.

Mach, K. J., Mastrandrea, M. D., Freeman, P. T. and Field, C. B. (2017). Unleashing expert judgement in assessment. Global Environmental Change, 44: 1–14. http://doi.org/10.1016/j.gloenvcha.2017.02.005. This article, written by IPCC authors and leaders, demonstrates the pragmatic and inclusive state of uncertainty thinking within the institution.

Swart, R., Bernstein, L., Ha-Duong, M. and Petersen, A. (2009). Agreeing to disagree: uncertainty management in assessing climate change, impacts and responses by the IPCC. Climatic Change, 92(1): 1–29. http://doi.org/10.1007/s10584-008-9444-7. This article provides a comprehensive historical overview of the development of uncertainty guidance in the IPCC through AR4.
