
Majorization and randomness measures

Published online by Cambridge University Press:  16 October 2024

K. Nidhin*
Affiliation:
Government Arts and Science College, Calicut
*Postal address: Department of Statistics, Government Arts and Science College, Calicut, Kerala, India. Email address: [email protected]

Abstract

A series of papers by Hickey (1982, 1983, 1984) presents a stochastic ordering based on randomness. This paper extends the results by introducing a novel methodology to derive models that preserve stochastic ordering based on randomness. We achieve this by presenting a new family of pseudometric spaces based on a majorization property. This class of pseudometrics provides a new methodology for deriving the randomness measure of a random variable. Using this, the paper introduces the Gini randomness measure and states its essential properties. We demonstrate that the proposed measure has certain advantages over entropy measures. The measure satisfies the value validity property, provides an adequate extension to continuous random variables, and is often more appropriate (based on sensitivity) than entropy in various scenarios.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Muirhead [Reference Muirhead27] initiated the work on majorization, and the book by Hardy, Littlewood, and Pólya [Reference Hardy, Littlewood and Pólya13] provides an excellent foundation for the subject. Applications of majorization form a significant research area; Arnold and co-workers [Reference Arnold3], [Reference Arnold and Sarabia4], and [Reference Marshall, Olkin and Arnold26] provide a good survey of this area. Hickey’s papers [Reference Hickey14]–[Reference Hickey16] show an important application of the majorization property to the randomness ordering of probability mass functions (PMFs) and probability density functions (PDFs); see also [Reference Joe17], [Reference Joe18], and [Reference Marshall, Olkin and Arnold26]. These papers establish the role of majorization in measuring randomness, but they provide only an ordering and do not discuss models for measuring uncertainty.

The first notable measure of randomness in classical probability theory is the Shannon entropy (see [Reference Klir20]), which satisfies the majorization criteria (see [Reference Hickey14]). Many researchers have since introduced different entropy measures, including Rényi entropy and Kullback–Leibler divergence (see [Reference Kullback and Leibler22] and [Reference Ribeiro, Henriques, Castro, Souto, Antunes, Costa-Santos and Teixeira29]). The entropy measures enjoy many properties that are well known in the literature (see [Reference Baumgartner6], [Reference Csiszár9], [Reference Gour and Tomamichel12]). However, entropy measures also have some limitations. Kvålseth [Reference Kvålseth24] explains that entropy measures do not satisfy the value validity property, and extensions of entropy to continuous random variables are subject to certain restrictions (see [Reference Ribeiro, Henriques, Castro, Souto, Antunes, Costa-Santos and Teixeira29, page 5]). Sensitivity is another consideration when choosing an entropy as a randomness measure; Shannon entropy, for example, is sensitive to the ratio of probabilities (see Section 4 for a detailed discussion).

This paper provides a methodology for deriving randomness measures that preserve the majorization ordering, using a pseudometric-based approach. For this, we study a new class of pseudometric spaces that characterizes a majorization property. This novel class of pseudometrics, established on the space of probability distributions, is used to construct randomness measures of random variables. We also show that a measure developed using this methodology overcomes the limitations of entropy noted above.

2. A new family of pseudometric spaces

This section explains a class of pseudometrics on $\mathbb{R}^n$ which preserves majorization ordering. Let $\tilde{\mathbf{x}}=(x_1,x_2,\ldots,x_n)\in \mathbb{R}^n$ ; in particular, $\tilde{\mathbf{x}}_i=(x_{i1},x_{i2},\ldots,x_{in})$ , where $i\in \mathbb{N}$ , and $\mathbf{a}^*=(a,a,\ldots,a)$ , $\mathbf{a}^{**}=(a,0,0,\ldots,0)$ for $a\in \mathbb{R}$ . We start with the concept of majorization in $\mathbb{R}^n$ (see [Reference Marshall, Olkin and Arnold26]).

Definition 2.1. Let $\tilde{\mathbf{x}},\tilde{\mathbf{y}}\in\mathbb{R}^n$ ; then $\tilde{\mathbf{x}}$ is majorized by $\tilde{\mathbf{y}}$ (denoted by $\tilde{\mathbf{x}}\prec\tilde{\mathbf{y}}$ ) if

\begin{equation*}\begin{cases}\sum_{i=1}^{k}x_{[i]}\leq \sum_{i=1}^{k}y_{[i]} & \text{for all $k=1,2,\ldots,n-1$,}\\[3pt] \sum_{i=1}^{n}x_{[i]} = \sum_{i=1}^{n}y_{[i]} , &\end{cases}\end{equation*}

where $(x_{[1]}, x_{[2]},\ldots,x_{[n]})$ denotes the decreasing rearrangement of $(x_1,x_2,\ldots,x_n)$ .
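
For readers who want to experiment numerically, the following is a minimal sketch (ours, not part of the paper) of a majorization check implementing Definition 2.1; the function name `majorizes` is our own.

```python
import numpy as np

def majorizes(y, x, tol=1e-12):
    """Return True if x is majorized by y (x ≺ y) in the sense of Definition 2.1."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # decreasing rearrangement of x
    y = np.sort(np.asarray(y, dtype=float))[::-1]   # decreasing rearrangement of y
    if x.shape != y.shape:
        raise ValueError("vectors must have the same length")
    cx, cy = np.cumsum(x), np.cumsum(y)
    # partial sums of x never exceed those of y, and the total sums agree
    return bool(np.all(cx[:-1] <= cy[:-1] + tol) and abs(cx[-1] - cy[-1]) <= tol)

# (1/4, 1/4, 1/4, 1/4) ≺ (1/2, 1/2, 0, 0) ≺ (1, 0, 0, 0)
print(majorizes([0.5, 0.5, 0.0, 0.0], [0.25] * 4))             # True
print(majorizes([1.0, 0.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]))   # True
print(majorizes([0.25] * 4, [1.0, 0.0, 0.0, 0.0]))             # False
```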

The following definition introduces a general class of pseudometrics that preserves an essential property of majorization.

Definition 2.2. A function $d\,\colon\, \mathbb{R}^n \times \mathbb{R}^n\rightarrow [0,\infty)$ is called an m-pseudometric if it satisfies the following properties, for any $\tilde{\mathbf{x}}_1,\tilde{\mathbf{x}}_2,\tilde{\mathbf{x}}_3 \in\mathbb{R}^n$ :

  1. $d(\tilde{\mathbf{x}}_1,\tilde{\mathbf{x}}_2)\geq 0$ , $\tilde{\mathbf{x}}_1=\tilde{\mathbf{x}}_2 {\Rightarrow} d(\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2})=0$ ,

  2. $d(\tilde{\mathbf{x}}_1,\tilde{\mathbf{x}}_2)=d(\tilde{\mathbf{x}}_2,\tilde{\mathbf{x}}_1)$ ,

  3. $d(\tilde{\mathbf{x}}_1,\tilde{\mathbf{x}}_2)\leq d(\tilde{\mathbf{x}}_1,\tilde{\mathbf{x}}_3)+d(\tilde{\mathbf{x}}_3,\tilde{\mathbf{x}}_2)$ ,

  4. If $\tilde{\mathbf{x}}_1\prec\tilde{\mathbf{x}}_2$ and $\tilde{\mathbf{x}}_2\nprec \tilde{\mathbf{x}}_1$ (note from Definition 2.1 that $\sum_{j=1}^{n}x_{1j}=\sum_{j=1}^{n}x_{2j}$ ), then $d(\tilde{\mathbf{x}}_1,\mathbf{x}_{\mathbf{1.}}^*) \lt d(\tilde{\mathbf{x}}_2,\mathbf{x}_{\mathbf{1.}}^*)$ , where $x_{1.}=\tfrac{1}{n}\sum_{j=1}^{n}x_{1j}$ .

Note that from the relation between majorization and the Lorenz curve (see [Reference Arnold and Sarabia4]), axiom 4 is equivalent to:

  4′. If $\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2} \in\mathbb{R}^n$ with $\sum_{j=1}^{n}x_{1j}=\sum_{j=1}^{n}x_{2j}$ , $L_{\tilde{\mathbf{x}}_{1}}(p)\geq L_{\tilde{\mathbf{x}}_{2}}(p)$ for all $0 \lt p \lt 1$ , and $L_{\tilde{\mathbf{x}}_{1}}(p) \gt L_{\tilde{\mathbf{x}}_{2}}(p)$ for some p, then $d(\tilde{\mathbf{x}}_{1},\mathbf{x}_{\mathbf{1.}}^{*}) \lt d(\tilde{\mathbf{x}}_{2},\mathbf{x}_{\mathbf{1.}}^{*})$ , where $L_{\tilde{\mathbf{x}}}(p)$ denotes the Lorenz curve of $\tilde{\mathbf{x}}$ .

Example 2.1. Let d be the Euclidean metric in $\mathbb{R}^n$ , that is,

\[d(\tilde{\mathbf{u}},\tilde{\mathbf{v}})=\Biggl[\sum_{i=1}^n(u_i-v_i)^2\Biggr]^{{{1}/{2}}},\]

where $\tilde{\mathbf{u}}=(u_1,u_2,\ldots,u_n)$ and $\tilde{\mathbf{v}}=(v_1,v_2,\ldots,v_n)$ . Then d is an m-pseudometric on $\mathbb{R}^n$ .

Example 2.2. Let d be the $L^r$ -metric in $\mathbb{R}^n$ with $1 \lt r \lt \infty$ , that is,

\[d(\tilde{\mathbf{u}},\tilde{\mathbf{v}})=\Biggl[\sum_{i=1}^n|u_i-v_i|^r\Biggr]^{{{1}/{r}}}.\]

Assume $\tilde{\mathbf{u}}\prec\tilde{\mathbf{v}}$ and $\tilde{\mathbf{v}}\nprec \tilde{\mathbf{u}}$ . Since the function $|x-u_.|^r$ , where $r \gt 1$ , $x\in \mathbb{R}$ and $u_. = \frac{1}{n} \sum_{j=1}^{n} u_{j}$ , is a convex function, we have $d(\tilde{\mathbf{u}},\mathbf{u}_{\mathbf{.}}^*) \lt d(\tilde{\mathbf{v}},\mathbf{v}_{\mathbf{.}}^*)$ (see [Reference Berge7, page 184]). So d is an m-pseudometric on $\mathbb{R}^n$ .
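
As a quick numerical illustration of axiom 4 (our sketch, not from the paper), a Pigou–Dalton transfer produces a strictly majorized vector with the same mean, and the $L^r$-distance to the constant vector decreases:

```python
import numpy as np

def lr_dist(u, v, r):
    """L^r distance between two vectors."""
    return float(np.sum(np.abs(np.asarray(u) - np.asarray(v)) ** r) ** (1.0 / r))

rng = np.random.default_rng(0)
v = rng.random(6)
v_bar = np.full_like(v, v.mean())       # the constant vector with the same mean

# Pigou-Dalton transfer: move mass a from the largest entry to the smallest one.
u = v.copy()
i, j = np.argmin(u), np.argmax(u)
a = 0.25 * (u[j] - u[i])
u[i] += a
u[j] -= a                               # now u ≺ v, v ⊀ u, and the means are equal

for r in (1.5, 2.0, 3.0):
    print(r, lr_dist(u, v_bar, r) < lr_dist(v, v_bar, r))   # True for each r > 1
```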

The following metric property is still valid for an m-pseudometric.

Theorem 2.1. Let d be an m-pseudometric; then ${{d}/{(1+d)}}$ is an m-pseudometric.

Proof. Since d is an m-pseudometric, the result follows from the elementary result that if $x \gt 0$ , $y \gt 0$ , and $x \lt y$ , then

\begin{equation*} \frac{x}{1+x} \lt \frac{y}{1+y}.\end{equation*}

3. A new family of randomness measures

This section introduces a new methodology to measure the randomness of a discrete random variable using an m-pseudometric. So this is a metric-based measure of randomness and can be considered an alternative measure of entropy for measuring randomness.

The following defines a special subset of $\mathbb{R}^n$ :

(3.1) \begin{equation}\Gamma=\Biggl\{\tilde{\mathbf{p}}=(p_1,p_2,\dots,p_n)\in\mathbb{R}^n \mid\sum_{i=1}^np_i=1, p_i\geq 0, i=1,2,\ldots,n\Biggr\}.\end{equation}

Let X be a random variable with support $\{x_1,x_2,\ldots,x_n\}$ . Let $\tilde{\mathbf{p}}=(p_1,p_2,\ldots,p_n)$ be the probability distribution of X. That is, $\mathbb{P}(X=x_i)=p_i,i\in\{1,2,\ldots,n\}$ . Then clearly $\tilde{\mathbf{p}} \in \Gamma$ .

Definition 3.1. ([Reference Hickey14, Reference Hickey15].) Let $\tilde{\mathbf{p}}_1,\tilde{\mathbf{p}}_2\in \Gamma$ be two probability distributions. Then we say that $\tilde{\mathbf{p}}_1$ has more randomness than $\tilde{\mathbf{p}}_2$ if $\tilde{\mathbf{p}}_1\prec\tilde{\mathbf{p}}_2$ and $\tilde{\mathbf{p}}_2\nprec \tilde{\mathbf{p}}_1$ .

For any $\tilde{\mathbf{p}} \in \Gamma$ , $\tfrac{\mathbf{1}}{\mathbf{n}}^*\prec \tilde{\mathbf{p}}\prec \mathbf{1}^{**}$ . That is, a random variable with equally likely probabilities represents the most randomness, a random variable degenerate at one point represents non-randomness, and all other PMFs lie between these two extreme cases. Let Y, for example, follow a discrete uniform distribution with parameter n and let Z be a random variable which degenerates at a point (i.e. $\mathbb{P}(Z=a)=1$ , $ a\in \mathbb{R}$ ). Then, for any X with support $\{x_1,x_2,\ldots,x_n\}$ , where the $x_i$ , $i=1,2,\ldots,n$ , are distinct, the randomness of Y is greater than the randomness of X, which is greater than the randomness of Z. If $\tilde{\mathbf{p}}=(p_1,p_2,\ldots,p_n)$ and $\tilde{\mathbf{q}}=(p_{(1)},p_{(2)},\ldots,p_{(n)})$ , where $((1),(2),\ldots,(n))$ is a permutation of $(1,2,\ldots,n)$ , then $\tilde{\mathbf{p}}\prec \tilde{\mathbf{q}}$ and $\tilde{\mathbf{q}}\prec \tilde{\mathbf{p}}$ , and therefore the randomness of $\tilde{\mathbf{p}}$ and $\tilde{\mathbf{q}}$ is identical [Reference Hickey15].

We call an m-pseudometric defined on $\Gamma$ a random pseudometric. Since, for any $\tilde{\mathbf{p}} \in \Gamma$ , we have $\tilde{\mathbf{p}}\prec \mathbf{1}^{**}$ , axiom 4 gives $d(\tilde{\mathbf{p}},\tfrac{\mathbf{1}}{\mathbf{n}}^*)\leq d(\mathbf{1}^{**},\tfrac{\mathbf{1}}{\mathbf{n}}^*)$ . So the random pseudometric is bounded. The following definition introduces a new class of randomness measures (which we call the N-randomness measure) of a discrete random variable X having finite support.

Definition 3.2. Let $\Gamma$ be a class of probability distributions defined as

\[\Gamma=\Biggl\{\tilde{\mathbf{p}}=(p_1,p_2,\dots,p_n)\in\mathbb{R}^n \mid \sum_{i=1}^np_i=1, p_i\geq 0, i=1,2,\ldots,n \Biggr\},\]

and let d be a random pseudometric on $\Gamma$ . Let X be a random variable with PMF $\tilde{\mathbf{p}}=(p_1,p_2,\ldots,p_n)$ . Then the N-randomness measure of X is defined as

\[\zeta(X)=1-k_{n}d\biggl(\tilde{\mathbf{p}},\dfrac{\mathbf{1}}{\mathbf{n}}^*\biggr),\]

where

\[k_{n}=\dfrac{1}{\max d\bigl(\tilde{\mathbf{p}},\tfrac{\mathbf{1}}{\mathbf{n}}^*\bigr)}=\dfrac{1}{d\bigl(\mathbf{1}^{**},\tfrac{\mathbf{1}}{ \mathbf{n}}^*\bigr)}.\]
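
A minimal sketch (ours) of Definition 3.2, taking the Euclidean metric of Example 2.1 as the random pseudometric:

```python
import numpy as np

def n_randomness(p, d=lambda u, v: float(np.linalg.norm(np.asarray(u) - np.asarray(v)))):
    """N-randomness measure of a PMF p for a given random pseudometric d (Definition 3.2)."""
    p = np.asarray(p, dtype=float)
    n = p.size
    uniform = np.full(n, 1.0 / n)              # the vector (1/n)*
    extreme = np.zeros(n); extreme[0] = 1.0    # the vector 1**
    k_n = 1.0 / d(extreme, uniform)            # normalizing constant
    return 1.0 - k_n * d(p, uniform)

print(n_randomness([0.25, 0.25, 0.25, 0.25]))      # 1.0 (uniform: maximum randomness)
print(n_randomness([1.0, 0.0, 0.0, 0.0]))          # 0.0 (degenerate: minimum randomness)
print(round(n_randomness([0.5, 0.3, 0.2, 0.0]), 4))
```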

Remark 3.1. A similar form of the above measure is used in biological studies to check value validity of evenness indices for species distributions (see [Reference Kvålseth24]). Kvålseth [Reference Kvålseth24] argues that a measure that can be represented in the above form satisfies the value validity condition given in Definition 4.3. Using the $L^r$-metric, only two known measures (the Bulla measure for $r=1$ and the Williams measure for $r=2$) satisfy this property. However, Kvålseth [Reference Kvålseth24] considers only the case where d is a metric, so many widely used measures fail the property. Chao and Ricotta [Reference Chao and Ricotta8] later introduced evenness measures by replacing the metric with a divergence measure, but in general these measures do not satisfy the majorization property (see [Reference Chao and Ricotta8]).

The following properties are essential for randomness measures: location invariance, invariance under injective transformations, and symmetry (invariance under permutations). Since $\zeta$ depends only on the probabilities, the following results are immediate.

Theorem 3.1. Let X be a random variable with outcomes $\{x_1,x_2,\ldots,x_n\}$ . The PMF of X is defined as $\mathbb{P}(X=x_i)=p_i$ , $i\in\{1,2,\ldots,n\}$ . Then $\zeta(X)=\zeta(X-a)$ , for all $ a\in \mathbb{R}$ .

Theorem 3.2. If $g\,\colon\, \mathbb{R}\rightarrow \mathbb{R}$ is an injective function, then $\zeta(X)=\zeta(g(X))$ .

Theorem 3.3. Let X and Y be two random variables, with the probability vector of X being a permutation of the probability vector of Y. Then $\zeta(X)=\zeta(Y)$ .

The following example illustrates the above theorems in binomial distributions.

Example 3.1. Let $X\sim B(3,\tfrac{1}{2})$ and $Y=X+2$ , where B(n,p) represents a binomial distribution with parameters n and p. Then, from Theorem 3.1, $\zeta(Y)=\zeta(X)$ . Now assume $Z=X^2$ . Since the square function is injective on the non-negative real line, Theorem 3.2 gives $\zeta(Z)=\zeta(X)$ . Let W be a random variable with PMF

\[f_W(w)=\begin{cases}\frac{3}{8} & \text{if $w = 0,1$,}\\\frac{1}{8} & \text{if $w = 2,3$.}\end{cases}\]

Then, from Theorem 3.3, $\zeta(W)=\zeta(X)$ .

4. Gini randomness measure for discrete distributions

This section discusses a randomness measure based on a new pseudometric, which we call the Gini pseudometric.

Definition 4.1. Let $\Gamma$ be the class of discrete probability distributions as defined in (3.1). Then the Gini pseudometric $d_g$ is defined as $d_g\,\colon\, \Gamma \times\Gamma\rightarrow [0,\infty)$ such that

(4.1) \begin{equation}d_g(\tilde{\mathbf{u}},\tilde{\mathbf{v}})=\Biggl\lvert\sum_{i=1}^n\sum_{j=1}^n\lvert u_i-u_j\rvert-\sum_{i=1}^n\sum_{j=1}^n\lvert v_i-v_j\rvert \Biggr\rvert,\end{equation}

where $\tilde{\mathbf{u}},\tilde{\mathbf{v}}\in \Gamma$ .

The Gini pseudometric is a random pseudometric. The first three axioms in Definition 2.2 are direct. Now $d_g(\tilde{\mathbf{u}},\mathbf{u}_{\mathbf{.}}^*)=\sum_{i=1}^n\sum_{j=1}^n|u_i-u_j|$ is a constant multiple of the Gini coefficient of $\tilde{\mathbf{u}}$ (see [Reference Kendall, Stuart and Ord19]). Hence $\tilde{\mathbf{u}}\prec\tilde{\mathbf{v}}$ and $\tilde{\mathbf{v}}\nprec \tilde{\mathbf{u}}$ implies $d_g(\tilde{\mathbf{u}},\mathbf{u}_{\mathbf{.}}^*) \lt d_g(\tilde{\mathbf{v}},\mathbf{v}_{\mathbf{.}}^*)$ (see [Reference Allison2], [Reference Marshall, Olkin and Arnold26]). That is, the function $d_g$ satisfies axiom 4 of Definition 2.2.

Using the Gini pseudometric, a direct calculation gives $d_g(\mathbf{1}^{**},\tfrac{\mathbf{1}}{\mathbf{n}}^*)=2(n-1)$ . Hence the following definition introduces a randomness measure which we call the Gini randomness measure; a similar formula is used for measuring evenness in ecology (see [Reference Chao and Ricotta8], [Reference Kvålseth24]).

Definition 4.2. Let X be a discrete random variable with PMF

\[f_X(x)=\begin{cases}p_i & \text{if $x = x_i, i\in\{1,2,\ldots,n\}$,}\\0 & \text{otherwise.}\end{cases}\]

The Gini randomness measure of X ( $\zeta_g(X)$ ) is defined as

\[\zeta_g(X)=1-\dfrac{1}{2(n-1)}\sum_{i=1}^n\sum_{j=1}^n|p_i-p_j|.\]

Different formulas can be used to calculate the Gini randomness measure; see [Reference Sen31] and [Reference Yitzhaki33]. A major advantage of this randomness measure is the powerful theory available for the Gini coefficient, which can be useful for further study of randomness (see [Reference Yitzhaki and Schechtman34]). To illustrate this point, [Reference Yitzhaki and Schechtman34] lists several ways to represent the Gini coefficient, and hence the Gini randomness measure has several corresponding representations. The Gini coefficient, for instance, can be defined using the Lorenz curve (see [Reference Gastwirth11]), and hence the Gini randomness measure of a random variable X can be defined using the Lorenz curve; see Definition 7.2.

Example 4.1. Let X and Y be two random variables such that $X\sim B(4,\tfrac{1}{2})$ and $Y\sim B(4,\tfrac{1}{3})$ . Then a direct calculation using (4.1) gives $d_g(\widetilde{P_X},\widetilde{P_Y})=0.6019$ , which, after normalization by $2(n-1)=8$ , corresponds to a $7.52\%$ deviation between the randomness of the two distributions. Also, $\zeta_g(X)=0.5937$ and $\zeta_g(Y)=0.5185$ , that is, the randomness in $B(4,\tfrac{1}{2})$ is $59.37\%$ and in $B(4,\tfrac{1}{3})$ it is $51.85\%$ .
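
The numbers in Example 4.1 can be reproduced with a short script (a sketch, ours, using `scipy.stats.binom` for the binomial PMFs):

```python
import numpy as np
from scipy.stats import binom

def gini_randomness(p):
    """Gini randomness measure of a PMF p (Definition 4.2)."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.abs(p[:, None] - p[None, :]).sum() / (2.0 * (p.size - 1))

def d_g(u, v):
    """Gini pseudometric of (4.1)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return abs(np.abs(u[:, None] - u[None, :]).sum() - np.abs(v[:, None] - v[None, :]).sum())

pX = binom.pmf(np.arange(5), 4, 1 / 2)    # B(4, 1/2)
pY = binom.pmf(np.arange(5), 4, 1 / 3)    # B(4, 1/3)
print(round(gini_randomness(pX), 4), round(gini_randomness(pY), 4))   # 0.5938 0.5185
print(round(d_g(pX, pY) / (2 * (pX.size - 1)), 4))                    # 0.0752 (normalized by 2(n-1))
```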

In addition to Theorems 3.1, 3.2, and 3.3, the Gini randomness measure is continuous with respect to each $p_1,p_2,\ldots,p_n$ . The entropy measure also satisfies these properties (see [Reference Csiszár9]). So, both the Gini randomness measure and the entropy measure satisfy many of the essential properties of a randomness measure. The following discussion shows the advantages of the Gini randomness measure over entropy measures in applied situations. Example 4.2 illustrates a comparative study of the Gini randomness measure and Shannon entropy on sensitivity. The example shows some situations in which the Gini randomness measure is more sensitive than the entropy measure.

Example 4.2. (Comparison of Shannon entropy and Gini randomness measure.) Let $X\sim B(1,p)$ . The Shannon entropy of X (denoted by H(X)) is

\[H(X)=-((1-p)\log_{2}(1-p)+p\log_{2}p), \quad 0\leq p \leq 1.\]

The Gini randomness measure of X is

\begin{equation*}\zeta_g(X)=\begin{cases}2p & \text{if $0 \leq p \leq \tfrac{1}{2} $,}\\2- 2p & \text{if $\tfrac{1}{2} \leq p \leq 1 $.}\end{cases}\end{equation*}

Figure 1 illustrates the difference between the two measures. The blue line represents the Shannon entropy as a function of p, and the red line represents the Gini randomness measure as a function of p. Note that both measures attain their maximum when $p=\tfrac{1}{2}$ and their minimum when p is 0 or 1.

Figure 1. The figure illustrates the Shannon entropy and Gini randomness measure of a random variable following B(1,p) distribution.

Also, the sensitivity of the two measures differs. The Gini randomness measure is more sensitive when the probabilities are more scattered, whereas Shannon entropy is sensitive to the ratio of probabilities (see [Reference Allison2]). To illustrate this point, let X and Y have probability distributions $(p_1,p_2,\ldots,p_n)$ and $(q_1,q_2,\ldots,q_n)$ respectively, with the probabilities in descending order, $p_k=q_k$ for $k\neq i,j$ , and $q_i=p_i+a$ , $q_j=p_j-a$ , where a is a constant. From [Reference Allison2], $\zeta_g(X)-\zeta_g(Y)=c_1 a(j-i)$ , whereas $H(X)-H(Y)=c_2 a \log({{p_i}/{p_j}})$ , where $c_1$ and $c_2$ are constants. Note that the sensitivity of the Gini randomness measure depends on the ranks i and j in $1,2,\ldots,n$ , whereas that of the Shannon entropy depends on the numerical values of $p_i$ and $p_j$ . So $\zeta_g(X)-\zeta_g(Y)$ is large when many probability values lie between $p_i$ and $p_j$ , and $H(X)-H(Y)$ is large when $p_i$ is near one and $p_j$ is near zero.

This can be seen in Figure 1. For each p, the Bernoulli distribution has two probabilities $(p_1,p_2)$ and, in the notation of the previous paragraph, $i=2$ and $j=1$ ; the difference $\zeta_g(X)-\zeta_g(Y)$ depends only on the transferred probability a, whereas $H(X)-H(Y)$ depends on both a and the values of $p_i$ and $p_j$ . So the Gini randomness curve has a constant slope, while the Shannon entropy curve becomes steeper when p is near zero or one (so that $p_1$ is near one and $p_2$ is near zero).

For example, let p change from $0.52$ to $0.51$ ; the corresponding change in entropy is $0.0009$ , which is almost zero, while the change in the Gini randomness measure is $0.02$ . So, for the Bernoulli distribution, when p changes from $0.52$ to $0.51$ , the Gini measure ( $0.02$ ) registers more change in randomness than entropy ( $0.0009$ ). Now if p changes from $0.02$ to $0.01$ , the change in the Gini randomness measure is again $0.02$ (since both a and $j-i$ are the same as in the previous case), whereas the corresponding change in entropy is $0.06$ . So in this case entropy ( $0.06$ ) registers more change in randomness than the Gini measure ( $0.02$ ), because here p is small, so that $p_1$ is near one and $p_2$ is near zero. The same behaviour occurs when a distribution has more than two probabilities. So, to study changes in the randomness of probability distributions whose probabilities are more scattered, the Gini randomness measure provides better results, since it captures the changes more effectively than the Shannon entropy.
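
These sensitivity figures can be checked directly (a sketch, ours, using the closed forms given above for the Bernoulli case):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) distribution."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def gini_randomness_bernoulli(p):
    """Gini randomness measure of a Bernoulli(p) distribution: 1 - |1 - 2p|."""
    return 1.0 - abs(1.0 - 2.0 * p)

for p_old, p_new in [(0.52, 0.51), (0.02, 0.01)]:
    dH = abs(shannon_entropy(p_old) - shannon_entropy(p_new))
    dG = abs(gini_randomness_bernoulli(p_old) - gini_randomness_bernoulli(p_new))
    print(f"p: {p_old} -> {p_new}:  |Delta H| = {dH:.4f},  |Delta zeta_g| = {dG:.4f}")
# p: 0.52 -> 0.51:  |Delta H| = 0.0009,  |Delta zeta_g| = 0.0200
# p: 0.02 -> 0.01:  |Delta H| = 0.0607,  |Delta zeta_g| = 0.0200
```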

The following property of the Gini randomness measure is another advantage that popular entropy measures do not satisfy.

Definition 4.3. (Value validity property [Reference Kvålseth23].) A measure has value validity if all of its potential values provide numerical representations of the size (extent) of the attribute being measured that are true or realistic with respect to some acceptable criterion.

To check that the Gini randomness measure satisfies the value validity property, we use the following procedure from Kvålseth [Reference Kvålseth23, page 4859]. Let U and V be uniform and degenerate random variables, respectively. Let X be a random variable with PMF $(1-\lambda+{{\lambda}/{n}},{{\lambda}/{n}}, \ldots, {{\lambda}/{n}})$ , $0 \leq \lambda \leq 1$ (denoted by $\widetilde{P_X}$ ). Then a measure $\gamma$ satisfies the value validity property if

(4.2) \begin{equation}\dfrac{\gamma(U)-\gamma(X)}{\gamma(U)-\gamma(V)}=\dfrac{d(U,X)}{d(U,V)}=1-\lambda,\end{equation}

where d denotes the metric, and Kvålseth suggests that the Euclidean distance and the Minkowski class of distance metrics satisfy (4.2). Since

\[d_g(\tfrac{\mathbf{1}}{\mathbf{n}}^*,\widetilde{P_X})=2(n-1)(1-\lambda),\quad d_g\big(\tfrac{\mathbf{1}}{\mathbf{n}}^*, \mathbf{1}^{**}\big)=2(n-1)\quad\text{and}\quad \zeta_g(X)=\lambda,\]

the Gini pseudometric satisfies (4.2) and the Gini randomness measure satisfies the value validity property.
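
A short sketch (ours) verifying (4.2) numerically for the Gini randomness measure, using Kvålseth's construction for a few values of $\lambda$:

```python
import numpy as np

def gini_randomness(p):
    """Gini randomness measure of a PMF p (Definition 4.2)."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.abs(p[:, None] - p[None, :]).sum() / (2.0 * (p.size - 1))

n = 6
U = np.full(n, 1.0 / n)                 # uniform PMF
V = np.zeros(n); V[0] = 1.0             # degenerate PMF
for lam in (0.0, 0.25, 0.5, 0.9, 1.0):
    X = np.full(n, lam / n); X[0] += 1.0 - lam    # PMF (1 - lambda + lambda/n, lambda/n, ..., lambda/n)
    lhs = (gini_randomness(U) - gini_randomness(X)) / (gini_randomness(U) - gini_randomness(V))
    print(lam, round(lhs, 10), 1.0 - lam)         # the ratio equals 1 - lambda, as (4.2) requires
```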

Kvålseth [Reference Kvålseth23, Reference Kvålseth24] argues that the value validity property is crucial for a realistic representation of evenness characteristic of a species distribution and in many other applied situations. The popular entropy measures do not satisfy this property, so in such situations, the Gini randomness measure is more appropriate than the entropy measures. Another crucial advantage of the Gini randomness measure compared to entropy measures is its nice continuous extension (see Section 7).

The theory discussed for discrete random variables in the previous sections can be extended to continuous random variables without losing many of its crucial properties; this extension is the topic of the following sections.

5. Random pseudometric on the space of bounded random variables

This section extends the methodology defined on $\Gamma$ , a subset of $\mathbb{R}^n$ , to a space of bounded random variables (here ‘bounded random variable’ means either a discrete random variable with finite support or a continuous random variable with bounded interval support), so that the results can be extended to the continuous case. Let $(\Omega,\texttt{A},\mu)$ be a measure space, where $\Omega$ is a non-empty finite set of real numbers or a bounded interval of $\mathbb{R}$ , and $\mu$ is the counting or Lebesgue measure, respectively. Let $\Lambda$ be the class of all bounded non-negative random variables whose integral with respect to $\mu$ is unity. That is, if a random variable $X\in \Lambda$ , then $\int_{\Omega}X \,{\mathrm{d}} \mu =1$ .

Proposition 5.1. Let $X\in \Lambda$ and X be degenerate. Then $X\equiv{{1}/{(\mu(\Omega))}}$ .

Proof. Let $\Omega$ be a sample space with finite measure and suppose $X\equiv a$ . Then

\begin{equation*} \int_{\Omega }X(w) \,{\mathrm{d}} \mu(w) =1 \end{equation*}

implies

\begin{equation*} a =\dfrac{1}{\int_{\Omega } \,{\mathrm{d}} \mu(w)}.\end{equation*}

Let $\nu$ denote the degenerate random variable at ${{1}/{(\mu(\Omega))}}$ , that is, if $\Omega$ is a finite set with n elements or a bounded interval (a,b), then $\nu$ is a random variable degenerate at ${{1}/{n}}$ or ${{1}/{(b-a)}}$ respectively. Let

\[ L_X(p)=\frac{1}{\mu_X}\int_0^pQ_X(q)\,{\mathrm{d}} q,\quad 0\leq p \leq1 , \]

where $\mu_X=\mathbb{E}(X)$ and $Q_X$ is the quantile function of X (for the Lorenz curve definition using the quantile function, see [Reference Gastwirth11]). The following definition generalizes the random pseudometric to the class of random variables $\Lambda$ .

Definition 5.1. Let $d\,\colon\, \Lambda \times \Lambda\rightarrow [0,\infty)$ be a function satisfying, for any $X, Y, Z \in\Lambda$ :

  1. $d(X,Y)\geq 0$ , and if X and Y have the same distribution then $d(X,Y)=0$ ,

  2. $d(X,Y)=d(Y,X)$ ,

  3. $d(X,Y)\leq d(X,Z)+d(Z,Y)$ ,

  4. If $L_{X}(p)\geq L_{Y}(p)$ for $0 \lt p \lt 1$ and $L_{X}(p)\neq L_{Y}(p)$ for some p, then $d(X,\nu)\lt d(Y,\nu)$ .

Then d is called a random pseudometric.

Example 5.1. Let $X,Y \in \Lambda$ and

\[ M_g(X)=g^{-1}\biggl(\int_{J}g(x)\,{\mathrm{d}} F(x)\biggr) \]

be the quasi-arithmetic mean (see [Reference Porcu, Mateu and Christakos28]), where g is a continuous, strictly increasing, and concave function, J is the support of X, and $\mu_X$ represents E(X). Then the function

\[d(X,Y)=\biggl\lvert\dfrac{M_g(X)}{\mu_X}-\dfrac{M_g(Y)}{\mu_Y}\biggr\rvert\]

is a random pseudometric. The first three axioms in Definition 5.1 are direct, and since

\[d(X,\nu)=1-\dfrac{M_g(X)}{\mu_X},\]

which is the Atkinson index, axiom 4 follows (see [Reference Allison2], [Reference Atkinson5]).
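
A small sketch (ours) of the pseudometric in Example 5.1 with $g=\log$: for a discrete element $f_W(U)$ of $\Lambda$ (the PMF values of W, each taken with probability ${{1}/{n}}$), the distance to $\nu$ reduces to the Atkinson index, which is smaller for more even PMFs.

```python
import numpy as np

def atkinson_distance_to_nu(pmf):
    """d(f_W(U), nu) of Example 5.1 with g = log: one minus geometric mean over arithmetic mean."""
    x = np.asarray(pmf, dtype=float)        # values taken by f_W(U), each with probability 1/n
    m_g = np.exp(np.mean(np.log(x)))        # quasi-arithmetic mean M_g with g = log
    return 1.0 - m_g / np.mean(x)           # Atkinson index of the PMF

print(round(atkinson_distance_to_nu([0.25, 0.25, 0.25, 0.25]), 4))   # 0.0 (uniform PMF)
print(round(atkinson_distance_to_nu([0.40, 0.30, 0.20, 0.10]), 4))   # 0.1147
print(round(atkinson_distance_to_nu([0.70, 0.10, 0.10, 0.10]), 4))   # 0.3494: more concentrated, larger distance
```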

6. Randomness measure of a general random variable

Let $\Phi$ be the set of all random variables, either discrete or continuous, with bounded support. Let $f_X$ be the PMF or PDF of X. Let U be a uniform random variable that is either discrete or continuous.

For a given $X\in \Phi$ , let $f_X(U)$ be a random variable created by transforming U using the PMF or PDF of X; U and X have the same support (a similar transform can be seen in the literature; see [Reference Ahmad and Kochar1], [Reference Di Crescenzo, Paolillo and Suárez-Llorens10], and [Reference Rommel, Bonnans, Gregorutti and Martinon30]). If X is a discrete random variable having outcomes $\{x_1,x_2,\ldots,x_n\}$ and the PMF is $\mathbb{P}(X=x_i)=p_i$ , $i\in\{1,2,\ldots,n\}$ , then $L_{\tilde{\mathbf{p}}}(r)=L_{f_X(U)}(r)$ , for all $r\in(0,1)$ . For example, if $X\sim B(4,\tfrac{1}{2})$ , then to calculate $f_X(U)$ , consider $\mathbb{P}(U=i)=\tfrac{1}{5}$ , $i=0,1,2,3,4$ . Then use the binomial PMF, that is,

\[f_X(x)=\begin{cases}\tfrac{1}{16} & \text{if $x= 0 , 4$,}\\[3pt] \tfrac{4}{16} & \text{if $x= 1 , 3$,}\\[3pt] \tfrac{6}{16} & \text{if $x= 2 $.}\end{cases}\]

This implies that $f_X(U)$ is a random variable with PMF

\[ f_{f_X(U)}(x)= \begin{cases} \tfrac{2}{5} & \text{if $x=\tfrac{ 1}{16} $,}\\[3pt] \tfrac{2}{5} & \text{if $x=\tfrac{ 4}{16} $,}\\[3pt] \tfrac{1}{5} & \text{if $x= \tfrac{ 6}{16} $.} \end{cases} \]

Then its Lorenz curve is

\[L_{\tilde{\mathbf{p}}}(r)=L_{f_X(U)}(r)=\begin{cases}\tfrac{5}{16}r & \text{if $0\leq r \lt \tfrac{2}{5}$,}\\[3pt] \tfrac{5}{4}r- \tfrac{3}{8} & \text{if $\tfrac{2}{5}\leq r \lt \tfrac{4}{5}$,}\\[3pt] \tfrac{15}{8}r-\tfrac{7}{8} & \text{if $\tfrac{4}{5}\leq r \leq 1$,}\end{cases}\]

where $\tilde{\mathbf{p}}=(\tfrac{1}{16},\tfrac{4}{16},\tfrac{6}{16},\tfrac{4}{16},\tfrac{1}{16})$ . Also, if $\tilde{\mathbf{p}}$ and $\tilde{\mathbf{q}}$ correspond to $f_X(U)$ and $f_Y(U)$ , then $\tilde{\mathbf{p}}\prec \tilde{\mathbf{q}}$ if and only if $L_{f_X(U)}(r)\geq L_{f_Y(U)}(r)$ , for all $ r\in[0,1]$ (see [Reference Marshall, Olkin and Arnold26, page 718]). Theorem 6.1 extends this result to continuous random variables, that is, the relation between $f_X$ (the density function of X) and $f_X(U)$ (a random variable created by transforming U using the PDF of X; U and X have the same support). The theorem establishes that axiom 4 in Definition 5.1 is a valid extension of axiom 4 in Definition 2.2. To prove Theorem 6.1, the following results (from [Reference Hickey16, page 924] and [Reference Marshall, Olkin and Arnold26, page 719] respectively) are required.

Lemma 6.1. Let $f_X$ and $f_Y$ be two densities with the same support. Then $f_X$ is majorized by $f_Y$ ( $f_X\prec f_Y$ ) if and only if

\[\int_{-\infty}^{\infty}h(f_X(x))\,{\mathrm{d}} x\leq\int_{-\infty}^{\infty}h(f_Y(x))\,{\mathrm{d}} x,\]

for every continuous convex function h.

Lemma 6.2. Suppose that X and Y are two random variables and $\mathbb{E}X = \mathbb{E}Y$ . Then $L_X(p)\geq L_Y(p)$ , for all $ p\in[0,1]$ , if and only if $\mathbb{E}[h(X)]\leq \mathbb{E}[h(Y)]$ for every continuous convex function h, that is, if and only if $X \leq_{cx} Y$ .

Theorem 6.1. Let $f_X$ and $f_Y$ be two densities with the same support. Then $f_X\prec f_Y$ if and only if $L_{f_X(U)}(r)\geq L_{f_Y(U)}(r)$ , for all $ r\in(0,1)$ .
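
Returning to the $B(4,\tfrac{1}{2})$ illustration above, the following sketch (ours) constructs the distribution of $f_X(U)$ and evaluates its Lorenz curve, matching the piecewise form given before Lemma 6.1:

```python
import numpy as np
from scipy.stats import binom

values = binom.pmf(np.arange(5), 4, 0.5)   # values taken by f_X(U); each occurs with probability 1/5
probs = np.full(5, 1 / 5)

def lorenz(values, probs, r):
    """Lorenz curve L(r) of a discrete random variable with the given values and probabilities."""
    order = np.argsort(values)                        # ascending order, as the quantile function requires
    v, w = np.asarray(values, float)[order], np.asarray(probs, float)[order]
    mu = np.sum(v * w)
    hi = np.cumsum(w)                                 # right endpoints of the quantile steps
    lo = hi - w                                       # left endpoints
    total = np.sum(v * np.clip(np.minimum(r, hi) - lo, 0.0, None))   # integral of the quantile up to r
    return total / mu

for r in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(r, round(lorenz(values, probs, r), 4))
# matches the piecewise curve 5r/16, 5r/4 - 3/8, 15r/8 - 7/8 on the respective intervals
```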

Since $f_X(U)\in \Lambda$ , the following definition generalizes Definition 3.2 to a bounded random variable.

Definition 6.1. Let $X\in \Phi$ be a random variable. Then the N-randomness measure of X is defined as

\[\zeta(X)=1-K_Xd(f_X(U),f_U(U)),\]

where d is a random pseudometric, U is a uniform random variable with X and U having the same support, and

\[K_X=\dfrac{1}{\sup_{f_X(U)\in \Lambda} d(f_X(U),f_U(U))}.\]

The following example illustrates how the supremum in the definition of $K_X$ can be approached.

Example 6.1. Assume, without loss of generality, that [0,1] is the support of $f_X(U)$ . Then consider the sequence of random variables $\{f_{Z_n}(U)\}_{n\geq 1}$ with PDF

\[f_{f_{Z_n}(U)}(z)=\dfrac{1}{n}z^{{{(1-n)}/{n}}},\quad 0\leq z\leq 1.\]

Then $f_{Z_n}(U)\in \Lambda$ . Also, it is easy to verify that $\mathbb{E}(f_{Z_n}(U))={{1}/{(n+1)}}$ ,

\begin{align*}F_{f_{Z_n}(U)}(z)&=z^{{{1}/{n}}},\quad 0\leq z\leq 1,\\[3pt] Q_{f_{Z_n}(U)}(p)&=p^n,\quad 0\leq p\leq 1,\\[3pt] L_{f_{Z_n}(U)}(p)&=p^{n+1},\quad 0\leq p\leq 1.\end{align*}

Therefore,

\begin{equation*} \lim_{n\rightarrow\infty}L_{f_{Z_n}(U)}(p)=\begin{cases}0 & \text{if $0 \lt p \lt 1 $,}\\1 & \text{if $p = 1$.}\end{cases}\end{equation*}

So a random variable X attaining $\sup_{f_X(U)\in \Lambda} d(f_X(U),f_U(U))$ has the above limiting Lorenz curve; see Theorem 7.3.

Note that Definitions 3.2 and 6.1 are the same when X is a discrete random variable. Since $\zeta$ depends only on the PMF or PDF, the following result is immediate.

Theorem 6.2. For any $X\in \Phi$ , $\zeta(X)=\zeta(X-a)$ , for all $a\in \mathbb{R}$ .

The above theorem shows that the location change does not affect the randomness of a random variable.

7. Gini randomness measure for general distributions

This section introduces the Gini randomness measure of a random variable. The following definition gives the Gini pseudometric on $\Lambda$ .

Definition 7.1. (Gini pseudometric.) Let $X,Y\in \Lambda$ be two random variables. Then the Gini pseudometric d is defined as $d\,\colon\, \Lambda \times \Lambda\rightarrow [0,1]$ such that

\[d(X,Y)=2\int_0^1|L_{X}(p)-L_{Y}(p)|\,{\mathrm{d}} p.\]

The Gini pseudometric is twice the usual $L^1$-distance between Lorenz curves on $\Lambda$ , which implies properties 1, 2, and 3 of Definition 5.1. Since $L_{\nu}(p)=p$ , $0\leq p\leq 1$ , and, for any random variable X, $L_{X}(p) \leq p$ for $0\leq p\leq 1$ ,

\begin{align*}d(X,\nu) &= 2\int_0^1|L_{X}(p)-L_{\nu}(p)|\,{\mathrm{d}} p\\&= 2\int_0^1 (p-L_{X}(p))\,{\mathrm{d}} p \\&= 1-2\int_0^1L_X(p)\,{\mathrm{d}} p .\end{align*}

So $d(X,\nu)$ is the Gini coefficient of X (see [Reference Arnold and Sarabia4, page 51]), which satisfies the Lorenz ordering (see [Reference Wei, Allen and Liu32]), and this implies property 4.

Theorem 7.1. The function d in Definition 7.1 is a random pseudometric.

The following defines the constant $K_X$ :

\begin{equation*}K_X=\begin{cases}\dfrac{n}{n-1} & \text{if $X$ is discrete with support $\{x_1,x_2,\ldots,x_n\}$,}\\[7pt]1 & \text{if $X$ is continuous with bounded support}.\end{cases}\end{equation*}

Using Definitions 6.1 and 7.1, the following definition introduces the Gini randomness measure for a bounded random variable.

Definition 7.2. For any $X\in \Phi$ , the Gini randomness measure of X $(\zeta_g(X))$ is defined as

\begin{equation*}\zeta_g(X)=\begin{cases}\displaystyle \dfrac{2n}{n-1}\int_0^1L_{f_X(U)}(p)\,{\mathrm{d}} p-\dfrac{1}{n-1} & \text{if $X$ is discrete with support $\{x_1,x_2,\ldots,x_n\}$,}\\[11pt]\displaystyle 2\int_0^1L_{f_X(U)}(p)\,{\mathrm{d}} p & \text{if $X$ is continuous with bounded support.}\end{cases}\end{equation*}

Note that Definitions 4.2 and 7.2 are the same when X is a discrete random variable. The following theorems show that the Gini randomness measure attains the maximum 1 for a uniform distribution and the minimum 0 for a degenerate distribution.
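
A numerical sketch (ours, not from the paper) of the continuous branch of Definition 7.2: discretize $f_X$ on a fine grid of its support, sort the values to obtain the quantile function of $f_X(U)$, and integrate the Lorenz curve.

```python
import numpy as np

def gini_randomness_continuous(pdf, a, b, m=200_000):
    """Approximate zeta_g(X) = 2 * int_0^1 L_{f_X(U)}(p) dp for a continuous X on (a, b)."""
    u = np.linspace(a, b, m, endpoint=False) + (b - a) / (2 * m)   # midpoints of a uniform grid
    y = np.sort(pdf(u))                        # discretized, sorted values of f_X(U)
    lorenz = np.cumsum(y) / y.sum()            # Lorenz curve evaluated at p = k/m
    return 2.0 * lorenz.mean()                 # approximates 2 * int_0^1 L(p) dp

# uniform density on (0, 1): maximum randomness
print(round(gini_randomness_continuous(lambda x: np.ones_like(x), 0.0, 1.0), 4))   # 1.0
# triangular density f(x) = 2x on (0, 1): zeta_g = 2/3
print(round(gini_randomness_continuous(lambda x: 2.0 * x, 0.0, 1.0), 4))           # 0.6667
```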

Theorem 7.2. If $X\in\Phi$ , then $\zeta_g(X)=1$ if and only if X follows a uniform distribution.

Theorem 7.3. If $X\in\Phi$ , then $\zeta_g(X)=0$ if and only if X takes probability 1 for one outcome and 0 for the others.

Example 7.1. (Truncated exponential distribution.) Let X be a random variable which follows a truncated exponential distribution. Then its PDF is

\[f_X(x)=\dfrac{\theta}{1-{\mathrm{e}}^{-\theta \nu}}{\mathrm{e}}^{-\theta x}, \quad 0 \lt x \lt \nu,\ \nu \gt 0,\ \theta \gt 0.\]

Let U be a uniform random variable in the interval $(0,\nu)$ . Then

\[f_X(U)=\dfrac{\theta}{1-{\mathrm{e}}^{-\theta \nu}}{\mathrm{e}}^{-\theta U}.\]

The PDF of $f_X(U)$ is

\begin{equation*}f_{f_X(U)}(x)=\begin{cases}\dfrac{1}{\theta \nu x} & \text{if $\dfrac{\theta}{{\mathrm{e}}^{\theta \nu}-1} \lt x \lt \dfrac{\theta}{1-{\mathrm{e}}^{-\theta \nu}}$,}\\[7pt]0 & \text{otherwise,}\end{cases}\end{equation*}

and the distribution function of $f_X(U)$ is

\[F_{f_X(U)}(x)=\dfrac{1}{\theta \nu}\ln\biggl\{\dfrac{x({\mathrm{e}}^{\theta\nu}-1)}{\theta}\biggr\}, \quad\dfrac{\theta}{{\mathrm{e}}^{\theta \nu}-1} \lt x \lt \dfrac{\theta}{1-{\mathrm{e}}^{-\theta \nu}}.\]

Therefore the quantile function of $f_X(U)$ is

\[Q_{f_X(U)}(p)=\dfrac{\theta}{{\mathrm{e}}^{\theta \nu}-1}{\mathrm{e}}^{\theta\nu p},\quad 0 \lt p \lt 1.\]

A simple calculation gives $\mathbb{E}(f_X(U))={{1}/{\nu}}$ . Using the Lorenz curve equation

\[L_X(p)=\dfrac{1}{\mu_X}\int_0^pQ_X(q)\,{\mathrm{d}} q,\quad 0\leq p \leq1,\]

we get

\[L_{f_X(U)}(p)=\dfrac{{{\mathrm{e}}^{\theta \nu p}-1}}{{\mathrm{e}}^{\theta \nu}-1},\quad 0\leq p\leq 1.\]

So, by Definition 7.2, the randomness measure of X is

(7.1) \begin{equation}\zeta_g(X)=\dfrac{2}{\theta\nu}-\dfrac{2}{{\mathrm{e}}^{\theta\nu}-1}.\end{equation}

For a fixed $\nu$ , $\zeta_g(X)$ is a decreasing function of $\theta$ , which implies that $\theta$ is an uncertainty parameter in the sense of [Reference Hickey16]. That is, for a constant $\nu$ , if $X_{\theta}$ denotes X, which follows a truncated exponential distribution with parameter $\theta$ , then $X_{\theta_1}$ has more randomness than $X_{\theta_2}$ if $\theta_1 \lt \theta_2$ .
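
A quick numerical cross-check (ours) of (7.1), reusing the grid-based approximation sketched after Definition 7.2:

```python
import numpy as np

def gini_randomness_continuous(pdf, a, b, m=200_000):
    """Approximate 2 * int_0^1 L_{f_X(U)}(p) dp for a continuous X on (a, b)."""
    u = np.linspace(a, b, m, endpoint=False) + (b - a) / (2 * m)
    y = np.sort(pdf(u))
    return 2.0 * (np.cumsum(y) / y.sum()).mean()

theta, nu = 2.0, 3.0
pdf = lambda x: theta * np.exp(-theta * x) / (1.0 - np.exp(-theta * nu))   # truncated exponential PDF
numeric = gini_randomness_continuous(pdf, 0.0, nu)
closed_form = 2.0 / (theta * nu) - 2.0 / (np.exp(theta * nu) - 1.0)        # equation (7.1)
print(round(numeric, 4), round(closed_form, 4))    # both approximately 0.3284
```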

Example 7.2. In the previous example, if $Y=aX$ with $a \gt 0$ , then it is easy to verify that Y has the same randomness measure as in (7.1). That is, $\zeta_g(aX)=\zeta_g(X)$ for every $a \gt 0$ .

Also, if Y follows

\[f_Y(x)=\dfrac{\theta}{\tfrac{\alpha}{\nu}(1-{\mathrm{e}}^{-\theta\nu})}{\mathrm{e}}^{{{(-\theta \nu x)}/{\alpha}}}, \quad 0 \lt x \lt \alpha,\ \nu \gt 0,\ \theta \gt 0,\ \alpha \gt 0,\]

then Y has the same randomness for different values of $\alpha$ and the randomness is independent of the parameter $\alpha$ .

The following theorem shows that the Gini randomness measure is scale-invariant, an important property of a randomness measure.

Theorem 7.4. For any $X\in\Phi$ and $c\neq0$ , $\zeta_g(cX)=\zeta_g(X)$ .

8. Conclusion and future work

This paper presents a new class of randomness measures (the N-randomness measure) using a pseudometric approach, together with a comprehensive discussion of its properties and illustrations. The specific case of the N-randomness measure that we examine is the Gini randomness measure, and we establish the crucial properties it must possess as a randomness measure. We illustrate the computation by working out the Gini randomness measure of a truncated exponential distribution. The paper also shows the advantages of the Gini randomness measure over entropy: unlike entropy measures, it satisfies the value validity property, it has a natural continuous extension, and in some situations it is more sensitive than entropy. Additionally, we acknowledge that the definition of an N-randomness measure for a distribution with infinite support is not straightforward, which poses a limitation on this methodology.

Future research involves extending the N-randomness measure to distributions with infinite support. This paper studies only the Gini randomness measure as a particular case of the N-randomness measure; some models in the literature satisfy the majorization criteria and possess a finite range (specifically, certain inequality indices, as in Example 5.1), so other distinct N-randomness measures can be constructed from these models, and examining their properties is an interesting direction for future work. Uncovering additional applications of the Gini randomness measure is another important direction.

Appendix A. Proofs

In this section we provide proofs of different theorems in the above sections.

A.1. Proof of Theorem 6.1

Let the support of X and Y be [a,b]. Then the uniform PDF on this interval is

\begin{equation*}f_U(x)=\begin{cases}\dfrac{1}{b-a} & \text{if $a \leq x \leq b$,}\\[7pt]0 & \text{otherwise.}\end{cases}\end{equation*}

Since X has support [a, b],

\[\mathbb{E}\{f_X(U)\}=\int_a^bf_X(u)f_U(u)\,{\mathrm{d}} u=\dfrac{1}{b-a}\int_a^bf_X(u)\,{\mathrm{d}} u=\dfrac{1}{b-a}.\]

That is, for any $X,Y\in \Phi$ with the same support, $\mathbb{E}(f_X(U))=\mathbb{E}(f_Y(U))$ . Then, by Lemma 6.2, $L_{f_X(U)}(r)\geq L_{f_Y(U)}(r)$ , for all $ r\in(0,1)$ , if and only if $f_X(U) \leq_{cx} f_Y(U)$ . Now $f_X(U) \prec f_Y(U)$ (in the sense of the definition in [Reference Marshall, Olkin and Arnold26], i.e. $X\prec Y$ if and only if $X \leq_{cx} Y$ ) is equivalent to $f_X\prec f_Y$ , since

\[\mathbb{E}[h(f_X(U))]=\int_a^bh(f_X(u))f_U(u)\,{\mathrm{d}} u=\dfrac{1}{b-a}\int_a^b h(f_X(u))\,{\mathrm{d}} u.\]

That is, by Lemma 6.1, $f_X\prec f_Y$ if and only if $f_X(U) \leq_{cx} f_Y(U)$ . This completes the proof.

A.2. Proof of Theorem 7.2

Let $X\sim U(a,b)$ . Then

\begin{equation*}f_X(x)=\begin{cases} \dfrac{1}{b-a} & \text{if $a \lt x \lt b $,}\\[7pt]0 & \text{otherwise.}\end{cases}\end{equation*}

Then $f_X(U)\equiv {{1}/{(b-a)}}$ , that is,

\[Q_{f_X(U)}(p)=\dfrac{1}{b-a},\quad 0 \lt p \lt 1\]

(quantile function of a random variable degenerate at ${{1}/{(b-a)}}$ ). Then $L_{f_X(U)}(p)=p$ , $0 \lt p \lt 1$ . This implies $\zeta_g(X)=1$ , i.e. maximum randomness.

Let X be discrete uniform with $n\in N$ outcomes. Then

\[\mathbb{P}\biggl(f_X(U)=\dfrac{1}{n}\biggr)=1.\]

Therefore

\[Q_{f_X(U)}(p)=\dfrac{1}{n},\quad 0 \lt p \lt 1,\quad\text{and} \quad\mu_{f_X(U)}=\dfrac{1}{n}.\]

Then $L_{f_X(U)}(p)=p$ , $0 \lt p \lt 1$ . This implies $\zeta_g(X)=1$ , i.e. maximum randomness.

Now, the objective is to prove that only a uniform distribution reaches maximum randomness. To reach this maximum, the $L_{f_X(U)}(p)$ should be p for all $0 \lt p \lt 1$ . Now

\[\int_{0}^pQ_{f_X(U)}(q)\,{\mathrm{d}} q=\mu_{f_X(U)}L_{f_X(U)}(p)\]

implies

\[ Q_{f_X(U)}(p)=\mu_{f_X(U)}\dfrac{{\mathrm{d}} L_{f_X(U)}(p)}{{\mathrm{d}} p},\quad 0 \lt p \lt 1,\]

that is, $Q_{f_X(U)}(p)=\mu_{f_X(U)}$ , $0 \lt p \lt 1$ , which implies that $f_X(U)$ is a constant. If X is a discrete distribution with finite support having PMF $\mathbb{P}(X=x_i)=p_i,i\in\{1,2,\ldots,n\}$ , then $\zeta_g(X)$ can be written as

\[1-\dfrac{1}{2(n-1)}\sum_{i=1}^n\sum_{j=1}^n|p_i-p_j|\]

(see Definition 4.2). Now $\zeta_g(X)=1$ implies $\sum_{i=1}^n\sum_{j=1}^n|p_i-p_j|=0$ , i.e. $p_1=p_2=\cdots=p_n$ , thus completing the proof.

A.3. Proof of Theorem 7.3

Let X be a discrete distribution with support $\{x_1,x_2,\ldots,x_n\}$ , for $n \gt 1$ , with PMF

\begin{equation*}\mathbb{P}(X=x)=\begin{cases}1 & \text{if $x = x_1$,}\\0 & \text{if $x \in M=\{x_2,x_3,\ldots,x_n\} $.}\end{cases}\end{equation*}

Now the quantile of $f_X(U)$ is

\begin{equation*}Q_{f_X(U)}(q)=\begin{cases}0 & \text{if $0 \lt q \leq 1-\tfrac{1}{n}$,}\\1 & \text{if $1-\tfrac{1}{n} \lt q \leq 1 $,}\end{cases}\end{equation*}

and $\mu_{f_X(U)}={{1}/{n}}$ . Then

\begin{equation*}L_{f_X(U)}(p)=\begin{cases}0 & \text{if $0 \lt p \leq 1-\tfrac{1}{n} $,}\\np+1-n & \text{if $1-\tfrac{1}{n} \lt p \leq 1 $.}\end{cases}\end{equation*}

This implies $\zeta_g(X)=0$ , i.e. minimum randomness (for $n=1$ , logically, this means there is no other choice, so the study of randomness is meaningless). So if Z is a random variable with uncountable support and the probability distribution is degenerate at a point, then M contains an infinite number of outcomes, which implies that

\begin{equation*}L_{f_Z(U)}(p)=\lim_{n\rightarrow\infty}L_{f_X(U)}(p)=\begin{cases}0 & \text{if $0 \lt p \lt 1 $,}\\1 & \text{if $p = 1 $.}\end{cases}\end{equation*}

Therefore $\zeta_g(Z)=0$ , i.e. minimum randomness. The reverse part is direct.

A.4. Proof of Theorem 7.4

Assume $c \gt 0$ . Let $Y=cX$ be a transformation of X where the PDF (or PMF) of Y is $f_Y$ . If X is discrete, then Theorem 3.2 implies $\zeta_g(cX)=\zeta_g(X)$ . If X has PDF $f_X$ and support on [a,b], then Y has PDF

(A.1) \begin{equation}f_Y(y)=\dfrac{1}{c}f_X\biggl(\dfrac{y}{c}\biggr),\quad ca\leq y \leq cb .\end{equation}

Let $U_{[a,b]}$ denote a random variable which follows a uniform distribution with support [a,b]. From (A.1), $f_X(U_{[a,b]})$ has support $[f_X(a),f_X(b)]$ and $f_Y(U_{[ca,cb]})$ has support

\[\biggl[\dfrac{1}{c}f_X(a),\dfrac{1}{c}f_X(b)\biggr].\]

Also, if $Z\sim U_{[ca,cb]}$ then ${{Z}/{c}}\sim U_{[a,b]}$ , and using (A.1) implies that

\[f_Y(U_{[ca,cb]})=\dfrac{1}{c}f_X\biggl(\dfrac{U_{[ca,cb]}}{c}\biggr)=\dfrac{1}{c}f_X(U_{[a,b]}).\]

Then,

\[F_{f_Y(U_{[ca,cb]})}(u)=\mathbb{P}(f_Y(U_{[ca,cb]})\leq u)=\mathbb{P}(f_X(U_{[a,b]})\leq cu)=F_{f_X(U_{[a,b]})}(cu).\]

This implies that

\begin{align*}Q_{f_Y(U_{[ca,cb]})}(v) &= \inf\{w\,\colon\, F_{f_Y(U_{[ca,cb]})}(w) \gt v\}\\&= \dfrac{1}{c}\inf\{c w\,\colon\, F_{f_X(U_{[a,b]})}(c w) \gt v\}\\&= \dfrac{1}{c}Q_{f_X(U_{[a,b]})}(v).\end{align*}

Then, for $p\in(0,1)$ ,

\begin{align*}L_{f_Y(U_{[ca,cb]})}(p) &= \dfrac{1}{\mathbb{E}\{f_Y(U_{[ca,cb]})\}}\int_0^pQ_{f_Y(U_{[ca,cb]})}(q) \,{\mathrm{d}} q\\&= \dfrac{1}{{{1}/{(cb-ca)}}}\int_0^p\dfrac{1}{c}Q_{f_X(U_{[a,b]})}(q) \,{\mathrm{d}} q\\&= \dfrac{1}{\mathbb{E}\{f_X(U_{[a,b]})\}}\int_0^pQ_{f_X(U_{[a,b]})}(q) \,{\mathrm{d}} q\\&= L_{f_X(U_{[a,b]})}(p).\end{align*}

Similarly, we can prove it for $c \lt 0$ .

Acknowledgement

I wish to thank the Editor-in-Chief, the Executive Editor, and two anonymous referees for their important comments that significantly enhanced the manuscript.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Ahmad, I. A. and Kochar, S. C. (1988). Testing for dispersive ordering. Statist. Prob. Lett. 7, 179–185.
Allison, P. D. (1978). Measures of inequality. Amer. Sociol. Rev. 43, 865–880.
Arnold, B. C. (2007). Majorization: here, there and everywhere. Statist. Sci. 22, 407–413.
Arnold, B. C. and Sarabia, J. M. (2018). Majorization and the Lorenz Order with Applications in Applied Mathematics and Economics, 1st edn. Springer.
Atkinson, A. B. (1970). On the measurement of inequality. J. Economic Theory 2, 244–263.
Baumgartner, B. (2014). Characterizing entropy in statistical physics and in quantum information theory. Found. Phys. 44, 1107–1123.
Berge, C. (1963). Topological Spaces, 1st edn. Oliver & Boyd, London.
Chao, A. and Ricotta, C. (2019). Quantifying evenness and linking it to diversity, beta diversity and similarity. Ecology 100, e02852.
Csiszár, I. (2008). Axiomatic characterization of information measures. Entropy 10, 261–273.
Di Crescenzo, A., Paolillo, L. and Suárez-Llorens, A. (2024). Stochastic comparisons, differential entropy and varentropy for distributions induced by probability density functions. Metrika. https://doi.org/10.1007/s00184-024-00947-3
Gastwirth, J. L. (1971). A general definition of the Lorenz curve. Econometrica 39, 1037–1039.
Gour, G. and Tomamichel, M. (2021). Entropy and relative entropy from information-theoretic principles. IEEE Trans. Inf. Theor. 67, 6313–6327.
Hardy, G. H., Littlewood, J. E. and Pólya, G. (1934). Inequalities, 1st edn. Cambridge University Press.
Hickey, R. J. (1982). A note on the measurement of randomness. J. Appl. Prob. 19, 229–232.
Hickey, R. J. (1983). Majorisation, randomness and some discrete distributions. J. Appl. Prob. 20, 897–902.
Hickey, R. J. (1984). Continuous majorisation and randomness. J. Appl. Prob. 21, 924–929.
Joe, H. (1987). Majorization, randomness and dependence for multivariate distributions. Ann. Prob. 15, 1217–1225.
Joe, H. (1988). Majorization, entropy and paired comparisons. Ann. Statist. 16, 915–925.
Kendall, M. G., Stuart, A. and Ord, J. K. (2015). Kendall's Advanced Theory of Statistics, vol. 1, Distribution Theory, 6th edn. Wiley, New Delhi.
Klir, G. J. (2006). Uncertainty and Information: Foundations of a Generalized Information Theory, 1st edn. Wiley, Hoboken, NJ.
Klir, G. J. and Wierman, M. J. (1999). Uncertainty-based Information: Elements of Generalized Information Theory, 1st edn. Springer, Berlin.
Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. Ann. Math. Statist. 22, 79–86.
Kvålseth, T. O. (2014). Entropy evaluation based on value validity. Entropy 16, 4855–4873.
Kvålseth, T. O. (2015). Evenness indices once again: critical analysis of properties. SpringerPlus 4, 232.
Kvålseth, T. O. (2016). On the measurement of randomness (uncertainty): a more informative entropy. Entropy 18, 159.
Marshall, A. W., Olkin, I. and Arnold, B. C. (2011). Inequalities: Theory of Majorization and its Application, 2nd edn. Springer, New York.
Muirhead, R. F. (1903). Some methods applicable to identities and inequalities of symmetric algebraic functions of n letters. Proc. Edinburgh Math. Soc. 21, 144–157.
Porcu, E., Mateu, J. and Christakos, G. (2009). Quasi-arithmetic means of covariance functions with potential applications to space–time data. J. Multivariate Anal. 100, 1830–1844.
Ribeiro, M., Henriques, T., Castro, L., Souto, A., Antunes, L., Costa-Santos, C. and Teixeira, A. (2021). The entropy universe. Entropy 23, 222.
Rommel, C., Bonnans, J. F., Gregorutti, B. and Martinon, P. (2021). Quantifying the closeness to a set of random curves via the mean marginal likelihood. ESAIM Prob. Statist. 25, 1–30.
Sen, A. (1973). On Economic Inequality, 1st edn. Clarendon Press, Oxford.
Wei, X., Allen, N. J. and Liu, Y. (2016). Disparity in organizational research: How should we measure it? Behavior Research Methods 48, 72–90.
Yitzhaki, S. (1998). More than a dozen alternative ways of spelling Gini. In [34], pp. 13–30.
Yitzhaki, S. and Schechtman, E. (2013). The Gini Methodology: A Primer on a Statistical Methodology (Springer Series in Statistics 272), 1st edn. Springer, New York.