Introduction
Empires and nation states are usually represented in the social sciences as two distinct types of political organization. The former are associated with the premodern world, while the latter have come to be seen as political forms that are paradigmatic of the modern world and that are, therefore, of primary significance to the disciplines of sociology and political science. The processes to which the emergence of nation states is attributed are also those believed ultimately to lead to the dismantling of empires and, relatedly, to transitions to modernity. For example, while empires have generally been ruled by dynasties which proclaimed for themselves a divine right—or, as in the case of the Mughals, a divine light—justifying that rule, nation states came to be organized around ideas of sovereignty resting in the people. However, the neat story of empires gradually being replaced by nation states is not clear-cut. The exemplar nation states of modernity—Britain, France, and the Netherlands—were established simultaneously with the development of their overseas empires. These nation states did not emerge in the process of replacing existing empires, but rather through the creation of new overseas colonial empires.
In recent work, I have questioned the conceptual coherence of the category of “empire” as it is commonly used within the social sciences [Bhambra 2024]. From the early, magisterial work of Shmuel Eisenstadt [1963] to the comprehensive accounts provided by Michael Doyle [1986], John Darwin [2008], Jane Burbank and Frederick Cooper [2010], and, more recently, Krishan Kumar [2017, 2021], empires have tended to be understood in terms of their shared structural features: primarily as hierarchical, heterogeneous, and organized in relation to vertical lines of “belonging”. Even when differences are acknowledged between empires—for example, in terms of how they employ a politics of difference, their use of intermediaries, the strength of elites, and their differing repertoires of power as regards direct or indirect rule—the similarities in their structures and modes of operation are seen as key to their common designation as empires [Burbank and Cooper 2010]. Where differences are understood to be significant, it is often in terms of the consequences of different colonial practices for the shaping of economic prospects in the modern period. This means that the distinctiveness (or otherwise) of premodern empires from modern overseas empires is often neglected.
Andre Gunder Frank, for example, argues that the “postcolonial” differences in economic success between North America and Latin America can be traced to the differences between their experiences as part of European colonial empires. While North America, he suggests, benefitted from “the transplantation of the progressive institutions of British capitalism”, Latin America was impeded by the establishment of “the regressive institutions of decadent Iberian feudalism” [Frank 1972: 17]. James Mahoney [2003] further explores historical differences between the European colonial experience in countries in Latin America, which, he argues, went on to have an impact upon differences in their economic prosperity in the present. Such arguments have gained renewed impetus with the award of the 2024 Nobel Prize in Economics to Daron Acemoglu, Simon Johnson, and James A. Robinson for their work on the significance of colonialism for long-run economic development. They argue that historical differences in the (European) colonial experience—effectively between settler colonies, which established what they call the “inclusive societal institutions” of what would become new nations, and extractive colonies, which did not—were formative of the significant inequalities in income between countries that exist today [2001, 2002].
While the significance of different colonial practices is acknowledged within such work, these practices are all varieties of European modes of colonialism that occurred within empires that emerged in the modern period. Despite the diversity of these practices, from the settler colonialism that (according to Acemoglu, Johnson, and Robinson [2001, 2002]) produces Neo-Europes, through to the form of extraction typified by the Belgian colonization of Congo, I argue that they nonetheless instantiate a common form of empire, distinct from those forms of empire associated with the premodern world [Bhambra 2024]. In this article, I examine the modes of political and economic governance of two empires: the Mughal Empire of the premodern period, and British colonial rule in India from the time of the East India Company through to direct rule. I look in particular at how these empires dealt with the problem of famine, which, although it is thought to have been prevalent in premodern India, only became endemic there under British rule. While Kumar has argued that despite their differences, “all empires had to deal with many of the same problems” [2017: 7], I suggest that the contrasting ways in which they responded to these problems—here, of dearth and famine—matter in terms of differentiating between them.
Examining the extent and range of differences between the Mughal and British empires enables us to develop a sociological understanding of the distinctiveness of European overseas empires, which sets them apart from other political entities, also called empires. This is important because, hitherto, the comparative study of empires within the social sciences, organized in terms of their political form, has tended to ignore the distinctive political economy of overseas empires. Where the economic dimension is acknowledged, such as in studies looking at path dependence between colonial histories and postcolonial development, the focus is primarily on differences between European colonial practices and the impact of capitalist development upon them. Here, the political form of empire is understood as less relevant. In both versions, the focus is on Europe, European nation states, and European colonial practices. Such practices, however, tend to be elided in standard accounts of capitalism. This contributes to a Eurocentric complacency about the ongoing legacies of such colonial practices, whereby all relevant understandings can be traced back to industrialization and capitalism, rather than to colonialism and colonial political economy [Bhambra 2021].
In this article, I first set out the importance of famine to the argument being made. I then discuss the differences between the Mughal and British empires in terms of the political and economic modes of governance they adopted in their responses to issues of dearth and famine. I do so in order to establish the significance of these distinctions and, thereby, to open up a different understanding of empires themselves.
Famine and Its Contexts
The histories of Britain and India throughout the early modern period are marked by instances of famine and dearth. As Ayesha Mukherjee argues, there were nine periods of famine and dearth in Britain from 1555 to 1757 and seven such periods in India, which occurred very close to one another, “matched almost decade by decade” [2019a: 4]. One of the suggested explanations for this “remarkable parallelism” has been the climate; that is, the association of severe El Niño events with poor agricultural production in Europe and globally. Although El Niño events continued to occur, there are no recorded instances of famine or dearth in Britain (Ireland excepted) after 1757. The fact that they continued unabated and with increased intensity in India suggests that we need to look at what else happened in 1757. This was the year of the Battle of Plassey, in which the British East India Company, under Robert Clive, defeated the Nawab of Bengal, Siraj-ud-Daulah, and began the process of consolidating its hold across the subcontinent. Within a few years, the Company had formally taken over the administrative and tax-collecting functions for the provinces of Bengal, Bihar, and Orissa, which had a combined population of 30 million.
The right to collect tax on behalf of the Mughal emperor soon turned into the Company’s right to claim the revenue itself as private wealth; at this point, the British state sought to establish its own claim to that wealth. From 1767 onwards, the Company was required to pay an annual tribute of £400,000 from its colonial tax revenue to the British state. This income was used to reduce the land tax paid by the propertied class by 25 per cent [Bowen 1991]; it also contributed to changes in state policy with regard to mitigating domestic instances of food scarcity. The transfer of this wealth further meant that there were fewer resources available with which to deal with issues of food scarcity as they arose in the colonized territories. The implications of the transfer of rule from the Mughal Empire to British India via the East India Company will be discussed in more detail across the following sections. Specifically, the article will examine this transformation in the mode of rule as it manifested itself in the state’s relationship to the populations it governed. It will also address the development of forms of political economy in the context of famine, and policies to address food shortages.
Famines have been documented throughout human history and across cultures and societies. They are exceptional events of acute food scarcity that often arise from unexpected weather patterns and the failure of crops; or, relatedly, from ongoing wars and situations of social and political instability that culminate in chronic scarcity and dearth. They are, for the most part, periodic crises as opposed to the normal condition of things, and there is a general expectation that political authorities will respond to them by protecting the populations for whom they are responsible from starvation and disease. Indeed, the legitimacy of the polity is bound up with its ability to address the persistent conditions that lead to famines, as well as its response to the crises of subsistence caused by them. Amartya Sen, for example, has argued that, while famines involve the “sudden collapse of the level of food consumption” [1981: 41], they are not necessarily about there not being enough food to eat; rather, they result from people not having enough food to eat. That is, famines are about social and political issues of distribution and entitlement, and not simply about scarcity.
Sen’s work has done much to extend understandings of famine beyond simple issues of food availability, and scholars such as Amrita Rangasami [1985a, 1985b] have further sought to establish an understanding of it as a process that is not only a biological one that results in death, but also one that is embedded in longer-term political and socio-economic processes. This applies from the famine’s onset, which is marked by dearth, to its final phase, which can culminate in death. Even where food scarcity is attributable to crop failures following what have traditionally been described as natural disasters (but are, increasingly, now seen as the consequence of catastrophic climate change), the social and political contexts of those disasters need to be considered. As well as issues of distribution once a famine has begun, there are precursor issues related to, for example, the maintenance (or not) of modes of irrigation, levels of taxation, and the continuation of customary practices regarding grain reserves, among others. It is the effectiveness—and will—of the government in intervening in the various determinants of famine that enables us to distinguish between those famines that cause hunger and privation and those that kill, as Alex de Waal [1989] notes.
Further, famines rarely affect whole populations, as different groups within the areas concerned vary in their ability to access and, in Sen’s terms, to “command” food. People starve, he suggests, when they are not able “to command food through the legal means available in society” [1981: 45]. This is the basis of one of his best-known claims: that famines have never occurred within functioning democracies and that they are, instead, associated with modes of authoritarian rule. The protective power of political liberty, Sen argues—as embodied in “regular elections, opposition parties, basic freedom of speech and a relatively free media” [2009: 342]—ensures that people are able to hold those governing them to account and thereby require them to mitigate the consequences of famines. He points, in particular, to the fact that the prevalence of famines in India throughout the period of British colonial rule “ended abruptly with the establishment of a democracy after independence” [2009: 342].
Sen’s dating of the end of famines in India to the establishment of democracy after independence assumes, albeit implicitly, that famines had been a constant throughout Indian history in the preceding centuries. While it may be correct that there have never been famines in functioning democracies, democracies are not the only political entities to have managed famines effectively. Nor is democracy an unproblematic descriptor, given the British Empire’s characterization of itself as governed through parliamentary representation while at the same time prohibiting the political representation of Indians within that empire. That is, for much of the period of British colonial rule in India, a period marked by the prevalence of famines, Britain considered itself a liberal democracy.
During this period, the number and intensity of famines—particularly of famines that kill—increased exponentially. In the ninety years of East India Company rule, there were twelve famines, not including periods of severe scarcity, beginning with the 1770–71 famine that resulted in the deaths of 10 million—a third of the population—in Bengal, Bihar, and Orissa [Dutt 1900]. The first forty years of direct rule by the British state, from 1860 to 1900, saw not only a significant increase in the incidence of famines, but also a substantial rise in the number of deaths associated with them. In 1901, the Lancet’s Indian special correspondent estimated that 19 million people had died there between 1890 and 1900, either as a consequence of direct starvation or of the diseases arising from starvation [in Digby 1901: 137–38].
This presents quite a stark contrast with the period prior to British colonial rule, within both the Mughal Empire and the other kingdoms across the subcontinent. In the early 20th century, Alexander Loveday, a British economist who worked for the League of Nations, compiled a list of famines in the history of India from the 3rd century onwards. He stated that major famines in the early modern period appeared to occur “in cycles of fifty years” and that after exceptional periods of drought, “a time of comparative prosperity may be expected, varying in length from forty to fifty years” [1914: 25, 26]. This was not the pattern of major famines within British India, where they were far more regular, a regularity which led to conditions of chronic poverty from which populations were not able to recover in the subsequent years. William Digby [1901], a journalist involved in humanitarian famine relief efforts in India, concurred with Loveday on the frequency of famines in the early modern period, suggesting that there had been about eighteen major famines in India from the 11th century to the middle of the 18th. He further noted that “not one approached in extent or intensity the three great distresses of the last quarter of the nineteenth” [Digby 1901: 122; Bhatia 1967].
This is not to suggest that there had been no famines in the earlier period or that none of these famines had led to mortality, sometimes in catastrophic numbers [Kaw 1996]. The most destructive of all recorded famines in the early modern period, for example, occurred in Gujarat and the Deccan (or Dakhin) in 1630–32 [Habib 1963]. The failure of the rains was followed by plagues of mice and locusts which, in turn, were succeeded by excessive flooding such that, as Irfan Habib [1963] sets out, pestilence killed those who had survived starvation. At least 4 million people were said to have died across the region, with many areas left desolate. Across a similar period, in the last decade of the 16th century, severe subsistence crises in England, caused by the failure of the wheat harvest, had led to mortality rates of between 21 and 26 per cent above trend [Walter and Schofield 1989: 34]. The difference that is being suggested, and that will be elaborated upon throughout this article, is between periodic famines associated primarily with climatic events, including floods, drought, and plagues, and the systematically produced famines resulting from state policy decisions in periods when the issue was not the absolute unavailability of food. I suggest that these differences in state policy reflect the differences between the types of empire with which they are associated.
W. H. Moreland, writing in the 1920s and drawing on the late 19th-century reports on famine within British India by Baird Smith, distinguished between “food-famines” and “work-famines”. He suggested that the famines recounted in the chronicles of 17th-century India were not work-famines, which were characteristic of famines under British rule, but food-famines; that is, these were “times when it was not a question of obtaining the means to pay for food, but of getting food at all” [1923: 205]. As such, he suggests that it is not possible to draw an adequate comparison between famines in the periods of Mughal and British rule. However, even in his own terms, there is a distinction to be made between famines produced as a consequence of there actually not being enough food to eat, and those that occurred because the poor were unable to “command” food that was otherwise available.
In his narrative chronology of famines across the Mughal period, Habib notes the variation in the frequency and intensity of famines in the different regions of the subcontinent; specifically, he observes that throughout this period, Bengal had had “no serious famine on record”, not even after the bad harvests of the 1730s [1963: 109]. Yet, within five years of the East India Company taking over responsibility for the region, it would oversee one of the most devastating famines in recorded human history, in which one third of the region’s population would perish. For reference, the worst subsistence crisis in England had happened 400 years earlier when, during the Great Famine of 1315–17, it is estimated that “half a million people, something like 10 per cent of the population, died” [Walter 2019: 22]. By the late 18th century, however, the fifty-year cycles of major famines had been broken in England (and, by now, in Britain—except in Ireland, as mentioned earlier), but had intensified in British India. Part of the explanation of these differences, as I will go on to discuss, rests on the fact that the Mughal and the British realms were two distinct types of empire. The different logics central to their modes of governance and of political economy—incorporation or extraction, moral or colonial—are, I will suggest, what establishes them as distinct.
The Mughal Period
The Mughal period in India was inaugurated with the victory of the Timurid prince, Babar, over the Lodi sultanate in Delhi in the 1520s. Subsequent rulers gradually extended their rule over much of India, and the Mughal Empire reached its greatest geographical extent with the reign of Aurangzeb in the late 17th century. As Alam and Subrahmanyam relate, “Aurangzeb presided over a sprawling domain that extended well into southern India, besides stretching from the borders of Burma virtually to Central Asia” [2001: 33]. After Aurangzeb’s death in 1707, the remit of the state receded and its authority diminished. While the Mughal state had never been a unitary one—in that, as Alam and Subrahmanyam argue, it “resembled a ‘patchwork quilt’ rather than a ‘wall-to-wall’ carpet”—its political configuration after Aurangzeb’s death was explicitly marked “by the rise of regional states and kingdoms” [2001: 57, 58], indicating patterns of plural sovereignty. Some of these states and kingdoms reasserted the boundaries of regional states that had existed in the pre-Mughal period, whereas others came to be organized around emerging ethnic and religious groups, such as the Marathas and Sikhs, albeit while still nominally under the idea of Mughal sovereignty. The Sikh Empire of Maharaja Ranjit Singh was established across much of Punjab in the 19th century [Atwal 2020]. Its eventual fall and annexation by the British in the 1840s, together with the defeat of what has been termed the Indian Mutiny of 1857, signalled the formal end of the Mughal period of Indian history.
Among the Mughal state’s various achievements over its 300-year history was the creation of its centralized administrative and fiscal system—specifically, the relatively systematic organization of revenue collection it used in relation to agrarian production [Ali 1978; Moosvi 2008]. The organization of agrarian society in India was highly complex, with various forms of stratification. As Tapan Raychaudhuri explains, in some areas the peasantry could be “the owners of the bulk of the agricultural land, while others had occupancy rights as tenants or were landless” [2001: 274]. Peasants usually cultivated the land, and “the state or the intermediaries collected revenue” [Ibid.]. The most significant group of intermediaries consisted of the land-revenue functionaries often called zamindars, a term used “to denote the various holders of hereditary interests, ranging from powerful, independent, and autonomous chieftains to petty intermediaries at the village level” [Hasan 2001: 285]. These chieftains were incorporated into the imperial structures of the Mughal Empire and gained significant benefits from this—both financial and in terms of status. There were various types of land rights, sometimes overlapping and interlocking, and, relatedly, obligations in terms of taxes—“the qanungo, patwari, shiqdar and sazawal” [Kaw 1996: 67]—to be paid to the zamindars and the imperial treasury. The taxes were paid by the peasantry, the cultivators of the land, as agriculture was the primary source of wealth.
This necessarily brief overview of the shape and structure of the Mughal Empire seeks to highlight two key issues. First, the empire was organized in terms of practices of integration and incorporation. As the Mughal Empire extended its geographical reach, it integrated the newly conquered territories into its administrative and fiscal structures. At the same time, the local nobles, princes, chieftains, and others were brought into the imperial framework through, as Alam describes, “a system of lavish jagir assignments and other symbols of rank and authority” [1986: 18]. It was the effective coordination of these complex relationships, he continues, that “determined the existence of the imperial structure and political stability in Mughal India” [1986: 19]. This leads us to the second key issue. The empire was understood to be constituted through the entanglement of relationships across all strata of society and was characterized by the reciprocal, albeit unequal, forms of obligations and solidarity between different groups. As Khondker [1986] argues, the centrality of the village—organized around the joint family system and located within broader connections of caste and clan—to the structure of Indian society is thought to have provided a significant degree of protection against crises of subsistence in the early modern period. The sovereign’s claims over the individual subject were, in turn, justified by a capacious theory of social contract founded on an understanding of social needs [Ali 1971]. Here, the stress would fall on the idea of the social in the contract, in contrast to Western liberal understandings of the individual as the bearer of natural rights.
Concern for public welfare, then, was a central element of the ideology and legitimation of rule under the Mughals; a circumstance that, as I have suggested elsewhere, is typical of empires of incorporation [see, for example, Edgerton-Tarpley 2013; Bhambra 2024]. As Irfan Habib [1998] argues, Abu’l Fazl, the official chronicler of the reign of Akbar, set out his ideas on the nature of sovereignty and state policy in this regard; specifically, that sovereignty served the needs of the secular social order. Habib notes that “Abu’l Fazl appeals to a broad theory of social contract to justify the necessity of political authority” and “assumes two classes of sovereigns, just and unjust” [1998: 332]. In Abu’l Fazl’s discussion of taxation in the Ain-i Akbari, taxes are seen as “wages of protection”; that is, they were paid on the understanding that the king would, in return, maintain social order [Khan 2009]. Social order, here, is organized in terms of “the four ‘essences’ (property, life, honour, religion)” [Habib 1998: 332]. Outside of war, the main causes of the breakdown of social order were instances of food scarcity, dearth, and famine brought about by failure of the monsoon rains. While the emperor could not alter the weather patterns, there was an expectation that the regime would intervene to mitigate consequences such as famine.
Common responses to instances of dearth and famine included exempting cultivators from land and other taxes, establishing a fair price for grain, distributing grain to those in need, placing an embargo on exports of grain, creating food kitchens for the distribution of food, providing employment in public works, and organizing granaries to build reserves [Alam 1986; Khondker 1986]. In addition, as Habib recounts, there was a political and moral stipulation against exploitation set out in the Ain-i Akbari: that “just sovereigns do not take more than what suffices for their task and do not soil their hands by desiring more” [1998: 332]. Of course, the outline of what constitutes a just sovereign does imply that there are also unjust sovereigns, whose practices are excessive. However, there is no indication that these excesses included diverting revenue away from areas of need during periods of scarcity.
As Kaw [1996] argues, this does not mean that the policies enacted were necessarily those that would have been most effective in obviating need. For example, he describes how pre-Mughal policies regulating the course of rivers in Kashmir were designed to avert potential problems of flooding, and not simply to provide relief subsequent to the event. But this reinforces the point that I am making: that relief was a recognized duty on the part of the ruler. For the most part during the Mughal period, there were attempts to intervene actively during times of food scarcity in order to mitigate the effects of dearth and famine. This, Khondker [1986] argues, points to the existence of a moral economy at the level of local society—perhaps to several moral economies in different societies incorporated in the empire—which arose out of agricultural circumstances. Rule in this context did not involve the imposition of similar practices of land “title” and use; rather, the relationships governing such practices were allowed to be both diverse and local. Nonetheless, there was an overarching political understanding of sovereignty, organized around responsibility to and for the people.
This moral economy was to be disrupted by the British reorganization of agriculture, which introduced cash crops and oriented India’s rural economy to the needs of the British national economy. It is common within the literature for scholars to argue, as Khondker himself does, that these issues arise as a result of “the incorporation of a rural economy into the world capitalist system” [1986: 26]. However, the Indian rural economy had long been incorporated into a “world system” of extended trade and had participated in it successfully, to mutual benefit. The difference between this state of affairs and the arrival and political settlement of the British is the extraction of resources from India for the benefit of the colonial metropole. As Eric Stokes argues, “the tide of British policy in India moved in the direction set by the development of the British economy” [1959: xiii]. The consequences of this, and that economy’s colonial nature, will now be addressed.
British Colonial Rule
The establishment of British rule in India was markedly different from India’s incorporation into the earlier Mughal Empire. This was a consequence of the relocation of the country’s political and economic “centre of gravity” outside of its territories, to the British metropole. Whereas the Mughals had made India their home, the British remained foreign, failing to integrate into its social and cultural norms. Further, political rule itself and the formation of economic policies for India emanated from London and were oriented to its interests. For its first ninety years, British colonial rule in India was managed by a Governor-General in India, under the authority of a Court of Directors elected by the shareholders of the East India Company and a Board of Control, both based in London. As Ambirajan sets out, “the decisive policies were laid down in London, and the Government in India had merely to execute them” [1978: 8], a situation that became even more pronounced after the establishment of direct rule by the British Crown in 1858. The government of India was now headed by the Viceroy, based in Calcutta, but the supreme authority was the newly established Secretary of State for India, who sat in the British cabinet and headed the India Office in London. The individuals who formed and executed policy in Britain and in India all “owed their origin, allegiance and interests to Britain” [Ambirajan 1978: 27].
The East India Company was set up as a joint-stock trading company in 1600 in England. For its first 150 years it largely engaged in commercial activities, embedded in politico-military forms, across India under the authority of the Mughal emperors. However, as Stern demonstrates, by the late 17th century it had begun to envision itself “as both a sovereign sea power and a corporate tributary to the Mughal empire” [2008: 254]. It held the right, through its charter from the English (and later, British) monarch, to appoint officials abroad, prosecute offenders, mint money, and conduct diplomacy, which included waging war on non-Christians. Relatedly, the Company held (or sought to hold) firmans from rulers in the Indo-Persian world, which granted rights to trade, and other privileges, within and across those territories [Stern 2008]. It was through the authority granted to it by holding firmans that the East India Company was able to establish settlements in India; these were initially locations for the warehouses and lodgings it needed for its trading activities, although they came, in time, to be the basis on which a more extensive colonization was made possible.
Discussion regarding the value of colonial settlements was organized in terms of their commercial possibilities; specifically, the extent to which they could facilitate the growth of trade, and thereby increase the power of the nation [Ambirajan 1978]. In this way, colonies—and the conquest upon which they relied—were understood as integral to trade and to establishing the nation’s wealth. In time, trade came to be of secondary importance, as the Company sought to establish itself as “a self-sustaining political and military establishment in India founded upon the raising of local revenue” [Stern 2008: 280]. When the Mughal Empire was beset by crises in the 18th century, the Company intensified its military skirmishes with local rulers, gaining a decisive victory in 1757 at the Battle of Plassey. Within ten years, under the leadership of Robert Clive, it had formally taken over the administrative and tax-collecting functions across the provinces of Bengal, Bihar, and Orissa. These revenues were used to extend its reach across the subcontinent and, by the early 19th century, the East India Company had become, according to Stokes, “a purely military and administrative power” [1959: 38].
While the trading activities of the East India Company had furnished significant profits to its shareholders, these were vastly superseded by the scale of revenues under its command after it was granted the diwani, that is, the right to collect revenue. The ability to draw tax and tribute from the populations within its territories transformed the Company from a primarily commercial organization into one that was also concerned with issues of governance. Company discussions about good governance were mainly about extracting the largest amount of revenue possible, with the least amount of effort and expenditure. As Chaudhuri [1960] argues, while initially the machinery of the land-revenue system was largely left in place, the main focus of the Company was to collect as much tax as it could from the land. This meant that if the collectors, out of concern for the welfare of the people among whom they lived, refused to collect tax in times of hardship, then they were replaced by collectors without any connections to the area. As a result, the customary and traditional practices of mitigating or waiving the tax burden in times of food scarcity were no longer accepted by the authorities. The East India Company also mobilized the political power it had gained through the grant of the diwani to prohibit weavers from selling their products on the open market. Instead, it required them to sell only to the Company for the low prices it paid, thus reducing the artisans and weavers to a new level of poverty [Chaudhuri 1960]. This point will be taken up further in the conclusion.
An Emerging Political Economy of Colonialism
While 19th-century Britain has tended to be understood as a parliamentary democracy—at least since the reforms of 1832, which extended the franchise to 7 per cent of men within Britain and explicitly barred women from voting—its rule over colonial territories was authoritarian, even by its own definition. The earlier Mughal regime was claimed to be despotic, and the remedy for this despotism was, according to James Mill, “the submission of the Indian Government to the control of the British Parliament” [Stokes 1959: 68]. The sleight of hand involved here enabled the suggestion that India was governed by British parliamentary democracy, rather than by British despotism. The submission that Mill discusses was not only in terms of government and the administration of justice, but, perhaps most significantly, in the area of political economy as well. When India was initially conquered by the East India Company, it was easier to separate the modes of rule of Britain and India and assume that activities in India had few, if any, political repercussions in the metropole. While this is increasingly contested—especially as East India Company rule was, as Govind [2017] argues, subject to the authority of the British state from the very beginning—my focus here is on the nature of its rule. As Govind [2011] sets out, the Company’s articulation of political economy enabled its despotism at the same time as the framing of its rule as political economy masked that despotism. This device comes to be integral to how the British Empire functions as an “empire of extraction” while appearing to engage in merely commercial activities, otherwise the avowed subject of classical political economy.
The classical school of political economy encompasses the work of figures ranging from Adam Smith to James Mill, David Ricardo, Thomas Malthus, and John Stuart Mill. It is concerned with issues of free trade, population pressures, taxation, and the role of the state. The latter, it was commonly agreed, should be as limited as possible, and the competitive market, organized in terms of the forces of supply and demand, should be responsible for allocating resources. The school’s influence on colonial policy in India and on officials within the Indian administration, as Kate Currie [1991] argues, was significant. The East India College at Haileybury, set up to train those seeking employment in the Indian Civil Service (ICS), appointed Thomas Malthus as its first Chair of Political Economy, a subject that would remain compulsory for all students taking the ICS exams through to the end of the 19th century. As Eric Stokes [1959] explains, the classical political economists’ ideas on government and administration were given free rein in India in a way that would not have been possible in Britain itself. This is perhaps most clearly illustrated in the development of a theory of rent within classical political economy.
The theory of rent was developed around the assumption that the “net produce” of land—that is, the unearned increment—is a kind of surplus. It is neither a payment for labour nor for necessary capital, and, as both Adam Smith and Thomas Malthus argue, should be considered as “an excess of price over cost of production” [Lackman 1976: 290]. As such, it was argued that it could be taken in taxation by the state without negatively affecting the country’s resources or the ability of the population to engage in productive economic activity. Despite this, few political economists ever advocated for the implementation of such taxation within Britain, as they understood that a tax on rent was a tax on landlords, which interfered with their private property rights. Private property was believed to be necessary to economic growth and to be politically central to the establishment of progress and individual liberty; thereby, it was also essential to constitutional government. While the practical influence of classical political economy in Britain was its use as a tool by “the commercial and industrial classes in their campaign to reduce the economic intervention of the State to a minimum” [Stokes 1959: 77], in India, its effect was the exact opposite.
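The doctrine can be stated in a simple accounting form (a schematic reconstruction for illustration, not a formula found in the classical texts themselves). If the value of the gross produce of a plot is $PQ$, the wages of the labour employed are $wL$, and the return on necessary capital is $iK$, then rent is the residual surplus

\[ R = PQ - (wL + iK), \]

and a tax $T \leq R$ was held to fall only on this surplus, leaving payments to labour and capital, and hence productive activity, untouched. It was precisely this reasoning that made rent appear an ideal object of state taxation.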
As land was owned privately in Britain, “the practical and political difficulties in the way of meeting the financial needs of the State out of rent”, Stokes argued, were deemed to be “insuperable” [1959: 77]. The situation in India, however, was seen to be different. The administrators of the East India Company believed, erroneously (and to strategic effect), that the Mughal emperor had been the sole proprietor of the land and thereby entitled to its entire revenue. This, together with a belief in the arbitrary nature of rule during the Mughal period, was the basis of its understanding of the preceding regime as despotic. As the administration moved from collecting revenue for the Mughal regime to appropriating the revenue itself, so discussions about the level and form of revenue collection and the legitimacy of the Company’s right to it came to the fore. These discussions occurred in the context of its self-serving belief that private property did not exist in India and that a new way of organizing the ownership of the land was required [Guha (1963) 1996]. Further, it should also be noted that over half the income at the Company’s disposal came from the collection of revenue associated with the land [Bhattacharya (1971) 2005].
East India Company administrators acknowledged that they themselves held no good title to the land. Alexander Dow, for example, noted that while the provinces were held “in appearance, by a grant from the present emperor”, in reality, they were only maintained “by the right of arms” [quoted in Guha (1963) 1996: 25]. As such, what was necessary was to establish the legitimacy of dominion beyond that which was provided through the act of conquest. This was done by developing economic theories that saw the territory governed as if it were a Company estate. One element of this was to define the revenue that was collected as “rent”, rather than a “tax”. Indeed, James Mill argued that the British state should consolidate its position in India as the sole landlord and establish the immediate cultivators as its tenants, working the land on lease. The theory of rent, then, came to supplant the criterion of assessment that had been used by preceding regimes in India in terms of determining the correct level of taxation. The idea that rent could and should be wholly absorbed through state taxation came to predominate in the surveys and assessments undertaken by colonial officials seeking to determine the extent of taxation to be imposed on the cultivators. The effect of this, as Stokes argues, “was to set up a highly authoritarian conception of the rights of the State” [1959: 95]; it further refuted the idea that any limitations could be imposed upon that state, including on the standard of assessment, through recourse to custom and tradition. Govind [2011] goes further, arguing that, in effect, Mill here is advocating that the East India Company establish itself according to what he had otherwise presented as the precepts of Oriental despotism.
As Travers sets out, even if the initial constitutional structure of British India could be seen as despotic, by the late 18th century, with the creation of a separate judicial branch of government, there was “at least the semblance of an independent judiciary and a regular government” [2009: 153]. This, Company administrators claimed, marked a break from the despotism of the earlier Mughal emperors and the nawabs, under whom rule had been based on arbitrary principles, as it was now organized in terms of “a discourse of commercial improvement under the benevolent stewardship of enlightened rulers” [Travers 2009: 157]. These debates around the status of land, how best to manage it, and the consequences for colonial governance and the derivation of revenue over the longer term came to define British colonial rule in India. Calling land revenue “rent” rather than taxation also enabled East India Company administrators, and later the British government, to seek to legitimize their rule in India by suggesting that the tax burden upon the colonial population was very light and, indeed, preferable to the taxation practices of the despotic regimes that had preceded it. As these debates were couched in the discourse of classical political economy, this became a way, as Govind [2011] argues, of occluding not only the workings of colonial rule central to British administration in India, but also the significance of colonial rule to the national state itself [Mukherjee 2010].
The land tax burden in India under East India Company rule, as noted by Edmund Burke at the time—discussed in Travers [2004]—was double the rate that was enforced in Britain. Within two years of the East India Company’s having obtained the right to collect tax in Bengal, Bihar, and Orissa, the British state sought to establish its rights to that revenue. While, as Travers points out, this attempt failed, from 1767 onwards the Company was nonetheless required “to pay an annual tribute to the British crown of £400,000” [2004: 525]. This in turn enabled the government in Britain to reduce the land tax burden domestically by 25 per cent, thereby placating the landed elite, who were central to British politics. The responsiveness of the British government to one of its key constituencies, the landed elite, is often seen as prefigurative of the constitutional reforms central to its self-identity as a liberal democracy. That it placated its domestic elite through its acceptance of the despotism of the East India Company’s practices has rarely been regarded as significant in its own terms. Nor has there been much systematic discussion, since Burke first raised such concerns, of the distorting effects of this unearned income—or the despotic practices from which it was obtained—upon the nature of government in Britain [see Mehta 1999]. This, as I will go on to argue in the conclusion, is central to an understanding of the British Empire as an empire of extraction.
The classical theory of rent, while it was never fully realized in practice, nonetheless informed colonial policy in India through into the early 20th century. There was a foundational assumption that, as the state was able to derive its revenue from “rent”, the country’s economy and productive capacity were otherwise left unaffected. This assertion continued to be repeated even as scholars and political figures began to question the association between the extraction of revenue and the increasing number and intensity of famines under British colonial rule. The first famine under British rule in India occurred in 1770 under the administration of John Cartier, Governor of Fort William in Bengal. It affected the population of 30 million living in Bengal, Bihar, and Orissa and, according to Warren Hastings, the subsequent governor-general, killed a third of that population, that is, 10 million people. Alongside the deaths caused by starvation and the diseases consequent to starvation, there was significant population decline in the affected areas, as people migrated in search of food and to avoid being subject to intensified demands for increased taxation. The Najai tax was imposed on the population that survived the famine and was, as Hastings wrote in 1772, intended “to make up for the loss sustained in the rents of their neighbours who are either dead or have fled the country” [cited in Chaudhuri 1949: 240 fn4]; that is, the living had to pay the taxes owed by the dead.
Despite the deaths of a third of the population, and the fact that much of the land had been left uncultivated as a consequence, the East India Company collected more revenue in 1770–71 than it had in all previous years [Chaudhuri 1960; Damodaran 2007]. Had the theory of rent been applied consistently, this situation would have been understood to reduce the surplus that constituted rent, and the extent of taxation should have been reduced commensurately. The fact that this did not happen, and that taxation indeed increased—in the city of “Moorshedabad” (now known as Murshidabad), for example, Loveday notes that the government collected “Rs. 25,77,428 more… in 1771 than in 1769” [1914: 33]—indicates that economic theories were used as needed to justify extraction, with little to no accountability in the process. While there was some charitable distribution of grain and rice and a limited remission of taxes, this was not sufficient to alleviate distress or prevent famine deaths. Further, the local population contributed over four times as much as the government for the relief of the starving population [Loveday 1914: 32–33]. Indeed, local government officials were accused of requisitioning the grain of farmers, and even of taking the seed that would be needed for the next harvest [Chaudhuri 1960; Damodaran 2007]. As Loveday notes, it is “little exaggeration to say that the Company was more concerned with the dividends of its shareholders than with the lives of those from whom those dividends were drawn” [1914: 30].
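The inconsistency can be made explicit in the terms of the schematic rent identity sketched earlier (purely as an illustration of the logic, not as figures from the colonial record). If a third of the cultivating population has died and much land lies fallow, gross produce falls from $PQ$ to some $PQ' \ll PQ$, and the taxable surplus falls with it:

\[ R' = PQ' - (wL' + iK') < R. \]

The doctrine therefore implies that collections should satisfy $T' \leq R' < R$; collections in 1770–71 instead rose above all previous years, consistent with the theory serving as a justification for extraction rather than as a constraint upon it.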
This pattern of prioritizing revenue over the distress of the population was to be repeated ad infinitum during the period of British colonial rule. It became particularly stark after direct rule was instituted in 1858. During the famine of the mid-1870s, for example, Viceroy Lytton, in his instruction to a local administrator, Richard Temple, stated: “we must plainly admit that the task of saving life, irrespective of the cost, is one which it is beyond our power to undertake” [quoted in Ambirajan 1978: 93]. Indeed, every time there was dearth and impending famine, calls would be made to introduce measures such as price controls or an embargo on the export of grains, and each time, as Ambirajan notes, “the principles of political economy were cited to justify a policy of non-interference” [1978: 72]. This non-interference in the mode of relief, however, was matched by extensive state interference in the traditional and customary practices engaged in by the population to mitigate the effects of food scarcity. As Damodaran [2007] argues, in earlier periods of scarcity, the affected populations could make use of the abundance of forest produce in order to avert serious crises. Under British rule, however, the exploitation of forest areas intensified, and common lands were increasingly made private and put out of bounds for the local population. Districts that had seen severe droughts and dearth across the early 19th century without these developing into famines would be subject to serious famines by the latter part of that century. As Damodaran [2007] puts it, local food strategies that had been developed over centuries and that had protected populations from the consequences of scarcity were destroyed by colonial policies.
Conclusion: Towards a Colonial Political Economy
Food scarcity has been a recurrent feature of human societies, but it has not always led to famines that kill. Indeed, famines that kill have been exceptional phenomena within most political systems. One exception is that of European overseas empires, which, over time, normalized and systematized the conditions of food scarcity and hunger that are the prelude to mass starvation and deaths. In India, as described above, chronic poverty was produced through processes of colonial drain that Dadabhai Naoroji [1901], among others, argued were the result of significant and systematic transfers of wealth from India to Britain. Excessive taxation, as well as over-assessment in the levels of taxation, impoverished the population over two centuries. This exacerbated both the intensity and frequency of famines, especially as colonial populations were denied any mitigation funds from the resources collected. I have set out the argument here in relation to British colonial rule in India, but I suggest that it also holds in relation to the Dutch and French colonial empires across the 19th and 20th centuries [see Fernando 2010; Slobodkin 2023].
The asymmetrical relationship created by the taxation of a population in the absence of representation is, I suggest, one of the defining features of overseas colonial empires. While the absence of political representation was also a feature of empires in the premodern period, these were, nonetheless, organized in terms of modes of moral economy that sought to regulate, more or less effectively, the relationship between rulers and the peasantry. The famines that occurred within such empires, as discussed above in relation to the Mughal Empire, tended to be associated with scarcity that led to hunger rather than to death. The famines that produced death were episodic and, in contrast to those that occurred within overseas colonial empires, were neither systematic nor associated with the operation of state policies. In this context, those famines that systematically produced death point to something distinctive about the new economic order that was being produced through colonialism.
Most social scientific accounts, however, represent the emergent economic order as constituted by capitalist social relations rather than those of colonialism and, more specifically, the development of overseas colonial empires. In his early writings, Karl Marx, for example, had identified a new form of poverty associated with the incipient capitalist political economy [Lubasz 1976]. This, he argued, was based on the exclusion of people from the established political order—the unincorporated poor—and was in the process of being generalized across Europe as a consequence of emerging capitalism. It was capitalism that, for Marx, constrained political interventions by the state to improve the conditions of the poor in Europe. However, as I have argued above, the production of death by famine in India was not a consequence of economic imperatives, but of political rule by a colonial and extractive state. Further, the colonial resources appropriated by the metropolitan national state were then used to mitigate poverty within Europe. In this sense, there is no real distinction to be made between “neo-Europes” and extractive colonies. All European overseas colonial empires were extractive, operating for the (asymmetrically expressed) benefit of domestic populations.
My primary focus in this article has been to examine the significant differences between the rule of the Mughal Empire and British colonial rule in India. This has enabled me to make a distinction between forms of empire—those of premodern territorial contiguity and those of modern colonial overseas empire. My purpose has also been to distinguish the forms of moral economy associated with each, and to illustrate them through respective responses to dearth and famine. In some ways, this reflects the familiar distinction between moral and political economy that is associated with the modern transition to a capitalist market economy. However, I have also shown that the latter is integrally bound up with colonial practices that determine both forms of appropriation and distribution and that cannot be understood in standard market terms. What I am setting out, then, is a sociological reconstruction that distinguishes among empires as a means of understanding the central role of colonialism, and the political economy integral to that system, in the making of the modern world.
Acknowledgements
I would like to acknowledge the Leverhulme Trust for its support through the award of a Major Research Fellowship on “Varieties of empire, Varieties of colonialism”. I would also like to thank John Holmwood, Rahul Govind, and the anonymous reviewers for comments that helped to clarify the arguments being made here.