
Black holes as tools for quantum computing by advanced extraterrestrial civilizations

Published online by Cambridge University Press:  16 October 2023

Gia Dvali
Affiliation:
Arnold Sommerfeld Center, Ludwig Maximilians University, Theresienstraße 37, 80333 Munich, Germany Max Planck Institute for Physics, Föhringer Ring 6, 80805 Munich, Germany
Zaza N. Osmanov*
Affiliation:
School of Physics, Free University of Tbilisi, 0183 Tbilisi, Georgia E. Kharadze Georgian National Astrophysical Observatory, 0301 Abastumani, Georgia
Corresponding author: Zaza N. Osmanov; Email: [email protected]

Abstract

We explain that black holes are the most efficient capacitors of quantum information. It is thereby expected that all sufficiently advanced civilizations ultimately employ black holes in their quantum computers. The accompanying Hawking radiation is democratic in particle species. Due to this, the alien quantum computers will radiate in ordinary particles, such as neutrinos and photons, within the range of potential sensitivity of our detectors. This offers a new avenue for the search for extraterrestrial intelligence, including civilizations entirely composed of hidden particle species interacting with our world exclusively through gravity.

Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

The search for extraterrestrial intelligence (SETI) is one of the longstanding problems of humanity. It is clear that any technological civilization most probably has certain technosignatures that might be potentially detectable by our facilities. The first SETI experiments started in the last century in the framework of radio listening to stars (Tarter, Reference Tarter2001). Nowadays, the most advanced instrument of this kind is the Five-hundred-meter Aperture Spherical Telescope (FAST), which is dedicated to observing the sky, and one of its missions is to search for radio messages from extraterrestrial civilizations (Lu et al., Reference Lu, Lee and Xu2020). In 1960, Freeman Dyson proposed the revolutionary idea of searching for megastructures visible in the infrared spectrum (Dyson, Reference Dyson1960). He proposed that if an advanced alien civilization were capable of constructing a spherical shell around its host star to utilize the star's total emitted energy, the megastructure, having a radius of the order of 1 astronomical unit (AU), would be visible in the infrared spectral band. This idea has been revived thanks to the discovery of Tabby's star (Boyajian, Reference Boyajian2016), characterized by flux dips of the order of $20\%$, automatically excluding the possibility of a planet shading the star. This stimulated further research, and a series of papers have been published considering Fermi's paradox (Osmanov, Reference Osmanov2021a), planetary megastructures (Osmanov, Reference Osmanov2021b), the stability of Dyson spheres (DS) (Wright, Reference Wright2020), the extension to hot DSs (Osmanov and Berezhiani, Reference Osmanov and Berezhiani2018, Reference Osmanov and Berezhiani2019), and megastructures constructed around black holes (Hsiao, Reference Hsiao2021), white dwarfs (Zuckerman, Reference Zuckerman2022) and pulsars (Osmanov, Reference Osmanov2016, Reference Osmanov2018).

Since the search for technosignatures is complex, the best strategy for SETI is to test all possible channels. In the light of this paradigm, it is worth noting that, on average, the solar-type stars in the Milky Way (MW) are much older than the Sun. Therefore, any detected technosignatures would likely belong to advanced alien societies.

It is natural to assume that the technological advancement of any civilization is directly correlated with the level of its computational performance. This is fully confirmed by our own history. Here, we wish to point out that there exist certain universal markers of computational advancement which can be used as new signatures in SETI.

In particular, in the present paper, we wish to argue that any highly advanced civilization is expected to employ black holes for the maximal efficiency of their quantum computations. The optimization between the information storage capacity and the time-scale of its processing suggests that a substantial fraction of the exploited black holes must be the source of highly energetic Hawking radiation.

This opens new avenues for SETI. It also provides us with new criteria for constraining the parameters of highly advanced civilizations. It is worth noting that the role of black holes as attractors of alien civilizations has been considered by Vidal (Reference Vidal2011). Introducing the transcension hypothesis, Smart (Reference Smart2012) proposed that any highly advanced civilization tends towards the so-called ‘inner space’, i.e. a black-hole-like destination. Both papers discuss the role of black holes in advanced alien societies, but from a perspective different from that of our paper.

The logic flow of the paper is as follows. We first introduce the current framework of fundamental physics and justify why the most basic laws of quantum theory and gravity must be shared by all extraterrestrial intelligence (ETI). This applies even to civilizations that may be entirely composed of yet undiscovered elementary particle species and may experience some unknown interactions.

We then argue that for all civilizations, regardless of their composition, black holes are the most efficient storers of quantum information. This does not exclude the parallel use of other hypothetical objects which, within their particle sector, maximize the information storage capacity, the so-called ‘saturons’ (Dvali, Reference Dvali2021a). Following that paper, we explain that among all such objects, black holes are the most efficient and universal capacitors of quantum information. The ultimate physical limits to computation, with reference to black holes, have been studied by Lloyd (Reference Lloyd2000), but that work concerned the speed of actual computing, which we do not touch upon. Moreover, it was not concerned with quantum field theory (QFT) validity issues, and its black hole discussion is less stringent than the actual situation. For example, during the half-decay time, the black hole (BH) qubits each perform only an order-one number of operations. This is sufficient for retrieving the information stored in these qubits for further processing. We have cross-checked all our statements against the actual microscopic description of a black hole as an N-graviton quantum critical state, as strongly suggested by the current understanding of gravity.

Our conclusions are based on recent advancements in the development of a microscopic theory of a black hole (Dvali et al., Reference Dvali, Flassig, Gomez, Pritzel and Wintergerst2013; Dvali and Gomez, Reference Dvali and Gomez2013, Reference Dvali and Gomez2014) as well as on understanding the universal mechanisms of enhanced information capacity (Dvali, Reference Dvali2017, Reference Dvali2018a, Reference Dvali2018b, Reference Dvali2018c; Dvali et al., Reference Dvali, Michel and Zell2019, Reference Dvali, Eisemann, Michel and Zell2020; Dvali and Panchenko, Reference Dvali and Panchenko2016) and using them for ‘black hole inspired’ quantum computing in prototype systems (Dvali and Panchenko, Reference Dvali and Panchenko2016).

Especially important is the knowledge that among objects saturating the universal quantum field theoretic bound on microstate entropy, black holes possess the maximal capacity (Dvali, Reference Dvali2021a).

Distilling the above knowledge, we argue that black hole-based computer technologies represent universal attractor points in the advancement path of any sufficiently long-lived civilization.

Relying solely on the laws of physics that must be shared by all ETI, we draw some very general conclusions about their usage of black hole computing. The optimization of information storage capacity and the time of its retrieval suggests that ETI likely invest their energy in manufacturing a high multiplicity of microscopic black holes, as opposed to a few large ones. Correspondingly, their computers are likely to produce Hawking radiation of high temperature and intensity.

Another source of radiation of comparable frequency is expected to come from the manufacturing process of black holes. As we explain, this process likely requires high-energy particle collisions.

We give some estimates of what the above implies for SETI. In particular, considering the IceCube instrument (the most efficient neutrino observatory), based on its sensitivity and the observed very high energy (VHE) neutrino flux in the energy domain around 40 TeV, we conclude that this observatory can detect emission from the black holes of advanced civilizations if their numbers lie in the following intervals: $2.5\times 10^3\leq N^{\nu }_{_{\rm II}}\leq 4.2\times 10^4$, $2\times 10^6\leq N^{\nu }_{_{\rm III}}\leq 1.4\times 10^8$.

Finally, we discuss the possibility that nature houses a large number of hidden particle species interacting with us via gravity. We point out that because of the known universal relation between the number of species and the mass of the lightest black holes (Dvali, Reference Dvali2010a; Dvali and Redi, Reference Dvali and Redi2008), the existence of many species makes black hole manufacturing more accessible for ETI.

This applies equally to our civilization. In this light, a possible creation of microscopic black holes at our particle accelerators would reveal powerful information about the computational capacities of ETI.

The paper is organized as follows: in sections ‘The framework’, ‘Optimization of information storage’, ‘Why black holes?’, ‘Closer look at black holes’, ‘Competing devices: saturons’, ‘Can ETI use other saturons instead of black holes?’, ‘Some remarks on black hole manufacturing technology’, ‘Remarks on optimization of information processing’ and ‘Time-scales and messengers’, we examine basic concepts of quantum computing using black holes, in section ‘Some general estimates’ we discuss the observational features of such black holes, in section ‘ETI from dark side’ we discuss the hidden sector of particles and in section ‘Summary’ we summarize our findings.

The framework

Quantum physics and elementary particles

Our current framework for describing nature at a fundamental level is based on quantum physics. This framework passes all the tests, both theoretical and experimental, available to our civilization. The experiments prove that the quantum laws work down to the shortest distances directly probed at human-made accelerators. In particular, the scale $\sim 10^{-17}$ cm has been reached at the Large Hadron Collider (LHC) at CERN. This extraordinary resolution is achieved in particle collisions at around TeV energies.

In addition, the predictions obtained by theoretical extrapolations of the quantum framework to much higher energies are fully consistent with all the existing data. A classic example is provided by quantum chromodynamics (QCD), the theory of the nuclear forces. QCD is an asymptotically free quantum field theory, self-consistently valid at arbitrarily short distances/high energies.

This success by no means implies that our knowledge of fundamental forces is complete. Certainly, new interactions can (and hopefully will) be discovered by experiments at higher energies and/or higher precision. Many such interactions have been hypothesized and studied. These are motivated by various open questions, such as the origin of dark matter, early cosmology, unification of forces and so on. For none of these extensions is there even slight evidence for non-applicability of the basic laws of quantum physics. Even string theory, which theoretically culminates the idea of unification of all known interactions, fully relies on the laws of quantum physics.

Of course, the particular realizations of the quantum framework may encounter some computational difficulties in certain regimes. This happens, for instance, when the interactions among the elementary particles become strong. In such a case, the perturbation theory in a given coupling constant may break down and one needs to resort to non-perturbative techniques such as, for example, lattice simulations. None of these computational obstacles bears any indication of the invalidity of the basic concepts of quantum theory. Rather, they only signal the need for improvement of computational tools without going outside the quantum framework.

In the light of all the above, in our analysis we shall assume the validity of the laws of quantum physics for ETI.

Gravity

The second powerful factor is gravity. At the classical level, the gravitational interaction is described by Einstein's general relativity (GR). Again, this is a remarkably successful theory, tested by direct observations within an impressive range of distances, $10^{-1}$–$10^{28}$ cm. In addition, various successes of the early cosmology suggest that the validity domain of GR extends to much shorter distances.

At the quantum level, the excitations of the gravitational field are gravitons, massless particles of spin 2. All the classical interactions of macroscopic sources can be derived from the quantum theory in the form of exchanges of virtual gravitons, in an expansion in series of Newton's constant G.

The unique property of graviton is that it is sourced by the energy momentum tensor. This feature is universal and accounts for all contributions including the energy momentum of the graviton itself. Due to this property, the gravitons self-gravitate. In other words, any graviton acts as a source for others.

The quantum states of isolated gravitons have never been tested directly in lab experiments. This is explained by the extraordinary weakness of their coupling at laboratory scales. For example, the interaction strength of a graviton of approximately centimetre wavelength is suppressed by $\sim 10^{-66}$ (Dvali and Gomez, Reference Dvali and Gomez2013). The strength of the classical gravitational field produced by macroscopic sources is a collective effect of a large number of gravitons. This is why it is much easier to detect the classical effects of gravity as opposed to its quantum manifestations.

The story however is scale-dependent. In the quantized version of Einstein gravity, the interactions among the individual gravitons are weak at distances larger than the Planck length:

(1)$$L_P \equiv \sqrt{\hbar G/c^3},\; $$

where $\hbar$ is the reduced Planck constant and c is the speed of light. The Planck length is approximately $10^{-33}$ cm. At such separations, the characteristic momentum-transfer among individual gravitons is of order the Planck momentum, $\sim M_P c$, where

(2)$$M_P \equiv {\hbar \over c L_P}$$

is the Planck mass. The quantum gravitational effects become strong at this scale. Due to this, $L_P$ is commonly considered to be the ultimate cutoff of the Einstein theory of gravity, viewed as an effective quantum field theory.

However, in any extension of pure Einstein gravity, the actual length-scale, $L_\ast$, at which quantum gravity becomes strong is longer than $L_P$. In general, if a theory includes $N_{\rm sp}$ distinct elementary particle species, we have (Dvali, Reference Dvali2010a; Dvali and Redi, Reference Dvali and Redi2008)

(3)$$L_\ast = \sqrt{N_{\rm sp}} L_P.$$

The corresponding scale of momentum-transfer, $M_\ast c$, is lowered accordingly to

(4)$$M_\ast c = {M_Pc\over \sqrt{N_{\rm sp}}}.$$

This scale is usually referred to as the ‘species scale’.

The scale $L_\ast$ imposes an absolute lower bound on compactness. In particular, the localization radius of a quantum information bit is bounded by $L_\ast$ (Dvali and Gomez, Reference Dvali and Gomez2009).

The currently known elementary particle species include the degrees of freedom of the standard model (quarks, leptons, Higgs and gauge bosons) and the graviton. Counting all physical polarizations, this amounts to $N_{\rm sp} \sim 100$. Correspondingly, the theory-independent bound on the compactness cutoff is $L_\ast \gtrsim 10 L_P \sim 10^{-32}$ cm.
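As a quick numerical illustration, equations (1)–(4) can be evaluated directly (a minimal Python sketch of ours, using standard values of the constants; it is not part of the original analysis):

```python
# Evaluate eqs. (1)-(4): Planck length/mass and the species scale.
import math

hbar = 1.055e-34   # reduced Planck constant, J s
G    = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m s^-1

L_P = math.sqrt(hbar * G / c**3)      # eq. (1): Planck length
M_P = hbar / (c * L_P)                # eq. (2): Planck mass

N_sp   = 100                          # known species, counting polarizations
L_star = math.sqrt(N_sp) * L_P        # eq. (3): species length scale
M_star = M_P / math.sqrt(N_sp)        # eq. (4): species mass scale

print(f"L_P    ~ {L_P * 100:.1e} cm")      # ~1.6e-33 cm
print(f"L_star ~ {L_star * 100:.1e} cm")   # ~1.6e-32 cm, as quoted in the text
```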

Universal criteria obeyed by ETI

Putting all this knowledge together, we cannot assume that an advanced ETI necessarily shares its elementary particle composition with us. In particular, some ETIs may originate from entirely different particle species, interacting with us only gravitationally.

In the light of the above, we shall make the following (maximally conservative) assumption:

We share with ETI the laws of quantum physics and gravity.

One consequence of this is that the physics of black holes is common, irrespective of which ETI manufactures them. This shall play a crucial role in our idea of SETI.

Optimization of information storage

The two essential ingredients of any type of computation are hardware and software. We shall not attempt to make any assumptions about the software used by the advanced ETI. The power of programming and the sophistication of their algorithms are most likely way beyond our imagination.

However, since the laws of quantum physics and gravity are common, we can draw certain conclusions about the limiting capacities of their hardware. We shall parameterize these limits according to certain well-defined characteristics. The main features we will be interested in are the efficiencies of quantum information storage and of its retrieval. These parameters are not paid much attention at the present stage of development of quantum computing by our civilization, due to more urgent technological issues. However, they must become crucial for any civilization that is able to approach the limiting capacities.

In order to understand what these limits imply, we proceed in two steps. First, in the current section, we define what it means for a generic device to have an enhanced capacity of information (memory) storage and what the underlying mechanism behind it is. That is, we shall reduce the feature of enhanced information storage capacity to its bare essentials, borrowing the results of a series of papers (Dvali, Reference Dvali2017, Reference Dvali2018a, Reference Dvali2018b, Reference Dvali2018c; Dvali et al., Reference Dvali, Michel and Zell2019, Reference Dvali, Eisemann, Michel and Zell2020). The reader can find a self-contained summary and discussion in Dvali et al. (Reference Dvali, Eisemann, Michel and Zell2020).

Next, we shall explain that among all possible hypothetical devices that saturate the information storage capacity, the black holes are the most efficient ones (Dvali, Reference Dvali2021a).

In general, the quantum information is stored in a quantum state of the system. These states are usually labelled by the excitation levels of a set of N elementary degrees of freedom. Their choice is a matter of convenience depending on the system parameters, available technology and the computational task. The choice may range from atomic spin states, for less advanced ETI, all the way to graviton excitations that can be used by highly advanced civilizations. Regardless of their particular nature, it is customary to describe them as quantum oscillators in the number representation. Following the terminology of Dvali (Reference Dvali2017, Reference Dvali2018a, Reference Dvali2018b, Reference Dvali2018c) and Dvali et al. (Reference Dvali, Michel and Zell2019, Reference Dvali, Eisemann, Michel and Zell2020), we shall refer to these as the ‘memory modes’.

The simplest mode is a qubit, a quantum degree of freedom that can exist in two basic states. These states can be labelled as the occupation number eigenstates |0〉 and |1〉 of the corresponding quantum oscillator with a number operator $\hat {n}_j$, and j = 1,  2,  …,  N is the mode label.

Correspondingly, the set of N qubits produces $n_{\rm st} = 2^N$ basic states. These represent the tensor products of individual basis states of N distinct memory modes, $\vert n_1,\; n_2,\; \ldots,\; n_N \rangle \equiv \vert n_1 \rangle \otimes \vert n_2 \rangle \otimes \ldots \otimes \vert n_N \rangle$, where $n_j = 0$ or 1 for each j. In short, they count all possible sequences of 0-s and 1-s, such as $\vert 0,\; \, 0,\; \, \ldots ,\; \, 0 \rangle ,\; \, \ \vert 1,\; \, 0,\; \, \ldots ,\; \, 0 \rangle ,\; \, \ldots ,\; \, \vert 1,\; \, 1,\; \, \ldots ,\; \, 1 \rangle$. We shall refer to these states as the ‘basic memory patterns’. They form a Hilbert (sub)space, which we shall call the ‘memory space’.

The system can exist in an arbitrarily entangled state of the memory modes. A typical entangled state would represent a linear superposition of order-one fraction of n st basis states. However, there can exist the maximally entangled states of special types, such as the generalized Einstein-Podolsky-Rosen (EPR) state, ${1}/{\sqrt {2}}( \vert 0,\; \, 0,\; \, \ldots ,\; \, 0 \rangle + \vert 1,\; \, 1,\; \, \ldots ,\; \, 1 \rangle )$.

As a measure of the information storage capacity of the system, it is convenient to use the microstate entropy. As usual, we define it as the logarithm of the dimensionality of the memory space:

(5)$$S = k_B\ln( n_{\rm st}) .$$

We shall focus on two important characteristics of the efficiency of a quantum memory device: the energy efficiency and the compactness.

The first one is determined by the energy gap of qubits, $\epsilon$. This also determines the total energy gap occupied by all $n_{\rm st}$ memory states, which we denote by ΔE.

For example, if for each degree of freedom the states |0〉 and |1〉 are gapped by $\epsilon$, the gap spanned by the memory space will be $\Delta E = N\epsilon$.

Another important parameter is the radius of localization, R, of the quantum information storing device. Ordinarily, the compactness of the system increases the gap. For example, consider a qubit realized by a relativistic particle, say a photon, localized within a box of size R. The two basis states can, for example, be chosen as the states with one and zero photons. If the interactions of the photon inside the box are weak, the minimal energy gap due to quantum uncertainty is $\epsilon \sim c {\hbar }/{R}$. Consequently, the localization of N such qubits within the same box will result in a basic energy cost $\Delta E \sim N{\hbar c}/{R}$.

The first necessary requirement fulfilled by a device of enhanced memory capacity is to minimize the gap of the memory qubits below the ‘naive’ quantum uncertainty:

(6)$$\epsilon \ll {\hbar c\over R},\; $$

while maximizing their number N (equivalently, S).

The crucial point is that the two are not independent: for a system of given compactness R, there exists an absolute upper bound on S imposed by unitarity (Dvali, Reference Dvali2021a). We shall discuss this bound later.

At the same time, the energy gap of a qubit $\epsilon$ sets the lower bound on the time-scale required for processing the information stored in that qubit:

(7)$$t_q = {\hbar\over \epsilon}.$$

In particular, this is the minimal time required for extracting the information from the qubit. For example, (7) is the minimal time required for detecting a photon of energy $\epsilon$.

Notice that $\epsilon$ stands for an effective energy gap in which the contributions due to interactions with the environment, with other qubits and other parts of the device, are already taken into account.

The expression (7) shows that there is a tradeoff: cheap storage of information requires a longer time for its retrieval. This shall play an important role in estimating the computational parameters of ETI.

At the earliest stages of the development of quantum computers, as is currently the case with our civilization, the pressing hardware issues are the problems of decoherence. The questions of the energy efficiency of qubits, posed above, are secondary and receive little attention. This makes sense, since even for highly optimistic numbers of memory qubits, the energy costs due to the minimal quantum uncertainty are insignificant.

For example, for $N \sim 10^{10}$, the energy gap of the memory space for a modern-technology information storing device of size R ~ cm is only $\Delta E \sim 10^5$ eV. This is less than the rest energy of a single electron. Obviously, this is negligible compared to the rest energy of a typical supporting device that consists of a macroscopic number of atoms.
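This estimate is easy to reproduce (our own one-liner sketch; the constant is simply $\hbar c$ expressed in eV cm):

```python
# N ~ 1e10 weakly interacting qubits in a box of R ~ 1 cm: total gap of the
# memory space, Delta_E ~ N * hbar*c/R, versus the electron rest energy.
hbar_c = 1.973e-5          # hbar*c in eV cm
N, R = 1e10, 1.0           # number of qubits, box size in cm

eps = hbar_c / R           # minimal gap per qubit from quantum uncertainty
dE  = N * eps              # gap spanned by the memory space
print(f"eps ~ {eps:.0e} eV, Delta_E ~ {dE:.0e} eV, m_e c^2 ~ 5e5 eV")
# -> eps ~ 2e-05 eV, Delta_E ~ 2e+05 eV: indeed below a single electron mass.
```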

However, the amount of quantum information processed by an advanced civilization must require incomparably larger values of N. Therefore, an advanced civilization that cares about the compactness and the energy efficiency of its computations must find a way of optimizing the information storage capacity of the system. That is, it has to manufacture devices that maximize the microstate entropy per given size of the system. Basically, the goal is to maximize S for a given R.

Let us now explain why, for achieving the above goal, sufficiently advanced ETIs are expected to use black holes.

Why black holes?

Black holes exhibit an extraordinary capacity of information storage and processing. For various comparisons, it is illuminating to take the human brain as a reference point (Dvali, Reference Dvali2018b). For example, a black hole of the mass of a human brain (~kg) has a million times larger memory capacity, while being only $\sim 10^{-25}$ cm in size and having an operational time of memory read-out $t_q \sim 10^{-19}$ s. This is impressive, but are they the optimal tools?
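The quoted figures follow directly from equations (12) and (14) given below; here is a minimal numerical check (our sketch, not part of the original text):

```python
# A black hole of brain-like mass M ~ 1 kg: Schwarzschild radius, entropy
# in units of k_B (eq. (12) below) and read-out time t_q (eq. (14) below).
import math

hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8
M = 1.0                                # kg

R   = 2 * G * M / c**2                 # Schwarzschild radius
L_P = math.sqrt(hbar * G / c**3)
S   = math.pi * R**2 / L_P**2          # microstate entropy, S/k_B
t_q = S * R / c                        # information read-out time

print(f"R ~ {R * 100:.1e} cm, S/k_B ~ {S:.1e}, t_q ~ {t_q:.1e} s")
# -> R ~ 1.5e-25 cm, S/k_B ~ 2.7e16, t_q ~ 1.3e-19 s, as quoted above.
```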

In order to classify black holes as the ultimate devices used in quantum computing by advanced ETIs, the following set of questions must be answered:

  • How unique are black holes in their ability of information storage?

  • What are the general mechanisms behind this ability?

  • Can these mechanisms be implemented in other devices for achieving a comparable, or even a higher, efficiency of information storage and processing?

Fortunately, these questions have already been largely addressed in a research programme consisting of three parallel strategies.

Remarkably, all three strategies have converged on one and the same picture of the properties of the black hole memory modes (qubits) and the mechanism behind their gaplessness and abundance.

In particular, it turns out that these features are generic for objects saturating certain universal upper bounds on microstate degeneracy imposed by unitarity (Dvali, Reference Dvali2021a). Such objects are called ‘saturons’. Since the bound is universal, it is shared by all sectors of the universe, regardless of their particle composition. In particular, it is applicable even to yet undetected hypothetical particle species from hidden sectors.

A long list of such objects has been identified in various consistent quantum field theories (Dvali, Reference Dvali2021b, Reference Dvali2021c; Dvali et al., Reference Dvali, Kaikov and Bermúdez2022a, Reference Dvali, Kühnel and Zantedeschi2022b; Dvali and Sakhelashvili, Reference Dvali and Sakhelashvili2022). Interestingly, they evidently exist in already discovered interactions, such as QCD, in the form of the colour glass condensate of gluons (Dvali and Venugopalan, Reference Dvali and Venugopalan2022).

The universal mechanism of enhanced information storage capacity can potentially be implemented in laboratory settings, such as cold atoms. This is a realistic prospect even for our civilization in the not-too-distant future.

The bottom line of this discussion is that an advanced civilization can certainly use non-gravitational saturons for efficient information storage.

Depending on the perspective, this may be regarded as good news or bad news. On one hand, it is exciting to be able to exploit the black hole mechanism of information storage without the need to manufacture actual black holes, which for our civilization may be a distant prospect. Instead, we can implement the same mechanism in ordinary systems such as cold atoms or nuclear matter.

On the other hand, one may worry that this option makes the signatures for SETI less universal and more technology dependent.

However, we can claim with certainty (Dvali, Reference Dvali2021a) that among all possible saturons, black holes stand out as the most efficient information storers. This gives us confidence that a highly advanced ETI must be using them as information storing devices.

This provides us with a prospect of searching for advanced civilizations even if they are composed of entirely unknown particle species. For example, ETI may consist of particles from a hidden sector sharing no constituents with the species known to us.

Nevertheless, the black hole quantum computing devices of such a hidden civilization will provide signatures detectable by our observers. This is because Einstein gravity is universal for all particle species. A direct consequence of this fact is the universality of Hawking radiation. Correspondingly, the quantum computers of dark ETI will radiate democratically into particles of all species, including the particles of the standard model, which we can potentially detect.

Closer look at black holes

Generalities

Black holes are the most compact objects in nature. Their effective size is given by the gravitational radius R. For a non-rotating and electrically neutral black hole, this is set by the Schwarzschild radius, $R = 2GM/c^2$.

In the classical description, black holes are stable and eternal. In the quantum picture, the story changes. Due to quantum effects, black holes evaporate by emitting Hawking radiation. Hawking's original calculation (Hawking, Reference Hawking1974) was performed in the exact semi-classical limit. This implies taking the black hole mass to infinity and Newton's constant to zero, while keeping the black hole radius fixed: $G \rightarrow 0,\; M \rightarrow \infty$, with $R$ finite. In this limit, the back reaction from the emitted quanta vanishes and the emission rate can be evaluated exactly. The resulting spectrum of radiation is thermal, with an effective temperature given by (Hawking, Reference Hawking1974)

(8)$$T = {\hbar c^3\over 8\pi k_{_B}MG}.$$

This leads to the following Stefan–Boltzmann law:

(9)$$c^2 {dM\over dt} = - N_{\rm sp} 4\pi\sigma R_{_{H}}^2T^4.$$

Here, $N_{\rm sp}$ counts all particle species (all polarizations) lighter than T. That is, all species are produced with a universal thermal rate. The particles heavier than T are Boltzmann suppressed and can be ignored. The democratic production of all particle species is one of the important properties of Hawking radiation. This feature originates from the universal nature of the gravitational interaction.

Of course, in Hawking's computation, the black hole has infinite mass and therefore it evaporates eternally.

For a finite mass black hole, the story is different. In order to estimate the decay time, one needs to make some additional assumptions. The most common is to assume that the black hole evaporation is self-similar, that is, that the relation (8) between the mass and the temperature holds throughout the evaporation process. Under this assumption, the system can easily be integrated, and one obtains the following expression for the half-decay time:

(10)$$\tau_{_{1/2}} = N_{\rm sp}^{-1} {4480\pi G^2 M^3\over \hbar c^4}.$$

To get a better feeling for the time-scales involved, it is useful to rewrite this as an approximate expression:

(11)$$\tau_{_{1/2}} \simeq 0.7 \times{100\over N_{\rm sp}}\times\left({M\over 10^9\; g}\right)^3\, {\rm s}.$$

The factor ~100 in the numerator provides normalization relative to the number of known particle species, while $N_{\rm sp}$ refers to the total number of species available at the initial temperature of the black hole.

However, it is important to keep in mind that for finite mass black holes, thermality is only an approximation, since the back reaction is finite. By the time the black hole loses half of its mass, both expressions are expected to receive substantial corrections due to quantum back reaction. This is evident from various sources: general consistency arguments (Dvali, Reference Dvali2016), the detailed analysis of prototype quantum systems (Dvali et al., Reference Dvali, Eisemann, Michel and Zell2020), as well as the explicit microscopic theory of a black hole (Dvali and Gomez, Reference Dvali and Gomez2013). It is independently evident that, due to the back reaction from the carried quantum information, the black hole evaporation cannot stay self-similar throughout the process, and certainly not beyond its half-decay (Dvali et al., Reference Dvali, Eisemann, Michel and Zell2020).

This knowledge is no obstacle for our analysis. For conservative estimates, the above expressions can be used as a good approximation until the black hole half-decay time. This self-consistently shows that the expression (11) gives a correct order of magnitude approximation for this time-scale, which is fully sufficient for our purposes. Our discussions will be limited to this time-scale.
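For concreteness, equations (8), (10) and (11) are easy to evaluate numerically (our own cross-check, assuming a black hole of mass $10^9$ g and $N_{\rm sp} = 100$):

```python
# Hawking temperature (eq. (8)) and half-decay time (eq. (10)) for M = 1e9 g.
import math

hbar, G, c, k_B = 1.055e-34, 6.674e-11, 2.998e8, 1.381e-23
M, N_sp = 1e6, 100                     # mass in kg (= 1e9 g), species count

T   = hbar * c**3 / (8 * math.pi * k_B * M * G)               # eq. (8)
tau = 4480 * math.pi * G**2 * M**3 / (hbar * c**4 * N_sp)     # eq. (10)

E_TeV = k_B * T / 1.602e-19 / 1e12     # typical quantum energy, k_B*T, in TeV
print(f"k_B T ~ {E_TeV:.0f} TeV, tau_1/2 ~ {tau:.1f} s")
# -> ~10 TeV and ~0.7 s, in agreement with the approximate form (11).
```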

Let us now discuss the information storage capacity of black holes. It is well known that a black hole of radius R has a microstate entropy equal to its surface area measured in units of G (Bekenstein, Reference Bekenstein1973):

(12)$$S = k_B{\pi c^3R^2\over \hbar G} = k_B{\pi R^2\over L_P^2},\; $$

where in the last part of the equation we have expressed it through the Planck length.

Another important feature of a black hole is the information retrieval time. The huge entropy of a black hole shows that it can carry a large amount of information. During the evaporation, the black hole loses its mass. However, initially very little information comes out. This is because at the initial stages of the decay, the spectrum of Hawking radiation is thermal to a very good approximation. The quantum information about the black hole state is encoded in deviations from thermality, which are of order 1/S per emission time (Dvali, Reference Dvali2016; Dvali and Gomez, Reference Dvali and Gomez2013). Initially, the effects of these corrections are small, and it takes a very long time to resolve them.

Page (Reference Page1993) argued that the start of the information retrieval from a black hole is given by its half-decay time (11). As we shall discuss below, this feature has a physically transparent explanation in terms of gaps of the black hole qubits (Dvali and Gomez, Reference Dvali and Gomez2013). It was also shown that the same feature is shared by all objects of maximal microstate entropy (Dvali, Reference Dvali2021a).

Taking (12) as a fact, one can deduce useful knowledge about the information storing qubits without making any assumptions about their nature (Dvali, Reference Dvali2017, Reference Dvali2018a, Reference Dvali2018b; Dvali et al., Reference Dvali, Eisemann, Michel and Zell2020, Reference Dvali, Franca, Gomez and Wintergerst2015). In particular, it is clear that the number of qubits is $N \sim S/k_B$. From here, it follows that the energy gap of the black hole qubits is around

(13)$$\epsilon = {k_B\over S}{\hbar c\over R} = {\hbar^2 G\over \pi c^2 R^3}.$$

Notice that for $N_{\rm sp} \sim 1$, the corresponding information storage time-scale (7),

(14)$$t_q = {\hbar\over \epsilon} = {S\over k_B} {R\over c} = {\pi c^2 R^3\over \hbar G},\; $$

is of the same order as the half-decay time (11). Thus, we have,

(15)$$t_q \sim \tau_{1/2}.$$

This nicely reproduces Page's estimate. It also shows that this estimate is a straightforward consequence of the energy gaps of the black hole qubits.
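As a consistency check of equations (13)–(15), the two time-scales can be evaluated side by side (our sketch; it makes explicit that $t_q$ and $\tau_{1/2}$ share the same parametric scaling, with different purely numerical prefactors):

```python
# Black hole qubit gap (eq. (13)), retrieval time (eq. (14)) and half-decay
# time (eq. (10) with N_sp = 1) for the same M = 1e9 g black hole.
import math

hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8
M = 1e6                                        # kg (= 1e9 g)
R = 2 * G * M / c**2

eps = hbar**2 * G / (math.pi * c**2 * R**3)    # eq. (13)
t_q = hbar / eps                               # eq. (14)
tau = 4480 * math.pi * G**2 * M**3 / (hbar * c**4)

print(f"eps ~ {eps:.1e} J, t_q ~ {t_q:.2f} s, tau_1/2 ~ {tau:.0f} s")
# Both times scale as G^2 M^3 / (hbar c^4); the prefactors are 8*pi and
# 4480*pi respectively, so they agree at the parametric level of eq. (15).
```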

Of course, for $N_{\rm sp} \gg 1$, the decay time is shortened and so is the time-scale of information processing:

(16)$$t_q = {1\over N_{\rm sp}} {S\over k_B} {R\over c} = {1\over N_{\rm sp}} {\pi c^2 R^3\over \hbar G}.$$

Cross-check from a microscopic theory

The expressions (8), (12), (13), (14) and (16) have been independently derived within a microscopic theory of a black hole, the so-called ‘quantum N-portrait’ (Dvali and Gomez, Reference Dvali and Gomez2013). This theory tells us that at the fundamental level, a black hole is described as a bound state of N ~ S gravitons of wavelengths ~R. This collection of gravitons can be regarded as a coherent state or a condensate.

The black hole qubits originate from the collective Bogoliubov-type excitations of the graviton condensate. As shown in Dvali and Gomez (Reference Dvali and Gomez2014), and further studied in series of papers (Dvali, Reference Dvali2017, Reference Dvali2018a, Reference Dvali2018b, Reference Dvali2018c; Dvali et al., Reference Dvali, Flassig, Gomez, Pritzel and Wintergerst2013, Reference Dvali, Franca, Gomez and Wintergerst2015, Reference Dvali, Michel and Zell2019, Reference Dvali, Eisemann, Michel and Zell2020; Dvali and Panchenko, Reference Dvali and Panchenko2016), a similar emergence of gapless qubits is characteristic to systems of attractive Bose–Einstein condensates at quantum critical points.

For a black hole, the qubits represent graviton excitations with high angular momentum, which become gapless due to the attractive interaction with the condensate. These excitations can be described in several equivalent languages. For example, they can be viewed as Goldstone modes of the symmetries that are spontaneously broken by the black hole. Their emergence can also be understood in terms of the mechanism of ‘assisted gaplessness’ (Dvali, Reference Dvali2018a). The essence of this phenomenon is that the negative energy of the attractive interaction compensates the positive kinetic energy of the would-be free mode.

The independent derivation of the relations (8), (12), (13), (14) and (16) from the microscopic theory provides an important cross-check for the validity of these equations.

Competing devices: saturons

Let us now ask: how do we know that there exist no information storing devices competing with black holes? For a long time, it was implicitly assumed that the area form of the entropy (12) was exclusively a property of black holes. However, this understanding has changed recently (Dvali, Reference Dvali2021a). It has been shown that the area-law expression is common to all field theoretic objects that have the maximal entropy permitted by unitarity. Namely, for a localized object of radius R, there exists the following universal upper bound on the microstate entropy (Dvali, Reference Dvali2021a):

(17)$$S = k_B{\pi c^3 R^2\over \hbar \tilde{G}},\; $$

where $\tilde {G}$ is the coupling of the Nambu-Goldstone mode of spontaneously broken Poincare symmetry. This mode is universally present for an arbitrary localized object, since any such object breaks the Poincare symmetry spontaneously. Due to this, $\tilde {G}$ represents a well-defined parameter for an arbitrary macroscopic object. The objects saturating the above bound are referred to as ‘saturons’ (Dvali, Reference Dvali2021a).

Various explicit examples of such objects have been constructed in non-gravitational quantum field theories, including some candidates of cold atomic systems (Dvali, Reference Dvali2021b, Reference Dvali2021c; Dvali et al., Reference Dvali, Kühnel and Zantedeschi2022b, Reference Dvali, Kaikov and Bermúdez2022a; Dvali and Sakhelashvili, Reference Dvali and Sakhelashvili2022).

An important finding of these studies is that all saturons exhibit the same information processing properties as black holes, with the role of G taken up by $\tilde {G}$. Namely, all saturons decay by emission of Hawking-like radiation with temperature given by (8):

(18)$$T = {\hbar c \over 4\pi k_{_B}R}.$$

The energy gap of saturon qubits is

(19)$$\epsilon = {k_B\over S}{\hbar c\over R} = {\hbar^2 \tilde{G}\over \pi c^2 R^3},\; $$

and the corresponding information recovery time is

(20)$$t_q = {\hbar\over \epsilon} = {S\over k_B} {R\over c} = {\pi c^2 R^3\over \hbar \tilde{G}}.$$

Of course, unlike black holes, generic saturons radiate only into particle species with which their constituents interact.

It is certainly conceivable that some civilizations use non-gravitational saturon devices, as opposed to black holes, for their quantum computations. In fact, even our civilization may not be too far from implementing saturated systems for such a purpose.

First, soon after the formulation of the black hole N-portrait, the existence of similar gaplessness mechanisms was established in prototype N-boson models (Dvali et al., Reference Dvali, Flassig, Gomez, Pritzel and Wintergerst2013, Reference Dvali, Franca, Gomez and Wintergerst2015; Dvali and Gomez, Reference Dvali and Gomez2014).

Furthermore, it has been proposed (Dvali, Reference Dvali2017, Reference Dvali2018b; Dvali et al., Reference Dvali, Michel and Zell2019; Dvali and Panchenko, Reference Dvali and Panchenko2016) that critical Bose–Einstein condensates can be used for implementing this black hole mechanism for controlled information processing.

More recently, it has been argued (Dvali and Venugopalan, Reference Dvali and Venugopalan2022) that saturated states have already been created in laboratory experiments in the form of the colour glass condensate of gluons (Gelis et al., Reference Gelis, Iancu, Jalilian-Marian and Venugopalan2010). It is not necessarily too far-fetched to expect the use of such states for quantum information processing on a reasonable time-scale of the advancement of our civilization.

Can ETI use other saturons instead of black holes?

In the above light, one may wonder whether the existence of saturons as alternative information storing devices may impair our idea of SETI. The reason is that the radiation signature from such quantum computers would not be universal but would rather depend on the particle composition of ETI. Fortunately, this is not the case.

The point is that among all saturated objects of any given size R, black holes are unbeatable in their information storage capacity (Dvali, Reference Dvali2021a). This is due to the fact that the spontaneous breaking of Poincare symmetry is maximized by a black hole. Correspondingly, the coupling of the Poincare Goldstone in the case of a black hole is the weakest. That is, for any other object of the same size, we have $\tilde {G} \geqslant G$. In other words, non-black-hole saturons, irrespective of their composition, must have $\tilde {G} > G$. Any decrease of $\tilde {G}$ below G will collapse the saturon into a black hole.

The details of the proof can be found in the original article (Dvali, Reference Dvali2021a). However, due to the interdisciplinary nature of the present paper, for the sake of the reader's comfort, we shall make the presentation maximally self-contained. For this purpose, we provide the following physical explanation.

Imagine we would like to produce a saturated device of size ~R in the form of a self-sustained bound state of N quanta of wavelength ~R. Let us assume that the dominant interaction responsible for binding the system has an interaction strength set by a dimensionless coupling α. That is, the attractive potential energy between a pair of particles is $V = - \alpha \hbar c/R$.

Of course, at the same time, these particles interact gravitationally, with strength $\alpha _{\rm gr} = {\hbar G}/{c^3R^2} = {L_P^2}/{R^2}$. The physical meaning of this parameter is easy to understand if we recall that the energies of the quanta are $\sim \hbar c/R$ (see, e.g. Dvali and Gomez, Reference Dvali and Gomez2013). However, we have assumed that gravity is subdominant with respect to the binding force of coupling α.

It is clear that in order to form a bound state, the kinetic energy ${c\hbar }/{R}$ of each quantum must be balanced by the potential energy of attraction from the rest, of magnitude $\alpha N c \hbar /R$. This gives the equilibrium condition αN ~ 1.

Now, the bound state breaks Poincare symmetry spontaneously: it breaks both translations and Lorentz boosts. The order parameter for the breaking of Poincare symmetry is $\hbar N/(\pi c^3R^2)$, and the Goldstone coupling is $\tilde {G} = \pi c^3 R^2/(\hbar N)$. Then, according to the unitarity bound (17), the maximal possible entropy of such a bound state is $S = k_B N$. Let us assume that this entropy is attained. Can it exceed the entropy (12) of a black hole of the same size?

This requires making $\tilde {G} = \pi c^3 R^2/(\hbar N)$ as small as G. But this implies that the gravitational coupling $\alpha_{\rm gr}$ must become equal to that of the non-gravitational binding force, α ~ 1/N. Thus, we are pushed to the equality $\alpha_{\rm gr} = \alpha$. Remembering that the rest energy of the object is $\sim N c\hbar /R$, it is easy to evaluate its gravitational radius, which comes out equal to ~R.

It is clear that at this point, the object is within its gravitational radius and thus is a black hole. That is, any attempt to push the entropy of a saturon of size R beyond the entropy of the same-size black hole collapses the saturon into a black hole.
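For the reader's convenience, the chain of estimates in this argument can be condensed into a single line (our compact restatement of the relations already given above):

$$\alpha N \sim 1,\qquad \tilde{G} = {\pi c^3 R^2\over \hbar N},\qquad S = k_B N,\qquad \tilde{G}\rightarrow G \;\Rightarrow\; \alpha \sim {1\over N} \sim {L_P^2\over R^2} = \alpha_{\rm gr},$$

at which point the rest energy $\sim N\hbar c/R$ lies within its own gravitational radius ~R, i.e. the object has become a black hole.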

Correspondingly, if the civilization is sufficiently advanced, for information processing, it will ultimately resort to black holes rather than to other saturons.

This gives an exciting prospect of searching for advanced civilizations based on the detection of Hawking radiation from their quantum computers. This applies equally to civilizations belonging to hidden particle species, interacting with us exclusively via gravity. Regardless, due to its universal nature, the Hawking radiation from their quantum computers will inevitably contain our particle species.

Some remarks on black hole manufacturing technology

A universal prescription for the creation of a black hole of mass M is that the energy $E = Mc^2$ must be localized within a sphere of the corresponding Schwarzschild radius $R = 2MG/c^2$.

Of course, we cannot make a precise prediction of the methods used by an advanced civilization for achieving this goal. However, it is possible to outline some expectations depending on the masses of black holes manufactured by ETI.

In particular, we wish to separate the two regimes of black hole formation, which we shall refer to as ‘soft’ and ‘hard’, respectively.

If the mass of a black hole is sufficiently high, it can be manufactured softly, by putting together many non-relativistic particles at low density and letting them collapse. This is essentially how an ordinary stellar collapse works.

However, for the creation of microscopic black holes, this method cannot be used. Instead, one needs to accelerate particles to high energies and collide them with small impact parameters. Such collisions are usually referred to as ‘hard’ in the sense that the momentum-transfer per colliding particle exceeds their rest masses.

Let us explain the difference with a concrete example. In order to produce a solar mass black hole (R ~ km), it is sufficient to place of order $10^{57}$ non-relativistic neutrons within the proximity of a few kilometres. Such a system will collapse into a black hole without the need for any hard collision among the neutrons.

At the same time, if we wish to produce a black hole of the size of a neutron, $R \sim 10^{-14}$ cm, this method will not work. The mass of such a black hole is approximately $10^{38}$ GeV, which is $10^{38}$ times the rest mass of a neutron. We thus need to localize about $10^{38}$ neutrons within the Compton wavelength of a single neutron. This cannot be done softly due to the resistance from the nuclear forces. For example, a chunk of nuclear matter consisting of $10^{38}$ neutrons will not collapse easily into a black hole, due to Fermi pressure and other factors.

Instead, in order to achieve the goal, the neutrons have to be accelerated to ultra-relativistic energies and collided with a very high momentum-transfer.

For example, in an extreme case, we can accelerate two neutrons to a centre-of-mass energy of $\sim 10^{38}$ GeV and collide them with an impact parameter $\sim 10^{-14}$ cm. Of course, one can instead collide more neutrons, each with less energy. However, for each choice, the collision must be hard, since the neutrons must be relativistic and exchange momenta exceeding (or comparable to) their rest masses.

This example shows the general tendency: manufacturing microscopic black holes requires hard collisions among high energy particles.
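The numbers in this example are a one-line consequence of the Schwarzschild relation (a quick check of ours, inverting $R = 2GM/c^2$ for a neutron-sized radius):

```python
# Mass of a black hole with the size of a neutron, R ~ 1e-14 cm.
R = 1e-16                                  # radius in m (= 1e-14 cm)
G, c = 6.674e-11, 2.998e8

M = R * c**2 / (2 * G)                     # invert R = 2GM/c^2
M_GeV = M * c**2 / 1.602e-10               # rest energy in GeV (1 GeV = 1.602e-10 J)
print(f"M ~ {M:.1e} kg ~ {M_GeV:.1e} GeV")
# -> ~6.7e10 kg ~ 4e37 GeV, i.e. ~1e38 neutron rest masses (m_n ~ 1 GeV).
```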

Of course, one can imagine an exotic situation in which an advanced civilization has at its disposal a new type of heavy elementary particle, interacting purely via gravity. For example, we can imagine a stable scalar particle of mass $\sim M_P$. Notice that such a particle is essentially a quantum black hole, since its Compton wavelength as well as its gravitational radius are both of order $L_P$ (Dvali and Gomez, Reference Dvali and Gomez2013). Using this elementary building block, black holes of arbitrary masses can be produced softly. Indeed, an arbitrary number of such non-relativistic particles with proper initial velocities can collapse into a black hole. Of course, upon crossing the Schwarzschild radius, the system always becomes relativistic. But the process is soft in the sense that the particles need not exchange momenta exceeding their rest masses.

Putting such highly exotic possibilities aside, it is reasonable to expect that for manufacturing small black holes, advanced ETI use the method of colliding highly energetic elementary particles.

Such collisions are expected to be accompanied by radiation of quanta with typical energy given by the inverse impact parameter. Since the latter is of order the black hole radius, the expected energy of the radiated quanta is $\sim \hbar c/R$. This is comparable to the temperature of the Hawking radiation from the black hole that is the outcome of the collision. The difference, of course, is that the collision radiation is neither thermal nor democratic in particle species.

Due to this, its detectability, unlike that of the Hawking radiation, will depend on the composition of particle species used in the collisions that manufacture black holes.

In conclusion, the following general message emerges: ETI can also be searched for through the primary radiation emitted during the process of manufacturing black holes. The characteristic energy is set by the inverse impact parameter and is comparable to the energy of the Hawking quanta emitted by the black hole manufactured in the process.

Remarks on optimization of information processing

Another remark we would like to make concerns the optimization of information processing by ETI. Of course, we are not going to speculate about the details. Our point will be limited to aspects that are imposed by the laws of physics, which all ETI have to respect. These are the laws of quantum field theory and gravity. These laws imply the relation (16) between the information extraction time from a black hole and its radius/temperature and entropy. The analogous relation (20) holds for all saturons (Dvali, Reference Dvali2021a).

This universal relation shows that for a single black hole, the information extraction time $t_q$ is fixed in terms of its radius R (or temperature, T ~ 1/R) and $N_{\rm sp}$. Of course, from the same relation, the entropy of a black hole is also uniquely fixed in terms of $t_q$ and $N_{\rm sp}$. Notice that the total number of species $N_{\rm sp}$ at a given scale is a constant of nature, which ETI cannot change. The optimization problem must thereby be solved for fixed $N_{\rm sp}$.

This allows us to draw some general conclusions about the likely arrangement of information storing devices by ETI. Namely, for a given $t_q$, the only way of increasing the amount of stored information is to multiply the number of black holes. Increasing the entropies of individual black holes is not possible, as this would increase $t_q$.

As is clear from (16), for a system of n black holes, each of mass M, the information retrieval time $t_q \propto G^2M^3/(\hbar c^4)$ is independent of n. One can thus arbitrarily increase the entropy of the system by increasing n without altering $t_q$. The total entropy grows as n while $t_q$ stays the same.

In contrast, taking a single black hole of mass M′ = nM increases the entropy of the system by an extra power of n relative to the total entropy of n separate black holes. However, simultaneously the information retrieval time grows as the cubic power of n:

(21)$$t_q \rightarrow t_q' = n^3\, t_q.$$

Obviously, this compromises the information processing efficiency of the system. Thus, the optimal way of increasing the information capacity of the system for a fixed $t_q$ is to increase the number of black holes instead of investing the same total energy into the creation of bigger ones.
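The scaling argument can be spelled out in a few lines (our illustration in relative units, using the scalings $S \propto M^2$ and $t_q \propto M^3$ from equations (12) and (16)):

```python
# n black holes of mass M versus one black hole of mass n*M, in units where
# a single hole of mass M has S = 1 and t_q = 1 (S ~ M^2, t_q ~ M^3).
def compare(n: int):
    S_many, t_many = n, 1          # n separate holes: entropy adds, t_q fixed
    S_one,  t_one  = n**2, n**3    # one merged hole of mass n*M
    return S_one / S_many, t_one / t_many

for n in (10, 1000):
    s_gain, t_cost = compare(n)
    print(f"n = {n}: merging stores x{s_gain:.0f} more, but t_q grows x{t_cost:.0f}")
# The entropy gain is a single power of n, while the retrieval time grows as
# n^3 (eq. (21)) -- hence many small holes win for a fixed t_q.
```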

We thus arrive at a very general conclusion:

The quantum computing devices of advanced ETI are more likely to consist of multiple small black holes rather than a fewer big ones.

This implies that from the black hole ‘farms’ of ETI, we are more likely to expect Hawking radiation of higher temperature and intensity.

Time-scales and messengers

The relation (16) between the information processing time $t_q$ and the black hole radius/temperature allows us to give some estimates of the energies and the nature of the emitted Hawking radiation. We must, however, note that such estimates change if we move beyond Einstein gravity. This departure is scale-dependent. As already discussed, in any theory with a number of particle species $N_{\rm sp}$, the gravitational cutoff is lowered to the species scale $L_\ast$ given by equation (3) (Dvali, Reference Dvali2010a; Dvali and Redi, Reference Dvali and Redi2008).

This length gives an absolute lower bound on the size of a black hole tractable within Einstein gravity. Taking into account the number of all currently known elementary particle species ($N_{\rm sp} \sim 100$), the minimal size of a black hole is still very short, $L_\ast \sim 10^{-32}$ cm.

The corresponding half-life of such a minimal Einsteinian black hole is only slightly longer than the Planck time, $t_q \sim 10^{-43}$ s. This time sets an absolute lower bound on the information processing speed available to any ETI.

Here, for definiteness, we shall assume that $L_\ast \sim 10^{-32}$ cm marks the validity domain of Einsteinian gravity. A discussion of cases with a much larger number $N_{\rm sp}$ will be postponed towards the end of the paper. However, here we wish to note that increasing $N_{\rm sp}$ only helps in more efficient manufacturing of ‘artificial’ black holes. This is due to the fact that the threshold energy of black hole formation in particle collisions is lowered according to (4) (Dvali and Redi, Reference Dvali and Redi2008). The size of the smallest black hole (3) is correspondingly much larger than in Einstein gravity with a small number of species.

Correspondingly, the existence of a large number of species makes black hole quantum computing more easily accessible for less advanced civilizations. In particular, in the limiting case of $N_{\rm sp} \sim 10^{32}$, manufacturing of the smallest black holes would become (or perhaps already is) possible for our civilization.

Assuming for the time being that we are within the validity of Einstein gravity for the scales of interest, we can estimate the parameters of the expected Hawking radiation for certain reasonable time-scales.

For example, demanding that $t_q$ is not much longer than a few seconds, we arrive at the conclusion that the energy of the Hawking quanta is above ~TeV. With such a high temperature, all the known particle species will be radiated by ETI black hole computers.

Of course, how efficiently the radiation can reach us depends on a number of factors. These factors include the presence of a medium that can absorb or shield the radiation.

In general, since neutrinos have the deepest penetration length, they appear to be the most robust messengers. In addition, mediation by high energy gamma quanta and other species is certainly a possibility.

Of course, high energy radiation coming from distant ETI will be subject to the same universal cutoffs as the high energy cosmic rays produced by natural sources. An example is given by the Greisen-Zatsepin-Kuzmin cutoff (Greisen, Reference Greisen1966; Zatsepin and Kuzmin, Reference Zatsepin and Kuzmin1966) due to the scattering of high energy protons off the cosmic microwave background.

On top of the known natural factors, there can be unknown artificial shielding effects. For instance, ETI may build absorbing shields around their black hole computers. The motivation may vary from simple defence against life-threatening high energy radiation, as would be the case for humans, all the way to sophisticated energy-recycling purposes. These are factors about which it is hard to speculate. However, in their light, the case for the neutrino as the most promising known messenger is strengthened.

Some general estimates

All the parameters required for our estimates – such as the time-scale of processing, the temperature of the accompanying radiation, the black hole mass and the entropy – can be expressed through one another. The only extra quantity is $N_{\rm sp}$, which shortens the processing time. For the moment, we assume that this quantity is not much different from the number of known species. Then, we are left with a single input parameter, which we take to be the information processing time $t_q$ (14). As we already discussed (15), this time is approximately equal to the half-decay time of a black hole $\tau_{1/2}$, which shall be used below.

We assume that the typical time-scale required for a computation is of the order of 1 s. Then, from equation (11), one can show that the black hole mass will be of the order of $1.2\times 10^{9}$ g. In this context, an important issue a civilization has to address is the energy required for constructing such an extremely compact object, $Mc^2 \simeq 10^{30}$ erg.

In order to understand the capabilities of extraterrestrial civilizations, one should examine them in the light of the classification introduced by Kardashev (1964). According to this approach, alien societies are distinguished by their technological level. Type-I civilizations, in particular, utilize the entire power received by their planet. In the case of the Earth, the corresponding value is of the order of $P_{_{I}}\simeq {L_{\odot }}/{4}\times ( {R_E}/{R_{\rm AU}}) ^2\simeq 1.7\times 10^{24}$ erg s$^{-1}$, whereas the current power consumption is about $1.5\times 10^{20}$ erg s$^{-1}$, classifying our civilization as level 0.7 (Sagan, 1973). A Type-II civilization utilizes the whole stellar power, $P_{_{\rm II}}\simeq L_{\odot }$, and Type-III is an alien society that consumes the total power of a host galaxy, $P_{_{\rm III}}\simeq 10^{11}\times L_{\odot }$. Here, $L_{\odot }\simeq 3.8\times 10^{33}$ erg s$^{-1}$ is the solar luminosity, $R_{\rm AU} \simeq 1.5\times 10^{13}$ cm is the distance from the Earth to the Sun (AU) and $R_E \simeq 6.4\times 10^{8}$ cm denotes the radius of the Earth. From these estimates, it is obvious that even a Type-I civilization can build a black hole with the aforementioned mass: it only needs to accumulate the energy incident on the planet during ~7 days.
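
The arithmetic behind this last claim is simple enough to check directly. The following Python snippet is a back-of-the-envelope sketch using only the constants quoted above (the variable names are ours):

```python
# Back-of-the-envelope check of the Kardashev Type-I estimate (CGS units).
L_sun = 3.8e33      # solar luminosity, erg/s
R_AU  = 1.5e13      # Earth-Sun distance, cm
R_E   = 6.4e8       # Earth radius, cm
c     = 3e10        # speed of light, cm/s
M_bh  = 1.2e9       # black hole mass for tau_1/2 ~ 1 s, g

P_I   = (L_sun / 4.0) * (R_E / R_AU) ** 2   # power intercepted by the Earth
E_bh  = M_bh * c ** 2                       # rest energy of the black hole
t_acc = E_bh / P_I                          # accumulation time at Type-I power

print(f"P_I   ~ {P_I:.1e} erg/s")           # ~1.7e24 erg/s
print(f"t_acc ~ {t_acc / 86400:.1f} days")  # ~7 days
```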

In SETI, the major task is to find specific fingerprints of ETI activity. The use of black holes will inevitably lead to observational features which, on the one hand, might be detected and, on the other hand, distinguished from typical astrophysical events. In particular, we use the knowledge that black holes emit particles with energy in the range (E,  E + dE) at the rate (Hawking, 1974)

(22)$$R( E,\; M) \equiv {{\rm d}^2N\over {\rm d}t{\rm d}E} = {1\over 2\pi\hbar}\times{\Gamma_s( E,\; M) \over e^{{8\pi GME}/{\hbar c^3}}-( -1) ^{2s}},\; $$

where $\Gamma _s( E,\; \, M) = E^2\sigma _s( E,\; \, M) /( \pi \hbar ^2 c^2)$, $s$ denotes the spin of a particle, $\sigma_s = \alpha_s G^2M^2/c^4$, $\alpha_{1/2} = 56.7$, $\alpha_1 = 20.4$ and $\alpha_2 = 2.33$ (MacGibbon and Webber, 1990). As discussed, the flux of particles might be screened out, but we assume that neutrinos escape. Therefore, the major observational feature might be an energetic flare of neutrinos. Using equation (22), one can easily demonstrate that the spectral power, $ER$, of neutrinos ($s = 1/2$) has a peak at energies

(23)$$E^{\nu}_m\simeq{3.13 \hbar c^3\over 8\pi GM}\simeq 48\times\left({100\over N_{\rm sp}}\times{1\; {\rm s}\over \tau_{_{1/2}}}\right)^{1/3} {\rm TeV},\; $$

which can be detected by the IceCube detector (Aartsen et al., 2014). For the neutrino emission luminosity at $E^{\nu}_m$, we obtain

(24)$$\eqalign{L^{\nu}_m& \simeq 3\times ( E^{\nu}_m) ^2R( E^{\nu}_m,\; M) \cr & \simeq 1.2\times 10^{28}\times\left({120\over N_{\rm sp}}\times{1\; {\rm s}\over \tau_{_{1/2}}}\right)^{2/3}\; {\rm erg/s},\; }$$

where the multiplier, 3, comes from the three flavours of neutrinos. As is evident, even a single event is characterized by a very high luminosity of the neutrino flare. In general, a civilization might perform many calculations per second. On the other hand, any civilization is limited by the level of technology it possesses. The maximum number of quantum processing events per second, $n_x$ (where x = (I,   II,   III) denotes the level of technology), is severely constrained by a civilization's maximum power, $n_x \simeq P_x/\langle L \rangle$, where $\langle L \rangle \simeq Mc^2/2\tau _{_{1/2}}$ is the average luminosity of the black hole's emission:

(25)$$n_{_{\rm I}} = 3.5\times 10^{-6}\times\left({100\over N_{\rm sp}}\right)^{1/3}\times\left({\tau_{_{1/2}} \over 1\, {\rm s}}\right)^{2/3}\; {\rm s}^{-1},\; $$
(26)$$n_{_{\rm II}} = 7.7\times 10^{3}\times\left({100\over N_{\rm sp}}\right)^{1/3}\times\left({\tau_{_{1/2}} \over 1\, {\rm s}}\right)^{2/3}\; {\rm s}^{-1},\; $$
(27)$$n_{_{\rm III}} = 7.7\times 10^{14}\times\left({100\over N_{\rm sp}}\right)^{1/3}\times\left({\tau_{_{1/2}} \over 1\, {\rm s}}\right)^{2/3}\; {\rm s}^{-1}.$$

Since the value of $n_{_{\rm I}}$ is very small, the best strategy is to capture a single event of duration $\tau _{_{1/2}}$. For Type-I alien societies, the luminosity will be defined by equation (24), but for Type-II and Type-III civilizations, the luminosity might be much higher, reaching the solar luminosity for Type-II and the galactic luminosity for Type-III societies. A numerical check of these estimates is sketched below.
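
As a numerical illustration, the following Python sketch evaluates the rate (22) at the peak energy (23) and the processing rates (25)–(27). The black hole mass $M = 1.2\times 10^{9}$ g is the illustrative value quoted above; since we do not reproduce the mass–lifetime relation (11) here, the outputs are expected to agree with equations (23)–(27) only up to factors of order unity:

```python
import math

# Order-of-magnitude check of equations (22)-(27) in CGS units.
G, hbar, c = 6.674e-8, 1.055e-27, 2.998e10
ERG_PER_TEV = 1.602                       # 1 TeV ~ 1.602 erg

def hawking_rate(E, M, s, alpha):
    """Emission rate d^2N/(dt dE) of equation (22)."""
    sigma = alpha * G**2 * M**2 / c**4    # greybody cross-section, cm^2
    Gamma = E**2 * sigma / (math.pi * hbar**2 * c**2)
    x = 8 * math.pi * G * M * E / (hbar * c**3)
    return Gamma / (2 * math.pi * hbar * (math.exp(x) - (-1)**round(2 * s)))

M = 1.2e9                                              # illustrative BH mass, g
E_peak = 3.13 * hbar * c**3 / (8 * math.pi * G * M)    # neutrino peak, eq. (23)
L_nu = 3 * E_peak**2 * hawking_rate(E_peak, M, s=0.5, alpha=56.7)  # eq. (24)
print(f"E_peak ~ {E_peak / ERG_PER_TEV:.0f} TeV, L_nu ~ {L_nu:.1e} erg/s")

# Processing rates n_x ~ P_x / <L>, with <L> = M c^2 / (2 tau_1/2), tau_1/2 = 1 s
L_avg = M * c**2 / 2.0
for name, P in [("I", 1.7e24), ("II", 3.8e33), ("III", 3.8e44)]:
    print(f"n_{name} ~ {P / L_avg:.1e} per second")    # cf. eqs (25)-(27)
```

This yields $E_{\rm peak} \approx 28$ TeV, $L^{\nu}_m \approx 10^{28}$ erg s$^{-1}$ and $n_x \approx 3\times 10^{-6}$, $7\times 10^{3}$ and $7\times 10^{14}$ s$^{-1}$, i.e. the quoted values up to such order-one factors.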

This is not the end of the story. In 1961, Frank Drake formulated an equation used to estimate the number of communicative civilizations (Drake, 1961):

(28)$$N = R_{\star}\times f_p\times n_e\times f_l\times f_i\times f_t\times {\cal L},\; $$

where $R_{\star }$ denotes the average rate of star formation, $f_p$ the fraction of stars with planets, $n_e$ the average number of potentially habitable planets per star, $f_l$ the fraction of planets that can potentially support life and that actually develop it, $f_i$ the fraction of planets with life that develop intelligent life, $f_t$ the fraction of civilizations that become technological and communicative and ${\cal L}$ the average length of time for which advanced alien civilizations release detectable signals into space. From the study of the MW, it is well known that the modern value of the star formation rate lies in the interval (1.5 − 3) stars per year (Kennicutt and Evans, 2012). Based on the Kepler space mission, it has been revealed that there should be around 40 billion Earth-sized planets in the habitable zones of solar-type stars and red dwarfs (Petigura et al., 2013). This implies that the value of $f_p\times n_e$ should be about 0.4. The average value of $R_{_{\rm Astro}} = R_{\star }\times f_p\times n_e\simeq 0.9$ is thus more or less well defined, but almost nothing is known about the remaining factors, and one can apply only rather speculative approaches. Based on the observation that life on Earth appeared soon after the conditions became favourable, one can speculate that the value of $f_l$ is close to 1. In his statistical approach to the Drake equation, Maccone (2012) uses a moderate value of 1/3 for $f_l$ and 0.01 for $f_i\times f_t$. In the scientific literature, the range ($10^{-3} - 1$) (Prantzos, 2013) is considered for the quantity $f_{_{\rm Biotech}} = f_l\times f_i\times f_t$.

For estimating the value of ${\cal L}$, we assume that it should not be less than the time-scale needed to reach the corresponding level of technology. Following the approach of Dyson (1960), if one accepts that an annual industrial growth rate of $1\%$ is maintained, then, taking the current power consumption, $P_0 \simeq 1.5\times 10^{20}$ erg/s, into account, one can straightforwardly show that reaching the Type-I and Type-II levels requires approximately $t_{\rm I} = 1000$ years and $t_{\rm II} = 3000$ years, respectively (see the sketch below). The calculation of $t_{\rm III}$ should be performed differently. By definition, a galactic civilization is one that has colonized the whole galaxy; therefore, the corresponding value should be of the order of $D_{\rm MW}/\upsilon$, where $D_{\rm MW} \simeq 26.8$ kpc is the average diameter of the MW galaxy and $\upsilon$ is the velocity with which a civilization covers the mentioned distance. Taking a very moderate value, $0.01c$, one obtains $t_{\rm III} \simeq 10^{8}$ years. It is clear that the theoretical minimum value of $t_{\rm III}$ imposed by the speed of light is of the order of $10^{6}$ years.
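
The growth-time estimates are easy to verify; a minimal sketch, assuming a sustained $1\%$ annual growth rate:

```python
import math

# Years to grow from the current power consumption P0 to the level P_x,
# assuming P0 * 1.01**t = P_x.
P0 = 1.5e20                                  # current consumption, erg/s
for name, P in [("I", 1.7e24), ("II", 3.8e33)]:
    t = math.log(P / P0) / math.log(1.01)
    print(f"t_{name} ~ {t:.0f} years")       # ~940 and ~3100 years
```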

Assuming that the minimum values of ${\cal L}_x$ are of the order of $t_x$, the number of civilizations, $N = R_{_{\rm Astro}}\times f_{_{\rm Biotech}}\times {\cal L}$, can vary in the interval (3,   $1.2\times 10^3$) for Type-I and (6,   $3.6\times 10^3$) for Type-II. These values might be even higher if ${\cal L}_x$ exceeds $t_x$. One should also emphasize that, since nearly $75\%$ of stars with habitable planets are older than the Sun by ~1 Gyr (Lineweaver et al., 2004), this assumption is quite natural; in earlier works (Cameron, 1963; Sagan, 1963; Shklovskii and Sagan, 1966), the number of civilizations was estimated to be of the order of $10^6$.
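
These intervals follow directly from equation (28). A minimal Python sketch (the pairing of pessimistic and optimistic parameter values below is our illustrative assumption) roughly reproduces them:

```python
def drake(R_astro, f_biotech, lifetime_yr):
    """Equation (28) collapsed to N = R_Astro * f_Biotech * L."""
    return R_astro * f_biotech * lifetime_yr

# Pessimistic end: R_star = 1.5/yr, f_p*n_e = 0.4, f_l = 1/3, f_i*f_t = 0.01.
# Optimistic end:  R_star = 3/yr,   f_p*n_e = 0.4, f_Biotech = 1.
for label, lifetime in [("Type-I", 1e3), ("Type-II", 3e3)]:
    N_low  = drake(1.5 * 0.4, (1 / 3) * 0.01, lifetime)
    N_high = drake(3.0 * 0.4, 1.0, lifetime)
    print(f"{label}: N ~ {N_low:.0f} to {N_high:.0f}")
# Type-I:  N ~ 2 to 1200   (quoted: 3 to 1.2e3)
# Type-II: N ~ 6 to 3600   (quoted: 6 to 3.6e3)
```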

For a uniform distribution of civilizations in the galactic plane, one finds that the order of magnitude of the minimum distance between alien societies is given by $d_m\simeq \sqrt {A_{\rm MW}/N}$, where $A_{\rm MW}\simeq \pi D_{\rm MW}^2/4$ is the area of the MW galaxy. Then, for the neutrino flux, $F^{\nu }\simeq L^{\nu }_m/( 4\pi d_m^2)$, multiplied by $\pi ^2\; n_{_{\rm II}}$ for Type-II civilizations (the multiplier $\pi^2$ comes from the summation over all civilizations in the galactic plane), one obtains

(29)$$\eqalign{F^{\nu}_{_{I}}& \simeq 10^{-13}\times\left({100\over N_{\rm sp}}\times{1\, {\rm s}\over \tau_{_{1/2}}}\right)^{2/3}\cr & \quad\times{R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{I}}\over t_{_{I}}}\; {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2},\;}$$
(30)$$ \eqalign{F^{\nu}_{_{\rm II}}& \simeq 2.3\times 10^{-8}\times{100\over N_{\rm sp}} \cr& \times{R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{\rm II}}\over t_{_{\rm II}}}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2}.}$$
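
Equation (30) can be cross-checked with a few lines of Python (here $N = R_{\rm Astro}\, f_{\rm Biotech}\, {\cal L}_{\rm II} \simeq 0.9\times 3000$ is the illustrative civilization count implied by the fiducial parameters):

```python
import math

# Net Type-II neutrino flux: F = pi^2 * n_II * L_m / (4 * pi * d_m^2),
# with d_m = sqrt(A_MW / N) the typical distance to the nearest civilization.
kpc  = 3.086e21                       # cm
D_MW = 26.8 * kpc                     # Milky Way diameter, cm
N    = 0.9 * 3000                     # illustrative number of civilizations
n_II = 7.7e3                          # processing events per second, eq. (26)
L_m  = 1.2e28                         # peak neutrino luminosity, erg/s
ERG_PER_GEV = 1.602e-3

A_MW = math.pi * D_MW**2 / 4
d_m  = math.sqrt(A_MW / N)
F    = math.pi**2 * n_II * L_m / (4 * math.pi * d_m**2)
print(f"F_II ~ {F / ERG_PER_GEV:.1e} GeV s^-1 cm^-2")   # ~2.3e-8, cf. eq. (30)
```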

Considering a Type-III civilization, for the flux from the nearest galaxy, one obtains

(31)$$\eqalign{F^{^{^{\nu}}}_{_{\rm III}}& \simeq 9.2\times 10^{-6}\times{100\over N_{\rm sp}}\cr & \times\left({R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{\rm III}}\over t_{_{\rm III}}}\right)^{2/3}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2}.}$$

We have taken into account that the average distance between closest galaxies is given by $(V_U/N_g)^{1/3}$, where $V_U \simeq 1.22\times 10^{13}$ Mpc$^3$ is the volume of the visible universe and $N_g \simeq 10^{12}$ is the average number of galaxies in the universe.

As we have already explained, for Type-I societies, the best way is to capture a single event, whereas for Type-II and Type-III civilizations, the upper limits of possible fluxes will be defined by the net effect of all quantum processing events every second. An interesting feature of equations (30) and (31) is that they do not depend on $\tau _{_{1/2}}$ or the black hole mass. This behaviour is a direct consequence of the fact that $n_x\sim \tau _{_{1/2}}/M\sim \tau _{_{1/2}}^{2/3}$, which, multiplied by $L^{\nu }_m\sim \tau _{_{1/2}}^{-2/3}$ (see equation (24)), cancels the dependence on $\tau _{_{1/2}}$.

The IceCube sensitivity for detecting muon neutrinos is of the order of $F^{\nu }_{\rm IC, min}\simeq $ $7\times10^{-9}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2}$ (Aartsen et al., 2014); therefore, a Type-I alien society cannot be detected for the mentioned parameters. Detection would become possible only for $f_{\rm Biotech}\times {\cal L}_{_{\rm I}}/t_{\rm I}\simeq 7\times 10^4$, which seems to be an unrealistically large value.

By contrast, Type-II and Type-III alien societies might be visible to the IceCube detectors. It is therefore reasonable to estimate upper limits on the number of civilizations that could account for the flux above 30 TeV, $F^{\nu }_{\rm IC}\simeq 3.6\times 10^{-7}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2}$, observed by the IceCube instruments. Assuming that $F^{\nu }_{\rm IC}$ is produced by advanced ETI, one obtains

(32)$$N^{\nu}_{_{\rm II, \max}}\simeq{D_{\rm MW}^2 F^{\nu}_{\rm IC}\over n_{_{\rm II}} L_m^{\nu}} \simeq 4.2\times 10^4,\; $$
(33)$$N^{\nu}_{_{\rm III, \max}}\simeq V_U\times\left({36 F^{\nu}_{\rm IC}\over 13 \pi n_{_{\rm III}} L_m^{\nu}}\right)^{3/2} \simeq 1.4\times 10^8,\; $$

where we have taken into account that the flux from the nearest Type-III civilization should be multiplied by $13\pi^2/9$ to obtain the net effect from all civilizations uniformly distributed over the observed universe. Similarly, taking $F^{\nu }_{\rm IC, \min }$ into account, one can obtain the minimum numbers of civilizations that would be detectable by the IceCube instruments: $N^{\nu }_{_{\rm II, \min }}\simeq 2.5\times 10^3$ and $N^{\nu }_{_{\rm III, \min }}\simeq 2\times 10^6$.
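
These bounds are again straightforward to verify numerically; the sketch below uses the same fiducial parameters (note that equation (33) involves the Type-III rate $n_{_{\rm III}}$):

```python
import math

# Inverting the observed flux: the maximum number of civilizations that
# could account for the IceCube flux above 30 TeV (cf. eqs (32)-(33)).
kpc, Mpc = 3.086e21, 3.086e24                 # cm
D_MW = 26.8 * kpc                             # Milky Way diameter
V_U  = 1.22e13 * Mpc**3                       # volume of the visible universe
F_IC = 3.6e-7 * 1.602e-3                      # observed VHE flux, erg s^-1 cm^-2
L_m  = 1.2e28                                 # peak neutrino luminosity, erg/s
n_II, n_III = 7.7e3, 7.7e14                   # processing rates, eqs (26)-(27)

N_II  = D_MW**2 * F_IC / (n_II * L_m)                           # eq. (32)
N_III = V_U * (36 * F_IC / (13 * math.pi * n_III * L_m))**1.5   # eq. (33)
print(f"N_II_max ~ {N_II:.1e}, N_III_max ~ {N_III:.1e}")        # ~4e4, ~1.5e8
```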

Other signatures might also exist if ETI does not need to screen out the black hole radiation. In this case, other particles will escape the region. In particular, if one considers escaping photons ($s = 1$ and $\alpha_1 = 20.4$; see equation (22)), one can straightforwardly show that the spectral power peaks at

(34)$$E^{\gamma}_m\simeq{2.82 \hbar c^3\over 8\pi GM}\simeq 43\times\left({100\over N_{\rm sp}}\times{1\, {\rm s}\over \tau_{_{1/2}}}\right)^{1/3}\, {\rm TeV},\; $$

corresponding to the following gamma-ray luminosity

(35)$$\eqalign{L^{\gamma}_m& \simeq ( E^{\gamma}_m) ^2R( E^{\gamma}_m,\; M) \cr & \simeq 1.5\times 10^{27}\times\left({100\over N_{\rm sp}}\times{1\, {\rm s}\over \tau_{_{1/2}}}\right)^{2/3}\, {\rm erg/s}.}$$

Taking the number of all civilizations in the galaxy into account, one can derive the high-energy photon fluxes for Type-I, II and III ETIs

(36)$$\eqalign{F^{\gamma}_{_{\rm I}}& \simeq 1.2\times 10^{-14}\times\left({100\over N_{\rm sp}}\times{1\, {\rm s}\over \tau_{_{1/2}}}\right)^{2/3}\cr & \times{R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{\rm I}}\over t_{_{\rm I}}}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2},\;} $$
(37)$$\eqalign{F^{\gamma}_{_{\rm II}}& \simeq 2.8\times 10^{-9}\times{100\over N_{\rm sp}}\times{R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\cr & \times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{\rm II}}\over t_{_{\rm II}}}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2},\;}$$
(38)$$\eqalign{F^{^{^{\gamma}}}_{_{\rm III}}& \simeq 1.1\times 10^{-6}\times{100\over N_{\rm sp}}\cr & \times\left({R_{\rm Astro}\over 0.9\, {\rm year}^{-1}}\times{\,f_{\rm Biotech}\over 1}\times{{\cal L}_{_{\rm III}}\over t_{_{\rm III}}}\right)^{2/3}\, {\rm GeV}\, {\rm s}^{-1}\, {\rm cm}^{-2}.}$$

The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope, aimed at detecting very-high-energy (VHE) gamma rays, has a sensitivity of the order of $10^{-13}$ erg cm$^{-2}$ s$^{-1}$ ≃ $6\times 10^{-11}$ GeV cm$^{-2}$ s$^{-1}$ (Aharonian, 2004). Therefore, MAGIC can detect the VHE gamma rays from Type-I civilizations only if ${\cal L}_{_{\rm I}}\sim 5\times 10^3 t_{_{\rm I}}\sim 5\times 10^6$ years.

From equations (37) and (38), it is evident that, for the chosen parameters, the VHE photons from Type-II and Type-III alien societies might be detected by the MAGIC instruments. One should clearly realize, however, that if the advanced civilizations do not manufacture black holes at the maximum rate (see equations (25)–(27)), the fluxes will be smaller. In any case, the sensitivity of the MAGIC facilities imposes a constraint on the minimum numbers of detectable civilizations: $N^{\gamma }_{_{\rm II, \min }}\simeq 60$ and $N^{\gamma }_{_{\rm III, \min }}\simeq 7.5\times 10^3$.

ETI from the dark side

All currently known elementary particles are described by the standard model plus Einstein gravity. The known particle species consist of the Higgs scalar, the fermions (quarks and leptons) and the mediators of the gravitational and gauge interactions. Counting all polarizations, this amounts to $N_{\rm sp} \sim 100$ field-theoretic particle species.

The standard model is an extraordinarily successful theory. It suffices to say that within the range of distances of approximately $10^{-17}$–$10^{28}$ cm, there is no experimental evidence invalidating it. Yet, we know that this theory is incomplete and that there must exist particle species beyond it.

Perhaps the most direct evidence for this is provided by dark matter. Although a part of the dark matter could reside in black holes (for a review, see Escrivà et al., 2022), almost certainly, this option too requires the existence of new particle species. The current experimental constraints tell us that, if lighter than ~TeV, the new particle species cannot interact with us via the known forces other than gravity. A potential exception is provided by hypothetical particles with very small electric charges.

Of course, new particles may interact with our species via some yet undiscovered forces. The only constraint on such interactions is that they be weak enough to have avoided experimental detection.

Nothing much beyond this is known about the hidden sector species. They can come in a number of interesting forms. For example, dark sectors can be organized in the form of many hidden copies of the standard model (Dvali, 2010a; Dvali and Redi, 2008) or as parallel branes separated from us in large extra dimensions (Arkani-Hamed et al., 1998).

However, their number is limited. There exists a universal upper bound (Dvali, 2010a):

(39)$$N_{\rm sp} \sim 10^{32}$$

on the total number of distinct particle species. This bound comes from the fact that species lower the fundamental scale of quantum gravity to (4) (Dvali, 2010a; Dvali and Redi, 2008). Then, the lack of observation of strong quantum gravity effects – such as the creation of micro black holes – at the LHC puts the lower bound $M_\ast \gtrsim$ TeV. This translates into the upper bound (39) on the total number of particle species. This number, of course, includes the known particle species of the standard model.
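
The logic of the bound can be made explicit with a two-line calculation. The sketch below assumes the standard species-scale relations $M_\ast = M_P/\sqrt {N_{\rm sp}}$ and $L_\ast = L_P\sqrt {N_{\rm sp}}$ (the form of equations (3) and (4) introduced earlier in the paper):

```python
import math

# Species scale for two values of N_sp (assuming M* = M_P / sqrt(N_sp)
# and L* = L_P * sqrt(N_sp)).
M_P = 1.22e19       # Planck mass, GeV
L_P = 1.6e-33       # Planck length, cm

for N_sp in (1e2, 1e32):
    M_star = M_P / math.sqrt(N_sp)
    L_star = L_P * math.sqrt(N_sp)
    print(f"N_sp = {N_sp:.0e}: M* ~ {M_star:.1e} GeV, L* ~ {L_star:.1e} cm")
# N_sp = 1e+02: M* ~ 1.2e+18 GeV, L* ~ 1.6e-32 cm (the estimate used above)
# N_sp = 1e+32: M* ~ 1.2e+03 GeV ~ TeV            (hence the bound (39))
```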

Currently, there are no model-independent restrictions on the ways the hidden species organize themselves into different sectors or on how they interact with one another. Given this vast freedom and our almost zero knowledge of the possible forms and evolutionary paths of intelligence, there is no reason to restrict ETI exclusively to creatures consisting of particles of the standard model. Thus, with our current knowledge, there is room for the existence of ~$10^{32}$ distinct dark sectors, each potentially producing more than one type of intelligent being.

On this extended landscape of possibilities, black holes offer a universal search tool. First, as explained, due to their maximal efficiency, black hole-based devices represent universal attractor points in the evolution of quantum computing. Second, the use of black holes as information-storage devices can produce observable signatures of the dark ETI. Due to the fact that Hawking radiation is democratic in particle species, the dark quantum computers will inevitably radiate the species of the standard model, such as neutrinos and photons.

Some comments are, however, in order. If the number of species $N_{\rm sp}$ is large, the characteristics of the lightest black holes will be affected. In pure Einstein gravity, quantum gravity becomes strong at the Planck scale $M_P$. That is, the interactions of particles at distances ~$L_P$ are fully dominated by gravity.

Correspondingly, the Planck scale sets both the mass, $M \sim M_P$, and the radius, $R \sim L_P$, of the lightest black hole. No lighter black hole can exist in nature within the Einstein theory of gravity.

This fact creates a technical obstacle for civilizations seeking to manufacture light black holes. For example, in order to produce an $M_P$-mass black hole, one needs to collide elementary particles with a centre-of-mass energy ~$M_P$ and an impact parameter ~$L_P$. This is highly non-trivial. For example, with the current technology available to our civilization, boosting particles to such high energies would require a particle accelerator of cosmic extent.

However, in a theory with a large number of particle species, the story is dramatically different. The scale of strong gravity is lowered to the scale $M_\ast$ given by (4). Correspondingly, the mass of the lightest black hole is now given by $M_\ast$, as opposed to $M_P$. Such a low scale can be much more easily accessible. It suffices to notice that, for example, in the extreme case of $N_{\rm sp} \sim 10^{32}$, such a black hole could even be manufactured by our current civilization at the LHC (Dvali and Redi, 2008).

In fact, a particular case of this is provided by a well-known prediction (Antoniadis et al., 1998) of the theory of large extra dimensions introduced in Arkani-Hamed et al. (1998). In this theory, the role of the additional particle species is played by the Kaluza-Klein tower of gravitons. Their number is precisely $N_{\rm sp} \sim 10^{32}$ (Dvali, 2010a).

The general message is that, with large $N_{\rm sp}$, black holes smaller than a certain critical radius $R_\ast$ are lighter than their counterparts in pure Einstein gravity (Dvali, 2010b). Below this size, the precise relation between M and R is theory-dependent. For example, in the theory of Arkani-Hamed et al. (1998) with d extra dimensions of size $R_\ast$, the black holes of radius $R < R_\ast$ and mass M obey the following relation:

(40)$$R^{1 + d} \sim {M\over M_\ast ^{2 + d}}.$$
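
To give a feel for the numbers, the following sketch evaluates (40) for hypothetical inputs; d = 2 extra dimensions, $M_\ast = 1$ TeV and a black hole mass of 10 TeV are our illustrative assumptions, not values fixed by the text:

```python
# Illustrative evaluation of relation (40), R^(1+d) ~ M / M*^(2+d),
# in natural units, converted to cm via hbar*c ~ 1.973e-14 GeV*cm.
HBARC = 1.973e-14                    # GeV * cm

def micro_bh_radius(M_GeV, M_star_GeV, d):
    """Radius (cm) of a black hole with R < R* in a theory with d extra dims."""
    R_natural = (M_GeV / M_star_GeV ** (2 + d)) ** (1.0 / (1 + d))  # in GeV^-1
    return R_natural * HBARC

print(f"R ~ {micro_bh_radius(1e4, 1e3, d=2):.1e} cm")   # ~4e-17 cm
```

Since (40) is an order-of-magnitude relation, the output should be read as a scale rather than a precise radius.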

The theory-independent phenomenological bound on the critical scale $R_\ast$ comes from short-distance tests of Newtonian gravity. The current bound of $R_\ast \lesssim 38\,\mu$m is provided by torsion-balance experiments (Lee et al., 2020).

In the light of the above, we must take into account that, for a large number of species, it becomes easier to manufacture small black holes. Correspondingly, with larger $N_{\rm sp}$, the compact storage and fast processing of information via black holes become more accessible even for less advanced ETIs.

Summary

The advancement of a civilization is directly linked with its ability to efficiently store and process information. It is therefore important to understand the fundamental limitations on this advancement imposed by our current understanding of the laws of nature. At the most fundamental and experimentally best-verified level, this understanding comes from the framework of quantum field theory.

Recent studies (Dvali, 2021a) show that the validity of quantum field theory imposes a universal upper bound on the information storage capacity of a device that can be composed of the corresponding quanta. The objects saturating this bound, the so-called 'saturons', are the most efficient storers of quantum information.

Interestingly, the universality of the gravitational interaction tells us that, among all possible hypothetical saturons, black holes have the highest information storage capacity (Dvali, 2021a).

Correspondingly, any sufficiently advanced civilization is ultimately expected to develop black hole-based quantum computers. This opens up an exciting prospect for SETI through the Hawking radiation from such computers.

Certain generic features emerge.

The estimated efficiency of information processing indicates that 'memory disks' that allow for a short information retrieval time (e.g. $t_q \lesssim 1$ s) must be based on microscopic black holes (e.g. of size $R \lesssim 10^{-18}$ cm). Correspondingly, the Hawking radiation from ETI must lie in a high-energy band that is potentially detectable by human devices.

We would like to emphasize the following point. As we have argued, it is reasonable to assume that the computation time for advanced ETI is of the order of, or shorter than, a second. The black holes that are able to provide such a short information retrieval time are lighter than $10^9$ g. This provides a unique smoking gun for our idea of SETI, since the signals coming from the evaporation of such artificial black holes have no competitors among natural ones. The reason is that naturally produced black holes with masses $<10^{15}$ g must be exclusively primordial, since no natural mechanisms are available for their production in the contemporary universe. Primordial black holes with such a low mass would have evaporated long ago. This leaves us with the exciting option that any Hawking signal in the proposed energy range must come from a black hole that has been artificially manufactured.

Even if ETI is entirely composed of totally new types of particle species, due to the universality of Hawking radiation, the emitted quanta will contain the known particles, such as neutrinos or photons. Thus, the ultimate efficiency of black hole quantum computing allows us to search for ETI even if they are entirely composed of some unknown quanta.

The main message of our paper is that new knowledge about ETI can be deduced by analysing the Hawking radiation from their quantum computers. In turn, the non-observation of such radiation can be translated into new limits on ETI.

We gave some indicative estimates. Analysing IceCube's VHE (>30 TeV) data, we showed that these instruments are able to detect the neutrino emission flux from black hole farms of Type-II and Type-III advanced civilizations. In particular, the current sensitivity and the observed VHE flux impose the constraints $2.5\times 10^3\leq N^{\nu }_{_{\rm II}}\leq 4.2\times 10^4$ and $2\times 10^6\leq N^{\nu }_{_{\rm III}}\leq 1.4\times 10^8$ on the numbers of civilizations.

We have also pointed out that the need for black hole computers can lead to accompanying signatures from the manufacturing process. For example, black hole factories will result in radiation from the high-energy collisions that are necessary for the formation of micro black holes.

Although our knowledge most likely covers only a limited number of the existing particle species, the above features are independent of this limitation. Moreover, a detection of Hawking radiation from ETI could indirectly widen our knowledge of hidden particles. This is due to the general fact that the larger the number of particle species $N_{\rm sp}$, the easier it becomes for any civilization to manufacture black holes (Dvali and Redi, 2008). This is true also for our civilization, which would have a chance of creating micro black holes at colliders, provided $N_{\rm sp} \sim 10^{32}$.

In summary, the universality of the laws of quantum physics and gravity allows us to draw some surprisingly powerful conclusions about ETI. We have argued that, along the evolutionary road of any long-lasting civilization, black hole-based quantum computers are natural attractor points. This opens up a new avenue for SETI.

The general smoking guns include potential detections of high-energy neutrino fluxes with an approximately thermal distribution. Another interesting but more exotic possibility, which requires additional clarification, is that farms of closely spaced black holes, which likely produce an extended high-temperature medium, may occasionally create and radiate away heavy composite structures.

Data

Data are available in the article and can be accessed via a DOI link.

Acknowledgements

The work of G.D. was supported in part by the Humboldt Foundation under a Humboldt Professorship Award, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2111-390814868 and Germany's Excellence Strategy under the Excellence Cluster Origins. Z.O. would like to thank the Arnold Sommerfeld Center, Ludwig Maximilians University and the Max Planck Institute for Physics for hospitality during the completion of this project. The research of Z.O. was partially supported by the EU fellowships for Georgian researchers, 2023 (57655523). Z.O. would also like to thank the Torino Astrophysical Observatory and Universitá degli Studi di Torino for hospitality while working on this project.

Competing interests

The authors report no conflict of interest.

References

Aartsen, MG, et al. (2014) Observation of high-energy astrophysical neutrinos in three years of IceCube data. Physical Review Letters 113, 101101.
Aharonian, FA (2004) Very High Energy Cosmic Gamma Radiation – A Crucial Window on the Extreme Universe. Singapore: World Scientific Publishing Co. Pte. Ltd.
Antoniadis, I, Arkani-Hamed, N, Dimopoulos, S and Dvali, GR (1998) New dimensions at a millimeter to a Fermi and superstrings at a TeV. Physics Letters B 436, 257, arXiv:hep-ph/9804398 [hep-ph].
Arkani-Hamed, N, Dimopoulos, S and Dvali, GR (1998) The hierarchy problem and new dimensions at a millimeter. Physics Letters B 429, 263, arXiv:hep-ph/9803315 [hep-ph].
Bekenstein, JD (1973) Black holes and entropy. Physical Review D 7, 2333.
Boyajian, TS, et al. (2016) Planet Hunters IX. KIC 8462852 – where's the flux? Monthly Notices of the Royal Astronomical Society 457, 3988.
Cameron, ACW (1963) Communicating with extraterrestrial intelligence on other worlds. Sky and Telescope 26, 258.
Drake, FD (1961) Discussion of Space Science Board. National Academy of Sciences, Conference on Extraterrestrial Intelligent Life, November 1961, Green Bank, WV.
Dvali, G (2010a) Black holes and large N species solution to the hierarchy problem. Fortschritte der Physik 58, 528, arXiv:0706.2050 [hep-th].
Dvali, G (2010b) Nature of microscopic black holes and gravity in theories with particle species. International Journal of Modern Physics A 25, 602.
Dvali, G (2016) Non-thermal corrections to Hawking radiation versus the information paradox. Fortschritte der Physik 64, 106, arXiv:1509.04645 [hep-th].
Dvali, G (2017) Critically excited states with enhanced memory and pattern recognition capacities in quantum brain networks: lesson from black holes, arXiv:1711.09079 [quant-ph].
Dvali, G (2018a) Area law microstate entropy from criticality and spherical symmetry. Physical Review D 97, 105005, arXiv:1712.02233 [hep-th].
Dvali, G (2018b) Black holes as brains: neural networks with area law entropy. Fortschritte der Physik 66, 1800007, arXiv:1801.03918 [hep-th].
Dvali, G (2018c) A microscopic model of holography: survival by the burden of memory, arXiv:1810.02336 [hep-th].
Dvali, G (2021a) Entropy bound and unitarity of scattering amplitudes. Journal of High Energy Physics 3, 126.
Dvali, G (2021b) Unitarity entropy bound: solitons and instantons. Fortschritte der Physik 69, 2000091, arXiv:1907.07332 [hep-th].
Dvali, G (2021c) Area law saturation of entropy bound from perturbative unitarity in renormalizable theories. Fortschritte der Physik 69, 2000090, arXiv:1906.03530 [hep-th].
Dvali, G and Gomez, C (2009) Quantum information and gravity cutoff in theories with species. Physics Letters B 674, 303, arXiv:0812.1940 [hep-th].
Dvali, G and Gomez, C (2013) Black hole's quantum N-portrait. Fortschritte der Physik 61, 742.
Dvali, G and Gomez, C (2014) Black holes as critical point of quantum phase transition. The European Physical Journal C 74, 2752.
Dvali, G and Panchenko, M (2016) Black hole based quantum computing in labs and in the sky. Fortschritte der Physik 64, 569.
Dvali, G and Redi, M (2008) Black hole bound on the number of species and quantum gravity at CERN LHC. Physical Review D 77, 045027, arXiv:0710.4344 [hep-th].
Dvali, G and Sakhelashvili, O (2022) Black-hole-like saturons in Gross-Neveu. Physical Review D 105, 065014, arXiv:2111.03620 [hep-th].
Dvali, G and Venugopalan, R (2022) Classicalization and unitarization of wee partons in QCD and gravity: the CGC-black hole correspondence. Physical Review D 105, 056026.
Dvali, G, Flassig, D, Gomez, C, Pritzel, A and Wintergerst, N (2013) Scrambling in the black hole portrait. Physical Review D 88, 124041.
Dvali, G, Franca, A, Gomez, C and Wintergerst, N (2015) Nambu-Goldstone effective theory of information at quantum criticality. Physical Review D 92, 125002.
Dvali, G, Michel, M and Zell, S (2019) Finding critical states of enhanced memory capacity in attractive cold bosons. EPJ Quantum Technology 6, 1, arXiv:1805.10292 [quant-ph].
Dvali, G, Eisemann, L, Michel, M and Zell, S (2020) Black hole metamorphosis and stabilization by memory burden. Physical Review D 102, 103523.
Dvali, G, Kaikov, O and Bermúdez, JSV (2022a) How special are black holes? Correspondence with saturons in generic theories. Physical Review D 105, 056013, arXiv:2112.00551 [hep-th].
Dvali, G, Kühnel, F and Zantedeschi, M (2022b) Vortices in black holes. Physical Review Letters 129, 061302, arXiv:2112.08354 [hep-th].
Dyson, F (1960) Search for artificial stellar sources of infrared radiation. Science 131, 1667.
Escrivà, A, Kuhnel, F and Tada, Y (2022) Primordial black holes, arXiv:2211.05767 [astro-ph.CO].
Gelis, F, Iancu, E, Jalilian-Marian, J and Venugopalan, R (2010) The color glass condensate. Annual Review of Nuclear and Particle Science 60, 463.
Greisen, K (1966) End to the cosmic-ray spectrum? Physical Review Letters 16, 748.
Hawking, SW (1974) Black hole explosions? Nature 248, 30.
Hsiao, TTY (2021) A Dyson sphere around a black hole. Monthly Notices of the Royal Astronomical Society 506, 1723.
Kardashev, NS (1964) Transmission of information by extraterrestrial civilizations. Soviet Astronomy 8, 217.
Kennicutt, RC and Evans, NJ (2012) Star formation in the Milky Way and nearby galaxies. ARA&A 50, 531.
Lee, JG, Adelberger, EG, Cook, TS, Fleischer, SM and Heckel, BR (2020) New test of the gravitational 1/r² law at separations down to 52 μm. Physical Review Letters 124, 101101.
Lineweaver, CH, Fenner, Y and Gibson, BK (2004) The galactic habitable zone and the age distribution of complex life in the Milky Way. Science 303, 59.
Lloyd, S (2000) Ultimate physical limits to computation, arXiv:quant-ph/9908043.
Lu, J, Lee, K and Xu, R (2020) Advancing pulsar science with the FAST. Science China Physics, Mechanics, and Astronomy 63, 229531.
Maccone, C (2012) Societal statistics by virtue of the Statistical Drake Equation. Acta Astronautica 78, 3.
MacGibbon, JH and Webber, BR (1990) Quark- and gluon-jet emission from primordial black holes: the instantaneous spectra. Physical Review D 41, 3052.
Osmanov, ZN (2016) On the search for artificial Dyson-like structures around pulsars. International Journal of Astrobiology 15, 127.
Osmanov, ZN (2018) Are the Dyson rings around pulsars detectable? International Journal of Astrobiology 17, 112.
Osmanov, ZN (2021a) On the resolution of a weak Fermi paradox. International Journal of Astronautics and Aeronautical Engineering 6, 49.
Osmanov, ZN (2021b) From the SpaceX Starlink megaconstellation to the search for Type-I civilizations. Journal of the British Interplanetary Society 74, 193.
Osmanov, ZN and Berezhiani, VI (2018) On the possibility of the Dyson spheres observable beyond the infrared spectrum. International Journal of Astrobiology 17, 356.
Osmanov, ZN and Berezhiani, VI (2019) Anomalous variability of Dyson megastructures. Journal of the British Interplanetary Society 72, 254.
Page, DN (1993) Information in black hole radiation. Physical Review Letters 71, 3743.
Petigura, EA, Howard, AW and Marcy, GW (2013) Prevalence of Earth-size planets orbiting Sun-like stars. Proceedings of the National Academy of Sciences of the United States of America 110, 19273.
Prantzos, N (2013) A joint analysis of the Drake equation and the Fermi paradox. International Journal of Astrobiology 12, 246.
Sagan, C (1963) Direct contact among galactic civilizations by relativistic interstellar spaceflight. Planetary and Space Science 11, 485.
Sagan, C (1973) The Cosmic Connection: An Extraterrestrial Perspective. New York: Dell Publishing Co. Inc.
Shklovskii, I and Sagan, C (1966) Intelligent Life in the Universe. New York: Random House.
Smart, JM (2012) The transcension hypothesis: sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica 78, 55.
Tarter, J (2001) The search for extraterrestrial intelligence (SETI). ARA&A 39, 511.
Vidal, C (2011) Black holes: attractors for intelligence?, arXiv:1104.4362 [physics.gen-ph].
Wright, JT (2020) Dyson spheres. Serbian Astronomical Journal 200, 1.
Zatsepin, GT and Kuzmin, VA (1966) Upper limit of the spectrum of cosmic rays. JETP Letters 4, 78.
Zuckerman, B (2022) Infrared and optical detectability of Dyson spheres at white dwarf stars. Monthly Notices of the Royal Astronomical Society 514, 227.