
A new industrial revolution for a sustainable energy future

Published online by Cambridge University Press:  13 November 2013

Arun Majumdar

Abstract

Access to affordable and reliable energy has been a cornerstone of the world’s increasing prosperity and economic growth since the beginning of the Industrial Revolution. Our use of energy in the 21st century must also be sustainable. This article provides a techno-economic snapshot of the current energy landscape and identifies several research and development opportunities and challenges, especially where they relate to materials science and engineering, to create the foundation for this new industrial revolution.

Technical Feature
Copyright © Materials Research Society 2013

Introduction

Former US Energy Secretary Steven Chu and I co-authored a paper last summer that appeared in Nature. 1 In this paper, we talked about a new industrial revolution. Let me explain what we meant. Sit back for a moment and imagine yourself back when the United States was being created, in the time of George Washington and Thomas Jefferson. What was life like? People were traveling on horses. They were using whale blubber as fuel for lighting; that was the state-of-the-art technology.

The last 250 years have been perhaps the most remarkable time in human history because our lives have been transformed, and this transformation is what we called in our paper “from horse power to horsepower.” 1 We drive to the grocery store in cars that have the equivalent of 300 horses, and we travel across the nation in about five or six hours in planes powered by the equivalent of 100,000 horses, a trip that would otherwise have taken a month or more. That is a remarkable transition in the history of humankind, and that was just for mobility.

Now let us look at the electricity grid. It’s only about 110 years old and, according to the US National Academy of Engineering, it is the greatest engineering achievement of the 20th century. At almost every moment in our lives, we are receiving the benefit of 250 years of industrial revolution, and that has led to an immense increase in productivity and economic prosperity. Our gross domestic product per capita has gone up exponentially due to the Industrial Revolution. When we plot the data on how much energy we have used, it has also gone up exponentially. The Industrial Revolution was all about how we sourced, distributed, and used energy, and it was predominantly fossil fuel-based energy.

The question we are asking today is how can we sustain our economic growth? The population has exploded. We had 700 million people in the world at the beginning of the Industrial Revolution, and today we have 7 billion. We have about 3 billion people in the world who have either no or very limited access to electricity. In the next 100 years, we are going to add another 3 billion people, mostly in regions of the world that are not yet developed. If we want everyone to increase their economic prosperity and have access to energy similar to what we have in the United States, do we have the resources? Since the Industrial Revolution was based on the use of fossil fuels, do we have enough fossil fuels for the future to sustain the economic growth of a growing world population?

A lot of people talk about peak oil. If you look at the data, you will not find any peaks, because the technology for discovery and extraction of fossil fuels keeps improving and keeping pace with, and in fact staying ahead of, demand. 1 So our oil and gas reserves keep increasing. It is fair to say that at least for the next 75 or 100 years, we will have enough fossil fuel, and our technology for discovery and production will keep improving. The consequence of using these fossil fuels, of course, is global warming and climate change, which in the scientific community is a well-settled matter.

What is the consequence of global warming and greenhouse gas emissions? We all know that the average temperature has risen 0.8°C since the beginning of the Industrial Revolution. If I stop a layperson on the streets of San Francisco and explain, “The average temperature rise is 0.8 degrees,” it is very hard for them to perceive the impact. In our homes and local weather, we are used to fluctuations of more than 10 degrees. So they may say, “Who cares about 0.8 degrees?” But the average tells only a small part of the story.

Let’s look at the temperature deviation from the average over summer time (see Figure 1). It follows a Gaussian distribution and can be normalized by the standard deviation. Since the 1960s, the whole distribution has moved toward higher temperatures, and the distribution has broadened. While the average has certainly increased, what is most important to note are the tails of the distribution. The tails on the hotter side are reaching 3 to 5 times the standard deviation at probabilities that we would never have thought possible. These tails are what we call “heat waves,” and they have a far greater impact on our lives than the shift in the average. Last year we had a heat wave in the Midwest; this year we may have a cooler summer in the Midwest, but a few years ago, we had the heat wave in Moscow where thousands of people died; before that, it was in Europe. These hotspots move around like a little bubble in a carpet; you press it down here, and it pops up somewhere else in the world. And these heat waves can be devastating for our ecosystem, livestock, and agriculture, with major economic impact.

Figure 1. Evolution over six decades (1950–2011) of the statistical distribution of the deviation of local summer temperatures in the Northern Hemisphere from their local average temperatures. Blue is colder than average, whereas red is hotter than average. The movie (http://www.youtube.com/watch?v=zSHiEawPRiA) 2 shows that the distribution (a–b) not only moves to the right, suggesting hotter temperatures, but also broadens, and the tails reach 3–5 times the standard deviation at probabilities that are an order of magnitude higher (b) than those six decades ago (a). 3

Let’s take stock of what’s going on and where we are going in the future. What is the total cumulative CO2 emission that we have released to the atmosphere since the beginning of the Industrial Revolution over the last 250 years? If you do the numbers, it’s approximately a trillion tons of CO2. The lifetime of CO2 in the atmosphere is a few hundred years. Some of the CO2 gets absorbed in the ocean, thereby acidifying it. 4 If I take today’s known reserves of fossil fuel and burn all of it, how much more CO2 can we emit? The answer is about 3 trillion tons, three times more. So remember, a trillion tons over 250 years of Industrial Revolution. You can then ask: How long would it take to emit 3 trillion tons based on today’s projection of economic growth and fossil fuel energy use in a business-as-usual scenario? The answer is 75 to 100 years. So three times the emissions in roughly a third of the time: nearly a tenfold increase in the rate of emission.

One may then ask, “What is the dollar value of those 3 trillion tons when the carbon is in the form of fossil fuels?” The answer is tens of trillions of dollars. So our society is often given the following choice: “Should we keep those trillions of dollars in the earth and not use them for our economic growth, or should we use them for economic growth and ignore the environment?” That is a false choice because it extrapolates the past and does not account for the ability of the human mind to explore, invent, and create a different future. There is a well-known quote from Sheikh Ahmed Yamani, former oil minister of Saudi Arabia: “The Stone Age did not end because we ran out of stones.” It ended because we transitioned to better solutions. We need research based on science and engineering to find solutions that are cheaper, better, and faster than what we use today. That was the purpose of creating the US Advanced Research Projects Agency-Energy (ARPA-E): to catalyze innovations that are too risky for the private sector to initiate, but that, if successful, would create the foundations for entirely new industries that do not exist today.

Energy systems

There are two kinds of energy systems. One is stationary power, such as the electricity infrastructure for generation, transmission, and distribution, and natural gas distribution for heating and manufacturing. The other is transportation, including oil, gasoline, and diesel and their use in mobility. Aside from electric vehicles—which are a very small fraction of vehicles today—these two systems are largely independent of each other.

Shale gas

The biggest change that has happened over the last few years in the United States is the discovery and extraction of low-cost shale gas. From a materials perspective, here are some of the challenges. Globally, there are large reserves of shale gas. The United States is ahead of other nations in using horizontal drilling and hydraulic fracturing (“fracking”) because we created the infrastructure for the extraction and use of shale gas. Many shale formations contain not only shale gas but also shale oil. China has the biggest reserves of shale gas, and if China starts producing—it has not yet built the infrastructure for horizontal drilling and fracking—this could alter the geopolitical nature of energy.

Shale is a highly impermeable rock. You have to go down about a mile or more into shale formations that contain organic matter, then go horizontally a couple of miles. You have to use water at high pressure to fracture the shale rock and increase the permeability such that natural gas can flow at a sufficiently high rate. The natural gas is adsorbed on the surface of pores in the shale rock, which is coated with an organic wax-like material called kerogen. The adsorption–desorption process follows Langmuir isotherms, and once you increase the permeability of the rock, the natural gas desorbs and flows through the pores into the fractures, from which it is collected and brought to the surface.
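
To make the flow physics concrete, here is a minimal sketch, with illustrative numbers that are assumptions rather than values from this article: a Langmuir isotherm for the gas adsorbed on the kerogen-coated pore walls, and a kinetic-theory Knudsen number that indicates when viscous flow gives way to slip and Knudsen flow as the pore pressure drops.

```python
import math

# Minimal sketch with assumed, illustrative parameters (Langmuir constants,
# temperature, methane kinetic diameter); not data from the article.

K_B = 1.380649e-23   # Boltzmann constant, J/K
D_CH4 = 3.8e-10      # assumed kinetic diameter of methane, m
T = 350.0            # assumed reservoir temperature, K

def langmuir_adsorbed_gas(p_pa, v_l=0.01, p_l=5e6):
    """Adsorbed gas content (arbitrary units) vs. pore pressure.
    v_l = Langmuir volume, p_l = Langmuir pressure (assumed)."""
    return v_l * p_pa / (p_l + p_pa)

def mean_free_path(p_pa, t=T, d=D_CH4):
    """Kinetic-theory mean free path of the gas molecules, m."""
    return K_B * t / (math.sqrt(2.0) * math.pi * d**2 * p_pa)

def knudsen_number(p_pa, pore_diameter):
    """Kn = mean free path / pore size; Kn above ~0.01 means slip/Knudsen effects."""
    return mean_free_path(p_pa) / pore_diameter

for p_mpa in (25.0, 5.0, 1.0):        # pore pressure declining as the well produces
    p = p_mpa * 1e6
    for d_pore in (10e-9, 100e-9):    # 10 nm and 100 nm pores
        print(f"p = {p_mpa:5.1f} MPa, pore = {d_pore*1e9:5.0f} nm, "
              f"adsorbed = {langmuir_adsorbed_gas(p):.4f}, "
              f"Kn = {knudsen_number(p, d_pore):.3f}")
```

The point of the sketch is simply that as the pore pressure falls, the adsorbed gas desorbs (the Langmuir term shrinks) and the Knudsen number grows, so the flow regime itself changes over the life of the well.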

After you drill a well, the natural gas flow rate first peaks, and then there is a long decay. The fracking process has improved over time, and so this peak keeps rising to higher levels. What we don’t understand is the nature of the decay and the tail. When will the tail stop, and what will the flow rates be over time? This is a coupled solid-mechanics and fluid-mechanics problem because the permeability of the shale rock depends on the pore pressure. One also finds that because the pores are 10–100 nm in size, which is on the order of the mean free path of the gas molecules, one encounters a combination of Knudsen flow and viscous flow. No one really knows how long the tail is going to last because we don’t fully understand the coupling between the solid mechanics and the fluid mechanics. In fact, the simulation models that are used to model the solid and fluid mechanics of shale rock do not include that coupling. Geophysicists are learning now because shale was essentially ignored for fuel extraction for a long time. So this is a good time to start working in this field.
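
For the peak-and-tail behavior itself, the industry typically fits an empirical Arps-type decline curve; that is a curve fit, not the coupled solid–fluid model the paragraph above calls for, and the parameter values below are invented for illustration.

```python
# Empirical Arps hyperbolic decline: q(t) = q_i / (1 + b*D_i*t)^(1/b).
# q_i (initial rate), D_i (initial decline rate), and b are assumed values.

def arps_rate(t_years, q_i=10.0, d_i=1.2, b=1.4):
    """Gas rate (e.g., MMscf/day) after t_years."""
    return q_i / (1.0 + b * d_i * t_years) ** (1.0 / b)

def cumulative(t_years, dt=0.01):
    """Crude numerical integral of the rate, in MMscf."""
    steps = int(t_years / dt)
    return sum(arps_rate(i * dt) * dt * 365.0 for i in range(steps))

for t in (1, 5, 10, 30):
    print(f"year {t:2d}: rate = {arps_rate(t):5.2f} MMscf/day, "
          f"cumulative = {cumulative(t):7.0f} MMscf")
```

How far such a fitted tail can be trusted is exactly the open question: without the solid–fluid coupling, the long-time behavior is extrapolation rather than physics.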

Let me also point out that not all shales are equal. Some shales fracture, but others do not, regardless of the pressure. The latter show viscoplastic flow because of the high clay content of the material. Clay is a two-dimensional material, and its concentration can be high. When you try to increase the stress, it induces slip but not fracture. So there is organic matter trapped in these types of shale, but it is difficult to release it. The challenge for the fracture mechanics community is to think of ways to fracture viscoplastic shale rock and gain access to the natural gas that cannot be extracted today. There is a lot to be learned, and there is a lot of research still remaining.

In the energy sector, cost and scale are everything. If something does not come down in cost to a competitive level without subsidies, and if it does not scale up in volume, it does not matter. Research in shale gas production has been going on for the last 25 or 30 years, but the shale gas revolution, in terms of low-cost extraction at scale, is only about five or six years old. One of the consequences of finding abundant shale gas and producing more than expected is that the price is quite low. Over the last couple of years, there has been an ongoing transition in the electricity generation sector from coal to natural gas. The use of natural gas in combined cycle turbines is the cheapest way to produce electricity, at about 5 cents a kilowatt hour, not just because of cheap natural gas but because these turbines are more than 60% efficient.
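
As a rough, back-of-envelope check of that 5-cent figure, here is a sketch with an assumed gas price and assumed capital and operating costs (the 60% efficiency is the number quoted above; everything else is illustrative).

```python
# Back-of-envelope cost of combined-cycle electricity; gas price and
# capital/O&M adder are assumed values, not from the article.

BTU_PER_KWH = 3412.0
gas_price_per_mmbtu = 3.5       # assumed $/MMBtu for shale-era natural gas
efficiency = 0.60               # combined-cycle efficiency quoted in the text
capital_and_om_cents = 2.5      # assumed levelized capital + O&M, cents/kWh

heat_rate_btu = BTU_PER_KWH / efficiency              # Btu of fuel per kWh out
fuel_cents = gas_price_per_mmbtu * heat_rate_btu / 1e6 * 100.0
print(f"heat rate ~ {heat_rate_btu:.0f} Btu/kWh")
print(f"fuel cost ~ {fuel_cents:.1f} cents/kWh")
print(f"total     ~ {fuel_cents + capital_and_om_cents:.1f} cents/kWh")
```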

Renewables

So where do the renewables stand? Wind is at about 5 to 10 cents a kilowatt hour today, and sometimes it is actually cheaper than coal; note these are unsubsidized costs. Wind cannot quite compete with electricity from natural gas, but natural gas is playing a very important role in the renewables industry. Natural gas combined cycle turbines can also ramp up very quickly—about 50 MW/min. This allows them to balance intermittent sources such as wind, which need these ramp rates for predictable electricity dispatch. Natural gas is considered a bridge fuel because it is giving renewables time to come down in cost and become competitive without subsidies. But to achieve this, we need to make sure that there is a sufficient market for renewables for this scaling to occur, which would allow their cost to come down further.

But if you were to scale up wind and solar, what are the materials issues? In wind turbines, the mechanical-to-electrical energy conversion is done with permanent magnets. Most of these systems use neodymium- and dysprosium-based iron boride magnets. The problem is that 95% of the supply of such rare earths comes from China, and there is a genuine supply risk because China’s own domestic demand is increasing as well.

We started a program in ARPA-E to create rare-earth-free magnets that are not just equivalent, but actually better than neodymium-based magnets. This program funded a portfolio of approaches, some of which were tried in the 1970s and early 1980s and abandoned after neodymium iron boride magnets were discovered and came to dominate the market. Looking back at the literature on many of these abandoned approaches, 5 you see certain regions of the phase diagram that you could access, but the materials were difficult to make in bulk form. Take, for example, Fe16N2. Its energy density is even higher than that of neodymium iron boride magnets. Fe16N2 magnets have now been demonstrated in thin-film form but not in bulk form. There are groups at the University of Minnesota, Case Western Reserve University, and Oak Ridge National Laboratory trying to make it in bulk, which is a non-trivial problem to solve. That is why research is needed. If they can solve this problem and the result is cost-competitive, it will be a game changer. This is an example where materials science can change the ballgame, if you can come up with a different permanent magnet to replace neodymium- and dysprosium-based iron boride magnets.

Now let me talk about solar energy. Solar has shown the fastest downward trend in electricity cost that we have seen in the energy business. It is now about 10 to 15 cents a kilowatt hour at the utility scale, and the question remains: Can it go down further? The Department of Energy started the “SunShot Initiative.” Just like President Kennedy created the Moon Shot—to develop within a decade a way to go to the moon and return safely—the idea of SunShot is to reduce the unsubsidized cost of solar electricity generation to 5 cents a kilowatt hour within this decade. I think there’s a very good chance we’ll get there.

Figure 2 shows the cost of residential solar photovoltaic (PV) installations on the left-hand side and commercial- and utility-scale solar on the right-hand side. All of these have come down; in fact, they have come down by almost a factor of 2 over the last four years. This is mostly due to a reduction in PV module cost, driven by global market conditions as well as improvements in manufacturing processes. The balance of system (BOS) cost includes power electronics, installation, permitting, and labor. Currently, the BOS cost is actually higher than that of the panel. You could give away the panel for free today, and you would still not reach 5 to 6 cents a kilowatt hour. So you can ask the question: What technological knobs do we have to reduce the balance of system cost?
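
To see how the installed cost in dollars per watt in Figure 2 maps onto cents per kilowatt hour, here is a deliberately simplified sketch; the capacity factor, fixed charge rate, and O&M adder are assumptions, not numbers from this article.

```python
# Very simplified levelized cost: annualized capex / annual energy + O&M.
# Capacity factor, fixed charge rate, and O&M are assumed illustrative values.

def lcoe_cents_per_kwh(installed_dollars_per_w, capacity_factor=0.20,
                       fixed_charge_rate=0.08, om_cents=0.5):
    annual_kwh_per_w = capacity_factor * 8760.0 / 1000.0   # kWh per W per year
    capex_cents = installed_dollars_per_w * fixed_charge_rate * 100.0 / annual_kwh_per_w
    return capex_cents + om_cents

for cost_per_w in (4.0, 2.0, 1.0):
    print(f"${cost_per_w:.2f}/W installed -> ~{lcoe_cents_per_kwh(cost_per_w):.1f} cents/kWh")
```

Under these assumptions, getting near the 5-to-6-cent range requires a fully installed cost of roughly a dollar per watt, which is why the module cost alone cannot get us there.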

Figure 2. Evolution from the 4th quarter of 2009 to the 4th quarter of 2012 of the cost (in 2010 $US per W DC electricity produced) of fully installed solar photovoltaic systems for residential, commercial, and utility scale applications. Courtesy: National Renewable Energy Laboratory. Note: c-Si, crystalline silicon; OH, overhead.

One is to increase the efficiency of the cells, and another is to reduce the weight, because both affect installation. How much headroom do we have? For single-junction cells, there is something called the Shockley–Queisser limit. The practically achievable efficiencies at this limit are about 30% for the bandgaps we use today. So how far are we from this limit for production-level cells? For crystalline silicon, production-level cells are at about 24% efficiency. So there is still some room there, but we are getting close.

What about the others? Copper indium gallium selenide (CIGS), cadmium telluride, and multicrystalline silicon are all at about 13–15% efficiency. There is no physical law that says these cannot exceed 20%, but it is an issue of the lifetime of the photogenerated carriers, which depends on the dislocation density and the interface quality. These have to be improved to achieve efficiencies of 20% or more. Within SunShot, there is a small subinitiative called the Michael Jordan program that has the goal of increasing the efficiency of CdTe cells to 23%. 6 If you can get to 23% efficiency in thin films on a cheap substrate such as metal foil, thin glass, or plastic, that is a game changer, because it will bring down the cost of the overall system. The levelized cost of electricity would then approach 5 to 6 cents a kilowatt hour or perhaps even lower.

Alta Devices has a solar cell that consists of a III–V material on plastic, which exhibits 28.8% efficiency, and the company is now trying to go into production. If you can create multijunction cells at a cost of about 60 or 70 cents a watt, that’s a total game changer, and that’s where research ought to be done.

The grid

We all know that solar and wind are intermittent sources. Grid-level storage could be very helpful in firming up these resources and enabling predictable dispatch. The cheapest way to store electricity today is pumped hydro, where water is pumped up to a reservoir and run back down through turbines, with almost 90% round-trip efficiency. 7 The capital expenditure of doing so is about $100 a kilowatt hour, and with the round-trip efficiency and the number of cycles performed, the additional levelized cost for storage is 2 to 2.5 cents a kilowatt hour. Of course, it is difficult to use pumped hydro everywhere, so we put up a challenge as part of ARPA-E to look at grid-level storage by means other than pumped hydro—with compressed air, for example, or electrochemical storage, or superconducting magnetic energy storage, or flywheels. 8 We were technology agnostic, but if someone can make systems that get to $100 a kilowatt hour, that will be game changing.
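
Here is a minimal sketch of how those numbers fit together; the lifetime cycle count and the charging price are assumptions, while the $100 per kilowatt hour and roughly 90% round-trip efficiency are the figures quoted above.

```python
# Levelized storage adder = amortized capex per delivered kWh + cost of
# round-trip losses. Cycle life and charging price are assumed values.

capex_per_kwh = 100.0      # $ per kWh of storage capacity (quoted above)
round_trip_eff = 0.90      # round-trip efficiency (quoted above)
lifetime_cycles = 5000     # assumed: roughly daily cycling for ~15 years
charge_price_cents = 4.0   # assumed off-peak charging price, cents/kWh

capex_adder = capex_per_kwh * 100.0 / (lifetime_cycles * round_trip_eff)
loss_adder = charge_price_cents * (1.0 / round_trip_eff - 1.0)
print(f"capex adder ~ {capex_adder:.1f} cents per delivered kWh")
print(f"loss adder  ~ {loss_adder:.1f} cents per delivered kWh")
print(f"total adder ~ {capex_adder + loss_adder:.1f} cents per delivered kWh")
```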

There are many examples of such attempts I can talk about, but I don’t have the time to do so. So I will pick this one at the Massachusetts Institute of Technology, in Yet-Ming Chiang’s laboratory, which spun out into a company called 24M. 9 They took the best of the chemistry of lithium ions and changed the architecture into a flow battery using a conducting fluid. Now they’re trying to get to about $60 a kilowatt hour. Even if they reach $100/kWh, it will make an impact. But, they would have to compete with all the different approaches that are being pursued by the other ARPA-E funded teams.

Storage is important because it can enable intermittent sources of electricity to be dispatched in a predictable manner, which would allow their integration with the grid. The grid itself poses many materials issues, including power electronics. If you go around the country, you will find that this field has been largely ignored by many electrical engineering departments; I hope it is emphasized more in the future. Power electronics is not just about the grid—it is necessary for industrial motors, automotive applications, and lighting, for example. The important issues in terms of cost and reliability in light-emitting diodes are the packaging and power electronics needed for AC-to-DC conversion.

What are the issues in power electronics? Power electronic transistors can switch at a certain frequency, ω. This frequency controls the impedance of the inductors (jωL, where j is the imaginary unit √−1 and L is the inductance) and the capacitors (1/(jωC), where C is the capacitance), and these devices are integrated with the transistors into various circuit topologies to create switching power conversion devices. The push is to go to higher and higher frequencies. Why? Because if we increase the frequency, we can use smaller inductors and capacitors for the same impedance, and thereby reduce the cost of the overall system. If we go up in frequency, the capacitor size can be reduced, and we can possibly go from electrolytic capacitors to solid-state capacitors, which are much more reliable. We can integrate capacitors, inductors, and switches all on the same platform and thereby reduce cost and increase reliability.
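
The scaling argument can be made explicit. For a fixed target impedance, the required inductance and capacitance both fall as 1/ω, so moving from 60 Hz to tens of kilohertz shrinks the passive components by roughly three orders of magnitude; the 10-ohm target impedance below is an arbitrary illustrative value.

```python
import math

# For a fixed impedance magnitude Z: |Z_L| = omega*L    ->  L = Z/omega
#                                    |Z_C| = 1/(omega*C) -> C = 1/(omega*Z)

def passives_for_impedance(freq_hz, z_ohm=10.0):
    omega = 2.0 * math.pi * freq_hz
    inductance = z_ohm / omega            # henries
    capacitance = 1.0 / (omega * z_ohm)   # farads
    return inductance, capacitance

for f in (60.0, 50e3):
    l_h, c_f = passives_for_impedance(f)
    print(f"f = {f:8.0f} Hz: L = {l_h*1e3:8.3f} mH, C = {c_f*1e6:8.2f} uF")
```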

The question then is how can we go to a higher frequency? What are the issues? We have to use different materials because silicon is not ideal. The goal is to use wide-bandgap semiconductors: gallium nitride, silicon carbide, zinc oxide, and hopefully diamond if we can dope it. The challenge is to manufacture high-quality wide-bandgap semiconductors at low cost and fabricate power switching devices. For power conversion, we also need soft magnets for the inductors. The US government has invested a lot in hard magnets because of data storage, but not in soft magnets. As we go up in frequency, the losses also increase. These losses originate from eddy currents and from the motion of domain walls. There is a lot of innovation going on, especially using nanostructured soft magnetic materials in which the losses are low, 10 because single-domain magnetic particles are encapsulated in an insulating matrix, reducing both types of loss. If these can be integrated with wide-bandgap semiconductors, one can make a substantial impact on the cost and performance of power-conversion devices and systems.

The US grid uses the same architecture of centralized generation, a transmission network, and a distribution system that was developed by Tesla, Edison, and their industrial partners. We have about a trillion dollars worth of assets in the grid, and it will be very difficult to change it any time soon. Many of these assets are getting old. For example, we currently have transformers within a distribution substation that handle about a megawatt of electrical power. These drop the voltage from hundreds of kilovolts to tens of kilovolts. They weigh about 8000 pounds each and operate at 60 hertz. The average age of these transformers in the United States is about 42 years, two years beyond their projected lifespan. So we are living on borrowed time. The question we asked in ARPA-E is whether we should keep installing the same transformers or whether there is a better way to achieve power conversion. And for that you need power transistors that can switch at high frequencies.

One of the examples from the ARPA-E program on power electronics is from the company Cree. 11 It is a silicon carbide transistor that can handle a 15 kilovolt drop in about 200 microns of silicon carbide, so the material quality has to be excellent. It can also handle about 100 amps of current. This means 1.5 megawatts of electrical power can be switched by a single transistor, which then becomes the heart of a switching power conversion unit. The switching speed they’re shooting for is 50 kilohertz, not 60 hertz, and if you could do that, the whole transformer—because the inductor size now goes down—could be about 100 pounds, not 8000 pounds. That’s the kind of transformative technology that will provide better solutions in the future.
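
The quoted device numbers are easy to sanity-check. The arithmetic below uses only the figures in the paragraph above; the silicon carbide critical-field value in the comment is a typical literature number and is not from this article.

```python
# Quick arithmetic check of the quoted SiC switch numbers.

voltage_v = 15e3       # blocking voltage (quoted above)
thickness_m = 200e-6   # drift-layer thickness (quoted above)
current_a = 100.0      # current handling (quoted above)

field_mv_per_cm = voltage_v / thickness_m / 1e8   # V/m -> MV/cm
power_mw = voltage_v * current_a / 1e6
print(f"average field  ~ {field_mv_per_cm:.2f} MV/cm "
      "(typical 4H-SiC critical field is ~2-3 MV/cm, about 10x that of silicon)")
print(f"switched power ~ {power_mw:.1f} MW per transistor")
```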

Cooling

About 75 percent of the electricity from the US grid goes to homes and commercial buildings. Roughly 40–50 percent of the loads in our buildings are cooling and heating. Based on an energy analysis, we showed that the primary energy use of the cooling units is about a factor of 10 away from the theoretical limit. 12 We also conjectured that there could be pathways that get to within a factor of 5 of the theoretical limit. So the question ARPA-E asked the technical community was: Could you reduce the primary energy consumption in cooling not incrementally, by 10 or 20 percent, but by a factor of 2? That was one motivation.

The other motivation is a very interesting issue. With the introduction of the Montreal Protocol in the late 1980s, chlorofluorocarbons (CFCs) were phased out as refrigerants to reduce ozone depletion, and hydrofluorocarbons (HFCs) were introduced instead. But it so happens that the global warming potential of these HFCs is about 2000–3000 times that of CO2. A paper in the Proceedings of the National Academy of Sciences 13 suggested that, based on projections of air conditioner and refrigerator use, about 10–45% of global warming by 2050 will be due to HFC refrigerants, where the number depends on whether CO2 emissions are capped (10% is the lower limit in a business-as-usual scenario, and 45% is the upper limit if CO2 emissions are capped at 450 ppm). There is now a question of amending the Montreal Protocol to address this issue, and, in fact, the US Environmental Protection Agency (EPA) will probably look into this. So in 2010, we put out a challenge not only to make existing air conditioners a factor of 2 better in primary energy consumption, but to do so using other refrigerants with a global warming potential less than or equal to 1.

What is the real problem in energy efficiency; that is, why are we a factor of 10 away? The “psychrometric chart” in Figure 3, for example, shows an outside environment at 35°C and 90% relative humidity, from which we need to deliver air to a building at 15°C and 20% relative humidity. In current air conditioning systems, we first cool the humid air to 100% relative humidity to extract all the moisture, which means a massive latent heat load, and then we reheat it to reach 15°C. This is a significant waste of energy. An alternative is to adiabatically adsorb the water vapor in desiccants, where the enthalpy of adsorption heats the desiccant and the dry air beyond 35°C. Then we cool this dry but hot air. We therefore need a much larger cooling unit because we start from a much higher temperature and must come all the way down to 15°C and 20% humidity. These are the reasons why we are a factor of 10 away from the theoretical limit.
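
A small psychrometric calculation makes the point. The sketch below uses a standard Magnus-type saturation-pressure correlation and constant property values, which are assumptions rather than anything taken from this article, to compare the latent (moisture) load with the sensible (temperature) load for the 35°C/90% RH to 15°C/20% RH example.

```python
import math

# Compare latent vs. sensible cooling load per kg of dry air.
# Magnus-type correlation and constant properties are standard approximations.

P_ATM = 101325.0   # Pa
H_FG = 2450.0      # approx. latent heat of vaporization, kJ/kg
CP_AIR = 1.006     # specific heat of dry air, kJ/(kg K)

def p_sat(t_c):
    """Saturation vapor pressure in Pa (Magnus-type correlation)."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def humidity_ratio(t_c, rh):
    """kg of water vapor per kg of dry air at total pressure P_ATM."""
    p_v = rh * p_sat(t_c)
    return 0.622 * p_v / (P_ATM - p_v)

w_out = humidity_ratio(35.0, 0.90)   # outdoor air
w_in = humidity_ratio(15.0, 0.20)    # delivered air

latent = (w_out - w_in) * H_FG       # kJ per kg of dry air
sensible = CP_AIR * (35.0 - 15.0)    # kJ per kg of dry air
print(f"latent load   ~ {latent:.0f} kJ/kg dry air")
print(f"sensible load ~ {sensible:.0f} kJ/kg dry air")
print(f"latent/sensible ~ {latent/sensible:.1f}")
```

With these assumptions, the moisture removal carries several times more energy than the 20°C of sensible cooling, which is why decoupling dehumidification from cooling is so attractive.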

Figure 3. Psychrometric chart of humidity ratio versus temperature used for cooling of buildings. To cool a space from 35°C and 90% relative humidity (RH) to a desirable temperature of 15°C and 20% RH, there are two paths that have traditionally been followed. First is the blue line that indicates cooling the humid air to reach 100% RH, thereby extracting moisture from the air. Once the desired moisture level is reached, the air is then reheated to 15°C such that the RH is 20%. Second is the black line, which shows adiabatic adsorption of humid air in a desiccant, which extracts the moisture, but the enthalpy of adsorption heats up the air. Once the desired moisture level is reached, the air is then cooled from a temperature much higher than 35°C to 15°C. A more energy efficient approach is to decouple dehumidification and cooling, in other words, could we isothermally dehumidify the moist air (green line) to a desired moisture level and then cool the air from 35°C to 15°C to reach a RH of 20%? 23

We then asked the question, could we do better? In fact, could we possibly achieve isothermal dehumidification such that you can decouple dehumidification from cooling? One of the solutions that was created is to use membranes that are selective to water transport due to capillary condensation. If you apply a vacuum on one side, water condensation blocks the capillaries such that air does not go through, but rather water is selectively transported out at very low energy cost. One could use certain types of zeolites or polymer membranes. 14 The goal is to have sufficient throughput of water vapor, and the membrane cost ought to be sufficiently low. In fact, this is being used to retrofit existing air conditioners; just dehumidify and then go back and cool down the dry air.

Solid-state semiconductor-based thermoelectrics offer another way of cooling without any greenhouse gas emissions. The following equation captures the materials issues:

$$ZT = \frac{S^2 \sigma T}{k},$$

where the figure of merit, ZT, of the material depends on the square of S, the thermopower or Seebeck coefficient, on σ, the electrical conductivity, and on T, the temperature, in the numerator, while the denominator contains k, the thermal conductivity. In most relevant materials, the thermal conductivity is dominated by phonons, and the parameters in the numerator are electronic—the power factor, S²σ. While it is possible to change each individual material property by orders of magnitude, people have been trying to increase ZT by a factor of 2. Over the last 50 years, they have stumbled because S, σ, and k are all coupled together, and it is very hard to decouple them.
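
As a quick illustration of the formula, the values below are typical of a good room-temperature thermoelectric material (roughly the Bi2Te3 class); they are assumed for illustration and are not measurements from this article.

```python
# Evaluate ZT = S^2 * sigma * T / k for assumed, representative values.

seebeck = 200e-6     # S, Seebeck coefficient, V/K
sigma = 1.0e5        # electrical conductivity, S/m
kappa = 1.5          # thermal conductivity, W/(m K)
temperature = 300.0  # K

power_factor = seebeck**2 * sigma                  # W/(m K^2)
zt = power_factor * temperature / kappa
print(f"power factor S^2*sigma ~ {power_factor*1e3:.1f} mW/(m K^2)")
print(f"ZT ~ {zt:.2f}")
```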

Over the last 15 years or so, there has been a huge amount of research on nanostructuring thermoelectric materials. Among the things we found is that nanostructuring blocks phonons in unique ways, but there are also ways to modify the material to enhance its electronic power factor. Our laboratory at UC Berkeley and Lawrence Berkeley National Laboratory has worked on III–V materials, silicon nanowires, molecular hybrid materials, and complex oxides as well. Our work on silicon nanowires led to a start-up company, Alphabet Energy, which is trying to create and commercialize products based on these materials. We have been funded by US federal agencies; I thank them very much for their support.

While materials are important, they must be thought of in the context of systems. The cost and performance of systems are critical for the success of the technology. There could be many innovations in the systems as well, which could leverage the materials performance in the best way. Sheetak is a start-up company in Austin. One of the innovations in their system is the use of thermal diodes, which transmit heat preferentially in one direction. 15 They packaged the system such that the performance comes from the thermoelectric engine, and the cost reduction comes from the rest of the system. Products could be made for developing economies where large refrigerators are replaced by small ones. That is a different economic game.

Transportation

Let me talk briefly about transportation. Today, we have essentially one approach that is overwhelmingly dominant: gasoline or diesel as the fuel, and internal combustion engines of reciprocating or rotary types. This lack of diversity leaves our economy vulnerable, both to supply risk and to global price fluctuations, in contrast to our stationary power systems, where we have diverse sources, for example, solar, coal, wind, natural gas, and nuclear.

Over the last few years, the EPA has issued a ruling on the corporate average fuel economy standard of 54.5 miles per gallon by 2025—this is a long-term signal to the business community to innovate. Actually, some countries have already reached this mark; Japan has a car that offers 70 miles per gallon today. In order to meet the EPA standard, car companies are looking at electrification, lightweighting, and various other approaches, but without compromising safety. All of these involve materials research. For lightweighting, there is a search for ultrahigh-strength steel that is ductile. The mechanical behavior of steel is such that the higher the tensile strength, the more brittle it becomes. The challenge is to produce steel with strength not in the range of 300 MPa, but rather 1000 MPa, that can tolerate elongations of 40% or 50%; that is, steel that remains ductile. These materials challenges will require controlling the alloy content and the nanoscale grain and particle structure. Needless to say, if you can bring down the cost of carbon composites by a factor of 2 or 3, that is a very big deal.

The next question is, “What are the challenges in vehicle electrification?” The question boils down to battery systems. Our goal in ARPA-E was to go for batteries that would enable range and cost comparable to cars with internal combustion engines, without subsidies. Today, lithium-ion batteries are somewhere around $500 per usable kilowatt hour, and for a car to travel 100 miles at the same cost as internal combustion engine cars without subsidies, this needs to be reduced to about $250, that is, by a factor of 2. The pack-level energy density is about 100 watt hours per kilogram today; it needs to get to about 200 watt hours per kilogram, which means that the energy density of individual cells has to be about 400 watt hours per kilogram, which is close to the limits of a lithium-ion battery. This is not fiction anymore, because last year a start-up company called Envia announced, with third-party verification, a 400 Wh/kg lithium-ion battery using a silicon anode and a manganese-based cathode. 16 There is a lot of research that needs to happen to make it work properly in a product, make it safer, and achieve a long cycle life, but this is a start. There are also several metal–air and lithium–sulfur batteries that researchers are working on. 17 While these are still in the research phase, and it is uncertain which of them will eventually succeed, at least we have a portfolio of diverse approaches, and we hope one of these will be disruptive to today’s lithium-ion battery.
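
A back-of-envelope version of those targets, using an assumed energy consumption of 300 watt hours per mile for a compact electric car (the cost and specific-energy figures are the ones quoted above), looks like this:

```python
# Pack cost and mass for a 100-mile range at the quoted $/kWh and Wh/kg levels.
# The 300 Wh/mile vehicle consumption is an assumed round number.

wh_per_mile = 300.0
range_miles = 100.0
usable_kwh = wh_per_mile * range_miles / 1000.0

for cost_per_kwh, pack_wh_per_kg in ((500.0, 100.0), (250.0, 200.0)):
    pack_cost = usable_kwh * cost_per_kwh
    pack_mass_kg = usable_kwh * 1000.0 / pack_wh_per_kg
    print(f"${cost_per_kwh:.0f}/kWh, {pack_wh_per_kg:.0f} Wh/kg -> "
          f"pack ~ ${pack_cost:,.0f} and ~{pack_mass_kg:.0f} kg for {range_miles:.0f} miles")
```

Halving the cost per kilowatt hour and doubling the pack-level specific energy cuts both the pack price and its mass in half, which is what makes an unsubsidized 100-mile car plausible.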

Biofuels today are all based on photosynthesis—one needs sunlight, CO2, and water, and the photosynthetic machinery in plants or algae then eventually makes oil. What is not often appreciated is the conversion efficiency from sunlight to chemical bonds in oil, which is less than 1%. This major inefficiency comes from the Calvin–Benson cycle in photosynthesis; its enzymes lose their carbon-fixation efficacy at higher temperatures. The result of the low efficiency of the photosynthetic process is that we need a lot of land and water to capture sufficient sunlight to produce oil. It turns out that feedstock collection and processing represent a majority of the cost of biofuels because biomass is fluffy and has very low energy density compared to oil. At ARPA-E, we created a program called PETRO (plants engineered to replace oil) to address this problem. Typically, corn has an energy density of approximately 80 gigajoules per hectare per year, whereas sugarcane in Brazil offers about 200 gigajoules per hectare per year. In our PETRO program, we were shooting for 160 gigajoules per hectare per year at $50 a barrel equivalent, which would be a game changer. This led to some interesting ideas.
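
The “less than 1%” figure is easy to check against the energy-per-hectare numbers just quoted. The annual-average solar flux below is an assumed representative value; the 80, 200, and 160 gigajoule figures are the ones in the text.

```python
# Fraction of incident sunlight ending up as fuel energy per hectare per year.
# Average insolation of 200 W/m^2 is an assumed representative value.

avg_insolation_w_m2 = 200.0
seconds_per_year = 3.15e7
m2_per_hectare = 1.0e4

solar_gj_per_ha_yr = avg_insolation_w_m2 * seconds_per_year * m2_per_hectare / 1e9
for name, fuel_gj in (("corn", 80.0), ("sugarcane", 200.0), ("PETRO target", 160.0)):
    frac = 100.0 * fuel_gj / solar_gj_per_ha_yr
    print(f"{name:13s}: {fuel_gj:5.0f} GJ/ha/yr -> {frac:.2f}% of incident sunlight")
```

Even the best case is a small fraction of a percent, which is the land-and-water problem in a nutshell.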

We all know that algae can directly produce oil. The problem, though, is that oil from algae can be expensive because of the cost of photobioreactors, the need for water, and the fact that algae can get infections, which reduce their effectiveness. So the idea in one particular PETRO project was to take the metabolic pathway that produces oil in algae and insert it into a plant such as tobacco that grows in poor soil. If it works, then you would simply need to wring the leaves, and oil would be squeezed out—at least, that is the idea. I hope this team is wildly successful, because you would then have big tobacco and big oil come together and save the world!

In ARPA-E, we also asked the question, why do you really need biology? The photosynthetic process is all about fixing carbon dioxide by making carbon–carbon bonds, and you cannot beat biology in the specificity with which it makes these bonds. We then asked, do you really need the Calvin–Benson cycle to make carbon–carbon bonds? The answer is no. There are many other cycles in biology—the reverse Krebs cycle and the Wood–Ljungdahl pathway, 18,19 for example—that make carbon–carbon bonds, and these are found in organisms such as extremophiles 20 that live in deep ocean vents with no light yet are still making carbon–carbon bonds. These pathways had never been used to make biofuels.

We created a program at ARPA-E called Electrofuels. The idea is to generalize the process of photosynthesis to non-photosynthetic organisms. You first broaden the source of reducing equivalents beyond photons. You can, for example, use electrons, which can come from renewable sources, or hydrogen sulfide, which is a waste product of the oil and gas industry, or hydrogen, which can be produced from natural gas. You can then take carbon dioxide along with a reducing equivalent and feed them to non-photosynthetic microbes. If you can engineer the carbon-fixing pathways in these non-photosynthetic organisms in the right way, you can produce a molecule such as acetyl-CoA (acetyl coenzyme A). 21 Once you have done that, the production of long-chain hydrocarbons (e.g., oil) or bioproducts is fairly well known. We realized that if this worked, it could be potentially 10 times more efficient than the photosynthetic pathway, so it was worth a try. And sure enough, within a couple of years, out of the 15 groups we funded, about five or six are actually producing oil right now. One example is OPX Biotechnology, which came out of the University of Colorado at Boulder and partnered with North Carolina State University, and which produced the first vial of electrofuel, the first biofuel obtained without the use of sunlight. 22 This is still early, and we do not know whether it will scale in cost and volume, because that will require engineering of electrofuel bioreactors. But this is what ARPA-E was created for: to take a new and seemingly high-risk approach, to try a new pathway, and when people say, “This is not going to work,” to say, “Let’s give it a shot and see,” because it does not violate any fundamental laws of nature.

Conclusion

I’m going to end my talk by revisiting the earlier quote about the Stone Age. History has taught us that if you allow humans to explore, create, and innovate, you are likely to find new and better solutions that are not extrapolations of the past. These solutions can be achieved by different pathways as long as you don’t violate the laws of nature. For the younger people in the audience: if you propose a new “out-of-the-box” idea and someone says “no,” look for data, not dogma. Make sure your idea does not violate the laws of nature. If that is true, then try it out. Even if you don’t succeed, you will learn something new. But you must think outside the box.

I’m going to share with you a little bit of humor about ignoring the ability of the human mind to explore and invent, and about the dangers of extrapolating the past. There are some famous predictions that people have made in the past that are worth reflecting upon. You could call them “infamous” as well.

Here’s the first one: “The horse is here to stay, but the automobile is only a novelty, a fad.” This was the president of the Michigan Savings Bank, when asked to invest in the Ford Motor Company in 1903.

This is someone from the scientific community: “Radio has no future, x-rays will prove to be a hoax, and heavier-than-air flying machines are impossible.” That was Lord Kelvin; it’s three strikes against him. He was opinionated, but he was wrong.

And he was not the only one doubting heavier-than-air flying machines. Here is Wilbur Wright in 1901: “Man will not fly for 50 years.” I’m glad he didn’t take himself too seriously.

This is an interesting one: “There is not the slightest indication that nuclear energy will ever be obtainable; it would mean that the atom would have to be shattered at will,” Albert Einstein. And of course, we shattered the atom at will. It doesn’t violate the laws of physics.

Finally, “Drill for oil? You mean drill into the ground and try and find oil? You’re crazy.” These were associates of Edwin Drake, who, in 1859, was the first to find oil by drilling in Pennsylvania. That was thought to be a wild idea, which is what people also thought about shale gas.

Arun Majumdar is currently a vice president for energy at Google, where he is driving Google.org’s energy initiatives and advising the company on its broader energy strategy. In October 2009, Majumdar was nominated by US President Obama and confirmed by the Senate to become the founding director of the Advanced Research Projects Agency-Energy (ARPA-E), where he served until June 2012. Between March 2011 and June 2012, Majumdar also served as the acting under-secretary of energy and a senior advisor to the secretary of energy. Prior to joining the Department of Energy, Majumdar was the Almy and Agnes Maynard Chair Professor of Mechanical Engineering and Materials Science and Engineering at the University of California, Berkeley, and the associate laboratory director for energy and environment at Lawrence Berkeley National Laboratory. His research interests include the science and engineering of nanoscale materials and devices as well as large engineered systems. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. He received his bachelor’s degree in mechanical engineering at the Indian Institute of Technology, Bombay, and his PhD degree from the University of California, Berkeley. Majumdar can be reached by email at .

Footnotes

To view a video of Arun Majumdar’s presentation at the MRS 2013 Spring Meeting, visit http://www.mrs.org/s13-plenary-video/.

References

1. Chu, S., Majumdar, A., Nature 488, 294 (2012).
3. Hansen, J., Sato, M., Ruedy, R., PNAS 109 (37), 14726 (2012).
13. Velders, G.J.M., Fahey, D.W., Daniel, J.S., McFarland, M., Anderson, S.O., PNAS 106, 10949 (2009).
23. Moran, M.J., Shapiro, H.N., Boettner, D.S., Bailey, M.B., Fundamentals of Engineering Thermodynamics, 7th ed. (Wiley, New York, 2010).