The CHIPS Act and the creation of MASH come at a critical inflection point in the history of an industry driven by changing applications of computer chips and limitations on the way they are designed and manufactured. In particular, the explosive growth of artificial intelligence over the past few years has led to a surge in demand for advanced chips capable of processing the massive amounts of data that make ChatGPT and other AI systems tick. Running these chips in large-scale data centers requires an immense amount of energy, and as a result the AI revolution is already starting to have a meaningful carbon footprint that is rolling back progress on various climate goals. At the same time, the amount of data processed by AI applications has grown by orders of magnitude, and handling this data requires stuffing ever more transistors onto a computer chip.
For the past 60 years, the world has reaped the benefits of exponential growth in the efficiency of chips. Since the 1950s, the amount of data that can be processed by a computer chip has increased a trillionfold, while the amount of computation that can be done per kilowatt-hour of energy has increased by an estimated 100 billionfold. Today, the chip in your coffeemaker has more computing capacity than a 1970s supercomputer. This eye-watering progress can be attributed mostly to a single factor: the ability to make ever smaller transistors.
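A back-of-the-envelope check shows how compounding produces numbers this large. The doubling cadence below is our assumption for illustration (a commonly cited Moore's Law pace of one doubling every 18 months), not a figure from the article:

```python
# Rough sanity check on the scale of 60 years of exponential growth.
# Assumption (ours, for illustration): capability doubles every 18 months.
years = 60
doubling_period_years = 1.5

doublings = years / doubling_period_years  # 40 doublings in 60 years
growth_factor = 2 ** doublings             # cumulative growth

print(f"{doublings:.0f} doublings -> {growth_factor:.2e}x growth")
```

Forty doublings compound to roughly 1.1 trillion, the same order of magnitude as the trillionfold figure above.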
The shrinking size of transistors has been the driving force behind Moore’s Law for more than half a century. Today, the most advanced chips in production are packed with billions of transistors built at the 3-nanometer process node; TSMC, the largest chip manufacturing firm in the world, has already announced plans to produce chips at a 1.6-nanometer node.
The problem is that we are quickly brushing up against physical limits on the size of transistors. As they become more tightly packed on a chip, they generate more heat, making them both costly and challenging to cool. But even if the heating problem is solved, silicon-based transistors start to exhibit quantum tunneling at sizes below about 5 nanometers: electrons leak through barriers meant to block them, undermining a transistor’s ability to reliably switch current on and off.
So far, chip companies have been able to sidestep this problem by reengineering the geometry and chemistry of silicon transistors. But even if this trend continues, it’s impossible to make a silicon transistor smaller than about one-fifth of a nanometer, the size of a single silicon atom. In other words, to the extent that Moore’s Law is driven by the shrinking size of transistors, we will inevitably reach a hard physical limit, even if we haven’t found that limit just yet.
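To make that limit concrete, here is a rough calculation of our own (an illustration, not a figure from the article) of how many more halvings of feature size are even geometrically possible between today's 3-nanometer node and the roughly 0.2-nanometer diameter of a silicon atom:

```python
import math

current_node_nm = 3.0   # today's most advanced production node
silicon_atom_nm = 0.2   # approximate diameter of one silicon atom

# Each scaling generation roughly halves the feature size, so count
# how many halvings fit before reaching single-atom dimensions.
halvings_left = math.log2(current_node_nm / silicon_atom_nm)

print(f"~{halvings_left:.1f} halvings remain before the atomic limit")
```

Only about four halvings remain before the math runs out entirely, which is why the industry is looking beyond shrinking transistors.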
“Part of what is driving MASH is that we are at an inflection point—we can’t keep doing things the way they’ve always been done,” says Vijaykrishnan Narayanan, Penn State’s associate dean for innovation and the A. Robert Noll Chair Professor in the School of Electrical Engineering and Computer Science. “For most of the history of the semiconductor industry, we’ve been led by the notion that we will have smaller, faster, and more efficient transistors. Now we’re reaching the end of that roadmap.”
The question for researchers, then, is how to continue making chips more powerful and efficient without relying on the shrinking size of silicon transistors to drive progress. It’s a question that Susan Trolier-McKinstry, the Evan Pugh University Professor and Steward S. Flaschen Professor of Ceramic Science and Engineering, has been thinking about for most of her career. Trolier-McKinstry’s research focuses on how to create next-generation chips that combine computing processes that have been separated in the past. For example, a computer’s processor—essentially its “brain”—and its memory are typically built on two separate chips. If they can be combined on a single chip, it could lead to substantially faster and more energy-efficient computers.
“The idea is to add sensing and actuation capabilities to conventional semiconductor processing so we can do things a normal silicon chip doesn’t do,” says Trolier-McKinstry, whose lab is also focusing on two approaches to integrating sensors onto computer chips. The first involves piezoelectrics, which are ceramic materials that change their shape when exposed to an electric field. The other involves ferroelectric materials, which are like miniature magnets whose polarizations can be flipped when electricity is applied and retain their new polarization when the electricity is removed, which makes them useful as memory devices. The challenge is how to layer these materials directly on top of a conventional silicon chip, allowing a direct coupling between a computer’s processor and the memory or sensors that typically exist on separate chips.
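The memory behavior of a ferroelectric can be caricatured in a few lines of code. This is a toy model of our own devising, not a description of any real device: polarization flips only when the applied field exceeds a threshold (the coercive field), and the state persists once the field is removed, which is exactly what makes the material usable as nonvolatile memory.

```python
class FerroelectricBit:
    """Toy model of a ferroelectric memory cell (illustrative only).

    The polarization flips only when the applied field exceeds the
    coercive field, and it is retained after the field is removed.
    """

    def __init__(self, coercive_field=1.0):
        self.coercive_field = coercive_field
        self.polarization = +1  # stores one bit: +1 or -1

    def apply_field(self, field):
        # A strong enough field flips the polarization to match its sign;
        # anything weaker leaves the stored state untouched.
        if abs(field) >= self.coercive_field:
            self.polarization = 1 if field > 0 else -1

    def read(self):
        return self.polarization


bit = FerroelectricBit()
bit.apply_field(-2.0)   # write a -1
bit.apply_field(0.0)    # power removed: state is retained
print(bit.read())
```

Real devices trace out analog hysteresis loops rather than this clean two-state switch, but the retention-without-power property is the same one that makes ferroelectrics attractive for on-chip memory.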
Semiconductor manufacturing is an incredibly precise process that leaves little room for error. In fact, fabrication is done in clean rooms that are 10,000 times cleaner than a hospital operating room, because even the smallest impurities introduced during this process can destroy the chip. The whole process starts with a thin circular slice of pure silicon known as a wafer, which is heated to form a thin layer of silicon dioxide on its surface. Next, a light-sensitive chemical called photoresist is applied to the wafer, and ultraviolet light is shone through a mask containing the chip’s circuit pattern that is placed over the wafer. This effectively leaves a hardened shadow of the chip’s circuit on the wafer, and chemicals or plasma etching are used to remove the silicon dioxide that is not part of the circuit. Finally, small impurities are precisely added to different areas of the chip to change the electrical properties of the silicon. This process, which is more akin to growing a chip than “building” it, can be repeated dozens of times to form layers, so that billions of transistors can be packed on a chip the size of a thumbnail.
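The expose-and-etch steps above can be sketched as a toy simulation. This is our own heavily simplified model (one layer, a binary mask, no chemistry): resist hardens in the shadow of the mask's circuit features, and the etch then strips oxide everywhere the resist did not harden.

```python
def lithography_cycle(oxide, mask):
    """One simplified expose-and-etch cycle (toy model, not real fab).

    oxide: 2D grid of 1s marking where silicon dioxide is present.
    mask:  2D grid where 1 marks a circuit feature.

    Returns the oxide grid after etching: oxide survives only where
    the hardened resist (the mask's circuit pattern) protected it.
    """
    hardened = [[cell == 1 for cell in row] for row in mask]
    return [
        [o if h else 0 for o, h in zip(o_row, h_row)]
        for o_row, h_row in zip(oxide, hardened)
    ]


# A small wafer region fully covered in oxide, patterned by one mask:
oxide = [[1, 1, 1], [1, 1, 1]]
mask = [[1, 0, 1], [0, 1, 0]]
print(lithography_cycle(oxide, mask))
```

Repeating such a cycle dozens of times with different masks, as the article describes, is how the layered structure of a real chip is built up.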
“Building memory on top of a processor is both a materials and an engineering challenge, and right now every semiconductor manufacturer in the world is trying to figure out how to solve this problem,” says Trolier-McKinstry. “Clean room engineers are by nature very conservative, so every time we introduce new materials into a clean room, they have to go through an incredible process to make sure that it doesn’t change anything else in the stack, and that the new material can be added in the right structure with the right orientation at the right temperature so that it doesn’t ruin everything underneath it.”
It’s a remarkable challenge—and the payoff of solving it would be enormous. The ability to directly integrate processors, memory, and sensors on a single chip would dramatically increase computer speed while making a substantial dent in their energy requirements. “Today, about 21% of all the energy we use in the world is associated with computing,” says Trolier-McKinstry. “If we could cut that down even a little bit, it would have a very important environmental and economic impact.”
Another promising pathway to next-generation chips involves changing the chemistry of the semiconductor itself. A rapidly growing area of demand for computer chips is in the realm of power electronics—a catchall term for systems that convert electrical energy, such as from high to low voltages, or from alternating current to direct current. This is being driven largely by the rapid adoption of electric vehicles, which rely on power electronics to charge and discharge their batteries, control their motor, and distribute power to different electronic systems in the vehicle. The problem is that conventional silicon chips are ill-suited to power electronics: they waste too much energy as heat, cannot tolerate high operating temperatures, are limited in the voltages they can handle, and switch more slowly.
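One way to see why wasted energy matters here: the power a switching device dissipates as heat while conducting scales with the square of the current times the device's on-resistance. The resistance values below are hypothetical placeholders of our own, not datasheet figures, chosen only to show how a lower on-resistance (one advantage typically claimed for silicon carbide devices) cuts conduction loss:

```python
def conduction_loss_w(current_a, on_resistance_ohm):
    # P = I^2 * R: power dissipated as heat while the switch conducts.
    return current_a ** 2 * on_resistance_ohm


current = 50.0  # amps flowing through the switch

si_loss = conduction_loss_w(current, 0.080)   # hypothetical silicon device
sic_loss = conduction_loss_w(current, 0.020)  # hypothetical SiC device

print(f"Si: {si_loss:.0f} W wasted, SiC: {sic_loss:.0f} W wasted")
```

Because the current term is squared, even a modest drop in on-resistance pays off quickly at the high currents an electric vehicle drivetrain draws.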
To solve these problems, researchers are increasingly turning to a different semiconductor material, silicon carbide, a crystal in which carbon atoms alternate with silicon atoms. But for all the benefits of silicon carbide chips, manufacturing them at the scale and quality of conventional chips remains difficult: silicon carbide crystals are more prone to defects during growth, the chips are typically made on smaller wafers, which raises manufacturing costs, and they require specialized equipment and techniques to produce.
In 2023, Penn State inaugurated its Silicon Carbide Crystal Center, which is supported with funding from Onsemi, a U.S.-based semiconductor company. The new center is dedicated to researching every part of the silicon carbide chip production process, from analyzing the input powders to predict defects to optimizing manufacturing processes to integrating these chips in new types of device packaging. Earlier this year, the university doubled down on its silicon carbide efforts with the formation of the Silicon Carbide Innovation Alliance (SCIA), a coalition of researchers and industry leaders focused on bridging the gap between the fundamental R&D work done at the Silicon Carbide Crystal Center and the industrial manufacture of silicon carbide chips.