In August 2022, the bipartisan CHIPS and Science Act was signed into law by President Joe Biden; that stroke of a pen inaugurated the most ambitious, concerted, and focused federal effort in living memory to support high-tech research and manufacturing. One of the primary goals of the $280 billion funding package is to boost the United States’ semiconductor industry, which once led the world in the ability to design and manufacture the computer chips that are the foundation of technologies ranging from smartphones to missiles. Although the U.S. remains a global leader in semiconductor research and development, much of the world’s high-end chip manufacturing is now concentrated in Asia, creating significant supply chain risks for these vital inputs to modern life.

The U.S. got a firsthand taste of these risks shortly after the COVID-19 pandemic brought the world to a standstill in 2020, marking the beginning of an acute, yearslong global chip shortage that made it exceedingly difficult for Americans to purchase new cars, video game consoles, and countless other consumer technologies that depend on semiconductors. Although the pandemic was the proximate cause of the chip shortage, there were several other factors—including chip manufacturing plants that shut down due to extreme weather and an escalating trade war with China—that contributed to the shortage’s intensity and duration. The lesson from the shortage was clear: Reestablishing the U.S. as a global semiconductor production powerhouse is critical for the future of national security, economic prosperity, and technological innovation.

“I don’t think you could go a day without being exposed to something that’s utilizing some kind of semiconductor,” says Douglas Wolfe, Penn State’s associate vice president for research. “Without this industry, we’d go back to the Stone Age, so it’s important for us to have a secure supply of chips. But it’s going to take many, many years to catch up.”

Decades of offshoring critical elements of the semiconductor supply chain mean the U.S. lacks much of the infrastructure and know-how required to manufacture what is arguably the most complex and capital-intensive technology ever created. A massive infusion of federal cash is an important first step on what will be a long and arduous journey. But this is a problem that money alone can’t fix. It will also require collaboration and coordination among university researchers, industry leaders, and policymakers at a virtually unprecedented scale to train more than 100,000 skilled workers, build several multibillion-dollar manufacturing facilities, and transition the next generation of computer chips out of the lab and into the real world.

It’s a daunting challenge, but it is, in many ways, exactly the type of challenge that Penn State has spent decades preparing for. As one of the top materials science research institutions in the world, the university has forged deep connections within the semiconductor industry, which draws on the expertise of Penn State scientists and engineers to translate cutting-edge semiconductor R&D into real-world applications. In 2023, the university parlayed that experience into the creation of the Mid-Atlantic Semiconductor Hub (MASH), a consortium of dozens of semiconductor companies and leading East Coast research universities, including Princeton, Columbia, and New York University, that leverages its members’ unique expertise to respond rapidly to CHIPS Act funding opportunities and help the U.S. reclaim its mantle as the semiconductor capital of the world.


When Daniel Lopez arrived at Penn State in late 2020, the bill that would become the CHIPS Act was still in its infancy. Lopez, the Liang Professor of Electrical Engineering and Computer Science and director of the Nanofabrication Lab in the Materials Research Institute, knew the U.S. was on the verge of making a major investment in rebuilding a domestic semiconductor industry. But after spending nearly three decades working on micromechanical devices at Bell Laboratories and Argonne National Laboratory, he also knew that this wasn’t a challenge that money alone could solve.

Conceptual illustration of computer technology. Imagery by Stuart Bradford.

The semiconductor industry was born in the United States, and as recently as the 1990s, the U.S. was still producing nearly 40% of the world’s computer chips. But over the past three decades, America’s dominance in the industry has steadily eroded as chip companies moved their operations to East Asia to take advantage of lavish government subsidies and cheap labor. Today, 83% of the world’s computer chips are produced in just four countries—China, Taiwan, South Korea, and Japan—while the United States’ share of chip production has dwindled to just 10%.

As the industry’s epicenter shifted to Asia, the U.S. didn’t merely lose the production capacity of its fabrication plants, or “fabs”; it also lost the expertise required to manufacture chips. Although the U.S. still leads the world in advanced semiconductor research, the country has nowhere near enough trained workers to produce computer chips at scale. Lopez knew that CHIPS Act funding would be vital for incentivizing companies to invest in building new domestic fabs, which can cost upward of $10 billion apiece. But without a large and highly trained workforce to run these facilities—to say nothing of a mechanism for moving America’s advanced semiconductor research into production—the CHIPS Act would fall short of its goal of igniting a stateside renaissance in chip production. Overcoming this challenge would require close coordination among industry, academia, and the federal government at a scale that hasn’t been seen since the U.S. revitalized its industrial base to meet the demands of World War II.

“The big challenge today in the semiconductor field is helping small and medium-sized companies access the resources they need to innovate, and part of what the CHIPS Act is trying to solve is providing resources to companies so they can grow bigger, faster,” says Lopez. “We need to create new and improved ways for universities to work with industry.”

Two years later, Lopez’s seed of an idea has blossomed into MASH, a consortium of nine of the top research universities on the Eastern Seaboard and dozens of the most important companies in the semiconductor industry working together to marshal funding from the CHIPS Act and lay the foundation for the domestic rebirth of the industry. “There are a lot of resources in academia that can help chip companies move from research prototypes to real products, so the idea behind MASH was to create a distributed network for these groups to share resources and expertise,” says Lopez. “Through MASH, there is now expertise in pretty much every subject in the semiconductor world that is ready to help support the CHIPS Act.”

MASH’s focus in many ways hearkens back to the birth of the industry, which also grew through tight-knit collaborations between private industry and academia. When the integrated circuit—the foundation of all modern computer chips—was invented in 1958 by an engineer at Texas Instruments, it contained just a single transistor. Computing power is largely a function of the number of transistors on a chip, and the chips found in modern smartphones contain billions of them. The exponential growth in the number of transistors that fit on a computer chip over the past six decades—a curve known as Moore’s Law, which states that the number of transistors in an integrated circuit doubles roughly every two years—was made possible only by ensuring that the research breakthroughs happening in universities were accessible to the companies manufacturing the chips.
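For readers who want to see the arithmetic, the short Python sketch below turns that doubling rule into numbers. The 1971 baseline of roughly 2,300 transistors (the count on an early commercial microprocessor) and the strict two-year doubling period are simplifying assumptions chosen for illustration, not a precise industry history.

```python
# A back-of-the-envelope illustration of Moore's Law: transistor counts
# doubling roughly every two years from an assumed 1971 baseline.
def transistor_count(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Estimate transistors per chip, assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2024):
    print(f"{year}: ~{transistor_count(year):,.0f} transistors")
# The idealized curve climbs from thousands to millions to billions and beyond;
# real chips have tracked it only approximately.
```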

The CHIPS Act is structured to provide tens of billions of dollars in federal funding across three core areas. First, there’s the funding that will be provided to massive chip companies such as Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung to build manufacturing plants. Over the past two years, several of these facilities have already been announced, and some have even broken ground. But these plants are referred to as “me-too fabs,” essentially copies of existing fabs in Asia rather than facilities designed to crank out the most advanced types of computer chips. The breakneck pace of innovation in the industry means that if the U.S. wants to remain globally competitive, it’s not enough to merely copy existing plants; it must also translate bleeding-edge research innovations into production. To support this need—the second core area—the CHIPS Act will also release billions in funding for R&D.

But all of these new plants and research innovations will amount to little without a large and specialized workforce. Current estimates predict that the U.S. has a shortfall of about 280,000 workers—ranging from materials science Ph.D.s and electrical engineers to tool operators and specialized HVAC technicians—who will be required to deliver on the CHIPS Act’s ambitions. Indeed, several of the recently announced plants have already been delayed due to labor shortages. Thus, the third core area of funding is dedicated to workforce development and rapidly training the next generation of professionals to enter the industry.

“The government has the funds, industry has the manufacturing know-how, and academics bring the next-generation advances to these materials systems,” says Wolfe. “This is the trifecta that will allow the collaboration to be successful.”

The broad range of expertise represented by the MASH consortium means that each time the government releases CHIPS Act funding, it can organize a group of leaders in academia and industry with expertise uniquely suited to the funding announcement to submit a proposal. The MASH leadership has already submitted CHIPS Act proposals for funding dedicated to semiconductor packaging research and semiconductor workforce development. Says Lopez, “The reason that MIT, ASU, Princeton, Columbia, and all these other universities are joining MASH is because they believe that by working together we can have a stronger impact.”


The CHIPS Act and the creation of MASH come at a critical inflection point in the history of an industry being reshaped by changing applications for computer chips and by limits on the way they are designed and manufactured. In particular, the explosive growth of artificial intelligence over the past few years has led to a surge in demand for advanced chips capable of processing the massive amounts of data that make ChatGPT and other AI systems tick. Running these chips in large-scale data centers requires an immense amount of energy, and as a result the AI revolution is already leaving a meaningful carbon footprint that is rolling back progress on various climate goals. At the same time, the amount of data processed by AI applications has grown by orders of magnitude, and handling that data requires stuffing ever more transistors onto each computer chip.

For the past 60 years, the world has reaped the benefits of exponential growth in the efficiency of chips. Since the 1950s, the amount of data that can be processed by a computer chip has increased a trillionfold, while the amount of computation that can be done per kilowatt-hour of energy has increased by an estimated 100 billionfold. Today, the chip in your coffeemaker has more computing capacity than a 1970s supercomputer. This eye-watering progress can be attributed mostly to a single factor: the ability to make ever smaller transistors.

The shrinking size of transistors has been the driving force behind Moore’s Law for more than half a century. Today, the most advanced chips in production are built on a so-called 3-nanometer process and packed with billions of transistors; TSMC, the largest chip manufacturing firm in the world, has already announced plans for a 1.6-nanometer process.

The problem is that we are quickly brushing up against physical limits on the size of transistors. As transistors become more tightly packed on a chip, they generate more heat, making chips both costly and challenging to cool. But even if the heating problem is solved, silicon-based transistors begin to suffer from quantum tunneling at dimensions below about 5 nanometers: electrons leak through barriers meant to confine them, undermining a transistor’s ability to reliably switch current on and off.

So far, chip companies have been able to sidestep this problem by tweaking the geometry and chemistry of their transistors. But even if that trend continues, it’s impossible to make a silicon transistor that is smaller than one-fifth of a nanometer—the size of a single silicon atom. In other words, to the extent that Moore’s Law is driven by the shrinking size of transistors, we will inevitably reach a hard physical limit—even if we haven’t hit that limit just yet.
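The arithmetic behind that limit is simple enough to sketch. The figure of about 0.2 nanometers per silicon atom used below is a rounded approximation, and the feature widths are generic examples rather than the marketing names of any particular process node.

```python
# Rough arithmetic behind the hard physical limit described above.
# The ~0.2 nm silicon atom diameter is a rounded, approximate figure.
SILICON_ATOM_NM = 0.2

for feature_nm in (5.0, 2.0, 1.0, 0.2):
    atoms = feature_nm / SILICON_ATOM_NM
    print(f"a {feature_nm} nm feature spans roughly {atoms:.0f} silicon atom(s)")
```

Once a feature is only a handful of atoms wide, there is simply nothing left to shrink.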

“Part of what is driving MASH is that we are at an inflection point—we can’t keep doing things the way they’ve always been done,” says Vijaykrishnan Narayanan, Penn State’s associate dean for innovation and the A. Robert Noll Chair Professor in the School of Electrical Engineering and Computer Science. “For most of the history of the semiconductor industry, we’ve been led by the notion that we will have smaller, faster, and more efficient transistors. Now we’re reaching the end of that roadmap.”

The question for researchers, then, is how to continue making chips more powerful and efficient without relying on the shrinking size of silicon transistors to drive progress. It’s a question that Susan Trolier-McKinstry, the Evan Pugh University Professor and Steward S. Flaschen Professor of Ceramic Science and Engineering, has been thinking about for most of her career. Trolier-McKinstry’s research focuses on how to create next-generation chips that combine computing processes that have been separated in the past. For example, a computer’s processor—essentially its “brain”—and its memory are typically built on two separate chips. If they can be combined on a single chip, it could lead to substantially faster and more energy-efficient computers.

“The idea is to add sensing and actuation capabilities to conventional semiconductor processing so we can do things a normal silicon chip doesn’t do,” says Trolier-McKinstry, whose lab is also focusing on two approaches to integrating sensors onto computer chips. The first involves piezoelectrics, ceramic materials that change shape when exposed to an electric field. The second involves ferroelectric materials, which behave like electrical counterparts of miniature magnets: their polarization can be flipped by applying a voltage and is retained once the voltage is removed, which makes them useful as memory devices. The challenge is figuring out how to layer these materials directly on top of a conventional silicon chip, allowing direct coupling among the processor, memory, and sensors that today typically sit on separate chips.

Semiconductor manufacturing is an incredibly precise process that leaves little room for error. In fact, fabrication takes place in clean rooms that are 10,000 times cleaner than a hospital operating room, because even the smallest impurities introduced during the process can destroy a chip. The whole process starts with a thin circular slice of pure silicon known as a wafer, which is heated to form a thin layer of silicon dioxide on its surface. Next, a light-sensitive chemical is applied to the wafer, a mask containing the chip’s circuit pattern is placed over it, and ultraviolet light is shined through the mask. This effectively leaves a hardened shadow of the chip’s circuit on the wafer, and chemicals or plasmas are then used to etch away the silicon dioxide that is not part of the circuit. Finally, small amounts of impurities are precisely added to different areas of the chip to change the electrical properties of the silicon. This process—which is more akin to growing a chip than “building” it—can be repeated dozens of times to form layers, so that billions of transistors can be packed onto a chip the size of a thumbnail.
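As a very loose analogy, the layer-by-layer repetition at the heart of that process can be sketched in a few lines of Python. The step names below are shorthand for the chemistry and optics described above, and the code models nothing more than the bookkeeping of repeating the cycle; it is an illustration, not a process recipe.

```python
# A loose sketch of the repeated patterning cycle described above, treating a
# wafer as a stack of completed layers. Step names are illustrative shorthand.
PATTERNING_CYCLE = [
    "grow silicon dioxide",
    "apply light-sensitive photoresist",
    "expose the circuit pattern through a mask",
    "etch away unprotected oxide",
    "implant dopant impurities",
]

def fabricate(num_layers: int) -> list[list[str]]:
    """Repeat the patterning cycle once per layer, as the text describes."""
    wafer: list[list[str]] = []
    for _ in range(num_layers):
        wafer.append(list(PATTERNING_CYCLE))  # one more patterned layer on the stack
    return wafer

chip = fabricate(num_layers=30)  # the cycle can repeat dozens of times per chip
print(f"{len(chip)} patterned layers, {sum(len(layer) for layer in chip)} process steps")
```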

“Building memory on top of a processor is both a materials and an engineering challenge, and right now every semiconductor manufacturer in the world is trying to figure out how to solve this problem,” says Trolier-McKinstry. “Clean room engineers are by nature very conservative, so every time we introduce new materials into a clean room, they have to go through an incredible process to make sure that it doesn’t change anything else in the stack, and that the new material can be added in the right structure with the right orientation at the right temperature so that it doesn’t ruin everything underneath it.”

It’s a remarkable challenge—and the payoff of solving it would be enormous. The ability to directly integrate processors, memory, and sensors on a single chip would dramatically increase computer speed while making a substantial dent in their energy requirements. “Today, about 21% of all the energy we use in the world is associated with computing,” says Trolier-McKinstry. “If we could cut that down even a little bit, it would have a very important environmental and economic impact.”

Conceptual illustration of computer technology. Imagery by Stuart Bradford.

Another promising pathway to next-generation chips involves changing the chemistry of the semiconductor itself. A rapidly growing area of demand for computer chips is the realm of power electronics—a catchall term for systems that convert electrical power from one form to another, such as from high to low voltages or from alternating current to direct current. That demand is being driven largely by the rapid adoption of electric vehicles, which rely on power electronics to charge and discharge their batteries, control their motors, and distribute power to the different electronic systems in the vehicle. The problem is that conventional silicon chips are poorly suited to power electronics: they waste too much energy as heat, cannot tolerate high operating temperatures, are limited in the voltages they can handle, and switch too slowly.

To solve these problems, researchers are increasingly turning to a different semiconductor material: silicon carbide, a crystal in which carbon atoms are built into the silicon lattice. But for all the benefits of silicon carbide chips, manufacturing them at the scale and quality of conventional chips remains difficult: silicon carbide crystals are more prone to defects as they grow, are typically produced on smaller wafers—which raises manufacturing costs—and require specialized equipment and techniques to produce.

In 2023, Penn State inaugurated its Silicon Carbide Crystal Center, which is supported with funding from Onsemi, a U.S.-based semiconductor company. The new center is dedicated to researching every part of the silicon carbide chip production process, from analyzing input powders to predict defects, to optimizing manufacturing processes, to integrating the chips into new types of device packaging. Earlier this year, the university doubled down on its silicon carbide efforts with the formation of the Silicon Carbide Innovation Alliance (SCIA), a coalition of researchers and industry leaders focused on bridging the gap between the fundamental R&D done at the Silicon Carbide Crystal Center and the industrial manufacture of silicon carbide chips.


Penn State has been a leader in advanced semiconductor R&D for decades, but if the United States is ever going to be home to a globally competitive semiconductor industry, the advanced R&D done at Penn State and other American universities must be brought out of the lab and into industrial manufacturing facilities. Consortiums such as MASH and SCIA are a critical part of that effort: by bringing industry and academia together, they help ensure that research agendas align with manufacturers’ needs and smooth the translation of benchtop R&D into scaled chip production.

“Penn State’s land grant mission means that we want to make a difference to the world, and one very important mechanism for making a difference is R&D that supports and empowers U.S. innovation,” says Andrew Read, Penn State’s senior vice president for research and Evan Pugh Professor of Biology and Entomology. “Because of the CHIPS Act, there’s more interest than ever in making sure they can onshore this demanding and crucially important work.”

But the actual mechanics of how cutting-edge semiconductor research gets integrated into manufacturing are complex. Chip companies do business in a highly competitive and capital-intensive industry, and their investments in new manufacturing technologies and processes typically run to hundreds of millions or even billions of dollars. Given the complexity and high stakes involved, manufacturers understandably want a deep understanding of how a new chip architecture, chemistry, or application will affect their operations before they commit. The only way semiconductor innovation can flow from academia to industry is if researchers can answer a deceptively simple question: It worked in the lab, but how do I know it will work in the factory?

This is the question that is often top of mind for Narayanan, who is leading Penn State’s effort to develop “digital twin” simulation platforms that can dramatically shorten the time it takes for chip innovations to go from design to prototype to product. Historically, when a manufacturer wanted to implement a new chip design, it required substantial investment in equipment and prototyping to put the chip through rigorous testing before committing to scaled manufacturing. Even small design changes along the way could add substantially to the time and money it took for the chip to reach production.

That’s where digital twins come in. A digital twin is essentially a virtual doppelgänger of a physical system, which engineers can use to simulate how new designs or materials will affect the system’s performance. Digital twins are already used in industries such as aviation and nuclear power, but their application to semiconductor manufacturing is relatively new. Making them effective requires a deep understanding of the underlying physics of semiconductors and chip architectures, from the atoms they’re built from to the packaging of the final chip. Those physical properties are translated into code, and sophisticated AI-driven simulations then predict how the physical system will behave in the real world, with accuracy that depends on the fidelity of the underlying digital model.
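To make the idea concrete, here is a deliberately simplified Python sketch of the kind of “what if” loop a digital twin enables: candidate designs are swept through a software model instead of being built and tested physically. The ChipDesign fields and the scaling formulas inside simulate() are invented placeholders for illustration; real digital twins couple far more detailed device, process, and packaging physics, and none of this reflects Penn State’s actual platform.

```python
from dataclasses import dataclass

@dataclass
class ChipDesign:
    transistor_density: float  # transistors per mm^2 (illustrative units)
    supply_voltage: float      # volts
    die_area_mm2: float        # square millimeters

def simulate(design: ChipDesign) -> dict:
    """Toy stand-in for a physics-based model of a candidate chip design."""
    # Hypothetical scaling relations, chosen only to make the sweep runnable.
    performance = design.transistor_density * design.die_area_mm2 * design.supply_voltage
    power_density = design.transistor_density * design.supply_voltage ** 2 * 1e-6
    return {"relative_performance": performance, "power_density_w_per_mm2": power_density}

# Run "what if" scenarios across candidate designs before committing anything to a fab.
candidates = [
    ChipDesign(transistor_density=d, supply_voltage=v, die_area_mm2=100.0)
    for d in (1e8, 2e8)
    for v in (0.7, 0.9)
]
for candidate in candidates:
    print(candidate, "->", simulate(candidate))
```

In a real platform, nearly all of the value lies in how faithfully the model inside simulate() captures device physics, process variation, and packaging behavior; the surrounding sweep is the easy part.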

“If you think about the choices that are being made about materials, designs, and so forth in a semiconductor stack, we’re talking about dozens or perhaps hundreds of choices,” says Narayanan. “Now you can run these ‘what if’ scenarios with a digital twin instead of spending tens of millions of dollars at a fab only to find out your idea doesn’t work the way you thought it would. The digital twin is an amazing platform for us to leapfrog to new technologies.”

In this respect, Narayanan says, Penn State is uniquely equipped to pioneer the future of digital twins, which requires bringing together experts from a swath of disciplines—software engineering, materials science, electrical engineering, and advanced manufacturing—to make a realistic digital twin of next-generation semiconductors. Penn State’s efforts at creating digital twins for semiconductor research and manufacturing will be critical to accelerating the timeline for new chip designs to enter production, because they will allow rapid iteration and experimentation with everything from the types of materials used in the chips to the circuit design and chip packaging.

“You need people who can straddle a broad range of disciplines and understand how they all interact,” says Narayanan. “This is something that Penn State has historically been very good at.”


It will take more than an army of Ph.D.s to produce computer chips. It will also require a broad variety of technicians trained to operate fabrication facilities. One of the biggest predicted labor shortages involves specialized HVAC technicians familiar with the advanced air filtration systems required for the most immaculate clean rooms in the world. Meeting that need will require standing up new associate degree programs at universities and community colleges to train the more than 100,000 workers the U.S. still lacks to operate its chip fabrication plants.

“The workforce required for the CHIPS Act is tremendous, and it’s not all undergraduates and postgraduates, either,” says Clive Randall, the Evan Pugh Professor of Materials Science and Engineering and director of the Materials Research Institute. “We need people that are really good at hands-on skills who back in the day might have been the sort of people who would be able to change an engine in a car as a hobby. Part of what we’re doing with MASH is thinking about the workforce more generally by partnering with vocational schools and high schools to do outreach to people who may not be exposed to universities but have skills that can help the semiconductor industry.”

In addition to working with community colleges, vocational schools, and high schools to drum up interest in the semiconductor industry among students from nontraditional backgrounds, Penn State faculty are collaborating with semiconductor manufacturers to upskill existing workforces, training current employees on new tools so they are prepared to work on the next generation of computer chips. The university is also pursuing global partnerships as part of its workforce development efforts, such as a recent agreement through the Association of American Universities that will support educational efforts in India on advanced technologies, including semiconductors.

While it’s still early in the timeline of delivering on the CHIPS Act’s vision, it’s apparent that it will work only through unprecedented levels of collaboration among government, industry, and academic researchers. Wolfe puts it succinctly: “It’s truly a team effort.”