Intel chips have been at the core of many of our computers and servers. But while most of us are very familiar with what Intel processors can do, how familiar are we with how they're made?
In this fascinating article, we're going to take a look at what goes into making the chips we rely on.
ZDNet had the opportunity to speak with Dr. Kevin Fischer, Intel VP and director of Interconnect and Memory Technology Integration; Dr. Ravi Mahajan, Intel Fellow, Assembly and Test Technology Pathfinding; and Dr. Rahul Manepalli, Intel Fellow and Director of Substrate TD Module Engineering -- three of the world's leading experts on semiconductor production.
In order to get started, we need to understand the terminology Intel uses in chip production.
Semiconductor: This is the term often used to describe both chips and the chip industry. Technically, a semiconductor is a material whose electrical conductivity sits between that of a conductor and an insulator -- and that conductivity can be modified by impurities, temperature, and electric fields.
Transistor: A transistor is an electronic switch or amplifier. It is the simplest component of semiconductor chips. A typical Intel Core processor has billions of transistors. The "gate" of a transistor is the terminal that switches it on and off.
Silicon: Silicon is a hard, brittle crystalline solid, and is considered a metalloid, meaning it has some, but not all, of the properties of a metal. These unique properties make silicon particularly well suited as the foundational material for semiconductor production.
Wafer: A wafer is a thin, circular slice of machined silicon. Semiconductor chips are manufactured on silicon wafers through a variety of processes.
Tile: Once a wafer has gone through production, it contains many tiles. The tiles (also called chiplets) are the core components of the chips inside the gear you use every day.
Interposer: A silicon substrate that sits below tiles and chiplets and allows them to pass signals to one another.
Package: After wafer production is complete, each individual tile is put into a package. These are the familiar chips with the various leads and connectors that allow the tiles to interact with other electronic components. When you put a processor in your computer, you're holding a package with tiles at its core.
When talking about billions of transistors on a tile of silicon the size of a pinky nail, it sometimes seems like the laws of physics have been suspended. In fact, engineers are working within unyielding constraints, but they're finding ways to push those constraints to the limit.
There are three key trade-offs when it comes to chip development: performance, power, and density.
Performance is all about how fast the part performs its assigned tasks. If you're looking at a Xeon processor for a massive server, you need not only to move instructions through the CPU, but also to coordinate the code running on the cores and move data to and from RAM. All of that has to be done at blisteringly fast speeds.
Power is how much energy the part uses. Servers are likely plugged into the electrical system of a data center, and they can use as much power as they need to maximize performance. Your laptop, on the other hand, needs a more balanced approach to consumption, because you want to use it unplugged.
Density is how much you can fit on the chip. The denser the chip, the hotter it's going to get. Heat is, fundamentally, wasted power. Yet, the closer the components of the chip, the faster the data can move between those components. The more tightly packed the chip, the more it's capable of doing.
Fischer described this trade-off in terms of a processor's calculation capability. "We can do calculations faster, or we can do more calculations with the same amount of power, or we can do more calculations in the same amount of time."
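Fischer's three options fall out of simple arithmetic: throughput is the power budget divided by the energy each operation costs. Here's a toy sketch of that relationship; the energy-per-operation figure and the power budgets are illustrative assumptions, not Intel numbers.

```python
# Toy illustration of the performance/power trade-off.
# All numbers are assumptions chosen for illustration only.
ENERGY_PER_OP_J = 1e-11   # assume 10 picojoules per operation

def ops_per_second(power_budget_watts: float) -> float:
    """Operations per second achievable within a given power budget."""
    return power_budget_watts / ENERGY_PER_OP_J

print(ops_per_second(15))   # laptop-class budget:  1.5 trillion ops/s
print(ops_per_second(150))  # server-class budget: 15 trillion ops/s
# Halving the energy per operation doubles throughput at the same power --
# Fischer's "more calculations with the same amount of power."
```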
Keep these trade-offs in mind as we discuss Intel's work in producing new silicon.
Back in 1965, Gordon Moore -- who would cofound Intel three years later -- made the prediction now known as Moore's Law: "The number of transistors incorporated in a chip will approximately double every 24 months."
Analogy time: Imagine you had one grain of sand in 1965. If you double that every two years, you'd have roughly 268 million grains of sand by 2021. What does 268 million grains look like? The BBC did some math for us. About 8,000 grains of sand fit in a cubic centimeter. Stacked in a column one centimeter square, 268 million grains would reach about 335 meters. That's taller than three Saturn V rockets, stacked on top of each other. Using Moore's Law, that would jump to six Saturn V rockets in two years and twelve in another two years.
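You can check the grain-of-sand arithmetic yourself, assuming one doubling every two years and the BBC's figure of roughly 8,000 grains per cubic centimeter:

```python
# Grain-of-sand illustration of Moore's Law doubling.
YEARS = 2021 - 1965          # 56 years of growth
DOUBLINGS = YEARS // 2       # one doubling every 2 years -> 28 doublings

grains = 1 * 2 ** DOUBLINGS  # start with a single grain in 1965
print(grains)                # 268435456 -- roughly 268 million

GRAINS_PER_CM3 = 8_000       # ~8,000 grains fit in a cubic centimeter
volume_cm3 = grains / GRAINS_PER_CM3
# Stack that volume in a column with a 1 cm^2 cross-section:
# cm^3 over 1 cm^2 gives height in cm; divide by 100 for meters.
height_m = volume_cm3 / 100
print(int(height_m))         # 335 -- about three Saturn V rockets tall
```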
Bringing that back to silicon, Moore postulated that chip density would double every two years. With density comes performance, and Intel has been committed to following Moore's Law.
But as chips get smaller, it takes tremendous work and innovation to be able to sustain that exponential growth. That's where Kevin, Ravi, Rahul, and their teams come in.
Fischer explained that Moore's Law is Intel's guiding principle in terms of compute value. "We get more compute power, in terms of what data you're able to manipulate in the same footprint," he said. And going forward, "Expect a continuation of that, with more advanced technology year over year, with bigger and better experiences."
Since Moore's Law is all about fitting more and more transistors into a single tile, Intel's engineers have to keep making the transistors smaller. Until recently, the processes for producing chips have been described in terms of nanometers; a nanometer is one billionth (10⁻⁹) of a meter. Intel announced this summer that it's moving beyond nanometers and ushering in the angstrom era, named for 19th-century physicist Anders Jonas Ångström, who expressed his measurements in units of one ten-millionth of a millimeter -- roughly one millionth the thickness of a strand of human hair. The angstrom came to be defined as one ten-billionth (10⁻¹⁰) of a meter, an entire order of magnitude smaller than the nanometer.
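The scales involved are easier to grasp side by side. In this sketch, the hair thickness is an assumed typical value (about 100 micrometers); the unit definitions are standard.

```python
# The unit scales mentioned above.
NANOMETER = 1e-9       # meters
ANGSTROM = 1e-10       # meters -- one order of magnitude below a nanometer

HAIR_THICKNESS = 1e-4  # meters: ~100 micrometers, an assumed typical hair

print(round(NANOMETER / ANGSTROM))       # 10 -- angstroms per nanometer
print(round(HAIR_THICKNESS / ANGSTROM))  # 1000000 -- an angstrom is about
                                         # a millionth of a hair's thickness
```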
At Intel, Fischer heads up the Intel 20A and Intel 18A programs, producing components of ever-increasing smallness. His teams are heading up changes in transistor technology that are driving this evolution. "The transistor is really the engine. It's what drives the computer, right? It's the switch that goes from zero to one. It's the basic unit of every single computer on the planet," he explained. "If you look back at other revolutionary changes in the past, [this shift is] like going from a planar transistor to a FinFET transistor."
What he's describing is the architecture of transistors. As more transistors are packed into a chip, they use and dissipate energy differently. Today, Kevin's groups are moving from FinFET transistors (which carry current through a fin-like structure) to RibbonFET (which carries current through microscopic layers of silicon arranged like stacked ribbons).
These technologies provide better control over the transistors and more efficient power usage. Keep in mind that we're talking about structures that are a million times smaller than the thickness of hair, and you'll begin to understand the magnitude (or microscopic-itude) of what they're doing.
The processor in your laptop is just one kind of chip. There are special-purpose processors used to run your microwave's keypad, processors that manage the fuel level in your car, and chips that support all of those activities.
Some chips require densely packed tiles, while other chips require other forms of flexibility, from modularity to meeting power usage requirements. All of this falls under the general category of semiconductor packaging.
If you think of semiconductor chips as buildings in a city, you can either build out or build up or do both. All approaches have their advantages.
Mahajan and Manepalli are concerned with how the microscopic circuits built with the transistors are constructed inside the package. They are innovating with two key technologies. EMIB (Embedded Multi-Die Interconnect Bridge) is all about connecting tiles (also called chiplets) next to one another to create modular components. Foveros is about stacking chiplets atop one another to increase density and reduce propagation delay (the time it takes for a signal to travel from one end of a circuit to another, or between components). The two technologies can complement one another and allow systematic partitioning in all three dimensions.
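Why does stacking reduce propagation delay? Because delay is just distance divided by signal speed, and stacked dies sit far closer together than side-by-side ones. Here's a back-of-envelope sketch; the signal speed and the distances are illustrative assumptions, not Intel specifications.

```python
# Toy propagation-delay estimate. All figures are illustrative assumptions.
C = 3.0e8            # speed of light in vacuum, m/s
V_SIGNAL = 0.5 * C   # assume signals travel at roughly half of c on-package

def delay_ps(distance_m: float) -> float:
    """Time for a signal to cross `distance_m`, in picoseconds."""
    return distance_m / V_SIGNAL * 1e12

# Side-by-side chiplets might sit a few millimeters apart; stacked dies
# can be separated by mere tens of micrometers.
print(delay_ps(5e-3))   # ~33 ps across 5 mm
print(delay_ps(50e-6))  # ~0.33 ps across 50 um -- why stacking cuts delay
```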
This gets really powerful when you realize that one final product can include chiplets designed and made by completely different teams (or even different companies). Each chiplet may serve a different function, and the resulting whole is optimized for its workload.
EMIB allows Intel to build out chips with an ever-increasing number of chiplets. The key is to build high-performance interconnects (think of these as circuit board traces, but running inside the package between chips). This innovation allows products to grow in complexity.
Before EMIB, chips used an interposer layer that had to span all the chiplets. This was challenging, because growing the size of the interposer to accommodate increasing chiplet area added cost and complexity.
According to Mahajan, "It's very hard to scale the interposer cost-effectively. EMIB overcomes that inherently because it's built on a package framework and provides high-density interconnects only where needed, which has a natural cost-performance advantage. You can also get different shapes and optimize each die-to-die connection independent of other die-to-die links. It doesn't have to be symmetrical in any way." That allows for much more flexibility in both what the chip can do, and how.
"You have very powerful platforms available now [and] scaling into the future that will allow you to do a lot more functions than we have historically done. We are doing two things. We are reducing propagation delay [and] increasing the bandwidth of that interconnect, as well," he said. "So while propagation delay is important, the bandwidth is also just as important and we are creating substantially low-power interconnects in doing so. We are creating the infrastructure that allows different IP [intellectual property] to talk to each other with minimum delay, maximum bandwidth, [and] least amount of power."
Speaking of power, even in microscopic transistors, power leakage matters. Think about it this way: when a light switch on your wall is off, it's not sending enough power to light a bulb, but it might still be letting a little power through.
Intel is solving this with RibbonFET and another technology called PowerVia. RibbonFET describes how the "source" and "drain" elements of the transistor are built. PowerVia is a power-delivery technology that layers power-routing elements on the wafer beneath the transistors, providing a much lower-resistance electrical path from the power supply to the transistor. Less resistance in the power network directly results in less wasted power.
If each of a billion transistors leaks a tiny bit of power while ostensibly off, the cumulative drain can sap batteries, reducing the usefulness of mobile devices. But Intel engineers are reducing that drain by making sure that off means "hard off" -- in other words, little to no leakage.
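A quick back-of-envelope calculation shows how tiny leaks add up at a billion-transistor scale. Every number here is an assumption chosen for illustration; real leakage currents vary enormously by process and design.

```python
# Back-of-envelope leakage estimate (illustrative assumptions throughout).
N_TRANSISTORS = 1_000_000_000  # a billion transistors, per the article
LEAKAGE_AMPS_EACH = 5e-9       # assume 5 nA leaking through each "off" switch
SUPPLY_VOLTS = 1.0             # assume a ~1 V supply

# Power = current * voltage, summed over every leaking transistor.
leakage_watts = N_TRANSISTORS * LEAKAGE_AMPS_EACH * SUPPLY_VOLTS
print(round(leakage_watts, 6))  # 5.0 -- watts burned doing nothing at all
```

Five watts of pure waste is a meaningful fraction of a laptop chip's entire power budget, which is why "hard off" matters so much.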
Fischer summed it up like this: "The RibbonFET gives you very good control of the transistor and the gate, which means when you turn it off, it's hard off. That means less leakage and power loss."
Manepalli put it in terms that will resonate with IT professionals: "Energy consumption in data centers is going through the roof. We are trying to find ways to make interconnects lower energy in terms of information transfer from chip to chip as well as package to package."
He ended with a sentiment that applies to everyone we've met from Intel while researching this article: "I truly believe this is the most fun time and the exciting time to be in packaging. The best is yet to come."
If Moore's Law holds, the density gains of the past fifty-plus years of semiconductor technology will double again in two years -- and double again two years after that. The power we have in our laptops and mobile devices is astounding now. But five years from now? It boggles the mind.
We'd like to end this with a special thanks to Kevin, Ravi, and Rahul -- not just for spending time with us so we can understand what goes into producing the chips we all rely upon, but for squeezing every last erg from the laws of physics so we can have the coolest tech, year after year.
Learn more at Intel.com.