GF Press Releases

MRAM Continues March to Mainstream

Jan 30, 2020

For IoT and Automotive Applications, Embedded MRAM Promises Cost-Effective and Low-Power Solution 

By David Lammers

One reason the International Electron Devices Meeting (IEDM) is an important event is to see how the semiconductor industry is converging on a technology option, be it hafnium-oxide gate dielectrics, immersion lithography, or, in this case, magnetic random-access memory (MRAM).

At the 2019 IEDM, held in December in San Francisco, the major foundries and Intel all presented MRAM technologies that can be embedded in CMOS logic devices. While it is fair to say GLOBALFOUNDRIES has an edge on the others in terms of reliability and manufacturing experience, the other companies have clearly embraced MRAM as well.

MRAM’s day has come largely because embedded NOR flash requires too many masks—a dozen or more—to manufacture at the 28nm node and beyond. Embedded NOR flash also requires a high-voltage capability to write data, and the write time is quite long. MRAM has its challenges, as well, but it is faster and less power-hungry than eFlash.

Big Power Savings

“If your applications write a lot to NOR flash, then you are going to love MRAM,” said Jim Handy, the veteran memory analyst at Objective Analysis, based in Los Gatos, California. “Flash consumes a lot of power, a phenomenal amount, because it takes so long to write and requires high voltages. If you move to MRAM, there is big power savings. The write power drops by a couple of orders of magnitude, while the read power of MRAM is about the same.” 
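Handy's "couple of orders of magnitude" claim can be illustrated with simple arithmetic. The per-bit write energies below are illustrative assumptions chosen to match the quoted ratio, not GF or vendor specifications:

```python
# Back-of-envelope sketch of the write-energy gap Handy describes.
# Both per-bit figures are assumed for illustration: embedded NOR flash
# writes are slow and need high voltage, while MRAM writes complete
# quickly at logic-level voltages.

FLASH_WRITE_ENERGY_NJ_PER_BIT = 100.0  # assumed: high-voltage, long write
MRAM_WRITE_ENERGY_NJ_PER_BIT = 1.0     # assumed: ~two orders of magnitude lower

def write_energy_mj(bits_written: int, nj_per_bit: float) -> float:
    """Total write energy in millijoules for a given workload."""
    return bits_written * nj_per_bit * 1e-6

workload_bits = 8 * 1024 * 1024  # e.g., logging 1 MB of sensor data
flash_mj = write_energy_mj(workload_bits, FLASH_WRITE_ENERGY_NJ_PER_BIT)
mram_mj = write_energy_mj(workload_bits, MRAM_WRITE_ENERGY_NJ_PER_BIT)
print(f"flash: {flash_mj:.1f} mJ, MRAM: {mram_mj:.1f} mJ, "
      f"ratio: {flash_mj / mram_mj:.0f}x")
```

For a write-heavy IoT workload, the ratio of the assumed per-bit energies carries straight through to the total: the read path stays roughly even, so the savings come almost entirely from writes.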

Handy makes the point that companies developing microcontrollers have a choice: they can either load up on SRAM for working memory and put the code storage on an external (discrete) NOR flash, or they can make the jump to embedded MRAM (eMRAM). Since SRAM requires six transistors to store a bit, MRAM typically offers double the density of SRAM, or better, he said.
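The density argument can be sketched numerically. The cell areas below are illustrative assumptions in arbitrary units, not real layout figures: an SRAM bit cell needs six transistors, while an MRAM bit cell is one access transistor plus a magnetic tunnel junction stacked in the metal layers above it:

```python
# Rough sketch of Handy's density argument, with assumed cell areas.

SRAM_CELL_AREA = 6.0   # assumed: area scales with its six transistors
MRAM_CELL_AREA = 2.5   # assumed: one transistor plus MTJ overhead

def bits_per_area(total_area: float, cell_area: float) -> int:
    """How many bit cells fit in a given silicon area."""
    return int(total_area // cell_area)

area = 1000.0  # arbitrary units of die area
print(bits_per_area(area, SRAM_CELL_AREA))  # 166 bits as SRAM
print(bits_per_area(area, MRAM_CELL_AREA))  # 400 bits as MRAM: >2x denser
```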

Additionally, in systems where the SRAM requires battery backup, non-volatile MRAM is often more cost-effective than the combined chip-plus-battery cost of embedded SRAM, he said.

At the 2019 IEDM, an entire session was devoted to eMRAM. After presenting GF’s latest eMRAM reliability data, Vinayak Bharat Naik, the Singapore-based technical lead for GF’s embedded MRAM effort, said he welcomed having four companies—GF, followed by Intel, Samsung, and TSMC—pushing eMRAM at the same time.

“For the customers, if they want to move to a new technology from a conventional technology that they have been using for a long time, it cannot be sudden,” Naik said. “Once an end customer starts up on MRAM, they will grow more confident in the idea of replacing conventional memory with MRAM.”

eMRAM Reliability and Manufacturability 

Over the past year, several clients have asked GF to share additional data showing its eMRAM technology could meet all reliability tests for production, as well as withstand strong external magnetic fields that might disturb stored data. 

GF’s 2019 IEDM presentation focused on answering these questions, and it had a positive story to tell.

Naik’s IEDM paper demonstrated the manufacturability of eMRAM on GF’s 22nm FD-SOI embedded platform, using advanced magnetic tunnel junction (MTJ) stack, etch, and integration processes to achieve a fully functional 40Mb macro across the industrial operating temperature range of -40 to 125 degrees Celsius. It also showed the capability of meeting solder reflow requirements, as well as robust product reliability, with a failure rate of less than one part per million (ppm) at the package level.

The magnetic immunity study showed the 40Mb eMRAM macro can withstand an extremely high magnetic field of 1,600 Oersteds in stand-by mode at 25 degrees Celsius, with a failure rate of less than 1 ppm for a 20-minute exposure. At 125 degrees Celsius, the failure rate was still less than 1 ppm at 700 Oe. Active-mode magnetic immunity—the capability of a chip to operate in the presence of a magnetic field—of 500 Oe was also demonstrated. Endurance remained excellent, with failure rates of less than 1 ppm up to one million cycles, no degradation in resistance distributions after one million cycles, and no degradation after 500 hours of high-temperature operation. All of the results were obtained with error-correcting code (ECC) turned off.

“A magnetic field can be anywhere,” Naik said. “In the home, the charger for your phone, for example, can create a certain level of magnetic field. We need to make sure that both standby immunity and active-mode immunity are good so that the chip can operate as usual.”

In 2018, at the major technology conferences including IEDM and the Symposium on VLSI Technology, GF demonstrated its eMRAM could withstand the solder reflow steps used in chip packaging, which would allow microcontrollers (MCUs) to be programmed prior to the package solder reflow steps. The JEDEC standard of five times solder reflow at 260 degrees Celsius for five minutes has been proven with package-level tests.

Improved Reliability Performance

By presenting package-level data from all standard reliability tests, along with the magnetic immunity results, at the 2019 IEDM, GF showed it remains competitive in eMRAM technology, Naik said.

“At this IEDM, we showed that we are production-ready for industrial-grade applications, including wearables, internet of things (IoT), and many others,” he said. “GF has good production experience with 40nm and 28nm MRAMs, experience that carries over to the eMRAM market.”

GF engineers have continued to optimize the MTJ cell, including its deposition and etch processes. “Over the past year, we improved both the MTJ stack and etch as well as integration processes to improve the endurance performance with better switching efficiency. And our yields were boosted to above the 90 percent level,” Naik said.

Saving on Energy Consumption

Tom Coughlin, a memory and storage consultant who served as general chairman of the annual Flash Memory Summit for 10 years, said eMRAM “has a lot of possibilities for embedded products at the edge or end points, especially those that are power-sensitive.”

The market for emerging memories, such as eMRAM, is positioned to take off, Coughlin said. “There is big growth in persistent networks, including Factory 4.0, which combine intelligent devices with AI for more-efficient factories. In addition, agriculture could be a big market, with more farmers placing productive wireless smart sensors in their fields. Also with health care there is a need for more efficient energy usage. Many markets will drive demand. And then there are things we haven’t even thought of yet, including many consumer applications; new uses for a fast, energy-efficient memory are just starting to come online, but we haven’t recognized their potential yet.”

Naik said GF is taking it step-by-step, focusing first on IoT and industrial use, then automotive-grade eMRAM—where the temperature challenges are higher and where the data demands of autonomous driving require high-density on-chip memory—and then using MRAM as a level 4 cache, replacing some SRAM on processors.

And then there is another very large market, process in memory (PIM) computation, which was discussed often at the 2019 IEDM. PIM involves using some form of emerging memory in artificial intelligence (AI) computing. MRAM or other memory types, such as resistive RAM or phase-change RAM, could serve as the local processing element in edge devices. “Considering the superior performances of MRAM such as fast write speed, high endurance, high density, and low power, MRAM is unique among other NVMs and has a great potential for PIM computation for AI applications,” Naik said.

Process in Memory

Coughlin agreed about the potential of PIM. “Process-in-memory may be a bigger part of everything, putting AI applications in everything else,” he said. “We could do the training elsewhere, and have some learning capability on the device. At the very least, process-in-memory could run a model locally instead of at the data center.”

MRAM could also play a bigger role in data centers. “If the system is not using something, MRAM preserves the state, and when that data is needed it comes right back up. That takes us away from dependence on volatile memory toward a greater utilization of non-volatile memory. A lot of that today is driven by energy-sensitive applications, at edge points, but it could be used also in data centers,” Coughlin said.

Karim Arabi, CEO of San Diego-based Atlazo Inc., spoke at IEDM about change coming to edge devices. Autonomous driving is just one form of edge computing that will require “tons of data,” he said.

Advanced driver-assistance systems (ADAS) require “low latency computing that is near the sensor,” Arabi said.

“When it comes to data aggregation and training, we can’t beat the cloud for computing power and data size. But other applications require much better power efficiency, and edge computing is 100 to 1,000 times less costly in terms of power than transmitting data over wireless links to the cloud. And for privacy reasons, a lot of data needs to stay local,” Arabi said.

In typical von Neumann architectures, about 75-95 percent of power is consumed by moving data between the memory and the processor. “With new memory architectures such as MRAM and PC-RAM, we can replace some SRAM with MRAM, and also move data from off-chip DRAM to on-chip MRAM. Either MRAM or PC-RAM could create a new paradigm in computing,” Arabi said. “Over the next 10 years, as neuromorphic computing takes hold, MRAM and PC-RAM will become even more key.”
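Arabi's point can be sketched with rough per-operation energy figures. The numbers below are assumptions in the spirit of commonly cited estimates, not measurements: an on-chip arithmetic operation costs on the order of a picojoule, while fetching an operand from off-chip DRAM costs roughly 100 times more:

```python
# Sketch of why data movement dominates the power budget in a
# von Neumann machine, and how on-chip MRAM shrinks that share.
# All per-access energies are illustrative assumptions.

OP_ENERGY_PJ = 1.0      # assumed: one on-chip arithmetic operation
DRAM_ACCESS_PJ = 100.0  # assumed: one operand fetch from off-chip DRAM
ON_CHIP_MRAM_PJ = 5.0   # assumed: same fetch from on-chip MRAM

def movement_share(ops: int, fetches: int, fetch_pj: float) -> float:
    """Fraction of total energy spent moving data rather than computing."""
    compute = ops * OP_ENERGY_PJ
    movement = fetches * fetch_pj
    return movement / (compute + movement)

# One operand fetch per operation, a common worst case:
print(f"off-chip DRAM: {movement_share(10**6, 10**6, DRAM_ACCESS_PJ):.0%}")
print(f"on-chip MRAM:  {movement_share(10**6, 10**6, ON_CHIP_MRAM_PJ):.0%}")
```

With the assumed figures, the off-chip case lands at the high end of the 75-95 percent range Arabi cites, and moving the same data on-chip pulls the movement share down sharply, which is the paradigm shift he describes.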

New Compute Architectures

GF is positioning itself as a leader in MRAM, embracing its potential to empower GF clients to develop differentiated, feature-rich products, as well as to drive new technologies such as new compute architectures.

Ted Letavic, chief technology officer and vice president for computing and wireless infrastructure at GF, said “we now have a connected society, and if you can’t process the data that we have within the power envelope, if you can’t do the data analytics, then you can’t monetize, or even implement, AI. We have to be able to do the analytics, and that is either compute at the edge or in the data center.”

Moving forward, privacy will drive data to edge devices, where MRAM could play a role. “We all have personal data posted everywhere, from the edge to the data center. We would like to move that to the edge, to secure your data and be more private.”

A second factor driving edge computing is bandwidth. While 5G delivers more data to the data centers, that approach becomes impractical as the volume of mobile data grows. “Even with the huge promise of 5G, or even 6G, every bit that you have to transmit to the data center to compute takes bandwidth. We would like to get to the point where we have efficient compute engines at the edge. Then we could send the metadata—the result only—transmitting the result, not the raw data.”
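The bandwidth savings Letavic describes are easy to quantify with a hypothetical example. The frame and result sizes below are illustrative assumptions, not figures from GF:

```python
# Sketch of the "send the metadata, not the raw data" argument for a
# hypothetical edge camera. Sizes below are assumed for illustration.

RAW_FRAME_BYTES = 2 * 1024 * 1024  # assumed: one uncompressed camera frame
RESULT_BYTES = 64                  # assumed: e.g., a label plus a bounding box

frames = 30 * 60                   # one minute of 30 fps video
raw = frames * RAW_FRAME_BYTES
results = frames * RESULT_BYTES
print(f"raw upload: {raw / 1e6:.0f} MB, results only: {results / 1e3:.0f} KB, "
      f"saving {raw / results:.0f}x bandwidth")
```

The ratio is simply frame size over result size, so the saving is per-frame and holds regardless of how long the device runs.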

Letavic said several major research centers are engaged with GF to explore these new approaches to edge computing.

“It is so much more than a silicon solution. We have to really change the compute architecture. Instead of just talking about new transistors and ways to handle electrons and photons, we are talking about new architectures,” Letavic said in an interview at the 2019 IEDM.

MRAM could play a major role in what Letavic calls the coming “renaissance of computer design.”

“For the first time in 30 years, we have opened the toolkit and are looking at non-Von Neumann architectures, where the power benefits are tremendous. We could achieve 100 to 1,000 times lower power with dedicated architectures.”

Because the process-in-memory approach is so power efficient, MRAM could play a central role in these non-Von Neumann architectures. “As device technologists, we could keep improving the technology for the next 30 years, and we are still not going to get to a power point that meets our aspirations,” Letavic said. “We have to change the architectures and software stacks. New architectures bring new device types, new features on platforms, and new approaches to the compute problem.”