Computing

6 Billion
75% of the world’s population (6 billion consumers) will interact with data daily by 2025.1

100 MW
Large data centers require 100+ megawatts of power—enough to power ~80,000 U.S. households.2
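As a sanity check on the household comparison above, the arithmetic can be sketched as follows; the ~10,700 kWh/year average U.S. household consumption is an assumption on our part (a commonly cited EIA-style average), not a figure from this page.

```python
# Rough check of "100 MW is enough to power ~80,000 U.S. households".
# Assumption (not from this page): an average U.S. household uses
# ~10,700 kWh of electricity per year, i.e. a continuous draw of ~1.2 kW.
HOURS_PER_YEAR = 24 * 365
avg_household_kwh_per_year = 10_700            # assumed average, see lead-in
avg_household_kw = avg_household_kwh_per_year / HOURS_PER_YEAR  # ~1.22 kW

data_center_kw = 100 * 1000                    # 100 MW expressed in kW
households = data_center_kw / avg_household_kw
print(f"{households:,.0f} households")         # roughly 80,000
```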

Pervasive connectivity and the breakneck speed of data growth have led to significant memory and power bottlenecks. The combined need for even more speed, low-power yet powerful processing at the edge, and AI going mainstream is fueling new levels of semiconductor innovation, beyond Moore’s law alone, to address these challenges.


1 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf
2 US DOE, 2020.

AI

Explosive growth in the AI silicon market is fueled by ballooning data sets, 4G/5G connectivity and the need for more powerful semiconductor chips to handle the associated real-time analytics requirements.

$21 Billion
The AI silicon market will hit $21 billion in 2024.1

25%
In 2025, 25% of the world’s data will require real-time processing.2

GLOBALFOUNDRIES®’ (GF®) high-performance and ultra-low power AI accelerator solutions are optimized for training (creating computer models) and inferencing (deploying the models), both in the cloud and at the edge. Built on proven silicon platforms and complemented by robust ecosystems, they are designed to help chip designers reduce development time and solution providers get to market faster. By using AI-optimized architectures and features, these solutions can help solve the power/memory bottleneck in audio, video and image processing, smart edge devices and even autonomous vehicle applications.


1 ABI Research, Artificial Intelligence and Machine Learning – 2Q 2020 (MD-AIML-105).
2 Data Age 2025, November 2018, refreshed May 2020, seagate.com/files/www-content/our-story/trends/files/dataage-idc-report-07-2020.pdf

AI accelerator solutions from GF

12LP
Proven and robust offering with outstanding performance and area for cloud and edge AI inference
12LP+
A >20% increase in performance or a >40% decrease in power, plus a 10% improvement in logic area scaling over the base 12LP platform, for cloud and edge AI inference
22FDX
Balanced power-performance with a high level of integration and ultra-low power (1 pA/cell leakage) with 0.5 V logic operation for edge AI inference


AI / Cloud accelerators


AI accelerators for the cloud using 12LP and 12LP+ FinFET

GLOBALFOUNDRIES® (GF®) 12LP and 12LP+ AI accelerator solutions can help solve memory and power bottlenecks while speeding up AI applications such as high-end training and model inferencing in the cloud. The two FinFET-based solutions offer 1 GHz+ performance, with purpose-specific AI innovations providing significant power efficiency and area advantages. 12LP+ builds upon GF’s established 14LPP/12LP solutions, of which GF has shipped more than one million wafers.

These AI-specific solutions, complemented by GF AI design reference packages and design technology co-optimization (DTCO) services, enable cost-efficient, streamlined design and faster time to market.

AI-optimized performance & power, without moving to a smaller node.
Best-in-class IP and comprehensive third-party design and packaging ecosystem.

Design smarter, not smaller

12LP and 12LP+ deliver a superior combination of AI performance, power and area benefits, and offer the same global routing capability as 7 nm solutions, so chip designers can avoid migrating to smaller, far costlier geometries.

Maximize performance, minimize power consumption

Clients are already leveraging GF’s 12LP solution for dramatic power and performance benefits. 12LP+ builds on those advantages with optimized MAC designs, a 0.5 V Vmin SRAM bitcell for 2X lower power at 1 GHz and a dual-work function FET that enables >20% faster logic performance or >40% lower power.

Differentiate and accelerate time to market

12LP/12LP+ offer Tier 1 supplier I/O interfaces, while best-in-class IP and a rich third-party partner design ecosystem enable cost-efficient designs and quick-turn prototyping for lower NRE and faster time to production. A 2.5D interposer is available for clients using high bandwidth memory (HBM2/2e).


AI / 22FDX


AI accelerators for the edge using 12LP/12LP+ and 22FDX

A primary driver of growth in the AI silicon market is the edge taking on compute: processing data locally and delivering filtered results to the cloud.

GLOBALFOUNDRIES® (GF®) 12LP/12LP+ FinFET and 22FDX® FD-SOI edge AI accelerator solutions are optimized to reduce latency and actionable response times while enabling enhanced security and data privacy by managing data at the edge. The purpose-built solutions combine a spectrum of power, performance and area advantages that enable chip designers to choose the best fit for their discrete or embedded AI SoCs.

22FDX is up to 1000x more power-efficient than current industry edge AI accelerator offerings.*
12LP/12LP+ offer AI-optimized performance with same global routing capability as 7 nm, so you can design smarter, not smaller.

Accelerate AI at the edge

GF 12LP/12LP+ and 22FDX solutions are optimized to deliver the horsepower you need to handle the demands of AI inferencing at the edge, instead of in the data center.

Solve the power challenge

Leverage the low dynamic power and best-in-class leakage power from 22FDX, excellent thermal performance from 12LP/12LP+ and a low-voltage SRAM available with 12LP+ to minimize power consumption in AC-wired or battery-powered devices.

Differentiate with confidence

Take advantage of a combination of AI-tuned features, including the AI reference package available with 12LP/12LP+ and the eMRAM AI storage core available in automotive grade 1-qualified 22FDX to stand out from your competition.



*Assumes typical power consumption of an edge device is tens to hundreds of watts. 22FDX can achieve 20 milliwatts power consumption.
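The "up to 1000x" figure follows directly from the numbers in the footnote; the sketch below just works the arithmetic, taking 20 W (the low end of the stated range) as the assumed comparison point.

```python
# Arithmetic behind the footnote's "up to 1000x" power-efficiency claim.
# Assumption: compare against an edge accelerator drawing 20 W, the low
# end of the "tens to hundreds of watts" range cited in the footnote.
comparison_power_mw = 20_000   # 20 W expressed in milliwatts (assumed)
fdx22_power_mw = 20            # 20 mW, the 22FDX figure from the footnote
gain = comparison_power_mw / fdx22_power_mw
print(f"{gain:.0f}x")          # 1000x
```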
