
Intel Takes on NVIDIA with Affordable Gaudi 3 AI Solution 

In a significant announcement, Intel CEO Pat Gelsinger has unveiled major updates to the company’s AI product portfolio.

These updates span client and data center processors, marking a decisive move to offer more affordable AI hardware and directly challenge NVIDIA’s dominance.

Let’s talk about these exciting developments in detail.

The Gaudi 3 AI Accelerator

Intel’s Gaudi 3 AI accelerator is looking to revolutionize the AI hardware market. At Computex, Intel disclosed that the Gaudi 3 in an 8,192-accelerator cluster can offer up to 40% faster time-to-train compared to an equivalent NVIDIA H100 GPU cluster.

For smaller clusters of 64 accelerators, Gaudi 3 boasts a 15% faster training throughput on the Llama 2 70B model. Additionally, it promises an average of up to 2.2x faster inferencing for popular large language models like Llama 70B and Mistral 7B.
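To put those percentages in concrete terms, here is a quick back-of-the-envelope sketch in Python. Only the speedup ratios come from Intel’s claims; the 30-day baseline and 1,000 tokens/sec figures are invented for illustration, and “40% faster time-to-train” is read here as 40% less wall-clock time.

```python
# Illustrative arithmetic only: plugging Intel's headline claims into simple
# numbers. The baselines below are made-up assumptions, not Intel data.

h100_training_days = 30.0                       # assumed H100-cluster training time
gaudi3_training_days = h100_training_days * (1 - 0.40)   # "40% faster time-to-train"
print(f"Projected Gaudi 3 time-to-train: {gaudi3_training_days:.0f} days")

h100_tokens_per_sec = 1_000.0                   # assumed H100 inference throughput
gaudi3_tokens_per_sec = h100_tokens_per_sec * 2.2         # "up to 2.2x faster inferencing"
print(f"Projected Gaudi 3 throughput: {gaudi3_tokens_per_sec:.0f} tokens/sec")
```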

Competitive Pricing

One of the most striking aspects of this announcement is the pricing strategy. The standard AI kit, which includes eight Intel Gaudi 2 accelerators with a universal baseboard (UBB), is priced at $65,000. That’s approximately one-third the cost of comparable competitive platforms.

Meanwhile, the Gaudi 3 kit, featuring eight accelerators with a UBB, will list at $125,000, around two-thirds the cost of competing systems. This aggressive pricing is designed to make high-performance AI accessible to a broader range of organizations.

AI Kit            | Number of Accelerators | Price    | Estimated Cost Compared to Competitors
Intel Gaudi 2 Kit | 8                      | $65,000  | ~One-third
Intel Gaudi 3 Kit | 8                      | $125,000 | ~Two-thirds
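For a rough sense of what the “one-third” and “two-thirds” framing implies, the sketch below works backward from the list prices to per-accelerator costs and an implied competitor price. The competitor figures are back-calculated estimates from Intel’s framing, not published prices.

```python
# Rough cost arithmetic based on the list prices in the table above.
# Competitor prices are back-calculated from the "one-third" / "two-thirds"
# framing, so treat them as illustrative estimates only.

gaudi2_kit = 65_000       # 8x Gaudi 2 + universal baseboard (UBB)
gaudi3_kit = 125_000      # 8x Gaudi 3 + UBB

print(f"Gaudi 2: ~${gaudi2_kit / 8:,.0f} per accelerator")
print(f"Gaudi 3: ~${gaudi3_kit / 8:,.0f} per accelerator")

# Implied price of a "comparable competitive platform" under Intel's framing
print(f"Implied competitor price (Gaudi 2 basis): ~${gaudi2_kit / (1/3):,.0f}")
print(f"Implied competitor price (Gaudi 3 basis): ~${gaudi3_kit / (2/3):,.0f}")
```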

Intel also highlighted collaborations with top global system providers to bring the Gaudi 3 to market, including new partners Asus, Foxconn, Gigabyte, Inventec, Quanta, and Wistron, alongside existing partners Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro.

Lunar Lake CPU Architecture

Intel’s new Lunar Lake CPU architecture represents a major leap in its SoC design, setting the stage for future generations of AI PCs.

Built from the ground up with the latest client compute CPU, GPU, and NPU engines, Lunar Lake aims to deliver cutting-edge AI experiences with remarkable power efficiency.

You can expect Lunar Lake to power the largest number of AI PCs in the industry, delivering best-in-class power performance with up to 40% lower processor power usage in real-life applications.


From an AI perspective, the platform is expected to achieve up to 120 TOPS (Tera Operations Per Second), a key metric in measuring AI performance. Intel emphasizes that while TOPS indicates potential speed, software optimization determines actual performance. Their extensive software work promises competitive results.
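A tiny sketch makes that point concrete: at a fixed 120 TOPS peak, the latency you actually see depends on how much of that peak the software stack can exploit. The model size, token count, and utilization figures below are illustrative assumptions, not Intel numbers.

```python
# Minimal sketch of why TOPS alone doesn't determine real performance:
# the same 120 TOPS peak yields very different latencies depending on
# how much of it software optimization actually unlocks.

platform_tops = 120e12                 # 120 TOPS = 120 trillion ops/sec (peak)
ops_per_response = 2 * 7e9 * 100       # assumed: ~2 ops per weight for a 7B-parameter
                                       # model generating 100 tokens (very rough)

for utilization in (0.2, 0.5, 0.8):    # fraction of peak actually achieved
    effective_ops = platform_tops * utilization
    latency_s = ops_per_response / effective_ops
    print(f"Utilization {utilization:.0%}: ~{latency_s * 1000:.0f} ms per response")
```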

Power Efficiency

The significant reduction in processor power usage is not just a technical achievement—it has practical implications for everyday users. Lower power consumption means longer battery life for laptops and reduced energy costs for desktops.

This improvement aligns with the growing demand for more sustainable and environmentally friendly technology solutions.
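As a rough illustration of the battery-life angle, the sketch below applies the “up to 40% lower processor power” figure to a hypothetical laptop. The battery capacity, baseline processor power, and non-CPU power draw are all assumed values.

```python
# Back-of-the-envelope battery-life estimate. All inputs are illustrative
# assumptions; only the 40% processor-power reduction comes from Intel's claim.

battery_wh = 60.0          # assumed laptop battery capacity
baseline_cpu_w = 10.0      # assumed average processor power today
other_w = 5.0              # assumed display, memory, etc.

lunar_lake_cpu_w = baseline_cpu_w * (1 - 0.40)   # "up to 40% lower processor power"

before_h = battery_wh / (baseline_cpu_w + other_w)
after_h = battery_wh / (lunar_lake_cpu_w + other_w)
print(f"Estimated battery life: {before_h:.1f} h -> {after_h:.1f} h")
```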

The all-new Lunar Lake architecture will also feature new Performance-cores (P-cores) and Efficient-cores (E-cores), an advanced low-power island, and a fourth-generation Intel neural processing unit (NPU) with up to 48 TOPS of AI performance.

Xeon 6 E-core Data Center CPUs

The launch of Intel Xeon 6 processors brings a versatile solution to data center customers, catering to a wide range of needs from high AI performance to exceptional efficiency and cloud scalability.

These processors feature new Performance-core (P-core) and Efficient-core (E-core) SKUs, providing flexibility to meet diverse organizational requirements.

Impressive Performance Gains

The Xeon 6 E-core is expected to deliver high-core density and exceptional performance per watt, significantly enhancing data center capabilities for AI workloads.

Compared to 2nd Gen Intel Xeon, the Xeon 6 with E-cores offers up to 4.2x rack-level performance improvement and 2.6x performance per watt gain for media transcode workloads. This performance boost enables a 3:1 rack consolidation, a critical advantage for modern data centers.

Comparison                              | 2nd Gen Intel Xeon | Xeon 6 E-core
Rack-level performance                  | 1x (baseline)      | 4.2x
Performance per watt (media transcode)  | 1x (baseline)      | 2.6x
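To see what that consolidation claim means in practice, the sketch below applies the 4.2x rack-level figure and the quoted 3:1 consolidation ratio to a hypothetical fleet of racks; the rack count is an assumption, not a measured deployment.

```python
# Sketch of what rack consolidation could look like, using Intel's claimed
# ratios. The starting rack count is an illustrative assumption.

old_racks = 30             # assumed number of 2nd Gen Xeon racks today
rack_perf_gain = 4.2       # Xeon 6 E-core vs 2nd Gen Xeon (rack level)

# Racks needed for the same total throughput at the claimed gain
new_racks = old_racks / rack_perf_gain
print(f"Racks needed for equal throughput: ~{new_racks:.0f} (vs {old_racks})")

# Intel quotes a more conservative 3:1 consolidation target
print(f"At 3:1 consolidation: {old_racks // 3} racks")
```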

Flexibility and Scalability

The Xeon 6 processors are designed with flexibility in mind, offering both P-core and E-core options to address a variety of workloads. This adaptability is crucial for organizations that need to balance performance and efficiency, whether they are running AI applications, managing cloud services, or handling data-intensive tasks.

The Intel Xeon 6 E-core (code-named Sierra Forest) is the first of the Xeon 6 processors to debut and is available now, with the Xeon 6 P-core processors (code-named Granite Rapids) expected to launch next quarter.


Why Do These Innovations Matter?

These updates from Intel signify more than just incremental advancements. They represent a strategic shift in the AI and data center landscapes, aiming to democratize access to high-performance computing.

Intel is poised to expand the adoption of advanced AI technologies across various industries. How? By offering competitive alternatives to NVIDIA at a lower cost!

Real-World Impact

Consider a mid-sized company looking to implement AI solutions. Previously, the high cost of NVIDIA’s top-tier hardware might have been prohibitive. Now, with Intel’s more affordable yet powerful Gaudi 3 accelerators, this company can access cutting-edge AI capabilities without breaking the bank.

This democratization of AI hardware could spur innovation and efficiency across numerous sectors, from healthcare to finance to manufacturing.

The Future of AI and Data Centers

Intel’s latest announcements underscore their commitment to leading the charge in AI and data center technology. Intel’s innovations will play a pivotal role in shaping the future of computing, as organizations continue to seek more powerful and efficient solutions.

Expect more updates from Intel as the company continues to push the boundaries of what’s possible in AI and data center technology. The competition is heating up, and the beneficiaries will be the organizations and consumers who gain access to these advancements.

Key Takeaways

  • Gaudi 3 AI Accelerator: Up to 40% faster time-to-train and 2.2x faster inferencing at a fraction of the cost.
  • Lunar Lake CPU Architecture: Leading AI performance with significant power efficiency.
  • Xeon 6 E-core CPUs: Exceptional performance and efficiency for data centers, with significant rack-level performance improvements.

Intel’s latest offerings are not just about staying competitive. They’re about reshaping the landscape of AI and data centers for a more accessible and efficient future.

Sign Up For The Neuron AI Newsletter

Join 450,000+ professionals from top companies like Microsoft, Apple, & Tesla and get the AI trends and tools you need to know to stay ahead of the curve 👇
