Intel tech showcase: AI Everywhere event introduces next-level Core Ultra, Xeon processors

Intel tech showcase: AI Everywhere event introduces next-level Core Ultra, Xeon processors (Image: intel.com)

Delhi: In an attempt to increase its market share in the Artificial Intelligence (AI) space, Intel has revealed new processors intended for PCs and data centres. Intel introduced the new Core Ultra processors during its "AI Everywhere" keynote, billing the lineup as the "most AI-capable and power-efficient client processor in the company's history."

According to Intel, the Core Ultra series draws up to 79% less power than previous-generation AMD CPUs when idling on Windows, and the company claims it is 11% faster than competitors in multithreaded workloads. Built on the Intel 4 process, the company's 7nm-class node, the Ultra series uses Foveros 3D packaging, a cutting-edge die-stacking approach that lets Intel boost performance and efficiency while occupying less space. These are "the most efficient x86 processors for ultrathin systems," according to Intel. The familiar mix of performance and efficiency cores remains, now paired with integrated Intel Arc graphics that the firm claims are twice as fast as the previous generation.

The Core Ultra 7 165H, the top model shipping at launch, packs 16 cores in total (6 performance cores, 8 efficiency cores, and 2 low-power cores) with a maximum turbo frequency of 5GHz. The Core Ultra 9 185H, the series flagship with the same core count, a slightly higher 5.1GHz top turbo speed, and a faster integrated GPU, is scheduled for availability in the first quarter of 2024. The first wave of Core Ultra-powered PCs from OEM partners such as Acer, ASUS, Dell, Dynabook, Gigabyte, Google Chromebook, HP, Lenovo, LG, Microsoft Surface, MSI, and Samsung is available in shops now.

Alongside these, Intel announced its enhanced 5th generation Xeon processors, which are designed to handle end-to-end AI workloads. For Large Language Models (LLMs) with fewer than 20 billion parameters, Intel claims the new CPUs deliver inference latency of under 100 ms. The latest Xeon chips feature up to 64 cores per CPU and eight DDR5 memory channels, and they are pin-compatible with the previous generation. Intel also previewed Gaudi 3, a new AI accelerator aimed at Nvidia's highly sought-after H100 family of accelerators; it is slated to launch sometime in 2024.
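Intel's latency figure refers to how quickly a model produces each new token during inference. As a rough, hypothetical illustration only, and not Intel's benchmark methodology, the Python sketch below shows how such per-token latency is commonly measured against a 100 ms budget; the `measure_token_latency` helper and the `generate_next_token` callable it times are assumptions standing in for a real model's decode step.

```python
import time
import statistics

def measure_token_latency(generate_next_token, prompt_tokens, num_new_tokens=64):
    """Time each next-token step and return the median latency in milliseconds.

    `generate_next_token` is a hypothetical callable representing one decode
    step of an LLM; replace it with a real model's forward pass to benchmark
    actual hardware.
    """
    context = list(prompt_tokens)
    latencies_ms = []
    for _ in range(num_new_tokens):
        start = time.perf_counter()
        next_token = generate_next_token(context)  # one decode step
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
        context.append(next_token)
    return statistics.median(latencies_ms)

if __name__ == "__main__":
    # Stub decode step so the sketch runs end to end without a real model.
    def dummy_decode(context):
        time.sleep(0.05)  # pretend each token takes roughly 50 ms
        return len(context) % 1000

    median_ms = measure_token_latency(dummy_decode, prompt_tokens=[1, 2, 3])
    print(f"median next-token latency: {median_ms:.1f} ms")
```

Swapping the stub for an actual decode call of a sub-20-billion-parameter model on the target CPU would produce a number directly comparable to the 100 ms claim.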