With AI becoming an increasingly integral part of our lives, it was only a matter of time before our personal devices were made capable of processing AI-based tasks locally.
Most AI tools we use today rely on a working internet connection to have their tasks processed in the cloud. While there are some positives to this approach, the cons are worrying. Data sovereignty and data protection, for example, can be major challenges. Some tasks also require very quick, constant processing. This is where localised AI comes into play.
Thanks to its new line of PC CPUs, the Intel Core Ultra series, Intel is perfectly positioned to usher in a new age of computing, one that combines the benefits of cloud-based AI with those of edge computing, or localised AI.
Akshay Kamath, Director, PC Client Category - Intel India, explains why this is the perfect time to integrate AI into our PCs, what advantages AI PCs bring over previous-gen PCs, and how everyone, from gamers and working professionals to students and homemakers, has something to gain from AI PCs. Edited excerpts:
How are AI PCs different from a regular PC?
The AI PC powered by Intel Core Ultra represents a new generation of personal computing with dedicated AI acceleration capability across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU). Consumers will now be able to run AI tasks right on the device, instead of having to send the instructions and data to be processed in the cloud.
In addition to the hardware, software is key to enabling an AI PC. Intel has worked with more than 100 independent software vendors on more than 300 features and workloads to enable new experiences on an AI PC powered by Intel Core Ultra processors. The AI PC is not a category, it's a transition, and this generation of Intel Core Ultra processors ushers in the age of the AI PC.
What are the advantages of having hardware that enables localised AI processing?
Having hardware that enables localized AI processing offers several advantages to the computing landscape. Intel’s dedication to advancing hardware technology lays the foundation for the AI PC era, offering users unparalleled performance and versatility.
Intel Core Ultra processors are GenAI-ready and can run the latest LLMs, transformers, and text-to-image workloads with up to 70 per cent faster generative AI performance when workloads are offloaded to the GPU and NPU. This has several clear advantages. First, because everything runs locally, your data remains private. Second, by running AI-based tasks on the device, we can reduce latency and eliminate any dependency on an internet connection.
Lastly, every AI task we run on-device is a task that is not running on a server, which reduces the power consumption of the data centre and helps advance overall sustainability. While one request may not seem like much, by collectively moving a large chunk of AI capabilities on-device we reduce the overall compute load placed on data centres (and the power consumption that comes with it).
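To give a sense of what local generative AI looks like in practice, here is a minimal sketch of running a small LLM entirely on-device using Intel's OpenVINO toolchain through the optimum-intel integration. The interview does not prescribe a specific software stack, so the library choice, model ID, device string, and generation settings below are assumptions for illustration only, and exact calls may differ between library versions.

```python
# Minimal sketch: local LLM inference via OpenVINO (optimum-intel).
# Model ID and settings are illustrative; any OpenVINO-supported causal LM works.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example model, swap as needed

# export=True converts the original weights to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
model.to("GPU")  # offload to the integrated GPU; "CPU" also works, NPU support varies

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Summarise today's meeting notes:", return_tensors="pt")

# Generation runs entirely on the local device; no data leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```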
For what uses will a regular office-going PC user or a homemaker need AI processing or an AI PC?
In today’s digital age, AI processing and AI-powered PCs offer many benefits for both regular office-goers and homemakers. With AI-enabled features, office users can experience increased efficiency in day-to-day activities. Reduced power consumption in web browsing, video calls, desktop multitasking, and streaming services like Netflix not only conserves energy but also enhances overall productivity.
Moreover, AI processing revolutionizes productivity by automating labour-intensive tasks. For instance, AI can swiftly generate summaries of meetings, emails, and chat histories, saving valuable time for busy professionals. In essence, AI processing and AI PCs redefine the user experience by delivering improved productivity and better efficiency for both office users and homemakers.
We have seen AI being used to fine-tune the way several PC components perform and to make them more efficient. In what other ways will AI help with performance?
In addition to fine-tuning components in PCs, AI aids performance through predictive analytics, optimizing power consumption, and enhancing resource allocation. AI-driven algorithms predict system behaviour, preemptively allocating resources to prevent bottlenecks and optimize workflow.
Dynamic frequency scaling adjusts processor speeds based on workload demands, conserving power during low activity periods and boosting performance when needed. AI also enhances thermal management by dynamically adjusting cooling systems based on component temperatures, ensuring optimal operating conditions.
Furthermore, AI-driven software optimization streamlines processes, improving overall system responsiveness and efficiency. In summary, AI contributes significantly to enhancing PC performance through proactive management and resource optimization.
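The dynamic frequency scaling described above is handled in firmware and the operating system, but the underlying feedback loop can be sketched in a few lines. The thresholds, step sizes, and frequency range below are invented purely for illustration and do not reflect any real governor.

```python
# Toy illustration of workload-aware frequency scaling. Real governors live in
# firmware/OS; the numbers here are made up to show the feedback loop only.
def next_frequency_mhz(current_mhz: int, utilisation: float,
                       min_mhz: int = 800, max_mhz: int = 4800) -> int:
    """Raise the clock under heavy load, lower it when the core is mostly idle."""
    if utilisation > 0.80:            # sustained heavy load: boost
        return min(max_mhz, current_mhz + 400)
    if utilisation < 0.20:            # mostly idle: save power
        return max(min_mhz, current_mhz - 400)
    return current_mhz                # moderate load: hold steady

# Example: a burst of work followed by idle time
freq = 2000
for load in [0.90, 0.95, 0.90, 0.50, 0.10, 0.05]:
    freq = next_frequency_mhz(freq, load)
    print(f"load={load:.2f} -> {freq} MHz")
```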
How will an AI PC be advantageous for a PC gamer?
AI in PC gaming is still in its very early stages, but the potential is limitless. Currently, AI is being used as part of the training regimen in esports, and also to identify violations and cheating. Another way that AI is being used extensively in gaming is upscaling.
Xe Super Sampling (XeSS), our AI-based upscaling solution, lets gamers render games at a lower resolution and uses AI to output a higher-resolution image, so they can enjoy both high fps and high graphics settings.
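To make the render-low, output-high idea concrete, the sketch below works out the internal render resolution for a given output resolution and upscaling factor. The preset names and scale factors are approximate examples for illustration only, not official XeSS values, which vary by version and quality mode.

```python
# Illustrative only: how an upscaler trades internal render resolution for
# performance. Scale factors are example figures, not official XeSS presets.
PRESET_SCALE = {"quality": 1.5, "balanced": 1.7, "performance": 2.0}

def render_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Return the lower internal resolution the game actually renders at."""
    s = PRESET_SCALE[preset]
    return round(out_w / s), round(out_h / s)

# Example: targeting 4K output
for preset in PRESET_SCALE:
    w, h = render_resolution(3840, 2160, preset)
    print(f"{preset:>11}: render {w}x{h}, AI upscales to 3840x2160")
```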
Over time, we could also see AI being used to deliver further enhancements to the gaming experience, such as non-player character interactions or AI-generated scenarios in a game that are unique to your playing style. It holds a lot of potential and we are excited about what the future could bring.
How rapidly does Intel see Indian users adopting AI and AI-based tools that need dedicated AI processing in PCs?
There is a tremendous effort across the industry to integrate truly meaningful AI capabilities into software of all kinds and for all use cases. While we at Intel are obviously creating the hardware to support these new AI-centric use cases, we are also deeply involved with software and hardware vendors to make the process of bringing AI everywhere as smooth as possible.
Today, you can find AI-based functionality integrated into everything from something as simple as office productivity applications to complex functions in video editing, VFX and even music production. Perhaps the biggest advancement in all this has been the ability to move these workloads off the cloud and run them locally on your PC. The adoption of these features will only increase over time.
We also recently announced the AI PC Developer Program, which provides both software and hardware vendors with the tools they need to create an end product, whether software or hardware, that delivers great AI experiences.
Qualified vendors gain access to Intel’s Open Labs, where they receive technical and co-engineering support early in the development phase of their hardware solutions and platforms. Additionally, through this program, Intel provides reference hardware so that qualified IHV partners can test and optimize their technology so that it runs as efficiently as possible at the time of launch.
The hardware that we get today in desktops and laptops is already capable of handling some AI tasks. What made Intel think that PCs would start needing NPUs now?
Intel’s decision to incorporate NPUs into PCs stems from the evolving demands of AI-driven applications and the need for improved efficiency. While current hardware in desktops and laptops can handle some AI tasks, those tasks run on either the CPU or the GPU.
In some cases the tasks are not very demanding, but running them on either of those two units results in increased power draw. Intel recognized that as AI applications become more prevalent and sophisticated, traditional CPUs and GPUs may not be sufficient to deliver optimal power-efficient performance.
The NPU on Intel Core Ultra processors is a low-power unit, ideal for running sustained, frequently used AI workloads at low power for greater efficiency and minimal power draw, resulting in improved battery life.
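As a minimal sketch of what steering such a workload to the NPU can look like, the snippet below uses Intel's OpenVINO runtime to pick the NPU when it is available. The model file name is a hypothetical placeholder, and device availability depends on the driver and OpenVINO version; this is one possible route, not a prescribed implementation.

```python
# Minimal sketch: running a sustained AI workload on the NPU with OpenVINO.
# "noise_suppression.xml" is a placeholder for a model already converted to
# OpenVINO IR; a real application should handle missing devices gracefully.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)          # e.g. ['CPU', 'GPU', 'NPU'] on Core Ultra

model = core.read_model("noise_suppression.xml")   # hypothetical pre-converted model
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

# One inference pass; a background effect like noise suppression would run this
# in a loop, which is the kind of sustained, low-power job the NPU is built for.
dummy_input = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)
result = compiled(dummy_input)[compiled.output(0)]
```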
Moreover, the trend towards localized AI processing at the edge, driven by concerns over privacy, latency, and bandwidth limitations, underscores the importance of NPUs in PCs. By integrating NPUs into PCs, Intel aims to empower users with the capability to handle AI workloads effectively.
Intel’s newest Core Ultra CPUs are its first tile-based CPUs. Are there any advantages to taking this approach, particularly in AI-related workloads?
Intel Core Ultra represents Intel’s largest client architectural shift in 40 years. It is the first client processor manufactured on the new Intel 4 process node using our 3D high-performance hybrid architecture, and the first client tile-based design enabled by Foveros packaging technology, featuring a CPU, GPU, and NPU.
This disaggregation allows greater flexibility and unlocks new levels of optimization in client SoCs, including the addition of low-power island E-cores directly in the SoC tile, which enables optimized compute performance for a wider range of PC workloads.
This disaggregation also allowed us to integrate an NPU that delivers power-optimized AI performance. If you consider graphics, the built-in Intel Arc graphics (available on select Intel Core Ultra H-series processors) deliver lower latency, optimized power efficiency, and up to 2x the graphics performance for the consumer PC compared to our Intel Iris graphics found in the previous generation.
The NPU delivers on the need for high-efficiency, high-performance inference for emerging needs in collaboration, content consumption, productivity, and future OS needs, while the Intel Arc graphics deliver on the need for highest-throughput inference to support gaming and creative needs.
Intel CPUs deliver maximum developer flexibility with performance that allows the platform to quickly support the latest AI methods, giving ISVs the ability to deploy cutting-edge technology immediately on the vast X86 install base. This is all a result of the disaggregated approach.
How does Intel see the future of system packages in terms of heterogeneous computing?
Intel envisions a future where heterogeneous computing plays a critical role in driving innovation and addressing the diverse needs of computing environments. Intel recognizes the significance of harnessing the power of heterogeneous computing to unlock new possibilities across various domains, including edge computing and 5G networks. By integrating diverse computing elements such as CPUs, GPUs, FPGAs, and accelerators, Intel aims to deliver comprehensive solutions capable of tackling a wide range of workloads efficiently.
This approach enables optimized performance, scalability, and flexibility, empowering developers and organizations to leverage the full potential of heterogeneous computing architectures. Intel’s commitment to advancing heterogeneous computing reflects its dedication to driving technological advancements and meeting the evolving demands of modern computing environments effectively.