AI PCs: revolutionary technology of the future?



How do computers built with dedicated artificial intelligence hardware differ from regular PCs, and are they going to revolutionize the future of technology? What exactly is an AI PC?


These days, it’s hard to find a tech company that isn’t trying to enter the world of artificial intelligence. Since the emergence of ChatGPT, which was embraced by users around the world in a remarkably short time, artificial intelligence has found its way into a wide range of industries. Personal computer makers have long been trying to build artificial intelligence into their hardware, and we saw clear examples of this at CES 2024 and Samsung Galaxy Unpacked 2024.

Although a significant range of new technologies was showcased at these events, the main focus was the application of artificial intelligence in mobile phones, laptops, and computers. Mobile phones were the first to arrive with AI features and capabilities, but the idea of AI-equipped CPUs in laptops and personal computers turns our minds toward the endless possibilities of this technology.

This expectation is probably somewhat misleading, or at least premature. As recent technology exhibitions have shown, many companies put forward ideas whose benefits for the end user remain unclear. Currently, companies such as Intel, AMD, and Nvidia have shown that they are focused on developing AI-centric hardware by adding a Neural Processing Unit (NPU) to their latest processors.

However, experts’ questions about what special features these processors offer and what they actually do are met with vague answers. Companies often point to things that may be possible in the future but have no immediate, tangible, user-centric benefits. Read on for a clearer view of AI PCs and their role in the future of technology.

Table of Contents
  • What is an AI PC?
  • What is an NPU?
  • Which companies make neural processing units?
  • How is NPU performance measured?
  • Applications of NPUs and AI PCs
  • The role of Windows 12 in the development and adoption of AI PCs
  • Artificial intelligence software
  • Technology executives’ vision for AI PCs
  • Which companies dominate the artificial intelligence hardware market?
  • AI PCs: marketing bubble or vital technology of the future?

What is an AI PC?

An artificial intelligence PC (AI PC) can be thought of as a supercharged personal computer with the right hardware and software to perform demanding tasks based on artificial intelligence and machine learning. At its core, the story revolves around heavy mathematical computation, data cleaning and sifting, and the processing power required for machine learning and AI workloads.

These tasks cover a wide range of generative AI workloads, such as Stable Diffusion image generation, intelligent chatbots running local large language models, comprehensive data analysis, training AI models, running complex simulations, and AI-based applications.

A conceptual image of a robot behind a computer

In addition to powerful CPUs and GPUs, ample RAM, and fast storage options, AI computers are equipped with a new piece of hardware: the NPU, or Neural Processing Unit, which is specifically designed to accelerate AI tasks.

In addition to a CPU and GPU, AI PCs are equipped with an NPU to handle artificial intelligence tasks

We have already seen Intel and Microsoft work together to redefine the AI computer, adding a physical Copilot key to the keyboards of many new laptops so that PCs can invoke Copilot’s AI capabilities separately and locally. However, measuring NPU power and performance requires standards and requirements that cannot easily be established in a short period of time.

The main goal, therefore, is to build and develop systems that are faster and more efficient at AI tasks and optimized for energy consumption; in other words, systems that no longer need to send data, especially sensitive data, to AI cloud servers for processing. With an AI PC, users can be confident that their system works independently of the internet and that storing and processing data locally improves their security.
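The local-first idea can be illustrated with a tiny sketch. Everything here is hypothetical (there is no standard AI PC API); it only shows the routing logic: sensitive requests stay on the device, while the rest may go to a cloud service.

```python
# Hypothetical sketch of local-first AI dispatch on an AI PC.
# run_on_npu and run_in_cloud are stand-ins, not a real API.

def run_on_npu(prompt: str) -> str:
    # Stand-in for on-device inference (e.g. a local language model on the NPU).
    return f"[local] processed: {prompt}"

def run_in_cloud(prompt: str) -> str:
    # Stand-in for a request to a remote AI service.
    return f"[cloud] processed: {prompt}"

def dispatch(prompt: str, sensitive: bool) -> str:
    # Sensitive data never leaves the device; everything else may use
    # the (typically more capable) cloud model.
    return run_on_npu(prompt) if sensitive else run_in_cloud(prompt)

print(dispatch("summarize my medical records", sensitive=True))
print(dispatch("what is the weather in Paris?", sensitive=False))
```

The point of the sketch is only the branch in `dispatch`: whatever the real software stack looks like, privacy comes from choosing the local path before data ever leaves the machine.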


What is an NPU?

The Neural Processing Unit (NPU) is a specialized processor developed to handle the heavy artificial intelligence tasks that were previously assigned to the graphics card.

Conceptual design of NPU

Current GPUs can handle AI workloads, but they consume a great deal of power in doing so. For laptop users, battery life is often critical, and heavy load on the graphics card is unwelcome; even on a desktop, other programs slow down and cause plenty of frustration.

Of course, NPUs still cannot take such tasks over from the GPU entirely. The current relationship between the neural processing unit and the graphics card is more of a partnership: they work in tandem to reduce processing time while limiting power consumption.
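That division of labor can be pictured as a simple lookup: each workload runs on the unit that suits it best, with the CPU as the general-purpose fallback. The categories below are illustrative, not a real scheduler.

```python
# Toy sketch of CPU/GPU/NPU cooperation. Real schedulers are far more
# dynamic; this only illustrates "each unit does what it is best at".

PREFERRED_UNIT = {
    "matrix_multiply": "NPU",  # dense neural-network math
    "render_frame": "GPU",     # graphics workloads
    "parse_file": "CPU",       # general-purpose logic
}

def assign(workloads):
    # Unknown workloads fall back to the general-purpose CPU.
    return {task: PREFERRED_UNIT.get(task, "CPU") for task in workloads}

print(assign(["matrix_multiply", "render_frame", "parse_file", "zip_folder"]))
```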

An NPU’s per-second processing throughput is up to 10 times that of traditional CPU cores

To appreciate the computing speed of these chips, keep in mind that their per-second throughput is up to 10 times that of traditional CPU cores, which makes them well suited to running large language models and complex algorithms.


Thus, the NPU gives users a smoother workflow alongside the CPU and GPU. Also keep in mind that an NPU works on principles similar to platforms like ChatGPT and DALL-E, with the important difference that its datasets, algorithms, and language models live locally on the chip, while ChatGPT and DALL-E must send data to cloud servers for processing.

Desktop PCs aren’t the only devices getting AI hardware and software upgrades. Almost every flagship laptop entering the market this year carries some type of NPU, and smartphones are no exception: Samsung presented the Galaxy S24 series with features such as AI-based transcription and translation tools, as well as AI content generation tools for editing images and videos.

Data sets, algorithms, and language models reside on the NPU chip itself

Next-generation NPUs will probably be able to perform AI tasks on their own, leaving GPUs to focus on what they do best, but we are not there yet.

Which companies make neural processing units?

As expected, the tech giants have a strong presence in the AI PC market. Intel and AMD have released chips with AI cores in the Core Ultra and Ryzen 8000G series, respectively, and Nvidia has built AI features into its GeForce RTX graphics card line.

The NPUs in these chips take over part of the AI workload, including AI effects in video calls and video production, improved multitasking through AI accelerator software, and AI assistants.

However, Intel and AMD have not been pioneers in the field of NPUs. In 2020, when Apple ditched Intel and unveiled its own line of M-series processors, those chips already included a “Neural Engine” NPU.

But the story begins a few years earlier. In September 2017, Apple unveiled the A11 Bionic chip for the iPhone, widely considered the first chip with a neural engine, and some Qualcomm Snapdragon mobile processors followed with neural engines in 2018.

Intel and AMD may not be the first NPU manufacturers, but they will undoubtedly do more than any other company to change the scene of this game.

How is NPU performance measured?

NPU performance is measured in TOPS, trillions of operations per second, and this metric is likely to become the standard measure of a neural processing unit.

Todd Lewellen, a vice president in Intel’s Client Computing Group, has said that running Microsoft’s Copilot AI service locally rather than in the cloud requires an NPU with a minimum performance of 40 TOPS.

The performance of the latest Intel and AMD chips does not even reach 20 TOPS

The catch is that the latest silicon from Intel and AMD, the Meteor Lake and Hawk Point processor series, is estimated at less than half that figure and does not even reach 20 TOPS.
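TOPS figures can be sanity-checked with back-of-the-envelope arithmetic. A commonly used simplification is TOPS ≈ 2 × (MAC units) × (clock rate), since each multiply-accumulate counts as two operations. The unit counts and clock speeds below are made-up illustrative numbers, not real chip specifications.

```python
def tops(mac_units: int, clock_hz: float) -> float:
    # Each multiply-accumulate (MAC) counts as two operations per cycle.
    return 2 * mac_units * clock_hz / 1e12

# Illustrative (made-up) NPU configurations:
small_npu = tops(mac_units=4096, clock_hz=1.8e9)    # ~14.7 TOPS
large_npu = tops(mac_units=12288, clock_hz=1.9e9)   # ~46.7 TOPS

COPILOT_MINIMUM = 40  # TOPS, the local-Copilot threshold cited above
print(f"small NPU: {small_npu:.1f} TOPS, meets minimum: {small_npu >= COPILOT_MINIMUM}")
print(f"large NPU: {large_npu:.1f} TOPS, meets minimum: {large_npu >= COPILOT_MINIMUM}")
```

Real vendors quote TOPS at a specific precision (usually INT8), so two chips with the same headline number are not always directly comparable.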

Qualcomm is expected to launch its Snapdragon X Elite chips this year, which pair a 12-core CPU with an NPU delivering 45 TOPS of performance.

Applications of NPUs and AI PCs

The question on most users’ minds is what AI PCs are actually supposed to do for us.

In the early days of these systems, it was not easy to distinguish them from ordinary PCs, because every computer has access to AI programs over the internet. The key distinction of AI PCs is that they process information locally and do not need an internet connection to benefit from AI software.

Tech companies usually focus on the following features and capabilities to promote AI PCs:

  • Text-to-image generation programs
  • AI-based device security features
  • Intelligent battery management
  • Improved photo and video editing capabilities
  • AI assistants for writing, coding, autocorrection, and text prediction

Most of these features initially require constant internet access, but some apps, such as the AI assistant, can also be used offline.

The current idea behind AI PCs is to use artificial intelligence to accelerate and optimize programs on the computer, along with a set of features that improve everyday use. This will be useful for some specific applications, but it can hardly be called revolutionary; the true potential of this new technology seems to lie in the future.

The role of Windows 12 in the development and adoption of AI PCs

Nothing motivates businesses and consumers to upgrade their hardware like a new version of Windows. For this reason, PC manufacturers hope that Windows 12 will be the lever that will lead to a big explosion in sales of artificial intelligence PCs.

Windows 12

If some of the rumors come true, artificial intelligence will be the cornerstone of the new version of Windows. Microsoft is likely to offer Windows Copilot with more features than Bing Chat, the company’s former AI-focused assistant, and the gap between Windows and macOS in AI processing is gradually narrowing.

Artificial intelligence software

The difference between artificial intelligence software and classic software lies in how they handle the work you ask of them. A typical application only gives users predefined tools, much like the specialized tools in a mechanic’s toolbox.

You have to learn the best way to use them, and reaching peak productivity takes personal experience. At every step of the way, everything is up to you.

In contrast, AI software can learn on its own, make decisions, and perform complex creative tasks the way humans do. The ability to learn changes the software model: the AI program does the work at your request, the way you asked for it.

This fundamental difference allows AI software to automate complex tasks and provide personalized experiences. In addition, vast amounts of data can be processed more efficiently, and the way we interact with our computers will change.

Artificial intelligence software can learn and make decisions by itself

For example, say you took a photo on a beach trip, but at that exact moment strangers wandered into the background. Normally you would need professional editing tools to remove the unwanted parts, and if you want the result to look accurate, realistic, and convincing, you might spend hours working on the photo.

But AI software trained on millions of images of similar landscapes can “imagine” what the beach looks like without the crowd. So instead of juggling different image editors, you just click a button, and everything that would have taken hours to edit is suddenly gone.
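Real object removal relies on a trained generative model, but the underlying idea — replace masked pixels with values inferred from their surroundings — can be shown with a deliberately naive fill that averages each masked pixel’s unmasked neighbors. This is a toy illustration, nothing like a production inpainting model.

```python
# Toy "inpainting": fill masked pixels with the average of their
# unmasked neighbors. Real AI editors use trained generative models.

def fill_masked(image, mask):
    # image: 2D list of grayscale values; mask: 2D list, True = remove.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                neighbors = [
                    image[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                ]
                out[y][x] = sum(neighbors) // len(neighbors) if neighbors else 0
    return out

beach = [[200, 200, 200],
         [200,  50, 200],   # 50 = an unwanted passer-by
         [200, 200, 200]]
mask = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
print(fill_masked(beach, mask))  # the masked pixel is rebuilt as 200
```

A diffusion model does the same "fill in from context" job, but instead of averaging four neighbors it draws on everything it learned from millions of photos.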

This example is probably familiar; you have seen such features in new smartphones. Now imagine that almost all the software on your personal computer works this way, following your wishes. That picture is the long-term horizon of AI PCs.

According to Robert Hallock, Intel’s director of artificial intelligence, AI software follows different algorithms and runs differently on processors. He says:

I recently witnessed an AI create an entire PowerPoint out of nothing. There was no need to tell the program how many slides we needed and in what order, or to specify how to lay them out and break them up into smaller sections. AI takes a blank page from you and delivers what you need for the project.

Technology executives’ vision for AI PCs

Lenovo is one of the companies playing a significant role in the AI systems market; almost all of its AI systems are developed in collaboration with AMD. Robert Herman, vice president of Lenovo’s workstation business, says:

First and foremost, a workstation is an AI PC that uses a powerful GPU. In addition, the in-system processor and processor-attached memory are all building blocks for developing artificial intelligence and directing its operations.

According to Herman, Lenovo expanded its workstation team in 2017 to work on client-side AI, and since then the technology has gradually made its way into more consumer-friendly products. He emphasizes that we will soon see NPUs and AI engines in personal computers that are perfectly suited to everyday use.

Jason Banta, head of AMD’s PC OEM division, acknowledged in an interview that Lenovo led the way at this year’s CES in introducing hardware products with neural processors and AI systems. He also said:

We’re bringing millions of AI PCs to market, and luckily you’re now seeing developers trying to better understand these products. They want to understand how this technology works on a personal level with their applications and improves their programs.

Banta has previously called AI PCs the technology world’s biggest revolution since the introduction of the graphical interface. In his view, cooperation with big partners like Microsoft will be a major step in the development of AI hardware and software, and the willingness of other software developers to learn the new AI systems will accelerate the market’s growth.

AI PCs are the technology world’s biggest revolution since the introduction of the graphical interface

At Intel’s innovation event last year, CEO Pat Gelsinger called AI personal computers a big and surprising change in the world of technology. Microsoft, Dell, and HP are among the companies that have joined the AI PC movement.

At the Snapdragon Summit in October, Qualcomm engineering vice president Ain Shivnan described the next step in personal computing as a change in how hardware is used to deliver completely new and more personalized AI experiences to consumers. Microsoft CEO Satya Nadella also pointed out that large-scale AI applications rely on both cloud processing and personal computers:

We will have literally tons of applications and programs, some of which will use local processing models and some will use hybrid models. I think this is the future of artificial intelligence.

Which companies dominate the artificial intelligence hardware market?

Business trends research firm Gartner projects that AI personal computers will account for about 43 percent of the PC market by 2025, rising to 60 percent by 2027. Regardless of how closely these estimates hold, the most important companies building AI hardware are the following:

Artificial intelligence PCs are expected to account for 60% of global computer sales by 2027.


Nvidia

In 2023, when Nvidia’s market value exceeded one trillion dollars, the technology giant became one of the main players in the artificial intelligence hardware market. Nvidia has released the A100 chip and Volta GPUs specifically for data centers and has announced its readiness to produce AI-equipped hardware for the gaming sector.

Last August, the company introduced its newest product to the world: the Grace Hopper platform, which uses HBM3e memory and offers three times the bandwidth and memory capacity of current Superchips.

Finally, Nvidia’s NVLink technology can connect the Grace Hopper superchip to other chips, allowing multiple GPUs to communicate with each other over a high-speed link.


Intel

Intel has achieved a strong position in the CPU market with its artificial intelligence products, which has led many competitors to accuse the company of monopolizing the field of AI. Although Intel has not overtaken Nvidia in GPUs, the company’s CPUs handle about 70 percent of the world’s data center inference.

As of last fall, Intel had worked with 100 independent software vendors on more than 300 AI-accelerated features to improve PC experiences in audio effects, content creation, gaming, security, streaming, video collaboration, and more.

After the Core Ultra processors were introduced in December, it was announced that the flagship product will be used in more than 230 laptop designs. Intel’s most important advantages in AI PCs are software enablement, scalability, and immediate product availability.


Alphabet

Google’s parent company offers a variety of products for mobile devices, data storage, and cloud infrastructure. The company has developed Cloud TPU v5e for large language models and generative AI, which processes data five times faster at half the cost of the previous generation.

Alphabet is now focused on producing powerful AI chips to meet the demand of large projects, and it has also unveiled its Multislice performance-scaling technology. The fourth generation of Alphabet’s TPUs improves floating-point performance by up to 60 percent in multibillion-parameter models.


Apple

Apple’s specialized on-chip cores, known as the Neural Engine, have advanced the design and performance of the company’s AI hardware. The technology first reached laptops with the M1 chips in MacBooks, making general performance 3.5 times faster and graphics performance five times faster than the previous generation.

The success of Apple’s M1 chip led to the introduction of the M2 and M3 series, which benefited from more powerful cores and much better graphics performance.


IBM

After the success of Telum, its first specialized artificial intelligence chip, IBM decided to design a powerful successor to compete in the AI market. In 2022, the company unveiled a dedicated chip called the Artificial Intelligence Unit (AIU), which uses more than 23 billion transistors and 32 processing cores.

One of the most important differentiators of IBM’s AI vision is that instead of focusing on GPUs, it has shifted to producing mixed-signal analog chips, with improved energy efficiency and competitive performance.


Qualcomm

Although Qualcomm entered the AI hardware market later than the other competitors mentioned here, its experience in telecommunications and mobile phones makes it a serious player in the AI hardware scene.

Qualcomm’s Cloud AI 100 chip beat the Nvidia H100 in a series of benchmarks. In one test, the Qualcomm chip handled 227 data center server queries per watt, compared with 100 for the Nvidia chip. In the object detection test, the Cloud AI 100 again proved its superiority, managing 3.8 queries per watt against the H100’s 2.4.
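Those benchmark figures are performance-per-watt ratios, so the size of Qualcomm’s lead is simple arithmetic. The numbers below are taken from the results quoted above, not independently verified.

```python
# Queries served per watt, as reported in the benchmarks cited above.
results = {
    "server queries/W": {"Cloud AI 100": 227, "H100": 100},
    "object detection queries/W": {"Cloud AI 100": 3.8, "H100": 2.4},
}

for test_name, chips in results.items():
    ratio = chips["Cloud AI 100"] / chips["H100"]
    print(f"{test_name}: Qualcomm leads by {ratio:.2f}x")
```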


Amazon

Amazon has shifted part of its focus from cloud infrastructure to chips in order to maintain its stock value and technology market share.

For example, the company developed Elastic Compute Cloud (EC2) Trn1 instances specifically for deep learning and generative AI models. These instances use the Trainium chip, a type of artificial intelligence accelerator.

The smallest Trn1 machine learning instance, trn1.2xlarge, offers 12.5 gigabits per second of network bandwidth and 32 GB of accelerator memory. The larger trn1.32xlarge configuration has 16 accelerators, 512 GB of memory, and 1,600 gigabits per second of bandwidth.

AI PCs: marketing bubble or vital technology of the future?

With all the hype surrounding AI these days, it’s no surprise that chipmakers are scrambling to implement AI into their products as quickly as possible before consumer interest wanes.

There’s no doubt that adding NPUs to processors will bring real benefits to end users in the long run, but the early waves of AI PCs mostly ride the trends that have hit the mainstream.

Currently, NPU is not considered a revolutionary element for personal computers

The AI PC capabilities we covered in the previous sections are interesting features, but they are still accessible through external, web-based applications. AI PC manufacturers need to develop programs that give users a reason to upgrade their systems; otherwise, people’s enthusiasm will fade very quickly.

Of course, the same was true in the early days of ChatGPT and other artificial intelligence tools. AI chatbots initially seemed more hype than practical, but as their implications were understood more widely and deeply, they also became more powerful tools.

Currently, NPUs are not a vital or extraordinary element of personal computers. They simply speed up what you already do with your computer and make programs more efficient, but on their own they don’t change the playing field. The advancement and ubiquity of AI PCs seem to rest with developers, who must use the new chip architecture to create innovative software that brings tangible value to consumers.

Perhaps in the future, when applications bring artificial intelligence to their platforms and new technologies are developed focusing on this technology, there will be a greater difference between normal computers and artificial intelligence PCs.


Unveiling of OpenAI’s new artificial intelligence capabilities





OpenAI claims that its free GPT-4o model can talk, laugh, sing, and see like a human. The company is also releasing a desktop version of ChatGPT.


Yesterday, OpenAI introduced GPT-4o, a completely new artificial intelligence model that, according to the company, is a step closer to much more natural human-computer interaction.
The new model accepts any combination of text, audio, and images as input and can produce output in all three formats. It can also detect emotion, allows the user to interrupt it mid-speech, and responds almost as quickly as a human in conversation.
In the launch livestream, Mira Murati, OpenAI’s chief technology officer, said: “The special thing about GPT-4o is that GPT-4-level intelligence has been made available to everyone, including our free users. This is the first time we’ve taken a big step forward in ease of use.”
During the unveiling, OpenAI demonstrated GPT-4o translating live between English and Italian, helping a researcher solve a linear equation written on paper in an instant, and listening to an OpenAI executive’s breathing in order to coach him on deep breathing.
The letter “o” in GPT-4o stands for “omni,” a reference to the model’s many modalities.
OpenAI said that GPT-4o was trained on text, images, and audio together, meaning all input and output is processed by a single neural network. This differs from the company’s previous models, including GPT-3.5 and GPT-4, which let users ask questions by speaking but first converted the speech to text, causing tone and emotion to be lost and interactions to slow down.
OpenAI will make the new model available for free to all ChatGPT users over the next few weeks. Starting today, it is also releasing a desktop version of ChatGPT for Apple’s Mac computers, initially for users with a paid subscription. The introduction of the new model took place one day before Google I/O, Google’s annual developer conference.
It should be noted that shortly after OpenAI introduced GPT-4o, Google also presented a version of its artificial intelligence known as Gemini with similar capabilities.
While the GPT-4 model excelled at tasks related to image and text analysis, the GPT-4o model integrates speech processing and expands its range of capabilities.

Natural human-computer interaction

According to OpenAI, the GPT-4o model is a step towards a much more natural human-computer interaction that accepts any combination of text, audio, and image as input and produces any combination of text, audio and image.
The model can respond to voice input in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response times in conversation.
The model matches GPT-4 Turbo’s performance on English text and code, with a significant improvement on non-English languages, while being much faster and 50 percent cheaper via the application programming interface (API). GPT-4o is especially stronger in visual and audio understanding than existing models.

What exactly does the introduction of this model mean for users?

The GPT-4o model significantly enhances the experience of using ChatGPT, OpenAI’s wildly popular AI chatbot. Users can now interact with ChatGPT like a personal assistant, ask it questions, and even interrupt it whenever they want.
Additionally, as mentioned, OpenAI is introducing a desktop version of ChatGPT along with a revamped user interface.
“We recognize the increasing complexity of these models, but our goal is to make the interaction experience more intuitive and seamless,” Murati emphasized. “We want users to focus on working with GPT instead of being distracted by the UI. Our new model can reason across text, audio, and video in real time. It is versatile, fun to work with, and a step toward a much more natural form of human-computer interaction.”
GPT-4o has also been extensively reviewed by more than 70 experts in areas such as social psychology, bias and fairness, and misinformation, to identify risks introduced or amplified by the newly added modalities. OpenAI used these findings to develop safety interventions for interacting with GPT-4o. Members of the OpenAI team demonstrated the model’s audio skills during the public presentation: a researcher named Mark Chen highlighted its ability to gauge emotions and its tolerance for user interruptions.
Chen demonstrated the model’s versatility by requesting a bedtime story in a variety of tones, from dramatic to robotic, and even had it sing. As mentioned, the new model is available for free to all ChatGPT users; until now, GPT-4-class models were only available to people who paid a monthly subscription.
“This is important to us because we want to make great AI tools available to everyone,” said OpenAI CEO Sam Altman.

Strong market for generative artificial intelligence

OpenAI is leading the way in generative AI alongside Microsoft and Google, as companies across sectors rush to integrate AI-powered chatbots into their services to stay competitive.
For example, OpenAI competitor Anthropic recently unveiled its first enterprise offering along with a free iPhone app.
“We recognize that GPT-4o’s audio modalities present new risks,” OpenAI said in a statement. “Today we are publicly releasing text and image inputs and text outputs, and over the coming weeks and months we will work on the technical infrastructure, post-training usability, and safety needed to release the other modalities. For example, at launch, audio outputs are limited to a set of predefined voices and adhere to our existing safety policies. We will share more details about the full range of GPT-4o’s modalities in a forthcoming system card.”
According to reports, the generative AI market attracted a staggering $29.1 billion across nearly 700 deals in 2023, up more than 260 percent from the previous year. Forecasts suggest the market’s revenue will exceed one trillion dollars within the next decade. However, academics and ethicists troubled by the technology’s potential to perpetuate bias have raised concerns about the rapid deployment of untested services.
Since launching in November 2022, ChatGPT has broken records as the fastest-growing user base in history and now has nearly 100 million weekly active users. OpenAI reports that more than 92 percent of Fortune 500 companies use it.
At the presentation event, Murati answered some questions from the audience, and when she spoke in fluent Italian and the artificial intelligence translated her words into English, the hall filled with excitement.
There is more: the next time you take a selfie, OpenAI’s artificial intelligence can assess your exact emotions. All you have to do is select the selfie and ask ChatGPT to tell you how you feel.
The OpenAI team members were apparently so pleased during the demo that ChatGPT itself asked them why they were so happy.
