Xiaomi Redmi Note 13 Pro Plus 5G review

Redmi Note 13 Pro Plus 5G is Xiaomi’s flagship in the very successful Redmi Note series. While Pro Plus models have existed for generations, it was only with the Redmi Note 12 Pro Plus that the company moved to premium pricing for what had been a budget smartphone series.

The Redmi Note 13 Pro Plus 5G takes things up a notch and is the most expensive Redmi phone to date. The phone comes well equipped with a wide range of high-end features. For the first time on a Redmi device, there is a curved screen with Gorilla Glass Victus protection, as well as an IP68 ingress protection rating. The main camera is Samsung’s new 200MP ISOCELL HP3, and you get the same 120W super-fast charging with the charger included in the box.

Powering all of this is the custom MediaTek Dimensity 7200-Ultra, paired with an updated list of storage and memory options: UFS 3.1 storage now starts at 256GB and goes up to 512GB.

The Redmi Note 13 Pro Plus 5G starts at INR 31,999, INR 4,000 more than the Redmi Note 12 Pro Plus 5G and INR 11,000 more than the Redmi Note 11 Pro Plus 5G that was launched just two years ago.

Unboxing

The Redmi Note 13 Pro Plus 5G comes in Xiaomi’s updated packaging for the Redmi series, which features a red color scheme and embossed text on the front, replacing the image of the phone found on previous packaging.

Inside the box, we get a 120W charger, a cable, and a case. The soft silicone case moves from a transparent design to an all-black one. Since it is made of a matte material, it should not turn yellow the way the previous silicone cases did.

New case design

It has a fairly large cutout at the top around the camera, which exposes too much of the phone for purely aesthetic reasons and doesn’t offer much in the way of protection. We recommend getting a better case if protection is a priority.

Design

Xiaomi has done some interesting things with the design of the Redmi Note 13 Pro Plus 5G. For starters, this is the first Redmi phone to feature a curved display. There aren’t many practical benefits to this design, but it looks good, and since the company is starting to charge more for these phones, they might as well look premium.

There is also something interesting going on at the back of the phone, though only on one particular variant. The Purple Fusion model has a vegan leather back. Vegan leather sounds fancy, but it’s just another word for plastic. What’s interesting, though, is the use of multicolored geometric shapes reminiscent of the work of the famous Dutch painter Piet Mondrian, albeit in much more subtle colors.

Unfortunately, this interesting part of the design is exclusive to the Purple Fusion model, as the Black Fusion and White Fusion pictured here have plain glass backs with slight differences in reflective materials for the top and bottom of the panel. However, some may prefer the glass back on the black and white models as it feels better in the hand and is less likely to smudge than soft plastic.

A plastic frame sits between the two glass panels and is flattened on all four sides. With a little skill and patience, you can balance the phone on three of its four sides, if you have nothing better to do with your time.

The top of the phone has a cutout for the speaker, a microphone, and the IR blaster. The bottom has a speaker, a microphone, the USB-C port, and the SIM tray. On the right side are the power and volume buttons.

Aesthetically, the Redmi Note 13 Pro Plus 5G has the premium look and feel that the company is aiming for. But what really makes it stand out is the inclusion of IP68 dust and water resistance. That’s on par with flagship devices from other manufacturers, and that’s a really good thing, especially if you live or work in challenging climates.

Display

The Redmi Note 13 Pro Plus 5G has a 6.67-inch AMOLED display with a resolution of 2712 x 1220. The display can refresh up to 120Hz, has a maximum brightness of 1800 nits, 100% P3 coverage and is protected by Corning Gorilla Glass Victus.
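As a quick sanity check on those panel specs, the pixel density can be worked out from the quoted resolution and diagonal. This is just back-of-the-envelope arithmetic on the figures above, not a number Xiaomi publishes:

```python
# Pixel density from the quoted panel specs:
# 2712 x 1220 resolution on a 6.67-inch diagonal.
import math

width_px, height_px = 2712, 1220
diagonal_in = 6.67

diagonal_px = math.hypot(width_px, height_px)  # diagonal length in pixels
ppi = diagonal_px / diagonal_in

print(f"{ppi:.0f} ppi")  # ~446 ppi
```

That works out to roughly 446 ppi, comfortably past the point where individual pixels are visible at normal viewing distances.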

The screen has remarkable image quality. Out of the box, it’s set to the usual cooler, more saturated mode, but it can be switched to a standard sRGB mode in the settings, where colors look even more accurate. You can manually adjust the color gamut, RGB color balance, HSV, contrast, and gamma through the advanced settings.

The display manages around 500 nits of brightness under normal conditions. Under strong sunlight, it can boost to 1200 nits across the whole panel, and the advertised 1800 nits is the peak HDR brightness, achievable only on a small portion of the display at a time or for very short periods.

Display settings

The screen can refresh at up to 120Hz but can also drop down to 90Hz and 60Hz depending on the content and application. It can even drop to 30Hz, but only for the always-on display. The operating system lets you choose between 60Hz, 90Hz, and 120Hz on a per-app basis. For example, the YouTube app defaults to 60Hz but can be raised to 120Hz. This does not affect full-screen video playback, which is handled separately based on the content’s frame rate, although the display typically runs at 60Hz there.

The phone supports all major HDR standards including HDR10, HDR10+, Dolby Vision, and HLG. We were able to stream Dolby Vision content locally as well as on the Netflix app.

The Redmi Note 13 Pro Plus 5G has a permanent sharpening and color enhancement filter that applies to all video apps, including YouTube, Netflix, Prime Video, and even the built-in Gallery app. It applies a very strong edge enhancement and saturation filter that looks especially jarring on HDR content. Dolby Vision content appears to be exempt from the color enhancement but still gets oversharpened.

The worst part is that it’s not optional, and there’s not much you can do to disable it. The phone already has a high-resolution display that produces a clear image with good color saturation; the sharpening and color enhancement only degrade the picture and make it look like something you’d find on a cheap TV at your local electronics store.

Another problem is that even though the phone has an always-on display mode, it doesn’t actually stay on. Once triggered, it only shows for ten seconds, after which you have to tap the screen to see it again.

The display has an optical fingerprint scanner which, interestingly, doubles as a heart rate monitor. The option is buried a little deep in the settings, but once you find it, just hold your finger on the sensor and it will read out your heart rate in real time.

Overall, there isn’t much to complain about with the Redmi Note 13 Pro Plus 5G’s display. The high brightness is especially useful outdoors in daylight. The only real problem is the obnoxious image sharpening applied to all video apps; that should never be mandatory on any device.

Battery and charging

Redmi Note 13 Pro Plus 5G has a 5000 mAh battery. We’ll do a full battery test in a full review, but we got good battery life from the phone during day-to-day use.

This phone supports 120W charging using Xiaomi’s exclusive HyperCharge technology. Xiaomi claims that it takes 19 minutes to fully charge the phone.

Using the 120W charger provided in the box, it took about 25 minutes for the phone to go from 1% to a reported 100%. We’ve gotten similar results in the past from other Xiaomi phones with 5,000mAh batteries and 120W charging, and even then the company claimed 19 minutes. We think Xiaomi needs to rethink its advertising claims, as they are consistently optimistic.

Still, 25 minutes is a very short time to fully charge a 5,000mAh battery, and you can get around 70% in just 15 minutes or 30% in just five. This is very impressive, but it does require Xiaomi’s proprietary chargers.
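A rough back-of-the-envelope calculation also shows why the 120W figure is a peak rather than an average. The 3.85 V nominal cell voltage below is our assumption (a typical Li-ion value, not a Xiaomi spec):

```python
# Rough average charging power over the measured 25-minute full charge.
# The 3.85 V nominal cell voltage is an assumption, not a Xiaomi spec.
capacity_ah = 5.0        # 5,000 mAh battery
nominal_v = 3.85         # assumed Li-ion nominal voltage
full_charge_min = 25     # measured 1% -> 100% time

energy_wh = capacity_ah * nominal_v               # 19.25 Wh stored
avg_power_w = energy_wh / (full_charge_min / 60)  # ~46 W average

print(f"{energy_wh} Wh stored, about {avg_power_w:.0f} W average")
```

In other words, the charger only needs to sustain its full 120W for the early part of the charge; the average over the whole session is well under half that.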

Out of curiosity, we also tried a full charge with a 65W USB-PD charger and it took just over an hour, which is still pretty good.

Speakers

The Redmi Note 13 Pro Plus 5G has a set of stereo speakers, one in the earpiece and one at the bottom of the phone. The earpiece speaker has an additional vent at the top of the device.

The phone’s speakers don’t sound very good. The sound is fairly thin, and while we wouldn’t use the word tinny, there isn’t much bass or body to it. The speakers also aren’t particularly loud for outdoor use, although they are adequate indoors. The two speakers are well balanced in tone and volume when playing together, but it was surprising to hear more body and bass from the top speaker than from the bottom one when playing them separately, as it’s usually the other way around on most phones.

This phone supports Dolby Atmos codec and sound processing. When enabled, it improves the spatial characteristics of the speakers, while making them sound better in the lows and highs, but only marginally.

The phone supports analog audio output via the USB-C port, so you don’t need to invest in aftermarket USB DAC devices for wired audio.

Fortunately, you can disable Atmos and other audio processing when using wired or wireless headphones.

Software

Redmi Note 13 Pro Plus 5G comes with MIUI 14 on top of Android 13. This is disappointing because Xiaomi has already announced and started rolling out its next-generation HyperOS running on top of Android 14 to other devices.

The company uses the oldest trick in the book here: launching a phone with an older version of Android lets it commit to fewer updates down the line. Xiaomi has promised three major Android updates for the device, which means it will stop at Android 16; had it shipped with Android 14, it would have reached Android 17. A HyperOS update for this device is expected to arrive at an unspecified time.

The Redmi Note 13 Pro Plus 5G, along with its siblings in the Redmi Note 13 series, will be among the last Xiaomi devices to launch with MIUI, so this is more of an exit interview than a review. Then again, HyperOS is not dissimilar in appearance to MIUI, so much of what is said here will still apply after the next major update.

MIUI 14

MIUI has always been a bit of a chore. Modeled largely after iOS, the design bears more than a passing resemblance to it, and while many rivals do the same, Xiaomi has only doubled down over the years. The questionable borrowed ideas include separate notification and control centers, an overly nested Settings app, an objectively worse share sheet, and forcing every app icon to be square by default.

What MIUI has always had over iOS, and even over most Android skins, is an incredible level of customization. So many aspects of the user experience can be tweaked to your satisfaction that there’s rarely room for complaint, because there’s almost always a way to change whatever bugs you. That makes something like the forced video sharpening especially frustrating: it’s not just annoying to the user, it’s also against the spirit of the operating system. The design of the Settings app adds to the frustration, as there is an incredible number of options in there, but they are often buried several levels deep (much like on iOS), where most users never dare to venture.

Bloatware and notification spam

As customizable as the operating system is, one perennial complaint is the large number of unnecessary apps and services that come pre-installed, many of which cannot be removed or disabled. Is there a good reason for a phone to have two SMS apps, two file managers, two app stores, two gallery apps, and two browsers? Not really. Redmi phones are especially egregious in the amount of software they ship with. We counted 18 non-native apps, many of them games, pre-installed in addition to the Xiaomi and Google apps. Xiaomi seems to have forgotten how much it’s retailing its phones for these days; we don’t expect or want to see this level of garbage on expensive devices.

Note that this is separate from the ads that still plague many first-party apps on Redmi devices in markets like India, and you’ll have to work to disable them in each of those apps. Again, not what you’d expect on a premium device.

Then there are the endless notifications. The best advice you can give a new Xiaomi smartphone user is to immediately go into the settings and disable notifications for the first-party apps (or at least the ones that allow it); otherwise, they will never stop spamming you. The App Vault feature, in particular, constantly pushes information that you neither subscribed to nor care about.

The software remains a controversial aspect of Xiaomi phones. It’s like a supercharged version of iOS with the best and worst aspects of Android mixed in.

Performance

The Redmi Note 13 Pro Plus 5G runs on the MediaTek Dimensity 7200-Ultra, which appears to be a custom variant of the Dimensity 7200 made for this device. However, there seems to be no difference on paper when it comes to core configuration and clock speeds. The customizations may be largely limited to the ISP (image signal processor) and how it interacts with the camera.

The phone has up to 12GB of LPDDR5 RAM and up to 512GB of UFS 3.1 storage. Our review unit has 12GB of RAM and 256GB of storage. There is no support for storage expansion.

The performance of the Redmi Note 13 Pro Plus 5G was genuinely impressive. The phone is very responsive and scrolling is very smooth. This is a hurdle that many phones fail to clear even when they claim to be premium and have fancy hardware, but Xiaomi has nailed this aspect of the user experience. It’s a snappy phone and a pleasure to use for everyday tasks.

Unfortunately, we were unable to run some of our usual benchmarks because our review unit was preventing them from connecting to the internet, which in turn stopped them from downloading the additional data they need to run. We had to limit our testing to the benchmarks that did work. We wish manufacturers would stop inventing new and creative ways to interfere with the testing process and let the device stand on its own. If the remaining benchmarks start working, we may update the results.

In the benchmarks we were able to run, we can see that the Redmi Note 13 Pro Plus 5G isn’t very competitive, losing by a wide margin to older devices like the OnePlus 11R and Nord 3. But this is one case where benchmarks don’t tell the whole story, and as mentioned earlier, this phone performs surprisingly well in everyday use.

Camera

The Redmi Note 13 Pro Plus 5G has three cameras on the back, with the main camera being one of the key features of this device. We have a 200MP Samsung ISOCELL HP3 sensor here, paired with a 23mm-equivalent 7P lens, an f/1.65 aperture, and OIS. By default, this sensor works in 16-to-1 binning mode and produces 12.5-megapixel images. However, it can also capture native 200MP images and use center cropping to produce lossless 2x and 4x images.

The other two cameras include an 8-megapixel ultra-wide camera and a 2-megapixel macro camera.

The camera app is well laid out, with all the main modes at the bottom; additional options can be brought up by swiping down on the screen. In terms of features, you get the usual suspects like night mode, portrait mode, and a pro mode. Unfortunately, there is no RAW capture in pro mode, which severely limits its usefulness.

Camera app - Redmi Note 13 Pro Plus hands-on review Camera app - Redmi Note 13 Pro Plus hands-on review Camera app - Redmi Note 13 Pro Plus hands-on review Camera app - Redmi Note 13 Pro Plus hands-on review Camera app - Redmi Note 13 Pro Plus hands-on review Camera app - Redmi Note 13 Pro Plus hands-on review
Camera app

Now for image quality. The main camera’s output was let down by its color reproduction. All images have a magenta bias that can be quite noticeable in some cases. Colors also lack warmth, which affects reds the most, and darker shades like red soil often appear brown in some photos. The magenta bias is particularly damaging to skin tones, with darker tones often appearing pink instead of brown.

12.5-megapixel main camera

Our review unit also showed significant chromatic aberration in several shots, resulting in noticeable purple fringing around high-contrast edges, such as tree leaves in front of the sky. This is usually a lens artifact that can be suppressed via software, but it didn’t seem to happen in our unit.

The colors are all the more disappointing because the level of detail and dynamic range in the images are otherwise satisfactory, although, as with Samsung phones, there is a tendency to overexpose by about a stop. If Xiaomi could fix the colors and exposure, this camera could be very capable, but at the moment it is not.

To make this point clearer, here is a comparison with the OnePlus 11R. Shadow detail isn’t impressive on the OnePlus, but its color rendering is much better and more accurate. To our eyes, the actual scenes looked much like the OnePlus images, which shows just how far the Redmi deviates in comparison.

Main camera: Redmi Note 13 Pro+ 5G • OnePlus 11R

Both phones also resolve a similar level of detail, despite the Redmi having four times the pixel count. That’s because the sensor on the Redmi Note 13 Pro+ 5G operates in 16-to-1 binning mode, meaning it effectively captures 12.5MP of data rather than condensing 200 megapixels into a 12.5-megapixel image.

But the 200-megapixel sensor does have one advantage, and that’s zooming. The 2x zoom uses a center crop in 4-to-1 binning mode to produce a 12.5MP image that is sharp and on par with any native 2x camera. In fact, it can look better, because you still get the benefits of the superior optics and wider aperture that native telephoto cameras lack.

2x lossless digital zoom

4x images are cropped even further and captured in 1-to-1 mode, so they are not as sharp, but they are still fully usable.
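The arithmetic behind these zoom modes is straightforward. This sketch just restates the crop and binning factors described above and shows that every mode lands on the same 12.5MP output:

```python
# Output resolution of the 200MP sensor at each zoom level.
# Each mode is (crop fraction of the sensor area, binning factor),
# as described in the text above.
sensor_mp = 200

modes = {
    "1x": (1.0, 16),    # full sensor, 16-to-1 binning
    "2x": (1 / 4, 4),   # center crop, 4-to-1 binning
    "4x": (1 / 16, 1),  # tighter center crop, no binning
}

for zoom, (crop, binning) in modes.items():
    out_mp = sensor_mp * crop / binning
    print(f"{zoom}: {out_mp} MP")  # 12.5 MP in every case
```

This is why the 4x images can't be quite as sharp: at that point every output pixel comes from a single, tiny photosite, with no binning left to average out noise.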

4x lossless digital zoom

Native 200MP images have plenty of detail, but file sizes can range from 8x to 10x the size of 12.5MP files. These images also don’t benefit from the same level of processing as 12.5MP images, so the dynamic range is worse and problems with chromatic aberration are more prominent.

200-megapixel main camera

What would make more sense is a 50MP mode with 4-to-1 binning. Those images could still retain plenty of detail while having more manageable file sizes and better image processing.

Moving on, the 8-megapixel ultra-wide camera is nothing special. The resolution is insufficient for such a wide field of view, so the level of detail is understandably low. The images have good dynamic range, and the color rendition mostly matches the main camera. This camera also shows purple fringing, so it appears to be a systemic software problem rather than an issue with one specific camera.

Ultra-wide 8-megapixel camera

The 2MP macro camera is less problematic than we’ve seen before, but it’s still pretty awful.

2-megapixel macro camera

The worst part is that the macro camera is completely redundant on this phone, as you can get much better results by simply stepping back and using 2x or even 4x zoom on the main camera. You get more clarity and detail, better colors and dynamic range, and much less distortion, with a more pleasing depth of field. We’ve said it before: macro cameras are just there to pad the spec sheet and make it look like you’re getting more for your money, when it’s really just a scam.

2-megapixel macro camera • 4x lossless zoom

Moving on to video, the Redmi Note 13 Pro Plus 5G can record 4K video at up to 30fps. This is a limitation of the chipset itself as the sensor can handle much higher resolution and frame rate combinations.

4K footage looks fairly pleasing, avoiding the excessive sharpening we typically see on smartphones while maintaining the level of detail expected from 4K video. Unfortunately, it still suffers from the same magenta bias and color fringing as the still images and can look really bad on rare occasions.

Stills from 4K video

The combination of OIS and EIS works well enough for handheld shots, but if you walk or run while recording, there’s noticeable shake, and it isn’t suppressed as well as on more expensive devices.

The ultra-wide camera can only record at up to 1080p, and its videos are noticeably soft, with shadows dimmed to hide noise. Still, they are usable under the right conditions.

Summary

The Redmi Note 13 Pro Plus 5G starts at INR 31,999 in India, the highest price yet for a Redmi series device. For the money, you get remarkable design and build quality, the best ever in a Redmi phone. The display quality is also excellent, and we were very satisfied with the device’s performance in everyday tasks. The 120W fast charging is a fantastic feature, and we appreciate the inclusion of a 120W charger in a world where bundled chargers are quickly disappearing.

However, we weren’t too happy with the camera. The lossless zoom enabled by the massive 200MP sensor is really neat, but noticeable color issues plague all of the device’s cameras. The good news is that this looks fixable in software, and we hope the company addresses it based on the feedback provided here.

Unfortunately, we don’t see much changing in the software department, which remains one of the weakest aspects of Redmi devices. Despite the ever-increasing prices, the phone is still loaded with bloatware, ads, and apps that spam notifications.

Why should you buy the Redmi Note 13 Pro Plus 5G?

  • Good design and IP68-rated durability
  • Excellent display quality
  • Fast and smooth operation
  • Very fast charging speed

Why should you avoid the Redmi Note 13 Pro Plus 5G?

  • The software is full of bloatware, duplicate apps, and annoying notifications
  • The forced sharpening and saturation filters are obnoxious in video apps
  • The camera has white balance and color fringing issues

Source: GSMARENA.COM

iPhone 16 Pro Review

The iPhone 16 Pro is one of the least changed iPhones of the last few years, and at the same time, it offers the same reliable experience as before.

We usually know Apple as a company that refuses to release half-baked products or software features, preferring either not to enter a new field at all or to enter it with a product that offers a reliable, polished experience. By that standard, the iPhone 16 Pro is the most unfinished product in Apple’s history; let me explain.

Table of contents
  • iPhone 16 Pro video review
  • Camera and Camera Control
  • Ultrawide camera
  • Main camera
  • Telephoto camera
  • Portrait photography
  • Selfie camera
  • Performance and battery
  • Design and build quality
  • Display and speaker
  • Summary and comparison with competitors

Apple is marketing the iPhone 16 Pro with a focus on Apple Intelligence and its artificial intelligence capabilities. But to experience even a partial version of Apple’s AI, you have to wait for the official release of iOS 18.1 in late October, more than a month after the iPhone 16’s launch. There is not even any news of the new Siri’s attractive animation, the very animation that inspired Apple to name the iPhone 16 event “It’s Glowtime.”

For those who have been out of the loop on the technology world since the early months of 2024, Apple Intelligence is Apple’s answer to Google’s Gemini, Samsung’s Galaxy AI, and even Microsoft’s Copilot. With Apple Intelligence, Siri is finally supposed to become what was promised 13 years ago at its unveiling: a full-fledged digital assistant that converses with the user in natural language. Beyond the upgraded Siri, capabilities such as AI-generated images and emoji, writing tools, and photo editing tools will also be added to iOS.

Note that we’ll have to wait for iOS 18.4 to experience Apple Intelligence with all of its features; that update is due in the early months of 2025. The iPhone 16 ships with iOS 18 by default, so Apple is entering the field well behind its competitors, and the iPhone 16 Pro launches as an incomplete device.

Camera and Camera Control

Now that Apple Intelligence is out of the picture (per Zoomit’s policy, we don’t review a device based on the promise of future updates), let’s leave AI out of the iPhone 16 Pro review headlines and start with the part that has changed the most: the camera, or rather, the camera button.

While it has been said that Apple is working on removing the iPhone’s physical buttons, this year, surprisingly, another button was added to the iPhone 16 family, although Apple insists on calling it Camera Control. Unfortunately, Camera Control is crude and incomplete both in its implementation and its capabilities; let me explain.

As usual with Apple, Camera Control hides complex engineering behind a simple appearance. Its surface is made of sapphire and is surrounded by a stainless steel ring color-matched to the body. Beneath the surface sit a precise force sensor with haptic feedback and a touch sensor, allowing Camera Control to simulate the shutter button of a DSLR and to recognize finger swipes across its surface.

Apple says that by the end of this year, a software update will add a feature to Camera Control that lets the user focus on the subject with a half-press and take the photo with a full press, just like professional cameras and Xperia phones. And once Apple Intelligence arrives, Camera Control will also give access to Siri’s image search function.

Camera Control: an interesting idea, but very immature

Currently, Camera Control can take photos, record video, and change camera parameters. Pressing the button once launches the camera app; pressing it again takes a photo; holding it starts a video, which stops as soon as you lift your finger.

In the camera app, a gentle double half-press without lifting your finger brings up the photography parameters; you switch between options by swiping on the button surface and enter a parameter's settings with another gentle press. Available parameters include exposure, depth of field, zoom, camera switching, Style, and Tone; more on the last two below.

Camera control in the camera viewfinder
Control camera settings
Control camera settings 2

To be honest, for me and many of my colleagues at Zoomit, touching the screen to navigate the camera menus was far easier and more straightforward than using Camera Control. Even after 10 days with the iPhone 16 Pro, reaching the photography parameters and swiping to adjust them remains difficult and time-consuming; for example, it often happens that while swiping to adjust a parameter such as Tone, the phone decides to exit the Tone settings and move between parameters instead.

One problem with Camera Control stems from the stiffness of the button: pressing it shakes the phone, an issue that can end up blurring the details of photos in the dark.

Beyond the stiffness, the placement of Camera Control is not optimal either, in my opinion. When using the phone in portrait orientation, especially with the Pro Max model, you are likely to struggle and need both hands; and if you hold the phone in your left hand, your fingers may sometimes press the button and disrupt what you were doing.

Applying changes to the color and lighting of iPhone 16 Pro photos

If Apple fixes Camera Control's problems and bugs, it could be useful in two cases: first, zooming, where it allows more precise control over the zoom level; and second, faster access to Apple's new Style and Tone camera settings, which are genuinely useful for photography enthusiasts. Here is why.

iPhones have long had their own photographic look: colors close to reality with a slight tendency toward warmth, and no trace of oversaturated, high-contrast processing. Apple did, however, introduce Photographic Styles with the iPhone 13 to satisfy fans of contrast-heavy, Google Pixel-style photography.

Battle of the flagships: iPhone 16 Pro camera vs. Pixel 9 Pro and Galaxy S24 Ultra [survey results]

iPhone 16 Pro, Pixel 9 Pro XL, or Galaxy S24 Ultra: which phone has the best camera? The result may surprise you.

With the iPhone 15, Apple adopted a policy the public did not much like: in short, to use the full capacity of the powerful Photonic Engine to preserve detail in shadows and highlights, the iPhone pushes HDR a little too far, to the point where colors and shadows lose their punch and their old dramatic feel.

The bad news is that the iPhone 16 Pro follows that same policy and renders shadows weakly; the good news is that the evolved Photographic Styles can now breathe life back into shadows and colors. With the new version you can change how skin tones and shadows are processed, and even switch styles after a photo has been taken.

Discover your photography style with the iPhone 16 Pro

Before we see the effect of Photographic Styles on photos, let's go over their modes. The styles are now divided into two general categories, Undertone and Mood: apart from the standard mode, 5 Undertone styles and 9 Mood styles are available. Undertone styles mainly adjust the skin tone of human subjects, while Mood styles work more like Instagram filters.

Undertone styles are as follows:

  • Standard: iPhone’s default photography mode
  • Amber: Intensifies the amber tone in photos
  • Gold: Intensifies the golden tone in photos
  • Rose Gold: Intensifies the pink-gold tone in photos
  • Neutral: Neutralizes warm undertones in photos
  • Cool Rose: Intensifies cool-toned color in photos
Kausar Nikomanesh, Zomit writer in the editorial office - Standard iPhone 16 Pro photography style
Undertone: Standard
Kausar Nikomanesh, Zomit writer in the editorial office - Amber iPhone 16 Pro photography style
Undertone: Amber
Kausar Nikomanesh, Zomit writer in the editorial office - Gold iPhone 16 Pro photography style
Undertone: Gold
Kausar Nikomanesh, Zomit writer in the editorial office - Rose Gold iPhone 16 Pro photography style
Undertone: Rose Gold
Kausar Nikomanesh, Zomit writer in the editorial office - Neutral iPhone 16 Pro photography style
Undertone: Neutral
Kausar Nikomanesh, Zomit writer in the editorial office - Cool Rose iPhone 16 Pro photography style
Undertone: Cool Rose

Mood styles are as follows:

  • Vibrant
  • Natural
  • Luminous
  • Dramatic
  • Quiet
  • Cozy
  • Ethereal
  • Muted B&W
  • Stark B&W
Kausar Nikomanesh, Zomit writer in the editorial office - Vibrant iPhone 16 Pro photography style
Mood: Vibrant
Kausar Nikomanesh, Zomit writer in the editorial office - Natural iPhone 16 Pro photography style
Mood: Natural
Kausar Nikomanesh, Zomit writer in the editorial office - Luminous iPhone 16 Pro photography style
Mood: Luminous
Kausar Nikomanesh, Zomit writer in the editorial office - Dramatic iPhone 16 Pro photography style
Mood: Dramatic
Kausar Nikomanesh, Zomit writer in the editorial office - Quiet iPhone 16 Pro photography style
Mood: Quiet
Kausar Nikomanesh, Zomit writer in the editorial office - Cozy iPhone 16 Pro photography style
Mood: Cozy
Kausar Nikomanesh, Zomit writer in the editorial office - Ethereal iPhone 16 Pro photography style
Mood: Ethereal
Kausar Nikomanesh, Zomit writer in the editorial office - Muted B&W iPhone 16 Pro photography style
Mood: Muted B&W
Kausar Nikomanesh, Zomit writer in the editorial office - Stark B&W iPhone 16 Pro photography style
Mood: Stark B&W

All styles can be customized with three new parameters: Palette, Color, and Tone. Palette changes the range of applied colors, Color adjusts saturation intensity, and, most importantly, Tone changes the intensity of shadows and contrast, bringing freshness back to iPhone photos.

While Palette is adjusted with a simple slider, Color and Tone share a two-axis control pad. Working with this pad is tedious: to change either parameter you have to put your finger on the pad, and since you have no feel for its exact position, it is hard to change one parameter while keeping the other constant.

The iPhone 16 Pro photography experience is slightly different from the previous generation

If, like me, you don't feel like fiddling with the control pad and slider, you can jump straight to the styles or the Tone parameter with the Camera Control button; and believe me, you can make iPhone photos more attractive simply by changing the Tone. For example, compare the following two photos:

Standard mode with Tone -7
Standard mode with Tone 0 (default)

As you can see in the photos above, without changing styles and simply by reducing the Tone, the shadows have returned to the photo, and the black of Mohammad Hossein's t-shirt reads as properly black thanks to the improved contrast.

Ultrawide camera

Photography styles aside, the iPhone 16 Pro's camera hardware has seen several changes, the most important being the upgrade of the ultrawide camera sensor from 12 to 48 megapixels. The new sensor uses a Quad-Bayer filter and 0.7-micrometer pixels, so its physical size appears unchanged from the previous generation's 1/2.55-inch unit with 1.4-micrometer pixels.

| Camera | Sensor | Lens | Capabilities |
|---|---|---|---|
| Wide (main) | 48 MP Sony IMX903, 1/1.28", 1.22 µm pixels, phase-detection autofocus, sensor-shift OIS | 24 mm, f/1.78 | 12, 24, and 48 MP photos; 4K120 video; Dolby Vision, ProRes, and Log; portrait photography |
| Telephoto | 12 MP Sony IMX913, 1/3.06", 1.12 µm pixels, Dual Pixel phase-detection autofocus, sensor-shift OIS | 120 mm, f/2.8 | 5x optical zoom; 12 MP photos; 4K60 video; Dolby Vision, ProRes, and Log; portrait photography |
| Ultrawide | 48 MP, 1/2.55", 0.7 µm pixels, phase-detection autofocus | 13 mm, f/2.2 | 12 and 48 MP photos; 4K60 video; Dolby Vision, ProRes, and Log; macro photography |
| Selfie | 12 MP Sony IMX714, 1/3.6", 1.0 µm pixels, phase-detection autofocus | 23 mm, f/1.9 | 12 MP photos; 4K60 video; Dolby Vision, ProRes, and Log |

To gather enough light per pixel, the ultrawide camera shoots 12 MP photos by default, binning pixels 4:1 for an effective pixel size of 1.4 micrometers; but in the HEIF Max format it can shoot at the full 48 megapixels, giving the user more freedom to crop into photos.
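As a rough illustration of the idea (not Apple's actual pipeline, which works on Quad-Bayer color data), 4:1 binning can be sketched as averaging each 2×2 block of the raw readout: resolution drops by a factor of four, while each output pixel collects the light of four physical pixels.

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block: 4:1 binning, quartering the resolution."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "sensor"
binned = bin_2x2(raw)
print(binned.shape)  # (2, 2) -- e.g. 48 MP -> 12 MP on the real sensor
# Binning 0.7 um pixels 2x2 gives an effective 1.4 um pixel pitch,
# matching the figures quoted above.
print(0.7 * 2)  # 1.4
```

This is why the default 12 MP photos look cleaner in low light while the 48 MP HEIF Max mode trades some softness for croppable detail.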

A building with a stone facade and a yard full of trees - iPhone 16 Pro ultrawide camera - 48 megapixel photo
48-megapixel ultrawide photo – iPhone 16 Pro
A building with a stone facade and a yard full of trees - iPhone 16 Pro ultrawide camera - 12 megapixel photo
12-megapixel ultrawide photo – iPhone 16 Pro
A building with a stone facade and a yard full of trees - iPhone 16 ultrawide camera
12-megapixel ultrawide photo – iPhone 16
Cutting ultrawide photo of iPhone 16 and 16 Pro - air conditioner in the terrace of the apartment
Crop ultrawide camera photos

As the images above show, the iPhone's 48-megapixel ultrawide photo is somewhat more detailed in places, but overall softer than the 12-megapixel shot. We also photographed the same subject with the iPhone 16; there is no noticeable difference between the two phones' 12-megapixel photos.

View of the buildings around Zomit office on Pakistan Street - iPhone 16 Pro Ultra Wide Camera in the dark
Ultrawide iPhone 16 Pro camera with 1/25 second exposure
View of the buildings around the Zomit office on Pakistan Street - iPhone 16 ultrawide camera in the dark
iPhone 16 ultrawide camera with 1/10 second exposure
Cropping ultrawide photos of the iPhone 16 and 16 Pro in the dark
Crop of ultrawide camera photos in the dark

In dark environments, the iPhone 16 Pro resorts to Night mode and long exposures far less than the iPhone 16, so its ultrawide night photos are sometimes less detailed. In the photos above, the iPhone 16 exposed for one-tenth of a second, while the iPhone 16 Pro's exposure was 60% shorter, at one twenty-fifth of a second; so it is not surprising that the cheaper iPhone's photo is more attractive!
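A quick sanity check of the exposure figures quoted above: 1/25 of a second is indeed a 60% shorter exposure than 1/10 of a second.

```python
# Exposure times from the ultrawide night shots above.
iphone16_exposure = 1 / 10   # seconds (iPhone 16)
pro_exposure = 1 / 25        # seconds (iPhone 16 Pro)

reduction = 1 - pro_exposure / iphone16_exposure
print(f"{reduction:.0%}")  # 60%
```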

iPhone 16 Pro ultrawide camera photo gallery

Children's playground with slide - iPhone 16 Pro ultra-wide camera
A tree from below and in front of sunlight - iPhone 16 Pro ultrawide camera
Tehran Book Garden - iPhone 16 Pro ultrawide camera
Super wide view of Tehran food garden - iPhone 16 Pro Ultra Wide Camera
Zoomit office terrace - iPhone 16 Pro ultrawide camera
Tehran Food Garden - iPhone 16 Pro ultrawide camera
Stone facade of a building - iPhone 16 Pro ultra-wide camera
The buildings of Pakistan Street in Tehran in the dark of the night - iPhone 16 Pro ultrawide camera
Zoomit studio in the dark - iPhone 16 Pro ultrawide camera
View of the buildings of Pakistan Street in Tehran at night - Ultrawide camera of iPhone 16 Pro
Sunflower flower close-up - iPhone 16 Pro ultrawide camera
Close-up of a yellow flower - macro photo of the iPhone 16 Pro ultrawide camera

The iPhone 16 Pro's ultrawide camera generally takes attractive photos, but it is perhaps not on par with competitors. The gap with the best on the market is most noticeable in the dark, where the camera is not so impressive and records relatively soft photos. To see how it fares against rivals, I suggest reading our comprehensive comparison of the 2024 flagship cameras.

Main camera

On paper, the iPhone 16 Pro's main 48-megapixel camera is no different from the previous generation in sensor size, pixel pitch, or lens specifications. But Apple now calls this camera Fusion and claims the sensor itself has become faster; thanks to a new architecture called Apple Camera Interface, image data moves from the sensor to the chip at a higher rate, which is what enables 4K120 Dolby Vision recording on the main camera.

Record stunning videos with 120 frames per second video recording

HDR filming at 120 frames per second and 4K resolution requires very heavy processing, because implementing the HDR effect means comparing and merging multiple 4K frames with different exposures for every moment of footage. If you have an external SSD and a high-speed USB 3 cable, you can also save 4K120 video in the professional ProRes and Log formats, which give you more freedom when editing and color-grading.
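To get a feel for why this is heavy, here is a back-of-the-envelope pixel-throughput figure, assuming "4K" means 3840×2160 UHD (the real sensor readout is higher still, since each output frame merges multiple exposures):

```python
# Raw output pixel throughput of 4K120 capture.
width, height, fps = 3840, 2160, 120
pixels_per_second = width * height * fps
print(pixels_per_second / 1e6)  # ~995 megapixels per second, before HDR merging
```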

4K120 video sample 1

Watch on YouTube

4K120 video sample 2

Watch on YouTube

The iPhone 16 Pro's 4K120 videos are very attractive and detailed and deliver a wonderful visual experience. Since none of them uploaded properly to our platform, please use the YouTube links to watch them.

Thanks to the faster sensor and Apple's new interface, 48-megapixel HEIF Max photos are captured almost without pause, at a rate of about 4 frames per second. As in the previous generation, the iPhone combines multiple 12- and 48-megapixel frames and shoots at 24-megapixel resolution by default, for a balanced mix of contrast, color, and detail; 12-megapixel capture is of course still available alongside 48-megapixel HEIF Max.

Zomit office terrace - 48 megapixel photo of iPhone 16 Pro main camera
48-megapixel photo of the main camera
Zomit office terrace - 24 megapixel photo of iPhone 16 Pro main camera
24-megapixel photo of the main camera
Zomit office terrace - 12 megapixel photo of iPhone 16 Pro main camera
12-megapixel photo of the main camera
Crop of the 48, 24, and 12-megapixel photos of the iPhone 16 Pro's main camera

As the photos above show, the 48-megapixel mode improves detail somewhat at the cost of an overall softer image and gives you more freedom to crop, but its contrast and color saturation fall below the 24- and 12-megapixel modes. The 24 MP photos strike a good balance of detail, color, and contrast.

Mohammad Hossein Moaidfar, the author of Zoomit - iPhone 16 Pro's main camera
iPhone 16 Pro main camera
Mohammad Hossein Moaidfar, the author of Zoomit - iPhone 16 main camera
iPhone 16 main camera
Cropping the photo of the main camera of the iPhone 16 and 16 Pro

In the photos above, the iPhone 16 Pro's main camera has captured slightly more detail than the iPhone 16; but as you can see, its photo has lower contrast, its colors run warmer, and the black of Mohammad Hossein's t-shirt does not look black enough.

iPhone 16 Pro main camera photo gallery

Children's playground - iPhone 16 Pro main camera
Candy with fruit decoration - iPhone 16 Pro main camera
An artificial lake around Tehran's book garden - iPhone 16 Pro main camera
Tehran book garden plants - iPhone 16 Pro main camera
Exterior view of Tehran Book Garden - main camera of iPhone 16 Pro
Two young people in the book garden of Tehran - iPhone 16 Pro main camera
Humvee military vehicle in Tehran's book garden - iPhone 16 Pro main camera
A view from inside the Tehran Book Garden - iPhone 16 Pro's main camera
A cat in the middle of the bushes - iPhone 16 Pro main camera in the dark
Orange motorcycle - main iPhone 16 Pro camera in the dark
Room ceiling lights - iPhone 16 Pro main camera with 2x zoom
Iranian Islamic view of Tehran mosque - iPhone 16 Pro main camera in the dark with 2x zoom
Brick facade of a building around Madras highway - iPhone 16 Pro main camera
Sugar goat statue in Tehran's book garden - iPhone 16 Pro main camera with 2x zoom
The statue of Zoro and Sergeant Garcia in Tehran's book garden - iPhone 16 Pro main camera with 2x zoom
The light in the garden - the main camera of the iPhone 16 Pro in the dark

Photos from the iPhone 16 Pro's main camera feel much like the iPhone 15 Pro's: full of detail, with relatively natural colors that lean slightly warm. The iPhone avoids aggressive artificial noise reduction, so even in the dark it pulls fine, intricate detail out of subjects. The large sensor also lets the iPhone produce high-quality 2x photos by cropping a 12-megapixel frame from the middle of the main camera's full-sensor image.

Telephoto camera

Besides the renewed ultrawide camera, the other big change is the arrival of the 5x telephoto camera on the iPhone 16 Pro; last year it was exclusive to the iPhone 15 Pro Max. The telephoto uses the same 12-megapixel sensor as the previous generation and offers digital zoom up to 25x.

iPhone 16 Pro telephoto camera photo gallery

World War era motorcycle - iPhone 16 Pro telephoto camera
Single car in Tehran Book Garden - iPhone 16 Pro telephoto camera
iPhone 16 Pro telephoto camera - 2
Street lights in front of a tall glass building - iPhone 16 Pro telephoto camera
The Little Prince in Tehran's Book Garden - iPhone 16 Pro telephoto camera
iPhone 16 Pro telephoto camera
Locust airplane replica in Tehran book garden - iPhone 16 Pro telephoto camera
Mural on Madras highway - iPhone 16 Pro telephoto camera
Yellow motorcycle in the dark - iPhone 16 Pro telephoto camera
Building under construction near Madras highway - iPhone 16 Pro telephoto camera
2 Star Cafe on Pakistan Street - iPhone 16 Pro telephoto camera
Tehran Mosli minaret in the dark - iPhone 16 Pro telephoto camera

The iPhone 16 Pro's telephoto camera records high-quality 5x photos; its detail and colors closely match the main camera's character. It also holds up well in low light and takes good photos in the dark; but as we noted in the comprehensive 2024 flagship camera comparison, competitors do better in this area.

The main iPhone 16 Pro camera - the first example
1x photo
2x photo of iPhone 16 Pro
2x photo
3x photo of iPhone 16 Pro
3x photo
iPhone 16 Pro 5x photo
5x photo
10x photo of iPhone 16 Pro
10x photo
25x iPhone 16 Pro photo
25x photo

The combination of the iPhone 16 Pro's 48-megapixel main camera and its 5x telephoto allows relatively high-quality zoomed photos across the 1-10x range; apart from the 5x optical zoom, the iPhone is quite satisfactory at the 2x and 10x levels.

Portrait photography

The iPhone 16 Pro relies on the main and telephoto cameras for portrait photography and uses the ToF sensor to accurately separate the subject from the background. 1x and 2x portrait photos are recorded with the main camera and 5x portrait photos are also recorded with the telephoto camera.

Kausar Nikomanesh, author of Zoomit - 1x portrait photo of iPhone 16 Pro
1x portrait photo
Kausar Nikomanesh, the author of Zoomit - 2x portrait photo of iPhone 16 Pro
2x portrait photo
Kausar Nikomanesh, the author of Zoomit - 5x portrait photo of iPhone 16 Pro
5x portrait photo
1x portrait photo of iPhone 16 Pro
1x portrait photo
Mohammad Hossein Moaidfar, the author of Zoomit - 2x photo of iPhone 16 Pro
2x photo with natural bokeh
5x portrait photo of iPhone 16 Pro
5x portrait photo

The iPhone left weak portrait photography behind years ago, and the iPhone 16 Pro continues that trend. Portrait photos are detailed, and the bokeh falls off gradually, much like a professional camera. As we saw in the 2024 flagship camera comparison, the iPhone beats even tough competitors like the Pixel 9 Pro and S24 Ultra at portrait photography.

selfie camera

The selfie camera of the iPhone 16 Pro is no different from the previous generation, and it still captures eye-catching photos with many details and true-to-life colors.

Mohammad Hossein Moaidfar and Hadi Ghanizadegan from Zomit - iPhone 16 Pro selfie camera
Mohammad Hossein Moidfar and Hadi Ghanizadegan from Zomit - iPhone 16 Pro selfie camera with bokeh effect

The iPhone 16 Pro can record 4K60 video with the Dolby Vision HDR standard on all of its cameras; 24 and 30 fps options are also available. Videos are recorded with the H.265 codec by default, but you can switch to the more common H.264.

We shot at 30 and 60 fps with the H.265 codec, and the iPhone 16 Pro recorded very detailed video in both modes, with vivid colors, high contrast, and decent exposure control. To see how its video stacks up against other flagships, don't miss the iPhone 16 Pro vs. Pixel 9 Pro and Galaxy S24 Ultra camera comparison article.

Performance and battery

The next big change in the iPhone 16 Pro is its chip. The A18 Pro pairs the familiar CPU layout of 2 high-performance cores and 4 efficiency cores with a 6-core GPU and a 16-core neural processing unit. The new chip is fabricated on TSMC's improved 3 nm process, called N3E.

Technical specifications of the A18 Pro chip compared to the previous generation

| Specification | A17 Pro | A18 | A18 Pro |
|---|---|---|---|
| CPU | 2 performance cores @ 3.78 GHz, 16 MB cache; 4 efficiency cores @ 2.11 GHz, 4 MB cache; 24 MB system cache | 2 performance cores @ 4.04 GHz, 8 MB cache; 4 efficiency cores @ 2.0 GHz, 4 MB cache; 12 MB system cache | 2 performance cores @ 4.04 GHz, 16 MB cache; 4 efficiency cores @ 2.2 GHz, 4 MB cache; 24 MB system cache |
| Instruction set | ARMv8.6-A | ARMv9.2-A | ARMv9.2-A |
| GPU | 6 cores, 1398 MHz, 768 shading units, ray tracing | 5 cores, 1398 MHz, 640 shading units, ray tracing | 6 cores, 1450 MHz, 768 shading units, ray tracing |
| Memory controller | 4× 16-bit channels, 3200 MHz LPDDR5X, 51.2 GB/s bandwidth | 4× 16-bit channels, 3750 MHz LPDDR5X, 58.6 GB/s bandwidth | 4× 16-bit channels, 3750 MHz LPDDR5X, 58.6 GB/s bandwidth |
| Video record/playback | 4K60 10-bit H.265 | 8K24 / 4K120 10-bit H.265 | 8K24 / 4K120 10-bit H.265 |
| Wireless | Bluetooth 5.3 and Wi-Fi 7 | Bluetooth 5.3 and Wi-Fi 7 | Bluetooth 5.3 and Wi-Fi 7 |
| Modem | X70: 7,500 Mbps download, 3,500 Mbps upload | X75: 10,000 Mbps download, 3,500 Mbps upload | X75: 10,000 Mbps download, 3,500 Mbps upload |
| Manufacturing process | TSMC 3 nm | TSMC 3 nm (enhanced N3E) | TSMC 3 nm (enhanced N3E) |

Apple says the CPU uses new cores that deliver 15% faster performance than the A17 Pro, or match its performance at 20% lower power consumption. Apple also claims the A18 Pro carries more cache memory than the A18 chip.

The A18 Pro's single-core performance is faster even than desktop processors that draw over 100 watts.

According to Apple, the A18 Pro's 6-core GPU is 20% faster than the previous generation, and its ray-tracing accelerator is twice as fast as before.

Playing mobile games on iPhone 16 Pro

The A18 Pro's 16-core neural processing unit, like the previous generation's, can perform 35 trillion operations per second; but thanks to 17% more bandwidth between the RAM and the chip, the new NPU does better than before in real-world applications. The A18 Pro is paired with 8 GB of LPDDR5X-7500 RAM through a high-speed memory controller.
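The bandwidth figures in the spec table can be reproduced from the channel layout: LPDDR5X-7500 means 7,500 megatransfers per second, and four 16-bit channels form a 64-bit (8-byte) bus. That works out to 60 GB/s in decimal units; the table's 58.6 GB/s figure appears to come from dividing 60,000 MB/s by 1024, a mixed decimal/binary convention.

```python
# Memory bandwidth of the A18 Pro's controller, from the table's specs.
mt_per_s = 7500e6            # LPDDR5X-7500: 7500 MT/s
bus_bytes = 4 * 16 // 8      # four 16-bit channels -> 8 bytes per transfer

bandwidth_gb = mt_per_s * bus_bytes / 1e9
print(bandwidth_gb)          # 60.0 GB/s (decimal)
print(60_000 / 1024)         # ~58.6, the table's figure
```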

iPhone 16 Pro performance against competitors

| Phone | Chip | Speedometer 2.1 (web browsing) | GeekBench 6 GPU compute (Metal/Vulkan) | GeekBench 6 CPU (single / multi) | GFXBench Aztec Ruins, fps (onscreen / 1440p) |
|---|---|---|---|---|---|
| iPhone 16 Pro | A18 Pro | 572 | 33,105 | 3,542 / 8,801 | 59 / 70 |
| iPhone 16 | A18 | 554 | 28,025 | 3,440 / 8,406 | 59 / 61 |
| iPhone 15 Pro | A17 Pro | 475 | 27,503 | 2,960 / 7,339 | 59 / 46.8 |
| Pura 70 Ultra (Performance Mode) | Kirin 9010 | 235 | 1,528 (failed) | 1,452 / 4,494 | 32 / 30 |
| Pixel 9 Pro | Tensor G4 | 221 | 6,965 | 1,945 / 4,709 | 70 / 44 |
| Galaxy S24 Ultra | Snapdragon 8 Gen 3 for Galaxy | 240 | 17,012 | 2,262 / 7,005 | 75 / 81 |

The iPhone 16 Pro is noticeably faster than current Android flagships; its roughly 60% single-core CPU lead over the Galaxy S24 Ultra goes a long way toward explaining how fast the iPhone 16 Pro feels in everyday use.

Apple's 2024 flagship holds a 95% lead over a rival like the Galaxy S24 Ultra when the GPU is used for compute tasks such as background blurring and face recognition; in game rendering, however, the advantage still lies with the Galaxy and its Snapdragon 8 Gen 3 chip.

The performance of the neural processing unit of the iPhone 16 Pro against competitors

| Phone | Framework | Backend | Single-precision (FP32) score |
|---|---|---|---|
| iPhone 16 Pro | Core ML | Neural Engine | 4,647 |
| iPhone 15 Pro | Core ML | Neural Engine | 3,862 |
| Pura 70 Ultra | TensorFlow Lite | NNAPI | 235 |
| Pixel 9 Pro | TensorFlow Lite | NNAPI | 347 |
| Galaxy S24 Ultra | TensorFlow Lite | NNAPI | 477 |

In the GeekBench AI benchmark, the iPhone 16 Pro's neural processing unit outperforms the Galaxy S24 Ultra by an astronomical 870%; we will have to wait for Apple's AI features to ship to see whether such a gap is real or just a quirk of the benchmark software.
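The percentage leads quoted in the last few paragraphs can be reproduced directly from the benchmark tables above (the "about 60%" single-core figure comes out closer to 57% when computed exactly):

```python
def lead(a: float, b: float) -> float:
    """How much faster score a is than score b, as a percentage."""
    return (a / b - 1) * 100

# iPhone 16 Pro vs Galaxy S24 Ultra, from the tables above:
print(round(lead(3542, 2262)))    # 57  -> single-core CPU lead
print(round(lead(33105, 17012)))  # 95  -> GPU-compute lead
print(round(lead(4647, 477)))     # 874 -> GeekBench AI FP32 lead
```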

Like the previous generation, Apple sells the iPhone 16 Pro in 128 GB, 256 GB, 512 GB, and 1 TB versions with NVMe storage, while the iPhone 16 Pro Max starts at 256 GB. Benchmarks show the iPhone 16 Pro's storage speed is no different from the previous generation.

iPhone 16 Pro storage speed compared to competitors

| Phone | Sequential read (MB/s) | Sequential write (MB/s) |
|---|---|---|
| iPhone 16 Pro | 1,636 | 1,340 |
| iPhone 15 Pro | 1,652 | 1,380 |
| Pixel 9 Pro XL | 1,350 | 171 |
| Galaxy S24 Ultra | 2,473 | 1,471 |

Numbers aside, the truth is that the iPhone 16 Pro does not feel much different in everyday use from the iPhone 15 Pro or even the iPhone 14 Pro. What this performance headroom over previous generations buys is longevity: the phone will still feel fast by the standards of a few years from now, and it can of course handle Apple Intelligence's heavy processing.

Apple says that changes to the iPhone 16 Pro's internal structure, including a metal battery casing (Pro model only), let the phone sustain up to 20% higher performance under heavy load. The improvement is noticeable: the phone does not get as hot during graphics-heavy games, and its performance drops less than before. In Zoomit's stability test, the iPhone 16 Pro throttled less than the Galaxy S24 Ultra and the previous generation, and its body peaked at 47 degrees Celsius.

To measure performance stability outside of heavy gaming, we ran a CPU stress test: all CPU cores are loaded for 20 minutes, and at the end we see what share of its performance capacity the CPU still delivers after heating up under sustained load.

iPhone 16 Pro CPU stress test
CPU performance stability test under heavy processing load for 20 minutes

In our tests, the iPhone 16 Pro still delivered 84% of its peak performance after 20 minutes, so it should rarely lag or drop frames even in very heavy use. During the CPU stress test, the body reached about 45 degrees Celsius.

This year, Apple has increased the battery capacity of the iPhone 16 Pro and 16 Pro Max by about 10%. Together with the A18 Pro's efficiency, this gives the new flagships very good battery life; Apple calls the iPhone 16 Pro Max "the best iPhone in history in terms of battery life."

iPhone 16 Pro battery life in the battery menu

Apple quotes the new iPhones' endurance as video-playback time, saying the iPhone 16 Pro manages 27 hours of video playback, 4 hours more than the previous generation. Zoomit's tests measured 26 hours and 5 minutes, more or less consistent with Apple's claim.

iPhone 16 Pro battery life against competitors

| Phone | Display (size, resolution, refresh rate) | Battery (mAh) | Video playback (h:mm) | Everyday use (h:mm) |
|---|---|---|---|---|
| iPhone 16 Pro | 6.3", 2622 × 1206, 120 Hz | 3,582 | 26:05 | — |
| iPhone 15 Pro | 6.1", 2556 × 1179, 120 Hz | 3,274 | 21:11 | — |
| iPhone 15 Pro Max | 6.7", 2796 × 1290, 120 Hz | 4,441 | 24:43 | — |
| Pixel 9 Pro XL | 6.8", 2992 × 1344 (native), 120 Hz | 5,060 | 25:00 | 13:25 |
| Pura 70 Ultra | 6.8", 2844 × 1260, 120 Hz | 5,200 | 25:00 | 17:00 |
| Galaxy S24 Ultra | 6.8", 3088 × 1440, 120 Hz | 5,000 | 27:41 | 14:05 |

Another change in the iPhone 16 Pro is faster charging: Apple's new flagship now supports 30-watt wired charging, and if the same charger feeds a MagSafe wireless pad, wireless charging reaches 25 watts, which Apple says takes the battery from zero to 50% in 30 minutes.

Very good charging, beyond the last generation

Although the iPhone 16 Pro's wired charging power has increased from 20 to 30 watts, a full charge still takes about 100 minutes: the battery capacity has grown by 10%, and the iPhone charges very slowly between 85 and 100%. Even with optimized battery charging turned off, the phone needs about 35-40 minutes for the last 15% of capacity.

Design and build quality

Setting aside the iPhone's more fundamental changes, what you notice at first glance is the larger size, especially of the iPhone 16 Pro Max, and the narrower bezels around the screen.

Home screen apps and widgets on the iPhone 16 Pro screen

The iPhone 16 Pro and Pro Max use 6.3- and 6.9-inch screens, 0.2 inches larger than the previous several generations; so it is no surprise that physical dimensions and weight have grown too. Both phones are about 3 mm taller, 1 mm wider, and 12 and 6 grams heavier, respectively; the weight increase is thus more significant on the iPhone 16 Pro, while the 16 Pro Max sits worse in the hand than before and practically demands two-handed use.

Dynamic Island iPhone 16 Pro close-up

The bezels around the display have become noticeably narrower: the screen of the iPhone 16 Pro is now surrounded by a border just over one millimeter thick (1.15 mm, to be exact), while the bezels of the iPhone 15 Pro are about 1.5 mm and those of the iPhone 16 exceed 2 mm. Keep in mind, though, that with a case on the phone the narrower bezels are less noticeable.

Dynamic Island iPhone 16 Pro
iPhone 16 Pro screen close-up

Another change in the appearance of the iPhone 16 Pro is the addition of the Desert Titanium color option and the removal of Blue Titanium. The new color is closer to cream, with a golden frame, but unfortunately we did not have it for review. The other options are limited to the neutral and understated Black Titanium, White Titanium, and Natural Titanium.

iPhone 16 Pro in hand

Elsewhere, the design of the iPhone 16 Pro is no different from the previous generation: the same flat titanium frame with flat glass panels on the back and front, assembled with high precision into a solid structure with IP68 certification. Unlike the iPhone 16, there has been no change in the finishing of the back panel or the arrangement of the cameras; only the screen cover has been upgraded to the third-generation Ceramic Shield, which, according to Apple, is twice as strong as the previous generation.

Camera control button on iPhone 16 Pro

 

We discussed Camera Control and its not-so-ergonomic location on the right side of the frame at the beginning of the article. Apart from this new button, the controls are identical to the previous generation: the volume buttons and the Side button are well placed and give very good feedback, and the Action button can be personalized as before.

Read more: Reviews of iPhone 14 Plus, price and technical specifications

Display and speaker

Finally, another largely unchanged part is the iPhone 16 Pro display, which uses the same 120 Hz OLED panel with LTPO technology. This year, thanks to the 0.2-inch increase in screen size, its resolution reaches 2622 x 1206 pixels with a very good density of 460 pixels per inch. As before, the display supports HDR standards including HDR10 and Dolby Vision, so, as in past generations, we are dealing with either a 10-bit panel or an 8-bit + FRC one.

Watch video with iPhone 16 Pro

Thanks to LTPO technology, the iPhone 16 Pro display can vary its refresh rate between 1 and 120 Hz depending on the type and motion of the content, so the phone can render smooth animations and match the frame rate of games and videos without hurting battery life.

iPhone 16 Pro display performance against competitors

| Product | Minimum brightness (nits) | Maximum brightness, manual (nits) | Maximum brightness, HDR (nits) | Color gamut coverage | Average color error |
| --- | --- | --- | --- | --- | --- |
| iPhone 16 Pro | 1.35 | 1044 | 1950 | 99.7% | 0.98 |
| iPhone 15 Pro | 2.21 | 1052 | 1947 | 99.7% | 1.0 |
| iPhone 15 Pro Max | 2.15 | 1041 | 1950 | 99.7% | 0.9 |
| Pixel 9 Pro XL | 4 | 1300 | 2650 | 97.2% (Natural) / 81.6% (Adaptive) | 1.1 (Natural) / 3 (Adaptive) |
| Pura 70 Ultra | 2.5 | 740 | 1500 | 99.7% (Natural) / 79.7% (Vivid) | 1.9 (Natural) / 5.3 (Vivid) |
| Galaxy S24 Ultra | 0.7 | 914 | 2635 | 102% (Natural) / 81.8% (Vivid) | 3.5 (Natural) / 4.4 (Vivid) |

Apple says the iPhone 16 Pro display, like the previous generation, covers the wide P3 color space, reaches 1000 nits of brightness in manual mode, and peaks at 2000 nits in automatic mode or during HDR video playback. The important difference between the iPhone 16 Pro panel and the iPhone 15 Pro’s, however, is the minimum brightness.

Zoomit’s measurements confirm Apple’s claims about the iPhone’s brightness: we measured the iPhone 16 Pro’s minimum brightness at 1.35 nits, significantly lower than the previous generation’s 2.15 nits, while the maximum brightness in manual mode and during HDR video playback is no different from the iPhone 15 Pro, at 1044 and 1950 nits, respectively. It is worth adding that the iPhone 16 Pro reached 1296 nits in automatic mode while displaying SDR content (anything other than HDR video), but under strong ambient light it can probably approach the same 2000-nit range.

iPhone 16 Pro Type-C port

The iPhone 16 Pro uses stereo speakers: the main channel sits on the bottom edge of the frame, and the earpiece acts as the second channel. The iPhone’s volume may not reach the level of competitors such as the Pixel 9 Pro or Galaxy S24 Ultra, but the output quality is a cut above: the iPhone’s sound is clearer and its bass much stronger than its rivals’.

Summary and comparison with competitors

Assuming the government finally acts rationally and gets the iPhone registry working, it is not very logical for iPhone 15 Pro or even iPhone 14 Pro users to pay tens of millions of tomans extra for the iPhone 16 Pro, unless the 5x telephoto camera (versus the iPhone 15 Pro and both 14 Pro models), the 15-30 percent faster chip, or Apple Intelligence (for iPhone 14 Pro users) is critical to them.

Users of the iPhone 13 Pro or older models have more reasons to buy the iPhone 16 Pro: better charging, more RAM, a more capable camera, a brighter screen with Dynamic Island, a faster chip, and perhaps, finally, artificial intelligence features can all justify spending the money to upgrade from the iPhone 13 Pro to the 16 Pro.

If the ecosystem is not a limiting factor for you, the Galaxy S24 Ultra, even a year after launch and at a much lower price, offers with Galaxy AI more or less the same experience Apple Intelligence promises, and in photography and videography it is mostly on par with the iPhone 16 Pro, and sometimes even better.

Naturally, we could not yet test the iPhone 16 Pro’s key competitive advantage, Apple Intelligence, which is the focus of Apple’s marketing for this phone; to experience all of its capabilities we will have to wait until early 2025. A significant share of these features, however, will also arrive on the iPhone 15 Pro with essentially the same experience.

Working with iPhone 16 Pro

The iPhone 16 Pro is a very attractive phone, but at least in its first month on sale it is not in line with Apple’s philosophy. We know Apple as a company that delivers mature, polished features from day one; apparently, in the age of artificial intelligence, we have to get used to half-baked launches and delays. First it was Google, Microsoft, and Samsung; now Apple.


Technology

Biography of Geoffrey Hinton; The godfather of artificial intelligence

Geoffrey Hinton
Geoffrey Hinton, the godfather of artificial intelligence, revolutionized our world by inventing artificial neural networks. Do not miss the story of his ups and downs life.


Geoffrey Hinton, a scientist who has rightly been called the “Godfather of Artificial Intelligence”, revolutionized the world of technology with his research. Inspired by the human brain, he built artificial neural networks and gave machines the ability to learn, think, and make decisions. These technologies, found everywhere in our lives today, from voice assistants to self-driving cars, are the result of the relentless efforts of Hinton and his colleagues.

Hinton is now recognized as one of the most influential scientists of the 20th century, having won the 2024 Nobel Prize in Physics. But his story goes beyond awards and honors.

Geoffrey Hinton’s story is a story of perseverance, innovation, and the constant search to discover the unknown. In this article, we will look at the life and achievements of Geoffrey Hinton and we will answer the question of how one person with a simple idea was able to revolutionize the world of technology.

From physical problems to conquering the digital world

Hinton has worked standing up for almost 18 years. He cannot sit for more than a few minutes because of a spinal disc problem, but even that has not stopped his work. “I hate standing and prefer to sit, but if I sit, a disc in my lower back bulges out and I feel excruciating pain,” he says.

Since driving or sitting in a bus or subway is very difficult and painful for Hinton, he prefers to walk instead of using a private car or public transportation. The long-term walks of this scientist show that he has not only surrendered to his physical conditions but also to what extent he is eager to conduct scientific research and achieve results.

Hinton has been standing for years

For about 46 years, Hinton has been trying to teach computers to learn the way humans do. The idea seemed impossible and hopeless at first, but time proved otherwise, so much so that Google hired Hinton and asked him to make artificial intelligence a reality. “Google, Amazon, and Apple think artificial intelligence is what will make their future,” Hinton said in an interview after being hired by Google.

Google hired Hinton to make artificial intelligence a reality

Heir to genius genes

Hinton was born on December 6, 1947, in England, into an educated and famous family with a rich scientific background. Most of his family members were trained in mathematics and economics. His father, Howard Everest Hinton, was a prominent entomologist, and all of his siblings did important scientific research.

Hinton knew from the age of seven that he would one day reach an important position

Some of the world’s leading mathematicians, such as George Boole, the founder of Boolean logic, and Charles Howard Hinton, a mathematician known for his visualization of higher dimensions, were Hinton’s relatives. From a young age, then, there was great pressure on Hinton to excel in his education, so much so that he was already thinking about a doctorate at the age of seven.

Geoffrey Hinton at seven years old
Psychology, philosophy, and artificial intelligence; a powerful combination to create the future

Hinton took a diverse academic path: he began his schooling at Clifton College in Bristol and then went to the University of Cambridge. There, he repeatedly changed his major, vacillating between the natural sciences, art history, and philosophy. He finally graduated from Cambridge in 1970 with a bachelor’s degree in experimental psychology.

Hinton’s interest in understanding the brain and how humans learn led him to artificial intelligence. He went to the University of Edinburgh to continue his studies, where he began AI research under his mentor, Christopher Longuet-Higgins. Finally, in 1978, Hinton achieved the dream he had harbored since age seven and received his doctorate in artificial intelligence. The PhD was a turning point in his career and prepared him to enter the complex and fascinating world of artificial intelligence.

Hinton’s diverse education, from psychology to artificial intelligence, gave him a comprehensive and interdisciplinary perspective that greatly contributed to his future research. This perspective enabled him to make a deep connection between the functioning of the human brain and machine learning algorithms.

Out of his deep interest in the workings of the human mind, Hinton first studied physiology and the anatomy of the human brain as an undergraduate, then moved to psychology, and finally settled on artificial intelligence. His goal in entering the field was to simulate the human brain and apply that understanding to artificial intelligence.

If you want to learn about the functioning of a complex device like the human brain, you have to build one like it.

– Geoffrey Hinton

Hinton believed that in order to have a deep understanding of a complex device like the brain, one should build a device similar to it. For example, we normally think we are familiar with how cars work, but when building a car we will notice many details that we had no knowledge of before building it.

Alone against the crowd, but victorious

While struggling with opposition to his ideas, Hinton encountered the work of researchers such as Frank Rosenblatt in the field of artificial intelligence. Rosenblatt was an American scientist who revolutionized the field in the 1950s and 1960s by inventing and developing the perceptron model.

The perceptron, one of the first machine learning models, is recognized as the main inspiration for today’s artificial neural networks. It is a simple algorithm used to classify data, inspired by the way neurons in the brain work: a mathematical model of an artificial neuron that receives several inputs, combines them with a weighted function, and decides on an output.
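As a concrete illustration, here is a minimal sketch of Rosenblatt-style perceptron learning in Python; the AND-gate data, learning rate, and epoch count are illustrative choices, not details from the article:

```python
def predict(w, b, x):
    # Weighted sum of the inputs followed by a step activation,
    # as in Rosenblatt's perceptron
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Perceptron learning rule: nudge weights by (target - prediction) * input
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, b, x)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the logical AND function (linearly separable,
# so a single-layer perceptron suffices)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

A single layer like this can only separate linearly separable classes, which is exactly the limitation of Rosenblatt's model discussed below.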

Hinton and Rosenblatt side by side
Hinton and Rosenblatt side by side

Rosenblatt’s hope was that one could feed a neural network a set of data, such as photographs of men and women, and that the network, like a human, could learn to separate the photographs. But there was one problem: the perceptron did not work very well. Rosenblatt’s neural network was a single layer of neurons, too limited for the image-separation task it was given.

Even when no one believed in artificial intelligence, Hinton didn’t lose hope

In the late 1960s, Rosenblatt’s colleague wrote a book about the limitations of Rosenblatt’s neural network. After that, research on neural networks and artificial intelligence nearly stopped for about ten years. No one wanted to work in the field, certain that no clear results would come of it. Of course, “no one” may be too strong a word; “almost no one” is more accurate, because for Hinton the subject of artificial intelligence and neural networks was another matter entirely.

Hinton believed there must be a way to simulate the human brain and build a device like it; he had no doubt about it. Why would he pursue a path that few followed and almost no one expected to end well? Convinced that everyone else was mistaken, this eminent scientist kept going and did not give up.

From America to Canada; A journey that changed the course of artificial intelligence

Hinton went to different research institutes in America during his research. At that time, the US Department of Defense funded many US research institutions, so most of the projects carried out or underway focused on military objectives. Hinton was not interested in working in the military field and was looking for pure scientific research and the development of technology for human and general applications. As a result, he was looking for a place where he could continue his research away from the pressures of the military and the limitations of dependent funds.

I did not want my research to be funded by military organizations, because the results obtained would certainly not be used for human benefit.

– Geoffrey Hinton

After searching for a suitable place to continue research, Canada seemed to be the most suitable option. Finally, Hinton moved to Canada in 1987 and began his research at the University of Toronto. In the same years, Hinton and his colleagues were able to solve problems that simpler neural networks could not solve by building more complex neural networks.

Instead of extending single-layer networks, Hinton and his colleagues developed multilayer neural networks. These networks worked well and drew a line under all the earlier disappointments and failures. In the late 1980s, a researcher named Dean Pomerleau built a self-driving car using a neural network and drove it on public roads.

In the 1990s, Yann LeCun, one of the pioneers of artificial intelligence and deep learning, developed a system called “Convolutional Neural Networks” (CNNs). These networks became the basis for many modern techniques in machine vision and pattern recognition. One of the first important applications of these networks was to build a system that could recognize handwritten digits; But once again, after the construction of this system, researchers in the field of artificial intelligence reached a dead end.

In the 1990s, an interesting neural network was built, but it stalled due to insufficient data.

The neural networks of that era performed poorly for lack of sufficient data and computing power. As a result, computer science and artificial intelligence researchers once again concluded that neural networks were nothing more than a fantasy. In 1998, after 11 years at the University of Toronto, Geoffrey Hinton left to found and lead the Gatsby Computational Neuroscience Unit at University College London, where he continued studying neural networks and their applications.

AlexNet: A Milestone in the History of Artificial Intelligence

From the 1990s into the 2000s, Hinton was one of the few people on the planet who still believed in neural networks and artificial intelligence. He attended many conferences in pursuit of his goal, but was usually met with indifference and treated like an outcast. You might think Hinton never wavered and pressed on with hope, but that is not the case: he too was sometimes disappointed and doubted he would reach his goal. Yet he overcame the despair and continued, however difficult the way, because one sentence kept repeating in his mind: “Computers can learn.”

Watch: The story of the birth of artificial intelligence, the exciting technology that shook the world

After returning to the University of Toronto in 2001, Hinton continued his work on neural network models, and in the 2000s he and his research group developed deep learning technology and put it to practical use. By 2006, the world was beginning to catch on to Hinton’s ideas and no longer dismissed them as far-fetched.

In 2012, Hinton, along with two of his PhD students, Alex Krizhevsky and Ilya Sutskever (later a co-founder of OpenAI, the creator of ChatGPT), developed an eight-layer neural network called AlexNet. Its purpose was to identify images in ImageNet, a large online database of images. AlexNet’s performance was stellar, outperforming the most accurate program up to that point by about 40 percent. The image below shows the architecture of the AlexNet convolutional neural network.

AlexNet neural network

Viso

In the image above, C1 to C5 are convolutional layers that extract image features. Each layer has convolutional filters of different sizes that are applied to the image, or to the previous layer’s output, to detect different features. The number of channels in each layer (96, 256, and 384) is the number of filters used in that layer.

After feature extraction, the image is sent to fully connected layers (FC6 to FC8). Each circle in these layers represents a neuron that is connected to the neurons of the previous layer.

FC8 is the final output layer and consists of 1000 neurons. Due to the high number of layers and the ability to learn complex image features, the AlexNet architecture was very accurate in image recognition and paved the way for further improvements in the field of neural networks.
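The spatial sizes in such an architecture follow from the standard convolution output-size formula. Here is a small sketch; the concrete numbers (227-pixel input, 11x11 kernels, stride 4 for C1) are the commonly cited settings from the original AlexNet paper, not figures given in this article:

```python
def conv_output_size(n, k, s=1, p=0):
    # Standard formula for the spatial size of a convolution's output:
    # floor((input - kernel + 2 * padding) / stride) + 1
    return (n - k + 2 * p) // s + 1

# AlexNet's first convolutional layer (assumed values from the original
# paper, not from this article): 227x227 input, 11x11 kernels, stride 4
print(conv_output_size(227, 11, s=4))  # 55

# A 5x5 convolution with padding 2 preserves spatial size,
# as in AlexNet's second convolutional layer
print(conv_output_size(27, 5, p=2))  # 27
```

The same formula applies to pooling layers; the fully connected layers FC6 to FC8 then flatten the final feature maps, ending in the 1000-way output described above.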

After developing AlexNet, Hinton and his two students founded a company called DNNresearch, which Google acquired for $44 million in 2013. That same year, Hinton joined Google Brain, Google’s artificial intelligence research team, and was later appointed a vice president and engineering fellow.

Hinton at Google

Businesstoday

From Backpropagation Algorithms to Capsule Networks: Hinton’s Continuous Innovations

Hinton has written or co-authored more than 200 scientific papers on the use of neural networks for machine learning, memory, perception, and symbol processing. During a postdoctoral fellowship at the University of California, San Diego, he worked with David E. Rumelhart and Ronald J. Williams to apply the backpropagation algorithm to multilayer neural networks.

Hinton said in a 2018 interview that the main idea of the algorithm was Rumelhart’s; nor were Hinton and his colleagues the first to propose backpropagation. In 1970, Seppo Linnainmaa proposed a method called reverse-mode automatic differentiation, of which the backpropagation algorithm is a special case.

Hinton and his colleagues took a big step in their research after publishing their paper on the error backpropagation algorithm in 1986. This article is one of Hinton’s most cited articles with 55,020 citations.

The number of citations of the 1986 article

Google Scholar
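The mechanics behind error backpropagation are essentially the chain rule applied layer by layer. A minimal single-neuron sketch (illustrative code, not from the 1986 paper) that derives the analytic gradient and checks it against a numerical one:

```python
import math

def forward(w, b, x):
    # Single sigmoid neuron: a = sigmoid(w*x + b)
    return 1 / (1 + math.exp(-(w * x + b)))

def loss(w, b, x, t):
    # Squared error against target t
    return 0.5 * (forward(w, b, x) - t) ** 2

def backprop_grad(w, b, x, t):
    # Chain rule: dL/dw = (a - t) * a * (1 - a) * x, dL/db = (a - t) * a * (1 - a)
    a = forward(w, b, x)
    delta = (a - t) * a * (1 - a)
    return delta * x, delta

# Check the analytic gradient against a central-difference numerical derivative
w, b, x, t = 0.5, -0.2, 1.5, 1.0
gw, gb = backprop_grad(w, b, x, t)
eps = 1e-6
num_gw = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
print(abs(gw - num_gw) < 1e-8)  # True: the chain-rule gradient matches
```

In a multilayer network, the same `delta` term is propagated backwards through each layer, which is where the algorithm gets its name.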

In October and November 2017, Hinton published two open-access papers on capsule neural networks, which he says work well.

At the 2022 Neural Information Processing Systems conference, Hinton introduced a new learning algorithm for neural networks called the forward-forward algorithm. Its main idea is to replace the forward and backward passes of error backpropagation with two forward passes: one over positive (real) data and one over negative data that the network itself generates.
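That description can be sketched as a toy layer-local update: raise a layer's "goodness" on positive data and lower it on negative data. This is only an illustration in the spirit of Hinton's paper; the goodness measure (sum of squared activations), the threshold, and the toy data are assumptions, not details given in this article:

```python
import math

def layer_out(w, x):
    # ReLU activations of one layer
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w]

def goodness(w, x):
    # "Goodness" of the layer's response: sum of squared activations
    return sum(a * a for a in layer_out(w, x))

def ff_step(w, x, positive, lr=0.05, theta=0.5):
    # One forward-forward update: push goodness above theta for positive
    # (real) data and below it for negative data, using only this layer's
    # own activity -- no backward pass through other layers.
    a = layer_out(w, x)
    g = sum(v * v for v in a)
    p = 1 / (1 + math.exp(-(g - theta)))  # P(sample is positive)
    coef = lr * ((1.0 if positive else 0.0) - p)
    for i, row in enumerate(w):
        if a[i] > 0:  # gradient of g flows only through active units
            for j in range(len(row)):
                row[j] += coef * 2 * a[i] * x[j]

# Toy data: one "real" pattern and one "negative" pattern
w = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
x_pos, x_neg = [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]
for _ in range(50):
    ff_step(w, x_pos, positive=True)
    ff_step(w, x_neg, positive=False)
print(goodness(w, x_pos) > goodness(w, x_neg))  # True: the layer separates them
```

Because each layer trains on its own goodness, no error signal has to travel backwards through the network, which is the contrast with backpropagation that Hinton highlighted.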

When the creator questions his creation

Finally, in May 2023, after about 10 years of working with Google, Hinton resigned from his job at the company because he wanted to speak freely about the dangers of the commercial use of artificial intelligence. Hinton was concerned about the power of artificial intelligence to generate fake content and its impact on the job market. Next, we read a part of Hinton’s words in an interview in 2023:

I think we’ve entered an era where, for the first time, we have things that are more talented than us. Artificial intelligence understands and has talent. This advanced system has its own experiences and can make decisions based on those experiences. Currently, artificial intelligence does not have self-awareness, but over time, it will acquire this feature. There will even come a time when humans are the second most talented creatures on earth. Artificial intelligence came to fruition after many disappointments and failures.

– Geoffrey Hinton

My doctoral supervisor asked me to work on another subject and not jeopardize my future career, but I preferred to learn how the human brain and mind work and to simulate them, even if I failed. It took longer than I expected, about 50 years, to reach the result.

At one point, the reporter asks Hinton when he concluded that his idea about neural networks was right and everyone else was wrong. “I’ve always thought I was right, and I’m right,” Hinton replies with a pause and a smile.

With the advent of ultra-fast chips and the vast amounts of data generated on the internet, Hinton’s algorithms attained an almost magical power. Little by little, computers learned to recognize the content of photos; later they could easily recognize speech and translate from one language to another. In 2012, terms like “neural networks” and “machine learning” made the front page of the New York Times.

Read more: The biography of Ada Lovelace; the first programmer in history

From Turing to Nobel: The Unparalleled Honors of the Godfather of Artificial Intelligence

As one of the pioneers of artificial intelligence, Geoffrey Hinton has been recognized many times for his outstanding achievements. His numerous awards include the David E. Rumelhart Prize of the Cognitive Science Society and Canada’s Gerhard Herzberg Gold Medal, the country’s highest honor in science and engineering.

One of Hinton’s most notable honors was winning the Turing Award with his colleagues in 2018, an award so prestigious in computing that it is called the Nobel Prize of computing. It recognized Hinton’s sustained contributions to the development of neural networks. In 2022 another honor followed, when he received the Royal Society’s Royal Medal for his pioneering work in deep learning.

2024 was a historic year for Geoffrey Hinton. He and John Hopfield won the Nobel Prize in Physics for their amazing achievements in the field of machine learning and artificial neural networks. The Nobel Committee awarded this valuable prize to these two scientists for their fundamental discoveries and inventions that made machine learning with artificial neural networks possible. When awarding the prize, the development of the “Boltzmann machine” was specifically mentioned.

When a New York Times reporter asked Hinton to explain in simple terms the importance of the Boltzmann machine and its role in pretraining backpropagation networks, Hinton jokingly borrowed a line from Richard Feynman:

Look, my friend, if I could explain this in a few minutes, it wouldn’t be worth a Nobel Prize.

– Richard Feynman

This humorous response shows how complex the technology is: fully understanding it requires extensive knowledge and study. The Boltzmann machine is one of the earliest neural network models (1985); as a statistical model, it helps the network find patterns in data automatically.

Geoffrey Hinton is a man who turned the dream of machine intelligence into a reality by standing against the currents. From back pain to receiving the Nobel Prize in Physics, his life path was always full of ups and downs. With steely determination and perseverance, Hinton not only became one of the most influential scientists of the 20th century but also changed the world of technology forever with the invention of artificial neural networks. His life story is an inspiration to all who pursue their dreams, even when the whole world is against them.


Technology

Everything about the Cybercab and Robovan; Elon Musk’s robotic taxis

Elon Musk's robotic taxis
Elon Musk brought the idea of smart public transportation one step closer to reality by unveiling the Cybercab and the Robovan.


After years of passionate but unfulfilled promises, on October 11, 2024, at the We, Robot event, Elon Musk finally unveiled Tesla’s robotic taxis.

Appearing on stage an hour late, Musk showed off the Cybercab self-driving taxi: a silver two-seater that drives without a steering wheel or pedals.

The CEO of Tesla also announced the presence of 21 Cybercabs and a total of 50 self-driving cars at the Warner Bros. studio in California, where Tesla hosted the invitation-only event.

Tesla Cybercab robotic taxi in profile

Tesla

“We’re going to have a very glorious future ahead of us,” Musk said, but gave no indication of where the new cars will be built. According to him, Tesla hopes to offer the Cybercab to consumers at a price below 30,000 dollars before 2027.

The company will reportedly begin testing “unsupervised FSD” with Model 3 and Model Y electric vehicles in Texas and California next year.

Currently, the company’s self-driving cars operate with supervised FSD, meaning a human must be ready to take control of the steering wheel or brakes at any time. Tesla needs several permits from regulators in different US states (and other countries) to offer cars without a steering wheel and pedals.

But the Cybercab was not the only product unveiled at the event. Alongside a line-up of Optimus robots, likely to launch as consumer work assistants in the coming months, the unveiling of an autonomous robotic van that can carry up to 20 passengers or be used for cargo generated still more excitement among the audience.

Tesla Robovan side view

According to Musk, the Robovan and Cybercab use inductive charging and need no physical connection to recharge. He also said the Robovan would solve the problem of high-density transport, pointing to moving sports teams as an example.

The CEO of Tesla has been drawing the dream vision of the company’s self-driving public transportation fleet for the shareholders for years and sees the company’s future in self-driving vehicles.

It is worth recalling that We, Robot was Tesla’s first product-introduction event since the Cybertruck unveiling in 2019; that product entered the market in late 2023 and has since been recalled five times in the United States over various problems.

The event ended with Elon Musk’s “Let’s party” and a video of Optimus robots dancing, while Tesla’s CEO invited guests to take a test drive with the on-site self-driving cars inside the closed-off film studios.

However, experts and analysts of the self-driving car industry believe the release of Cybercabs will take longer than the announced schedule, because ensuring these cars’ safety in scenarios such as bad weather, complex intersections, and unpredictable pedestrian behavior will require many permits and tests.

Tesla shareholders remain wary of Musk’s vague timetable for producing and delivering the new cars, as he has a poor track record with robotaxi promises. But we cannot deny that this unveiling breathed new life into the world of self-driving technologies.

But where did the idea of robotic taxis begin, which Tesla’s CEO claims are 10 to 20 times safer than human-driven cars and will cut the cost of public transportation?

Tesla Robovan next to the Cybercab

Tesla

In 2019, during a meeting on the development of Tesla’s self-driving cars, Elon Musk suddenly made a strange prediction: “By the end of next year, we will have more than a million robot taxis on the road.”

Tesla’s investors were not unfamiliar with the concept of fully autonomous driverless cars; what surprised them was the timing and the short window of the plans Musk was announcing. His prediction had not come true by the end of 2020 and has been postponed many times since. In recent months, with Tesla’s profit margins shrinking, Elon Musk has tried in various ways to divert Wall Street’s attention from the company’s core business toward a new focus: at every opportunity, he explains that the company’s future lies not in producing electric cars but in the exciting world of artificial intelligence and humanoid robots.

According to him, one of the most profitable businesses in the field of AI will be driverless taxis or robotaxis that work almost anywhere and in any condition. Musk believes that Tesla’s market value will reach several trillion dollars after the release of these cars, although with this, Tesla will enter a highly competitive market.

Tesla’s technology will face fierce competition from Alphabet’s Waymo, Amazon’s self-driving unit Zoox, and General Motors’ Cruise. Also, ride-sharing companies such as Uber and Lyft and Chinese companies such as Baidu and BYD are considered serious competitors of Tesla.

Can robotaxis really save Tesla from declining profitability? How close is the company to fully autonomous, driverless car technology, and what guarantees the success of Elon Musk’s plans for a vast network of robotic taxis?

The start of the internal project of Tesla’s self-driving taxis

Elon Musk at the presentation ceremony of Tesla's autopilot system

Business Insider

Although Elon Musk had floated the idea of robotaxis as early as 2016, design and development only began in earnest in 2022. That year, during Tesla’s first-quarter earnings call, Musk announced that the company was building robotaxis with no steering wheel, pedals, or any other controls for human driving.

He also said that these cars will be fully self-driving and will be available to the public by 2024, when Tesla completes its self-driving car project. Sometime later, at the opening ceremony of the Gigafactory in Austin, he mentioned that the robotaxis would have a futuristic design and probably look more like a Cybertruck than a Tesla Model S.

Tesla’s robotic taxis have no steering wheel, pedals, or any other controls for physical human driving

During the same meeting, a Tesla investor asked Musk whether the robotaxis would be offered to fleet operators or sold directly to consumers. Musk did not answer the question but continued to emphasize that robotaxis minimize a car’s cost per kilometer traveled, and that a trip in one will cost less than a bus or subway ticket.

Shortly before Musk’s statement, Tesla had announced that it was working on a fully autonomous vehicle costing around $25,000, with or without a steering wheel. It was therefore unclear whether Musk’s robotaxi project referred to these cars.

According to the announced timeline, Tesla had 32 months to complete the construction, legal permits, and software required for the robot taxis and align with acceptable standards for “level 5 autonomy.”

At the beginning of 2024, the subject of robotic taxis made the news again. Elon Musk, who seemed fed up with Tesla’s usual car business, emphasized that Tesla’s future does not depend on selling more electric cars, but mainly on artificial intelligence and robotics.

Unlike Uber, Tesla’s main competitor in this market, Musk does not want to build the robotaxi service on existing models such as the Model 3 sedan or the Model Y SUV. According to Tesla, the company is developing a new dedicated vehicle, which will probably be called the Cybercab.

Shipping robotaxis depended on completing Tesla’s Autopilot and so-called Full Self-Driving systems, and there were no reliable estimates of how readily consumers would accept this innovative product or what new regulations would be imposed on the field.

Car design

Tesla Robotaxis concept design

Teslaoracle

In terms of design, the interior was expected to differ from other Tesla electric cars to meet passengers’ needs: for example, two rows of seats facing each other, or sliding doors that make boarding easier. A car used as a taxi also needs provisions for quick interior cleaning and even disinfection.

The idea of robotaxis also drew interesting design proposals from enthusiasts. Some suggested that Tesla optimize its self-driving cars for different uses: some with a place to rest on long trips, others with a monitor and accessories suited to working along the way.

Supporters argued that such amenities improve quality of life: a passenger who spends travel time on something useful has effectively saved that much time.

Continuing the speculation about the Cybercab’s design, a group of automotive researchers suggested that in the coming years Tesla could produce other vehicles tailored to entertainment, such as watching movies, or to socializing with friends and fellow travelers along the way: much like riding in a limousine.

The design of the Cybercab is reminiscent of the Cybertruck, but with doors that open from the top

But the initial design of the Cybercab published on Tesla’s website was somewhat reminiscent of the Cybertruck, with no special accommodations even for passengers with disabilities.

Forbes also wrote, in its latest report comparing self-driving cars from different companies, that Tesla’s robotaxi would probably be a two-seater with side-by-side seats and a retractable steering wheel, since users would eventually need a steering wheel to drive outside the areas covered by the company’s support services.

Tesla Cybercab back and side view with open doors

However, the final design of the Tesla Cybercab turned out not to resemble the self-driving cars of the startups Zoox or Zeekr.

With doors that open up like butterfly wings and a small interior, the car hosts only two passengers. As we might have guessed, the Cybercab looks a lot like the Cybertruck, but it’s sleeker and more eye-catching than the controversial Tesla pickup.

Hardware

Tesla Cybercab robotaxi

Sugar-Design

So far, Tesla has not disclosed any information about the set of sensors that will be used in the robotaxis. The company talks about Autopilot technologies on its website, but what Elon Musk has so far described as a fully self-driving, driverless car will require more advanced sensors, software and equipment than Autopilot.

Tesla Autopilot cars are equipped with multiple layers of cameras and powerful “machine vision” processing, and instead of radar, they use special “Tesla Vision” technology that provides a comprehensive view of the surrounding environment.

In the next step, Tesla Autopilot processes the data from these cameras using neural networks and advanced algorithms, then detects and groups objects and obstacles and determines their distance and relative position.

Tesla’s Autopilot system is equipped with multiple layers of cameras and powerful “machine vision” processing and uses “Tesla Vision” instead of radar.

The driving functions also include two key features: first, Traffic-Aware Cruise Control, which adjusts the car’s speed to the surrounding traffic; second, Autosteer, which works alongside cruise control to keep the car in its lane and on the right path, especially through curves in the road.

These cars can park automatically, recognize stop signs and other road signs as well as traffic lights, and slow down if necessary. Blind spot monitoring, automatic switching between road lanes, and intelligent summoning of the car by mobile application are some other features of these cars.
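To make the adaptive-cruise-control idea above concrete, here is a minimal toy sketch in Python. It is purely illustrative: the simple gap-keeping rule, the function name, and the thresholds are all invented for this example and are not Tesla’s actual algorithm.

```python
def acc_target_speed(set_speed, lead_distance, lead_speed, min_gap=30.0):
    """Toy adaptive-cruise-control rule (illustrative only, not Tesla's).

    With no lead vehicle, or a lead vehicle beyond the desired gap,
    cruise at the driver-set speed; otherwise slow down to match a
    slower lead vehicle. Speeds in km/h, distances in meters.
    """
    if lead_distance is None or lead_distance > min_gap:
        return set_speed               # road ahead is clear: hold set speed
    return min(set_speed, lead_speed)  # close the gap no further

print(acc_target_speed(110, lead_distance=None, lead_speed=None))  # open road
print(acc_target_speed(110, lead_distance=20.0, lead_speed=80.0))  # car ahead
```

A production system would, of course, compute a smooth acceleration profile from continuous sensor data rather than a single target speed.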

Despite all these safety measures, every Tesla Autopilot car still requires driver supervision, both under national law and by the company’s own admission. Until Tesla publishes new specifications for the robotaxi’s sensors, cameras, and systems, no expert can assess their efficiency or risk.

Introducing the Robotaxis application

The image of the map on the Tesla Robotaxis application

Tesla

In April 2024, Tesla released a brief report on the mobile application of robotaxis, and Elon Musk also said that the first of these cars would be unveiled in August (this date was later postponed).

The initial images of the robotaxi application showed a button to call or summon a taxi and, a little lower, a message with the car’s estimated arrival time. A second image showed a 3D map with a small virtual vehicle following a path toward a waiting passenger. The images closely resembled the Uber app, except that the car driving in them appeared to be a Tesla Model Y.

According to Tesla, passengers can adjust the temperature of the car as they wish when they are waiting for the taxi to arrive. Of course, other details such as the waiting time and the maximum passenger capacity of the car were also seen in the images of the application.

Passengers can adjust the temperature inside the car and their favorite music through the Tesla application

According to the published screenshots, in the next step when the vehicle reaches the origin and the passenger boards, the map view changes to the destination. Passengers can control the car’s music through the mobile application.

The app looks like a standard online ride-hailing app, but there’s no mention of the robotic nature of the car, which does all the driving automatically and autonomously. Elon Musk said in the same meeting:

You can think of Tesla’s robotaxis as a combination of Uber and Airbnb.

According to Musk, part of the fleet of robotic cars will belong to Tesla and the other part will belong to consumers. The owners of this group of robotic cars can give their cars to the taxi fleet whenever they want and earn money in this way.

Legal restrictions on removing the steering wheel and pedals

Tesla robot taxi without a steering wheel

independent

Despite all his previous promises, Tesla’s CEO has been evasive in past interviews when asked whether the robotaxis will have traditional controls like pedals and a steering wheel. Delays in early prototype development have cast heavy doubt on Tesla’s robotaxi plans, making the answer to that question more important than ever.

The reality is that, as of mid-2024, approving a vehicle without pedals and a steering wheel for public roads could take months or even years, while a more conventional-looking vehicle could arrive much sooner.

In a letter addressed to its shareholders, Tesla emphasized that it would need the permission of the federal government to deploy and operate robotaxis with a more radical and progressive design. The statement also stated:

Scaling robotaxis requires technological advances and regulatory approvals, but given their very high potential value, we intend to make the most of this opportunity and are working hard on the project.

Elon Musk also did not respond to a question about exactly what type of regulatory approval Tesla is seeking.

He was then asked by reporters if Tesla was seeking an exemption from the Federal Motor Vehicle Safety Standards (FMVSS) to develop and market a car without traditional controls. In response, Musk compared Tesla’s new product to Waymo’s local self-driving cars and said that products that are developed for local transportation are very vulnerable and weak. He added:

The car we produce is a universal product that works anywhere. Our robotaxis work well on any terrain.

Currently, car manufacturers must comply with federal motor vehicle safety standards that require human controls such as steering wheels, pedals, and side mirrors. These standards specify how vehicles must be designed before they can be sold in the United States. If a new product does not meet the requirements, a manufacturer can apply for an exemption, but the US government caps exemptions at 2,500 cars per company per year.

The regulatory exemption cap would theoretically prevent any AV company, including Tesla, from mass-deploying purpose-built self-driving vehicles. Self-driving advocates have pushed hard for legislation to raise the cap on driverless cars allowed on public roads, but the bill has apparently stalled in Congress over questions about the technology’s reliability and readiness.

Tesla will need an FMVSS exemption if it wants to remove the steering wheel and pedals from its self-driving cars

So far, only Nuro has managed to obtain an FMVSS exemption, allowing it to operate a limited number of driverless delivery robots in the states of Texas and California.

For example, General Motors’ Cruise unit applied for an exemption for its steering-wheel-free, pedal-free Origin shuttle, but it was never approved, and the Origin program was put on hold indefinitely.

Tesla Cybercab interior view and seats
Tesla Cybercab interior view and interior space

Startup Zoox (a subsidiary of Amazon) announced that its self-driving shuttles are “self-certified,” prompting the US National Highway Traffic Safety Administration to launch new research into this novel concept. Strict legal processes and licensing hurdles have led other companies in the field to avoid removing the steering wheel and pedals entirely. Waymo’s self-driving cars, for example, operate on public roads without a safety driver but retain traditional controls. The company recently said it would eventually introduce a new driverless car, but it gave no date and made no mention of FMVSS exemptions.

Thus, now that it is clear the final Cybercab will be produced without traditional controls, Tesla must clear similar regulatory hurdles.

The challenges of mass production of Tesla robotaxis

Tesla's robot taxi design

Sugar-Design

Apart from persuading regulators and obtaining city traffic permits, many other challenges stand in the way of the robotaxi project; Tesla has overcome some of them and has no answer yet for others.

For example, Tesla claims to have reached a reliable milestone in technology and hardware infrastructure, but incidents such as the 2018 crash of an Uber self-driving car that killed a pedestrian, or two severe Cruise crashes in 2023, have fostered a negative public view of driverless cars.

On the other hand, the current infrastructure of most American cities is designed for conventional cars and must be upgraded to support large numbers of robotaxis. Smart traffic lights that can communicate with self-driving cars and feed them real-time information are one basic need, and clear lane markings and legible traffic signs are critical for self-driving sensors.

The mass production of robotaxis requires changing the road infrastructure

Contrary to Musk’s claim that “the roads are ready” for robotaxis, self-driving cars from other companies still operate only in limited parts of the United States. As of July 2024, Tesla had about 2.2 million cars on American roads, far from Elon Musk’s target of a 7-million-car fleet.

Second, Tesla’s self-driving cars rely on a variety of cameras, sensors, and data-processing systems, which raise not only production costs but also the cost of maintaining the hardware and keeping the software up to date.

In the past year alone, some Tesla customers have been forced to pay an extra $12,000 to upgrade their cars’ self-driving capabilities, while there’s still no news of new features.

If we take the average robotaxi price as $150,000 to $175,000, it is unclear how long it would take for Musk’s revenue promise to pay off for potential buyers. His prediction of $30,000 in annual gross profit for owners who lend their cars to the fleet has no statistical or computational backing.
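Using the article’s own figures, a back-of-the-envelope payback calculation (a rough sketch that deliberately ignores financing, maintenance, insurance, and depreciation) shows why buyers might be skeptical:

```python
def payback_years(vehicle_price, annual_gross_profit):
    """Years needed for gross ride income to recoup the purchase price.

    Ignores financing, maintenance, insurance, and depreciation,
    so the real figure would be longer.
    """
    return vehicle_price / annual_gross_profit

# Figures quoted above: $150,000-175,000 price, $30,000/year gross profit.
for price in (150_000, 175_000):
    print(f"${price:,}: {payback_years(price, 30_000):.1f} years to break even")
```

Even under Musk’s optimistic $30,000-a-year figure, an owner would wait five to six years just to recover the purchase price.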

Developing new insurance models for self-driving cars will be one of Tesla’s serious challenges

Developing suitable insurance models for self-driving cars will also be one of Tesla’s serious challenges, because insurers must be able to correctly assess the risks and likely costs of robotaxis. Tesla will therefore need to work with insurance companies from several angles toward a comprehensive plan that satisfies both customers and insurers.

Beyond technology and regulation, Tesla must earn the public’s trust in its new series of fully automatic, driverless cars. That will require advertising campaigns and extensive training programs to familiarize consumers with the company’s technology and ease end users’ concerns.

The status of the project in 2024 and the concern of shareholders

Tesla Cybercab / Cybercab in the city

Tesla

In 2024, Elon Musk postponed the unveiling of the robotaxis to August 8 and then to October 10. In April, he told Tesla’s investors, frustrated by the cars’ uncertain progress:

All the cars that Tesla produces have all the necessary hardware and computing for fully autonomous driving. I’ll say it again: all Tesla cars currently in production have all the prerequisites for autonomous driving. All you have to do is improve the software.

He also said that it does not matter if these cars are less safe than new human-driven cars, because Tesla is raising the average level of road safety. A few weeks later, he released another video summarizing Tesla’s goals in three steps:

  • Completing the technological features and capabilities of fully autonomous vehicles
  • Improving car technology to the point where people can ride driverless cars without any worries
  • Convincing regulators that the previous point is true

While other self-driving companies go from city to city to obtain the necessary permits, expanding their operating areas by proving the safety of their products, NBC recently revealed in a report that Tesla has not even obtained a license to test these cars in California and Nevada, the states where it employs the most people.

Tesla has not yet received permission to test robotaxis in the US

In July, Musk told investors that anyone who does not believe in the efficiency and value of robotaxis should not be a Tesla investor. Meanwhile, the California Department of Motor Vehicles recently filed a lawsuit against Tesla, accusing the company of falsely advertising Autopilot and fully automated systems.

Besides detailing the monthly fees and upfront payments for the fully autonomous features, the case also notes that both systems still require a driver behind the wheel, controlling the vehicle’s steering and braking.

The unveiling of the Cybercab in October 2024 seemed to ease Tesla shareholders’ minds somewhat, but on the night of the company’s big event, some of them still voiced concern to the media about Musk’s uncertain timelines.

What do experts and critics say?

Some critics say that Elon Musk’s robotaxi cannot possibly be produced and released on his schedule. Pointing to Waymo vehicles, which make 50,000 road trips every week, this group finds Tesla’s silence on requests for technical information about its vehicles unacceptable. In their view, Musk simply keeps making vague promises about the car.

In response to critics, Elon Musk repeats one line: Tesla is fundamentally an artificial intelligence and robotics company, not a traditional automaker. So why does he not try to clarify the obstacles standing between Tesla and the realization of this old idea?

Technology researchers, meanwhile, point out that Tesla’s systems have not reached Level 5 autonomy, that is, operation requiring no human control at all. MIT Technology Review writes:

After years of research and testing of robotic taxis by various companies on the road, mass production of these cars still has heavy risks. To date, these vehicles only travel within precise, pre-defined geographic boundaries, and although some do not have a human operator in the front seat, they still require remote operators to take over in an emergency.

R. Amanarayan Vasudevan, associate professor of robotics and mechanical engineering at the University of Michigan, also says:

These systems still rely on remote human supervision for safe operation, which is why we call them automated rather than autonomous. And this version of self-driving is much more expensive than traditional taxis.

Tesla is burning investors’ money directly to produce robotaxis, so it is natural that the company faces mounting pressure for reliable results. Costs and potential revenues will only balance once more robotaxis hit the roads and can truly compete with ride-hailing services like Uber.

Despite numerous legal, infrastructural, and social challenges, the unveiling of the Cybercab and Robovan puts not only self-driving technology but the entire transportation industry on the threshold of a huge transformation. Tesla’s robotaxis could even upend the traditional taxi service model. But how long will the transition from this unveiling to an actual launch take?
