Review of Samsung Galaxy S23 Ultra, S23 Plus and S23


Introduction

In this article, we're going to look at the Samsung Galaxy S23 Ultra, S23 Plus and S23. The Samsung Galaxy S23 Ultra sits alongside the Galaxy S23 Plus and Galaxy S23, and we can finally put the news and rumors behind us and review the three phones for real.

As before, there are key similarities between the three models. The biggest is the chipset – all three phones use the Snapdragon 8 Gen 2, in a version tuned specifically for these devices. Samsung and Qualcomm are calling it the Snapdragon 8 Gen 2 Mobile Platform for Galaxy, and it's essentially a higher-clocked version of the familiar new chip.

The Ultra still has the upper hand in performance thanks to its 12GB RAM option, while the S23/S23+ max out at 8GB.

The Galaxy S23 Ultra looks familiar, but if you think it's only a token upgrade over its predecessor, you'd be wrong. If we accept that camera, performance and battery life are the key areas where a phone can improve, the Galaxy S23 Ultra is a big upgrade over the Galaxy S22 Ultra.

The Snapdragon 8 Gen 2 brings moderate to large improvements in performance. Judging by early benchmarks, the Cortex-X3 inside is 10% to 20% faster than the Cortex-X2 inside the Exynos 2200 in single-core tests. Multi-core scores for the new chip could be between 30 and 60 percent better, depending on the benchmark. The GPU is also much faster. Samsung declined to name percentages, but games like PUBG Mobile run at 60fps on the Exynos 2200-powered Galaxy S22 Ultra, while Snapdragon 8 Gen 2 devices support gameplay at up to 120fps.

Beyond the synthetic benchmark advantages, the new chip is much more efficient and less prone to thermal throttling. Based on specs alone, it's about 40 percent more efficient than its predecessor, which itself was more efficient than the Exynos 2200 that powered most of the Galaxy S22 series globally. This means the Galaxy S23 Ultra will likely squeeze more endurance out of the same 5,000mAh cell – an effective battery upgrade without a capacity increase.

Finally, the new 200-megapixel primary camera could prove a generational upgrade over the old 108-megapixel camera. The new sensor can produce both 12MP and 50MP photos depending on the scenario. And you can get that 50MP through the Expert RAW app, giving customers a richer, sharper base. Photography and videography at night have also been improved.

Technical specifications of the Samsung Galaxy S23 Ultra at a glance

Body: 163.4×78.1×8.9mm, 233g; front glass (Gorilla Glass Victus 2), back glass (Gorilla Glass Victus 2); IP68 dust/water resistant (up to 1.5m for 30 minutes), Armor Aluminum frame with stronger drop and scratch resistance (advertised); stylus, 2.8ms latency (Bluetooth integration, accelerometer, gyroscope).

Screen: 6.80 inches Dynamic AMOLED 2X, 120Hz, HDR10+, 1750 nits (peak), 1440x3088px resolution, 19.3:9 aspect ratio, 501ppi; The display is always on.

Chipset: Qualcomm SM8550-AC Snapdragon 8 Gen 2 (4 nm): octa-core (1x 3.36 GHz Cortex-X3 & 2x 2.8 GHz Cortex-A715 & 2x 2.8 GHz Cortex-A710 & 3x 2.0 GHz Cortex-A510); Adreno 740.

Memory: 256GB 8GB RAM, 256GB 12GB RAM, 512GB 12GB RAM, 1TB 12GB RAM. UFS 4.0.

OS/Software: Android 13, One UI 5.1.

Rear camera: Wide (main): 200MP, f/1.7, 23mm, 1/1.3″, 0.6µm, PDAF, Laser AF, OIS; Telephoto: 10MP, f/2.4, 70mm, 1/3.52″, 1.12µm, dual-pixel PDAF, OIS, 3x optical zoom; Telephoto: 10MP, f/4.9, 230mm, 1/3.52″, 1.12µm, dual-pixel PDAF, OIS, 10x optical zoom; Ultra-wide: 12MP, f/2.2, 13mm, 120°, 1/2.55″, 1.4µm, dual-pixel PDAF.

Front camera: 12 MP, f/2.2, 25 mm (wide), PDAF.

Video recording: Rear camera: 8K@24/30fps, 4K@30/60fps, 1080p@30/60/240fps, 720p@960fps, HDR10+, Stereo sound recording, EIS gyroscope. Front camera: 4K@30/60fps, 1080p@30fps.

Battery: 5000 mAh; 45W wired, PD3.0, 10W wireless (Qi/PMA), 4.5W reverse wireless.

Other specifications: fingerprint scanner (under the display, ultrasonic); NFC; stereo speakers; Samsung DeX, Samsung Wireless DeX (desktop experience support), Bixby commands and natural language commands, Samsung Pay (Visa, MasterCard approved), Ultra Wideband (UWB) support.
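As a quick sanity check, the pixel density and optical zoom factors quoted in the spec sheet above follow directly from the other numbers in it. A minimal sketch (ours, not from the review):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # Pixel density = diagonal resolution (px) / diagonal size (inches)
    return math.hypot(width_px, height_px) / diagonal_in

# S23 Ultra: 1440x3088 on a 6.8" panel -> ~501 ppi, as quoted above
print(round(ppi(1440, 3088, 6.8)))  # 501

# Optical zoom = telephoto focal length / main focal length (35mm-equivalent)
print(round(70 / 23, 1), round(230 / 23, 1))  # 3.0 10.0
```

The same formula reproduces the 390ppi and 422ppi figures quoted for the S23+ and S23 below.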

Compared to the Ultra, the Galaxy S23 and Galaxy S23+ are subtler upgrades. The biggest change is the move to the Snapdragon 8 Gen 2, which should bring better overall performance and battery life. The batteries in both models have also grown by 200 mAh, totaling 3,900 mAh in the S23 and 4,700 mAh in the S23+.
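For context, the outgoing Galaxy S22 and S22+ shipped with 3,700 mAh and 4,500 mAh batteries, so the arithmetic checks out (a trivial sketch; the S22-generation figures are ours, not stated in the review):

```python
# Previous-generation capacities in mAh (Galaxy S22 / S22+)
s22, s22_plus = 3700, 4500
bump = 200  # increase quoted in the review

s23, s23_plus = s22 + bump, s22_plus + bump
print(s23, s23_plus)  # 3900 4700 -> matches the spec sheets below
```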

Specifications of Samsung Galaxy S23 Plus at a glance:

Body:  157.8×76.2×7.6mm, 195g; Glass front (Gorilla Glass Victus 2), glass back (Gorilla Glass Victus 2), aluminum frame; IP68 dust/water resistant (up to 1.5m for 30 minutes), Armor aluminum frame with stronger drop and scratch resistance (advertised).

Screen: 6.60 inches Dynamic AMOLED 2X, 120Hz, HDR10+, resolution 1080x2340px, aspect ratio 19.5:9, 390ppi; The display is always on.

Chipset: Qualcomm SM8550 Snapdragon 8 Gen 2 (4 nm): octa-core (1x 3.36 GHz Cortex-X3 & 2x 2.8 GHz Cortex-A715 & 2x 2.8 GHz Cortex-A710 & 3x 2.0 GHz Cortex-A510); Adreno 740.

Memory: 256 GB 8 GB RAM, 512 GB 8 GB RAM; UFS 4.0.

OS/Software: Android 13, One UI 5.1.

Rear camera: Wide (main): 50 MP, f/1.8, 23 mm, 1/1.56″, 1.0 µm, dual-pixel PDAF, OIS; Telephoto: 10MP, f/2.4, 70mm, 1/3.94″, 1.0µm, PDAF, 3x optical zoom; Ultra-wide: 12MP, f/2.2, 13mm, 120˚, 1/2.55″, 1.4µm, Super Steady video.

Front camera: 12 MP, f/2.2, 25 mm (wide), PDAF.

Video recording: Rear camera: 8K@24/30fps, 4K@30/60fps, 1080p@30/60/240fps, 720p@960fps, HDR10+, Stereo sound recording, EIS gyroscope. Front camera: 4K@30/60fps, 1080p@30fps.

Battery: 4700mAh; 45W wired, PD3.0, 10W wireless (Qi/PMA), 4.5W reverse wireless.

Miscellaneous: fingerprint reader (under the display, ultrasonic); NFC; stereo speakers; Samsung DeX, Samsung Wireless DeX (desktop experience support), Bixby commands and natural language commands, Samsung Pay (Visa, MasterCard certified).

Samsung Galaxy S23 specifications at a glance:

Body: 146.3 x 70.9 x 7.6mm, 167g; Glass front (Gorilla Glass Victus 2), glass back (Gorilla Glass Victus 2), aluminum frame; IP68 dust/water resistant (up to 1.5m for 30 minutes), Armor aluminum frame with stronger drop and scratch resistance (advertised).

Screen: 6.10 inches Dynamic AMOLED 2X, 120Hz, HDR10+, resolution 1080x2340px, aspect ratio 19.5:9, 422ppi; The display is always on.

Chipset: Qualcomm SM8550 Snapdragon 8 Gen 2 (4 nm): octa-core (1x 3.36 GHz Cortex-X3 & 2x 2.8 GHz Cortex-A715 & 2x 2.8 GHz Cortex-A710 & 3x 2.0 GHz Cortex-A510); Adreno 740.

Memory: 128 GB 8 GB RAM, 256 GB 8 GB RAM; UFS.

OS/Software: Android 13, One UI 5.1.

Rear camera: Wide (main): 50 MP, f/1.8, 23 mm, 1/1.56″, 1.0 µm, dual-pixel PDAF, OIS; Telephoto: 10MP, f/2.4, 70mm, 1/3.94″, 1.0µm, PDAF, 3x optical zoom; Ultra-wide: 12MP, f/2.2, 13mm, 120˚, 1/2.55″, 1.4µm, Super Steady video.

Front camera: 12 MP, f/2.2, 25 mm (wide), PDAF.

Video recording: Rear camera: 8K@24/30fps, 4K@30/60fps, 1080p@30/60/240fps, 720p@960fps, HDR10+, Stereo sound recording, EIS gyroscope. Front camera: 4K@30/60fps, 1080p@30fps.

Battery: 3900mAh; 25W wired, PD3.0, 10W wireless (Qi/PMA), 4.5W reverse wireless.

Other specifications: fingerprint reader (under the display, ultrasonic); NFC; stereo speakers; Samsung DeX, Samsung Wireless DeX (supports desktop experience), Bixby commands and natural language commands, Samsung Pay (Visa, MasterCard certified)

The rest of the Galaxy S23/S23+ specifications match their previous-generation counterparts. You get a mostly unchanged triple camera setup – wide, 3x telephoto and ultra-wide on the back – and a 12MP selfie shooter shared between all three models.

The 6.1-inch and 6.6-inch 1080x2340px Dynamic AMOLED 2X 120Hz displays are also carried over directly from last year’s Galaxy S22 and S22+.

Samsung decided to remove the contoured camera island on the Galaxy S23 and S23+. This pairs well with the more flat and minimalist look of the Galaxy S23 Ultra. Samsung calls it linear design across all models. Some may find it too simple, even boring.

All three models are available in four colors – black, cream, green and lavender. All three are also covered in the new Victus 2 Gorilla Glass – the first devices to use the material.

That's it for the outline of the new Galaxy S23 series. It paints a picture of continuity with 2022 – the Galaxy S23 and Galaxy S23+ aren't a compelling upgrade for previous-generation owners, while the Ultra model could be, depending on your needs. There aren't any additions as big as last year's inclusion of the S Pen, but sometimes refinement is just as important as innovation.

In the next two pages we will look at the hardware details.


Reviewing the design and specifications of the Samsung Galaxy S23 Ultra phone

When you look at Samsung’s S23 series, the Ultra immediately stands out. It is taller and wider than its counterparts, and its design is more complex than the other two models. For starters, both the front and back glass panels slope towards the thin aluminum frame.

The light curve on either side of the thin frame gives the Galaxy S23 Ultra a high-precision quality that the Galaxy S23 and S23+ lack with their flat bezels and flat glass panels.

However, this design is not new. The Galaxy S23 Ultra is undeniably similar to its predecessor. You probably won't be able to tell the Phantom Black Galaxy S23 Ultra from the Phantom Black Galaxy S22 Ultra. Even the cream model looks similar to last year's white model under certain lighting. Luckily, there are green and lavender options this year to help your potential new phone stand out.

The back panel is matte rather than glossy, just like the Galaxy S22 Ultra, meaning the glass is more smudge-resistant and easier to clean.

Of course, keeping the same design might not be the worst thing. Even a year later, it’s undeniably premium and advanced, and some fans of the Galaxy S22 Ultra’s look and feel will be happy that its successor looks more of the same.

The design is about more than aesthetics – the squarer, almost notebook-like feel of the Galaxy S23 Ultra comes straight from its predecessor, which itself took its cue from the last Galaxy Note – it's a body designed to feel comfortable to write on.

A familiar design

The S Pen looks the same as last year. It fits nicely inside the Galaxy S23 Ultra, and as before, only the clicker matches the body.

Samsung hasn’t announced any significant improvements to the S Pen experience, so we expect the same excellent 2.8ms latency for the stylus and 4,096 levels of pressure on the screen digitizer. It’s the best pen experience on the phone, and also the most comfortable thanks to its physical and software implementation.

The same applies to the display. The panel is apparently the same as last year's, which is still widely regarded as the best in the industry – a genuinely advanced panel, even in 2023. On paper, we get the same 6.8-inch 1440x3088px Dynamic AMOLED 2X with a variable refresh rate of up to 120Hz (via an LTPO 2.0 controller) and a maximum brightness of 1750 nits.

Samsung has made improvements to the Vision Booster – it can now adjust the display’s color tone and contrast in three different lighting conditions, meaning the panel will be optimal for almost any scenario.

Just like the S22 Ultra, the Galaxy S23 Ultra defaults to FHD+ and Vivid color mode, but can be maxed out at WQHD+ settings with little or no cost to battery life.

Samsung says it has reduced the curvature of the display, resulting in a larger flat surface on the Galaxy S23 Ultra’s screen than its predecessor. Seasoned S22 Ultra owners may notice a difference, though it’s not immediately apparent even when viewed side-by-side.

Display curve

In fact, all controls on the Galaxy S23 Ultra are similar in placement and feel to the Galaxy S22 Ultra.

Again, that’s not a bad thing – the Galaxy S22 Ultra has excellent ergonomics for its size, and fans of its design and layout will appreciate the familiarity of its successor. Not a bad incentive for potential upgraders who enjoy continuity.

Controls

Samsung has extended its eco-friendly approach to the Galaxy S23 Ultra. Like its predecessor, it ships in a 100% recycled box and is equipped with parts made from recycled materials.

Samsung says the Galaxy S23 Ultra has 12 internal and external components that use recycled plastic from discarded fishing nets, water bottles and other PET bottles. That's twice as many as in the Galaxy S22 Ultra.

The Galaxy S23 Ultra also uses recycled aluminum and recycled glass in the side and volume keys, the inner cover of the S Pen and the SIM tray, among other parts. Samsung says the Galaxy S23 series will prevent more than 15 tons of plastic from entering the world's oceans.

Here’s a look at some of the official Galaxy S23 Ultra cases. There’s a Smart View protective case with built-in NFC (bottom left), as well as leather and silicone cases in a variety of colors. Samsung has partnered with Adidas for some special edition cases.

Samsung cases

Handling of the Samsung Galaxy S23 Ultra phone

The Samsung Galaxy S23 Ultra is unapologetically a big phone. It weighs 233g, but the slim sides make it comfortable to hold, while the flat top and bottom give the S23 Ultra a secure feel in the hand – good for both portrait use and watching videos.

You can't quite reach the far corners of the screen with a thumb, but Samsung includes a one-handed mode to help. You can also easily pull down notifications by swiping down in the center of the screen.

Meanwhile, reaching the power button and volume keys is no problem – Samsung has had a few years to perfect the usability of this form factor, and it usually does it right. The ultrasonic fingerprint scanner is also perfectly centered on the lower half of the display and is easy to get used to for newcomers.

The matte back panel feels great to the touch and looks beautiful, but it does make the Galaxy S23 Ultra a little slippery.

The Galaxy S23 Ultra keeps its footprint as small as possible around that big screen while remaining usable. It's a very well-balanced big phone, with a software experience thought out to match the large display.

Snapdragon 8 Gen 2 Mobile Platform for Galaxy

This year, Samsung is launching its Galaxy S23 series exclusively with Qualcomm's Snapdragon 8 Gen 2, ditching its own Exynos chipset for the first time since introducing it with the Galaxy S II in 2011. However, Samsung isn't using the regular SD 8 Gen 2, but a special, higher-clocked version it's calling the Snapdragon 8 Gen 2 Mobile Platform for Galaxy.

The custom chip boosts the clock speed of the fastest Cortex-X3 core to 3.36GHz – up from 3.2GHz in other Snapdragon 8 Gen 2 phones.
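In relative terms the bump is modest – a quick calculation (ours, not from the article):

```python
standard, for_galaxy = 3.2, 3.36  # prime Cortex-X3 clocks in GHz
uplift_pct = (for_galaxy - standard) / standard * 100
print(round(uplift_pct, 1))  # 5.0 -> about a 5% higher peak clock
```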

The chip should use its inherent efficiency to make the most of the 5,000 mAh battery while boosting gaming performance over the Galaxy S22 Ultra.

The Snapdragon 8 Gen 2 also gets a lot of camera improvements compared to last year’s Galaxy S22 Ultra, but it all starts with the new camera sensor.

200 megapixel ISOCELL HP2 sensor

The Samsung Galaxy S23 Ultra features Samsung's new ISOCELL HP2 200MP image sensor. At 1/1.3″, it's slightly larger than last year's 1/1.33″ 108MP sensor, though its native pixels are slightly smaller at 0.6µm compared to the 108MP sensor's 0.8µm.

This is where the new imager is more advanced than its predecessor. The camera can bin its 200 megapixels 16-to-1 for 12.5MP output photos (most likely rounded to 12MP) with maximum dynamic range and minimal noise. But the sensor can also combine its pixels at a 4-to-1 ratio, giving you a 50MP image with even more detail.
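The binning arithmetic is straightforward: 16-to-1 binning groups pixels 4x4, and 4-to-1 groups them 2x2, with the effective pixel pitch scaling with the bin dimension. A small sketch using the sensor figures above:

```python
sensor_mp = 200          # native resolution of the ISOCELL HP2
native_pitch_um = 0.6    # native pixel size in micrometers

# 16-to-1 binning (4x4 groups): 12.5MP output with 2.4µm effective pixels
print(sensor_mp / 16, native_pitch_um * 4)  # 12.5 2.4

# 4-to-1 binning (2x2 groups): 50MP output with 1.2µm effective pixels
print(sensor_mp / 4, native_pitch_um * 2)   # 50.0 1.2
```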

You can also shoot at the full 200MP, but that’s likely to produce an image without the inherent benefits of multi-frame processing, giving you limited dynamic range and more noise.

The new 200MP sensor features what Samsung calls Advanced Super Quad Pixel Autofocus – it uses all of its pixels to detect phase differences both horizontally and vertically to achieve focus.

 

Expert RAW is now integrated into the Galaxy S23 Ultra's main camera app and is smarter than before. You can take enhanced RAW photos with multi-frame processing at up to 50MP resolution. This is similar to what the iPhone 14 Pros offer with Apple ProRAW at 48MP, and it makes a great base for further photo editing. The new higher-resolution RAW capture is especially useful for landscapes.

Samsung has made improvements in a number of areas of the Galaxy S23 Ultra’s camera. Thanks to the higher resolution imager, Night Portrait and night videos are improved. Multi-frame processing optimization combined with artificial intelligence has resulted in better noise reduction. Samsung also says it has doubled the OIS angle compared to the older 108MP camera, resulting in more stable photos.

There’s a new Astro Hyperlapse mode that can capture light trails without the need for additional equipment. And while on the subject of video, the new main camera can record 8K video at a maximum rate of 30fps, up from last year’s 24fps, which may be useful for some creators.

 

Samsung decided to keep the other three cameras intact for another year. The combination of 10x periscope telephoto, 3x telephoto and 12MP ultra-wide remains one of the most versatile on the market. And while handheld shooting feels subjectively faster than on the Galaxy S22 Ultra, we'll save our observations for the final review.

The Galaxy S23 Ultra has a new 12-megapixel selfie camera that, alongside the Snapdragon 8 Gen 2 chipset, features Super HDR, which Samsung says it applies to the front-facing camera at 60 frames per second.

It's likely that Samsung has made software adjustments to squeeze better results from the 10x, 3x and 0.6x cameras on the Galaxy S23 Ultra.

Galaxy S23 Ultra vs Galaxy S22 Ultra camera samples

Now that we've had a few more minutes with the new Galaxy, we decided to do a quick camera shootout between the new and old Ultra. We took some photos outside in good lighting and inside in less favorable conditions.

This will be a quick side by side comparison. You’ll have to wait for our in-depth review, where we’ll take a deeper look at all the new Galaxy S23 Ultra cameras.

With that said, let's look at some examples. We captured two scenes at 1x, 3x, and 10x zoom on the Galaxy S23 Ultra to compare with the corresponding cameras on the Galaxy S22 Ultra. The 200MP sensor resolves more detail than the 108MP sensor, even at 12MP output. The Galaxy S23 Ultra's photos look higher-resolution than the S22 Ultra's, giving a sense of greater detail.

3x and 10x images are also noticeably sharper on the Galaxy S23 Ultra. There’s a little more noise, but we’ll gladly accept that in exchange for a higher level of detail.  The Galaxy S23 Ultra retains the fine textural details that its predecessor simply smeared to nothing.

Camera samples – Samsung Galaxy S23 Ultra: 1x • 3x • 10x
Camera samples – Samsung Galaxy S22 Ultra: 1x • 3x • 10x

The next few pictures are of the inside. The image of the sofa is in good light. The image of the untidy shelves is in lower light, while the final image of our studio is in almost complete darkness.

The Galaxy S22 Ultra's images are cleaner but less detailed. The Galaxy S23 Ultra captures a much higher level of detail at the cost of some noise – once again, a trade we'll happily take. It's remarkable how much more detail you get from the new 200MP sensor: notice the Kodak Instamatic 33 lettering – it's almost unreadable on the Galaxy S22 Ultra and perfectly legible on the Galaxy S23 Ultra.

Low-light samples – Samsung Galaxy S23 Ultra

Low-light samples – Samsung Galaxy S22 Ultra

Finally, we took full-resolution 200MP and 50MP shots so you can see what the highest possible resolution offers. After hitting the shutter, the phone needs a few seconds to finish processing each one.

 

Full-resolution samples – Samsung Galaxy S23 Ultra: 200MP • 50MP

Reviewing the design and handling of the Samsung Galaxy S23 Plus and Galaxy S23

In this part of the review, we look at the design of the Samsung Galaxy S23 Plus and Galaxy S23. They are the simpler phones of the new S23 series, but that may not be a bad thing. For one, they're smaller and fit squarely into sensible-size territory. At 6.6 inches, the S23+ is on the smaller end of large phones, while at 6.1 inches, the S23 is on the larger end of small phones.

The pair don't have the same sense of precision engineering as the Galaxy S23 Ultra, but they're still very well made. Gone are the days when the smaller S-series phones used plastic instead of glass – this year, both the Galaxy S23 and S23 Plus get the same Gorilla Glass Victus 2 as their Ultra counterpart.

This year, Samsung removed the contoured camera island from the back of the Galaxy S23 and S23+, which the company says streamlines the design across all three models. All three phones also share the same black, cream, green and lavender color schemes. People who like a simple, clean design will love the S23 and S23+.

Perhaps Samsung should go even further and make the Galaxy S24 and S24+ (if the series is kept intact) square like the Ultra – again like the Galaxy Note 10 and Galaxy Note 10 Plus pair.

As a reminder, the Galaxy S23 Ultra has flat top and bottom edges and thinner side bezels, while the Galaxy S23 and S23+ have an evenly wide frame that curves slightly to meet the front and rear glass panels.

Like the Ultra, the back panels of the S23 and S23+ are matte instead of glossy. We wholeheartedly agree with Samsung's choice of finish – it's better at keeping smudges at bay and looks better, too.

On a less positive note, the new Galaxy S23+ could be mistaken for the Galaxy A13 from a distance – they have the same 6.6-inch display and a seamless back panel with only the camera lenses sticking out at the top.

Under the hood, the pair come with the same premium Snapdragon 8 Gen 2 chip tuned for the Galaxy. It's a delightfully snappy experience – a step up from the old S22 series and leagues ahead of the S21. The phones handle everything quickly, and the fast 120Hz panels keep everything feeling equally fluid.

We expect the 200 mAh increase in battery capacity to be noticeable, especially in conjunction with the more efficient chipset.

Elsewhere, the upgrade is less noticeable. You get essentially the same display and camera configuration as last year's S22 and S22+. Those were already good, but potential upgraders will need convincing.

Summary

So let's try to summarize this review of the Samsung Galaxy S23 Ultra, S23 Plus and S23. Say you own a Galaxy S22 or Galaxy S22 Plus and you're wondering whether to upgrade. The pre-booking campaign didn't offer enough of a discount to really make a difference.

The minor specification differences aren't enough to entice an upgrade, either. Disgruntled owners of the smaller Galaxy S22 might take refuge in the Galaxy S23's improved battery life, but that's likely an edge case.

The outlook for potential buyers of the Galaxy S23 Ultra is not so bad. Whether you own a Galaxy S22 Ultra or you’re in the market for the best possible smartphone, the S23 Ultra has features to impress.

We expect the Snapdragon 8 Gen 2 to make a tangible difference in performance and battery life, and the new 200MP camera has the potential to be much better than the 108MP camera.

However, before you buy any of the S23 phones, read our full reviews of all three. So stay tuned!

Source: GSMARENA.COM

iPhone 16 Pro Review
iPhone 16 Pro
The iPhone 16 Pro is one of the least changed iPhones of the last few years, and at the same time, it offers the same reliable experience as before.


We usually know Apple as a company that refuses to ship half-baked products or software features, preferring either not to enter a new field at all or to enter with a product that gives users a reliable, polished experience. By that standard, the iPhone 16 Pro is perhaps the most unfinished product in Apple's history; I will explain below.

Table of contents
  • iPhone 16 Pro video review
  • Camera and Camera Control
  • Ultrawide camera
  • Main camera
  • Telephoto camera
  • Portrait photography
  • Selfie camera
  • Performance and battery
  • Design and build quality
  • Display and speaker
  • Summary and comparison with competitors

Apple is marketing the iPhone 16 Pro around Apple Intelligence and its AI capabilities. But to experience even part of Apple's artificial intelligence, you have to wait for the official release of iOS 18.1 in late October, more than a month after the iPhone 16's launch. There's no sign yet of the new Siri's attractive animation, either – the animation that inspired Apple to name the iPhone 16 event "It's Glowtime".

Dimensions of iPhone 16 Pro in hand

For those who have been out of the loop since the early months of 2024: Apple Intelligence is Apple's answer to Google's Gemini, Samsung's Galaxy AI, and even Microsoft's Copilot. With Apple Intelligence, Siri is supposed to finally become what was promised 13 years ago at its unveiling – a full-fledged digital assistant that converses with the user in natural language. Beyond the improved Siri, capabilities such as AI image and emoji generation, writing tools and photo-editing features are also coming to iOS.

Note that we have to wait for iOS 18.4 to experience Apple Intelligence with all its features; that update arrives in the early months of 2025. The iPhone 16 ships with plain iOS 18 by default, so with such a delay it's no surprise that Apple lags behind its competitors – and the iPhone 16 Pro is not a complete device at launch, either.

Camera and Camera Control

Now that Apple Intelligence is out of the picture – and since, per Zoomit's policy, we don't review a device based on the promise of future updates – let's leave AI out of the iPhone 16 Pro review headlines and start with the part that has changed the most: the camera, or rather, the camera button.

Control camera button on iPhone 16 Pro frame
Working with iPhone 16 Pro camera control
iPhone 16 Pro camera control menu
iPhone 16 Pro cameras

While it was said that Apple is working on removing the iPhone's physical buttons, this year, surprisingly, another button was added to the iPhone 16 family – although Apple insists on calling it Camera Control. Unfortunately, Camera Control feels crude and incomplete both in implementation and capabilities; I will explain below.

As usual with Apple, Camera Control hides complex engineering behind a simple appearance. Its surface is made of sapphire and is surrounded by a stainless-steel ring color-matched to the body. Beneath that surface sit a precise force sensor with haptic feedback and a touch sensor, so Camera Control can simulate the half-press shutter of DSLR cameras and recognize finger swipes across the button's surface.

Camera menu on iPhone 16 Pro

Apple says that by the end of this year, with a software update, it will add a feature to the camera control that will allow the user to focus on the subject by half-pressing the button and record the photo by fully pressing it, just like professional cameras and Xperia phones. On the other hand, after the release of Apple Intelligence, the user will have access to Siri’s image search function with the camera control.

Camera Control: an interesting idea, but very immature

Currently, with Camera Control you can take photos, record videos, or change camera parameters. Pressing the button once launches the camera app; pressing it again takes a photo, while holding it starts a video that stops as soon as you lift your finger.

In the camera interface, a gentle double press without lifting your finger brings up the photography parameters. You can switch between options by swiping across the button's surface and enter a given parameter's settings with another gentle press. The available parameters include exposure, depth of field, zoom, switching between cameras, Style, and Tone – more on the last two below.

Camera control in the camera viewfinder
Control camera settings
Control camera settings 2

To be honest, for me and many of my colleagues at Zoomit, simply touching the screen was much easier and more straightforward than navigating the camera menu with Camera Control. Even after 10 days with the iPhone 16 Pro, reaching the photography parameters and swiping to adjust them remains fiddly and time-consuming. For example, it often happens that while swiping to adjust a parameter such as Tone, the phone decides to exit the Tone settings and start moving between parameters instead.

Another problem with Camera Control is the stiffness of its button: pressing it to take a picture shakes the phone, an issue that can blur details in low-light photos.

Apart from the stiffness of the button, the placement of Camera Control is not optimal in my opinion. When using the phone in portrait orientation, especially with the Pro Max model, you are likely to struggle and need both hands; and if you hold the phone in your left hand, your fingers may sometimes press the button accidentally and disrupt what you were doing.

Applying changes to the color and lighting of iPhone 16 Pro photos

If Apple fixes the problems and bugs of Camera Control, it could prove useful in two cases: first, while zooming, because it allows more precise control over the zoom level; and second, for faster access to Apple's new Style and Tone camera settings, which are very useful for photography enthusiasts. Let me explain why.

iPhones have long had a photographic style of their own: colors close to reality with a slight tendency toward warmth, and no trace of oversaturated, high-contrast looks. That said, Apple introduced the Photographic Styles feature with the iPhone 13 to satisfy fans of high-contrast, Google Pixel-style photography by offering different shooting styles.

Battle of the flagships: comparing the iPhone 16 Pro camera with the Pixel 9 Pro and Galaxy S24 Ultra [survey results]

iPhone 16 Pro, Pixel 9 Pro XL, or Galaxy S24 Ultra: which phone has the best camera? The result will surprise you.

With the iPhone 15, Apple adopted a policy that was not very popular with the public. In short, to use the full capacity of the powerful Photonic Engine to preserve detail in shadows and highlights, the iPhone goes a little too far with HDR, to the point where colors and shadows lose their punch and no longer carry the dramatic feel they once did.

The bad news is that the iPhone 16 Pro follows Apple's previous policy and renders shadows weakly, so to speak. The good news is that with the evolved version of Photographic Styles, you can breathe new life into shadows and colors: the new version lets you change how skin tones and shadows are processed, and you can even change the photography style after taking the photo.

Discover your photography style with the iPhone 16 Pro

Before we see the effect of Photographic Styles on photos, let's go over their different modes. iPhone photography styles are now divided into two general categories, Undertone and Mood; apart from the standard photography mode, 5 Undertone styles and 9 Mood styles are available. Undertone styles primarily adjust the skin tone of human subjects, while Mood styles offer functionality similar to Instagram filters.

Undertone styles are as follows:

  • Standard: iPhone’s default photography mode
  • Amber: Intensifies the amber tone in photos
  • Gold: Intensifies the golden tone in photos
  • Rose Gold: Intensifies the pink-gold tone in photos
  • Neutral: Neutralizes warm undertones in photos
  • Cool Rose: Intensifies cool-toned color in photos
Kausar Nikomanesh, Zomit writer in the editorial office - Standard iPhone 16 Pro photography style
Undertone: Standard
Kausar Nikomanesh, Zomit writer in the editorial office - Amber iPhone 16 Pro photography style
Undertone: Amber
Kausar Nikomanesh, Zomit writer in the editorial office - Gold iPhone 16 Pro photography style
Undertone: Gold
Kausar Nikomanesh, Zomit writer in the editorial office - Rose Gold iPhone 16 Pro photography style
Undertone: Rose Gold
Kausar Nikomanesh, Zomit writer in the editorial office - Neutral iPhone 16 Pro photography style
Undertone: Neutral
Kausar Nikomanesh, Zomit writer in the editorial office - Cool Rose iPhone 16 Pro photography style
Undertone: Cool Rose

Mood styles are as follows:

  • Vibrant
  • Natural
  • Luminous
  • Dramatic
  • Quiet
  • Cozy
  • Ethereal
  • Muted B&W
  • Stark B&W
Kausar Nikomanesh, Zomit writer in the editorial office - Vibrant iPhone 16 Pro photography style
Mood: Vibrant
Kausar Nikomanesh, Zomit writer in the editorial office - Natural iPhone 16 Pro photography style
Mood: Natural
Kausar Nikomanesh, Zomit writer in the editorial office - Luminous iPhone 16 Pro photography style
Mood: Luminous
Kausar Nikomanesh, Zomit writer in the editorial office - Dramatic iPhone 16 Pro photography style
Mood: Dramatic
Kausar Nikomanesh, Zomit writer in the editorial office - Quiet iPhone 16 Pro photography style
Mood: Quiet
Kausar Nikomanesh, Zomit writer in the editorial office - Cozy iPhone 16 Pro photography style
Mood: Cozy
Kausar Nikomanesh, Zomit writer in the editorial office - Ethereal iPhone 16 Pro photography style
Mood: Ethereal
Kausar Nikomanesh, Zomit writer in the editorial office - Muted B&W iPhone 16 Pro photography style
Mood: Muted B&W
Kausar Nikomanesh, Zomit writer in the editorial office - Stark B&W iPhone 16 Pro photography style
Mood: Stark B&W

All styles can be customized with three new parameters: Palette, Color, and Tone. Palette changes the range of applied colors, Color adjusts the intensity of saturation, and, most importantly, Tone changes the intensity of shadows and contrast and can bring freshness back to iPhone photos.

While the Palette parameter is adjusted with a simple slider, you have to use a control pad to adjust Color and Tone. Working with this pad is tedious: to change either parameter you have to put your finger on the pad, and since you get no feedback about your finger's exact position, it is hard to change one parameter while keeping the other constant.

The iPhone 16 Pro photography experience is slightly different from the previous generation

If, like me, you don't feel like fiddling with the control pad and slider, you can access the styles or the Tone parameter directly through the Camera Control button; and believe me, you can increase the appeal of iPhone photos simply by changing the Tone. For example, compare the following two photos:

Standard mode with Tone -7
Standard mode with Tone 0 (default)

As you can see in the photos above, without changing styles and simply by reducing the Tone value, the shadows have returned to the photo, and the black of Mohammad Hossein's t-shirt reads better than before thanks to the improved contrast.

Ultrawide camera

Leaving photography styles aside, the iPhone 16 Pro camera hardware itself has undergone several major changes, the most important of which is the upgrade of the ultrawide camera sensor from 12 to 48 megapixels. The new sensor uses a Quad-Bayer filter and 0.7-micrometer pixels, so its physical dimensions appear to be no different from the previous generation's 1/2.55-inch sensor with 1.4-micrometer pixels.

Wide camera (main)

  • Sensor: 48-megapixel Sony IMX903, 1/1.28-inch, 1.22 µm pixels, phase-detection autofocus, sensor-shift optical stabilization
  • Lens: 24 mm, f/1.78 aperture
  • Capabilities: 12-, 24-, and 48-megapixel photography; 4K120 video recording; Dolby Vision, ProRes, and Log; portrait photography

Telephoto camera

  • Sensor: 12-megapixel Sony IMX913, 1/3.06-inch, 1.12 µm pixels, Dual Pixel phase-detection autofocus, sensor-shift optical stabilization
  • Lens: 120 mm, f/2.8 aperture
  • Capabilities: 5x optical zoom; 12-megapixel photography; 4K60 video recording; Dolby Vision, ProRes, and Log; portrait photography

Ultrawide camera

  • Sensor: 48 megapixels, 1/2.55-inch, 0.7 µm pixels, phase-detection autofocus
  • Lens: 13 mm, f/2.2 aperture
  • Capabilities: 12- and 48-megapixel photography; 4K60 video recording; Dolby Vision, ProRes, and Log; macro photography

Selfie camera

  • Sensor: 12-megapixel Sony IMX714, 1/3.6-inch, 1.0 µm pixels, phase-detection autofocus
  • Lens: 23 mm, f/1.9 aperture
  • Capabilities: 12-megapixel photography; 4K60 video recording; Dolby Vision, ProRes, and Log

To capture enough light per pixel, the ultrawide camera by default produces 12-megapixel photos by binning pixels 4-to-1, yielding effective 1.4-micrometer pixels; but with the HEIF Max photography format, it is possible to shoot at the full 48 megapixels, giving the user more freedom to crop into photos.
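As a quick sanity check on the binning arithmetic above, here is a minimal sketch; the only assumption is the standard 2x2 Quad-Bayer grouping:

```python
# Quad-Bayer binning: each 2x2 block of small pixels is merged into one
# larger pixel, quartering the pixel count and doubling the pixel pitch.
def bin_pixels(megapixels: float, pitch_um: float, factor: int = 2):
    """Return (megapixels, pixel pitch in micrometers) after binning."""
    return megapixels / factor ** 2, pitch_um * factor

mp, pitch = bin_pixels(48, 0.7)
print(mp, pitch)  # 12.0 1.4 -> the ultrawide's default 12 MP output
```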

A building with a stone facade and a yard full of trees - iPhone 16 Pro ultrawide camera - 48 megapixel photo
48-megapixel ultrawide photo – iPhone 16 Pro
A building with a stone facade and a yard full of trees - iPhone 16 Pro ultrawide camera - 12 megapixel photo
12-megapixel ultrawide photo – iPhone 16 Pro
A building with a stone facade and a yard full of trees - iPhone 16 ultrawide camera
12-megapixel ultrawide photo – iPhone 16
Cutting ultrawide photo of iPhone 16 and 16 Pro - air conditioner in the terrace of the apartment
Crop ultrawide camera photos

As you can see in the images above, the iPhone's 48-megapixel ultrawide photo is somewhat more detailed in places, but overall softer than the 12-megapixel shot. We also photographed the same subject with the iPhone 16; there is no noticeable difference between the 12-megapixel photos of the two phones.

View of the buildings around Zomit office on Pakistan Street - iPhone 16 Pro Ultra Wide Camera in the dark
Ultrawide iPhone 16 Pro camera with 1/25 second exposure
View of the buildings around the Zomit office on Pakistan Street - iPhone 16 ultrawide camera in the dark
iPhone 16 ultrawide camera with 1/10 second exposure
Cropping ultrawide photos of iPhone 16 and 16 Pro in the dark
Crop of ultrawide camera photos in the dark

In dark environments, the iPhone 16 Pro resorts to Night mode and long exposures far less often than the iPhone 16; as a result, its ultrawide night photos are sometimes less detailed than the iPhone 16's. In the photos above, for example, the iPhone 16 used a 1/10-second exposure, while the iPhone 16 Pro's exposure was 60 percent shorter at 1/25 second; so it is no surprise that the cheaper iPhone's photo is more attractive!
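The 60 percent figure follows directly from the two shutter speeds quoted in the captions; a one-line check:

```python
# iPhone 16 ultrawide night shot: 1/10 s; iPhone 16 Pro: 1/25 s (from the captions above)
exposure_16, exposure_16_pro = 1 / 10, 1 / 25
reduction = 1 - exposure_16_pro / exposure_16
print(f"{reduction:.0%}")  # 60% shorter exposure on the 16 Pro
```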

iPhone 16 Pro ultrawide camera photo gallery

Children's playground with slide - iPhone 16 Pro ultra-wide camera
A tree from below and in front of sunlight - iPhone 16 Pro ultrawide camera
Tehran Book Garden - iPhone 16 Pro ultrawide camera
Super wide view of Tehran food garden - iPhone 16 Pro Ultra Wide Camera
Zoomit office terrace - iPhone 16 Pro ultrawide camera
Tehran Food Garden - iPhone 16 Pro ultrawide camera
Stone facade of a building - iPhone 16 Pro ultra-wide camera
The buildings of Pakistan Street in Tehran in the dark of the night - iPhone 16 Pro ultrawide camera
Zoomit studio in the dark - iPhone 16 Pro ultrawide camera
View of the buildings of Pakistan Street in Tehran at night - Ultrawide camera of iPhone 16 Pro
Sunflower flower close-up - iPhone 16 Pro ultrawide camera
Close-up of a yellow flower - macro photo of the iPhone 16 Pro ultrawide camera

The ultrawide camera of the iPhone 16 Pro generally takes attractive photos, but it can hardly be considered on par with the competition. The performance gap with the best on the market is most noticeable in the dark, where the iPhone 16 Pro's ultrawide camera is not so impressive and records relatively soft photos. To see how it fares against rivals, I suggest reading our comprehensive comparison of the 2024 flagship cameras.

Main camera

On paper, the 48-megapixel main camera of the iPhone 16 Pro is no different from the previous generation in sensor size, pixel pitch, or lens specifications. But Apple now calls this camera Fusion and claims the sensor itself has become faster; thanks to a new architecture called Apple Camera Interface, image data is transferred from the sensor to the chip at a higher rate, so the main camera can now record 4K120 Dolby Vision video.

Record stunning videos with 120 frames per second video recording

HDR filming at 120 frames per second and 4K resolution requires very heavy processing, because implementing the HDR effect means comparing and merging several 4K frames with different exposures every second. If you have an external SSD and a high-speed USB 3 cable, you can also save 4K120 videos in the professional ProRes and Log formats, which give you more freedom when editing and color-grading.

4K120 video sample 1

Watch on YouTube

4K120 video sample 2

Watch on YouTube

The 4K120 videos of the iPhone 16 Pro are very attractive and detailed and deliver a wonderful viewing experience. Since none of them could be uploaded to our platform at full quality, you will need to follow the YouTube links to watch them.

Thanks to the faster sensor and Apple's new interface, 48-megapixel HEIF Max photos are captured almost without pause, at a rate of about 4 frames per second. As with the previous generation, the iPhone by default combines multiple 12- and 48-megapixel frames into a 24-megapixel photo to strike a balance between contrast, color, and detail; of course, 12-megapixel shooting is also available alongside 48-megapixel HEIF Max.

Zomit office terrace - 48 megapixel photo of iPhone 16 Pro main camera
48-megapixel photo of the main camera
Zomit office terrace - 24 megapixel photo of iPhone 16 Pro main camera
24-megapixel photo of the main camera
Zomit office terrace - 12 megapixel photo of iPhone 16 Pro main camera
12-megapixel photo of the main camera
Crop of the 12, 24, and 48-megapixel photos of the iPhone 16 Pro main camera
Crop of the 48, 24, and 12-megapixel photos

As you can see in the photos above, the 48-megapixel mode improves detail to some extent at the cost of an overall softer look and gives you more freedom to crop into the photo; but its contrast and color density are lower than in the 24- and 12-megapixel modes. The 24-megapixel photos strike a good balance of detail, color, and contrast.

Mohammad Hossein Moaidfar, the author of Zoomit - iPhone 16 Pro's main camera
iPhone 16 Pro main camera
Mohammad Hossein Moaidfar, the author of Zoomit - iPhone 16 main camera
iPhone 16 main camera
Cropping the photo of the main camera of the iPhone 16 and 16 Pro

In the photos above, the main camera of the iPhone 16 Pro has recorded slightly more detail than the iPhone 16; but as you can see, the iPhone 16 Pro photo has lower contrast, its colors are warmer, and the black of Mohammad Hossein's t-shirt is not deep enough.

iPhone 16 Pro main camera photo gallery

Children's playground - iPhone 16 Pro main camera
Candy with fruit decoration - iPhone 16 Pro main camera
An artificial lake around Tehran's book garden - iPhone 16 Pro main camera
Tehran book garden plants - iPhone 16 Pro main camera
Exterior view of Tehran Book Garden - main camera of iPhone 16 Pro
Two young people in the book garden of Tehran - iPhone 16 Pro main camera
Humvee military vehicle in Tehran's book garden - iPhone 16 Pro main camera
A view from inside the Tehran Book Garden - iPhone 16 Pro's main camera
A cat in the middle of the bushes - iPhone 16 Pro main camera in the dark
Orange motorcycle - main iPhone 16 Pro camera in the dark
Room ceiling lights - iPhone 16 Pro main camera with 2x zoom
Iranian Islamic view of Tehran mosque - iPhone 16 Pro main camera in the dark with 2x zoom
Brick facade of a building around Madras highway - iPhone 16 Pro main camera
Sugar goat statue in Tehran's book garden - iPhone 16 Pro main camera with 2x zoom
The statue of Zoro and Sergeant Garcia in Tehran's book garden - iPhone 16 Pro main camera with 2x zoom
The light in the garden - the main camera of the iPhone 16 Pro in the dark

Photos from the iPhone 16 Pro's main camera feel the same as the iPhone 15 Pro's: full of detail, with relatively natural colors that lean slightly warm. The iPhone does not remove noise as aggressively as possible, so even in the dark it extracts a high level of fine, intricate detail from subjects. The large sensor also allows the iPhone to record high-quality 2x photos by taking a 12-megapixel crop from the middle of the main camera's full-sensor image.
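To see why a 2x crop still yields roughly 12 megapixels, here is a small sketch; the 8064 x 6048 layout is an assumption for a typical 4:3 48-megapixel sensor, not an Apple-published figure:

```python
# 2x zoom as a center crop: half the width and half the height of the frame,
# so the field of view halves while 1/4 of the pixels remain.
full_w, full_h = 8064, 6048                # assumed 48 MP (4:3) resolution
crop_w, crop_h = full_w // 2, full_h // 2  # central crop -> 2x zoom
print(crop_w, crop_h, round(crop_w * crop_h / 1e6, 1))  # 4032 3024 12.2
```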

Telephoto camera

In addition to the renewed ultrawide camera, another big change is the addition of a 5x telephoto camera to the iPhone 16 Pro; Last year, this camera was exclusive to the iPhone 15 Pro Max. The new telephoto camera uses the same 12-megapixel sensor as the previous generation and provides the user with digital zoom up to 25 times.

iPhone 16 Pro telephoto camera photo gallery

World War era motorcycle - iPhone 16 Pro telephoto camera
Single car in Tehran Book Garden - iPhone 16 Pro telephoto camera
iPhone 16 Pro telephoto camera - 2
Street lights in front of a tall glass building - iPhone 16 Pro telephoto camera
The Little Prince in Tehran's Book Garden - iPhone 16 Pro telephoto camera
iPhone 16 Pro telephoto camera
Locust airplane replica in Tehran book garden - iPhone 16 Pro telephoto camera
Mural on Madras highway - iPhone 16 Pro telephoto camera
Yellow motorcycle in the dark - iPhone 16 Pro telephoto camera
Building under construction near Madras highway - iPhone 16 Pro telephoto camera
2 Star Cafe on Pakistan Street - iPhone 16 Pro telephoto camera
Tehran Mosli minaret in the dark - iPhone 16 Pro telephoto camera

The iPhone 16 Pro's telephoto camera records high-quality 5x photos; its level of detail and its colors are very similar to the main camera and match its character. The telephoto camera also holds up in low-light environments and takes good photos in the dark, though as we said in the comprehensive 2024 flagship camera comparison, the competition performs better in this area.

The main iPhone 16 Pro camera - the first example
1x photo
2x photo of iPhone 16 Pro
2x photo
3x photo of iPhone 16 Pro
3x photo
iPhone 16 Pro 5x photo
5x photo
10x photo of iPhone 16 Pro
10x photo
25x iPhone 16 Pro photo
25x photo

The combination of the 48-megapixel main camera and the 5x telephoto camera allows the iPhone 16 Pro to record relatively high-quality zoomed photos across the 1-10x range; apart from the 5x optical zoom, the iPhone also looks quite satisfactory at the 2x and 10x levels.

Portrait photography

The iPhone 16 Pro relies on the main and telephoto cameras for portrait photography and uses the ToF sensor to accurately separate the subject from the background; 1x and 2x portraits are taken with the main camera, and 5x portraits with the telephoto camera.

Kausar Nikomanesh, author of Zoomit - 1x portrait photo of iPhone 16 Pro
1x portrait photo
Kausar Nikomanesh, the author of Zoomit - 2x portrait photo of iPhone 16 Pro
2x portrait photo
Kausar Nikomanesh, the author of Zoomit - 5x portrait photo of iPhone 16 Pro
5x portrait photo
1x portrait photo of iPhone 16 Pro
1x portrait photo
Mohammad Hossein Moaidfar, the author of Zoomit - 2x photo of iPhone 16 Pro
2x photo with natural bokeh
5x portrait photo of iPhone 16 Pro
5x portrait photo

The iPhone has been a strong portrait performer for several years now, and the iPhone 16 Pro is no exception. Portrait photos are detailed, and the bokeh effect rolls off gradually, much like a professional camera's. As we saw in the 2024 flagship camera comparison article, the iPhone beats even tough competitors like the Pixel 9 Pro and S24 Ultra in portrait photography.

selfie camera

The selfie camera of the iPhone 16 Pro is no different from the previous generation, and it still captures eye-catching photos with many details and true-to-life colors.

Mohammad Hossein Moaidfar and Hadi Ghanizadegan from Zomit - iPhone 16 Pro selfie camera
Mohammad Hossein Moidfar and Hadi Ghanizadegan from Zomit - iPhone 16 Pro selfie camera with bokeh effect

The iPhone 16 Pro can record 4K60 video with the Dolby Vision HDR standard on all of its cameras; of course, 24 and 30 frames per second are also available. Videos are recorded with the H.265 codec by default, but switching to the more common H.264 codec is also possible.

We shot at 30 and 60 fps with the H.265 codec, and the iPhone 16 Pro recorded very detailed videos in both modes, with vivid colors, high contrast, and decent exposure control. To see how its video recording stacks up against other flagships, don't miss the iPhone 16 Pro vs. Pixel 9 Pro and Galaxy S24 Ultra camera comparison article.

Performance and battery

The next big change in the iPhone 16 Pro is its chip. The A18 Pro uses the familiar CPU combination of 2 high-performance cores and 4 efficiency cores, accompanied by a 6-core GPU and a 16-core neural engine. Apple's new chip is fabricated on TSMC's improved 3-nanometer node, N3E.

Technical specifications of the A18 Pro chip compared to the previous generation

A17 Pro

  • CPU: 2 performance cores at 3.78 GHz (16 MB cache), 4 efficiency cores at 2.11 GHz (4 MB cache), 24 MB system cache
  • Instruction set: ARMv8.6-A
  • GPU: 6 cores, 1398 MHz, 768 shading units, ray tracing
  • Memory controller: 4 x 16-bit channels, LPDDR5X at 3200 MHz, 51.2 GB/s bandwidth
  • Video recording and playback: 4K60 10-bit H.265
  • Wireless connectivity: Bluetooth 5.3 and Wi-Fi 7
  • Modem: X70, download 7,500 Mbps, upload 3,500 Mbps
  • Manufacturing process: TSMC 3-nanometer

A18

  • CPU: 2 performance cores at 4.04 GHz (8 MB cache), 4 efficiency cores at 2.0 GHz (4 MB cache), 12 MB system cache
  • Instruction set: ARMv9.2-A
  • GPU: 5 cores, 1398 MHz, 640 shading units, ray tracing
  • Memory controller: 4 x 16-bit channels, LPDDR5X at 3750 MHz, 58.6 GB/s bandwidth
  • Video recording and playback: 8K24 / 4K120 10-bit H.265
  • Wireless connectivity: Bluetooth 5.3 and Wi-Fi 7
  • Modem: X75, download 10,000 Mbps, upload 3,500 Mbps
  • Manufacturing process: TSMC 3-nanometer (enhanced N3E)

A18 Pro

  • CPU: 2 performance cores at 4.04 GHz (16 MB cache), 4 efficiency cores at 2.2 GHz (4 MB cache), 24 MB system cache
  • Instruction set: ARMv9.2-A
  • GPU: 6 cores, 1450 MHz, 768 shading units, ray tracing
  • Memory controller: 4 x 16-bit channels, LPDDR5X at 3750 MHz, 58.6 GB/s bandwidth
  • Video recording and playback: 8K24 / 4K120 10-bit H.265
  • Wireless connectivity: Bluetooth 5.3 and Wi-Fi 7
  • Modem: X75, download 10,000 Mbps, upload 3,500 Mbps
  • Manufacturing process: TSMC 3-nanometer (enhanced N3E)

Apple says the new CPU cores deliver 15% faster performance than the A17 Pro, or match that chip's performance at 20% lower power consumption. Apple also claims the A18 Pro carries more cache memory than the A18 chip.

The A18 Pro chip has faster single-core performance than even desktop processors that draw well over 100 watts.

According to Apple, the 6-core A18 Pro GPU is 20% faster than the previous generation, and the ray-tracing accelerator in the new GPU is twice as fast as before.

Playing mobile games on iPhone 16 Pro

The 16-core neural engine of the A18 Pro is, like the previous generation, capable of 35 trillion operations per second; but thanks to a 17% increase in bandwidth between the RAM and the chip, the new NPU performs better than before in real-world applications. The A18 Pro is paired with 8 GB of LPDDR5X-7500 RAM through a high-speed memory controller.
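The bandwidth figure in the spec table above can be reproduced from the RAM specification; a small sketch, assuming the standard LPDDR5X double data rate and the four 16-bit channels listed in the table:

```python
# LPDDR5X-7500: 3750 MHz clock, double data rate -> 7500 MT/s,
# over four 16-bit channels (8 bytes per transfer).
transfers_per_second = 7500e6
bytes_per_transfer = 4 * 16 // 8
bandwidth = transfers_per_second * bytes_per_transfer   # bytes per second
print(bandwidth / 1e9)                   # 60.0 GB/s in decimal units
print(round(bandwidth / 1e6 / 1024, 1))  # 58.6, matching the figure in the table
```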

iPhone 16 Pro performance against competitors

All GPU-compute and GFXBench numbers use Metal on iPhones and Vulkan on Android phones; GFXBench figures are Aztec Ruins frame rates (onscreen / 1440p offscreen).

  • iPhone 16 Pro (A18 Pro): Speedometer 2.1 (web browsing) 572; GeekBench 6 GPU compute 33105; GeekBench 6 CPU single/multi 3542 / 8801; Aztec Ruins 59 / 70 fps
  • iPhone 16 (A18): Speedometer 2.1 554; GPU compute 28025; CPU single/multi 3440 / 8406; Aztec Ruins 59 / 61 fps
  • iPhone 15 Pro (A17 Pro): Speedometer 2.1 475; GPU compute 27503; CPU single/multi 2960 / 7339; Aztec Ruins 59 / 46.8 fps
  • Pura 70 Ultra, Performance Mode (Kirin 9010): Speedometer 2.1 235; GPU compute 1528 (failed); CPU single/multi 1452 / 4494; Aztec Ruins 32 / 30 fps
  • Pixel 9 Pro (Tensor G4): Speedometer 2.1 221; GPU compute 6965; CPU single/multi 1945 / 4709; Aztec Ruins 70 / 44 fps
  • Galaxy S24 Ultra (Snapdragon 8 Gen 3 for Galaxy): Speedometer 2.1 240; GPU compute 17012; CPU single/multi 2262 / 7005; Aztec Ruins 75 / 81 fps

The iPhone 16 Pro is noticeably faster than current Android flagships; the roughly 60% advantage in single-core CPU performance over the Galaxy S24 Ultra shows clearly how quick the iPhone 16 Pro feels in everyday use.
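That gap can be read straight off the benchmark table above; a quick check of the arithmetic:

```python
# GeekBench 6 single-core scores from the table above
iphone_16_pro, galaxy_s24_ultra = 3542, 2262
advantage = iphone_16_pro / galaxy_s24_ultra - 1
print(f"{advantage:.0%}")  # ~57%, i.e. the "roughly 60%" lead quoted in the text
```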

When the GPU is used for compute tasks such as blurring photo backgrounds or face recognition, Apple's 2024 flagship holds a 95% advantage over a rival like the Galaxy S24 Ultra; in game rendering, however, the edge still belongs to the Galaxy and its Snapdragon 8 Gen 3 chip.

The performance of the neural processing unit of the iPhone 16 Pro against competitors

  • iPhone 16 Pro: Core ML framework, Neural Engine backend, single-precision (FP32) score 4647
  • iPhone 15 Pro: Core ML framework, Neural Engine backend, FP32 score 3862
  • Pura 70 Ultra: TensorFlow Lite framework, NNAPI backend, FP32 score 235
  • Pixel 9 Pro: TensorFlow Lite framework, NNAPI backend, FP32 score 347
  • Galaxy S24 Ultra: TensorFlow Lite framework, NNAPI backend, FP32 score 477

In the GeekBench AI benchmark, the neural engine of the iPhone 16 Pro outperforms the Galaxy S24 Ultra by an astronomical 870 percent; we will have to wait for the release of Apple's AI features to see whether such a gap is real or just a quirk of the benchmark software.
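The "astronomical" percentage follows from the two FP32 scores in the table above:

```python
# GeekBench AI single-precision (FP32) scores from the table above
a18_pro, s24_ultra = 4647, 477
gap = a18_pro / s24_ultra - 1
print(f"{gap:.0%}")  # ~874%, the roughly 870% gap quoted in the text
```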

Like the previous generation, Apple sells the iPhone 16 Pro in 128 GB, 256 GB, 512 GB, and 1 TB versions with NVMe storage, while the base iPhone 16 Pro Max starts at 256 GB. Benchmarks show that the storage speed of the iPhone 16 Pro is no different from the previous generation.

iPhone 16 Pro storage speed compared to competitors

  • iPhone 16 Pro: sequential read 1636 MB/s, sequential write 1340 MB/s
  • iPhone 15 Pro: sequential read 1652 MB/s, sequential write 1380 MB/s
  • Pixel 9 Pro XL: sequential read 1350 MB/s, sequential write 171 MB/s
  • Galaxy S24 Ultra: sequential read 2473 MB/s, sequential write 1471 MB/s

Numbers aside, the truth is that the iPhone 16 Pro does not feel much different in everyday use than the iPhone 15 Pro or even the iPhone 14 Pro. Rather, the performance headroom over previous generations is what will keep the phone feeling fast by the standards of a few years from now, and what lets it handle the heavy processing of Apple Intelligence.

Apple says that thanks to changes in the internal structure of the iPhone 16 Pro, including the metal battery casing (Pro model only), the phone can now sustain up to 20% higher performance under heavy load. This improvement in performance stability is noticeable: the phone does not get as hot while playing graphics-heavy games and throttles less than before. In Zoomit's stability test, the iPhone 16 Pro throttled less than both the Galaxy S24 Ultra and the previous generation, and its body peaked at 47 degrees Celsius.

To measure the performance stability of the iPhone 16 Pro outside of heavy gaming, we ran a CPU stress test; it loads all CPU cores for 20 minutes and shows what fraction of its peak performance the CPU can still deliver once it has heated up under sustained load.

iPhone 16 Pro CPU stress test
CPU performance stability test under heavy processing load for 20 minutes

In our tests, the iPhone 16 Pro could still deliver 84% of its peak performance after 20 minutes, so the iPhone will probably rarely lag or drop frames even in very heavy use. During the CPU stress test, the body of the device reached about 45 degrees Celsius.

This year, Apple has increased the battery capacity of the iPhone 16 Pro and 16 Pro Max by about 10%. Together with the efficiency of the A18 Pro chip, this gives the new flagships very good battery life, to the point that Apple calls the iPhone 16 Pro Max "the best iPhone in history in terms of battery life."

iPhone 16 Pro battery life in the battery menu

Apple quotes the battery life of the new iPhones in hours of video playback, saying the iPhone 16 Pro lasts 27 hours, 4 hours longer than the previous generation. Zoomit's tests show 26 hours and 5 minutes of video playback for the new iPhone, which is more or less consistent with Apple's claim.
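Putting the claim and the measurement side by side in minutes:

```python
claimed = 27 * 60        # Apple's claim: 27 hours of video playback
measured = 26 * 60 + 5   # Zoomit's result: 26 hours and 5 minutes
print(measured, claimed, f"{measured / claimed:.1%}")  # within ~3.5% of the claim
```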

iPhone 16 Pro battery life against competitors

All runtimes are in hours:minutes.

  • iPhone 16 Pro: 6.3-inch 2622 x 1206 display at 120 Hz; 3582 mAh battery; video playback 26:05
  • iPhone 15 Pro: 6.1-inch 2556 x 1179 display at 120 Hz; 3274 mAh battery; video playback 21:11
  • iPhone 15 Pro Max: 6.7-inch 2796 x 1290 display at 120 Hz; 4441 mAh battery; video playback 24:43
  • Pixel 9 Pro XL: 6.8-inch 2992 x 1344 (native) display at 120 Hz; 5060 mAh battery; video playback 25:00; everyday use 13:25
  • Pura 70 Ultra: 6.8-inch 2844 x 1260 display at 120 Hz; 5200 mAh battery; video playback 25:00; everyday use 17:00
  • Galaxy S24 Ultra: 6.8-inch 3088 x 1440 display at 120 Hz; 5000 mAh battery; video playback 27:41; everyday use 14:05

Another change in the iPhone 16 Pro is faster charging. Apple's new flagship now supports 30-watt wired charging, and if the same charger feeds a MagSafe wireless charging pad, wireless charging power reaches 25 watts, which Apple says can take the battery from zero to 50 percent in 30 minutes.

Very good charging, a step beyond the last generation

Although the wired charging speed of the iPhone 16 Pro has increased from 20 to 30 watts, a full charge still takes about 100 minutes. That is partly because battery capacity is up 10 percent, and partly because the iPhone charges very slowly between 85 and 100 percent: even with optimized battery charging turned off, the phone needs about 35 to 40 minutes to fill the last 15 percent.

Design and build quality

Setting aside the more fundamental changes, what you notice at first glance is the increase in the phone's size, especially on the iPhone 16 Pro Max, and the narrower bezels around the screen.

Home screen apps and widgets on the iPhone 16 Pro screen

The iPhone 16 Pro and Pro Max use 6.3- and 6.9-inch screens, 0.2 inches larger than the last several generations, so it is no surprise that physical dimensions and weight have grown too: both phones are about 3 mm taller, 1 mm wider, and 12 and 6 grams heavier, respectively. The weight increase is thus more significant on the iPhone 16 Pro, while the 16 Pro Max sits worse in the hand than before and practically demands two-handed use.

Dynamic Island iPhone 16 Pro close-up

The bezels around the display have become noticeably thinner: the iPhone 16 Pro's screen is now surrounded by a border just over one millimeter thick (1.15 mm, to be exact), while the bezels of the iPhone 15 Pro measure about 1.5 mm and those of the iPhone 16 more than 2 mm. Bear in mind, though, that with a case on the phone, the thinness of the bezels is less noticeable.

Dynamic Island iPhone 16 Pro
iPhone 16 Pro screen close-up

Another change in the appearance of the iPhone 16 Pro is the addition of the Desert Titanium color option and the removal of Blue Titanium. The new color is close to cream with a golden frame, but unfortunately we did not have it for review. The other options are limited to the neutral and understated Black Titanium, White Titanium, and Natural Titanium.

iPhone 16 Pro in hand

Elsewhere, the design of the iPhone 16 Pro is no different from the previous generation: the same flat titanium frame with flat glass panels on the back and front, assembled with high precision into a solid structure with IP68 certification. Unlike on the iPhone 16, the finish of the back panel and the arrangement of the cameras have not changed; only the screen cover has been upgraded to the third-generation Ceramic Shield, which Apple says is twice as tough as the previous one.

Camera control button on iPhone 16 Pro


We discussed Camera Control and its not especially ergonomic position on the right side of the frame at the beginning of the article. Apart from this new button, the controls are unchanged: the volume and Side buttons are well placed and give very good feedback, and the Action button, as before, can be personalized.

Read more: Reviews of iPhone 14 Plus, price and technical specifications

Display and speaker

Finally, the display is another area with little change: the iPhone 16 Pro uses the same 120 Hz LTPO OLED panel. Thanks to this year's 0.2-inch increase in diagonal, its resolution rises to 2622 x 1206 pixels, a very good density of 460 ppi. As before, the display supports HDR standards including HDR10 and Dolby Vision; so, as for the past few generations, we are dealing with either a 10-bit panel or an 8-bit + FRC one.

Watch video with iPhone 16 Pro

Thanks to LTPO technology, the iPhone 16 Pro can vary its refresh rate between 1 and 120 Hz depending on the type and motion of the content, so the phone can show smooth animations and match the frame rate of games and videos without hurting battery life.

iPhone 16 Pro display performance against competitors

| Product | Min. brightness (nits) | Max. brightness, manual (nits) | Max. brightness, HDR (nits) | sRGB coverage | sRGB avg. error | DCI-P3 coverage | DCI-P3 avg. error |
|---|---|---|---|---|---|---|---|
| iPhone 16 Pro | 1.35 | 1044 | 1950 | 99.7% | 0.98 | n/a | n/a |
| iPhone 15 Pro | 2.21 | 1052 | 1947 | 99.7% | 1.0 | n/a | n/a |
| iPhone 15 Pro Max | 2.15 | 1041 | 1950 | 99.7% | 0.9 | n/a | n/a |
| Pixel 9 Pro XL | 4 | 1300 | 2650 | 97.2% (Natural) | 1.1 | 81.6% (Adaptive) | 3 |
| Pura 70 Ultra | 2.5 | 740 | 1500 | 99.7% (Natural) | 1.9 | 79.7% (Vivid) | 5.3 |
| Galaxy S24 Ultra | 0.7 | 914 | 2635 | 102% (Natural) | 3.5 | 81.8% (Vivid) | 4.4 |

Apple says the iPhone 16 Pro display, like the previous generation, supports the wide P3 color gamut, reaches 1,000 nits of brightness in manual mode, and peaks at 2,000 nits in automatic mode or during HDR video playback. The important difference between the iPhone 16 Pro panel and the iPhone 15 Pro's comes down to minimum brightness.

Zoomit's measurements confirm Apple's claims about the iPhone's brightness. We measured the iPhone 16 Pro's minimum brightness at 1.35 nits, significantly lower than the previous generation's 2.15 nits; maximum brightness in manual mode and during HDR playback is unchanged from the iPhone 15 Pro, at 1044 and 1950 nits respectively. It is worth adding that the iPhone 16 Pro reached 1296 nits in automatic mode while showing SDR content (anything other than HDR video); under strong ambient light it can likely approach the 2,000-nit range.

iPhone 16 Pro Type-C port

The iPhone 16 Pro uses stereo speakers: the main channel sits on the bottom edge of the frame, and the earpiece acts as the second channel. The iPhone may not get as loud as rivals such as the Pixel 9 Pro or Galaxy S24 Ultra, but its output quality is a level higher; the sound is clearer and the bass much stronger than the competition's.

Summary and comparison with competitors

If we assume the government finally acts rationally and gets the iPhone registry working, it is still not very logical for iPhone 15 Pro, or even iPhone 14 Pro, users to pay tens of millions more for the iPhone 16 Pro; unless the 5x telephoto camera (missing from the iPhone 15 Pro and both 14 Pro models), the 15 to 30 percent faster chip, or Apple Intelligence (for iPhone 14 Pro users) is critical to them.

Users of iPhone 13 Pro or older models have more reasons to buy the iPhone 16 Pro; Better charging, more RAM, a more efficient camera, a brighter screen with Dynamic Island, a faster chip, and perhaps finally artificial intelligence capabilities, can all justify spending money to upgrade from the iPhone 13 Pro to the 16 Pro.

If the ecosystem is not a limiting factor for you, the Galaxy S24 Ultra, even a year after launch and at a much lower price, offers with Galaxy AI more or less the experience Apple Intelligence promises, and in photography and videography it mostly matches the iPhone 16 Pro and sometimes even surpasses it.

Naturally, we could not test the iPhone 16 Pro's key selling point: Apple Intelligence, the focus of Apple's marketing for this phone. To experience all of its capabilities we will have to wait until early 2025; moreover, a significant share of these features will arrive on the iPhone 15 Pro with essentially the same experience.

Working with iPhone 16 Pro

The iPhone 16 Pro is a very attractive phone, but at least in its first month on sale it is not in line with Apple's philosophy. We know Apple as a company that delivers mature, polished features from day one; apparently, in the age of artificial intelligence, we have to get used to half-finished launches and delays. First it was Google, Microsoft, and Samsung; now Apple.

Geoffrey Hinton
Geoffrey Hinton, the godfather of artificial intelligence, revolutionized our world by inventing artificial neural networks. Do not miss the story of his life, with all its ups and downs.

Biography of Geoffrey Hinton; The godfather of artificial intelligence

Geoffrey Hinton, a scientist rightly called the "Godfather of Artificial Intelligence", revolutionized the world of technology with his research. Inspired by the human brain, he built artificial neural networks and gave machines the ability to learn, think, and make decisions. The technologies that surround us today, from voice assistants to self-driving cars, are the result of the relentless efforts of Hinton and his colleagues.

Hinton is now recognized as one of the most influential scientists of the 20th century, having won the 2024 Nobel Prize in Physics. But his story goes beyond awards and honors.

Geoffrey Hinton’s story is a story of perseverance, innovation, and the constant search to discover the unknown. In this article, we will look at the life and achievements of Geoffrey Hinton and we will answer the question of how one person with a simple idea was able to revolutionize the world of technology.

From physical problems to conquering the digital world

Hinton has worked standing up for almost 18 years. Because of a back disc problem he cannot sit for more than a few minutes, but even that has not stopped his work. "I hate standing and prefer to sit, but if I sit, a disc in my lower back bulges out and I feel excruciating pain," he says.

Since driving, or sitting in a bus or on the subway, is very difficult and painful for him, Hinton prefers to walk rather than use a car or public transport. These long walks show that he has not surrendered to his physical condition, and how determined he is to carry on his scientific research.

Hinton has been standing for years

For about 46 years, Hinton has been trying to teach computers to learn the way humans do. The idea seemed impossible and hopeless at first, but time proved otherwise, so much so that Google hired Hinton and asked him to make artificial intelligence a reality. "Google, Amazon, and Apple think artificial intelligence is what will make their future," Hinton said in an interview after joining Google.

Google hired Hinton to make artificial intelligence a reality

Heir to genius genes

Hinton was born on December 6, 1947, in England, into an educated and well-known family with a rich scientific background. Most of his relatives had studied mathematics or economics. His father, Howard Everest Hinton, was a prominent entomologist, and all of his siblings did important scientific work.

Hinton knew from the age of seven that he would one day reach an important position

Some of the world's leading mathematicians, such as George Boole, the founder of Boolean logic, and Charles Howard Hinton, known for his visualizations of higher dimensions, were among Hinton's relatives. From a young age there was heavy pressure on Hinton to excel academically, so much so that he was already thinking about a doctorate at the age of seven.

Geoffrey Hinton at seven years old
Psychology, philosophy, and artificial intelligence; a powerful combination to create the future

Hinton took a diverse academic path. He began his schooling at Clifton College in Bristol and then went to Cambridge, where he repeatedly changed his field, vacillating between the natural sciences, art history, and philosophy. He finally graduated from Cambridge in 1970 with a bachelor's degree in experimental psychology.

Hinton's interest in understanding the brain and how humans learn drew him to artificial intelligence. He went to the University of Edinburgh, where he began research in AI under his mentor Christopher Longuet-Higgins, and in 1978 he achieved the dream he had held since the age of seven, receiving his doctorate in artificial intelligence. The PhD was a turning point in his career and prepared him for the complex, fascinating world of AI.

Hinton’s diverse education, from psychology to artificial intelligence, gave him a comprehensive and interdisciplinary perspective that greatly contributed to his future research. This perspective enabled him to make a deep connection between the functioning of the human brain and machine learning algorithms.

Out of his great interest in the workings of the human mind, Hinton first planned to study physiology and the anatomy of the brain as an undergraduate, then moved into psychology, and finally into artificial intelligence. His goal in entering AI was to simulate the human brain.

If you want to learn about the functioning of a complex device like the human brain, you have to build one like it.

– Geoffrey Hinton

Hinton believed that to deeply understand a complex device like the brain, one should build something similar to it. By analogy, we normally think we know how a car works, but in building one we would discover many details we had no idea about.

Alone against the crowd, but victorious

While defending his ideas against their many opponents, Hinton encountered the work of researchers such as Frank Rosenblatt, an American scientist who revolutionized artificial intelligence in the 1950s and 1960s by inventing and developing the perceptron model.

The perceptron, one of the first machine learning models, is recognized as the main inspiration for today's artificial neural networks. It is a simple algorithm for classifying data, inspired by the way neurons in the brain work: a mathematical model of an artificial neuron that receives several inputs, processes them with a weighted function, and decides on an output.
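The weighted-sum-and-threshold idea can be sketched in a few lines of Python. This toy example is our own illustration, not Rosenblatt's original formulation; it trains a single artificial neuron with the classic perceptron rule on the linearly separable AND function:

```python
# Illustrative sketch of a perceptron: a weighted sum of inputs followed
# by a step function, trained with the perceptron learning rule.
def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum clears the threshold.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable toy task (logical AND) a single neuron can learn.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

A single neuron like this can only draw one straight decision line, which is exactly the limitation discussed next.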

Hinton and Rosenblatt side by side
Hinton and Rosenblatt side by side

Rosenblatt's hope was that one could feed a neural network a set of data, such as photographs of men and women, and that the network, like a human, could learn to tell the photographs apart. But there was a problem: the perceptron did not work very well. Rosenblatt's network was a single layer of neurons, too limited for the image-separation task assigned to it.

Even when no one believed in artificial intelligence, Hinton didn’t lose hope

In the late 1960s, Marvin Minsky, a contemporary of Rosenblatt, co-wrote a book on the limitations of Rosenblatt's neural network. For about ten years afterwards, research on neural networks and artificial intelligence nearly stopped; almost no one wanted to work in the field, convinced it would lead nowhere. "Almost" is the operative word, because for Hinton the subject looked completely different.

Hinton believed there must be a way to simulate the human brain and build a device like it; he had no doubt about it. Why pursue a path that few would follow and almost no one saw a happy ending for? Convinced that everyone else was mistaken, he kept going and did not give up.

From America to Canada; A journey that changed the course of artificial intelligence

During his research, Hinton worked at various American institutes. At the time, the US Department of Defense funded much of American research, so most projects, completed or underway, focused on military objectives. Hinton had no interest in military work; he wanted pure scientific research and technology developed for general human benefit, so he looked for a place where he could work away from military pressure and the constraints of tied funding.

I did not want my research to be funded by military organizations, because the results obtained would certainly not be used for human benefit.

– Geoffrey Hinton

After searching for a suitable base, Canada seemed the best option. Hinton moved there in 1987 and began research at the University of Toronto, where in those years he and his colleagues built more complex neural networks that solved problems simpler networks could not.

Instead of expanding single-layer networks, Hinton and his colleagues developed multilayer neural networks. These worked well and drew a line through all the earlier disappointments and failures. In the late 1980s, Dean Pomerleau built a self-driving car controlled by a neural network and drove it on real roads.

In the 1990s, Yann LeCun, one of the pioneers of artificial intelligence and deep learning, developed convolutional neural networks (CNNs), which became the basis of many modern techniques in machine vision and pattern recognition. One of their first important applications was a system that could recognize handwritten digits. But soon after, researchers in artificial intelligence hit another dead end.

In the 1990s, an interesting neural network was built, but it stalled due to insufficient data.

The neural networks of that era underperformed because there was neither enough data nor enough computing power. As a result, researchers in computer science and artificial intelligence once again concluded that neural networks were nothing more than a fantasy. In 1998, after 11 years at the University of Toronto, Hinton left to found and lead the Gatsby Computational Neuroscience Unit at University College London, where he continued studying neural networks and their applications.

AlexNet: A Milestone in the History of Artificial Intelligence

From the 1990s into the 2000s, Hinton was one of the few people on the planet who still believed in the development of neural networks and artificial intelligence. He attended many conferences, where he was usually met with indifference and treated like an outcast. You might think Hinton never wavered, but that is not the case: he too was sometimes disappointed and doubted he would ever reach his goal. Yet he overcame the despair and pressed on, however difficult the road, because one sentence kept repeating in his mind: "Computers can learn."

Watch: The story of the birth of artificial intelligence, the exciting technology that shook the world

After returning to the University of Toronto in 2001, Hinton continued his work on neural network models, and in the 2000s he and his research group developed deep learning technology and put it to practical use. By 2006, the world was catching on to Hinton's ideas and no longer dismissed them as far-fetched.

In 2012, Hinton, together with two of his PhD students, Alex Krizhevsky and Ilya Sutskever (later a co-founder of OpenAI, the company behind ChatGPT), developed an eight-layer neural network called AlexNet to classify images in ImageNet, a large online image database. AlexNet's performance was stellar, beating the most accurate program up to that point by about 40 percent. The image below shows the architecture of the AlexNet convolutional neural network.

AlexNet neural network

Viso

In the image above, C1 to C5 are convolutional layers that extract image features. Each layer applies convolutional filters of various sizes to the image or to the previous layer's output to detect different features; the channel counts (96, 256, and 384) give the number of filters in each layer.

After feature extraction, the image is sent to fully connected layers (FC6 to FC8). Each circle in these layers represents a neuron that is connected to the neurons of the previous layer.

FC8 is the final output layer and consists of 1000 neurons. Thanks to its depth and its ability to learn complex image features, the AlexNet architecture was very accurate at image recognition and paved the way for further progress in neural networks.
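To make the "C1 to C5 then FC6 to FC8" pipeline concrete, the standard convolution output-size formula can be used to trace how AlexNet shrinks its input. The layer hyperparameters below follow the original Krizhevsky, Sutskever, and Hinton paper; this is only a shape calculation for illustration, not a working network:

```python
# Rough sketch of how AlexNet's convolutional stages shrink a 227x227
# input, using the standard formula: out = (in - kernel + 2*pad)//stride + 1.
def conv_out(size, kernel, stride, pad=0):
    return (size - kernel + 2 * pad) // stride + 1

size = 227                       # input image side (227x227x3)
size = conv_out(size, 11, 4)     # C1: 96 filters of 11x11, stride 4 -> 55
size = conv_out(size, 3, 2)      # max pool 3x3, stride 2            -> 27
size = conv_out(size, 5, 1, 2)   # C2: 256 filters of 5x5, pad 2     -> 27
size = conv_out(size, 3, 2)      # max pool                          -> 13
size = conv_out(size, 3, 1, 1)   # C3: 384 filters of 3x3, pad 1     -> 13
size = conv_out(size, 3, 1, 1)   # C4: 384 filters of 3x3, pad 1     -> 13
size = conv_out(size, 3, 1, 1)   # C5: 256 filters of 3x3, pad 1     -> 13
size = conv_out(size, 3, 2)      # final max pool                    -> 6
flattened = size * size * 256    # features fed into FC6 (4096 neurons)
print(size, flattened)           # 6 9216
```

The 9216 flattened features then pass through FC6 and FC7 (4096 neurons each) before reaching the 1000-way output layer FC8.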

After AlexNet, Hinton and the two students founded a company called DNNresearch, which Google acquired for $44 million in 2013. That same year, Hinton joined Google Brain, Google's artificial intelligence research team, and was later made one of its vice presidents and engineering fellows.

Hinton at Google

Businesstoday

From Backpropagation Algorithms to Capsule Networks: Hinton’s Continuous Innovations

Hinton has written or co-authored more than 200 scientific papers on the use of neural networks for machine learning, memory, perception, and symbol processing. During a postdoctoral fellowship at the University of California, San Diego, he worked with David E. Rumelhart and Ronald J. Williams on applying the backpropagation algorithm to multilayer neural networks.

In a 2018 interview, Hinton said the core idea of the algorithm was Rumelhart's. Nor were Hinton and his colleagues the first to propose backpropagation: in 1970, Seppo Linnainmaa described reverse-mode automatic differentiation, of which backpropagation is a special case.

Hinton and his colleagues took a major step with the publication of their paper on the error backpropagation algorithm in 1986. It is one of Hinton's most cited papers, with 55,020 citations.

The number of citations of the 1986 article

Google Scholar
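The mechanics of error backpropagation can be illustrated with a tiny two-layer network learning XOR, a task a single-layer perceptron cannot solve. This is our own minimal numpy sketch of the idea, pushing the output error backwards through the chain rule; it is not code from the 1986 paper:

```python
import numpy as np

# Minimal illustration of error backpropagation: a 2-layer sigmoid
# network learns XOR by propagating the output error backwards.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: chain rule from output error to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(round(losses[0], 3), "->", round(losses[-1], 3))  # loss shrinks as training proceeds
```

The multilayer structure is what lets the network carve out the non-linear XOR boundary that defeated single-layer perceptrons.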

In October and November 2017, Hinton published two open-access papers on capsule neural networks, which he says work well.

At the 2022 Neural Information Processing Systems conference, Hinton introduced a new learning algorithm for neural networks called the forward-forward algorithm. Its main idea is to replace the forward and backward passes of backpropagation with two forward passes: one with positive (real) data, and one with negative data that the network itself generates.
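The two forward passes can be sketched with a single toy layer. This is only our simplified illustration of the idea, using the sum of squared activations as the layer's "goodness" and crude stand-in data vectors; it omits parts of Hinton's actual recipe, such as layer normalization and the threshold objective:

```python
import numpy as np

# Toy sketch of the forward-forward idea: each layer is trained locally so
# that its "goodness" (sum of squared activations) becomes high for
# positive (real) data and low for negative data. One ReLU layer only.
rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.1, (4, 16))

def goodness(x, W):
    h = np.maximum(0.0, x @ W)        # layer activity after ReLU
    return float((h ** 2).sum())

pos = np.ones(4)                      # crude stand-in for a "real" sample
neg = -np.ones(4)                     # crude stand-in for a "negative" sample

for _ in range(200):
    h_pos = np.maximum(0.0, pos @ W)  # first forward pass (positive data)
    h_neg = np.maximum(0.0, neg @ W)  # second forward pass (negative data)
    # Local gradient step: raise goodness on pos, lower it on neg,
    # with no backward pass through other layers.
    W += 0.01 * (np.outer(pos, h_pos) - np.outer(neg, h_neg))

print(goodness(pos, W) > goodness(neg, W))  # True
```

Because each layer gets its own local objective, no error signal ever has to travel backwards through the network.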

When the creator questions his creation

Finally, in May 2023, after about ten years at Google, Hinton resigned so that he could speak freely about the dangers of commercial artificial intelligence. He was worried about AI's power to generate fake content and about its impact on the job market. Below is part of what Hinton said in a 2023 interview:

I think we’ve entered an era where, for the first time, we have things that are more talented than us. Artificial intelligence understands and has talent. This advanced system has its own experiences and can make decisions based on those experiences. Currently, artificial intelligence does not have self-awareness, but over time, it will acquire this feature. There will even come a time when humans are the second most talented creatures on earth. Artificial intelligence came to fruition after many disappointments and failures.

– Geoffrey Hinton

My doctoral supervisor asked me to work on another subject and not jeopardize my career, but I preferred to study how the human brain and mind work and try to simulate them, even if I failed. It took longer than I expected, about 50 years, to get results.

At one point the reporter asks Hinton when he concluded that he was right about neural networks and everyone else was wrong. "I've always thought I was right, and I am right," Hinton replies after a pause, with a smile.

With the arrival of ultra-fast chips and the vast amounts of data generated on the Internet, Hinton's algorithms gained an almost magical power. Little by little, computers learned to recognize the content of photos, and later to recognize speech and translate between languages. In 2012, words like "neural networks" and "machine learning" were appearing on the front page of the New York Times.

Read more: The biography of Ada Lovelace; the first programmer in history

From Turing to Nobel: The Unparalleled Honors of the Godfather of Artificial Intelligence

As one of the pioneers of artificial intelligence, Geoffrey Hinton has been recognized many times for his outstanding achievements. His numerous awards include the David E. Rumelhart Prize of the Cognitive Science Society and the Gerhard Herzberg Gold Medal, Canada's highest honor in science and engineering.

One of Hinton's most notable honors was the 2018 Turing Award, shared with Yoshua Bengio and Yann LeCun. The award is so prestigious in computing that it is often called the Nobel Prize of computing, and it recognized Hinton's sustained contributions to the development of neural networks. In 2022 another honor followed, when he received the Royal Society's Royal Medal for his pioneering work in deep learning.

2024 was a historic year for Geoffrey Hinton. He and John Hopfield won the Nobel Prize in Physics for their amazing achievements in the field of machine learning and artificial neural networks. The Nobel Committee awarded this valuable prize to these two scientists for their fundamental discoveries and inventions that made machine learning with artificial neural networks possible. When awarding the prize, the development of the “Boltzmann machine” was specifically mentioned.

When a New York Times reporter asked Hinton to explain in simple terms the importance of the Boltzmann machine and its role in pretraining backpropagation networks, Hinton jokingly reached for a quote from Richard Feynman:

Look, my friend, if I could explain this in a few minutes, it wouldn’t be worth a Nobel Prize.

– Richard Feynman

The humorous reply makes the point that the technology is very complex and that fully understanding it takes extensive knowledge and study. The Boltzmann machine is one of the earliest neural network models (1985); as a statistical model, it helps the network automatically find patterns in data.
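To give a concrete sense of the model, a Boltzmann machine assigns every configuration of its binary units an energy and makes low-energy states exponentially more probable. The snippet below is our own minimal illustration with arbitrary example weights, enumerating all states of a three-unit machine:

```python
import itertools
import numpy as np

# Tiny illustration of a Boltzmann machine: binary units s_i with
# symmetric weights w_ij and biases b_i define an energy
#   E(s) = -sum_{i<j} w_ij * s_i * s_j - sum_i b_i * s_i,
# and the network assigns each global state probability exp(-E)/Z.
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.3],
              [-0.5, 0.3, 0.0]])   # symmetric, zero diagonal (example values)
b = np.array([0.2, -0.1, 0.0])

def energy(s):
    s = np.asarray(s, dtype=float)
    return float(-0.5 * s @ W @ s - b @ s)  # 0.5 avoids double-counting pairs

states = list(itertools.product([0, 1], repeat=3))
E = np.array([energy(s) for s in states])
p = np.exp(-E) / np.exp(-E).sum()           # Boltzmann distribution

best = states[int(np.argmin(E))]            # lowest-energy configuration
print(best, round(float(p.max()), 3))       # (1, 1, 0) 0.26
```

Training adjusts the weights so that the low-energy (high-probability) states coincide with the patterns seen in the data, which is what "finding patterns automatically" means here.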

Geoffrey Hinton is a man who turned the dream of machine intelligence into a reality by standing against the currents. From back pain to receiving the Nobel Prize in Physics, his life path was always full of ups and downs. With steely determination and perseverance, Hinton not only became one of the most influential scientists of the 20th century but also changed the world of technology forever with the invention of artificial neural networks. His life story is an inspiration to all who pursue their dreams, even when the whole world is against them.

Elon Musk's robotic taxis
Elon Musk brought the idea of smart public transportation one step closer to reality by unveiling the Cybercab and Robovan.

Everything about the Cybercab and Robovan; Elon Musk's robotic taxis

After years of passionate but unfulfilled promises, on October 11, 2024, at the We, Robot event, Elon Musk finally unveiled Tesla's robotic taxis.

Appearing on stage an hour late, Musk showed off the Cybercab self-driving taxi: a silver two-seater that moves without a steering wheel or pedals.

The Tesla CEO also announced the presence of 21 Cybercabs and a total of 50 self-driving cars at the Warner Bros. studio in California, where Tesla hosted the invitation-only event.

Tesla Cybercab robotic taxi, profile view

Tesla

"We're going to have a very glorious future ahead of us," Musk said, though he gave no indication of where the new cars will be built. According to him, Tesla hopes to offer the Cybercab to consumers for less than $30,000 before 2027.

The company will reportedly begin testing “unsupervised FSD” with Model 3 and Model Y electric vehicles in Texas and California next year.

Currently, the company's self-driving cars run supervised FSD, meaning a human must be ready to take over the steering wheel or brakes at any moment. To offer cars without a steering wheel and pedals, Tesla needs several permits from regulators in various US states (and in other countries).

But the Cybercab was not the only product unveiled at the event. Alongside a lineup of Optimus robots, likely to launch as consumer work assistants in the coming months, Tesla revealed an autonomous Robovan that can carry up to 20 passengers or haul cargo, which generated even more excitement in the audience.

Tesla Robovan side view

According to Musk, the Robovan and Cybercab use inductive charging and need no physical connector to recharge. He also said the Robovan would solve the problem of high-density transport, pointing to sports teams as an example.

For years, Tesla's CEO has been painting a dream vision of the company's self-driving public transport fleet for shareholders, and he sees Tesla's future in autonomous vehicles.

It is worth recalling that We, Robot was Tesla's first product-launch event since the Cybertruck's introduction in 2019; that vehicle reached the market in late 2023 and has since been recalled five times in the United States over various problems.

The event ended with Elon Musk's "Let's party" and a video of Optimus robots dancing, while Tesla's CEO invited guests to take test rides in the self-driving cars inside the closed-off film studio.

However, experts and analysts of the self-driving industry believe the Cybercab's release will take longer than the announced schedule: ensuring the cars' safety in scenarios such as bad weather, complex intersections, and unpredictable pedestrian behavior will require many permits and tests.

Tesla shareholders still balk at Musk's vague production and delivery timetable, as he has a poor track record with robotaxi promises. But it cannot be denied that the unveiling breathed new life into the world of self-driving technology.

But where did the idea of robotic taxis, which Tesla's CEO claims are 10 to 20 times safer than human-driven cars and will cut the cost of public transport, actually begin?

Tesla Robovan next to the Cybercab

Tesla

In 2019, during a meeting on the development of Tesla's self-driving cars, Elon Musk suddenly made a bold prediction: "By the end of next year, we will have more than a million robotaxis on the road."

Tesla's investors were familiar with the concept of fully driverless cars; what surprised them was the timing and the short window Musk was announcing. The prediction had not come true by the end of 2020 and has been postponed repeatedly since. In recent months, with Tesla's profitability shrinking, Musk has tried in various ways to shift Wall Street's attention away from the company's core business toward a new story: at every opportunity he explains that Tesla's future lies not in electric cars but in the exciting world of artificial intelligence and humanoid robots.

According to him, one of the most profitable businesses in the field of AI will be driverless taxis or robotaxis that work almost anywhere and in any condition. Musk believes that Tesla’s market value will reach several trillion dollars after the release of these cars, although with this, Tesla will enter a highly competitive market.

Tesla’s technology will face fierce competition from Alphabet’s Waymo, Amazon’s self-driving unit Zoox, and General Motors’ Cruise. Also, ride-sharing companies such as Uber and Lyft and Chinese companies such as Baidu and BYD are considered serious competitors of Tesla.

Can robotaxis really save Tesla from declining profitability? How close is the company really to the production of driverless and fully autonomous car technology, and what guarantee is there for the success of Elon Musk’s plans to form a vast network of robotic taxis?

The start of the internal project of Tesla’s self-driving taxis

Elon Musk at the presentation ceremony of Tesla's autopilot system

Business Insider

Although Elon Musk had hinted at the robotaxi idea since 2016, design and development only began in earnest in 2022. During Tesla's first-quarter earnings call that year, Musk again said the company was building robotic taxis with no steering wheel, pedals, or any other controls for human driving.

He also said the cars would be fully self-driving and available to the public by 2024, once Tesla completed its self-driving program. Some time later, at the opening of the Austin Gigafactory, he noted that the robotaxi would have a futuristic design, probably closer to a Cybertruck than to a Tesla Model S.

Tesla’s robotic taxis have no steering wheel, pedals, or any other controls for physical human driving

During the same call, a Tesla investor asked Musk whether the robotaxis would be offered to businesses or sold directly to consumers. Musk did not answer, but reiterated that robotaxis minimize the per-kilometer cost of a car and that a trip in one would cost less than a bus or subway ticket.

Shortly before Musk's remarks, Tesla had said it was producing fully autonomous vehicles costing $25,000, which might or might not have a steering wheel. For that reason, no one yet knows whether Musk's robotaxi project refers to these cars.

According to the announced timeline, Tesla had 32 months to complete the construction, legal permits, and software required for the robot taxis and align with acceptable standards for “level 5 autonomy.”

At the beginning of 2024, robotaxis made the news again. Elon Musk, who seemed fed up with Tesla’s conventional car business, stressed that Tesla’s future depends not on selling more electric cars but mainly on artificial intelligence and robotics.

Unlike Uber, Tesla’s main competitor in this project, Musk does not want to build the robotaxi service on Model 3 sedans and SUVs like the Model Y. According to Tesla, the company is working on a new dedicated vehicle, which will probably be called the Cybercab.

The launch of robotaxis depended on the completion of Tesla’s Autopilot and so-called Full Self-Driving systems, and no precise figures were given on how readily consumers would accept this innovative product or what new rules would be imposed in the field.

Car design

Tesla Robotaxis concept design

Teslaoracle

In terms of design, the car’s interior was expected to differ from other Tesla electric vehicles to meet passengers’ needs: for example, two rows of seats facing each other, or sliding doors that make boarding easier. A car used as a taxi should also allow simple, quick cleaning, and even disinfection, of the interior.

The idea of robotaxis also drew interesting design proposals from enthusiasts. Some said Tesla would do better to optimize its public self-driving cars for different uses: some with a place to rest on long trips, others with a monitor and accessories suited to working on the way.

Supporters argued that such amenities improve quality of life: a passenger who spends travel time on something useful has effectively saved that time.

Continuing the speculation about the Cybercab’s design, a group of automotive researchers suggested that in the coming years Tesla could produce other vehicles tailored to entertainment, such as watching movies, or to socializing with friends and fellow travelers along the way: much like riding in a limousine.

The design of the Cybercab is similar to the Cybertruck, but with doors that open upward

But the initial design of the Cybercab published on the Tesla website was somewhat reminiscent of the Cybertruck, with no special provisions even to ease boarding for people with disabilities.

Forbes also wrote in a recent report comparing different companies’ self-driving cars that Tesla’s robotaxi would probably be a two-seater with side-by-side seats and a retractable steering wheel, because users will eventually need a steering wheel to drive outside the areas covered by the company’s support services.

Tesla Cybercab back and side view with open doors

However, the final design of the Tesla Cybercab did not resemble the self-driving cars of the startups Zoox or Zeekr.

With doors that open up like butterfly wings and a small interior, the car hosts only two passengers. As one might have guessed, the Cybercab looks a lot like the Cybertruck, but it is sleeker and more eye-catching than the controversial Tesla pickup.

Hardware

Tesla Cybercab robotaxi

Sugar-Design

So far, Tesla has not disclosed any information about the sensor suite that will be used in the robotaxis. The company describes its Autopilot technologies on its website, but what Elon Musk has so far described as a fully self-driving, driverless car will require more advanced sensors, software, and equipment than Autopilot.

Tesla Autopilot cars are equipped with multiple cameras and powerful “machine vision” processing; instead of radar, they use the camera-based “Tesla Vision” system, which provides a comprehensive view of the surrounding environment.

Autopilot then processes the data from these cameras with neural networks and advanced algorithms, detecting and classifying objects and obstacles and determining their distance and relative position.
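The camera-only pipeline described above can be sketched roughly as follows. This is a toy illustration of the general idea (fuse overlapping camera detections, keep distance and bearing per object); every name here is hypothetical, and none of it reflects Tesla’s actual code or API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "car", "pedestrian", "stop_sign"
    distance_m: float   # estimated distance to the object
    bearing_deg: float  # direction relative to the vehicle's heading

def perceive(camera_frames):
    """Fuse raw detections from several overlapping cameras.

    Each frame is a list of (label, distance, bearing) tuples; in a
    real system these would come from a neural network. When two
    cameras see the same object (same label, similar bearing), keep
    the closer distance estimate.
    """
    fused = {}
    for frame in camera_frames:
        for label, dist, bearing in frame:
            key = (label, round(bearing / 5.0))  # 5-degree bearing buckets
            if key not in fused or dist < fused[key].distance_m:
                fused[key] = Detection(label, dist, bearing)
    # nearest objects first, since they matter most for control decisions
    return sorted(fused.values(), key=lambda d: d.distance_m)
```

For example, a car seen at 40 m by the front camera and at 42 m by a side camera comes out as a single detection at 40 m, while a stop sign at a different bearing stays separate.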

Tesla’s Autopilot system is equipped with multiple layers of cameras and powerful “machine vision” processing and uses “Tesla Vision” instead of radar.

Driving functions also include two important features: 1. traffic-aware cruise control, which adjusts the car’s speed to the surrounding traffic; 2. the Autosteer system, which, together with cruise control, keeps the car in its lane and holds the right path, especially through curves in the road.
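As an illustration only (not Tesla’s actual control law), traffic-aware cruise control reduces to a simple rule: hold the driver’s set speed unless a slower lead vehicle is closer than a safe gap; the function and thresholds below are hypothetical:

```python
def traffic_aware_cruise(set_speed, lead_speed, gap_m, min_gap_m=40.0):
    """Toy traffic-aware cruise control.

    Hold the driver's set speed; if a lead vehicle is closer than the
    safe gap, match its speed (never exceeding the set speed) so the
    gap stops shrinking. Speeds in km/h, distances in meters.
    """
    if gap_m is None or gap_m >= min_gap_m:
        return set_speed               # road clear: cruise at set speed
    return min(set_speed, lead_speed)  # too close: match the lead car
```

So with the cruise set to 110 km/h and a car 30 m ahead doing 90 km/h, the toy controller drops to 90; on an open road it simply holds 110.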

These cars can park automatically, recognize stop signs, other road signs, and traffic lights, and slow down when necessary. Blind-spot monitoring, automatic lane changes, and smart summoning of the car via the mobile app are among their other features.

Despite all these safety measures, every Tesla Autopilot car still requires driver supervision, both under national law and by the company’s own admission. Until Tesla releases new specifications for the robotaxis’ sensors, cameras, and systems, no expert can assess their effectiveness or their risks.

Introducing the Robotaxis application

The image of the map on the Tesla Robotaxis application

Tesla

In April 2024, Tesla released a brief report on the mobile application of robotaxis, and Elon Musk also said that the first of these cars would be unveiled in August (this date was later postponed).

The first images of the robotaxi application showed a button to summon a taxi and, just below it, the expected waiting time for the car’s arrival. The second image showed a 3D map with a small virtual vehicle following a path toward a waiting passenger. The images closely resembled the Uber app, except that the car driving in them appeared to be a Tesla Model Y.

According to Tesla, passengers can adjust the car’s temperature to their liking while waiting for the taxi to arrive. Other details, such as the waiting time and the car’s maximum passenger capacity, were also visible in the screenshots.

Passengers can adjust the temperature inside the car and their favorite music through the Tesla application

According to the published screenshots, once the vehicle reaches the pickup point and the passenger boards, the map view switches to the destination. Passengers can also control the car’s music through the mobile application.
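The ride flow shown in those screenshots can be summarized as a small state machine. The states, transitions, and preference keys below are inferred from the screenshots as described, not taken from any published Tesla API:

```python
class RobotaxiRide:
    """Minimal sketch of the ride flow visible in the app screenshots.

    REQUESTED -> WAITING  (car assigned, ETA shown on the map)
    WAITING   -> RIDING   (passenger boards, map switches to destination)
    RIDING    -> DONE     (trip complete)
    """
    TRANSITIONS = {"REQUESTED": "WAITING", "WAITING": "RIDING", "RIDING": "DONE"}

    def __init__(self):
        self.state = "REQUESTED"
        self.cabin = {}  # remote cabin settings chosen from the phone

    def set_preference(self, key, value):
        # e.g. cabin temperature while waiting, music during the ride
        self.cabin[key] = value

    def advance(self):
        """Move to the next stage of the ride and return the new state."""
        self.state = self.TRANSITIONS[self.state]
        return self.state

ride = RobotaxiRide()
ride.set_preference("temperature_c", 21)  # adjustable while waiting
ride.advance()                            # car assigned -> WAITING
ride.advance()                            # passenger boards -> RIDING
ride.set_preference("music", "road-trip playlist")
```

The point of the sketch is that, from the passenger’s side, the app behaves like any ride-hailing service; the autonomy is invisible in the interface.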

The app looks like a standard ride-hailing app, with no mention of the robotic nature of the car, which does all the driving autonomously. Elon Musk said at the same meeting:

You can think of Tesla’s robotaxis as a combination of Uber and Airbnb.

According to Musk, part of the robotaxi fleet will belong to Tesla and the rest to consumers. Owners in the latter group can add their cars to the taxi fleet whenever they want and earn money that way.

Legal restrictions on removing the steering wheel and pedals

Tesla robot taxi without a steering wheel

The Independent

Despite all his earlier promises, Tesla’s CEO has been evasive in past interviews when asked whether the robotaxis would have traditional controls such as pedals and a steering wheel. Delays in early prototype development have cast heavy doubt on Tesla’s robotaxi plans, making the answer to that question more important than ever.

The reality is that, as of mid-2024, approving a vehicle without pedals and a steering wheel for public roads could in theory take months or even years, whereas a more conventional-looking vehicle could arrive much sooner.

In a letter to its shareholders, Tesla emphasized that it would need the federal government’s permission to deploy and operate robotaxis with such a radical, progressive design. The letter stated:

The timing of robotaxis depends on technological advances and regulatory approvals, but considering their very high potential value, we intend to make the most of this opportunity and are working hard on the project.

Elon Musk also did not respond to a question about exactly what type of regulatory approval Tesla is seeking.

Reporters then asked whether Tesla was seeking an exemption from the Federal Motor Vehicle Safety Standards (FMVSS) to develop and market a car without traditional controls. In response, Musk compared Tesla’s new product with Waymo’s locally confined self-driving cars and said that products developed only for local transportation are fragile. He added:

The car we produce is a universal product that works anywhere. Our robotaxis work well on any terrain.

Currently, car manufacturers must comply with federal motor vehicle safety standards that require human controls such as steering wheels, pedals, and side mirrors. These standards specify how vehicles must be designed before they can be sold in the United States. If a new product does not meet the requirements, the manufacturer can apply for an exemption, but the US government caps exemptions at 2,500 cars per company per year.

That exemption cap would, in theory, prevent any AV company, including Tesla, from mass-deploying purpose-built self-driving vehicles. Self-driving advocates have long pushed for legislation to raise the cap on driverless cars allowed on public roads, but the bill has apparently stalled in Congress over questions about the technology’s reliability and readiness.

Tesla will need an FMVSS exemption if it wants to remove the steering wheel and pedals from its self-driving cars

So far, only Nuro has managed to obtain an FMVSS exemption, allowing it to operate a limited number of driverless delivery robots in the states of Texas and California.

For example, General Motors’ Cruise unit applied for an exemption for its steering-wheel-free, pedal-free Origin shuttle, but it was never approved, and the Origin program was put on hold indefinitely.

Tesla Cybercab interior view and seats

Startup Zoox (a subsidiary of Amazon) declared its self-driving shuttles “self-certified,” prompting the US National Highway Traffic Safety Administration to launch an inquiry into this newly invented concept. Strict legal processes and licensing hurdles have led other companies in the field to avoid removing the steering wheel and pedals altogether. Waymo’s self-driving cars, for example, operate on public roads without a safety driver but keep traditional controls. Some time ago, the company announced that it would eventually introduce a new driverless car, but it gave no date and made no mention of FMVSS exemptions.

Thus, now that it is confirmed that the final Cybercab will be produced without traditional controls, Tesla must clear the same regulatory hurdles.

The challenges of mass production of Tesla robotaxis

Tesla's robot taxi design

Sugar-Design

Beyond persuading regulators and obtaining city traffic permits, many other challenges have stood in the way of the robotaxi project; Tesla has overcome some and has no answer yet for others.

For example, Tesla claims to have reached a reliable milestone in technology and hardware infrastructure, but incidents such as the 2018 crash of an Uber self-driving car, which killed a pedestrian, and two severe Cruise crashes in 2023 have left the public with a negative view of driverless cars.

On the other hand, the current infrastructure of most American cities is designed for conventional cars and must be upgraded to support large numbers of robotaxis. Installing smart traffic lights that can communicate with self-driving cars and feed them real-time information, for example, is one of the basic needs of robotaxis. Clear road markings and legible traffic signs are also vital for self-driving sensors.

The mass production of robotaxis requires changing the road infrastructure

Contrary to Musk’s claim that “the roads are ready for permanent robotaxis,” other companies’ self-driving cars still operate only in certain parts of the United States. As of July 2024, Tesla had about 2.2 million cars on American roads, far short of Elon Musk’s target of a 7-million-car fleet.

Second, Tesla’s self-driving cars are packed with advanced technology, including arrays of cameras, sensors, and data-processing systems, which raise not only production costs but also the cost of maintaining the equipment and keeping the software up to date.

In the past year alone, some Tesla customers have had to pay an extra $12,000 to upgrade their cars’ self-driving capabilities, with still no news of new features.

If the average price of a robotaxi is taken to be between $150,000 and $175,000, it is unclear how long it would take buyers to see the returns Elon Musk has promised. Unfortunately, Musk’s forecast of $30,000 in annual gross profit for owners who lend their cars to other passengers has no statistical or computational support.
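Taking the figures above at face value, a simple payback calculation shows why critics are skeptical; note that the $30,000 gross-profit figure is Musk’s own claim, not audited data, and this sketch ignores real-world costs:

```python
def payback_years(price, annual_gross_profit):
    """Years for an owner to recoup the purchase price, ignoring
    financing, maintenance, insurance, downtime, and depreciation."""
    return price / annual_gross_profit

# Musk's claimed $30,000/year against the article's price range:
low = payback_years(150_000, 30_000)   # 5.0 years at the low end
high = payback_years(175_000, 30_000)  # roughly 5.8 years at the high end
```

Even under these optimistic assumptions, an owner waits five to six years just to break even, before any operating costs are counted.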

Developing new insurance models for self-driving cars will be one of Tesla’s serious challenges

Developing suitable insurance models for self-driving cars will also be one of Tesla’s serious challenges, because insurers must be able to assess the risks and likely costs of robotaxis correctly; Tesla will therefore have to work with insurance companies from several angles toward a comprehensive plan that satisfies both customers and insurers.

Beyond technology and law, Tesla must win the public’s trust in its new fully automatic, driverless cars. That will require advertising campaigns and extensive training programs to familiarize consumers with the company’s technologies and ease end users’ concerns.

The status of the project in 2024 and the concern of shareholders

Tesla Cybercab / Cybercab in the city

Tesla

In 2024, Elon Musk postponed the unveiling of the robotaxis first to August 8 and then to October 10. In April, he told Tesla investors, frustrated by the uncertain progress of the cars’ production:

All the cars that Tesla produces have all the necessary hardware and computing for fully autonomous driving. I’ll say it again: all Tesla cars currently in production have all the prerequisites for autonomous driving. All you have to do is improve the software.

He also said it does not matter if these cars are less safe than new cars, because Tesla is raising the average level of road safety. A few weeks later, he released another video summarizing Tesla’s goals in three steps:

  • Completing the technological features and capabilities of fully autonomous vehicles
  • Improving car technology to the point where people can ride driverless cars without any worries
  • Convincing regulators that the previous point is true

While other self-driving car companies go from city to city obtaining the necessary permits and proving the safety of their products to expand their operating areas, NBC recently revealed that Tesla has not even obtained a license to test these cars in California or Nevada, the states where it employs the most people.

Tesla has not yet received permission to test robotaxis in the US states

In July, Musk told investors that anyone who does not believe in the efficiency and value of robotaxis should not be a Tesla investor. Meanwhile, the California Department of Motor Vehicles recently filed a lawsuit against Tesla, accusing the company of falsely advertising its Autopilot and Full Self-Driving systems.

Besides addressing the monthly cost and upfront payments for fully autonomous cars, the case notes that both systems still require drivers to stay behind the wheel and control the vehicle’s steering and braking.

The unveiling of the Cybercab in October 2024 seemed to ease Tesla shareholders’ minds somewhat, but on the night of the company’s big event, some of them voiced concern to the media about Musk’s uncertain timelines.

What do experts and critics say?

Some critics say Elon Musk’s robotaxi simply cannot be produced and released on his schedule. Pointing to Waymo vehicles that already make 50,000 road trips every week, they consider Tesla’s silence on requests for the vehicles’ technical details unacceptable. In their view, Musk just keeps making vague promises about the car.

In response to critics, Elon Musk repeats one line: Tesla is fundamentally an artificial intelligence and robotics company, not a traditional automaker. So why does he not try to clarify the obstacles standing in the way of realizing this long-held idea?

Technology academics, on the other hand, point out that Tesla’s systems have not reached level 5 autonomy, the condition in which no human control is required. The MIT Technology Review writes:

After years of research and testing of robotic taxis by various companies on the road, mass production of these cars still has heavy risks. To date, these vehicles only travel within precise, pre-defined geographic boundaries, and although some do not have a human operator in the front seat, they still require remote operators to take over in an emergency.

Ram Vasudevan, associate professor of robotics and mechanical engineering at the University of Michigan, also says:

These systems still rely on remote human supervision for safe operation, which is why we call them automated rather than autonomous; but this version of self-driving is much more expensive than traditional taxis.

Tesla is burning investors’ money directly to produce robotaxis, so it is natural that pressure for more reliable results will grow. Costs and potential revenues will only balance once more robotaxis are on the roads and can truly compete with ride-hailing services like Uber.

Despite numerous legal, infrastructural, and social challenges, the unveiling of the Cybercab and Robovan puts not just self-driving technology but the entire transportation industry on the threshold of a huge transformation. The arrival of Tesla’s robotaxis could even disrupt the traditional taxi service model, but how long will the transition from unveiling to actual launch take?
