Over the weekend folks noticed an unusual processor name in the online database for an Ashes of the Singularity benchmark. The chip is an “Intel Core Ultra 5 1003H” processor, which is described as an 18-core chip with integrated graphics.
It’s unclear if this name is real or not (there’s some evidence that it could be, since it shows up in other places as well), but it looks like a possible rebranding for Intel’s upcoming 14th-gen Core chips based on Meteor Lake technology. And it could be just the latest in a long line of moves that make it harder than ever to know what to expect from a processor in terms of performance or power efficiency.

Update: Intel Global Communications Director Bernard Fernandes hasn’t confirmed the names of any upcoming chips, but he has confirmed that the company will unveil “brand changes” with the launch of its 14th-gen chips.

Intel has been using Core i3, Core i5, Core i7, and Core i9 branding for its chips for about 15 years. Generally speaking, the higher the number, the more powerful the processor.
But that math has gotten a lot more complicated over the years. First, because this year’s Core i5 processor might actually deliver better performance than a Core i7 chip from a few years ago. Second, because a Core i7 desktop chip might outperform a Core i9 laptop chip. And third, because even those things don’t tell the complete story: Intel also adds a series of numbers and letters after the Core i3/i5/i7/i9 label that are supposed to give you an idea of things like chip generation and performance.
For a few years Intel also added G1, G4, and G7 to the end of chip names so you could get a sense of whether a chip had Intel UHD or Intel Iris Xe graphics, and how powerful that integrated GPU would be. But the company did away with that when launching its 12th-gen “Alder Lake” chips in 2021.
Oh, and then Intel decided to blur the lines between its “Core” processors and lower-cost, lower-performance chips that used to be branded Celeron and Pentium Silver: the latest chips in that family are based on “Alder Lake-N” architecture and include names like Intel Processor N100, Intel Processor N200, and Intel Core i3-N305.
You’ve always had to know how to read the digits and letters after the Core i3/i5/i7/i9 to figure out whether you were looking at a new chip or an old one, or where the processor fell with respect to its contemporaries in terms of performance. Now you also need to know how to tell if you’re looking at a Core i3 chip based on “N” architecture (which means it’s all Efficiency cores with no Performance cores) or “U” or higher tech (with a hybrid architecture that combines Efficiency and Performance cores).
So let’s take a look at that new “Core Ultra 5 1003H” processor. If that’s a real name, does it solve any problems? The only one that I can see is that it gives Intel an opportunity to reduce the number of digits in its model numbers.
Compared with a current-gen chip like the Intel Core i5-13500H, for example, it looks like “Ultra 5” could replace “i5” and “1003H” could replace “13500H.” But that 13 actually tells us something – that this is a 13th-gen chip, released in or around 2023.
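To put that in concrete terms, here’s a minimal sketch (my own illustration, not any official Intel decoder, and based on how 12th and 13th-gen mobile names have been used) of how much of a current name can be read mechanically, and how little the rumored name gives us to work with:

```python
import re

# Rough meanings of common Intel mobile suffix letters (not exhaustive,
# and based on how Intel has used them through its 13th-gen mobile chips).
SUFFIX_MEANINGS = {
    "U": "low-power laptop chip",
    "P": "mid-power laptop chip",
    "H": "higher-power laptop chip",
    "HX": "highest-power laptop chip",
}

def decode_current_name(name: str) -> dict:
    """Decode a current-style mobile name like 'Core i5-13500H'.

    For 12th and 13th-gen mobile parts, the first two digits of the model
    number are the generation, the remaining digits are the SKU, and the
    trailing letters indicate the power/segment class.
    """
    match = re.match(r"Core i([3579])-(\d{2})(\d+)([A-Z]*)$", name)
    if not match:
        raise ValueError(f"Unrecognized name: {name}")
    tier, gen, sku, suffix = match.groups()
    return {
        "tier": f"Core i{tier}",
        "generation": int(gen),                 # 13 -> 13th gen, roughly 2023
        "sku": sku,                             # relative position within the tier
        "class": SUFFIX_MEANINGS.get(suffix, suffix),
    }

print(decode_current_name("Core i5-13500H"))
# {'tier': 'Core i5', 'generation': 13, 'sku': '500', 'class': 'higher-power laptop chip'}

# For the rumored "Core Ultra 5 1003H", the tier ("5") and the suffix ("H")
# still read the same way, but what the "100" and "3" encode is anyone's guess.
```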
Again, I don’t know if Intel is really planning to drop the “Core i” branding and/or switch from 5-digit model numbers to 4-digit ones. Given how confusing things have gotten over the years, a rebrand wouldn’t necessarily be a bad thing. But if this is what Intel’s rebrand looks like, it could just add to the confusion rather than alleviating it.
Oh, and in case you thought this might just be an Intel problem, AMD’s recent changes to its chip-naming scheme are arguably even worse.
Last year the company unveiled a new system for mobile chip names that’s ostensibly designed to tell you a little about a chip by giving each digit in the name a meaning. But the most important number is the third digit, which tells you the CPU architecture, while the first digit is practically meaningless, since it only tells you the year a processor was released.
While that might have been useful information a few years ago, when AMD was releasing newer, more powerful chips every year, the company has since made a habit of mixing and matching current and previous-gen CPU architectures.
You practically need a decoder ring to figure out that a Ryzen 5 7520U processor features Zen 2 CPU cores, while a Ryzen 3 7330U chip has Zen 3 cores and delivers stronger single-core and multi-core performance. And both of those use previous-gen technology: if you want a chip with Zen 4 CPU cores, you need a Ryzen 7040 or 7045 series chip.
Oh, and since AMD is also mixing and matching its current and previous-gen GPU architectures while leaving any of that information out of its chip names, you really do need to look at a list of all the company’s chips to figure out which one has the combination of CPU and graphics technologies that you want.
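For what it’s worth, that “decoder ring” is small enough to write down. Here’s a minimal sketch based on AMD’s published digit meanings for its 2023 mobile naming scheme (my own illustration, CPU side only; as noted above, the GPU generation isn’t encoded in the name at all):

```python
import re

# Third digit of the model number -> CPU architecture, per AMD's 2023
# mobile naming scheme. "3" covers both Zen 3 and Zen 3+ parts; the
# fourth digit (0 vs. 5) is what separates them.
ARCH_BY_THIRD_DIGIT = {
    "1": "Zen / Zen+",
    "2": "Zen 2",
    "3": "Zen 3 / Zen 3+",
    "4": "Zen 4",
}

def decode_ryzen_mobile(name: str) -> str:
    """Rough decode of a 2023-style mobile Ryzen name like 'Ryzen 5 7520U'."""
    match = re.search(r"(\d)(\d)(\d)(\d)([A-Z]+)", name)
    if not match:
        return f"{name}: unrecognized format"
    # Digits are: model year, market segment, CPU architecture, feature tier.
    _year, _segment, arch, _feature, suffix = match.groups()
    cpu = ARCH_BY_THIRD_DIGIT.get(arch, "unknown architecture")
    return f"{name}: {cpu} CPU cores, '{suffix}' power class, GPU generation not encoded"

for chip in ("Ryzen 5 7520U", "Ryzen 3 7330U", "Ryzen 7 7840U", "Ryzen 9 7945HX"):
    print(decode_ryzen_mobile(chip))
```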
This is what you get when you replace Engineers with Marketeers. And you ain’t seen nuttin’ yet – wait ’til they replace Marketeers wit’ AI.
I agree. I was managing to follow along when it went Celeron, Pentium, i3, i5, i7 (and then look at the number and GHz to see where it stacks up within it). But it’s getting bad, and AMD is worse. Even ARM with their core names is confusing to me, even though they make an admirable effort to keep it consistent.
Fun fact: the new N100 SBCs are showing benchmarks close to the Core i5 chips that were common when I built my last PC (2013). That’s pretty neat!
A few years ago, that useless source of information known as Consumer Reports was advising people to get at least an i5 processor, as if that meant anything, and as if that was actually good advice for all users.
Trying to confuse consumers should be made a criminal offense.
Capitalism wouldn’t survive such a drastic, transparency-first change.
I think a unified system like this works:
Watts – SoC generation and nomenclature by year (2020, 2021, 2022, etc.)
~~1W: Apple M10, M11, M12, M13, M14… … (for 2in / watches, wearables)
~~3W: Apple M20, M21, M22, M23, M24… … (for 5in / phones, small gadgets)
~~5W: Apple M30, M31, M32, M33, M34… … (for 7in / phablets, small tablets)
~~7W: Apple M40, M41, M42, M43, M44… … (for 9in / large tablets)
~10W: Apple M50, M51, M52, M53, M54… … (for 11in / ultrabooks)
~15W: Apple M60, M61, M62, M63, M64… … (for 14in / laptops)
~25W: Apple M70, M71, M72, M73, M74… … (for 17in / thick laptops)
~45W: Apple M80, M81, M82, M83, M84… … (for 29in / iMac, All-in-One office pc)
~95W: Apple M90, M91, M92, M93, M94… … (for Mac Pro, Desktops with strong cooling)
You can take that concept, and apply it to Intel or AMD:
AMD Z100, Z1100, Z1200, Z1300,
…and add extra SKUs in between, like Z100 vs Z103, Z107, Z110, Z115, etc., with each step being a newer and more capable chip, if not in performance then in features or efficiency.
Same deal with Team-Blue; Intel i100, i110, i1200, i1300… … …
If there’s a will, then there’s a way. We CAN make things simpler, more intuitive, and better. It’s just that these corporations DON’T want that to happen. They prey on people’s misunderstandings, to either upsell them a device they didn’t need, or undersell them something they didn’t comprehend. The layman won’t understand how much speed, efficiency, or how many features you gain going from an RTX-3 to an RTX-4, RDNA2 to RDNA3, Zen3 to Zen4, or Intel-12 to Intel-13. Sometimes even I forget, and I’m someone in the know.
The funny thing is that we routinely hear these chip makers promote a new, simpler naming convention every other year, but no matter how much they try to simplify, they fall back into messy, marketing-led chaos the second nobody is watching.
How on earth is AMD putting 2 cores of Radeon 610M RDNA 2 graphics into the Ryzen 9 7945HX, but 12 cores of Radeon 780M RDNA 3 graphics into the Ryzen 9 7940HS?
This must be a typo; the graphics are not only multiple tiers worse, but also a previous generation!
Nope – the expectation is that systems with “Dragon Range” processors will most likely also have discrete graphics, so they can skimp on the integrated GPU and focus on the CPU.
Intel takes a similar approach. Its best-in-class CPUs for gaming laptops typically have Intel UHD graphics rather than Iris Xe.
That’s not a typo; that’s correct. It’s probably because these products will never be sold in a laptop that lacks a discrete GPU. The integrated Radeon 610M GPU is probably only there to offer video encoding/decoding support.
These 7x45HX products from AMD are a new product family called the Dragon Range. They’re probably going to only be offered to manufacturers who want to put them in gaming laptops, so the integrated GPU is pointless.
It can always get worse. They haven’t started throwing “pro”, “s”, and “plus” randomly into the SKUs yet.
Bite your tongue!
They could also start adding more materials to the names — we’ve already seen Pentium/Athlon “Silver” and “Gold”; they could go the route of 80+ certifications and add “Bronze”, “Platinum”, “Titanium”…maybe with “Diamond” down the line. After a few years, they could get more esoteric: “Palladium”, “Vanadium”, “Osmium”, “Zinc”. I can imagine a world where I have to choose between an AMD Ryzen 6 Technetium 65950XU Plus and an Intel Core i10+ 17-4402K Cobalt Iris, each of which has a choose-your-own-adventure bevy of current and last-gen features, integrated graphics levels, power envelopes, multi-threaded vs. single-threaded performance tuning, overclocking headroom, virtualization features, memory compatibility, and integrated scents (or whatever nonsense they’ve decided to package onto the die by 2035).
In all seriousness, though, this post is spot-on. Every time I look at a computer I have to wade through a pile of information to figure out what the processor name actually entails. There has to be a better way.
Wait until there is a distinction between HX and Hx.
HX chips are unlocked.
Hx chips are unlockable for a fee.