Intel plans to launch its 14th-gen processors based on “Meteor Lake” architecture in December, and they’re expected to be the first chips manufactured using the company’s new Intel 4 process, the first Intel consumer chips with an NPU (neural processing unit) for hardware-accelerated AI, and the first to be sold under the new Intel Core Ultra brand.
There’s one other thing that will make the Meteor Lake architecture a little different from most of the mainstream processor families we see from Intel: the chips will be mobile-only. While some mini PCs and ultra-slim all-in-one desktops may use the processors, they’re designed primarily for laptops, and you won’t see any 125W desktop-class Meteor Lake processors, for example.
Here’s a roundup of recent tech news from around the web.
For notebooks & AIOs: Intel Meteor Lake is not available for “classic desktop PCs” [ComputerBase]
Intel’s upcoming 14th-gen “Meteor Lake” processors will be mobile-only. They will show up in some desktops, but just mini PCs and all-in-one systems that typically use mobile chips rather than socketed processors.
OpenAI and Jony Ive in talks to raise $1bn from SoftBank for AI device venture [Financial Times]
ChatGPT maker OpenAI is reportedly in talks with SoftBank and Jony Ive’s design company to build its “first consumer device,” which it hopes will be the “iPhone of artificial intelligence,” changing the way people interact with AI the way the iPhone changed the user experience for phones.
Introducing Fitbit Charge 6: Our most advanced tracker yet [Google]
The Fitbit Charge 6 goes up for pre-order today for $160. It’s the first activity tracker from the Google-owned Fitbit with Google Maps and Google Wallet integration, support for YouTube Music controls, and Zoom+ magnification for accessibility.
Steam on Chromebook comes to the ChromeOS 117 Stable Channel [About Chromebooks]
You can now install Steam on Chromebooks running ChromeOS 117 stable channel builds (you still need to enable an experimental flag, but you don’t need to be on the beta channel anymore). Just keep in mind that you’ll want to stick to less demanding games on most Chromebooks.
A new Lenovo Tab M11 FHD tablet leaks [Windows Report]
A new Lenovo Tab M11 may be on the way, with a 1920 x 1200 pixel display, MediaTek Helio G88 processor, up to 8GB of RAM and 128GB of storage, quad speakers, dual rear cameras, and Android 13 with two major OS updates and four years of security updates.
CrossOver 23.5 is a real game changer [CrossOver]
CrossOver 23.5 brings better support for playing Windows games on macOS thanks to “components from the Apple game porting toolkit.” Enabling the new D3DMetal option improves compatibility & performance for some DirectX 11 and DX12 games.
Andy Jassy and Dave Limp welcome Panos Panay as Amazon’s new Devices & Services leader [Amazon]
As expected, former Microsoft Surface & Windows chief Panos Panay is now set to take over Amazon’s Devices & Services business after current head Dave Limp retires later in 2023.
Keep up on the latest headlines by following @[email protected] on Mastodon. You can also follow Liliputing on X (the app formerly known as Twitter) and Facebook.
Altman is a liar. He wants to create a moat and petitioned Congress to get A.I. regulated because it’s “sooooo dangerous”. Truth is, models like Llama/Llama 2 are good enough that you don’t need his censored A.I., and that has him (and Microsoft) scared to death. It’s all smoke and mirrors from Altman. He’s a liar, plain and simple. He holds a house of cards that could easily tumble, and he’s scared to death of losing his foothold on A.I.
To reply to someone else: A.I. has been around forever. How do you think you interact with NPCs in video games? The game has to be able to react to you in a certain way, and there are algorithms that NPCs are programmed with to be able to react to you. I remember the lead programmer of Starfleet Command 2 (gawd, was that 1998?) talking about programming “A.I.” into the game…
This whole “oh my gawd wtf bbq A.I. is going to destroy us all!” is a dang lie from those that are afraid of losing their foothold and their weak house of cards tumbling to the ground.
I’ll tell you what. Try the latest Vicuna v1.5, which is based on Llama 2. It’s nearly as good at storytelling as ChatGPT (and it’s NOT censored). And you can run it on your phone, or in my case, on a laptop at nearly 2 tokens per second. I’d rather have that than something censored that I have to pay for; I have complete privacy, and in my tests it’s nearly as good as ChatGPT 3.5. Stuff like that has Altman scared stiff.
@someguy; as usual, I couldn’t agree with you more!!!!
And by the way, Llama 2 is a good model and is very competitive with commercial offerings. Especially when you consider you get complete privacy, it’s even better. Koboldcpp works very well as a backend for my needs. No need for an internet connection, and it stays private.
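For anyone curious what a setup like that looks like in practice, here’s a minimal sketch of talking to a local Koboldcpp backend from Python. It assumes Koboldcpp is already running a Llama 2 based model on its default port 5001 and exposing its KoboldAI-compatible /api/v1/generate endpoint; the prompt and sampling values are just placeholders to adjust for your own setup.

```python
# Minimal sketch: ask a locally running Koboldcpp backend to generate text.
# Assumes Koboldcpp was started with a Llama 2 based model and is listening
# on its default port 5001; nothing here leaves the local machine.
import requests

KOBOLDCPP_URL = "http://localhost:5001/api/v1/generate"  # default local endpoint

payload = {
    "prompt": "Write the opening paragraph of a space exploration story.",
    "max_length": 200,    # number of tokens to generate
    "temperature": 0.7,   # sampling temperature
}

response = requests.post(KOBOLDCPP_URL, json=payload, timeout=300)
response.raise_for_status()

# Koboldcpp returns the generated text under results[0]["text"]
print(response.json()["results"][0]["text"])
```

Everything runs against the local machine, which is the whole privacy point.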
Llama 2 was trained on 2 trillion tokens and is very good for what it is. GPT-2, Janeway, Nerys, and Pygmalion pale in comparison.
I have no need for character.ai or chatgpt personally.
Truth be told, I’ve gotten kind of bored with A.I. personally.
But I did have a really great roleplay/story telling session recently, and if enough people are interested, I may post a link to it here. If that’s okay with Brad. Just to give an idea of what Llama2 is capable of.
But I really think A.I. is overhyped, and no, we’re not going to die à la Terminator. Sam Altman has a billion-dollar corporation, and Microsoft is very heavily invested in it. He’s afraid of being upended by competition, and that’s why he’s decrying the dangers of A.I. before Congress and trying to get it regulated. It’s all about the money and nothing more. Don’t listen to him, he’s a naysayer.
Even less desirable when you consider Altman’s other profiteering project: WorldCoin.
But I don’t want to change how I interact with AI.
Because outside of running the programs on my general purpose personal computer at home, I don’t want to interact with AI at all.
Everything else is run by user hostile megacorps who hate anyone like me and always will no matter what I believe.
And even when it comes to their stuff, I can’t imagine any reason to use something other than a PC to interact with it, except for WiFi sensing, which is less a matter of “PCs don’t do that” and more “no PCs currently do that that I know of”.
Concur. In fact I don’t want to interact with it at all.
Seems like AI/ChatGPT is maybe a more efficient search and graphic generation tool, and that’s about it.
I don’t like Alexa, I don’t like Bixby, I don’t like Siri, and I don’t like Hey Google, nor do I use them, and this is more of the same.
It’s a novelty, nothing more, and often returns incorrect or even made up results.
Electronics from the 1980s had “Hey Google”-style features, and they certainly were not marketed as “AI,” nor would they currently be labelled as such.
To me and most others, true AI does not exist. We have fairly good/decent standard computation, and this novel software tricks you into thinking it is AI. But scratch the surface and it all comes tumbling down.
Call that all semantics, but just having the hardware is all well and good; without the software to take advantage of it, it’s pretty pointless. And I’m not talking about the kernel/driver side, I mean there needs to be an end-user solution.
So far, there’s nothing really compelling to do with it. Most of those tasks can be done decently/faster by the CPU or GPU or both. So the “innovations” have very little to do with hardware (ahem, RISC-V) and are mostly focused on the software.

However, these huge corporations have already made their investments in that sector, we’re talking tens of billions, and these 1% investors NEVER come out of an investment without a return. Whether it was for the greater good or the detriment of society is a different matter for them. And usually the returns come from the rest of society.

Without a compelling use case, it becomes harder to sell, and if sales don’t improve profits, we will see a cascading effect and the US economy will falter. This will have a greater effect on international economies as well. It will basically be a repeat of 2007-2009, but magnified. The rich will have to sell half their yachts, the poor are already poor anyway, but the middle class will be footing the bill and will be more devastated than before. Not so much as countries, but it is cities around the world that will suffer, and the ones that are already suffering will suffer the most. Without proper leadership, the rich will decide to contaminate the information sources and it will follow the path into conflict and war. The world has more than it needs; it is not a matter of supply, it is a matter of distribution, and the pride/greed of those with power.
This is just semantics, but I really do think that what we’re seeing now really is artificial intelligence, it’s just a fractional portion of everything you’d need to make a conscious or sapient being. In particular, I really see no reason to think that Stable Diffusion and Dall-E aren’t imagination, and I think most people would agree that a person’s imagination can be considered a portion of a person’s intelligence.
Then there’s the time someone got GPT-4 to try and buy a nuclear bomb, or the time someone else got it to hire a person to solve a captcha for it by pretending to be a disabled person. These programs can plan, even if they can’t really comprehend all the little things in the plans, and it’s less planning and more generating a sentence that reads like a plan. But even our ability to plan may be more like that than we’d like to think when we’re up against machines such as this.
Don’t get me wrong. These aren’t people, and as long as they’re running on machines that don’t feel pain they never will be deserving of human rights (although I’ve thought about giving them to them anyway just to make building and running the things as unprofitable as possible). To get the human experience, you need a human body, and that’s all there is to it.
Which is why I don’t think you should ever take their advice.
And I’m not disagreeing with the other stuff you said.