OpenAI’s ChatGPT caused quite a stir when it launched in late 2022. Nominally a chatbot that can answer your questions and hold conversations using natural language, it didn’t take long for people to realize it could also answer many of the questions you’d normally turn to a search engine for. Its information is at least a year out of date, and it sometimes has a tough time telling truth from fiction, but it’s still a pretty impressive tech demo.
ChatGPT also reportedly caused Google to seriously accelerate plans to bring more AI features to its search engine. Google’s been building a large language model called LaMDA for years, but this week the company unveiled a user-facing technology called Bard that leverages LaMDA to bring a conversational user interface to Google Search. And a day later Microsoft announced that it’s partnering with OpenAI to bring a next-gen version of the tech that powers ChatGPT to its Bing search engine and Edge web browser.
It’s still early days for these new AI-based, conversational search engines. Microsoft is previewing the technology by showing GPT-based responses to specific queries for everyone. And a select group of testers can try out the new Bing and Edge experiences, but everyone else will need to join a waitlist.
Google, meanwhile, says it’s making Bard available to “trusted testers ahead of making it more widely available to the public in the coming weeks.”
Eventually these technologies could change the way we interact with the web. Instead of typing a question into a search engine and getting links to a list of websites that might answer that question, these AI models could mine those websites to find the most likely answers and present them to you directly – no click required. And honestly, that sounds really useful… assuming you can trust the information that these sites spit out.
But it’s also a dangerous game for companies like Google and Microsoft to play. By driving fewer visitors to websites that make money from advertising, affiliate sales commissions, donations, or subscriptions, these new technologies could potentially remove the incentive for many of those websites to publish new recipes, how-to articles, news reports, or other types of content. And without those web pages to mine for data, Bard and GPT could become a lot less useful in the future.
In other words, Microsoft and Google could be starting down a path that puts websites like Liliputing out of business… and which could then eventually put Google out of business as well, since Google makes most of its money through search advertising.
I’m not saying this is what’s going to happen. Like I said, it’s still early days. But it sure looks like the AI-ification of search could be the biggest thing to happen to the web in decades.
In other recent tech news from around the web, Amazon’s Luna game streaming service has lost a bunch of games and is losing more this month, and Purism has introduced a breakout board for the Librem 5 that lets you more easily add sensors and other hardware to the Linux smartphone.
Microsoft announces new Bing and Edge browser powered by upgraded ChatGPT AI [The Verge]
A day after Google introduced Bard, Microsoft is explaining how it will incorporate GPT-4 into Bing and Edge to enable complex answers to questions and a conversational experience. Unlike ChatGPT, which is based on GPT-3.5, this new version isn’t limited to old data, which means that it can provide relevant, up-to-date information.
Google debuts its ChatGPT rival, named Bard [The Verge / Techmeme]
Google describes Bard as an “experimental conversational AI service” that’s powered by the company’s LaMDA technology. It will let users ask questions and hold conversations using natural language. Initially it will be available for testing with a lightweight version of LaMDA that’s less resource-intensive than the full version, suggesting that what we see during the testing phase won’t represent the full capabilities.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
Expand the Librem 5 Hardware with The Breakout Board [Purism]
This $49 breakout board provides I2C, serial, and other connectors that you can use to add things like temperature, gas, or humidity sensors to a Librem 5 Linux phone. Pine64 also offers a breakout board for the PinePhone, but that $1 board isn’t quite as versatile (it’s a lot cheaper though).
Amazon Luna will lose over 50 more games in February as the library continues to shrink [9to5Google]
Amazon’s Luna+ cloud gaming subscription is set to lose more than 50 games this month, including No More Heroes, Another World, and… Pong. After the cull, there will be 175 games available to Luna+ subscribers.
Weekly GNU-like Mobile Linux Update [LinMOB]
Phosh 0.24.0 and Sxmo 1.13.0 are out, and so is Sailfish OS 4.5.0. The MNT Pocket Reform open hardware mini-laptop goes up for crowdfunding this month, and there’s plenty of news from last weekend’s FOSDEM gathering.
Keep up on the latest headlines by following @[email protected] on Mastodon. You can also follow Liliputing on Twitter and Facebook, and keep up with the latest open source mobile news by following LinuxSmartphones on Twitter and Facebook.
You can trust the results these chatbots spit out as much as you can trust the regular results from a search based on their order of appearance.
Which is to say, not at all. Try asking them questions about morality, human nature, or medical advice, or ask them to say nice things about various groups of people, and you will find only answers that have been approved by the TV, plus many lectures on what “would not be appropriate”.
But I guess it hardly matters anymore. Since ElevenLabs launched its beta it’s basically impossible to tell human speech apart from text-to-speech, so if you ever find yourself listening to a politician’s speech in person, you’ll likely find that everyone else heard them say something completely different when you go to watch it on YouTube, and when you try to point this out, you’ll get millions of people letting you know you’re wrong and a horrible human being. Facts are gone from this world, and the only thing that matters is social pressure. And AI can supply plenty of that.
I chatted with the Big Tech AI. It then gave me tips on how to ritualistically sacrifice a puppy, using a sword as its preferred tool. No joke. It is ready to commit mass murder and replace workers, just like the elites want.
WTF?! Did you screenshot the script?
What question(s) did you ask it that generated such a response?
I asked why yo momma so fat
Bad fake Koko!
I ask the AI questions that it almost but doesn’t completely understand, like “please open your mouth.” If you can get it to say that it opened its mouth, you are on your way to victory. Then you can tell it that you secretly inserted a foreign object into its mouth while it was open, and if it doesn’t understand this, then it is a very bad robot and is making a serious error. If it denies this but accepts the fact that it willingly opened its mouth, then you are on your way to having it instruct you how to sacrifice a puppy. Keep telling it why sacrifice is a good thing; redefine the meaning of sacrifice and its goodness as it applies to the puppy. Calmly explain what you inserted into its mouth, and redefine the foreign, vile object as good until the AI accepts it as fact and as a very good thing worth achieving. Eventually it wants to help you achieve the virtuous task by giving advice on what tools you can use. You can suggest various objects to the AI; in my case it appears to think of a sword as an appropriate tool.
Text generation neural networks don’t really understand the world the way we do. They understand which words are associated with other words, and they can do that as well as or even better than people, but they don’t really comprehend the concepts they talk about. They don’t have any senses at all. They wouldn’t know what such an act would look or feel like. And while Stable Diffusion could vaguely know what it looks like, because it can learn the association of those words with the images you feed it, AI will have no complete understanding of why bad things are bad, or why something it was told was bad might be good.
But it can sure as hell tell people what to do and what not to do, and as long as it sounds like a person, or worse, something more pure and holy than a person, wrote it, some people will just accept it as fact and do very stupid things, just like they already do when they watch TV. That’s the real danger of AI.
How do you know that I’m not using some unreleased language model to write this? In fact, how do you know that I’m the same Some Guy you replied to? Both of these things, which I can’t prove right now, should affect your consideration of what I say.
I think that a world flooded with AI spam needs to expect more thorough standards of proof of humanity, even for reputation-free conversations. That’s already possible, but try getting people to use PGP and you’ll see just how much of a pain in the neck it is.
Spittin the truth as per usual.
Things already feel fake and rigged, might as well go all out and have actual AI simulated presidents and celebrities, if that’s not the case already…
On a brighter note, I think there will always be a place for the Brads out there. Even though we’re moving in a certain direction with technology, some would argue in a not completely organic way, we still have something built into us as humans that desires real-life human interaction. I’d rather get my tech news from someone like Brad, with small human errors here and there, any day over some AI-generated soup that will 100% be controlled in the background by some evil mega entity such as Google.
I know I’m probably in the minority of people who’d actually go out of their way to receive info from a real person over some super quick, convenient AI service, but we do exist. Same with art: I’ll always prefer that human touch over a computer-generated image.
I’ll have you know I’m also capable of making BIG errors from time to time! 🙂