Google unveiled its answer to ChatGPT in February. Now it’s available for testing… by a select few folks in the US and UK who add their names to a waitlist. Meanwhile Microsoft’s Bing AI chatbot still technically has a waitlist, but the company seems to be granting access immediately to anyone who signs up from a supported region. And now Microsoft is adding the ability to generate images from text prompts, using an “advanced” version of the technology that powers OpenAI’s DALL·E 2.

While those two companies duke it out in the search space, Adobe is also getting in on the AI action. The company’s new Firefly can generate graphics and text effects… and Adobe says using it is less likely to get anyone sued, since the generative AI model was trained on “millions of professional-grade, licensed images in Adobe Stock along with openly licensed content and public domain content where the copyright has expired.”


Here’s a roundup of recent tech news from around the web.

Try Bard and share your feedback [Google]

Google is letting users in the US and UK sign up (there’s a waitlist) to try its Bard AI chatbot, which uses a large language model to respond to questions and prompts. It’s not always accurate and Google calls it a complement to search, not a replacement.


Create images with your words – Bing Image Creator comes to the new Bing [Microsoft]

The same day Google launched a preview of its AI chatbot, rival Microsoft added an AI Image Creator to its Bing chatbot. Now you can ask Bing questions and get written or visual responses, powered by OpenAI’s GPT-4 and DALL·E.
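Bing’s image feature itself isn’t scriptable, but the DALL·E model behind it is exposed through OpenAI’s public API. Here’s a minimal sketch using the (pre-1.0) openai Python package; the prompt and API key are placeholders, and this hits OpenAI’s endpoint, not Bing’s:

```python
# pip install openai
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# Ask DALL-E for one 512x512 image matching a text prompt.
response = openai.Image.create(
    prompt="a watercolor painting of a lighthouse at dawn",
    n=1,
    size="512x512",
)

# The API returns a short-lived URL for each generated image.
print(response["data"][0]["url"])
```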


Bringing Generative AI into Creative Cloud with Adobe Firefly [Adobe]

Adobe is also getting in on the AI art space with the launch of Adobe Firefly, which generates images and text effects based on written prompts. Unlike some other AI models, it’s trained on licensed and public domain content.

aCropalypse vulnerability affects Windows Snipping Tool [@David3141593]

So that aCropalypse vulnerability that allows you to restore a full image from a screenshot cropped using Google’s Markup utility for Pixel phones? It may also be present in Microsoft’s Windows Snipping Tool. The flaw comes down to the editor saving the cropped image over the original file without truncating it, leaving data from the original version intact after the end of the new image.
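For the curious, here’s a minimal sketch of how such leftovers can be spotted: a well-formed PNG ends at its IEND chunk, so any bytes after the first IEND marker are stale data from an earlier save. (Actual recovery tools go further and re-parse that stale image data.)

```python
# Check a PNG for aCropalypse-style leftovers: bytes after the IEND chunk
# are remnants of an earlier (possibly uncropped) version of the file.
import sys

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
IEND = b"\x00\x00\x00\x00IEND\xaeB\x60\x82"  # zero-length IEND chunk + CRC

def trailing_bytes(path: str) -> int:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(PNG_MAGIC):
        raise ValueError("not a PNG file")
    end = data.find(IEND)
    if end == -1:
        raise ValueError("no IEND chunk found")
    return len(data) - (end + len(IEND))

if __name__ == "__main__":
    extra = trailing_bytes(sys.argv[1])
    print(f"{extra} trailing bytes after IEND" + (" - possibly recoverable!" if extra else ""))
```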

Intel graphics chief Raja Koduri leaves after five years battling Nvidia and AMD [The Verge]

After spending five years as the head of Intel’s graphics department (spearheading the company’s drive into discrete GPUs), Raja Koduri is stepping down to launch a generative AI startup. Koduri previously worked at AMD and spent a 4-year stint at Apple.

Oppo Pad 2 [Oppo]

The Oppo Pad 2 is an 11.6 inch tablet with a 2880 x 2000 pixel, 144Hz display, a Dimensity 9000 processor, and pen support. In other words, it’s basically the same tablet as the OnePlus Pad, but for the Chinese market.


NVIDIA Unveils Ada Lovelace RTX Workstation GPUs for Laptops; Desktop RTX 4000 SFF [AnandTech]

NVIDIA has introduced RTX 2000, 3000, 4000 and 5000 series laptop GPUs based on its Ada Lovelace architecture. The new chips are optimized for content creation rather than gaming and are designed for mobile workstations.

Keep up on the latest headlines by following @[email protected] on Mastodon. You can also follow Liliputing on Twitter and Facebook, and keep up with the latest open source mobile news by following LinuxSmartphones on Twitter and Facebook.



26 Comments


  1. Jupiter Broadcasting’s most recent Linux Action News has a feature on Nextcloud integrating “Ethical AI” (with a grading system and end-user choice over what gets included).

    1. Seeing that makes me nervous, because every time I hear something called “ethical” I immediately assume it means “biased against acceptable targets (to prove we’re not biased against unacceptable targets).”
      And that’s what it looks like they’re going with.

      1. As far as I understood from that podcast, it’s more to do with the ethics – or lack thereof – of respecting the end user’s computing freedoms and privacy. It’s somewhat analogous to going from a (currently, still) FSF RYF-endorsed distro to Debian back when it didn’t ship the non-free repo by default but gave you the choice to add it if you wanted, only presented more overtly up front, with health warnings about the implications for your privacy, security and computing freedom.

        1. Also, by lack of ethics in that sense, I mean the proprietary, cloud-based, data-harvesting “AI” services which aren’t self-hostable and open source.

  2. Some people might say that Raja Koduri leaving means Intel’s discrete graphics is doomed. Others might say that AMD’s graphics suffered under his influence when he worked there, and that this is a good thing for Intel. Who really knows? All I know is that Arc cards are just about the cheapest way to get 16GB of VRAM, and it would really suck if I had to pay twice as much for that much VRAM in the future.

  3. “Google is letting users in the US and UK”

    As someone who doesn’t live there, I suppose I’ll try Bing’s image tool then; best of luck trying to recover the lost momentum with those incremental baby steps, Google!

    1. I don’t think Google is even offering an image generation tool at this time.
      I wouldn’t use it even if they were; I have Stable Diffusion installed for that.
      In spite of all the stuff I said.

      1. I typed “skeletal dragon” into DALL·E Mini (Craiyon)… the results are almost the same as typing “skeletal dragon” into Google Images, except with aspect ratio distortions. It seems they are just copycat machines built on copycat machines that want to be kept behind a proprietary paywall. This actually makes Google look good, as it already had this feature about a decade ago simply by scouring people’s art.

      2. I was referring to AI in general; Google is still taking its first steps while Microsoft, for better or worse, is being bold with its innovation. I’ve played around a bit with Bing’s image tool, and it’s kind of unimpressive compared to some carefully trained Stable Diffusion models (I really love Orange Mix’s Abyss 2 for anime-style pictures at the moment), but it’s especially curious how easy it is to trigger its censorship right now (phrases like “asian girl” are enough).

  4. Not exactly… Meta didn’t open source the weights, but the weights were leaked. The Stanford group which trained Alpaca (based on Llama) wanted to release their weights, but then suddenly pulled their demo, and there’s been no word from the group since…

    I think the big players are frightened of A.I. that’s almost as good as theirs, or good enough that people will want to run their own locally instead of using their service, and that is a threat to their billions. So there is all this pushback from the big players to shut down open source A.I. in the name of “safety,” when really it’s just in the name of profit, in my opinion.

    I and others are ticked off about it, but I’m not going to download pirated weights; I’m waiting to see if they ever do in fact get released openly (I’m speaking about Llama). I don’t think they will. If not, so be it. I’ll stick with GPT-J and its derivatives (see the sketch below), which frankly aren’t very good and belong in a museum at this point, but that’s pretty much all we have right now. I’m not giving big tech another bit of my data or time, so I won’t use ChatGPT/Bing or CharacterAI.

    I was really hoping Llama would get an official release. It’s much better than the other open source models I’ve tried. We can only hope…
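    For what it’s worth, running GPT-J locally really is only a few lines with Hugging Face’s transformers library. A minimal sketch (the prompt and sampling settings are placeholders, and the 6B model needs roughly 12GB of RAM at fp16):

    ```python
    # pip install transformers torch
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Download (or load from cache) EleutherAI's GPT-J-6B checkpoint.
    model_id = "EleutherAI/gpt-j-6B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

    # Generate a short continuation of a placeholder prompt.
    inputs = tokenizer("Once upon a time, in a castle by the sea,", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```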

    1. If that was meant to be in response to me, I did actually say “and their weightings were subsequently leaked”…

      Ha! That would not surprise me in the least! Re: not downloading pirated weights, I don’t know whether “leaked” (particularly if by an internal source) legally constitutes “pirated,” but with something like LLM training models it’s probably not an unfair question to ask what those LLMs were trained on – i.e. is it not potentially something that belongs in the public domain anyway, by virtue of what it was based upon?
      Does releasing the source code under GPLv3 to some extent oblige the weightings to be released under the same?
      I’m thinking out loud here, but it’s possible that these could be valid questions.

      The analogy isn’t quite the same as an LLM training model, but is a court forcing Linksys to open source its router code, such that people across the world continue to benefit from OpenWRT, really all that different from a leak?

      1. It was in response to you, but I just wanted to clarify a bit.

        But people can justify anything to themselves if they want to badly enough. I was just saying that I’m taking the personal stance that unless and until Meta openly releases the Llama weights, I won’t download them myself. It’s up to others what choices they make; that’s just my personal choice.

        Expanding on my earlier comment: I discovered A.I. two months ago when I first tried CharacterAI, and I’ve been following A.I. closely since. Then I switched to Kobold+Tavern when I found out about that, and have tried many Hugging Face models. I recently tried the Alpaca demo when it was live, before it got shut down… and frankly, Alpaca is the best one I have tried apart from Ch.AI. I really hope that Meta pulls its head out of its rear and finally releases the weights, so everyone can benefit from them.

        I am impressed with the open source community coming up with ways to run Llama on their phones and RPis. I hope for the day when we can run a decent enough model at a fast enough speed on our own devices and have no need for third-party services.

        But all this talk from the big A.I. players decrying A.I. as “unsafe” is sickening. Mark my words: in the coming years there will be a lot of lobbying of Congress to get open source A.I. shut down and A.I. in general regulated, all so they can have the pie for themselves… and it’s sickening.

        1. There really is a problem, in that with enough VRAM you too can run the world’s most sophisticated spambots, which can talk like other people’s contacts and convince people to part with their credit card info, or Controlled Unclassified Information, completely unsupervised. They now have the ability to get into protracted arguments with you about how you vote, and they have infinite stamina for that while you don’t. They can even fabricate information outright by typing text they generate into ElevenLabs or Stable Diffusion and saying “listen with your ears, meatbag, here is Politician X saying Y, what more proof do you need?” And there are people out there who won’t be able to tell when an image was AI generated.
          The problem with that problem is: so how come it’s NOT a problem when Facebook and Google and the like use it to deceive and frustrate people into doing what their ad customers want?

          1. In fact, I will go further and suggest that while I might be able to handle living alongside neural network applications that use more than 16GB of memory, the average person can’t. And if they can’t, they can be persuaded, through this deception, to hurt someone who otherwise could. Therefore, the best way to handle this is a total ban on neural network applications that use more than 12GB, with a permit system allowing for chemistry and physics research. Enforced by special forces if we have to. No exceptions for the government, military, or any corporation of any size!

            But I also know that no such ban would ever fail to exempt the government and anyone it works with, so the next best thing is to just let everyone have it.

          2. If you’ve watched the news in the past several years, there have been a couple of stories about people who mowed down pedestrians in crowded places, because someone crazy decided to get behind the wheel and hurt people.
            But by your logic, all cars should be banned and nobody should be driving, because they can be used as weapons.
            Spam and spambots have been around for quite a number of years, long before A.I. hit the scene. People determined enough will always find a way.
            The argument that you’re making here is one that has been tossed around the net a lot lately about A.I., and it holds no water whatsoever.
            I tend to think that the vast majority of people out there are decent people, but unfortunately, there have always been bad actors in every sector.
            I, for one, like using A.I. just for interactive fiction stories/roleplay games. I can be a vampire in the Victorian era, or a samurai in feudal Japan, or an edgerunner in the cyberpunk universe: whatever I can imagine. I like the infinite replayability factor. That doesn’t mean I want to learn how to make meth or something with it. I heard someone say on a forum that they asked an A.I. how to make meth, and it told them. But people who want to find out badly enough will find out, with or without A.I.
            It’s like that whole pushback against “misinformation” you’re alluding to in your post. People are usually smart enough to figure things out for themselves and formulate their own opinions. They don’t need a nanny state telling them what information to digest.
            Your argument holds no water.

          3. @justsomeone1 What I’m really trying to get at is that people shouldn’t trust social media as a place to talk anymore, and that there’s a real risk of Meta putting words in your friend’s mouth, or of nonexistent persons pretending to be your friend, whether there’s any kind of crackdown or not. These large ad companies will use the arguments I listed to justify their positions as state-sanctioned monopolies over certain uses of neural network applications, even though I agree it really is a bad reason for taking away people’s right to run them on their local machines. The corporations will then abuse that privilege just like they’ve abused everything else.

      2. I don’t think releasing the source code for the program that runs the models would oblige releasing the model files, which are just files that look like a wad of data; you can’t read them (see the sketch below). This would be sort of like suggesting that releasing a video player for a video format you invented legally requires releasing a proof-of-concept file in that format. Meta could add a clause that says “if you train any models with this and use them for certain purposes, you have to release them,” but they don’t have to oblige themselves to do that.

        If Linksys was forced to release its router code, that’s because they used OpenWRT as the basis for their router software, and the software they used was released to them under a license that said they had to.
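        To illustrate that point, here’s a minimal PyTorch sketch of what’s inside one of those files once deserialized: just a mapping of layer names to weight tensors. (The filename follows LLaMA’s release layout and is a placeholder; layer names vary by model.)

        ```python
        # pip install torch
        import torch

        # Load a model checkpoint onto the CPU; the result is an ordinary
        # dict mapping layer names to weight tensors, not readable "code".
        state_dict = torch.load("consolidated.00.pth", map_location="cpu")

        # Print the first few layers and their shapes.
        for name, tensor in list(state_dict.items())[:5]:
            print(name, tuple(tensor.shape), tensor.dtype)
        ```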

        1. Oh I had been under the impression that OpenWRT was the result of what Linksys was forced to open source? Makes more sense the way round that you tell it.

          I wondered whether specifically having open sourced LLaMa under the GPLv3 might mean that its weightings had to be likewise?

  5. FacebookResearch/Meta open sourced their LLM, called “LLaMa,” under the GPLv3 license, and their weightings were subsequently leaked, leading to people getting it running on an RPi 4 (4GB RAM) and a Pixel phone, amongst other things, so that really is more “AI for everyone”! Then there’s GPT-J…

    1. Enjoy it while you can; this is going to end up creating demands for proof of humanity for every single thing posted on the internet, to prove you’re not someone using GPT-4, as well as increased demands for limitations on what computers are allowed to do. This will require new image, video, and audio formats that contain logs describing the exact steps of their creation, signed by your user certificate, issued by a trusted authority and revocable if you ever disagree too hard with something the government does or its ideologies. And you’ll need remote hardware attestation that your computer couldn’t possibly be running any of these programs to access most websites, enabling computer service providers to ban your specific machine and spelling an end to the little social acceptability desktop Linux has.

      1. A better solution would be forcing all AI to watermark its output, so authorities could verify whether something is AI-generated or “original.”

        1. I don’t understand how A.I. output could be watermarked, but I do know that it’s currently being experimented with, from what little I’ve read about it (there’s a toy example of the idea below).

          But frankly, I advocate just pulling out of social media altogether. It’s all crap these days anyway. Remember back in the 2000s when everyone was saying how great and wonderful Web 2.0 was going to be? It’s all just become a cesspool.
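          As a toy example of the idea, here’s a minimal sketch that hides a bit pattern in the least significant bit of each pixel’s red channel using Pillow. Real schemes (including the statistical watermarks being researched for LLM text) are far more robust, and the payload here is a placeholder:

          ```python
          # pip install pillow
          from PIL import Image

          MARK = b"AI"  # placeholder payload; a real scheme would sign/encode this

          def embed(src: str, dst: str) -> None:
              """Hide MARK in the red-channel LSBs of the first pixels."""
              img = Image.open(src).convert("RGB")
              bits = [(byte >> i) & 1 for byte in MARK for i in range(8)]
              px = img.load()
              for i, bit in enumerate(bits):
                  x, y = i % img.width, i // img.width
                  r, g, b = px[x, y]
                  px[x, y] = ((r & ~1) | bit, g, b)  # overwrite red LSB
              img.save(dst, "PNG")  # lossless format so the LSBs survive

          def extract(path: str, nbytes: int = len(MARK)) -> bytes:
              """Read the hidden payload back out of the red-channel LSBs."""
              img = Image.open(path).convert("RGB")
              px = img.load()
              out = bytearray()
              for byte_idx in range(nbytes):
                  val = 0
                  for i in range(8):
                      idx = byte_idx * 8 + i
                      x, y = idx % img.width, idx // img.width
                      val |= (px[x, y][0] & 1) << i
                  out.append(val)
              return bytes(out)
          ```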