A year and a half after launching the Samsung Galaxy Book Go budget Windows laptop, Samsung has unveiled a new model that should bring a significant performance boost over the original model.

The new Samsung Galaxy Book2 Go is a 3.2 pound Windows 11 notebook with a 14 inch FHD IPS LCD display, a Qualcomm Snapdragon 7c+ Gen 3 processor, support for WiFi 6E, and optional support for 5G. It will go on sale in France on January 20th and in the UK at the end of January.

For the most part the new laptop looks a lot like the original Galaxy Book Go. But the first-gen model featured a Snapdragon 7c Gen 2 chip while the updated model has the Snapdragon 7c+ Gen 3, which Qualcomm says delivers:

  • Up to 30% faster single-core performance
  • Up to 60% faster multi-threaded performance
  • Up to 70% faster graphics

That said, this is still a Windows-on-ARM laptop with a budget processor, so it’s probably safe to expect long battery life and an affordable price tag… but less-than-stellar performance, particularly when running Windows applications that haven’t been compiled to run natively on ARM-based processors.

Samsung says the Galaxy Book2 Go gets up to 21 hours of battery life during video playback (which is probably a best-case scenario), supports WiFi 6E connectivity, and is designed to work with other Samsung Galaxy devices including smartphones and tablets. For example, you can use a Galaxy Tab as a second display or a pen input device for the Galaxy Book2 Go, automatically pair a set of Galaxy Buds true wireless earbuds, or synchronize notes across your devices using the Samsung Notes app.

Here’s a run-down of key specs:

Samsung Galaxy Book2 Go specs

  • Display: 14 inches (FHD IPS LCD)
  • Processor: Qualcomm Snapdragon 7c+ Gen 3
  • Storage: 128GB / 256GB UFS
  • Wireless: WiFi 6E, 5G (optional), dual SIM (eSIM + pSIM)
  • Ports: 2 x USB Type-C, 1 x USB 2.0 Type-A, 1 x 3.5mm audio, 1 x microSD card reader, 1 x nano SIM slot
  • Battery: 42.3 Wh
  • OS: Windows 11 Home
  • Dimensions: 324 x 225 x 16mm (12.8″ x 8.9″ x 0.6″)
  • Weight: 1.44 kg (3.2 pounds)

via SamMobile and NotebookCheck

This article was first published January 2, 2023 and most recently updated January 17, 2023. 


10 replies on “Samsung Galaxy Book2 Go is a budget laptop with Qualcomm Snapdragon 7c+ Gen 3”

  1. Brad, you lost one full year it seems. Maybe you wanted to forget about 2022 completely.

    Mid 2021 to now is 1.5 years… not half a year between models.

  2. When can we finally expect non-Qualcomm Windows on ARM SOCs? I was kind of interested in seeing how an Xclipse 920 derivative would perform in a laptop.

    1. Mediatek was the only other company I’ve heard of planning to work on it; maybe we’ll hear something later this year. And if that doesn’t work out (for them), I’m still waiting for Microsoft to insist that ARM CPUs conform to a common firmware standard, either SystemReady or something else put together by Microsoft and/or Qualcomm.
      But nothing would excite me until there’s actual choice in operating systems.

      1. Wider availability of 5G and optical fiber could make cloud/hybrid computing viable, especially in a metro cloud / edge server setup. Then switching OSes would be a so-called nothingburger. I have high hopes for hybrid computing being taken to the next level with home + edge caching: the edge cache is where all your stuff gets backed up/updated 24×7, then all your access devices simply do a fast sync on each access.

        1. If you’re not running the OS on your hardware, you don’t control the data, and can lose your machine for doing something the service provider doesn’t like, or will suddenly start to not like at some unknown point in the future.

          1. Think of it this way: instead of all your accounts syncing data with all your devices (which is slow and tedious), you have a virtual PC in the cloud whose main instance is geographically as near as economically feasible. That way servers all over the globe can sync with the virtual PC all day long, and then you simply log in with any of your devices and sync with this single proximate virtual PC. As for losing control, that virtual PC was only a convenience, like a disposable buffer, albeit with exceptional hit rates.

          2. @ajay So what I think you’re describing would be a bizarre situation in which the contents of your local C:\ or / exactly match the contents of C:\ or / on the remote server. I don’t know of any service like this, but that’s the only thing I can think of wherein the loss of access to the service would actually result in no data loss.
            I suppose it’s not impossible, but quite frankly, I don’t think anyone would ever set something like that up according to a common standard, because it would give the average person a degree of leverage most IT giants don’t think you deserve. But even if such a service existed, losing access to the remote virtual machine isn’t going to be just an inconvenience: if you need it, it’s because you need the extra processing power, and there would be things you simply could not do on the devices you have.

          3. Basically all computing is hybrid; the question is how exactly to go about it. Clearly 24×7 background processing and proximate caching are two standouts. The former allows for continuous updates: for instance, price alerts need to be reported seconds after hitting their targets. Again, smartphones are always on; the only difference is that cloud PCs allow 100x as many continuous background tasks. Proximate caching allows nearly instantaneous synchronization with offline clients and is extremely economical bandwidth-wise, because only the first hop suffices. Then it is a question of how to deploy these. There’s no need for full-fledged OSes; some stripped-down Linux can achieve both of the above objectives. I personally feel that such setups can evolve into highly complex systems that could render traditional forums and social platforms useless. Let’s say I need to sell article X; that can be my only requirement. I don’t need to sign up anywhere or sign off on any agreement. All of a sudden, thousands of indexing engines register me as selling article X. These then generate Y queries that I address using my own engines. The idea is that it is all bottom-up instead of top-down, i.e. everything builds around my seed action (sell article X).

          4. At the risk of overextending my welcome on Lili, let me put in a longish comment. I felt the need to elaborate on a point I made in my previous comment regarding the bottom-up web. It demonstrates the power of owning your own background processes, and also how auctions and royalties could become the new currency. Let us say that person X is looking for boat photos; he needs to create a flipbook for which he needs about 6 boat photos, among others. Let us say that a person Y has boat photos, but only two, and let us also assume that only one of Y’s photos is any good in terms of composition, resolution, etc. On the traditional web, there would be no chance that X and Y ever meet each other, let alone become business partners, and therein lies the power of indexing engines. Sooner or later, X’s engine’s WANT request will process Y’s engine’s AVAILABLE request. But it is not so simple, because these would then go into an auction. Let’s say that the programming of both engines does indeed end up agreeing on a set of conditions for use and terms of payment; then, after getting approval, Y’s photos would be in X’s flipbook earning royalties. Again, it is not as simple as that, because flipbook generation would become automated and there would be continuous A/B testing, and it may just be that because Y insisted on a bundle deal and X was paying for both photos while using only one, the economics would militate against Y, so the percentage of flipbooks with Y’s images in them would keep dropping until it becomes zero. And all this assumes that the indexing engines are themselves not being audited, so they could be up to their own games, cartels, etc. Clearly we would need auditors, who in turn would need blockchains to prove lack of bias/manipulation, etc.
