Portable USB SSDs may be pricier than portable hard drives, but prices have been falling in recent years, making them viable alternatives. They offer plenty of advantages including speedier data transfers, smaller sizes, and better durability thanks to the lack of moving parts.

Samsung’s newest entry also adds something else to the mix: biometric security.

The new Samsung T7 Touch portable SSDs feature built-in fingerprint scanners. They should be available starting this month, priced at $130 for the 500GB model.

Theoretically that’s the same as the list price for a 500GB Samsung T5 portable SSD without a fingerprint reader. But you can usually pick one up for significantly less (currently they’re selling for about $90), which means that you’ll probably end up paying a premium to get the new model.

That said, the ability to encrypt the drive so that it can only be unlocked with a fingerprint is only one update — the new Samsung T7 Touch is also up to twice as fast as the T5, with data transfer speeds of up to 1,050 MB/s.

Samsung’s new SSD has a USB Type-C port and supports USB 3.2 Gen 2 10Gbps connections. It measures about 3.3″ x 2.2″ x 0.3″ and weighs just 2 ounces.

In addition to supporting AES 256-bit hardware encryption and fingerprint authentication, you can set up a password for unlocking the SSD. And if you really don’t care about the fingerprint feature, Samsung will also launch a non-touch version of the portable T7 SSD in the second quarter of the year. It’ll support encryption and password protection, but it won’t have a fingerprint reader.

Pricing for the non-touch model hasn’t been announced yet, but here’s the run-down on pricing for the various Samsung T7 Touch models available at launch:

  • 500GB for $130
  • 1TB for $230
  • 2TB for $400



Join the Conversation

2 Comments


  1. And eventually they fail.
    I’ve been told that SSDs have trouble when almost full, because of ‘limited area for re-writing??’

    1. Indeed, but that point of failure may come much later than it would for an HDD or optical drive. Each “bit” in an SSD can only be written a finite number of times: writing NAND flash makes a physical change in the structure that eventually wears out. A single-level cell (SLC) can endure roughly 10,000–30,000 writes in total. To make an SSD last longer, the built-in controller tries to cause as few flash writes per operation as possible. Some controllers compress data at the hardware level, which doesn’t noticeably slow down the operation but reduces the amount of data actually written to flash; text-based files, documents, spreadsheets, and PDFs all benefit greatly from this. The controller also spreads writes out, allocating new data to free areas rather than hammering the same cells over and over. When you delete a file, it isn’t overwritten with all zeroes (that would be a huge waste); the space is simply marked as reusable. To speed this up, the SSD has a DRAM cache that is faster than the flash itself and holds data until the controller finds the best place for it. Some very cheap SSDs omit this cache and instead use part of the flash as a buffer; with those drives, performance really does drop when the drive is full, because there’s no empty space left to act as the buffer. Again, that’s only the cheapest, bottom-of-the-barrel drives.
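The write-spreading idea described above can be sketched in a few lines. This is a toy model (the class and its behavior are my invention, not any real controller’s firmware): each write to the same logical address lands on the least-worn free block, so wear accumulates evenly instead of burning out one spot.

```python
# Toy sketch of write-count-based wear leveling (hypothetical model,
# not real SSD firmware).

class WearLevelingController:
    def __init__(self, num_blocks):
        # How many times each physical block has been written.
        self.write_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))
        self.logical_map = {}  # logical address -> physical block

    def write(self, logical_addr, data):
        # Pick the least-worn free block so wear spreads evenly.
        target = min(self.free_blocks, key=lambda b: self.write_counts[b])
        self.free_blocks.remove(target)
        # The old block isn't zeroed out; it's just marked free for reuse.
        old = self.logical_map.get(logical_addr)
        if old is not None:
            self.free_blocks.add(old)
        self.logical_map[logical_addr] = target
        self.write_counts[target] += 1
        return target

ctrl = WearLevelingController(num_blocks=8)
for i in range(20):
    ctrl.write(logical_addr=0, data=b"x")  # rewrite one logical address 20 times
print(ctrl.write_counts)  # wear is spread across all 8 blocks
```

Rewriting a single logical address 20 times touches every physical block two or three times, instead of wearing one block out 20 times.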

      When a cell gives out, it’s not actually the end of the drive. First, the data can be read an unlimited number of times, so there’s no information loss. Second, all drives have so-called overprovisioning: a portion of the flash that isn’t visible to you, from which the controller can allocate fresh cells to replace broken ones. The controller keeps a table of which broken cell was replaced by which fresh area, so the swap is seamless. It is a good indicator of the drive’s remaining lifetime, though: your software might report an “85% condition” even though you still have all your data and the original performance; you’ve simply used up 15% of your spare area. With some controllers, even after the overprovisioning is exhausted, the SSD may keep working, but as more cells die the drive’s capacity starts to shrink little by little.
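The remapping table described above amounts to a small dictionary. Here’s a minimal illustrative model (the class and numbers are hypothetical): failed blocks are transparently redirected to spare blocks, and the reported “health” is just the fraction of spares remaining.

```python
# Toy sketch of overprovisioning: invisible spare blocks transparently
# replace failed ones. Purely illustrative, not real firmware.

class Drive:
    def __init__(self, visible_blocks, spare_blocks):
        self.total_spares = spare_blocks
        # Spares live past the end of the visible address range.
        self.spares = list(range(visible_blocks, visible_blocks + spare_blocks))
        self.remap = {}  # failed visible block -> replacement spare block

    def mark_bad(self, block):
        if not self.spares:
            raise RuntimeError("overprovisioning exhausted; capacity would shrink")
        self.remap[block] = self.spares.pop(0)

    def physical(self, block):
        # Reads/writes silently follow the remap table.
        return self.remap.get(block, block)

    def health_percent(self):
        # e.g. 85% means 15% of the spare area has been consumed.
        return 100 * len(self.spares) // self.total_spares

drive = Drive(visible_blocks=100, spare_blocks=20)
drive.mark_bad(42)           # cell 42 wears out...
print(drive.physical(42))    # ...accesses silently go to spare block 100
print(drive.health_percent())
```

The user-visible capacity never changes while spares remain; only the health percentage ticks down.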

      Now there’s another complication: most SSD cells actually hold more than one bit of information. An MLC cell holds 2 bits (00, 01, 10 or 11), and if either bit needs to be flipped, the whole cell gets rewritten, so the flash degrades faster; an MLC SSD manages roughly 3,000–5,000 writes per cell as a result. Most SSDs nowadays are TLC or QLC which, as you can guess, hold 3 or 4 bits per cell; a TLC cell handles roughly 1,000 writes and a QLC cell roughly 500. That might not sound like much, but those are writes spread across the whole SSD, so a 256GB QLC drive can easily endure 128TB written to it. For most consumers, in an office PC or even a gaming rig, that will last 5–10 years of normal use. It doesn’t make as much sense for a datacenter, though, so there SSDs are mostly used as read caches: drives holding data that gets read a lot but written very rarely.
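The 128TB figure above follows from simple arithmetic: total endurance is roughly capacity times per-cell write cycles, assuming the controller spreads wear evenly. A quick sanity check, using the rough cycle counts quoted in the comment (real drives vary by generation and vendor):

```python
# Back-of-the-envelope endurance (total TB writable) from
# capacity x per-cell program/erase cycles, assuming even wear.
# Cycle counts are the rough figures from the discussion above.

pe_cycles = {"SLC": 10_000, "MLC": 3_000, "TLC": 1_000, "QLC": 500}

def endurance_tb(capacity_gb, cell_type):
    return capacity_gb * pe_cycles[cell_type] / 1_000

print(endurance_tb(256, "QLC"))  # 128.0 TB, matching the figure above
print(endurance_tb(500, "TLC"))  # a 500GB TLC drive: ~500 TB
```

Writing 20GB a day to that 256GB QLC drive would take over 17 years to reach 128TB, which is why even QLC is fine for typical consumer workloads.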