The Rockchip RK3588S is a processor with four ARM Cortex-A76 CPU cores, four Cortex-A55 cores, Mali-G610 MP4 graphics, and a neural processing unit (NPU) with up to 6 TOPS of performance. It's positioned as a slightly cheaper alternative to the high-performance RK3588 processor, with a more limited set of I/O features.

Now Firefly has introduced one of the first single-board computers powered by the new RK3588S chip. It’s called the ROC-RK3588S-PC and it’s available now for $219 and up.

The board, which can be configured with up to 32GB of RAM and supports PCIe NVMe SSDs as well as eMMC storage, is positioned as a solution for "edge computing, artificial intelligence, cloud computing, VR, and AR." But it's basically a tiny computer that can be used for general-purpose work or for tasks that involve computer vision or other applications that can leverage the NPU.

Because it uses the RK3588S, the board's M.2 2242 slot for NVMe storage is limited to PCIe 2.0 speeds, since the chip lacks the PCIe 3.0 support found in the full RK3588. There's also just a single HDMI 2.1 port and a single Gigabit Ethernet port, because the chip doesn't support more than that.

Other features include DisplayPort 1.4, USB 3.0 Type-C and Type-A ports, a USB 2.0 port, a USB-C power input, a 3.5mm audio jack, MIPI-CSI and MIPI-DSI interfaces for cameras and displays, a microphone, and a 20-pin GPIO connector.

Firefly says the system should support Android 12 and Linux-based software including Ubuntu, Debian, Kylin, Buildroot, and RTLinux.

The ROC-RK3588S-PC measures 90 x 60mm (3.5″ x 2.4″).

Prices start at $219 for a model with 4GB of RAM and 32GB of storage. There’s also an 8GB/64GB option that sells for $299.

Update: An earlier version of this article speculated that prices could start below $129, based on the inclusion of the RK3588S processor. But that estimate was off by about $90.

via CNX Software



  1. Software in Linux for making use of these NPUs for things like voice recognition, a la Siri, would be an almost killer app.
    The oomph these have now can handle a variety of loads fairly well, and local, non-cloud-based voice recognition would really sweeten things compared to your typical Amazon or Google smart speakers.

    1. Hardware has never really been the problem, at least not in the past seven or so years, and at least not just to run it. Any desktop PC could out-compute an Echo (it would just consume more energy in the process). It's more that no one wants to develop a locally running vocal shell, because no one will fund it unless it's datamining. Snips was almost that, but then it sold out to Sonos and ceased to exist. There's Open Assistant, which appears to be an incomplete project of a few crazy people that no one talks about or even thinks about. There's Rhasspy, which appears to be more put together, but which no one talks about or even thinks about either. There's Mycroft, if you self-host the server (which is more than this thing can handle).
      Yeah, the small form factor boards are nice, though, because you can stick them onto the back of a speaker. I even got a Raspberry Pi HAT that could drive small speakers, but I couldn't get Snips to work right before it went under. And more importantly, every single good speech-to-text engine is proprietary simply by virtue of relying on data sets that you just don't have, even if you could run the components needed to make them (then you'd really need a powerful computer).
      But honestly, all I even wanted that for is to play music for houseguests I don’t have, because saying “play album/song by whoever” is faster, and these days more socially acceptable, than anything else. If, you know, you want to sound more sophisticated than just streaming some of the garbage that Pandora or Spotify automatically determined you would like and then made you like when you really shouldn’t have because I guarantee some of those songs make you sound like a hypocrite about something or other.