Disclosure: Some links on this page are monetized by the Skimlinks, Amazon, Rakuten Advertising, and eBay affiliate programs, and Liliputing may earn a commission if you make a purchase after clicking on those links. All prices are subject to change, and this article only reflects the prices available at time of publication.

Routers and network-attached storage (NAS) devices are basically small, purpose-built computers that run software designed to provide some of the core functionality used in networking and data storage respectively. Many users prefer to buy routers from established brands like Linksys, Netgear or Asus, and NAS hardware from companies like Synology, QNAP or Asustor.

But, despite it being considered a bit of a dark art by less tech-savvy users, you can also build your own NAS or router from nearly any mini PC. It helps to have a model with the hardware these applications need though, like multiple networking ports and support for multiple drives. And that’s where devices like the AOOSTAR R7 come in.

This little computer, which AOOSTAR markets as a “NAS + Router” is a versatile system that’s priced competitively with other 2-bay NAS systems with x86 processors. And while it may not be as simple to configure as some of the specialty hardware mentioned above, the AOOSTAR R7 offers significantly more performance and flexibility than you’d expect from a NAS in the same price range.

Prices for the AOOSTAR R7 start as low as $299 when you buy a barebones model from AOOSTAR’s website, or $359 for a model with 16GB of RAM and a 512GB SSD when you order from Amazon.

The AOOSTAR R7 is the most powerful in a line of NAS + ROUTER systems from Chinese PC maker AOOSTAR. It features an AMD Ryzen 7 5700U processor and room inside the chassis for:

  • 2 x SODIMM slots for DDR4-3200 memory
  • 2 x M.2 2280 slots for PCIe Gen 3 x4 NVMe SSD storage
  • 2 x bays for 2.5 inch or 3.5 inch SATA SSDs or HDDs

AOOSTAR sells the system with up to 32GB of RAM and a 1TB NVMe, but you’ll need to supply your own SATA drives if you plan to use the drive bays.

You can also find a cheaper model with a similar design called the AOOSTAR R1, which has an Intel Processor N100 instead of an AMD Ryzen processor.

Intriguingly, if the AOOSTAR R7 mini PC is ordered with memory and storage, it comes with Windows 11 Pro rather than a NAS or router-specific operating system. Perhaps the implication is that it’s left up to you to make this Windows mini PC act as a NAS and router?

In this review of the AOOSTAR R7 I’ll briefly look at this functionality under Windows. But I’m also going to present some other options that might be more suitable, together with examples of their performance.

AOOSTAR sent me an R7 review unit to test, featuring an AMD Ryzen 7 5700U processor, Radeon RX Vega 8 Graphics, 16 GB of RAM and 512 GB of storage. This unit was provided to Liliputing for free, with no requirement that the computer be returned upon completion of the review. This review is not sponsored by AOOSTAR, and the company did not modify or approve the content of this article in any way.

Design & Specs

The AOOSTAR R7 is a tall, white, and slightly tapered device with a circular base and round-cornered square top. From the front it looks a bit like a contemporary kettle or some other trendy kitchen appliance, as it measures 162 x 162 x 198 mm (6.4 x 6.4 x 7.8 inches).

The design is essentially a plastic tube covering an inner metal frame that holds the motherboard, fans and drive bays.

The left side of the inner frame exposes the top of the motherboard, which contains two M.2 2280 PCIe Gen 3 x4 NVMe SSD slots. In the review unit supplied by AOOSTAR, the slot on the left was occupied by an ASint AS806 512 GB PCIe Gen 3 drive.

To the left of the NVMe slots are two DDR4 SO-DIMM slots, populated with two ASint ASSD4320008G8b223 8 GB DDR4-3200 memory modules; 3200 MHz is the highest memory speed supported by the Ryzen 7 processor.

Above these is an M.2 2230 slot, which is populated with an Intel Wi-Fi 6 AX200 card with support for 802.11ax dual-band 2×2 2.4 GHz and 5 GHz (160 MHz) Wi-Fi as well as Bluetooth 5.2 (the LMP Firmware Version shows as LMP 11.8810).

The right side of the inner metal frame holds the processor cooling fan, which expels air through a heat exchanger and out of the top of the device like a chimney between the drive bays.

On the bottom of the inner frame is a fan that pulls air through the device and out through the top in order to cool all the internal components.

The AOOSTAR R7 uses an AMD Ryzen 7 5700U mobile processor with 8 Zen 2 CPU cores and 16 threads boosting to 4.3 GHz. The iGPU is an AMD Radeon RX Vega 8 with 8 compute units and a frequency of 1.9 GHz.

The device’s I/O ports are all located on the back of the device. Looking at the device from top to bottom, these are:

  • 1 x 3.5 mm audio jack
  • 2 x USB Type-A 2.0 ports
  • 2 x USB Type-A 3.2 Gen 2×1 (10 Gbit/s) ports
  • 1 x DisplayPort 1.4 port (left)
  • 1 x HDMI 2.1 port (right)
  • 1 x USB Type-C 3.2 Gen 2×1 (10 Gbit/s) port with DP Alt Mode and PD (left)
  • 1 x microSD card port (right)
  • 1 x 2.5 Gb Ethernet port
  • 1 x 2.5 Gb Ethernet port
  • 1 x power jack

Having a Power Delivery (PD) port provides the extra option of being able to power the AOOSTAR R7 from smaller laptop chargers and more powerful fast chargers. I was able to use the 100W port on my Ugreen Nexode 140W USB Wall Charger with the AOOSTAR R7 without issue.

Normally I show the ports of the mini PC without anything connected. However, given the spacing of the ports, I’d like to highlight just how close the USB Type-C port is to the microSD card port. If you are using the USB port for video or power and the other surrounding ports are also occupied, the microSD slot is quite difficult to access, mainly because it is recessed and blocked in each direction. If you have large hands or thick fingers you may need to use a pair of tweezers to insert or remove the microSD card.

The front, in keeping with the simple, sleek exterior design of the computer, has just an illuminated on/off button.

The inner frame that contains the two drive bays has removable plastic trays that support either 2.5 or 3.5 inch SATA drives. Each drive is connected internally via a SATA AHCI controller.

The trays are somewhat flimsy, especially when used with a 2.5 inch drive.

They are prone to warping if the drive’s screw-holes are not perfectly aligned with the corresponding tray screw-holes.

They can also be somewhat hard to pull out as there is not much to grab hold of. Conversely, when inserting them, you need to apply force to make the SATA connection. With a 2.5 inch drive in the bay, you could easily break the tray unless you press down only at the edges.

One important point to remember is that the drives are not hot-swappable so you must power the machine down if you want to add or remove a drive. When I tested this (having not read the warning) the mini PC instantly crashed. So it does pay to RTFM, as they say, before you start.

If you need to open the device to access the memory or NVMe slots, you should:

  1. Start by removing the top and taking out the drive trays.
  2. Next, turn the device upside down and remove the four rubber feet that are on the bottom together with the screws they conceal.
  3. Then remove the base of the mini PC. It is best to use a spudger or some long-nose pliers to pull it off as it can take some effort.
  4. Finally turn the device the right way up so you can access the inner frame of the device.

The inner frame can be completely removed from the plastic case by pulling it up vertically whilst gently prising the plastic around the headphone port outwards.

The box containing the mini PC also includes the power supply (Bestec NA9002WBB, 19 V 4.74 A, maximum 90 W), a cable with a country-specific plug, and a very basic English and Traditional Chinese “Product manual”: an instruction sheet covering how to access the internals, how to install a drive, and what the ports are.

Set-up considerations: Use the included software, or bring your own?

Option 1: Stick with Windows

The AOOSTAR R7 review unit came with Windows 11 Pro pre-installed, but the AOOSTAR website warns: “To utilize the NAS function, it is necessary for you to install your preferred NAS software by yourself”.

This is somewhat intriguing as there are not many well-known NAS applications that run on Windows. But Windows does have built-in features that allow you to set up the system like a simple, basic NAS. For example, you could just use the computer’s drives as network share drives.

You could also use built-in Windows features to share Ethernet capabilities through the Wi-Fi hotspot function. Together with the Windows firewall, this could allow you to build a very rudimentary router.

However, Windows also comes with Hyper-V (a Type-1, or native/bare-metal, hypervisor). It is probably better to create a series of virtual machines (VMs) and then install open-source solutions for the NAS and router.

Sticking with Windows and Hyper-V, a further alternative is to use Docker Desktop and then create containers for the NAS and router. Windows also offers WSL2 (a Type-1 hypervisor) so Linux containers for the NAS and router can be created, for example, using Docker Desktop or LXD.

Option 2: Bring your own OS

If the primary function of the mini PC is to act as a combined NAS and router, perhaps a better solution would be to install a Type-1 (native or bare-metal) hypervisor, create VMs for the NAS and router functionality and then, if required, create additional VMs for operating systems like Windows or other Linux distributions. Whilst VMware ESXi is available for free (as VMware vSphere Hypervisor), Proxmox Virtual Environment (Proxmox VE) is a popular open-source platform which integrates the KVM hypervisor and Linux Containers (LXC). As an aside, LXD and LXC are two distinct implementations of Linux containers which are frequently confused with each other.

There are many free router applications to choose from; the more popular ones include pfSense, OPNsense, OpenWrt and DD-WRT. Amongst the many free NAS options are OpenMediaVault (OMV), which is probably one of the easiest to start with, and TrueNAS CORE, which is described as being the world’s most popular storage OS.

The set-up minefield continues as there are quite a few combinations for configuring the storage used by the NAS and also the location for installing the OS.

Given there are two SATA drive bays accepting either 2.5 or 3.5 inch drives, you could go for “spinning rust”, or traditional hard disk drives (HDDs), as your NAS storage. You would then use the NVMe drives for the OS installation and local storage. If you wish to retain Windows for use in a dual-boot scenario, you could install the NAS/router OS on a new NVMe drive in the right-hand NVMe slot, leaving the originally supplied NVMe drive in the left-hand slot with only Windows on it.

To reduce the risk of hardware failure of the HDDs, you could instead use 2.5 inch SATA SSDs for the NAS storage. It is also possible to get SSD adapters that take either one or two Next Generation Form Factor (NGFF) SATA M.2 drives. These adapters can also provide hardware RAID (Redundant Array of Independent Disks) support, which besides redundancy can also mean faster data access.

An alternative configuration that addresses HDD failure is to install the OS on a 2.5 inch SATA SSD and then use the two NVMe drives for NAS storage. With this configuration you could also introduce a further SATA drive (either 2.5 or 3.5 inch) to act as a separate backup drive.

How it performs

This is a remarkably complex question to answer. Sure, I could run a few popular benchmarks on the AOOSTAR R7, but apart from showing CPU and iGPU performance on Windows, would that really show how the AOOSTAR R7 performs as a NAS or router? Given the multiple set-up configurations outlined above, I’m clearly not going to attempt to test each and every one of them. However I’ll try to briefly cover some permutations to provide an insight into the obvious benefits and drawbacks.

The AOOSTAR R7 comes pre-installed with Windows 11 Pro version 22H2 OS build 22621.1702 and is activated with a digital licence. I applied all available upgrades to end up with version 23H2 OS build 22631.2861. After that I set power to “High performance” and briefly tested all the ports to confirm they functioned as listed above.

First, even though I think there’s limited benefit in simply reporting Windows benchmark scores, I did start by running some. This way I could get an idea of the machine’s “base” performance level, as having a sense of the CPU performance can be helpful when configuring VMs in a hypervisor. So the CPU benchmarks I ran gave the following results:

  1. Cinebench R23: Multi Core was 9051 and Single Core was 1262
  2. Geekbench 6.2.1: Multi Core was 7142 and Single Core was 1639
  3. PerformanceTest 11.0: CPU Mark was 18233 (Memory Mark was 2746)

I also ran Unigine’s Heaven benchmark which gave a result for the iGPU of 35.8 FPS and a score of 902.

The networking speeds for the two Ethernet ports were also checked:

  1. iperf3 in Windows: download of 2.36 Gbit/s and upload of 2.21 Gbit/s
  2. iperf3 in Linux: download of 2.35 Gbit/s and upload of 2.35 Gbit/s

It is not unusual to see faster upload speeds in Linux compared to Windows and this is not a device issue.
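For reference, these figures come from a standard iperf3 client/server pair (the server address below is an assumption, not the one used in my tests), and they sit close to the practical ceiling of a 2.5 GbE link:

```shell
# On the machine acting as the server:
#   iperf3 -s
# On the client, upload (client -> server) and download (-R, server -> client):
#   iperf3 -c 192.168.1.10
#   iperf3 -c 192.168.1.10 -R
# Theoretical ceiling of a 2.5 GbE link, ignoring protocol overhead:
echo "$(( 2500 / 8 )) MB/s"   # prints "312 MB/s"
```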

The pre-installed NVMe performance was checked using CrystalDiskMark:

CrystalDiskMark   Read (MB/s)   Write (MB/s)
SEQ1M Q8T1            2106.30       1684.42
SEQ1M Q1T1            1885.60       1702.48
RND4K Q32T1            556.05        404.57
RND4K Q1T1              48.79        153.33

Storage Components

The AOOSTAR R7 is available as a barebones device (i.e. without RAM or NVMe drive), or with either 16 GB RAM plus a 512 GB NVMe drive or 32 GB RAM plus a 1 TB NVMe drive with the NVMe drives having Windows 11 Pro pre-installed.

However the company doesn’t sell models with storage occupying the SATA drive bays, which means that you’re on your own when it comes to adding the drives that will most likely be used as network-attached storage.

In the set-up considerations, I referred to SSD adapters with built-in RAID support. For part of my testing I used a couple of such adapters, namely StarTech’s S322M225R Dual-Slot M.2 Drive to SATA Adapter with RAID. I put two 1TB Silicon Power M.2 2280 SATA SSD (A55) drives in each adapter.

I initially performed some basic “speed” testing using different combinations of both hardware RAID on the adapter and software RAID in Windows, just to get an indication of performance impact.

Drive Configuration   Windows Disk Manager Configuration   Partition Size   Sequential Read (MB/s)   Sequential Write (MB/s)
JBOD                  Simple                               1862.90 GB       487.85                   470.62
RAID 0                Striped                              100 GB           851.39                   904.12
RAID 1                Mirrored                             1863 GB          914.02                   477.80
JBOD                  Simple                               100 GB           (lost the results!)
RAID 0                Striped                              3726 GB          871.96                   943.10
RAID 1                Mirrored                             50 GB            924.92                   499.33
JBOD                  Simple                               50 GB            511.10                   336.64
RAID 0                Striped                              100 GB           568.02                   146.00
RAID 1                Mirrored                             50 GB            834.85                   88.93

It is clear that these adapters are most beneficial when you only have one SSD port available and you are concerned about data loss caused by a drive failure.

Under these circumstances you would use the adapter with RAID 1 configured. Adding software RAID on top of hardware RAID 1 severely limits performance. In contrast, there is no performance degradation when adding software RAID on top of hardware RAID 0. For my testing I chose to simulate normal SSDs by setting the adapters to “SPAN”, which treats the installed M.2 drives as one big drive.

I also performed some testing with some old 3.5 inch HDDs: a Seagate Barracuda 3TB 7.2K SATA (ST3000DM001) drive together with a Western Digital 3TB 5.4K SATA (WD30EZRS-00KEZB0) drive. Both were recommended as NAS drives back in their day (2011), although the ST3000DM001 became known for a high failure rate that even resulted in a class-action lawsuit against Seagate. This probably explains why my other matching drive has “Fail” written on it.

When testing with NVMe drives I used a couple of 512 GB Intel 660p M.2 2280 PCIe 3.0 x4 NVMe (SSDPEKNW512G8) drives as well as a 500 GB Kingston NV1 M.2 2280 PCIe 3.0 x4 NVMe (SNVS500G) drive and a 500 GB Kingston A2000 M.2 2280 PCIe 3.0 x4 NVMe (SA2000M8500G) drive.

Finally I also included another couple of relics from my box of discards by using a 2.5 inch 120 GB Samsung SSD 650 SATA SSD (MZ-650120) drive and a 3.5 inch Western Digital Black 2TB 7.2K SATA (WD2003FZEX) drive.

Testing Scenarios

To illustrate some of the various set-up configurations covered above, I’ve tested the AOOSTAR R7 in the following scenarios:

  1. Using Windows as a pseudo NAS
  2. Using OpenMediaVault for the NAS software together with ZFS
  3. Using TrueNAS Core for the NAS software
  4. Using Proxmox as the hypervisor to create a NAS + router with the router VM using pfSense and the NAS VM using TrueNAS Core
  5. Adding an Ubuntu desktop VM with GPU passthrough to the NAS + router
  6. Adding a Windows desktop VM with GPU passthrough to the NAS + router (or, if this fails, adding a remote access Windows desktop VM to the NAS + router). Activating the Windows VM by transferring the Windows desktop licence
  7. Adding a backup drive to the NAS + router configuration

Testing Results

I’ll first point out that some of these tests were performed in a different order to the scenarios listed as it simplified the hardware swaps required for each test. Taking this into consideration, the narrative has been edited to reflect the order of the scenarios rather than the timestamps for the actual tests.

Hopefully this will alleviate any confusion if you notice any timestamps that appear out of sequence. Also some of the screenshots have been cropped to remove excess blank space to make them more readable. No smoke and mirrors here.

1. Using Windows as a pseudo NAS

For this scenario I installed the two Adapters, with their hardware configured as SPAN, into the SATA drive bays.

As Windows was already pre-installed on the included NVMe drive, I’ve just had to mirror (RAID 1) the SATA drives ready for testing.

The basic premise of creating this pseudo NAS is to use Windows built-in file sharing capabilities. Having created a new drive from mirroring the two SATA drives, this was then shared with access managed through Windows ACLs.

The shared drive could be accessed from other PCs on the network, including from within WSL2. You can see I had already successfully accessed the shared drive from other PCs and created some directories for testing results and other related files:

I then created a 100 GB file on the NAS whilst in WSL2 by using the command “/usr/bin/dd if=/dev/urandom of=xfer-hugefile bs=1G count=100”. I chose this size for the file to ensure that it was large enough to exceed any buffers and cache, and it was representative of the large size that recent “AAA” games have become.
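The same command can be sanity-checked at a smaller scale before committing to a 100 GB write; this sketch creates a 1 MiB file and verifies its size:

```shell
# Scaled-down version of the test-file creation: 1 MiB instead of 100 GiB
dd if=/dev/urandom of=xfer-testfile bs=1M count=1 2>/dev/null
# dd's bs=1M is a binary megabyte, so the file should be exactly 1048576 bytes
stat -c %s xfer-testfile
rm -f xfer-testfile
```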

For Windows “NAS” performance testing, I used another PC to copy the 100 GB file from the NAS (i.e. D: drive) to a local drive on that PC. Essentially this mimicked downloading a game from the “NAS”. The file transfer was quite quick taking around five or so minutes. I then copied the file back to the “NAS” to mimic uploading a game to the “NAS”. Unfortunately this was pretty slow and when repeated seemed to take anywhere between 40 to 45 minutes.

During the testing I captured some screen shots. And, whilst the file name may change between screenshots as I had to rename it to prevent overwriting, it is always the same 100 GB file.

For uploading, the transfer speed fluctuates, averaging around 40 MB/s which aligns with the 45 minute transfer time and, to be honest, is disappointingly slow. It is worth noting that the Windows “iperf3” speed for uploading is slower than Linux, but only by around 6% so that alone won’t make much difference.
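As a sanity check on that figure, transferring 100 GiB at 40 MB/s works out to roughly the observed time:

```shell
# 100 GiB at ~40 MB/s: total seconds, then minutes
seconds=$(( 100 * 1024 / 40 ))
echo "$(( seconds / 60 )) minutes"   # prints "42 minutes"
```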

Downloading, which was quick, is “as good as it gets” as the 2.5 Gb Ethernet port was basically saturated and the transfer speed was both stable and good averaging around 270 MB/s.

My conclusion from this test was that I needed to find a quicker storage solution for the NAS storage in order to improve the upload speeds. Some of the storage options that I explore in the other scenarios are certainly going to improve the results. So a takeaway of “don’t use Windows” would be wrong based solely on the above performance.

2. Using OpenMediaVault for the NAS software together with ZFS

For this scenario I replaced the Windows NVMe with the SA2000M8500G NVMe drive and installed OpenMediaVault (OMV) directly to it. For NAS storage I left the two Adapters with their hardware configured as SPAN in the SATA drive bays.

Having installed OMV, all the drives were correctly identified.

The easiest way to use OMV is to just use software RAID. For example, I could “mirror” (RAID 1) the two SATA drives using the GUI.

Be prepared to wait though as this takes a very long time to complete the “resync”. Once ready you will now have a new drive (/dev/md0) to use as your NAS storage.

Alternatively you can download the OpenMediaVault Plugin Developers repository and install the OMV plugin for ZFS which is what I did. I then created a mirrored pool “omv” from the SATA drives. I then created a shared folder “shared” on the pool and enabled SMB/CIFS for this folder.
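For reference, what the ZFS plugin does via the GUI amounts to roughly the following commands; the device names here are assumptions (on a real system, check them with `lsblk` first):

```shell
# Hedged sketch of creating the mirrored pool and shared dataset by hand
zpool create omv mirror /dev/sda /dev/sdb   # mirrored (RAID 1) pool "omv"
zfs create omv/shared                       # dataset to expose via SMB/CIFS
zpool status omv                            # verify both drives show as ONLINE
```

These are hardware-bound commands, so treat them as an illustration of the steps rather than something to paste in blindly.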

Similar to the Windows pseudo NAS performance testing, I then used another PC to copy the 100 GB file from the NAS to the local drive on the PC. The download took around 7 minutes and the transfer speed was reasonably consistent hovering around 250 MB/s.

The network graph again shows that the download was running at nearly the maximum permitted by the 2.5 Gb Ethernet connection.

The upload however was somewhat faster than in Windows. It only took around 25 minutes. Again the transfer speed fluctuated and averaged around 60 MB/s.

The upload was faster than Windows, because in this situation ZFS was faster than NTFS.

3. Using TrueNAS for the NAS software

I replaced the NVMe drive for this scenario with the two Intel NVMe drives (SSDPEKNW512G8). I also replaced the SATA drives with the two HDD (ST3000DM001 and WD30EZRS-00KEZB0) drives for NAS storage. I then installed TrueNAS Core onto the first NVMe leaving the second NVMe available for a log (SLOG) or cache (L2ARC). I also created a mirrored (RAID 1) pool “proxmox” from the two HDDs which was then shared with SMB enabled.

For testing I was primarily interested in the “write” performance of the HDDs, so I only copied the 100 GB file from a local PC to the NAS. The transfer speed was faster than the previous configurations plus it was consistent. You can clearly see where the write buffers fill before memory is fully utilised and the transfer is limited only by the speed of the HDDs. On average the transfer speed was 110 MB/s and the upload only took 15 minutes.

I then added the second NVMe drive as a separate intent log (SLOG).

I also turned the write sync to “always” to ensure it got used.
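For reference, those two steps come down to two commands (the NVMe device name here is an assumption; TrueNAS exposes the same options in its GUI):

```shell
zpool add proxmox log /dev/nvme1n1   # attach the second NVMe as a SLOG
zfs set sync=always proxmox          # force every write through the log
```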

I then repeated the upload which was very slightly slower, but as expected due to the write synchronisation. It now took 16 minutes as the average transfer speed had dropped to 105 MB/s.

Obviously there was no point in adding the “SLOG” under these test conditions. I didn’t test adding a “L2ARC” as checking the ARC hits showed there would be no improvement. Besides, adding more memory would be a more appropriate next step if the ARC hits were low.

Even though ZFS has been seen to be faster than NTFS, the HDDs are nearly double the speed of my adapters. At this point, if I had two spare 2.5 inch SSDs I would have liked to have seen the results from a comparative test. However, because of the impact of RAID 1 on SATA drives, I don’t think the transfer speed would have got high enough to saturate the 2.5 Gb Ethernet.

4. Using Proxmox as the hypervisor to create a NAS + router with the router VM using pfSense and the NAS VM using TrueNAS Core

All the previous scenarios have been about providing just a NAS solution. Now it was time to create a “NAS + router” and see what the knock-on effects might be. Using exactly the same hardware configuration as before, with two SSDPEKNW512G8 NVMe drives and two HDD (ST3000DM001 and WD30EZRS-00KEZB0) drives, I installed Proxmox on the first NVMe drive. I then created a VM and installed pfSense followed by a second VM and installed TrueNAS Core. I then initiated the now familiar upload test.

The copying took around 17 minutes and averaged around 100 MB/s.

Similar to the previous scenario, the transfer ran at a relatively consistent speed although it was just slightly slower by 10 MB/s.

So the impact seen in this scenario comes from running the NAS in a VM. This is understandable as ZFS uses memory for caching, which improves performance. I had only allocated half of the available memory, i.e. 8 GB out of the 16 GB, when I created the VM. The previous NAS was a bare-metal installation and so had access to the full 16 GB.

5. Adding an Ubuntu desktop VM with GPU passthrough to the NAS + router

Having spent money on the two Adapters, a purchase which after testing now appears somewhat wasted given they perform worse than my ten-year-old HDDs, I was determined to get some further use out of them.

So I removed the NVMe drives and installed just the SNVS500G NVMe drive. I then removed the two HDDs and replaced them with the two Adapters. As a reminder, each Adapter contained two 1 TB SATA M.2 drives which were configured within the Adapter as “SPAN” so they appeared to a host as a single drive just short of 2 TB.

I then installed Proxmox, created the VMs for pfSense and TrueNAS Core as before, together with an Ubuntu VM based on Ubuntu 23.10.

I configured pfSense with a local (DHCP disabled) LAN and a WAN that allowed access to the AOOSTAR R7 from PCs on a different network to before (purely for testing pfSense).

TrueNAS Core was configured with a mirrored (RAID 1) pool “proxmox” from the two Adapter SATA drives which was then shared with SMB.

After updating Ubuntu I also installed Webmin to allow remote management (just in case).

I then performed a download of the 100 GB file which took around 6 minutes and, despite the slow ramp up of speed at the beginning, the average transfer speed was around 270 MB/s.

Upload was faster than Windows but slower than OMV, again because this was using a VM with half the memory allocation in comparison. The average transfer speed was around 55 MB/s and it took about 30 minutes.

The real test is whether the GPU can be passed through to the Ubuntu VM. This would result in the Ubuntu desktop running on the monitor attached to the AOOSTAR R7 rather than it being a Proxmox login screen.

To fully function as a desktop also requires the mouse and keyboard to be passed through. Given there is a Wi-Fi card installed in the AOOSTAR R7, that might as well be passed through as well.

I found that GPU passthrough worked by following the typical process: First I set up the VFIO driver in Proxmox. Then I extracted the video BIOS and updated the hardware for the Ubuntu VM by adding the PCIe numbers for the GPU and audio together with the “romfile” for the video BIOS.

The Wi-Fi was just passed through as a PCIe device and the mouse and keyboard were passed through by adding USB devices with their respective Vendor and Device IDs. As I had set the BIOS to “OVMF (UEFI)” when creating the VM together with the Machine as “q35”, I then just needed to set Display to “none” and make the GPU PCIe device as the “Primary GPU”. On starting the VM, Ubuntu booted on the monitor as per a normal boot.
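Putting all of that together, the VM configuration ends up looking something like the following lines in `/etc/pve/qemu-server/<vmid>.conf`. The PCIe addresses, romfile name and USB IDs below are placeholders for illustration, not the actual values from my unit:

```
bios: ovmf
machine: q35
vga: none
# iGPU passed through as the primary GPU, with the extracted video BIOS
hostpci0: 0000:04:00.0,pcie=1,x-vga=1,romfile=vbios.bin
# The GPU's companion audio function
hostpci1: 0000:04:00.1,pcie=1
# Intel AX200 Wi-Fi card passed through as a plain PCIe device
hostpci2: 0000:03:00.0,pcie=1
# Keyboard and mouse by vendor:device ID
usb0: host=1234:5678
usb1: host=abcd:ef01
```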

I then mounted the shared NAS drive “proxmox” on the Ubuntu VM and downloaded the 100 GB file followed by uploading it, both of which I timed. The download took 4 minutes 54.860 seconds which makes the transfer speed around 339 MB/s. The upload took 31 minutes 29.542 seconds meaning the transfer speed was around 53 MB/s. Thanks StarTech.

I also added the LAMP stack to Ubuntu and downloaded my own website which I then hosted locally on the VM. By adding a host override rule to pfSense, I could access the site locally as a sub-domain and test out changes.

Finally I ran a couple of benchmarks on the Ubuntu VM. Geekbench 6.2.1 returned a Multi Core score of 3046 and Single Core score of 994. With PerformanceTest 11.0, the CPU Mark was 6318 and the Memory Mark was 2044.

Arguably of more interest was the result I got from running Unigine’s Heaven benchmark. It achieved 32.8 FPS and a score of 826 which was only slightly lower than the bare-metal Windows results of 35.8 FPS and a score of 902.

GPU passthrough is not always guaranteed to work on mini PCs. There are many reasons why including the OS involved and the model and make of the iGPU. The good news is that the AOOSTAR R7 falls into the “working” category, at least with Linux. Now let’s try with Windows.

6. Adding a Windows desktop VM with GPU passthrough to the NAS + router (or, if this fails, adding a remote access Windows desktop VM to the NAS + router). Activating the Windows VM by transferring the Windows desktop licence

The hardware configuration I used here was actually based on the next test scenario, which is why I completely changed everything when all that was really required was to add a Windows VM to the previous scenario’s configuration. I do not believe the hardware configuration affects whether GPU passthrough succeeds, so let’s see what happens.

The NVMe slots were populated with the two SSDPEKNW512G8 NVMe drives, and an MZ-650120 SATA drive was installed for the OS.

For this scenario I installed Proxmox on the SATA drive and created VMs of pfSense, TrueNAS Core, Windows and Ubuntu. TrueNAS Core was configured differently this time. I wanted to test better performance for “write” speeds, so I made the decision that RAID redundancy was unnecessary given there were no mechanical parts in NVMe drives to fail. This enabled me to create a striped (RAID 0) pool “proxmox” across the NVMe drives. I should also mention that this decision was made whilst taking into consideration the grand finale scenario covered next (hint: backups).

I created VMs for both Ubuntu and Windows as I wanted to replicate what I had done for Ubuntu with Windows when attempting GPU passthrough. And, if I had to change anything, I wanted to ensure that the Ubuntu GPU passthrough still worked.

However the first test was to try and activate the Windows installation. Typically the Windows licence key for a digital licence is stored in the UEFI (BIOS) and can be accessed via a Windows Registry key or from the ACPI tables on Linux. So I attempted to activate the Windows VM using the product key I extracted. However Windows refused to activate and said that the product key was already in use on another device.
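For reference, on Linux the embedded key can be read from the ACPI MSDM table with something like the command below. This is a common technique rather than anything AOOSTAR-specific, and the exact output format can vary:

```shell
# The OEM product key is stored as an ASCII string in the ACPI MSDM table
sudo strings /sys/firmware/acpi/tables/MSDM | tail -1
```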

Obviously it couldn’t distinguish this being a VM on the same device and so wouldn’t allow the product key to be shared between the two.

I had to resort to the “Activate by phone” option that logged a case for me and very shortly afterwards I received a telephone call-back. I explained to the support technician that I wanted to transfer my Windows licence from the bare metal installation into the Windows VM I was running on the same device.

I was told to read out a very long number from some setting or other, and to enter another very long number into something else. Suddenly my Windows VM was activated.

Next I modified the Windows VM configuration in exactly the same way as I did for the Ubuntu VM configuration. However, whilst GPU passthrough worked for Ubuntu, Windows steadfastly refused to cooperate.

By turning on Remote Desktop I was able to access the Windows VM. I could see that the GPU appeared to pass through “in part” because it showed up in Device Manager, it worked in GPU-Z and it also showed correctly in HWiNFO64. However, it wouldn’t appear in the Task Manager.

Even this was not consistent. Sometimes when starting the VM, the GPU appeared in Device Manager showing the infamous "Code 43" error.

After spending a couple of days trying to get this to work I gave up. I just could not find a way to get it to work. I was however successful in passing through the Wi-Fi.

I feel that your results may vary for both Windows activation and GPU passthrough, as there doesn’t appear to be definitive documentation for either.

For testing NAS performance with the striped NVMe drives, I copied the same 100 GB of files from a local PC to the NAS. The upload was obviously much faster, initially running at 280 MB/s, but it tailed off at the end and took 7 minutes, averaging 240 MB/s.
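
As a quick sanity check on those figures (treating 100 GB as 102,400 MB):

```python
size_mb = 100 * 1024          # 100 GB of test files, in MB
duration_s = 7 * 60           # 7 minute transfer
print(round(size_mb / duration_s))  # 244 -- consistent with the ~240 MB/s observed
```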

The download, which is normally very quick, took slightly longer this time and was also not as consistent as before. In total it also took 7 minutes, averaging 240 MB/s.

Clearly, using NVMe drives was going to be faster than SATA drives, and striping them should have improved the read speed even further. Whilst the download didn't quite meet my expectations, I only performed one run where I monitored the elapsed time.

There could have been other activity happening on the target PC that I wasn’t aware of at the time which impacted the download. It was however sufficient for the purposes of demonstrating this scenario.

7. Adding a backup drive to the NAS + router configuration

RAID is often conflated with backup, and sometimes a NAS itself is considered to be a backup solution. But as the name suggests, it is only storage: it may form part of a backup solution, but that is different from being the backup itself.

I wanted to test how feasible it was to extend the previous hardware set-up with a hard drive used for a backup pool.

So in the remaining free SATA drive bay I installed a hard drive (a WD2003FZEX). I then created a pool named "backup" on this drive.

To create a simple backup process, I set up a periodic snapshot task to take daily snapshots of “proxmox”.

I also set up a replication task to push these snapshots to the “backup” pool.
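
Under the hood, those two tasks boil down to ZFS snapshot and send/receive operations; a hedged sketch (TrueNAS generates its own snapshot names and schedules):

```shell
# Take a recursive snapshot of the "proxmox" pool (name is an example)
zfs snapshot -r proxmox@daily-2024-01-01

# Replicate the snapshot to the "backup" pool on the same machine
zfs send -R proxmox@daily-2024-01-01 | zfs receive -F backup/proxmox
```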

I then tested a recovery scenario using a mix of ZFS commands in the shell and by using the TrueNAS GUI. I started with ZFS commands to take a snapshot of “backup” which I then cloned. Next using the GUI, as it was simpler for SMB support, I shared the clone as “recovery”.

I was now in a position to check what files were on the “recovery” share that I was missing on the “proxmox” share, assuming this was the reason for performing a recovery. For example, if I had accidentally just deleted a file on the “proxmox” share that I still required, I could simply copy it back from the “recovery” share. Once finished, to clean up I simply had to unshare “recovery” and destroy the cloned “recovery” file share and its snapshot.
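
The ZFS side of that recovery workflow looked roughly like this (dataset and snapshot names are examples; the SMB share and unshare steps were done in the GUI):

```shell
# Snapshot the replicated data, then clone the snapshot into a
# writable dataset that can be shared as "recovery"
zfs snapshot backup/proxmox@recovery
zfs clone backup/proxmox@recovery backup/recovery

# ...copy any missing files back from the "recovery" share...

# Clean up: destroy the clone first, then its origin snapshot
zfs destroy backup/recovery
zfs destroy backup/proxmox@recovery
```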

I highly recommend that after you have set up your backup pool, you thoroughly test and check that the backups are happening as you planned and that your recovery process is both well documented and tested.

Power Usage

Indicative power consumption was measured whilst running Proxmox configured with three VMs as in scenario 5 above:

  • Powered off (shutdown) – 1.1 W
  • pfSense VM running – 16.0 W
  • pfSense and TrueNAS Core VMs running – 16.1 W
  • pfSense, TrueNAS Core and Ubuntu VMs running – 16.2 W

Obviously, in all the VM cases, Proxmox itself was also running.


Hopefully the above scenarios provide some understanding of how the AOOSTAR R7 performs under different circumstances, and help clarify why it's so hard to offer a one-size-fits-all answer to "how does this system perform?"

If you've been keeping track of the transfer speeds in each test scenario, you'll have noticed that downloading was never a problem, as it was always just about as fast as possible given the 2.5 Gb Ethernet limit.

| Scenario | OS | NAS Storage | Download | Upload |
|----------|----|-------------|----------|--------|
| 1 | Windows | Adapter with RAID 1 | 270 MB/s | 40 MB/s |
| 2 | OpenMediaVault | Adapter with RAID 1 | 250 MB/s | 60 MB/s |
| 3 | TrueNAS Core | HDD with RAID 1 | 110 MB/s | |
| 4 | Proxmox with TrueNAS Core VM | HDD with RAID 1 | 100 MB/s | |
| 5 | Proxmox with TrueNAS Core VM | Adapter with RAID 1 | 270 MB/s | 55 MB/s |
| 6 | Proxmox with TrueNAS Core VM | NVMe with RAID 0 | 240 MB/s | 240 MB/s |

Upload speeds can vary greatly though, depending on your choice of OS, memory allocation, drive type and whether RAID is used. At the end of the day, the AOOSTAR R7 has the potential to perform very well, and is only constrained by what is used with it.

When operating as a NAS + router, the dual-fan cooling system seems adequate, especially if 2.5 inch SATA drives are used. Using 3.5 inch SATA drives might obstruct the airflow over the NVMe drives, as the space between them is quite narrow. Equally, they might generate warm air which gets recycled as part of the processor cooling.

If only one 3.5 inch drive is being used, it is worth considering which bay to use; the bay on the opposite side from the components being affected is probably the best choice. Having said that, I did not encounter any high temperatures indicative of a cooling issue.

Under normal load the AOOSTAR R7’s fans are quiet. However, if a VM makes heavy use of the CPU, which consequently causes its temperature to rise, the fans do ramp up very quickly and are quite loud. There are some settings in the UEFI (BIOS) for fan management including a fan curve which is shared by both the CPU and system fans. The default settings are:

  • Fan off at 44°C
  • Start fan at 45°C
  • Run fan at full speed at 90°C
  • Start PWM at 50°C

Some adjustments may be necessary on a case-by-case basis.

Otherwise the UEFI (BIOS) is relatively open. It includes some power configuration choices for AC Failure and WOL. Under AMD CBS there is a UMC subsection allowing memory overclocking (although I’m not sure if this is supported on the AOOSTAR R7) and a NBIO subsection including GFX Configuration for memory allocation to the GPU. There is also a Secure Boot setting allowing it to be disabled.

Overall I’ve been very impressed with the AOOSTAR R7. It does exactly what it claims to do, and that is function as a NAS + router. The CPU is very powerful for just these functions, especially when compared to the processors used by most recent consumer-oriented, off-the-shelf NAS solutions from companies like Synology, QNAP and Asustor.

The iGPU is totally wasted in most scenarios. But if you set up a desktop VM with GPU passthrough you get the best of everything, namely a reasonably powered mini PC (as a VM) together with a NAS and router.

I’d like to thank AOOSTAR for providing the review unit. At time of publication, the AOOSTAR R7 is on sale with prices starting at just $299 for a barebones model purchased from the AOOSTAR website, or $359 for a model with 16 GB of RAM, a 512 GB SSD and Windows 11 Pro when you order from Amazon.


Join the Conversation



  1. Hi,
    Thank you for this article which motivated me to buy this PC 😉
    Can you help me I can’t extract the vbios, can you tell me how you did it? Would it be possible to send your vbios file? THANKS !!

    1. To extract the vbios, look at my reply below date stamped “01/24/2024 at 8:48 AM”. Go to the first link for “PCI Passthrough”, then about halfway down read “How to know if a graphics card is UEFI (OVMF) compatible” and follow the steps to use Alex Williamson’s rom-parser tool.

      1. Thanks ! I followed the guide but the fact is that I am unable to extract the rom file which is not present. I tried extracting the vbios with windows and it also failed. For some unknown reason the vbios does not seem accessible and yet I have the same machine as you. If you ever have any ideas… and if it doesn’t bother you, can you send me your vbios file so I can test it?

  2. I ended up returning the unit after finding out one of the SATA ports was not working. Not sure if the BIOS doesn’t support booting from that SATA port. The same SSD works fine when booting from a USB dock, and it also works from the second SATA port. Anyone encountered this odd issue? Trying to figure out if it’s my fault or the unit was bad.

    1. I encountered a similar issue with a SATA port that triggered the message “ata2.00: irq_stat 0x08000000, interface fatal error” repeatedly when scrubbing. I reached out to their support email and they quickly diagnosed the issue as a faulty SATA backplane. They shipped a replacement part, which arrived in 5 days and resolved the issue.

  3. Hello, thank you for your review, really good…
    I ordered a piece for myself, got it delivered to EU (paid approx 50USD on duties) and I am playing around 🙂
    Wondering if you’ve seen the “extra” connector (looks like PCIe) next to the extension board for the 3.5 inch drives. The board itself is connected by the same type of connector, and there is one extra next to it… Maybe we can utilize that somehow (for an extra disk connection) 🙂

  4. I ended up ordering the R7, which I received last Friday. After testing connectivity I found out SATA port 1 is defective and not detecting drives; I tried 3 different SSDs and none of them are detected. I installed Proxmox on an SSD connected to a Sabrent dual SATA dock and it works fine, so I used the same SSD boot drive connected to SATA port 1 and it keeps dropping to the BIOS, which I’m guessing is due to the drive not being detected. The same drive on the second port works fine.

    I’m returning it; luckily I purchased it from Amazon. Not sure if I should order another unit after the refund is received. This was the perfect device for a small home lab. I don’t think there is anything else comparable.

  5. Excellent review, thank you. Is it possible to replace wifi card with SSD nvme drive? I believe it’s 2230 or 2260

    1. Yes, via an adapter. You could replace the Wi-Fi card with an M.2 Key A/E to M.2 Key M (NVMe) adapter, which are pretty cheap on AliExpress. If cut down to 30 mm in length, it can support an NVMe M.2 2230 storage card like a SABRENT Rocket NVMe PCIe 4.0 M.2 2230 SSD, but it will then only function as PCIe 3.0 using a single lane for throughput.

          1. So it does work with NVME SSD? Don’t want to order and wait for a month then find out it will not work. Thank you

        1. Yes, it works.

          It is actually a really good idea as it frees up the two NVMe M.2 2280 drives which can then be used as storage for a NAS VM.

          Here are some pictures of it (https://imgur.com/a/Ha0MN9C). The first picture shows the adapter cut down to a total length of 42 mm, installed into the Wi-Fi slot and retained with the original screw at the 30 mm point. The second picture shows the NVMe M.2 2230 installed into the adapter and retained with a spare screw at the adapter’s 42 mm point. Notice how the adapter doesn’t lie perfectly flat due to overlapping the second memory slot; however, it is good enough not to be a concern. I also removed the Wi-Fi antennas, which are just stuck down, rather than leaving them loose or taped down as in the first picture.

          The NVMe M.2 2230 is then accessible just like the NVMe M.2 2280 drives although running as x1. I installed Proxmox on the drive, booted from it, installed an Ubuntu 23.10 VM, and then ran my “fio” script from the shell which emulates CrystalDiskMark and got a “Sequential 1MB Block Size 8 Queues 1 Thread” read speed of 912MB/s and write speed of 800MB/s.

          1. That’s awesome, thank you so much. This is my first time installing Proxmox, so this is very beneficial. Do you know how to back up the boot disk (in this case the M.2 2230) for when it fails? There will be so many config changes that it will be hard to remember or back up certain directories/files, then reinstall Proxmox and copy the files/directories back from the backup.
            p.s. It might be a good idea to create forum.liliputing for this type of discussion.

  6. How easy is it to replace the cooling fans in that retro-mac-trashcan device? How long until you have to pay to replace it?

    1. The fan at the bottom of the device could be replaced quite easily. However the one over the processor is part of a CPU Cooling Fan assembly similar to those found in laptops. It is probably best to ask Aoostar directly regarding the specifications of each fan, their SKUs, cost, availability and ease of replacement to ensure the accuracy of the information. The warranty is 12 months with a 30-day return policy.

  7. Thanks for the review! Can you also update the article with power consumption numbers? What is the idle consumption when all disks are powered down? A lot of the Chinese mini PCs and mainboards do not have support for all CPU power states.

    1. I’m not sure I understand your question. Under “Power Usage” above, I briefly covered power consumption including “Powered off (shutdown) – 1.1 W” which seems to be what you are after. Also, see William’s comment below as this also gives some further power consumption details.

  8. Very comprehensive testing, thanks! Lots of stuff I don’t really understand, but I’m learning… xD

    Coincidence or not, my last AliExpress purchase was… this. I was going for the N100 model, but noticed there was this Ryzen 7 model going around, and as I’m planning to use TrueNAS Scale with more than a few apps running, I thought this might be worth the extra investment. It hasn’t arrived just yet, but that’s just the way it is for my country. The seller shipped plenty fast, but it’s gonna spend some time in customs.

    Not planning to use it as a router, but the extra network stuff is welcome anyways. Perhaps I’ll change my mind later on. I’m using a tiny computer as a router/firewall currently, running OpnSense.

    My angle is a little bit different. I already have an old desktop doing the NAS job for me, but it’s just getting a bit too old, and it’s a huge power hog. I’m also having a whole lot of problems trying to run stuff on it and I cannot identify what it is… I actually already had to get new memory, new network card, a new PSU, and tweak a whole bunch of stuff in it and it’s driving me crazy. This desktop has been working a little bit weird for years now on Windows… random crashes and freezes, but it ran stable enough for me not to bother checking. Retiring it and turning into a NAS machine brought all those problems up and forced me to start replacing faulty stuff.

    I managed to make it run Tailscale, Syncthing and PiHole so far, but a whole ton of apps just won’t install properly, I dunno what it is, and I’m getting tired of reinstalling everything.
    I was more successful on a previous install managing to run Nextcloud, Vaultwarden, Plex, FreshRSS plus a bunch of other stuff for testing, but as expected it was hitting the RAM limits and starting to slow down and get a bit wonky.

    It’s actually my last desktop which was used as a video editing station, so you can imagine.
    Anyways, hopefully it won’t be too traumatic to migrate from one machine to another. We’ll see… xD Thanks for the writeup!

  9. Could you elaborate on the process to extract the GPU rom, configure proxmox to use it, etc?

    1. There are a couple of pages on the Proxmox wiki that cover this in detail including links to an extract program for the GPU romfile. Search for “proxmox pci passthrough” and “proxmox pcie passthrough” as links get updated/outdated too regularly!

  10. Using the R7 with 32GB of RAM and 2 SSDs, the power consumption in standby mode under Windows is around 8 watts. With an additional helium disk, it’s about 15 watts, which is much lower than I expected. The integrated graphics are comparable to the 5600G (it seems there’s not much difference between the 5600G and the 5700G). The BIOS allows for memory overclocking, but the downside is that the compact design leads to slightly higher hard drive temperatures, although it hasn’t triggered any high-temperature warnings. I read online that some people suggested replacing the bottom fan, but when I tried using an old fan from home, it didn’t significantly reduce the temperature. Later, I contacted their after-sales support, and they recommended buying an inexpensive USB fan to place under the machine or above it. This method indeed reduced the temperature by 8 degrees when I tested it.

    1. I believe the fan is 92mm x 15mm, so it’s slim and won’t push much air. I’m thinking of removing the case and bottom fan, and attaching a 12/14cm fan to the front of the chassis (with zip ties or similar) so there is more airflow in all areas, going from front to back instead of bottom to top

  11. I’m so happy it comes with Windows 11! It will save me forking over money to Microsoft!

    1. Your name is funny but true story.

      Just today I used Windows 11 for the first time in my life for VM testing and found out it is pure trash. Explorer Patcher fixed some issues, but the OS is still unusable; I had to debloat it and tweak some registry settings to make it somewhat “just fine”.

      It would be wonderful if all websites did reviews of devices running Windows 10 instead of 11, to make sure Microsoft understands they are not doing any good for humanity. It’s a pity most websites only do things for money, and Microsoft has lots of it 🙁

      1. You truly are a stronger human being than I. The last update I ever enjoyed on Microsoft was some of the improvements in Notepad, as for advances in their Paint app I never made it that far to see that as now I have no more will left to bother with windows other than to help other people’s Windows problem. I fully expect to no longer be able to use Windows in the future as it slowly progresses into a cloud-based Android app.

  12. Thank you very much! I’m a nas newbie and have been looking at R7 lately, with your test I can choose the NAS system I want

  13. One scenario which would have been good for this review would be using it as a media server, be it in Windows or Linux. I believe that at least a few of these will be set up that way.

    1. Exactly. It doesn’t have to just be a “NAS + router” as the hardware is versatile to support many different uses and configurations. Maybe I should have also looked at installing Jellyfin in a LXC container on Proxmox and passing through the iGPU for video hardware acceleration.

        1. I have the same unit, currently running VMware ESXi 8 U2 on it! I maxed the unit out with 2x 20TB IronWolf drives for NAS storage, 64GB of RAM, and 2 NVMe drives: one for ESXi and one for virtual machines. Currently I am using pfSense as a firewall, and I have also installed Xpenology and passed through the drives. I am using RAID on them and it’s working perfectly. I have also activated the VMs using the built-in license, which works perfectly.

           I am facing two issues. First, the cheap fan they used is not sufficient to cool this thing, so I purchased a Noctua fan of the same size (it has an extra pin), but it spins at 100% and runs like a server. Not sure why the BIOS settings are not controlling it, so I am thinking of converting it into a DIY USB-powered fan that I’ll run from the outside: either put it on top to suck the hot air out, or somehow mount it at the bottom to push more air into the unit.

           The major issue is that I run Plex, and this is my first AMD CPU device in the last 10 years. There is no Quick Sync technology in AMD, and thus when it cannot transcode, videos/movies freeze, so currently I have turned off transcoding to see which of my devices can and cannot handle it. Turns out my 2017 LG smart TV is having issues while my iPhone 12 handles it perfectly. So now I’m not sure what to do.

           Lastly, I tried so badly to get GPU passthrough working in VMware but could not; I also loaded Proxmox and was not able to get GPU passthrough there either. However I would love to get that working. Just for that I would be willing to leave ESXi lol…

        1. Hi Sam,
          I have also ordered the R7 and am waiting for it to be delivered; currently I’m running Proxmox on a laptop just to test it. I’m trying to figure out if it’s possible to replace the Wi-Fi card with an M.2 NVMe 2230 disk. Not sure if you have tried it.

          1. Hi Nirav, may I ask why you’re trying to remove the Wi-Fi card when there are already two full-size NVMe slots? Are you trying to add more storage?

          2. Hi Sam,

            Yes, probably in future if the need arises. Just wanting to know if it will work since you are already using the device. I have already planned out exactly how much storage to use and of what type, and I’m trying to stay within budget; it can easily get out of hand.

          3. Hi Sam,

            Forgot to mention, I’m currently using a USB fan purchased for about $11 from Amazon, which has a built-in speed switch. I’m using it for my OPNsense firewall at low speed and it works extremely well: very silent even at high speed. Just thought it might be helpful:
            SCCCF Quiet 120mm USB Fan, 5V USB Portable Cooling Fan https://a.co/d/az4SOTm

          4. Hi Nirav, yaar thanks so much!! This fan is perfect for the USB fan solution lol!! I just ordered two; this will solve the air issue. Now to see what I can do with GPU passthrough. If I can’t get the internal GPU to work, I will just buy an external enclosure and pass a GPU through that way in VMware. (I have not yet decided to go to Proxmox, but I might eventually as it does offer more hardware support; however, I use ESXi for work every day and I am so used to it.)

        2. For GPU passthrough on Proxmox, follow the Proxmox wiki information for “PCI Passthrough” (https://pve.proxmox.com/wiki/PCI_Passthrough) and “PCI(e) Passthrough” (https://pve.proxmox.com/wiki/PCI(e)_Passthrough) using an Ubuntu VM (I used Ubuntu 23.10) as this works. I couldn’t get a Windows 11 VM to work properly with GPU passthrough, which probably explains your difficulties.

          I also set up Jellyfin on Proxmox in a Linux “privileged” container using a CT Template based on Ubuntu 23.10 by adding the Jellyfin GPG key and repo to APT, then installing Jellyfin. For GPU Passthrough I followed the Intel GPU Hardware Acceleration on the Jellyfin documentation (https://jellyfin.org/docs/general/administration/hardware-acceleration/intel) starting at “LXC And LXD Container” and installed intel-media-va-driver-non-free and ocl-icd-libopencl1 just to be safe for the “drivers” at point 1. I also had to modify some group IDs to align them properly before configuring Jellyfin to use VA-API acceleration. Note the AMD GPU Hardware Acceleration points to the Intel documentation for “Other Virtualizations” which is why I’ve covered it above.

          Without GPU passthrough on Jellyfin, watching a movie winds up the fans to screaming “bean sí” levels as the processor tries to cope. The difference with GPU passthrough is just “massive”.

          1. Hi Linuxium, thank you so much for your reply man! I have tested a Proxmox machine today with a GT 1030 2GB and it worked. However, I did not need to perform a vbios extraction as you had mentioned above in your testing. I saw the links you posted; I will have to spend time to figure this out lol. As for the “hostpci0: 01:00,x-vga=on,romfile=vbios.bin” part, I will get back to you, but thank you for putting me on the right path!! With that being said, was this part necessary? The Nvidia GPU worked without it.

          2. The “romfile=vbios.bin” was necessary on the AOOSTAR R7 in order to pass through the AMD iGPU. It didn’t work without it during testing, but each configuration is different according to the documentation.

          3. I will test it out this weekend! But good news: I fixed my Plex transcoding! Running Plex in Docker seems to do the trick! Now the last piece of the puzzle is GPU passthrough and I am all set! I have also been thinking of getting an external GPU for gaming and connecting that, as I do game a few times a year lol.