Adding a GPU to a NAS for AI workloads is possible on a narrow set of hardware, but most NAS devices cannot accept one. QNAP's higher-end workstation-class NAS models (the TVS-H series) have PCIe expansion slots, and some Thunderbolt-equipped models can connect an external GPU enclosure. Synology, UGREEN, and QNAP's standard lineup do not support GPU expansion at all. For most people asking this question, the practical answer is: use a dedicated mini-PC or homelab server instead.
In short: Only QNAP's workstation NAS (TVS-H series) supports GPU or PCIe expansion. Synology, UGREEN, Asustor, and TerraMaster NAS devices have no GPU expansion path. If you need GPU inference for AI, a mini-PC running Ollama costs far less and delivers better performance.
Why Most NAS Devices Have No GPU Path
Consumer and prosumer NAS devices are built around low-power, low-heat SoCs or embedded Intel Celeron/Pentium CPUs. The board designs prioritise drive bays, SATA controllers, and network ports. PCIe lanes are allocated to M.2 NVMe cache slots or 10GbE expansion cards, not graphics.
The PCIe slots you see on a QNAP TS-464 or TS-873A are primarily designed for QM2 NVMe expansion cards and 10GbE/25GbE network cards, not GPUs. The physical slot may accept a GPU card, but driver support in QTS is limited, power delivery may be insufficient, and there is no guarantee the OS will expose the GPU to Docker containers in a useful way without significant configuration.
This is not a software limitation that will be patched away. It reflects a hardware design choice. If GPU AI inference is a serious workload, a NAS is the wrong starting point.
QNAP TVS-H Series: The Exception
QNAP's workstation-class TVS-H series is purpose-built for exactly this kind of expansion. These use Intel Core i3/i5/i7 processors (including Core Ultra generations with integrated NPUs) and provide proper PCIe x16 slots designed for GPU cards alongside multiple drive bays.
The QNAP TVS-H874X is the flagship option in Australia, stocked at Scorptec. It has a PCIe x16 slot that can accept consumer GPU cards from NVIDIA or AMD. With a compatible GPU installed, QNAP's Container Station can expose the GPU to Docker containers, enabling CUDA-accelerated inference with tools like Ollama, LocalAI, or ComfyUI.
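Before touching Container Station, it is worth confirming the OS can see the card at all. A minimal sketch over SSH, assuming QNAP's NVIDIA driver package has been installed from the App Center (commands require the card to be present, so outputs will vary):

```shell
# Confirm the card appears on the PCIe bus at all.
lspci | grep -i 'vga\|nvidia'

# Driver-level check: if this prints the card name and VRAM,
# the NVIDIA driver is loaded and Container Station can use it.
nvidia-smi --query-gpu=name,memory.total --format=csv
```

If `nvidia-smi` fails here, no amount of Docker configuration will expose the GPU to containers.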
| Model | QNAP TVS-H874X |
|---|---|
| CPU | Intel Core i5-1235U / i7-1255U (12th Gen) |
| RAM | 8GB DDR4 (expandable to 64GB) |
| Drive Bays | 8 x 3.5" SATA + 2 x M.2 NVMe |
| PCIe Expansion | 1 x PCIe x16 slot (GPU-capable) |
| Network | 2 x 2.5GbE + 2 x 10GbE SFP+ |
| Thunderbolt | No |
| AU Price (Scorptec) | ~$8,999 |
Power delivery warning: The TVS-H874X's PCIe slot supplies at most the standard 75W that the PCIe specification allows from the slot itself. High-end consumer GPUs (RTX 4070 and above) require supplemental PCIe power connectors the enclosure may not provide. Mid-range cards (RTX 3060, RTX 4060) are a safer fit, but verify the unit's PSU offers the cables your card needs, and check QNAP's compatibility list before purchasing.
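The slot-power arithmetic is easy to sanity-check against NVIDIA's published board power (TGP) figures. A quick sketch (the 75W figure is the PCIe slot-power limit; TGP values are NVIDIA's published ratings):

```shell
# A PCIe slot supplies at most 75W by spec; anything above that
# must come from supplemental PCIe power connectors.
slot_budget=75
for card in "RTX 4060:115" "RTX 3060:170" "RTX 4070:200" "RTX A2000:70"; do
  name=${card%%:*}   # text before the colon
  tgp=${card##*:}    # number after the colon
  if [ "$tgp" -gt "$slot_budget" ]; then
    echo "$name (${tgp}W): needs a supplemental power connector"
  else
    echo "$name (${tgp}W): slot power alone is enough"
  fi
done
```

Note that even the "mid-range" cards exceed 75W, which is why checking the unit's PSU cabling matters as much as the slot itself; low-power workstation cards like the RTX A2000 are the only class that runs from slot power alone.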
eGPU via Thunderbolt: Possible but Constrained
QNAP makes Thunderbolt-equipped NAS models (TVS-H674T, TVS-H874T) that can theoretically connect to an external GPU (eGPU) enclosure. In practice, eGPU setups on Linux-based NAS OSes are unreliable.
The challenges are real: Linux Thunderbolt support for eGPU has improved but remains finicky compared to macOS or Windows. QNAP's QTS is a hardened Linux fork, which limits kernel module flexibility. And eGPU enclosures such as the Razer Core X or Sonnet's Breakaway Box deliver less bandwidth than a native PCIe slot: Thunderbolt carries at most four PCIe lanes, roughly halving effective GPU throughput versus a native x16 connection in bandwidth-sensitive workloads.
For AI inference workloads, which are already memory-bandwidth-constrained, this penalty is significant. A 7B model running on an RTX 3060 via eGPU on a QNAP Thunderbolt NAS will be noticeably slower than the same card in a desktop or mini-PC PCIe slot.
eGPU is worth attempting only if you already own the equipment. It is not a recommended first-purchase path for AI workloads.
What About QNAP AI Accelerator Cards (QM2)?
QNAP's QM2 series cards are frequently mentioned in the same conversation as GPU expansion. They are not GPU cards. QM2 cards are NVMe SSD expansion cards that add M.2 slots and sometimes 10GbE ports. They occupy a PCIe slot but they do not provide AI acceleration.
QNAP has also announced AI accelerator add-in cards (the QAI series) designed for edge inference in enterprise deployments. These are not widely stocked in Australian retail, and they are purpose-built for specific inference tasks (video analytics, object detection) rather than general-purpose LLM inference.
If you see "AI NAS" marketing on a QNAP product, check whether the AI features run on dedicated hardware or leverage the host CPU with optional cloud API calls. Many marketed AI features fall into the latter category.
The Honest Cost-Benefit Calculation
A QNAP TVS-H874X with a mid-range GPU (RTX 4060, ~$650 in Australia) runs approximately $9,650 before drives. An Intel N100 mini-PC capable of running 7B models via CPU inference costs $350-500. A Beelink EQ12 or Minisforum MS-01 running Ollama is more practical, quieter, and draws far less power for the same CPU-inference tasks.
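The gap in up-front cost is stark enough to be worth spelling out. A quick sketch using the AU street prices cited above (the mini-PC figure assumes an N100-class unit for CPU inference):

```shell
nas=8999       # QNAP TVS-H874X, no drives (Scorptec)
gpu=650        # RTX 4060
minipc=450     # N100-class mini-PC, CPU inference
total=$((nas + gpu))
echo "NAS + GPU: \$$total vs mini-PC: \$$minipc (~$((total / minipc))x the outlay)"
```

That is roughly a 21x difference before a single drive is installed.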
The TVS-H874X makes sense when you need both large centralised storage (8 bays) and GPU inference in the same appliance for a workgroup or SMB. For a home user or solo AI experimenter, buying a TVS-H874X for AI is like buying a truck to commute.
GPU Expansion Options Compared
| | QNAP TVS-H874X + GPU | QNAP TVS-H874T + eGPU | Mini-PC (e.g. Beelink SER8) |
|---|---|---|---|
| GPU support | Native PCIe x16 | Thunderbolt eGPU (patchy) | Dedicated GPU models available |
| AI inference throughput | Full GPU speed | ~50% of native (TB bandwidth limit) | CPU or GPU depending on model |
| Storage bays | 8 x 3.5" | 8 x 3.5" | None (external only) |
| NAS OS | QTS / QuTS Hero | QTS / QuTS Hero | Proxmox / Unraid / Linux |
| AU entry price (no drives) | ~$9,650 with RTX 4060 | ~$8,500+ with eGPU enclosure | ~$450-900 |
| Power draw (with GPU) | ~200-350W continuous | ~200-350W continuous | ~20-80W |
| Best for | SMB: storage + AI in one box | SMB: existing setup, add eGPU | Home/solo: dedicated AI inference |
Australian Buyers: What Is Actually Stocked
Australian availability for QNAP's PCIe-capable NAS is limited. The TVS-H874X is available at Scorptec. Most other TVS-H models (TVS-H674, TVS-H474) are listed but may be on indent order rather than warehouse stock. Check with the retailer before ordering.
eGPU enclosures suitable for this use case (Razer Core X, Sonnet Echo Box) are available via Scorptec and Amazon AU. GPU cards (NVIDIA RTX 4060, RTX 3060) are standard retail stock at Scorptec, PLE, Mwave, and MSY.
Electricity is also worth factoring in when running a GPU-equipped NAS 24/7 in Australian conditions. An RTX 4060 under moderate AI inference load adds ~100-120W to baseline NAS draw. At Queensland's average residential rate of ~$0.30/kWh, that adds roughly $260-315 a year; at NSW or Victorian rates (~$0.35/kWh), closer to $305-370. The NAS power cost calculator can estimate your specific scenario.
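The annualised figure follows directly from the extra wattage and your tariff. A small sketch, assuming the midpoint of the 100-120W range:

```shell
# Annualised running cost of the GPU's extra draw.
watts=110   # assumed midpoint of the 100-120W extra load
kwh_year=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 24 * 365 / 1000 }')
for rate in 0.30 0.35; do
  awk -v k="$kwh_year" -v r="$rate" \
    'BEGIN { printf "At $%.2f/kWh: ~$%.0f/year\n", r, k * r }'
done
```

Substitute your own tariff and measured draw for a tighter estimate.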
Australian Consumer Law applies to hardware purchases from Australian retailers. If a PCIe GPU card is faulty within a reasonable time, you have repair, replacement, or refund rights regardless of the manufacturer's warranty card terms. Grey import GPU cards from overseas (no Australian distributor) forfeit these protections.
When GPU Expansion on NAS Makes Sense
There are legitimate reasons to pursue GPU expansion on a NAS, but they are narrow:
- You already have a TVS-H series QNAP and want to add inference without buying separate hardware
- Centralised GPU for a workgroup where multiple users will share the inference endpoint via Ollama's API
- AI surveillance analytics on an existing 8-bay NAS deployment where per-camera AI object detection justifies the GPU
- ComfyUI or Stable Diffusion as a shared service for a small creative team
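For the shared-endpoint case above, Ollama exposes an HTTP API on port 11434 that any machine on the LAN can query. A minimal sketch (the IP address is a placeholder for your NAS; `llama3` is an example model name):

```shell
# Query a shared Ollama endpoint running on the NAS.
# Replace 192.168.1.50 with your NAS's address.
curl http://192.168.1.50:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Summarise this backup report in two sentences.",
  "stream": false
}'
```

One GPU in the NAS can serve every workstation in the office this way, which is the main argument for the centralised approach.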
In all other cases, a dedicated mini-PC or a homelab build (Unraid/TrueNAS on a tower) running alongside your NAS is more cost-effective, easier to maintain, and more upgradeable. The NAS does storage. The inference box does inference.
Common Mistakes to Avoid
Mistake 1: Assuming any PCIe slot on a NAS accepts a GPU. Most PCIe slots on standard NAS (TS-464, TS-873A, DS1821+) are x4 or x8 electrically, and the NAS OS ships no GPU drivers. Even if a GPU physically fits, it will generally not work for CUDA workloads.
Mistake 2: Buying an eGPU enclosure expecting plug-and-play. eGPU on Linux requires manual kernel module work. QNAP's QTS has limited kernel customisation. Plan for several hours of troubleshooting even in the best case.
Mistake 3: Sizing the power supply around the NAS alone. A GPU under AI inference load can draw as much power as the rest of the NAS combined. Verify the unit's PSU rating before installing a high-TDP GPU card.
Related reading: our NAS buyer's guide, our NAS vs cloud storage comparison, and our NAS explainer.
Use our free AI Hardware Requirements Calculator to size the hardware you need to run AI locally.
Can I add a GPU to a Synology NAS?
No. Synology does not support GPU cards; the PCIe slots on its higher-end models are reserved for network cards and M.2 adapters. Synology's AI features (photo recognition, smart search) use Synology's cloud services or on-device models that run on the CPU. If you need GPU inference, Synology is not the platform for it.
Will an NVIDIA GPU work in a QNAP TVS-H with Docker?
On TVS-H models with proper PCIe x16 slots (such as the TVS-H874X), NVIDIA CUDA works via Docker with the NVIDIA Container Toolkit installed in Container Station. QNAP has published setup guides for this. AMD GPU support (ROCm) is less tested and not officially supported by QNAP.
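A common first check, assuming the NVIDIA Container Toolkit is configured, is to run NVIDIA's CUDA base image and ask it for `nvidia-smi` output (this requires the GPU and driver to already be working on the host):

```shell
# If pass-through works, this prints the same GPU table
# you would see running nvidia-smi on the host.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this fails, fix the container runtime before troubleshooting any AI workload on top of it.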
What is the difference between a GPU and an NPU in a NAS context?
A GPU is a general-purpose parallel processor used for AI training and inference, gaming, and compute tasks. An NPU (Neural Processing Unit) is purpose-built for specific inference tasks and uses far less power. Modern Intel Core Ultra CPUs include an integrated NPU. For NAS use, an NPU handles tasks like AI photo recognition; a GPU is needed for running large language models or image generation. See our NPU explained guide for more detail.
Can I run Ollama on a NAS with GPU acceleration?
On QNAP TVS-H series NAS with a compatible NVIDIA GPU installed and Container Station configured, yes. Ollama has an official Docker image that supports NVIDIA CUDA. On standard NAS (Synology, UGREEN, QNAP TS series), Ollama runs via Docker but uses CPU inference only, which is significantly slower. See our Ollama on Synology guide for a CPU-only setup walkthrough.
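Assuming the GPU is visible to the container runtime, the GPU-enabled setup follows Ollama's official Docker documentation (the model name here is an example):

```shell
# Start Ollama with GPU access; model data persists in the "ollama" volume.
docker run -d --gpus=all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama

# Pull and run a model interactively inside the container.
docker exec -it ollama ollama run llama3
```

Omit `--gpus=all` on a standard NAS and the same commands fall back to CPU inference.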
Is a QNAP TVS-H874X worth buying for AI workloads?
Only if you genuinely need the 8-bay NAS storage alongside the GPU inference capability. At ~$9,000 without drives or GPU, it is a substantial investment. A homelab build (Unraid tower + separate NAS) achieves the same GPU inference capability at lower cost but requires more configuration. For a small business needing a single-appliance solution, the TVS-H874X is defensible. For home use, it is almost certainly overkill.
What eGPU enclosures work with QNAP Thunderbolt NAS?
QNAP has compatibility notes for the Razer Core X and some Sonnet enclosures with Thunderbolt-equipped TVS-H models. Setup requires command-line configuration in QTS and is not guaranteed to work across firmware updates. Check QNAP's compatibility list and community forums before purchasing an eGPU enclosure specifically for NAS use.
Trying to decide between a GPU-capable NAS and a dedicated mini-PC for local AI? The comparison guide covers performance, cost, and noise trade-offs in detail.
Mini-PC vs NAS for Local AI