AI workloads running on a NAS, whether that's local large language model inference, image recognition, facial detection in surveillance, or vector database search, place fundamentally different demands on hardware than traditional file sharing or backup tasks. A NAS that handles a household's Plex streams and Time Machine backups without breaking a sweat can grind to a halt the moment you ask it to run a local AI model or process a batch of photos with machine learning. Understanding the hardware requirements before you buy (RAM capacity, CPU architecture, NPU availability, and storage throughput) will save you from buying a device that can't do what you need it to do.
In short: For meaningful AI workloads on a NAS, you need a minimum of 8GB RAM (16GB+ preferred), a modern x86 CPU or ARM chip with dedicated AI acceleration, ideally an NPU or GPU for inference tasks, and fast NVMe storage for model loading. Budget NAS hardware with 2-4GB RAM and a low-power Realtek or older ARM chip will not run AI applications effectively, and some won't run them at all.
Why AI Workloads Are Different
Traditional NAS tasks (serving files, running backups, hosting Plex) are largely I/O bound. The limiting factor is how fast data moves between disks and the network. The CPU is mostly idle, and RAM requirements are modest. A 2GB or 4GB system can handle these tasks comfortably.
AI workloads are fundamentally different. Running inference on a local language model, processing images through a neural network for face or object recognition, or indexing files using semantic embeddings all require:
- Sustained CPU or accelerator compute: not brief bursts, but extended arithmetic-heavy processing
- Large working memory: AI models load entirely into RAM or VRAM before inference begins; a 7-billion-parameter model quantised to 4-bit needs approximately 4-5GB of RAM just to load, before the OS and other applications take their share
- Fast storage for model loading: reading a multi-gigabyte model file from spinning hard drives on every startup is slow; NVMe dramatically reduces this
- Thermal headroom: compact, passively cooled NAS enclosures can throttle under sustained AI compute, degrading performance over time
The gap between a NAS that can run AI software and one that can run it usefully is significant. Understanding each hardware component helps you make the right call.
RAM: The Most Important Specification for AI on a NAS
RAM is the single most important hardware variable for AI NAS workloads. CPU or NPU performance affects how fast inference runs, but insufficient RAM means the application simply cannot run at all: AI models must be loaded into memory before they can process anything.
Practical RAM requirements by AI task type (a rough sizing sketch follows the list):
- AI-powered surveillance (face detection, object recognition): 4GB minimum, 8GB recommended for multi-camera setups
- Photo indexing and semantic search (e.g. Synology Photos AI, QNAP's AI-powered Moments): 4GB workable, 8GB for large libraries
- Local LLM inference (small models, 1B-3B parameters): 8GB absolute minimum; model quality is heavily constrained
- Local LLM inference (7B parameter models at 4-bit quantisation): 16GB, the practical entry point for usable local AI chat
- Local LLM inference (13B+ models): 32GB+, outside the scope of most current NAS hardware
- Vector database and RAG (Retrieval-Augmented Generation): 8-16GB depending on index size
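As a rough sizing rule, a model's load footprint is its parameter count times its quantised weight size, plus runtime overhead. Here's a minimal Python sketch; the 1.2x overhead factor is an assumption covering runtime buffers and KV cache, and actual usage varies by inference engine:

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Estimate the RAM needed to load a quantised model.

    The overhead multiplier is an assumed allowance for runtime
    buffers and KV cache; real figures vary by inference engine.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params, bits in [(3, 4), (7, 4), (13, 4)]:
    print(f"{params}B model @ {bits}-bit: ~{model_ram_gb(params, bits):.1f} GB")
```

The sketch estimates the model alone; the OS, file services, and other applications take their share on top, which is why the system-level figures above are higher than the raw model footprint.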
The RAM constraint is why entry-level NAS devices (models shipping with 2GB or 4GB of soldered, non-upgradeable RAM) are largely unsuitable for meaningful AI workloads. This includes most value-series Synology models (such as the DS223J), budget Asustor models, and the lower QNAP consumer range.
Soldered vs upgradeable RAM: Many NAS devices have RAM soldered directly to the board with no upgrade path. Before buying a NAS for AI workloads, confirm whether RAM is upgradeable. Models like the QNAP TS-464 (from $989 at PLE Computers and Scorptec) support RAM expansion, which future-proofs your investment as AI model sizes grow. Models with soldered 4GB RAM are a dead end for AI use.
CPU: Architecture Matters More Than Clock Speed
NAS CPUs span a wide range, from ultra-low-power ARM chips designed purely to shuffle files to full desktop-class Intel Core processors in premium models. For AI workloads, architecture matters more than raw clock speed.
CPU tiers for AI NAS use:
- Not suitable for AI: Realtek RTD1296/RTD1619B, older Marvell Armada chips. These are file-serving processors with minimal floating-point capability and typically paired with 1-2GB of non-upgradeable RAM. They can run vendor AI software in name only; practical performance is negligible.
- Entry-level AI capable: Intel Celeron J4125, Intel Celeron J6412. These quad-core x86 chips with Intel UHD graphics handle basic neural network acceleration: adequate for photo indexing and single-camera AI surveillance, but not for LLM inference.
- Mid-range AI capable: Intel Core i3/i5 (12th gen and newer), AMD Ryzen R1600/V1500B. These offer meaningful compute for AI tasks, especially when paired with 16GB+ RAM. The QNAP TVS-H674-I3 (available at Scorptec) falls into this category.
- High-performance AI: Intel Core i5/i7/i9 (12th-13th gen), found in top-tier QNAP TVS-H series. These chips can run quantised 7B LLMs at acceptable token generation rates and handle complex multi-model AI pipelines.
One often-overlooked factor: modern Intel CPUs include Intel Deep Learning Boost (VNNI instructions) from the 10th generation onward. These vector neural network instructions significantly accelerate integer-based AI inference on compatible software, no NPU required. If you're running QNAP's AI applications or open-source inference tools that leverage VNNI, a 12th-gen Intel Celeron outperforms an older Core i5 for AI specifically, despite the lower overall benchmark score.
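If you have shell access to a NAS (or a candidate board) running Linux, you can confirm VNNI support by checking the CPU flags the kernel reports. A minimal sketch, assuming Linux and a reasonably recent kernel; flag names can vary across kernel versions:

```python
# Look for VNNI instruction-set flags in /proc/cpuinfo (Linux only).
from pathlib import Path

cpuinfo = Path("/proc/cpuinfo").read_text()
flags = next(line for line in cpuinfo.splitlines()
             if line.startswith("flags")).split()

for flag in ("avx_vnni", "avx512_vnni"):
    print(f"{flag}: {'yes' if flag in flags else 'no'}")
```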
NPU: What It Is and When You Actually Need One
NPU stands for Neural Processing Unit: a dedicated hardware block designed specifically for the matrix multiplication operations that dominate AI inference. Unlike a CPU (which handles general-purpose tasks) or a GPU (which handles massively parallel graphics and compute), an NPU is optimised purely for the type of arithmetic that neural networks perform.
Why an NPU matters for NAS AI workloads:
- Power efficiency: an NPU can perform AI inference at a fraction of the power draw of a CPU doing the same task, which matters in a 24/7 always-on device like a NAS
- Real-time processing: for tasks like live video analytics, continuous photo indexing, or real-time object detection, an NPU handles the workload without competing with the OS and file-serving tasks on the main CPU cores
- Freeing the CPU: when the NPU handles inference, the CPU remains available for file transfers, RAID calculations, and other NAS functions, preventing the AI workload from degrading core NAS performance
Current NPU-equipped NAS models available in Australia:
The Asustor AS6704T (from $1,013 at Mwave) and AS6804T (from $2,175 at Mwave) are built on the Intel N305 platform, which includes Intel's integrated NPU. This NPU delivers hardware-accelerated AI inference directly on the NAS without an external GPU.
QNAP's AI NAS initiative, branded as QAI, targets the TS-464 series and above with software-based AI acceleration, leveraging Intel's VNNI instructions rather than a discrete NPU. The TVS-H series with Core i-series processors takes this further, with sufficient CPU horsepower to run meaningful inference workloads without dedicated NPU hardware.
Do you need an NPU? For most home and SMB AI NAS use cases (photo indexing, smart search, AI-powered surveillance with a handful of cameras), a capable modern CPU with VNNI support is adequate. An NPU becomes genuinely valuable when:
- You're running AI inference continuously (always-on video analytics across 8+ cameras)
- You need AI inference to run without impacting file-serving performance
- You're running multiple AI models simultaneously
- Power efficiency matters for 24/7 operation
GPU Acceleration: A Different Approach
Some NAS platforms support external GPU acceleration via PCIe slots, primarily the higher-end QNAP TVS-H series. A mid-range NVIDIA GPU in a PCIe slot transforms LLM inference performance: what takes 30-60 seconds per response on a CPU alone can drop to 2-5 seconds with GPU acceleration, depending on the model and quantisation level.
This approach suits technically advanced deployments: developers, researchers, or businesses running private AI assistants on-premises. The QNAP TVS-H874 and TVS-H874T series (available at Scorptec) support PCIe GPU expansion, making them the closest thing to a dedicated AI inference server in a standard NAS form factor.
The trade-offs are real: PCIe GPUs add significant cost, increase power draw and heat output, and require compatible enclosures. For most NAS buyers, integrated NPU or strong CPU-based inference is the more practical path.
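The size of that gap follows from a simple observation: LLM token generation is largely memory-bandwidth bound, since each generated token streams roughly the whole model through memory. A back-of-envelope sketch; the bandwidth figures are illustrative assumptions, and real-world throughput lands well below these ceilings:

```python
def tokens_per_sec_ceiling(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound: one full pass over the model weights per generated token."""
    return bandwidth_gb_s / model_gb

model_gb = 4.2  # 7B model at 4-bit quantisation

# Assumed bandwidth figures, for illustration only.
print(f"Dual-channel DDR4 (~38 GB/s):   ~{tokens_per_sec_ceiling(38, model_gb):.0f} tok/s ceiling")
print(f"Mid-range GPU VRAM (~300 GB/s): ~{tokens_per_sec_ceiling(300, model_gb):.0f} tok/s ceiling")
```

The roughly order-of-magnitude gap between those two ceilings is exactly the CPU-versus-GPU difference described above.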
Storage: NVMe Changes the AI Experience
Storage speed affects AI NAS performance in two distinct ways: model loading time and data throughput during inference.
Model loading: A 4GB quantised language model stored on a 7,200RPM HDD takes 15-30 seconds to load into RAM at HDD read speeds of 150-200MB/s. The same model on an NVMe SSD loads in 2-4 seconds at 3,000MB/s+. For a NAS that may need to load models on demand rather than keeping them resident in RAM permanently, this difference is meaningful.
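The arithmetic behind those figures is just model size divided by sequential read speed. A quick sketch, using typical sequential-read speeds as assumptions (real loads run somewhat slower than these ideals):

```python
def load_seconds(model_gb: float, read_mb_s: float) -> float:
    """Ideal time to stream a model file into RAM at a given read speed."""
    return model_gb * 1000 / read_mb_s

# Assumed typical sequential read speeds, in MB/s.
for medium, speed in [("7,200RPM HDD", 180), ("SATA SSD", 550), ("NVMe SSD", 3000)]:
    print(f"{medium}: ~{load_seconds(4.0, speed):.0f}s to load a 4GB model")
```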
Data throughput during inference: If your AI workflow involves processing large batches of files (indexing thousands of photos, running documents through a vector embedding model), the speed at which data can be read from storage directly affects throughput. NVMe-cached or all-NVMe storage pools handle these read-intensive AI workloads far better than HDD-only configurations.
Recommended storage configuration for AI NAS:
- M.2 NVMe for model storage and AI application data: Most AI-capable NAS models include M.2 NVMe slots. Use these for AI model files, databases, and application working directories, not just SSD cache
- HDD pool for bulk data: Photos, documents, and files being processed by AI can live on HDD; only the model files and active inference data need NVMe speeds
- RAM-based caching: On QNAP's QuTS Hero platform, ZFS ARC read caching in RAM can accelerate repeated reads of AI model files, a practical reason to install more RAM than the minimum requirement (the timing sketch after this list shows the effect)
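You can observe that caching effect directly by reading the same model file twice and comparing timings; the second pass is typically served from RAM (ZFS ARC on QuTS Hero, the OS page cache elsewhere). A minimal sketch with a hypothetical model path (if the file was read recently, both passes will already be fast):

```python
import time

MODEL_PATH = "/share/Models/llama-7b-q4.gguf"  # hypothetical path; adjust to yours

def timed_read(path: str, chunk_mb: int = 64) -> float:
    """Stream the whole file and return the elapsed wall-clock time."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    return time.perf_counter() - start

print(f"First read:  {timed_read(MODEL_PATH):.1f}s")
print(f"Second read: {timed_read(MODEL_PATH):.1f}s  (usually far faster, served from cache)")
```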
Note: If you're evaluating a Synology NAS for AI workloads, check the M.2 NVMe compatibility list carefully. Following the 2025 drive compatibility changes and the DSM 7.3 partial reversal, M.2 NVMe slots on Synology NAS devices still require drives from Synology's official compatibility list for storage pool and cache creation. Third-party NVMe drives are restricted even on current Plus series models, a meaningful limitation when configuring fast AI storage.
Thermal Performance: The Hidden Bottleneck
AI inference is a sustained, high-CPU-utilisation workload. Unlike file transfers that spike and resolve quickly, an LLM generating a long response or a batch image processing job can pin CPU utilisation at 80-100% for minutes at a time. Compact NAS enclosures designed for intermittent file-serving loads may throttle their CPU under this kind of sustained demand, reducing clock speeds to protect thermals and, as a result, reducing AI inference performance.
This is less of an issue on desktop NAS models with active cooling and reasonable airflow, but worth checking for rackmount and compact form-factor devices. If you're planning sustained AI workloads, look for NAS models with active CPU cooling, good internal airflow, and thermal specifications that confirm the CPU can sustain boost clocks under continuous load.
QNAP's TVS-H series desktop models use standard Intel desktop cooling solutions with fan-cooled enclosures specifically because these workloads demand it. The difference in sustained inference performance between a well-cooled desktop NAS and a thermally throttled compact model can exceed 50%, a significant real-world impact.
Network Throughput: Don't Forget the Connection
If you're running AI inference on the NAS and serving results to multiple users or workstations on your local network, network throughput becomes relevant. Standard 1GbE limits file transfer to approximately 100-110MB/s. For transferring multi-gigabyte AI model files or streaming large datasets to the NAS for processing, this can create a bottleneck even if the NAS hardware itself is capable.
The practical solution for AI-heavy NAS deployments is 2.5GbE or 10GbE connectivity. Many AI-capable NAS models include 2.5GbE as standard, with 10GbE available on higher-tier models. QNAP's QSW switch range makes upgrading your local network to 10GbE practical: a 5-port unmanaged 10GbE switch covers a small workstation, NAS, and backup target without requiring a $2,000 enterprise switch.
For remote access to an AI NAS (running inference queries from outside your home or office), Australia's NBN infrastructure creates a real ceiling. Typical NBN 100 upload speeds sit around 56Mbps, which is adequate for sending text prompts and receiving responses from a local LLM, but not for streaming large files to the NAS for processing in real time. CGNAT on some NBN connections also blocks direct inbound connections entirely, requiring a VPN or relay service to reach your NAS from outside your network.
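To put numbers on the link speeds, the sketch below estimates line-rate transfer times for a 4GB model file; real transfers land below line rate once protocol overhead is counted:

```python
# Nominal link rates in Mbps; the NBN figure is a typical upload speed, not a guarantee.
links_mbps = {"1GbE": 1000, "2.5GbE": 2500, "10GbE": 10000, "NBN 100 upload": 56}

model_gb = 4.0
for link, mbps in links_mbps.items():
    seconds = model_gb * 8000 / mbps  # GB -> megabits, divided by the link rate
    print(f"{link}: ~{seconds:,.0f}s to move a {model_gb:.0f}GB model at line rate")
```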
Current AI NAS Models Available in Australia
Based on models currently stocked at Australian retailers, here are the NAS devices that meet meaningful AI hardware thresholds. All prices sourced from Mwave, PLE Computers, and Scorptec.
AI-Capable NAS Models: Australian Market (March 2026)
| | QNAP TS-464 | QNAP TVS-H674-I3 | Asustor AS6704T | Asustor AS6804T | QNAP TS-473A |
|---|---|---|---|---|---|
| AU Price (from) | $989 (PLE/Scorptec) | Scorptec (POA) | $1,013 (Mwave) | $2,175 (Mwave) | $1,489 (PLE Computers) |
| CPU | Intel Celeron N5095 | Intel Core i3-12100 | Intel N305 (8-core) | Intel N305 (8-core) | AMD Ryzen V1500B |
| NPU / AI Accel | VNNI (software) | VNNI + iGPU | Intel NPU + iGPU | Intel NPU + iGPU | Limited (no VNNI) |
| Base RAM | 8GB (upgradeable) | 32GB (upgradeable) | 8GB (upgradeable) | 8GB (upgradeable) | 8GB (upgradeable) |
| Max RAM | 16GB | 64GB | 32GB | 32GB | 32GB |
| M.2 NVMe Slots | 2x M.2 | 2x M.2 | 2x M.2 | 2x M.2 | 2x M.2 |
| Network | 2x 2.5GbE | 2x 2.5GbE + 2x 10GbE | 2x 2.5GbE | 2x 2.5GbE + 2x 10GbE | 2x 2.5GbE + 1x 10GbE |
| PCIe Expansion | No | Yes (PCIe 4.0) | No | No | Yes (PCIe 3.0) |
Prices last verified: 28 March 2026. Always check retailer before purchasing.
The Asustor AS6704T (from $1,013 at Mwave) stands out as the most accessible NPU-equipped NAS currently available in Australia. The Intel N305 chip includes a dedicated NPU capable of hardware-accelerated AI inference, which Asustor's ADM software and compatible third-party applications can leverage for surveillance AI, photo recognition, and media analysis tasks.
The QNAP TVS-H674-I3 takes a different approach, using a full Intel Core i3-12100 processor with significant iGPU compute and Intel Quick Sync for AI-adjacent tasks like video transcoding. With PCIe expansion, it can also accept an NVIDIA GPU for proper CUDA-accelerated LLM inference, making it genuinely flexible for evolving AI workloads. QNAP's QuTS Hero operating system on this model supports ZFS with RAM-based ARC caching, meaning every gigabyte of RAM above the OS baseline contributes to read performance for AI model files.
The QNAP TS-464 (from $989) represents the practical entry point for AI on a budget: a capable Intel Celeron N5095 with VNNI support, RAM upgradeable to 16GB, dual M.2 NVMe slots, and 2.5GbE connectivity. It handles AI photo indexing, smart search, and moderate AI surveillance workloads without straining. It will not run 7B LLMs at usable speeds, but for AI-enhanced NAS features rather than full local AI inference, it delivers strong value.
Note that QNAP's production schedules have been impacted by global chip and RAM shortages through 2025-2026. Some high-end models carry extended lead times. Check actual stock levels before planning a purchase around a specific QNAP model; a unit ordered in early 2026 may not arrive for several months if it's currently out of stock at the distributor level.
Software Platform Considerations for AI
Hardware alone doesn't determine AI capability. The NAS operating system and available applications matter equally. Here's how the major platforms compare for AI workloads.
QNAP QTS / QuTS Hero: QNAP's AI NAS initiative (QAI) is the most developed AI software stack currently available on a consumer NAS platform. QTS includes AI-powered multimedia analysis, QNAP AI Core for local model inference, and compatibility with open-source tools via Container Station (Docker). QuTS Hero's ZFS architecture provides additional value for AI workloads: inline compression reduces model storage footprint, and ARC caching accelerates repeated model reads from RAM. QuTS Hero requires a minimum of 8GB RAM to install and 16GB+ to take meaningful advantage of ZFS features.
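As a concrete illustration of the Container Station route, the sketch below uses the Docker SDK for Python to launch the public ollama/ollama image. This is a sketch under assumptions, not QNAP's documented procedure: it assumes SSH access to the NAS, the docker Python package installed, and a hypothetical /share/Container/ollama path for persistent model storage:

```python
import docker  # pip install docker

client = docker.from_env()

# Launch Ollama, exposing its HTTP API and persisting models to a NAS share.
# "/share/Container/ollama" is a hypothetical QNAP path; adjust to your volume.
container = client.containers.run(
    "ollama/ollama",
    name="ollama",
    detach=True,
    ports={"11434/tcp": 11434},
    volumes={"/share/Container/ollama": {"bind": "/root/.ollama", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},
)
print(f"Ollama container started: {container.short_id}")
```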
Synology DSM: DSM includes AI-powered features in Synology Photos (face recognition, subject detection) and basic indexing. However, Synology's AI development focus is narrower than QNAP's. DSM is designed to make AI features accessible rather than to support advanced AI infrastructure. For running local LLMs or custom AI pipelines, DSM's Docker support works, but Synology hasn't built the dedicated AI tooling that QNAP has. DSM suits buyers who want AI-enhanced NAS features without the complexity of managing an AI inference platform.
Asustor ADM: ADM supports AI surveillance features natively on NPU-equipped models, with improving third-party container support. The AS6704T's hardware is ahead of its software maturity; ADM is still catching up to the AI capabilities the hardware enables.
Minimum Viable Specs: A Practical Summary
| Spec | Recommendation |
|---|---|
| RAM (minimum) | 8GB: absolute floor for AI-enhanced NAS features; 4GB systems will not run AI applications effectively |
| RAM (recommended) | 16GB for AI photo indexing, smart search, and small LLMs; 32GB for 7B model inference or multi-model workloads |
| RAM (upgradeable) | Essential. Avoid NAS models with soldered non-upgradeable RAM for AI use cases |
| CPU (minimum) | x86 architecture with VNNI support (Intel Celeron N5095 / J6412 or newer). Older ARM and Realtek chips are not suitable |
| CPU (recommended) | Intel Core i3/i5 (12th gen+) or equivalent for LLM inference; AMD Ryzen V-series for ZFS-heavy QuTS Hero deployments |
| NPU | Desirable for always-on inference (surveillance, continuous indexing); not required for intermittent AI feature use |
| M.2 NVMe | Minimum 1 slot; 2 slots preferred. Use for AI model storage and application working directories, not just SSD cache |
| NVMe capacity | 256GB minimum; 512GB-1TB recommended if storing multiple AI models locally |
| Network | 2.5GbE minimum for AI NAS; 10GbE for multi-user inference serving or large dataset ingestion |
| PCIe expansion | Required only for GPU-accelerated LLM inference; relevant to advanced deployments |
| OS platform | QNAP QTS or QuTS Hero for maximum AI software depth; Synology DSM for integrated AI features with simpler management |
What to Avoid
Don't buy a budget NAS expecting to run AI workloads on it. Specific configurations that will disappoint for AI use:
- Any NAS with 4GB or less of soldered RAM. The DS223J ($319 at PLE/Scorptec), AS1104T ($475 at Mwave), and similar budget models are file servers, not AI platforms
- Realtek RTD-based NAS devices. The TS-133 ($259 at PLE/Scorptec) and DS124 ($269 at Mwave/PLE/Scorptec) use ARM chips designed for basic NAS duties; AI applications will install but perform so poorly as to be unusable
- NAS models without M.2 NVMe slots. Loading multi-gigabyte AI models from spinning HDDs on every use is a frustrating experience; NVMe is effectively mandatory for AI model storage
- Non-upgradeable RAM platforms. If you can't add RAM as AI models grow, you're locked into the hardware ceiling from day one
If your budget sits below the $1,000 mark and AI workloads are a genuine priority, consider waiting and saving rather than buying an under-specced NAS that will not meet your AI requirements.
Buying in Australia: What to Know
AI-capable NAS models sit in the mid-to-high price range, typically $989 and above for the models covered in this article. At these price points, where you buy matters.
Australian retailers including Scorptec, PLE Computers, and Mwave are the recommended purchase points for AI NAS hardware. These specialist retailers have genuine pre-sales capability and can advise on stock availability, RAM upgrade options, and NVMe compatibility for specific models. For AI NAS purchases specifically, pre-sales advice on compatible RAM and NVMe upgrades is valuable, and it's advice you won't get from Amazon or a generic marketplace seller.
Business-critical AI NAS deployments should request a formal quote rather than purchasing at listed retail price. Resellers can request pricing support from distributors for quoted deals, and the additional service capability justifies the conversation. Given that some QNAP models are currently 3-6 months behind on production due to global chip and RAM shortages, confirming actual stock availability before committing to a purchase is essential.
Under Australian Consumer Law, your warranty claim sits with the place of purchase, not the manufacturer. Synology, QNAP, and Asustor have no service centres in Australia. The standard warranty process runs through retailer → distributor → vendor in Taiwan, with typical resolution times of 2-3 weeks. For an AI workload NAS that may be serving business functions, this downtime window should factor into your planning. Ask your retailer about their warranty process and whether an advanced replacement arrangement is available before you need it. For official information on your consumer rights, visit accc.gov.au. This article provides general guidance only and does not constitute legal advice.
Australian Consumer Law note: When purchasing a NAS from an Australian retailer, your consumer guarantees apply to the place of purchase regardless of manufacturer warranty terms. A fault will often be treated as a minor failure under the ACL, in which case the retailer may offer repair or replacement rather than an immediate refund. Plan for a 2-3 week resolution window and consider asking about advanced replacement arrangements at the time of purchase.
Related reading: our NAS buyer's guide, our NAS vs cloud storage comparison, and our NAS explainer.
Use our free AI Hardware Requirements Calculator to size the hardware you need to run AI locally.
How much RAM do I need for AI on a NAS?
8GB is the practical minimum for AI-enhanced NAS features like photo recognition and smart search. For running local language models (LLMs), 16GB is the entry point for small models (up to 7B parameters at 4-bit quantisation). For larger models or running multiple AI applications simultaneously, 32GB is recommended. Avoid NAS models with 4GB or less of non-upgradeable RAM; they cannot meaningfully run AI workloads.
Does my NAS need an NPU for AI tasks?
Not necessarily, but an NPU is genuinely useful for specific scenarios. A modern x86 CPU with Intel VNNI support handles AI photo indexing, smart search, and moderate AI surveillance workloads without dedicated NPU hardware. An NPU becomes valuable when you're running AI inference continuously (always-on video analytics, real-time indexing), when you need inference to run without competing with core NAS functions, or when you're particularly focused on power efficiency in a 24/7 device. Currently, the Asustor AS6704T (from $1,013 at Mwave) is the most accessible NPU-equipped NAS available in Australia.
Can I run a local LLM (like Llama or Mistral) on a NAS?
Yes, on appropriately specced hardware, but with realistic expectations. A QNAP TVS-H674-I3 or similar high-end NAS with 32GB RAM can run quantised 7B models via Docker containers using tools like Ollama or LM Studio. Token generation rates will be significantly slower than on a dedicated GPU: expect 2-8 tokens per second on CPU inference, compared to 30-100+ tokens per second on a consumer GPU. For personal productivity use (summarising documents, drafting responses) this is usable; for real-time multi-user inference, it's inadequate. Models above 13B parameters are generally outside the practical scope of current NAS hardware without a PCIe GPU expansion card.
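If you take the Ollama-in-Docker route, querying the model from any machine on your LAN is a single HTTP call. A minimal sketch, assuming Ollama is listening on the NAS at port 11434 and that a small model (llama3.2:3b here, purely as an example) has already been pulled:

```python
import requests

NAS_HOST = "192.168.1.50"  # hypothetical NAS address on your LAN

resp = requests.post(
    f"http://{NAS_HOST}:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # any model you've pulled onto the NAS
        "prompt": "Summarise why RAM matters most for AI on a NAS.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,  # CPU inference is slow; allow minutes, not seconds
)
resp.raise_for_status()
print(resp.json()["response"])
```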
Does storage type affect AI performance on a NAS?
Yes: significantly for model loading time, less so for inference speed. AI models must be read from storage into RAM before inference begins. A 4-5GB model file reads from an NVMe SSD in 2-4 seconds versus 15-30 seconds from a spinning HDD. If your NAS loads models on demand rather than keeping them resident in RAM, the difference is noticeable in daily use. The practical recommendation is to install an M.2 NVMe drive specifically for AI model and application storage, keeping the HDD pool for bulk data. Most AI-capable NAS models include at least two M.2 slots for this purpose.
Is QNAP or Synology better for AI NAS workloads?
QNAP is the stronger platform for AI infrastructure. QNAP's QAI initiative, Container Station (Docker support), AI Core application, and the deeper capabilities of QuTS Hero (ZFS, ARC caching, deduplication) provide more tools for building an AI workload on a NAS. Synology DSM includes AI-powered features in Synology Photos and basic smart indexing, but doesn't match QNAP's depth for advanced AI applications. The practical summary: Synology suits buyers who want AI-enhanced features in a simpler, more approachable package; QNAP suits buyers who want to build a genuine private AI inference platform and are comfortable with a steeper learning curve. Note that Synology's M.2 NVMe restrictions (requiring drives from Synology's compatibility list) remain in place following the DSM 7.3 update, a practical constraint when configuring AI model storage.
Will my AI NAS work with remote access over the internet?
You can access an AI NAS remotely, but Australia's NBN infrastructure creates real limits. Typical NBN 100 upload speeds sit around 56Mbps: adequate for sending text prompts to a local LLM and receiving responses, but not for streaming large datasets to the NAS for real-time processing. A more significant issue is CGNAT: some NBN connections use Carrier-Grade NAT, which blocks direct inbound connections to your NAS entirely. If you're on CGNAT, you'll need a VPN service (like Tailscale) or a relay service to reach your NAS from outside your network. Check with your ISP whether your connection uses CGNAT before planning remote AI access.
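If you want to check for CGNAT yourself, compare the WAN IP shown on your router's status page against the range RFC 6598 reserves for carrier-grade NAT. A small sketch; the sample addresses are purely illustrative:

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 for Carrier-Grade NAT. A WAN IP in this
# range means direct inbound connections can't reach your NAS.
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def behind_cgnat(wan_ip: str) -> bool:
    return ipaddress.ip_address(wan_ip) in CGNAT_RANGE

print(behind_cgnat("100.72.14.9"))   # True  -> behind CGNAT
print(behind_cgnat("203.0.113.25"))  # False -> routable public address
```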
What happens if my AI NAS fails under warranty in Australia?
Your warranty claim goes to the retailer you bought from, not to Synology, QNAP, or Asustor directly; none of these vendors have service centres in Australia. The resolution process typically runs retailer → distributor → vendor in Taiwan, with standard resolution times of 2-3 weeks minimum. Advanced replacements are not officially supported by most vendors, though some specialist retailers will arrange an informal advance replacement purchase with a refund on return. For an AI NAS running business workloads, this 2-3 week window matters: ask your retailer about their warranty process before purchasing, not after a failure. This is general guidance only; visit accc.gov.au for official information on your consumer rights under Australian Consumer Law.
Ready to find an AI NAS that fits your workload? Our buying guides break down the best options at each price point for the Australian market.
Browse NAS Buying Guides →