NBN Upload Limits and Cloud AI — When Local Inference Wins in Australia

Australia's NBN upload speeds are slow enough to make cloud AI APIs noticeably laggy for image, audio, and large document tasks. Here is when local inference on a NAS or mini-PC is faster and more practical.

Australia's NBN upload speeds are a genuine constraint for cloud AI use cases involving large files. Sending a 10MB photo to a cloud AI API over a typical NBN 100 connection (20 Mbps upload) takes around 4 seconds of upload time before the API even starts processing. For voice transcription, document analysis, or batch image processing, that upload overhead accumulates. Local inference on a NAS or mini-PC eliminates it entirely.

In short: For text chat, cloud AI APIs are fine on any NBN connection. The prompt is small and the round-trip latency is negligible. For image analysis, audio transcription, or processing large documents (PDFs, video), the upload overhead on NBN makes local inference faster and more practical. If you are regularly sending files larger than 1-2MB to cloud AI, local inference is worth considering.

What NBN Upload Speeds Actually Are

NBN upload speeds are significantly slower than download speeds for most Australian households. This is a deliberate design characteristic of NBN's tiered plans: the upload tier is far lower than the download tier on every plan, including the top-end NBN 1000 plans.

NBN Typical Upload Speeds by Plan

| Plan | Typical Download | Typical Upload | Max Upload (spec) |
|---|---|---|---|
| NBN 25 | 25 Mbps | 3-5 Mbps | 5 Mbps |
| NBN 50 | 50 Mbps | 5-10 Mbps | 20 Mbps |
| NBN 100 | 100 Mbps | 15-25 Mbps | 20 Mbps |
| NBN 250 | 250 Mbps | 20-25 Mbps | 25 Mbps |
| NBN 1000 | 750-1000 Mbps | 40-50 Mbps | 50 Mbps |
| FTTP (boosted plans) | 250+ Mbps | 50-100 Mbps | 100 Mbps |

Most Australian households are on NBN 100 plans, which provide a typical upload of 15-25 Mbps, far below the download speed. NBN 250 plans barely improve the upload at all. Only NBN 1000 plans with an FTTP connection type offer meaningfully higher upload bandwidth (50-100 Mbps). FTTP coverage is expanding, but the majority of NBN connections in 2026 are still FTTC, FTTN, or HFC with the standard upload caps.

How Upload Speed Affects Cloud AI

Cloud AI APIs receive your input, process it in a data centre, and return the result. The upload time (the time your data takes to reach the API) is a direct function of your upload speed and the file size. For a text prompt, this is negligible: a 1,000-word prompt is approximately 5-6KB, which uploads in a few milliseconds on any NBN connection.

The picture changes for other data types:

Cloud AI Upload Times: NBN 100 (20 Mbps upload) vs NBN 50 (5 Mbps upload)

| Input Type | Typical File Size | Upload Time (20 Mbps) | Upload Time (5 Mbps) |
|---|---|---|---|
| Text prompt (1,000 words) | ~6 KB | <0.01s | <0.01s |
| Screenshot (PNG) | ~500 KB | ~0.2s | ~0.8s |
| Photo (JPEG, 8MP) | ~3-5 MB | ~1.5-2.5s | ~5-8s |
| RAW photo (24MP) | ~25-40 MB | ~10-16s | ~40-64s |
| Voice recording (MP3, 1 min) | ~1 MB | ~0.4s | ~1.6s |
| Voice recording (MP3, 5 min) | ~5 MB | ~2s | ~8s |
| Short video (30s, 1080p) | ~50-80 MB | ~20-32s | ~80-128s |
| PDF document (20 pages) | ~2-5 MB | ~1-2s | ~4-8s |

These upload times are before the API starts processing. Add API processing time (1-10 seconds depending on the model and input complexity) and, for photo analysis and audio transcription, upload overhead becomes a significant fraction of total latency. This bottleneck barely exists in a US or European home where 100 Mbps symmetric upload is common.
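The arithmetic behind the table is simple to reproduce. A minimal sketch (speeds and file sizes are illustrative, matching the table's assumptions; real-world throughput will be somewhat lower):

```python
def upload_seconds(size_mb: float, upload_mbps: float) -> float:
    """Time to upload a file: megabytes converted to megabits,
    divided by link speed in Mbps. Ignores TCP/TLS overhead, so
    treat results as a lower bound."""
    return size_mb * 8 / upload_mbps

# Reproduce a few rows of the table above.
for label, size_mb in [("Screenshot (0.5 MB)", 0.5),
                       ("JPEG photo (4 MB)", 4),
                       ("30s 1080p video (65 MB)", 65)]:
    nbn100 = upload_seconds(size_mb, 20)  # NBN 100: ~20 Mbps upload
    nbn50 = upload_seconds(size_mb, 5)    # NBN 50 worst case: ~5 Mbps
    print(f"{label}: {nbn100:.1f}s at 20 Mbps, {nbn50:.1f}s at 5 Mbps")
```

Plugging in your own plan's measured upload speed (speedtest results, not the tier name) gives a realistic floor for any cloud AI workflow.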

Where Local Inference Wins on NBN

Local inference processes data on your hardware. There is no upload step. The model reads from local storage directly. This makes it faster than cloud AI for any workflow involving large inputs:

  • Photo management and tagging: Immich running on a NAS with AI photo recognition processes photos from local storage at full NAS throughput. Sending each photo to a cloud API would require uploading the file first. For a library of 50,000 photos, that upload step alone would take hours on a typical NBN connection.
  • Voice transcription: Running Whisper locally on a NAS or mini-PC processes audio files without upload. A 1-hour recording (approximately 60MB in MP3) takes 24 seconds to upload at 20 Mbps, versus instant access locally.
  • Document processing: Feeding a 50-page PDF to a local LLM is instantaneous from local storage. Uploading the same PDF to a cloud API takes 2-5 seconds on NBN 100, plus API processing time.
  • Video analysis: Any cloud AI workflow involving video files is severely limited by NBN upload speeds. Even a 2-minute clip at 1080p is 200-320MB, an 80-130 second upload at 20 Mbps before processing begins. Local inference removes this bottleneck entirely.
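To see how the photo-library example scales, a back-of-envelope estimate (the 4 MB average photo size is an assumption; adjust for your own library):

```python
def library_upload_hours(photo_count: int, avg_mb: float,
                         upload_mbps: float) -> float:
    """Total hours to push an entire photo library through a cloud API:
    count x size in megabits, divided by upload speed, converted to hours."""
    total_megabits = photo_count * avg_mb * 8
    return total_megabits / upload_mbps / 3600

# 50,000 photos at ~4 MB each over an NBN 100 uplink (~20 Mbps)
hours = library_upload_hours(50_000, 4.0, 20)
print(f"~{hours:.0f} hours of upload before any processing starts")
```

A local model reading the same library straight off the NAS array skips that entire step.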

Where Cloud AI Still Wins on NBN

Local inference is not better for all AI tasks on NBN. Cloud AI retains advantages in specific scenarios:

  • Text-only tasks: Chat, code generation, writing assistance, and summarisation of short documents. The upload overhead is negligible for text, and a fast cloud model (GPT-4o, Claude, Gemini) produces text faster than most CPU-based local models: 8-15 tok/s on a local CPU versus 100+ tok/s on dedicated GPU infrastructure.
  • Irregular or infrequent use: If you use AI once a week, the infrastructure cost of running a local inference server does not pay off. Cloud APIs scale to zero cost at zero usage.
  • Frontier model capability: The best local models (Llama 3.3 70B, Mistral Large) are behind the frontier cloud models (GPT-4o, Claude Opus, Gemini 1.5 Pro) in capability. For complex tasks requiring the best available model quality, cloud wins regardless of upload speed.
  • Mobile use on cellular: If you are on 5G or 4G mobile with 50-100 Mbps upload, the NBN upload constraint is irrelevant. Cloud AI is the simpler path.
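For text-only tasks the relevant comparison is generation speed, not upload. Using the throughput figures above (8-15 tok/s on a local CPU, 100+ tok/s for a cloud API), the gap for a typical response looks like this; the 500-token response length is an illustrative assumption:

```python
def generation_seconds(tokens: int, tok_per_s: float) -> float:
    """Wall-clock time to generate a response at a given throughput."""
    return tokens / tok_per_s

response_tokens = 500  # a few paragraphs of output
local_cpu = generation_seconds(response_tokens, 10)   # mid-range local CPU
cloud_api = generation_seconds(response_tokens, 100)  # typical cloud GPU API
print(f"local CPU: {local_cpu:.0f}s, cloud API: {cloud_api:.0f}s")
```

For text, the cloud's generation-speed advantage dwarfs any upload saving, which is why the upload argument only applies to file-based workloads.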

CGNAT and Privacy Compound the NBN Problem

Many Australian NBN users are behind Carrier-Grade NAT (CGNAT), particularly on HFC and fixed wireless connections. CGNAT means your public IP address is shared with dozens of other customers, which blocks incoming connections and complicates remote access to local AI systems. If you want to access your local AI from outside your home network, CGNAT forces you to use a VPN service, Tailscale, or a similar tunnel, adding setup complexity.

At the same time, privacy is a genuine reason many users prefer local AI: your documents, photos, and voice recordings never leave your network. For sensitive business documents, personal photos, or medical records, local inference keeps data entirely under your control. No upload to third-party servers, no potential for data retention by the API provider.

A Practical Decision Framework

Use this as a starting point for deciding whether to invest in local AI infrastructure:

Cloud AI vs Local Inference: Decision Guide for Australian NBN Users

| Use Case | Cloud AI (NBN) | Local Inference |
|---|---|---|
| Text chat and writing | Cloud wins: faster models, no setup | Overkill for text-only |
| Photo library AI tagging | Upload overhead impractical at scale | Local wins: direct storage access |
| Voice transcription (occasional) | Fine: 1-2s upload acceptable | Either works |
| Voice transcription (batch) | Upload bottleneck for long files | Local wins: no upload |
| Document analysis (occasional) | Fine: 1-5s upload acceptable | Either works |
| Document analysis (batch/sensitive) | Upload time + privacy concern | Local wins |
| Video frame analysis | Upload takes minutes per clip | Local wins decisively |
| Code generation | Cloud wins: better models | Acceptable quality at 7-15 tok/s |

For most Australian households, a hybrid approach makes the most practical sense: use cloud AI for text-heavy tasks where frontier model quality matters, and local AI for file-based tasks where the NBN upload overhead and privacy considerations make cloud impractical. A NAS already handling your photos and documents is a natural host for local AI processing of those files.
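The hybrid framework can be collapsed into a small helper. This is a sketch of the decision logic above, not an authoritative rule; the size thresholds are illustrative assumptions:

```python
def recommend(input_mb: float, batch: bool = False,
              sensitive: bool = False) -> str:
    """Rough cloud-vs-local recommendation for NBN users, following the
    decision table: text-sized inputs favour cloud; large, batched, or
    sensitive file workloads favour local."""
    if sensitive:
        return "local"   # privacy: data never leaves your network
    if input_mb < 1:
        return "cloud"   # text/screenshot scale: upload is negligible
    if batch or input_mb > 20:
        return "local"   # upload overhead dominates at scale
    return "either"      # occasional mid-size files: both acceptable

print(recommend(0.006))           # 1,000-word prompt -> cloud
print(recommend(4, batch=True))   # photo library tagging -> local
print(recommend(3))               # one-off 20-page PDF -> either
```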

The full cost comparison including hardware amortisation is in the NAS vs cloud AI cost comparison. For hardware recommendations, see the best NAS for AI in Australia. The power cost of running local AI 24/7 by state is covered in the AU power cost guide.

Australian Buyers: What You Need to Know

Australian NBN characteristics that are not widely understood outside Australia:

  • Most plans have asymmetric speeds by design. NBN is a wholesale network. RSPs (Retail Service Providers) resell capacity at tiers. The upload limit is built into the network layer, not just the plan.
  • FTTP is being rolled out to more premises. If you are getting a fibre-to-the-premises upgrade in 2026, much higher upload speeds (up to 100 Mbps) become available on NBN 1000 plans. This changes the cloud vs local calculation significantly for video and large file workloads.
  • Starlink users have different characteristics. Starlink latency (25-60ms) is higher than fibre NBN (8-20ms) but upload speeds can reach 20-50 Mbps. The latency overhead for cloud AI API round-trips is noticeable on Starlink. Local inference eliminates this variable entirely.
  • Business NBN (CGNAT-free) is available. Some business NBN plans provide a static IP and no CGNAT for $10-30/month extra. This makes remote access to local AI systems straightforward via standard port forwarding.

Does NBN speed affect text-based cloud AI chat?

Not meaningfully. A text prompt of 1,000 words is approximately 5-6KB, which uploads in a few milliseconds on any NBN connection. The bottleneck for text-based AI chat is API processing time and round-trip latency, not upload bandwidth. Cloud AI for text is fast on any NBN plan.

How long does it take to upload a photo to a cloud AI API on NBN?

A typical JPEG photo (8 megapixels, 3-5MB) takes approximately 1.5-2.5 seconds to upload on NBN 100 at its typical 20 Mbps upload speed. On NBN 50 (typically 5-10 Mbps upload), the same photo takes 4-8 seconds. For occasional one-off photo analysis, this is acceptable. For batch processing a large photo library, the cumulative upload time is prohibitive. Local AI is the practical solution.

Is local AI worth setting up just because of NBN upload speeds?

Not for text-only use cases. It is worth considering if you have regular workflows involving photos, audio files, or documents where upload time and privacy matter. If you already own a capable NAS (QNAP TS-473A or similar), the marginal cost of adding Ollama is minimal. If you are buying hardware specifically for local AI, the NBN upload constraint is one factor among several. See the full cost comparison first.

What NBN plan do I need for cloud AI to be practical?

NBN 50 or higher is sufficient for text-based cloud AI. For photo and audio tasks, NBN 100 with its typical 20 Mbps upload is acceptable for occasional use. For batch file processing, no NBN plan is fast enough at scale: the upload ceiling (20-50 Mbps even on NBN 1000) makes large-volume file uploads slow regardless of plan tier. That is the fundamental case for local AI on file-heavy workloads.

Does CGNAT affect cloud AI API access in Australia?

CGNAT does not affect outbound connections to cloud AI APIs. You can still reach ChatGPT, Claude, or any cloud API from behind CGNAT. It blocks incoming connections, which only matters if you want to access your local AI setup remotely. For remote access to a home-hosted AI server, use Tailscale or a similar VPN mesh that tunnels through NAT. CGNAT is common on HFC and fixed wireless NBN connections in Australia.
