NetCalc Pro · Docs

02 / Examples

Examples

Four full walkthroughs — home lab, one AWS account, a mid-size hybrid (4 AWS accounts + 2 data centers), and a 15-account enterprise plan. Every example has a Load button that drops the plan straight into the tool.

Every example below is ready to click into NetCalc Pro. The Load in tool ↗ button loads a single workspace; the Load all in tool ↗ button on multi-workspace examples replaces every workspace in the app with the full fleet at once.

AWS sizing: production VPCs in these examples use /14 envelopes (262,144 addresses) and staging VPCs use /15. The awsvpc CNI assigns one IP per pod, so EKS and ECS on Fargate exhaust a /16 quickly. Smaller support networks (shared services, dev sandboxes, on-prem sites) stay at /16. AWS caps a single VPC's primary CIDR at /16, so larger account-level envelopes like these are typically implemented as one primary VPC CIDR plus one or more secondary CIDR blocks (commonly from 100.64.0.0/10 for EKS pod ranges).
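
The arithmetic behind that sizing rule is easy to check with Python's standard ipaddress module. The sketch below (illustrative, not NetCalc Pro's own code) splits an account envelope into the /16 chunks AWS can actually accept: one primary VPC CIDR plus the secondary blocks needed to cover the rest.

```python
import ipaddress

# AWS does not allow a VPC primary CIDR larger than /16 (prefix < 16).
AWS_PRIMARY_MAX = 16

def vpc_blocks(envelope: str):
    """Split an account envelope into /16 chunks: one primary VPC CIDR
    plus the secondary CIDR blocks needed to cover the remainder."""
    net = ipaddress.ip_network(envelope)
    if net.prefixlen >= AWS_PRIMARY_MAX:
        return [net]                          # fits in a single primary CIDR
    return list(net.subnets(new_prefix=AWS_PRIMARY_MAX))

blocks = vpc_blocks("10.4.0.0/14")            # the /14 prod envelope
print(len(blocks), blocks[0])                 # prints: 4 10.4.0.0/16
```

A /14 therefore lands as one primary /16 plus three secondary /16 blocks on the same VPC.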

Example 1 — Home lab / small office


A /20 (4 k addresses) carved into six VLAN-separated subnets. Covers a serious home lab or a small office with user, IoT, server, guest, management, and DMZ segments. Every tier has room for growth without bumping the parent mask.

Network: 192.168.0.0/20 · Provider: Standard
VLSM input: 500, 500, 250, 250, 100, 50
Load this plan in the tool ↗
CIDR · Label · VLAN · Notes
192.168.0.0/23 lan-users 20 Laptops, desktops, personal devices.
192.168.2.0/23 lan-iot 40 Smart plugs, cameras, sensors, TVs.
192.168.4.0/24 lan-servers 30 NAS, homelab VMs, Pi-Hole, k3s nodes.
192.168.5.0/24 lan-guest 50 Guest Wi-Fi, isolated from LAN.
192.168.6.0/25 lan-mgmt 10 Switches, APs, IPMI, firewall mgmt.
192.168.6.128/26 lan-dmz 60 Externally reachable services.
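
The table above is exactly what a largest-first VLSM pass produces. As a sketch of the idea (stdlib ipaddress, not the tool's implementation), sorting host counts descending keeps every subnet boundary naturally aligned:

```python
import ipaddress
import math

def vlsm(parent: str, host_counts):
    """Carve `parent` into subnets sized per host count, largest first."""
    base = ipaddress.ip_network(parent)
    cursor = int(base.network_address)
    plan = []
    for hosts in sorted(host_counts, reverse=True):
        # +2 covers each subnet's network and broadcast addresses
        prefix = 32 - math.ceil(math.log2(hosts + 2))
        subnet = ipaddress.ip_network((cursor, prefix))
        assert subnet.subnet_of(base), "host counts overflow the parent"
        plan.append(subnet)
        cursor += subnet.num_addresses
    return plan

for net in vlsm("192.168.0.0/20", [500, 500, 250, 250, 100, 50]):
    print(net)   # 192.168.0.0/23 ... 192.168.6.128/26, matching the table
```

Allocating largest-first is what lets each block start on a valid boundary without padding; feed the counts in ascending order and the strict `ip_network` constructor would reject the misaligned starts.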

Example 2 — Startup: one AWS account, one VPC


Production VPC for a web app on EKS (or ECS) with RDS backing data, spread across three availability zones. The app tier carries the EKS pod IPs, so it's sized at a /16 per AZ — 65,536 addresses each. The public tier sits at /20 and the data tier at /22. Real-world scale for a growing startup.

Network: 10.0.0.0/14 · Provider: AWS
VLSM input: 60000, 60000, 60000, 4000, 4000, 4000, 1000, 1000, 1000
Load this plan in the tool ↗
CIDR · Label · AZ · Notes
10.0.0.0/16 app-a us-east-1a EKS / ECS workers — one IP per pod.
10.1.0.0/16 app-b us-east-1b EKS / ECS workers — one IP per pod.
10.2.0.0/16 app-c us-east-1c EKS / ECS workers — one IP per pod.
10.3.0.0/20 public-a us-east-1a ALB, NAT gateway, bastion.
10.3.16.0/20 public-b us-east-1b ALB, NAT gateway.
10.3.32.0/20 public-c us-east-1c ALB, NAT gateway.
10.3.48.0/22 data-a us-east-1a RDS multi-AZ primary, ElastiCache, Lambda ENIs.
10.3.52.0/22 data-b us-east-1b RDS multi-AZ standby, ElastiCache.
10.3.56.0/22 data-c us-east-1c RDS read replica.
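
A quick way to double-check a plan like this outside the tool is Python's ipaddress module. The sketch below confirms the nine tiers are disjoint, fit inside the /14, and shows how much headroom remains:

```python
import ipaddress

parent = ipaddress.ip_network("10.0.0.0/14")
subnets = [ipaddress.ip_network(c) for c in [
    "10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/16",      # app tiers
    "10.3.0.0/20", "10.3.16.0/20", "10.3.32.0/20",    # public tiers
    "10.3.48.0/22", "10.3.52.0/22", "10.3.56.0/22",   # data tiers
]]

# Every tier must sit inside the VPC envelope, and no two may overlap.
assert all(s.subnet_of(parent) for s in subnets)
assert not any(a.overlaps(b) for i, a in enumerate(subnets)
               for b in subnets[i + 1:])

used = sum(s.num_addresses for s in subnets)
print(f"{used} of {parent.num_addresses} addresses allocated")
# prints: 211968 of 262144 addresses allocated
```

Roughly 50 k addresses stay free at the top of 10.3.0.0/16, which is the growth room the walkthrough mentions.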

What to do next

  • Export Infrastructure as Code → AWS Terraform for a ready-to-apply .tf file.
  • Use the inline-editable table to tweak labels, VLANs, and AZs as the plan evolves — Tab rotates through the cells.
  • Watch the Radar drawer on the right — any overlap, duplicate VLAN, or unlabelled tier shows up the moment it appears.

Example 3 — Mid-size company: 4 AWS accounts + 2 on-prem DCs


Four AWS accounts (shared services, prod, staging, dev) connected through a Transit Gateway, plus two on-premises data centers (HQ and a DR site) reaching AWS over Direct Connect or VPN. Every network gets a non-overlapping block out of the 10.0.0.0/12 envelope so prod-to-stage peering and VPN tunnels never collide. Production AWS gets a /14 envelope for EKS + RDS + ECS + Lambda ENI headroom.

Load all 6 workspaces in tool ↗ Replaces every workspace in the app with the full fleet.

Top-level CIDR allocation

Block · Workspace · Kind · Purpose
10.0.0.0/16 aws-shared AWS Transit Gateway, Route 53 Resolver, Directory Service.
10.1.0.0/16 (reserved) Held for aws-shared growth.
10.2.0.0/15 aws-stage AWS Pre-prod EKS workloads, mirrors prod tier shape.
10.4.0.0/14 aws-prod AWS Production — EKS, RDS, ECS, ALB, Lambda ENIs.
10.8.0.0/16 aws-dev AWS Developer sandboxes, 1-AZ, low scale.
10.9.0.0/16 onprem-hq On-prem HQ data center, primary site.
10.10.0.0/16 onprem-dr On-prem Disaster recovery site.
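
The overlap guarantee this table relies on is cheap to verify. The sketch below mimics what a peering validator has to do, namely pairwise overlap checks across workspace envelopes (stdlib ipaddress, not the tool's actual code):

```python
import ipaddress
from itertools import combinations

workspaces = {
    "aws-shared": "10.0.0.0/16",
    "aws-stage":  "10.2.0.0/15",
    "aws-prod":   "10.4.0.0/14",
    "aws-dev":    "10.8.0.0/16",
    "onprem-hq":  "10.9.0.0/16",
    "onprem-dr":  "10.10.0.0/16",
}

def conflicts(blocks):
    """Return every pair of workspaces whose parent CIDRs overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in blocks.items()}
    return [(a, b) for a, b in combinations(nets, 2)
            if nets[a].overlaps(nets[b])]

print(conflicts(workspaces))   # prints: [] — all six envelopes are disjoint
```

Change any envelope to intrude on another (say, widen aws-dev to 10.8.0.0/14) and the offending pair shows up immediately.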

Per-workspace plans

aws-shared · Transit Gateway, Route 53 Resolver, Directory Service.

10.0.0.0/16 · AWS

Load just this one ↗
CIDR · Label · AZ · Notes
10.0.0.0/21 tgw-attach-a us-east-1a Transit Gateway attachment fleet.
10.0.8.0/21 tgw-attach-b us-east-1b Transit Gateway attachment fleet.
10.0.16.0/23 dns-a us-east-1a Route 53 Resolver inbound.
10.0.18.0/23 dns-b us-east-1b Route 53 Resolver outbound.
10.0.20.0/23 directory-a us-east-1a Managed Microsoft AD.
10.0.22.0/23 directory-b us-east-1b Managed Microsoft AD replica.

aws-prod · Production — EKS, RDS, ECS, ALB, Lambda ENIs.

10.4.0.0/14 · AWS

Load just this one ↗
CIDR · Label · AZ · Notes
10.4.0.0/16 prod-app-a us-east-1a EKS workers — one IP per pod.
10.5.0.0/16 prod-app-b us-east-1b EKS workers — one IP per pod.
10.6.0.0/16 prod-app-c us-east-1c EKS workers — one IP per pod.
10.7.0.0/20 prod-public-a us-east-1a Public ALB, NAT gateway.
10.7.16.0/20 prod-public-b us-east-1b Public ALB, NAT gateway.
10.7.32.0/20 prod-public-c us-east-1c Public ALB, NAT gateway.
10.7.48.0/22 prod-data-a us-east-1a RDS primary + ElastiCache + Lambda ENIs.
10.7.52.0/22 prod-data-b us-east-1b RDS standby + ElastiCache.
10.7.56.0/22 prod-data-c us-east-1c RDS read replica.

aws-stage · Pre-prod EKS workloads, mirrors prod tier shape.

10.2.0.0/15 · AWS

Load just this one ↗
CIDR · Label · AZ · Notes
10.2.0.0/17 stage-app-a us-east-1a Staging EKS workers.
10.2.128.0/17 stage-app-b us-east-1b Staging EKS workers.
10.3.0.0/21 stage-public-a us-east-1a Public ALB, NAT.
10.3.8.0/21 stage-public-b us-east-1b Public ALB, NAT.
10.3.16.0/23 stage-data-a us-east-1a RDS primary.
10.3.18.0/23 stage-data-b us-east-1b RDS standby.

aws-dev · Developer sandboxes, 1-AZ, low scale.

10.8.0.0/16 · AWS

Load just this one ↗
CIDR · Label · AZ · Notes
10.8.0.0/19 dev-app-a us-east-1a Dev EKS cluster.
10.8.32.0/21 dev-app-b us-east-1b Dev EKS cluster (optional AZ).
10.8.40.0/23 dev-public us-east-1a ALB, NAT.
10.8.42.0/25 dev-data us-east-1a RDS single instance.

onprem-hq · HQ data center, primary site.

10.9.0.0/16 · Standard

Load just this one ↗
CIDR · Label · VLAN · Notes
10.9.0.0/20 hq-users 100 Office floors, laptops, desktops.
10.9.16.0/20 hq-servers 140 Local app servers and VMs.
10.9.32.0/21 hq-voip 110 VoIP phones fleet.
10.9.40.0/22 hq-printers 120 Network printers and MFPs.
10.9.44.0/23 hq-mgmt 130 Switches, APs, firewalls.
10.9.46.0/23 hq-backup 150 Backup appliances, tape library.

onprem-dr · Disaster recovery site.

10.10.0.0/16 · Standard

Load just this one ↗
CIDR · Label · VLAN · Notes
10.10.0.0/21 dr-users 200 DR office staff.
10.10.8.0/21 dr-servers 240 DR server replicas.
10.10.16.0/22 dr-voip 210 VoIP failover.
10.10.20.0/23 dr-mgmt 230 DR switches, firewalls.
10.10.22.0/24 dr-backup 250 DR backup appliance.

After loading

  • Open Tools → VPC peering validator. It scans every pair of workspaces for CIDR overlap — all six blocks above are disjoint, so every pair comes back clean. Edit any workspace's parent to another's range to see the overlap report in action.
  • Export each AWS workspace as Terraform separately so each account has its own state file and CI pipeline.
  • Scan the Radar drawer on the prod workspace — it surfaces unlabelled subnets, duplicate VLANs, and cross-workspace overlaps continuously, no button-click required.

Example 4 — Enterprise: 15 AWS accounts


The hardest part of running a full AWS Organization is not losing track of which account owns which block. Overlapping CIDRs make Transit Gateway attachments and VPN tunnels fail the moment you connect two VPCs. The pattern below carves a 10.0.0.0/10 envelope (4 M addresses) into per-account blocks sized by workload: /14 for data-platform and business-unit production accounts (EKS + RDS + ECS + Lambda ENIs), /15 for staging, /16 for supporting workloads and sandboxes. Every account has enough room for secondary CIDRs if the primary VPC hits AWS's native /16 limit.

Load all 15 workspaces in tool ↗ Replaces every workspace with the complete enterprise fleet (15 accounts).

Top-level CIDR allocation — 15 accounts in one table

Block · Account · Purpose
10.0.0.0/16 shared-core Transit Gateway hub, central DNS, Directory Service, CI/CD.
10.1.0.0/16 security-audit Central CloudTrail, Config, Security Hub, log archive.
10.2.0.0/16 network-hub Egress VPC, traffic inspection, Direct Connect gateway.
10.4.0.0/14 data-lake S3 + Glue + Athena + Lake Formation.
10.8.0.0/14 analytics Redshift, EMR, Kinesis.
10.12.0.0/14 bu-a-prod Business Unit A — production workloads.
10.16.0.0/15 bu-a-stage Business Unit A — staging.
10.18.0.0/16 bu-a-dev Business Unit A — development.
10.20.0.0/14 bu-b-prod Business Unit B — production workloads.
10.24.0.0/15 bu-b-stage Business Unit B — staging.
10.28.0.0/14 bu-c-prod Business Unit C — production workloads.
10.32.0.0/15 bu-c-stage Business Unit C — staging.
10.36.0.0/14 bu-d-prod Business Unit D — production workloads.
10.40.0.0/15 bu-d-stage Business Unit D — staging.
10.42.0.0/16 sandbox Developer sandboxes (short-lived).
10.43.0.0/16 (reserved) Reserved for future account expansion (16th slot onward).
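
The same kind of check scales to the full envelope. The sketch below (stdlib ipaddress; not part of the tool) verifies every block sits inside 10.0.0.0/10 and lists the /16 slots still free for the next account request:

```python
import ipaddress

ENVELOPE = ipaddress.ip_network("10.0.0.0/10")
allocated = [ipaddress.ip_network(c) for c in [
    "10.0.0.0/16", "10.1.0.0/16", "10.2.0.0/16", "10.4.0.0/14",
    "10.8.0.0/14", "10.12.0.0/14", "10.16.0.0/15", "10.18.0.0/16",
    "10.20.0.0/14", "10.24.0.0/15", "10.28.0.0/14", "10.32.0.0/15",
    "10.36.0.0/14", "10.40.0.0/15", "10.42.0.0/16", "10.43.0.0/16",
]]

# Every allocated block must live inside the organisation envelope.
assert all(b.subnet_of(ENVELOPE) for b in allocated)

# Walk the 64 /16 slots in the /10 and keep those nothing touches.
free = [s for s in ENVELOPE.subnets(new_prefix=16)
        if not any(s.overlaps(b) for b in allocated)]
print(len(free), free[0])   # prints: 26 10.3.0.0/16
```

Note the gap at 10.3.0.0/16: it sits between the /16 shared accounts and the first /14, and is exactly the kind of deliberately unallocated slot the last bullet below recommends keeping.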

Representative VPC plans

Every business-unit account follows the same pattern — only the parent CIDR shifts. Two detailed plans are shown below: the network-hub (centralised egress and inspection) and one of the business-unit production VPCs. All 15 plans are included in the Load all bundle above.

network-hub

10.2.0.0/16 · AWS · Egress VPC, traffic inspection, Direct Connect gateway.

Load just this one ↗
CIDR · Label · AZ · Notes
10.2.0.0/21 network-hub-inspect-a us-east-1a Traffic inspection fleet.
10.2.8.0/21 network-hub-inspect-b us-east-1b Traffic inspection fleet.
10.2.16.0/22 network-hub-egress-a us-east-1a Centralised NAT gateway.
10.2.20.0/22 network-hub-egress-b us-east-1b Centralised NAT gateway.
10.2.24.0/23 network-hub-dx-a us-east-1a Direct Connect gateway.
10.2.26.0/23 network-hub-dx-b us-east-1b Direct Connect gateway.

bu-a-prod

10.12.0.0/14 · AWS · Business Unit A — production workloads.

Load just this one ↗
CIDR · Label · AZ · Notes
10.12.0.0/16 bu-a-prod-app-a us-east-1a EKS workers.
10.13.0.0/16 bu-a-prod-app-b us-east-1b EKS workers.
10.14.0.0/20 bu-a-prod-public-a us-east-1a Public ALB + NAT.
10.14.16.0/20 bu-a-prod-public-b us-east-1b Public ALB + NAT.
10.14.32.0/22 bu-a-prod-data-a us-east-1a RDS primary + caches.
10.14.36.0/22 bu-a-prod-data-b us-east-1b RDS standby + caches.

Keeping 15 accounts conflict-free

  • Post the allocation table above in your IPAM or pin it in your internal wiki. Every new account request references a single source of truth.
  • Whenever you open a new account, copy the business-unit template, shift the parent to the account's envelope, and re-run the VPC peering validator across every open workspace. Any overlap gets flagged before you stand up a Transit Gateway attachment.
  • Reserve the low-order /16 slots for shared accounts (core, security, network-hub). They peer with everything, so keep their CIDRs memorable and document the reservation.
  • Keep several slots unallocated at the end of the envelope. Acquisitions, pilots, and workload splits always show up eventually, and pre-reserved blocks are cheaper than re-planning the tree.
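
The "copy the template, shift the parent" step in the second bullet is mechanical enough to script. A sketch, assuming both envelopes are the same size (the `rebase` helper is illustrative, not a NetCalc Pro feature):

```python
import ipaddress

def rebase(plan, old_parent, new_parent):
    """Shift every subnet in `plan` so its offset inside `old_parent`
    is preserved inside `new_parent` (envelopes must be the same size)."""
    old = ipaddress.ip_network(old_parent)
    new = ipaddress.ip_network(new_parent)
    assert old.prefixlen == new.prefixlen, "envelopes differ in size"
    delta = int(new.network_address) - int(old.network_address)
    return [ipaddress.ip_network((int(s.network_address) + delta, s.prefixlen))
            for s in map(ipaddress.ip_network, plan)]

# Re-base the first bu-a-prod tiers onto Business Unit B's /14 envelope.
print(rebase(["10.12.0.0/16", "10.13.0.0/16", "10.14.0.0/20"],
             "10.12.0.0/14", "10.20.0.0/14"))
# prints: [IPv4Network('10.20.0.0/16'), IPv4Network('10.21.0.0/16'),
#          IPv4Network('10.22.0.0/20')]
```

Because the offsets are preserved, a rebased plan inherits the template's alignment and internal disjointness for free; only the cross-workspace overlap check still needs to run.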

Adapting an example


The host-count lists above are starting points. Change the inputs to match your actual workload, then re-run the VLSM Wizard to resize the tree. A few rules of thumb:

  • Size EKS / ECS tiers by peak pod count, not node count. The awsvpc CNI gives every pod an IP from the subnet. Double it if you expect rolling deployments with both old and new pods present at once.
  • Size the data tier small — databases live in a handful of subnets, not thousands.
  • Keep the public (NAT / ALB) tier small: a /27 is usually plenty, and it is also the smallest subnet AWS accepts for an ALB (eight free addresses required). Go bigger only if you expect dozens of internet-facing endpoints.
  • When in doubt, open Tools → Address Space Calculator first — it returns the recommended mask per tier before you touch the tree.
  • When you open more than one workspace, run Tools → VPC peering validator after each addition — it catches CIDR overlaps before they cost you a failed peering request.
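
The pod-count rule in the first bullet reduces to one line of arithmetic. A sketch (the `tier_prefix` helper is illustrative; NetCalc Pro's Address Space Calculator does the equivalent):

```python
import math

def tier_prefix(peak_pods: int, rollover: bool = True) -> int:
    """Smallest IPv4 prefix that fits the tier's peak pod count.
    With rollover=True the count is doubled so old and new pods can
    coexist during a rolling deployment."""
    need = peak_pods * 2 if rollover else peak_pods
    # +2 covers network and broadcast; note AWS actually reserves
    # 5 addresses per subnet, so use +5 if you are cutting it close.
    return 32 - math.ceil(math.log2(need + 2))

print(tier_prefix(2000))                  # prints: 20 (a /20 for 4000 pods)
print(tier_prefix(60000, rollover=False)) # prints: 16 — matching Example 2
```

Run it on each tier's peak count, then feed the resulting sizes back into the VLSM Wizard as the new inputs.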