Matrix IT manages the same CloudStack platform regardless of who owns the metal underneath. Pick the model that fits your budget and risk profile.
We procure, rack, and manage dedicated hardware for your workloads. Zero capital expenditure — pay monthly per node.
Already have physical servers? We install CloudStack, configure Zero Trust, and hand you the self-service portal — on your metal.
Your hardware (or ours) in a third-party Canadian data centre. Matrix IT manages remotely — no one needs to visit the DC day-to-day.
CTOs face the same three obstacles when running their own hardware. Tier1cloud solves all three.
Servers have a 5–7 year lifecycle. Planning, procuring, and migrating every refresh cycle drains capital and IT bandwidth. With Tier1cloud, hardware costs become predictable monthly opex.
Hypervisors, networking, storage pools, and security patching require specialised skills most development-focused teams don’t have. Matrix IT provides that expertise 24/7 as part of the service.
Unmanaged on-premise infrastructure is a liability. Missed patches, failed drives, and open firewall ports are all preventable. Tier1cloud includes NOC monitoring, SLA-backed uptime, and Zero Trust by default.
Everything your team needs to move from on-premise to cloud — without hiring a platform team.
Apache CloudStack on dedicated hardware. Self-service portal, REST API, Terraform provider. Deploy VMs in minutes — any hypervisor, any OS.
No VPN, no open ports. AppGate SDP or any ZTNA solution gives each developer identity-bound access to exactly the VMs they need — nothing more.
SLA-backed managed VMs. Matrix IT NOC monitors 24/7. Live migration, anti-affinity, automated snapshots, and off-cluster backups included.
CloudStack 4.x is the proven open-source platform powering thousands of clouds worldwide. It supports every major hypervisor, includes a full REST API, and gives you a self-service portal out of the box — without AWS pricing surprises.
AppGate SDP (or any ZTNA solution) gives each developer a per-user micro-tunnel to exactly the VMs they’re entitled to — nothing more. Team A cannot see Team B. No firewall rules to manage.
No ticket queues, no waiting on IT. Developers log into the CloudStack portal, pick a template, set CPU/RAM, and click deploy. Zero Trust credentials are provisioned automatically.
A clear division of responsibility. Matrix IT manages every layer below the guest OS: hypervisors, CloudStack, network fabric, storage, backups, and security patching. Your teams run applications and manage their VMs.
Tiered storage is built into every Tier1cloud deployment. Primary NVMe for hot workloads, secondary NFS for templates and snapshots, and Wasabi S3 for offsite archiving — at $10 CAD/TB/month with no egress fees.
Three buyers. Three journeys. Same platform underneath.
Dev lead logs into portal, picks Ubuntu 22.04, deploys 4 vCPU / 8 GB VM. Zero Trust entitlement appears in their AppGate client automatically. Team isolated in their own VLAN. No tickets.
Discovery call with Matrix IT → hardware sizing → BYOH or rental decision → CloudStack installed → VMs migrated one at a time → Zero Trust replaces VPN → old hardware decommissioned.
Matrix IT audits hardware, installs KVM + CloudStack, connects Zero Trust, configures VLANs per team. Customer gets full self-service portal on their own hardware within one week.
Dedicated infrastructure with managed services — the best of both worlds.
| Feature | Tier1cloud | AWS / Azure | Self-Managed On-Prem |
|---|---|---|---|
| Dedicated physical hardware | ✓ Always dedicated | ✗ Shared (most tiers) | ✓ Yes |
| Canadian data sovereignty | ✓ Guaranteed | Optional (extra cost) | ✓ Yes |
| Zero Trust built-in | ✓ Included | Extra product, extra cost | ✗ DIY |
| Predictable monthly cost | ✓ Flat per node | ✗ Variable / egress fees | ✗ Capex spikes |
| Bring your own hardware | ✓ BYOH supported | ✗ Not possible | ✓ Yes |
| Colocation option | ✓ Canadian DCs | ✗ Not applicable | ✓ Yes |
| Managed 24/7 by experts | ✓ Matrix IT NOC | Self-service only | ✗ Your burden |
| Self-service VM portal | ✓ CloudStack UI | ✓ Yes | ✗ DIY |
| No egress fees | ✓ None | ✗ Expensive | ✓ None |
| Hypervisor agnostic | ✓ KVM / VMware / XCP-ng | ✗ Proprietary | Depends |
No proprietary lock-in. Every component is enterprise-grade and replaceable.
Book a free 30-minute infrastructure assessment with Matrix IT. We’ll map your workloads to a Tier1cloud deployment plan and show you what the monthly cost looks like.
No capital expenditure. Matrix IT procures, racks, and manages dedicated physical hardware for your workloads. You get a private cloud with none of the hardware headaches — all on predictable monthly pricing.
Compute nodes, storage, networking specs available
How hardware is sized and selected for your workload
Your VMs run only on your physical nodes
Monthly per-node — no surprise refresh costs
CTOs who want to exit on-premise completely. No hardware procurement, no rack space, no refresh planning. Matrix IT handles everything — you get a self-service portal and Zero Trust access from day one.
Week 1: discovery & sizing • Week 2: hardware procurement • Week 3–4: rack, configure, test • Week 5: customer onboarding & VM migration begins.
Already have physical servers? Matrix IT installs CloudStack on your hardware, connects Zero Trust, and hands you the self-service portal. You keep owning the metal — we manage everything above it.
Minimum specs to run CloudStack effectively
How we take over management of your existing hardware
KVM, VMware, XenServer/XCP-ng all supported
You own hardware, Matrix IT manages software layer
Companies with existing server investment that want CloudStack, Zero Trust, and managed services layered on top — without giving up hardware ownership or paying for new metal.
Your hardware (or ours) in a third-party data centre of your choice. Matrix IT manages the CloudStack platform remotely. Full Zero Trust access — no need for anyone to visit the DC day-to-day.
Your hardware in a DC, or Matrix IT hardware in colo
Canadian colo partners & power/cooling requirements
IPMI/iDRAC, out-of-band, KVM-over-IP
Cross-connects, BGP peering, carrier diversity
Companies that want physical infrastructure in a professional data centre but don’t want to manage it. Ideal when your lease is expiring or you need carrier-grade connectivity without building your own DC.
Apache CloudStack powers everything. Self-service portal, open API, no vendor lock-in. Works identically across all three hardware models. Deploy VMs in minutes, automate with Terraform, integrate with any CI/CD pipeline.
vCPU, RAM, resource pools, service offerings
VLANs, virtual routers, public IPs, firewall rules
Primary/secondary, volumes, snapshots
REST API, CloudMonkey CLI, Terraform provider
No VPN. No open firewall ports. AppGate SDP gives each developer an identity-bound micro-tunnel to exactly the VMs they’re entitled to. Team A cannot reach Team B’s infrastructure. Works identically on hosted, BYOH, and colo.
Software-defined perimeter, per-user micro-tunnels
User mapped to VLAN and specific VM set
Dev Team A cannot see or reach Team B’s VMs
Every connection logged: user, time, destination, duration
SLA-backed managed VMs. Matrix IT runs the platform — you run your applications. NOC monitors 24/7. Live migration ensures zero downtime during host maintenance.
OS management, dedicated resources, SLA
Live migration, anti-affinity, failover
What Matrix IT watches 24/7
Snapshots, off-cluster storage, RTO/RPO
Open-source IaaS engine. CloudStack 4.x powers thousands of clouds worldwide. Supports every major hypervisor, includes a full REST API, and has a battle-tested self-service portal — no vendor lock-in, no licensing fees.
Data centre organisation and failure isolation
KVM, VMware, XenServer/XCP-ng
Basic, Advanced, VPC networking
REST, AWS EC2/S3 compat, CloudMonkey, Terraform
Developers spin up VMs in under 5 minutes. No tickets, no waiting on IT. Log into the CloudStack portal, pick a template, set CPU and RAM, click deploy. Zero Trust access credentials are provisioned automatically on creation.
Web self-service portal walk-through
Pre-built OS images ready to deploy
Credential provisioned on VM creation
Shared environments and personal sandboxes
Matrix IT owns the CloudStack platform and network layer. Your teams own everything above — guest OS, applications, and data. Clear division. Same model regardless of hardware ownership.
Hypervisor, CloudStack, network, monitoring
VMs, apps, data, team access policies
Security policy, change management, capacity
Week 1 discovery through Week 3 go-live
Tiered storage from fast NVMe primary to infinite offsite archive. Snapshot policies, online volume resize, and Wasabi S3 integration for offsite archiving at $10 CAD/TB/month with no egress fees.
IOPS allocation and QoS per VM
NFS/object for templates, ISOs, snapshots
Canadian archive, no egress fees
Attach, resize, detach live
Dev team lead logs into the CloudStack portal, selects Ubuntu 22.04 template, picks 4 vCPU / 8GB RAM, clicks deploy. VM is live in 3 minutes. Zero Trust entitlement appears in their AppGate client automatically. Team is isolated in their own VLAN. No tickets, no waiting on IT.
1. CloudStack spun up VM on an available KVM host • 2. VLAN 100 assigned for Team Alpha • 3. IP allocated from 10.10.100.0/24 • 4. AppGate entitlement pushed to dev’s client • 5. Hourly snapshot policy applied • 6. NOC monitoring enabled
How the self-service portal works
How VLANs keep teams separate
VLAN and virtual router setup
Discovery call with Matrix IT → hardware sizing → BYOH or hardware rental decision → CloudStack installed → VMs migrated one at a time → Zero Trust replaces VPN → old hardware decommissioned. No downtime, no data centre visit required.
If bringing your own servers
Week-by-week migration plan
AppGate SDP migration from VPN
Customer has 10 Dell PowerEdge servers sitting in a rack. Matrix IT audits hardware, installs KVM + CloudStack, connects Zero Trust, configures VLANs per team. Customer gets full self-service portal on their own hardware within a week.
Monday: Matrix IT engineers access servers remotely via IPMI. Kernel updated, KVM installed. Tuesday: CloudStack management server deployed. Wednesday: Zones, Pods, Clusters configured. Thursday: VLANs per team, Zero Trust gateway, AppGate entitlements. Friday: Customer walkthrough and portal handover.
What specs we check before starting
CloudStack + ZT install timeline
Everything above the hardware layer
Book a free 30-minute infrastructure assessment. We’ll map your workloads to a Tier1cloud deployment plan and provide a clear monthly cost estimate.
🇨🇦 All data hosted in Canada • No lock-in contracts • Powered by Apache CloudStack
AMD EPYC or Intel Xeon class CPUs. Minimum 2-socket configurations. Options: 32-core/256GB, 48-core/512GB, 64-core/1TB RAM. NVMe SSD local storage per node for primary pool.
10GbE minimum per node, 25GbE available. Bonded uplinks for redundancy. VLAN trunking on all ports. Dedicated management network for IPMI/iDRAC out-of-band.
Standard: 2x 1.9TB NVMe SSD in RAID-1 for primary. High-performance tier: 4x 3.84TB NVMe in RAID-10. Separate HDD or SAS for secondary/NFS storage pool.
CPU generations, RAM configs, NVMe options
10GbE/25GbE, bonded uplinks, VLAN trunking
Matrix IT conducts a workload sizing exercise to determine the right hardware. We issue a fixed monthly quote per compute node, no surprises. Hardware is procured, configured, and delivered to our facility or your colo.
Hardware is provided month-to-month. No 3-year lock-ins. Scale up by adding nodes. Scale down by returning nodes (30-day notice).
How Matrix IT right-sizes for your workload
Typical hardware procurement and racking timeline
Your VMs run only on hardware provisioned for your account. No noisy neighbours. Predictable IOPS, predictable memory — the physical resources are yours alone.
Other Tier1cloud customers run on separate physical hardware in separate VLANs. Network traffic is isolated at the switch layer, not just in software.
Your VMs on your physical nodes only
How other customers are isolated
Pricing is fixed per physical node per month. Includes: CloudStack management, hypervisor, Zero Trust gateway, NOC monitoring, backup snapshots, and support. No egress fees. No per-GB storage charges beyond the included pool.
A typical 5-node cluster costs $25K–$40K to purchase. Monthly rental removes this capex hit. Matrix IT also handles the hardware refresh cycle — you never need to buy again.
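The break-even logic is simple to sanity-check. A minimal sketch, using the midpoint of the $25K–$40K purchase range and the 5–7 year refresh cycle quoted above (the figures are illustrative; no actual Tier1cloud pricing is implied):

```python
# Illustrative arithmetic only: purchase price is the midpoint of the
# $25K-$40K range above; lifecycle is the midpoint of the 5-7 year
# refresh cycle. Not actual Tier1cloud pricing.
CLUSTER_PURCHASE_CAD = 32_500
LIFECYCLE_MONTHS = 6 * 12

def capex_monthly_equivalent(purchase_cad: float, months: int) -> float:
    """Straight-line monthly cost of owning the cluster outright
    (hardware only; ignores power, cooling, staff time, and the
    refresh project itself)."""
    return round(purchase_cad / months, 2)

owning_per_month = capex_monthly_equivalent(CLUSTER_PURCHASE_CAD,
                                            LIFECYCLE_MONTHS)
```

Compare that hardware-only figure against a managed per-node rental that already includes NOC monitoring, patching, and the next refresh.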
Monthly per node, per rack unit breakdown
When renting makes more financial sense
Minimum per node: 8-core CPU (x86_64), 32GB RAM, 500GB SSD, 1GbE NIC (10GbE recommended). Minimum cluster: 3 nodes for HA. Management server can be a VM on the cluster or a separate physical machine.
Dell PowerEdge, HP ProLiant, Lenovo ThinkSystem, Supermicro, Cisco UCS, custom whitebox builds. Any x86_64 server capable of hardware virtualisation (VT-x/AMD-V) qualifies.
CPU, RAM, storage, NIC requirements
Dell, HP, Supermicro, Lenovo, custom
What we check before deploying CloudStack
CloudStack + hypervisor + Zero Trust timeline
Linux KVM with libvirt. Open-source, best performance, lowest overhead. Matrix IT installs and manages the hypervisor layer. No licensing cost.
If you have existing VMware infrastructure, Matrix IT can integrate CloudStack with your vCenter. Existing VMs remain running — CloudStack manages new deployments alongside them.
Open-source XenServer fork. Good choice if you have XenServer experience. Full CloudStack support, including live migration and storage motion.
What we install, kernel version, libvirt setup
How we integrate without disrupting current VMs
Matrix IT manages software only. If you ever leave, we cleanly remove CloudStack, Zero Trust software, and management access. You keep your servers and all data.
When you decide to refresh hardware, Matrix IT re-provisions the new hardware with CloudStack at no additional setup charge within the same contract.
Removing CloudStack management if you leave
How aging hardware gets replaced on your schedule
You ship or transport hardware to a colo facility. Matrix IT manages remotely via IPMI/iDRAC. Full CloudStack + Zero Trust setup. No Matrix IT hardware involved.
Matrix IT procures and owns hardware, ships it to the colo facility of your choice. You get dedicated hardware in a premium DC without building your own.
You ship hardware, we manage remotely
Best of both worlds
Cologix (Montreal, Toronto, Vancouver), eStruxture (Montreal, Calgary), 151 Front (Toronto), Rogers Datacentres (Toronto), and Telus data centres. Matrix IT has experience with all major Canadian colo operators.
Typical per-rack draw: 5–15 kW. Redundant PDU (A/B circuits) recommended. Hot-aisle/cold-aisle containment preferred. 1U/2U form factors standard. Out-of-band console KVM switch required.
Cologix, 151 Front, Rogers, eStruxture
Per-rack draw, redundant PDU options
Matrix IT requires out-of-band management access to all colo hardware. IPMI on commodity hardware, iDRAC on Dell, iLO on HP. Provides remote console, power cycle, and hardware diagnostics without needing a site visit.
Matrix IT engineers access colo hardware via Zero Trust tunnels. No site-to-site VPN. No jumpbox. Any engineer can access any authorised server from anywhere securely.
Remote console, power cycle, diagnostics
When physical access is needed
Connect directly to your internet provider, MPLS network, or cloud on-ramp (AWS Direct Connect, Azure ExpressRoute) via colo cross-connect. Matrix IT can procure and manage cross-connects on your behalf.
For customers with their own IP address space, BGP peering is available at major colo facilities. Carrier diversity provides redundant internet paths.
Connecting to your ISP or MPLS
No VPN even in colo environments
Pre-built CPU/RAM tiers: Small (2 vCPU / 4 GB), Medium (4 vCPU / 8 GB), Large (8 vCPU / 16 GB), XL (16 vCPU / 32 GB), plus Custom. CTO can restrict which tiers teams can use.
Per-account, per-team CPU and RAM caps set by the CTO. Teams can self-serve within their quota. Quota increase requests go through a simple approval workflow.
Pre-built CPU/RAM tiers available
Per-team caps, how to request more
Each team gets a dedicated VLAN. CloudStack provisions VLANs dynamically. Virtual routers provide DHCP, DNS, and NAT. Teams cannot reach each other unless explicitly configured.
Default-deny on all ingress. Teams can add egress rules for outbound internet. Inbound ports must be explicitly opened. Security group model similar to AWS.
Why each team gets its own network segment
Default deny, egress control, security groups
Create, attach, detach, and resize data volumes without VM downtime. Thin provisioning by default. Thick provisioning available for latency-sensitive workloads.
Hourly retention: 24 hours. Daily retention: 7 days. Weekly retention: 4 weeks. Monthly: 12 months. Snapshots stored on secondary storage. Off-cluster to Wasabi S3 on schedule.
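The retention windows above reduce to a simple policy check. A minimal sketch (tier names and functions are illustrative, not the actual Tier1cloud scheduler):

```python
# Retention windows in hours, matching the stated policy:
# hourly 24 h, daily 7 days, weekly 4 weeks, monthly 12 months.
RETENTION_HOURS = {
    "hourly": 24,
    "daily": 7 * 24,
    "weekly": 4 * 7 * 24,
    "monthly": 365 * 24,  # ~12 months
}

def is_retained(tier: str, age_hours: float) -> bool:
    """True if a snapshot of this tier and age is still kept."""
    return age_hours <= RETENTION_HOURS[tier]

def snapshots_to_prune(snapshots):
    """Given (tier, age_hours) pairs, return those past their window."""
    return [s for s in snapshots if not is_retained(*s)]
```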
Hourly/daily/weekly retention schedules
Attach, detach, resize live
Full REST API for all CloudStack operations. AWS EC2 and S3 compatible API layer available. Works with any HTTP client. JSON responses.
```shell
# Deploy a VM via the CloudMonkey CLI.
# CloudMonkey takes key=value parameters (no "--" flags).
cloudmonkey deploy virtualmachine \
  serviceofferingid=<id> \
  templateid=<id> \
  zoneid=<id> \
  networkids=<network-id> \
  displayname=dev-server-01
```
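Every portal and CLI action maps to a signed REST call. CloudStack's documented signing scheme sorts the query parameters, lowercases the string, and HMAC-SHA1s it with the user's secret key. A minimal Python sketch (the endpoint host, keys, and IDs are placeholders):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string.

    Per CloudStack's documented scheme: sort parameters by name,
    URL-encode values, lowercase the whole string, HMAC-SHA1 it with
    the secret key, then base64-encode and URL-encode the digest.
    """
    params = dict(params, apikey=api_key, response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"

# Hypothetical endpoint and IDs, for illustration only:
url = "https://cloud.example.com/client/api?" + sign_request(
    {"command": "deployVirtualMachine",
     "serviceofferingid": "OFFERING-ID",
     "templateid": "TEMPLATE-ID",
     "zoneid": "ZONE-ID"},
    api_key="API-KEY", secret_key="SECRET-KEY")
```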
Example commands to spin up a VM
Full cloudstack_instance HCL example
AppGate SDP creates a per-user, per-session micro-tunnel to the specific resources the user is entitled to. No network-wide access. No broadcast domain. The network topology is invisible to the user — they see only their VMs.
1. Matrix IT creates entitlement (user → VM/VLAN mapping). 2. User installs AppGate client. 3. User authenticates (SSO/SAML + MFA). 4. User sees only their entitled resources in the client. No network config required.
Comparison table, why VPN fails at scale
User gets entitlement, installs client, done
It’s not about what network you’re on. It’s about who you are. Each user has explicit entitlements. Revoking a user’s access is instant — no firewall rule changes required.
TOTP (Google Authenticator, Authy), hardware security key (YubiKey, FIDO2), SAML/SSO integration (Okta, Azure AD, Google Workspace). MFA is enforced on every session.
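The TOTP codes that apps like Google Authenticator and Authy produce follow RFC 6238. A minimal stdlib sketch, for illustration only (not Tier1cloud's MFA implementation):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 the counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over a 30-second time-step counter."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t // step), digits)
```

Because both sides derive the code from the shared secret and the current time, no password ever crosses the wire.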
TOTP, hardware key, SSO/SAML
Idle timeout, re-auth triggers
Each development team is assigned a dedicated VLAN. CloudStack provisions the VLAN automatically when the team is created. Team A’s VMs are on VLAN 100, Team B on VLAN 200, etc. VLANs are isolated at the virtual switch layer.
Without an AppGate entitlement, a VM’s IP address is not discoverable. There are no open ports to scan. The server does not respond to ping from outside its VLAN. Zero Trust + VLAN isolation provides true micro-segmentation.
Network diagram of team segments
VMs not discoverable without entitlement
User, timestamp (UTC), source IP, destination VLAN/IP, port, protocol, session duration. Logs are immutable. Stored for 90 days by default (12 months available). Exportable to SIEM (Splunk, Elastic, etc.).
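The logged fields above can be modelled as an append-only record with a CSV export for SIEM ingestion. A sketch with hypothetical names (the real log pipeline is AppGate's, not this code):

```python
import csv
import io
from dataclasses import asdict, dataclass

@dataclass(frozen=True)  # frozen mirrors the "immutable" log guarantee
class AccessLogEntry:
    user: str
    timestamp_utc: str
    source_ip: str
    destination: str   # VLAN/IP
    port: int
    protocol: str
    duration_s: int

def export_csv(entries) -> str:
    """Render log entries as CSV for SIEM ingestion or audit export."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(entries[0])))
    writer.writeheader()
    for e in entries:
        writer.writerow(asdict(e))
    return buf.getvalue()
```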
Generate access reports per user, per VM, per time range. Useful for SOC 2, PIPEDA, internal audits. Reports exportable as CSV or PDF.
User, timestamp, source IP, destination, duration
Exporting logs for audit trail
Ubuntu 22.04 LTS, Ubuntu 24.04 LTS, RHEL 8/9, Rocky Linux 8/9, Debian 11/12, Windows Server 2019/2022. Custom templates available on request.
Monthly patching windows coordinated with customers. Emergency CVE patches applied within 24 hours of critical disclosure. Zero-downtime patching via live migration.
Ubuntu, RHEL, Windows, Debian
Monthly windows, emergency CVE patches
When a physical host requires maintenance, VMs are live-migrated to another host in the same cluster. Zero downtime for the VM. Users connected via Zero Trust experience no interruption.
Critical VMs can be placed on separate physical hosts to survive single-host failures. Configure anti-affinity groups in CloudStack to spread replicated workloads.
Zero downtime host maintenance
Spreading VMs across physical hosts
Matrix IT NOC monitors: hypervisor CPU/RAM/disk, CloudStack management server health, network interface errors, IPMI hardware alerts (temperature, fan, PSU), storage pool utilisation, and VM uptime ping checks.
Alert → automated remediation attempt → NOC engineer review → on-call engineer page → customer notification. P1 (site down): 15-minute response. P2 (degraded): 1-hour response.
CPU, RAM, disk I/O, network, uptime
Alert to NOC to on-call to customer
Hourly snapshots retained for 24 hours. Daily snapshots retained for 7 days. Weekly snapshots retained for 4 weeks. Off-cluster snapshots sent to Wasabi S3 on daily schedule.
Customer submits restore request via Matrix IT portal or support ticket. Typical turnaround: 15–60 minutes depending on snapshot age and VM size. Full VM restore or single-volume restore available.
Hourly 24h, daily 7d, weekly 4w retention
How to request, typical turnaround
Matrix IT owns and maintains: hypervisor (KVM/VMware/XCP-ng), CloudStack management server, virtual routers, VLAN fabric, Zero Trust gateway, storage pools (primary NVMe and secondary NFS), and off-cluster backups.
24/7 monitoring, hypervisor security patching, CloudStack upgrades, hardware firmware updates, and incident response. All performed without customer involvement unless change management requires approval.
What Matrix IT maintains at hardware layer
CloudStack, hypervisor, security CVE patching
Everything above the hypervisor is yours: guest OS installation, application deployment, data management, user accounts within VMs, application-level backups, and development workflows.
All data remains your property. You can export or migrate VMs at any time. Matrix IT has no access to VM contents unless explicitly granted for support purposes.
What remains your responsibility
Always yours, exportable at any time
Discovery calls with CTO and IT leads. Workload inventory. Network diagram review. Security requirements. Team structure and VLAN design. Hardware model decision (hosted / BYOH / colo).
Hardware procurement (if hosted) or hardware audit (if BYOH/colo). CloudStack zone/pod/cluster design. VLAN layout. Zero Trust entitlement map. Storage pool sizing.
CloudStack + hypervisor + Zero Trust installed. VMs migrated (or new deployments). Customer team walkthrough. Portal access granted. NOC monitoring enabled. SLA starts.
On-prem to cloud for all hardware models
Discovery calls, inventory audit, network design
A Zone is a physical data centre or failure domain. Each Zone has its own network, storage, and compute. Multiple Zones can be linked for DR.
Within a Zone, Pods represent a layer-2 broadcast domain (a rack row). Each Pod has its own IP address space and Pod-level storage.
Within a Pod, a Cluster is a group of hosts running the same hypervisor. Live migration is possible within a Cluster. Clusters can be KVM, VMware, or XCP-ng.
Zone/Pod/Cluster diagram
Failure isolation at zone, pod, cluster level
Linux KVM with libvirt. Best performance/cost ratio. Open-source, no licensing. Recommended for BYOH and hosted models.
For customers with existing VMware. CloudStack integrates with vCenter. KVM and VMware hosts cannot be mixed within a single Cluster, but KVM and VMware Clusters can co-exist in the same Zone.
Open-source XenServer. Strong SR-IOV network support. Good choice for networking-intensive workloads.
Kernel version, libvirt, performance
vSphere, vCenter integration, migration
Each team gets a dedicated VLAN with a CloudStack Virtual Router providing DHCP, DNS, NAT, and stateful firewall. Egress internet via NAT. Inbound via static NAT / port-forward.
Virtual Private Cloud within CloudStack. Multiple tiers (subnets) within one VPC. ACL rules between tiers. Site-to-site VPN to on-premise or other clouds. Best for multi-tier app architectures.
Multi-tier application networking
NAT, DHCP, DNS, firewall all-in-one
Every CloudStack operation is available via REST. JSON responses. API keys per user. Full documentation at cloudstack.apache.org/api.
```hcl
# Terraform HCL example using the CloudStack provider
resource "cloudstack_instance" "dev_server" {
  name             = "dev-server-01"
  service_offering = "Small-4vCPU-8GB"
  template         = "Ubuntu-2204"
  zone             = "zone-01"
  # Attach to Team A's isolated network (VLAN provisioned by CloudStack)
  network_id       = cloudstack_network.team_a.id
}
```
Deploy a VM with a single REST call
Full cloudstack_instance HCL example
Login → Dashboard shows your VMs, their state, and IP addresses. Click “Add Instance” → select Zone → pick Template → choose Service Offering (CPU/RAM) → select Network → Deploy. VM appears in your dashboard within 3 minutes.
Embedded VNC console in the portal. Access your VM’s console without installing an SSH client or Zero Trust client. Useful for initial OS setup or emergency recovery.
Step-by-step portal walkthrough
Viewing VMs, IPs, console access
Ubuntu 22.04 LTS, Ubuntu 24.04 LTS, RHEL 8, RHEL 9, Rocky Linux 9, Debian 12, Windows Server 2019, Windows Server 2022, Docker CE pre-installed, Kubernetes node (kubeadm ready).
Teams can register their own templates. Build your base image, snapshot it, register it as a template. Your team’s custom template appears only in your account’s template library.
Ubuntu, RHEL, Windows, Docker, K8s
Registering a team golden image
When a VM is created in the portal, an entitlement is automatically provisioned in AppGate for all users in the VM’s team. The entitlement grants SSH (port 22) and any other ports defined in the team’s access policy. No manual firewall rule changes.
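The flow above is a straightforward mapping: VM created, team policy looked up, one entitlement per member pushed. A sketch with hypothetical team names and structures (not AppGate's actual API):

```python
# Illustrative "VM created -> entitlements pushed" flow.
# Team policies and record shapes are placeholders.
TEAM_ACCESS_POLICY = {
    "team-alpha": {"ports": [22], "members": ["alice", "bob"]},
}

def entitlements_for_new_vm(vm_id: str, team: str,
                            policy=TEAM_ACCESS_POLICY):
    """One entitlement per team member: user -> (vm, allowed ports)."""
    p = policy[team]
    return [
        {"user": user, "vm": vm_id, "ports": list(p["ports"])}
        for user in p["members"]
    ]
```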
No Zero Trust client needed for browser VNC. The portal provides in-browser console access using a secure WebSocket connection through the CloudStack management server.
VM created → identity mapped → access granted
VNC without needing ZT client
VMs deployed to a team VLAN are shared within the team. All team members with entitlements can SSH into any team VM. Good for shared dev/test environments, build servers, shared databases.
A developer can spin up personal VMs in their own sub-VLAN. Only they can access these VMs. Good for personal dev environments, destructive testing, learning exercises.
All team VMs on same isolated network
CTO sets per-team VM/CPU/RAM limits
Primary storage is local NVMe SSD on each compute node, aggregated into a CloudStack storage pool. Thin provisioning with over-provisioning ratio of 2:1. QoS limits prevent any single VM from saturating the pool.
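The 2:1 over-provisioning ratio above implies a simple admission check when a new thin volume is requested. A minimal sketch (function and figures are illustrative):

```python
# Illustrative capacity check for a 2:1 thin-provisioned pool.
OVERPROVISION_RATIO = 2.0

def can_allocate(requested_gb: float, allocated_gb: float,
                 physical_gb: float,
                 ratio: float = OVERPROVISION_RATIO) -> bool:
    """Allow a new volume only while total allocations stay within
    physical capacity times the over-provision ratio."""
    return allocated_gb + requested_gb <= physical_gb * ratio

# e.g. a 10 TB pool at 2:1 can carry up to 20 TB of allocated volumes.
```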
For latency-sensitive workloads (databases, real-time systems): dedicated NVMe pool with thick provisioning and no QoS limits. Available as an add-on storage offering.
Standard vs high-performance storage
Trade-offs and recommendations
Secondary storage stores VM templates, ISO images, and snapshots. Implemented as NFS share from a dedicated storage server. Capacity scales independently from compute nodes.
CloudStack automatically replicates templates to secondary storage in each Zone. When a new VM is deployed, the template is served from the Zone’s local secondary storage — fast and efficient.
How CloudStack stores and distributes images
Where snapshots live, retention policy
VM snapshots and backup exports are pushed to Wasabi S3 on a daily schedule. Wasabi’s Canadian region (ca-central-1) ensures data sovereignty. 11 nines (99.999999999%) durability. ~$10 CAD/TB/month with no egress fees.
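Because the rate is flat and egress is free, the archive bill is pure arithmetic. A sketch using the quoted ~$10 CAD/TB/month figure:

```python
# Archive cost at the quoted flat rate: ~$10 CAD per TB per month,
# with no egress charge, so restores do not change the bill.
RATE_CAD_PER_TB_MONTH = 10.0

def monthly_archive_cost_cad(archived_tb: float) -> float:
    return round(archived_tb * RATE_CAD_PER_TB_MONTH, 2)

# e.g. 12 TB of off-cluster snapshots costs $120/month, however much
# of it you pull back.
```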
Tier1cloud offsite archiving is powered by StorageCloud360 — Matrix IT’s managed Wasabi S3 service. storagecloud360.com
~$10 CAD/TB/mo, no egress fees
rclone, AWS CLI, any S3 client
Expand a data volume without VM downtime. CloudStack extends the block device, guest OS extends the filesystem (Linux: resize2fs/xfs_growfs, Windows: Disk Management). No migration required.
Read-write-many volumes available for clustered applications (GlusterFS, Ceph-based apps, OCFS2). Requires shared storage pool configuration. Available on request.
Expand a volume without VM downtime
Read-write-many for clustered apps