SMB SERVER I - AMD Rack
Starting at $707 / month with Affirm
Rackmount AI Power for Professional Workloads
Overview
All the power of the SMB SERVER I tower in a convenient rack mount. This performance-expanded AI server is built for research labs, enterprises, and studios that need to scale beyond entry-level compute, with more speed and larger memory resources. Powered by an AMD Threadripper PRO 9965WX, dual Radeon Pro W7900 GPUs, and 512GB of ECC DDR5 memory, it enables billion-parameter AI training, high-resolution rendering, and large-scale simulation, all fully private and without cloud lock-in.
Key Highlights
AMD Threadripper PRO 9965WX + Dual Radeon Pro W7900 GPUs → 64GB VRAM for advanced AI acceleration and rendering
512GB ECC DDR5 → half a terabyte of error-corrected memory for large models and long-duration workloads
Hybrid Storage (34TB) → 2TB PCIe Gen5 NVMe + 32TB SATA SSDs for projects, datasets, and archives
Professional Reliability → ECC RAM, Titanium PSU, and precision cooling for 24/7 uptime
Local-first AI → full privacy, no recurring cloud fees, Companion OS included
Technical Specs
CPU: AMD Ryzen Threadripper PRO 9965WX (24-core, 48-thread)
GPU: Dual AMD Radeon Pro W7900 (32GB each, 64GB total VRAM)
Memory: 512GB DDR5 ECC RDIMM (8×64GB, 5600 MHz)
Motherboard: ASUS PRO WS TRX50-SAGE
Storage: 2TB PCIe Gen5 NVMe SSD + 32TB SATA SSDs (4×8TB)
Cooling: Arctic Freezer 4U-M Rev2 + Noctua NF-F12 Fans
Power: Corsair AX1600i Titanium PSU
Case: RackChoice 4U Rackmount Chassis + Gator Pro 6U Rolling Case
Support & Warranty
Includes a 1-year parts & labor warranty, with optional 2–3 year extensions.
Enterprise deployment, migration, and scaling support available.
Companion OS (Included)
CI Digital Memory — Persistent, context-rich knowledge recall
Docker-Ready — Seamless containerized workflows for AI, rendering, or business tasks
Preloaded AI Models — Llama 3, Mistral, R1, and more, available day one (see the sketch after this list)
Learning Kit — No-code tools, agent templates, tutorials, and 2 hours of expert onboarding
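As an illustration of the preloaded models in action, here is a minimal Python sketch that queries a locally hosted model over an Ollama-style HTTP API on the server itself. The endpoint path, port, and model tag shown are assumptions for illustration only; the actual interface Companion OS exposes may differ.

# Minimal sketch: query a locally hosted model on this server.
# Assumes an Ollama-style endpoint at http://localhost:11434
# (an assumption; Companion OS may expose a different port or API).
import json
import urllib.request

payload = {
    "model": "llama3",   # assumed tag for one of the preloaded models
    "prompt": "Summarize the attached meeting notes in three bullet points.",
    "stream": False,     # request a single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # assumed local inference endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

Because inference runs on the server itself, the prompt and the response never leave the machine, which is the point of the Local-first AI highlight above.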
What You Can Do
AI Training — Train multi-billion parameter models with ECC-protected reliability
Simulation — Run engineering and scientific workloads without memory bottlenecks
Rendering — Drive high-resolution VFX, 3D, and generative media pipelines
Virtualization — Host multiple VMs and containerized workflows securely
Who It’s For
Research Institutions — AI experiments and model evaluation with data kept on-prem
Enterprises — Compliance-ready AI infrastructure without datacenter overhead
Studios & Creators — Scalable rendering and generative media production pipelines
Why It Matters
Cloud independence: Avoid recurring GPU rental costs
Privacy & sovereignty: Keep sensitive datasets and models local
Scalable design: Expand RAM, GPUs, and storage as your needs grow
Enterprise reliability: ECC DDR5 and Titanium PSU ensure uninterrupted uptime
Future-ready: PCIe Gen5, DDR5, and modular architecture extend long-term value