ENTERPRISE SERVER II - NVIDIA
Extreme-Scale Compute. Enterprise Sovereignty.
Overview
The CI Enterprise Server II – NVIDIA is engineered for organizations that demand HPC-class performance, compliance, and uptime without depending on the public cloud. With a 96-core AMD Ryzen Threadripper PRO 9995WX, dual NVIDIA RTX Pro 6000 GPUs (96GB GDDR7 each), and 1TB of ECC DDR5 memory, it handles multi-billion-parameter AI training, petabyte-scale simulations, and enterprise-grade rendering. Built for resilience and scalability, it anchors multi-node deployments, delivering sovereign, compliant, and future-proof AI infrastructure.
Key Highlights
96-Core Threadripper PRO CPU → HPC-grade parallel performance for AI + scientific workloads
Dual NVIDIA RTX Pro 6000 GPUs (192GB total VRAM) → frontier-scale AI training & rendering
1TB ECC DDR5 Memory → error-corrected stability for multi-week enterprise workflows
Hybrid Storage (38TB) → 6TB PCIe Gen5 NVMe + 32TB SSD archive-ready storage
Enterprise Reliability → N+1 redundancy, Titanium PSU, ECC memory, and optimized cooling
Rack-Ready Option → Standard 4U form factor with mobility-ready Gator 6U Pro rack case
Technical Specs
CPU: AMD Ryzen Threadripper PRO 9995WX (96-core, 192-thread)
GPU: Dual NVIDIA RTX Pro 6000 (96GB GDDR7 each)
Memory: 1TB DDR5 ECC RDIMM (8×128GB, 5600 MT/s)
Motherboard: ASUS PRO WS TRX50-SAGE
Storage: 3×2TB PCIe Gen5 NVMe SSD (6TB) + 32TB SATA SSDs (4×8TB)
Cooling: Arctic Freezer 4U-M Rev2 + Noctua NF-F12 Fans
Power: Corsair AX1600i Titanium PSU
Case Options: Fractal Design Define 7 XL tower or 4U rackmount with Gator Pro 6U case
Warranty: 1-Year Limited Parts & Labor (extendable to 2–3 years)
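For capacity planning, the memory configuration above implies a theoretical bandwidth ceiling that can be sketched with simple arithmetic. This is a back-of-envelope estimate, not a benchmarked figure, and it assumes all eight memory channels are populated (the channel count is an assumption, not stated on this sheet):

```python
def peak_bandwidth_gbs(mt_per_s: int, channels: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak memory bandwidth in decimal GB/s.

    DDR5 moves 8 bytes per transfer per channel; real sustained
    throughput will be lower than this ceiling.
    """
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

# 8x128GB DDR5-5600 across an assumed 8 channels:
print(peak_bandwidth_gbs(5600, 8))  # 358.4
```

Roughly 358 GB/s of theoretical peak bandwidth, which is what keeps 96 cores fed during memory-bound HPC runs.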
Support & Warranty
1-year standard parts & labor (extendable)
Guided enterprise onboarding + cluster scaling support
SLA-backed optional managed service model
Companion OS (Included)
CI Digital Memory — Context-rich, persistent recall stored locally
Docker-Ready — Containerized workflows for AI, research, and enterprise services
Preloaded AI Models — Llama 3, Mistral, R1, and more, available on day one
Just-in-Case Dataset — Preloaded essential public resources for backup use
Agent Alpha Access — 2 free years of Companion Agent platform
Learning Kit — Tutorials, no-code tools, agent templates, and 8 hours of enterprise onboarding
What You Can Do
AI Training at Scale — Train multi-billion parameter LLMs with GPU + CPU synergy
HPC & Simulation — Run physics, engineering, and life sciences workloads
Rendering & Visualization — Drive real-time VFX, digital twins, and generative media
Enterprise AI Services — Deploy RAG pipelines, assistants, and multi-department AI infrastructure
Cluster-Ready Expansion — Build out multi-node deployments with redundancy & orchestration
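A quick way to gauge which models the 192GB of combined VRAM can hold is to estimate the footprint of the weights alone. The sketch below uses an illustrative 2 bytes per parameter (FP16/BF16); activations, optimizer state, and KV cache add real overhead on top, so treat this as a lower bound:

```python
def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate GB needed for model weights alone (FP16/BF16 by default)."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

TOTAL_VRAM_GB = 192  # dual RTX Pro 6000, 96GB each

for b in (8, 70, 180):
    need = weights_gb(b)
    verdict = "fits" if need < TOTAL_VRAM_GB else "needs quantization or offload"
    print(f"{b}B params @ FP16 ~ {need:.0f} GB of weights -> {verdict}")
```

By this estimate a 70B-parameter model (~140GB of FP16 weights) fits across both GPUs, while larger models call for quantization, CPU offload, or multi-node sharding.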
Who It’s For
Research Supercomputing Centers — Handle experimental AI + HPC workloads
Enterprises & Cloud Providers — Sovereign AI backbone with compliance-ready scaling
Visualization Teams & Media Labs — Ultra-high-resolution rendering and GPU-driven creative pipelines
Why It Matters
Resilience at Scale — Redundant nodes, failover orchestration, and compliance logging
CUDA Advantage — Access NVIDIA’s enterprise GPU ecosystem for AI + creative apps
Future-Proof Expansion — PCIe Gen5, DDR5, rackmount options for long-term scaling
Sovereign AI Infrastructure — Keep data private, compliant, and fully under your control
Cost Efficiency — Replace recurring cloud GPU spend with owned enterprise hardware
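The cost-efficiency argument can be framed as a breakeven calculation. The figures below are hypothetical placeholders for illustration only (neither the system price nor the cloud rate comes from this sheet); substitute your own quotes:

```python
def breakeven_hours(hardware_cost: float, cloud_rate_per_hour: float) -> float:
    """Hours of equivalent cloud GPU time at which owned hardware pays for itself."""
    return hardware_cost / cloud_rate_per_hour

# Hypothetical figures: a $60,000 system vs. $6/hr for a comparable
# dual-GPU cloud instance (both numbers are placeholders, not quotes).
hours = breakeven_hours(60_000, 6.0)
print(f"Breakeven after {hours:.0f} GPU-hours (~{hours / 24 / 30:.1f} months at 24/7 use)")
```

Under these assumed figures, a node running around the clock pays for itself in roughly 14 months, after which every GPU-hour is marginal-cost compute.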