ENTERPRISE SERVER I - NVIDIA
The baseline node for private enterprise AI.
Overview
The CI Enterprise Server I – NVIDIA is the foundation of enterprise-grade, local-first AI. Powered by AMD Threadripper PRO 9965WX and dual NVIDIA RTX Pro 6000 GPUs (96GB GDDR7 each), it delivers unmatched compute throughput and GPU memory for training large-scale AI models, ultra-high-resolution rendering, and scientific simulations. With 1TB ECC DDR5 memory and 38TB hybrid storage, it provides mission-critical stability, compliance, and scalability — all without the cloud.
Key Highlights
Dual NVIDIA RTX Pro 6000 GPUs (192GB total VRAM) → industry-leading GPU memory and CUDA acceleration
1TB ECC DDR5 Memory → multi-week training stability for billion-parameter models
Hybrid Storage (38TB) → 6TB PCIe Gen5 NVMe + 32TB SATA SSDs for active and archival workloads
Enterprise Reliability → ECC RAM, Titanium PSU, RAID support, and redundant cooling
Compliance Ready → hardware encryption, audit logging, and regulatory alignment (GDPR/HIPAA/SOC 2)
Technical Specs
CPU: AMD Ryzen Threadripper PRO 9965WX (24-core, 48-thread)
GPU: Dual NVIDIA RTX Pro 6000 (96GB GDDR7 each)
Memory: 1TB DDR5 ECC RDIMM (8×128GB, 5600 MHz)
Motherboard: ASUS PRO WS TRX50-SAGE
Storage: 3×2TB PCIe Gen5 NVMe SSD (6TB total) + 32TB SATA SSDs (4×8TB)
Cooling: Arctic Freezer 4U-M Rev2 + Noctua NF-F12 Fans
Power: Corsair AX1600i Titanium PSU
Case Options: Fractal Design Define 7 XL (tower) or RackChoice 4U + Gator 6U Pro Rack Case
Warranty: 1-Year Limited Parts & Labor (extendable to 2–3 years)
Support & Warranty
1-year standard parts & labor coverage (extendable to 2–3 years)
Guided onboarding, setup, and migration assistance
Update pipeline + optional managed service add-ons
Companion OS (Included)
CI Digital Memory — Context-rich, persistent recall stored locally
Docker-Ready — Containerized workflows for AI, research, and enterprise services
Preloaded AI Models — Llama 3, Mistral, R1, and more, available on day one (see the example after this list)
Just-in-Case Dataset — Preloaded essential public resources for backup use
Agent Alpha Access — Two free years of the Companion Agent platform
Learning Kit — Tutorials, no-code tools, agent templates, and 8 hours of enterprise onboarding
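As a concrete illustration of the preloaded-model workflow above, the minimal sketch below sends a prompt to a model served locally on the machine through an OpenAI-compatible HTTP API. The endpoint URL, port, and model tag are assumptions for illustration only; substitute whatever local serving stack (for example a containerized Ollama or vLLM instance) and model you actually deploy.

# Minimal sketch: prompting a locally hosted model over an OpenAI-compatible
# endpoint. The URL, port, and model tag below are assumptions; adjust them
# to match the serving stack running on this server.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # assumed local endpoint
    json={
        "model": "llama3",                          # assumed preloaded model tag
        "messages": [
            {"role": "user", "content": "Summarize this week's support tickets."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

Because the request never leaves localhost, prompts and responses stay on the machine, which is the point of the local-first design.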
What You Can Do
AI Training — Handle multi-billion parameter models with dual 96GB GPUs (see the training sketch after this list)
Simulation & HPC — Run real-time physics, engineering, and scientific workloads
Rendering & Visualization — Accelerate ultra-high-resolution rendering and digital twins
Enterprise AI Services — Deploy multi-user retrieval-augmented generation (RAG), assistants, and knowledge pipelines
Virtualization — Host VMs and containerized services for distributed teams
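To make the dual-GPU training claim concrete, the sketch below runs a sharded training loop across the two 96GB cards using PyTorch FullyShardedDataParallel (FSDP), which splits parameters and optimizer state between the GPUs instead of replicating them. The model here is a small placeholder standing in for a real multi-billion-parameter network, and the launch command assumes the standard torchrun launcher; wrapping policies, mixed precision, and checkpointing would be tuned for a production run.

# Minimal sketch: sharded training across two GPUs with PyTorch FSDP.
# Launch with: torchrun --nproc_per_node=2 train_fsdp.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")              # NCCL backend for NVIDIA GPUs
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder network; a real workload would load a transformer here.
    model = torch.nn.Sequential(
        torch.nn.Linear(8192, 8192), torch.nn.GELU(), torch.nn.Linear(8192, 8192)
    ).cuda(local_rank)
    model = FSDP(model)                          # shard parameters across both GPUs
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                      # placeholder training loop
        batch = torch.randn(16, 8192, device=local_rank)
        loss = model(batch).pow(2).mean()        # dummy objective
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()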
Who It’s For
Enterprises — Deploy compliant, sovereign AI infrastructure with scalability
Research Institutions — Train, simulate, and analyze at frontier scale
Studios & Visualization Teams — Drive high-resolution rendering and generative pipelines
Why It Matters
Enterprise-Class GPUs: RTX Pro 6000 delivers 96GB VRAM per GPU for unmatched throughput
Local Sovereignty: Keep data private, compliant, and under your control
Expandable: Scale to multi-node clusters as workloads grow
Turnkey Deployment: Ships preloaded with Companion OS and enterprise AI workflows
Cost Efficiency: Avoid recurring GPU rental fees and datacenter bloat