BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:144@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20250912T113000
DTEND;TZID=Asia/Kolkata:20250912T123000
DTSTAMP:20250910T022538Z
URL:https://cds.iisc.ac.in/events/seminar-cds-102-september-12th-1130-arch
 itectural-divergence-in-cloud-computing-a-comparative-analysis-of-cpu-cent
 ric-and-ai-centric-paradigm/
SUMMARY:{Seminar} @ CDS: #102\, September 12th\, 11:30: "Architectural Div
 ergence in Cloud Computing: A Comparative Analysis of CPU-Centric and AI-C
 entric Paradigms."
DESCRIPTION:Department of Computational and Data Sciences\nDepartment Semin
 ar\n\n\n\nSpeaker: Mr. Xavier Kurian from Neysa.ai\nTitle: "Architectural 
 Divergence in Cloud Computing: A Comparative Analysis of CPU-Centric and A
 I-Centric Paradigms"\nDate & Time: September 12th\, 2025 (Friday)\, 11
 :30 AM\nVenue: # 102\, CDS Seminar Hall\n\n\n\nABSTRACT\n\n“Architectura
 l Divergence in Cloud Computing: A Comparative Analysis of CPU-Centric and
  AI-Centric Paradigms — Insights from Xavier Kurian\, CRO\, Neysa.ai”\
 n\nThe architectural paradigms underlying CPU-centric and AI (GPU-centric)
  clouds represent fundamentally opposing approaches to resource management
  and workload execution. Traditional CPU-based clouds are designed to disa
 ggregate compute\, memory\, and I/O into smaller virtualized units optimiz
 ed for diverse\, concurrent\, and latency-sensitive applications. In contr
 ast\, AI clouds must aggregate large numbers of GPUs into tightly coupled 
 clusters that behave as unified computational entities—an approach remin
 iscent of high-performance computing (HPC) systems\, with similar demands 
 on interconnect bandwidth\, I/O throughput\, and memory coherence.\n\nThis
  work presents a comparative analysis of CPU and AI cloud infrastructures 
 across key dimensions: cluster and container orchestration\, disk and memo
 ry I/O characteristics\, and the broader implications for datacenter archi
 tecture. We examine the divergent scheduling models\, data flow patterns\,
  and hardware requirements that underpin each system. Additionally\, we as
 sess the energy and thermal implications of both architectures\, consideri
 ng power delivery\, cooling technologies\, and physical layout constraints
 .\n\nThrough this comparison\, we aim to highlight the architectural and o
 perational shifts necessary to support large-scale AI workloads in cloud e
 nvironments.\n\nBIO: Xavier Kurian currently serves as the Chief Revenue O
 fficer (CRO) at Neysa\, an AI Acceleration Cloud System provider. A season
 ed thought leader with deep expertise in digital infrastructure and the en
 terprise market\, Mr. Kurian is pivotal in guiding businesses to effortles
 sly accelerate their AI adoption journey. In his role at Neysa\, he spearh
 eads revenue growth\, market expansion\, and customer success\, focusing o
 n delivering substantial value through Neysa's cutting-edge AI cloud solut
 ions.\n\nWith a career spanning over 24 years in the IT sector\, Mr. Kuria
 n has a rich background in solution architecture\, presales\, and strategi
 c alliances. Before Neysa\, he had a notable 14-year tenure at Dell Techno
 logies\, where he served in key leadership capacities\, including Direct
 or of Solution Architects & Presales for India and Director of Solutio
 ns & Alliances. His earlier career also includes roles at Trend Micro 
 as a Product Specialist and at Sun Microsystems as an Enterprise IT Arch
 itect.\n\nHost Faculty: Prof. Venkatesh Babu Radhakrishnan/Dr. Anirban Ch
 akraborty\n\n\n\nALL ARE WELCOME
CATEGORIES:Events,Talks
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:19700101T000000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR