About the talk
Ian Ferriera and John Curran from Core Scientific discuss the differences between traditional enterprise data centers and data centers built specifically for AI workloads.
Hi, I'm Ian Ferriera, Chief Product Officer at Core Scientific, a company that operates high-end data centers focused on blockchain and artificial intelligence.

The first and foremost difference is around power density. If you look at a traditional server rack in an enterprise, you're drawing down one to two kilowatts; in our racks you're drawing down ten kilowatts, so our facilities typically have a lot more power. Our newest facility is 830 megawatts, which, compared to an average data center of around 5 to 25 megawatts, is a pretty substantial difference. There are also practical aspects: a DGX-2 is over 400 pounds, so raised data center floors become slightly less viable. People trying to put the new hardware for these new capabilities into existing data centers find they've got a lot to do.

If you look at blockchain on the proof-of-work side, it has the same power density requirements as I spoke about on the AI side, so that's really the foundation of the company. But we do have other crossovers. For example, we use AI models as part of our intelligent infrastructure to help optimize the operational parameters of that infrastructure, trying to get the most ROI for customers on their capital investment. We also do workload optimization, figuring out what the right workload to run on your infrastructure at any given time is.

Core Scientific is one of nine DGX-certified data centers globally, which means we are certified to maintain and manage the capabilities and requirements of a DGX. Practically speaking, that allows us to offer both colocation services, for folks who want to move to new infrastructure like the DGX but don't want to retrofit their existing data centers, and a cloud service to customers.

What we're seeing is really the graduation of projects that started as POCs or skunkworks and are starting to get operationalized, and we see that showing up in two places. On the training side, we see longer-running duty cycles instead of the kind of burst-mode utilization: people are training models consistently, and we've seen customers that have outgrown their cloud and on-premise infrastructure. Then on the inference side, we see a lot more attention being paid to the cost to serve. There are two aspects: your egress fees and then your compute costs. We're seeing customers starting to look at FPGA solutions for AI to help drive down the price-performance curve. The other thing we're really starting to see, because we're operating high-end AI data centers for our customers, is some of the more advanced workloads really moving into mainstream adoption, things like computer vision, et cetera. We're also seeing a number of growing trends where…
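The rack-density figures in the talk (one to two kilowatts for a traditional enterprise rack versus around ten for an AI rack) lend themselves to a quick back-of-the-envelope comparison. Below is a minimal sketch; the `racks_supported` helper is illustrative, not from the talk, and it assumes the facility's full power is available to IT load, ignoring cooling and PUE overhead:

```python
# Back-of-the-envelope rack-count sketch based on the figures in the talk.
# Simplifying assumption (not from the talk): all facility power goes to
# IT load, i.e. cooling/PUE overhead is ignored.

def racks_supported(facility_mw: float, kw_per_rack: float) -> int:
    """How many racks a facility can power at a given per-rack draw."""
    return int(facility_mw * 1000 // kw_per_rack)

# An "average" 25 MW data center, per the talk's range of 5-25 MW:
print(racks_supported(25, 2))   # enterprise-density racks (~2 kW each)
print(racks_supported(25, 10))  # AI-density racks (~10 kW each)
```

The same facility powers five times fewer racks at AI density, which is why purpose-built facilities emphasize total power rather than floor space.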
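The inference-side framing above (cost to serve = egress fees plus compute cost) can be sketched as a simple model. All rates below are made-up illustrative numbers, not quotes from the talk or any provider, and `cost_to_serve` is a hypothetical helper:

```python
# Hedged sketch of the "cost to serve" framing from the talk:
# total cost = data egress fees + compute cost.
# The rates used in the example are invented for illustration only.

def cost_to_serve(gb_egressed: float, egress_rate_per_gb: float,
                  gpu_hours: float, rate_per_gpu_hour: float) -> float:
    """Cost to serve a model over some period: egress plus compute."""
    return gb_egressed * egress_rate_per_gb + gpu_hours * rate_per_gpu_hour

# Example: 10 TB of egress at a notional $0.09/GB, plus 720 GPU-hours
# (one GPU running for a 30-day month) at a notional $3/hour.
monthly = cost_to_serve(10_000, 0.09, 720, 3.0)
print(round(monthly, 2))  # ≈ 3060.0
```

A model like this makes the trade-off in the talk concrete: alternative accelerators such as FPGAs attack the compute term, while moving inference near the data attacks the egress term.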