Duration 30:24

SAP+Google+Intel: A Winning Formula for Big Data and Lower TCO (Cloud Next '19)

Tim Allen
Global SAP Alliance Manager at Intel
Google Cloud Next 2019
April 9, 2019, San Francisco, USA

About speakers

Tim Allen
Global SAP Alliance Manager at Intel
Rohit Kamath
SAP Solution Consultant at Google Cloud
Andreas Schuster
Product Manager at SAP SE
Jack Vargas
Product Marketing Engineer at Intel

Tim Allen is a global SAP Alliance manager at Intel Corporation. Tim has held prior roles in product management, business development, and marketing for hardware, software, and services at Intel. Before Intel, Tim held technical sales, engineering, IT, and systems analyst roles at IBM, Sequent, Micro Focus, and Tektronix. Tim holds a BSEE in computer engineering and an MBA in finance. In the community, Tim is a volunteer leader with the BSA.


Andreas is a product manager for the SAP HANA database platform at SAP. He works closely with engineering to guide new features of SAP HANA from concept to launch, bridging the technical and business worlds. His special focus is on innovation topics in hardware and software.


Jack Vargas is the Product Manager for the first generation of Intel® Optane™ DC persistent memory. Since 2016 he has been working to enable and evangelize persistent memory software across the cloud and data center industry. He enjoys being the interface between hardware and software and focuses on Intel's software alliances and developer relations to solve big problems with pmem. Jack is a pizza enthusiast in the city of Portland, Oregon, has a growing collection of geeky dress socks, and tirelessly campaigns to end the 4:3 aspect ratio for slides.


About the talk

SAP has joined forces with Google and Intel to deliver real-time data processing for vastly larger amounts of data at a lower TCO. Now companies can leverage their growing data volumes to impact business outcomes. Learn how Intel Optane DC persistent memory enables Google Compute Engine VMs to have significantly larger memory availability than possible in the past, broadening the range of use cases our customers can implement on Google Cloud Platform. Intel Optane DC persistent memory combines the best features of both dynamic random-access memory (DRAM) and disk-based storage, enabling fast access for larger amounts of data and the ability to preserve data even if there is a power cycle (such as for patching or unexpected downtime).

Transcript

Hey everyone, good afternoon, and thank you for joining us during your lunch, or hopefully skipping your lunch, to check us out here. It's the first time I've presented in a movie theater; too bad we're not doing Avengers: Endgame, that would be a pretty cool session. But we're here to talk about a partnership between SAP, Google, and Intel, specifically around a technology called Intel Optane DC persistent memory. I'm Jack Vargas, one of the product managers at Intel working on this technology. In the next couple of sections you'll be joined by my friends Tim, Andreas,

and Rohit. To begin, a quick review of why we're here and what we're talking about: I'll spend the first section talking a bit about the history of this journey, what the technology is about, and why we're even doing it. Then Tim will come up and talk about SAP HANA, and then Andreas and Rohit will join us for the final section. We plan to have a couple of minutes for Q&A. I know it's kind of dark, but there are some mics up here on the second row, so if you have any questions, just head on down and we'll take them toward the end. Alright, so first I want to talk about this

journey that Intel has been on. We've been working on this technology for the past ten years, and really the last six of that has been deep engineering collaboration with SAP, specifically optimizing SAP HANA to take advantage of this new technology category called persistent memory. The last couple of years have been really exciting for the partners listed up here. Two years ago SAP was the very first company to demo persistent memory live at SAP SAPPHIRE, and then this past fall, during an Intel event, we gave Bart Sano,

who is a VP at Google Cloud, the very first production DIMM of this Intel Optane DC persistent memory, and that kicked off a slew of activities working with SAP and Google partners to start prototyping and running PoCs around this technology. Then, just a week ago, Intel officially launched Intel Optane DC persistent memory for general availability along with our second generation of Intel Xeon Scalable processors. Now why are these companies rallying behind this technology? What's the big deal? What's going

on? It really has to do with data. In the now short span of six years, by 2025 we expect the world to generate 175 zettabytes of data, compared to 33 zettabytes generated last year. Current technologies aren't really equipped to make that data useful to your industries, your customers, and your partners. So let's explore why that is, to give you a fundamental understanding of why this technology is going to change computer architecture as we know it. Here is the kind of classical pyramid diagram of current technologies when it comes to memory

and storage. You may notice there are big gaps between these tiers; in terms of computer architecture, they are oceans apart. At the very top we have memory, where the mainstay is currently DRAM. It is tiny in capacity compared to what you see in solid state devices and can be relatively expensive, but it is really fast; we actually measure latency in nanoseconds in this realm. Then we cross over into the storage realm, where the two main technologies are flash NAND SSDs as well as spinning magnetic disks like hard drives

or even tape. Latency here is measured in microseconds, orders of magnitude different from what we have with memory. In the storage realm the technologies have higher capacities and are fairly affordable in cost per gigabyte, but again, they are orders of magnitude different in performance. This bottleneck has been in computer architecture for quite some time, and software has really just worked around it, optimizing code to deal with it.
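To put rough numbers on those gaps, here is an illustrative sketch; the latency figures are ballpark textbook values I am assuming for illustration, not measurements from the talk.

```python
# Illustrative latency orders of magnitude for the tiers described above.
# These are rough, assumed figures, not numbers quoted by the speakers.
latency_ns = {
    "DRAM": 100,                      # ~100 nanoseconds
    "NVMe flash SSD": 100_000,        # ~100 microseconds
    "Spinning hard disk": 5_000_000,  # ~5 milliseconds
}
for tier, ns in latency_ns.items():
    print(f"{tier:>18}: ~{ns / latency_ns['DRAM']:,.0f}x DRAM latency")
```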

Now, how we're changing this equation is with a new technology called Intel Optane DC persistent memory, really the first mainstream persistent memory technology coming to the marketplace. It is big and affordable, and the data on it can be persistent even after a server reboot or a power loss. We are coming to market with 128 GB, 256 GB, and 512 GB capacities; in my pocket I have one of our 512 GB modules right here. You can have a total of six of these per socket on a second-generation Xeon processor platform, paired with DRAM, so with six of these per socket on a standard two-socket server you get

up to twelve modules. If you take twelve of the 512 GB modules, you have a total of 6 TB of persistent memory available on your platform. I will also mention that since data now rests in the memory domain, security is quite important, so these modules come with standard hardware encryption.
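A quick back-of-the-envelope check of those capacity figures, using the module size and counts quoted above:

```python
# Platform persistent-memory capacity from the figures quoted in the talk.
module_gb = 512            # largest Optane DC persistent memory module
modules_per_socket = 6     # per second-generation Xeon socket
sockets = 2                # standard two-socket server

total_modules = modules_per_socket * sockets        # 12 modules
total_pmem_tb = total_modules * module_gb / 1024    # 6.0 TB of persistent memory
print(total_modules, "modules ->", total_pmem_tb, "TB")
```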

Alright, now I'm going to give you a quick download of how we actually use this technology. There are specifically two operating modes. Think of one as being true persistent memory, and I'll get into the details of how you can use that; the other one is kind of our easy way to take advantage of that large memory footprint, but without the persistency, and we call that Memory Mode. Let me talk about App Direct mode first. With App Direct, the technology is transparent to your operating system and your users; your applications and ISVs, independent software vendors like SAP who understand their data structures very well, will put each data structure in the medium where it makes the most sense. For those data structures that make the most sense in DRAM, or where you need that high bandwidth, they can keep the

data structure there. But where the application can utilize a lot of persistent memory, it can also take advantage of the data being persistent: if I have to do a server patch or something, it is able to come up quickly versus previous generations. I will note that with App Direct there is also a second usage: we can use standard storage APIs, so if you wanted to just use standard block I/O directly from the DIMM itself, we do support that.
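As a minimal sketch of the App Direct load/store model, assuming a persistent-memory region has already been provisioned as a DAX-mounted filesystem (the path, file name, and size below are hypothetical, and real applications such as SAP HANA typically go through PMDK rather than raw mmap):

```python
import mmap
import os

# Hypothetical file on a DAX-mounted persistent-memory filesystem.
PMEM_FILE = "/mnt/pmem0/example_region"
SIZE = 64 * 1024 * 1024  # 64 MiB example region

# Create or extend the backing file, then map it into the address space.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

buf[0:5] = b"hello"   # ordinary loads/stores, no block I/O in the data path
buf.flush()           # flush the mapping so the written data is durable
buf.close()
os.close(fd)
```

The point is simply that, in App Direct mode, the application addresses the modules with loads and stores instead of going through the storage stack; the block-I/O path mentioned above remains available as well.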

Now with Memory Mode, here is how it is utilized: the platform basically consumes the DRAM in your system to act as a really fast write-back cache. From an application standpoint, you just see that 6 TB I mentioned before; you won't see the DRAM behind it acting as a cache. Again, this is a volatile usage and it works out of the box, so if you want 6 TB available as volatile memory, this is the route to go. Hopefully that's a quick 101 on Intel Optane DC persistent memory. I'll turn it over to my friend Tim now.

So, on to SAP HANA, which is the first major DBMS to support Intel Optane DC persistent memory in App Direct mode. It's kind of fitting: my name is Tim Allen, I used to do a little acting, and we're in a movie theater, so it made me think about my prior life. When you think of Intel Optane DC persistent memory, it fills in the blanks that Jack was talking about. As you can see, there's this new layer, persistent memory, that can bridge the gap between storage and memory, and like I said, SAP and Intel have been working

on it for several years, and it finally launched last week. When I look at Intel Optane DC persistent memory support, I look at HANA 2.0 SPS 03, which was released prior to SAPPHIRE last year; it was the first major database to support persistent memory. Andreas will cover the specifics. I'm not going to talk about SPS 04, but obviously we know that's imminently coming; in fact, I'm told it was released last Friday, so this slide could be updated, but the message stays the same.

SAP has supported persistent memory for a long time, even before the product was released. A couple of things to call out: with persistent memory, and this is compared to the last generation, the first generation of Intel Xeon Platinum processors, transactions get up to 3x more capacity, and analytics get up to 6x. Let's get into the specifics of how that works. Historically, Intel has worked with SAP for over a decade on HANA;

each generation we add new features at the Xeon processor level. As an example, about three or four years ago we added a feature called TSX, which improved transaction throughput by almost 500%, and that was on the E7 v3 platform. Fast forward to this year: we released the second generation of the Intel Xeon Platinum processor, and now we support persistent memory at the processor level. I'm going to give you a brief summary now. I know there's not a one-to-one mapping with GCP, but at least it tells you where, at the processor level,

the limitations historically were for transactions. If you look back to the E7 v4, we supported up to three terabytes per processor. With the last generation of the processor we sped everything up by about 60% if you look at some of the SAP BW/4HANA benchmarks; however, we took half of the DIMM slots away, so that's why you see it went down from three terabytes to 1.5. With this one you still have the same number of DIMM slots as you did with the first generation of Intel Xeon

Platinum processor, but now we support up to 4.5 terabytes per processor. If you think about that, it's pretty powerful: on a four-socket configuration you can actually have 18 terabytes.
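One hypothetical DIMM population that reaches those per-socket and four-socket figures; the exact DRAM module size is an assumption for illustration, not something stated in the talk:

```python
# Per-socket memory with second-generation Xeon and Optane persistent memory.
pmem_per_socket_gb = 6 * 512   # six 512 GB persistent memory modules
dram_per_socket_gb = 6 * 256   # six 256 GB DRAM DIMMs (assumed mix)

per_socket_tb = (pmem_per_socket_gb + dram_per_socket_gb) / 1024  # 4.5 TB
four_socket_tb = 4 * per_socket_tb                                # 18 TB
print(per_socket_tb, "TB per socket,", four_socket_tb, "TB on four sockets")
```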

When I think about data management challenges, obviously data always grows, and there is always a cost involved: how do we manage it, what's our operational budget, and how can we get answers and solve problems quicker for a customer to decrease time to value? When I think of persistent memory, I think it solves this heat map issue. Traditionally, as Jack referred to previously, you just had three tiers: DRAM for in-memory, and so on. I have to admit, in another previous life I was a DBA, and even with in-memory you don't store everything in the in-memory database. Data like your quarterly or monthly reports you would likely put in warm data, which is the orange represented here, and the blue data is for your big data stuff that you store in Cloudera or something similar. Now, with persistent memory and this

larger capacity, what you see is an emerging new zone to the left there, the Intel Optane DC persistent memory zone, which has the capacity to store both warm and hot data. Think about what that means at the database layer specifically: the main store, or about 95% of the data, can now be stored in persistent memory. That also applies to new features that were added in HANA 2.0 SPS 03 like extension nodes, because

you now have the opportunity to put persistent memory in both the hot and the warm data tiers. When I also think about persistent memory, we are driving down the cost of compute for HANA. In a historical on-premise scenario, I'm hearing from customers who bought new systems just last year, and they tell me about 60% of the cost of that system was just the memory. We're going to drive down that cost with the DIMMs that were just released, and there are going to be better opportunities for

business continuity, and this is a good place where GCP plays. I'm hearing from some customers about a kind of hybrid scenario where they still host locally in their data center but use replication to do disaster recovery, for instance to Google Cloud. So that's something to think about for your scenario. I already talked a little bit about increased memory capacity. Digging a little deeper, with this increased capacity there's an opportunity not just from a lower DIMM price, or price per gigabyte, perspective:

there are opportunities to consolidate multiple instances. Intel IT in particular is actually in the process of consolidating both our ECC system and another HANA native instance into one, so look for a solution brief discussing that in detail in the next couple of weeks leading up to SAPPHIRE. The other thing I like to think about, and I alluded to this earlier, I call the layer cake, though as you can see this is a piece of cake that has fallen on its side. Now we have

persistent memory: the area in green is where persistent memory falls into place, and that is your main data store. As Jack said previously, you're still going to need some DRAM, but the vast majority of the HANA database, in orange, is where your main memory is going to be stored. It varies per customer, but rough estimates I'm hearing are that between 95 and 96% of your data will now be in persistent memory, and that's going to improve loading and reloading of the database at startup. It can improve

the data load time significantly, going from potentially up to an hour down to minutes. The last thing I want to talk about is business continuity. There are lots of things to consider with business continuity; the first thing I think of, first and foremost, is backup. As you can see, this actually shows an opportunity to introduce persistent memory into your existing environment, where you still have some older systems that just have all DRAM, and now you can do a replication scenario where you

use persistent memory on the right, in the darker orange, with some DRAM, because you now have a bigger footprint. This can be used for backup, and it can be used for disaster recovery at a different location, for example on Google Cloud. The last one is the new use case with extension nodes. Extension nodes are a feature that was added in HANA 2.0 SPS 03. It does not replace dynamic tiering, which still exists, but it's a feature that allows you

to place your warm data into persistent memory, and it has a scaling factor of 8:1. In other words, you can have eight times as much persistent memory, in the dark orange, as you have DRAM, so look to SAP for more details on how to implement that. The nice thing with extension nodes is that it gives you a full HANA instance for your warm data, and that gives you full access to all the PAL libraries to do analytics.
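As a sizing sketch of that 8:1 ratio; the DRAM figure below is a made-up example, not a number from the talk:

```python
# Warm-data capacity on an extension node with the 8:1 ratio mentioned above.
dram_tb = 1.5                   # hypothetical DRAM in the extension node
max_warm_pmem_tb = 8 * dram_tb  # up to eight times as much persistent memory
print(max_warm_pmem_tb, "TB of warm-data capacity")
```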

With that, I'll turn it over to Andreas. Thanks a lot, Tim. So the microphone is working. My name is Andreas; I came all the way from Germany, and in this cooperation between the three companies I'm the product manager for persistent memory in SAP HANA. Before I get started telling you a bit more about how HANA is actually using it and what it means for HANA, let's recap what we just heard. We had Jack at the very beginning introducing the technology itself, Intel Optane DC persistent memory, pretty new and pretty

exciting stuff, and we had Tim speaking about SAP HANA and a couple of use cases that we see and how we can make use of them. But what does that actually mean for SAP HANA, and for SAP HANA on GCP? This is a pretty packed slide, I have to admit, but it's actually a pretty good outline of the use cases that we see and the benefits. Basically, the main benefit is that you can process more data in real time, at basically stable performance, and the TCO is supposed to be

much better for you and for companies, cheaper than traditional DRAM. The one aspect that we haven't really talked about so far is improved business continuity. The main difference between persistent memory and DRAM is not just size and cost; it's also that it is persistent. That means it doesn't lose the data when you shut down the server or the database, which at some point you have to do if you need to do maintenance, to apply a database upgrade or an operating system upgrade

or just to tune parameters. You can imagine what that means for an in-memory database like HANA. All the data is kept in main memory at all times, to be able to process it and present it with the speed and performance that you actually expect and that HANA is promising, but it also means that if you shut down the database, all the data gets deallocated and given back to the operating system, and when the database is coming back up it needs to reload that from the persistence layer, which in the

case of SAP HANA is just plain storage. Now, SAP is positioning HANA in a big data environment, for large amounts of data at terabyte scale, and as you can imagine, no matter how fast the storage actually is, it takes time to load six terabytes of data from something like an SSD, even a very fast one, into main memory; it just takes a considerable amount of time.
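As a rough illustration of why that matters, here is a back-of-the-envelope estimate; the storage throughput is an assumed figure, not one from the talk:

```python
# Time to stream a terabyte-scale data set from storage into memory.
data_tb = 6
storage_gb_per_s = 2.0  # assumed sustained read throughput of a fast SSD setup

load_minutes = data_tb * 1024 / storage_gb_per_s / 60
print(f"~{load_minutes:.0f} minutes to load {data_tb} TB")  # ~51 minutes
```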

What we were able to achieve in this area is an improvement by a factor of 12.5. This is based on a proof of concept that we did before SAPPHIRE last year: we were able to reduce the startup time, including the data loading phase, from 50 minutes to just 4 minutes. That means your database is basically going to be back much faster after the maintenance phase than it was before. And of course, as a result of the long-standing collaboration, we were able to announce that we are the first major database that fully supports App Direct mode. We touched briefly on the two operating modes of persistent memory before, Memory Mode and App Direct mode;

you can run any application in Memory Mode, but to take full advantage of the technology, including the persistency aspect, it takes App Direct mode. We spent significant effort in our engineering department to enable HANA for that technology, and we got great assistance from Intel, by the way. So, summarizing that again: you can reduce TCO with an up to four times increase in density, from 128 GB to 512 GB DIMMs.

These are the ones that are available right now. You also have the potential to simplify your landscape. We saw a couple of examples before: you can make use of that capacity by putting more data on one server, or just install two databases on one physical server or one virtual machine, and this can of course also be virtualized. In some cases it allows you to avoid a scale-out scenario, so instead you logically combine two servers into one.

If it's two servers, then in terms of operating it, it becomes more complex. What is also a pretty good value proposition is the aspect of data tiering. Persistent memory is not positioned as a data tiering technology, and it would not be fair to call it that, because the performance is not that of warm data: you don't have any performance difference if you keep data there. But it certainly has an influence on how you do data tiering, or whether you have to do data tiering in the first

place. Let's assume about 50% of your database size is warm data that you could actually move somewhere else. By just using DIMMs that are twice as large, you can keep that data in your hot tier, where you get the performance of hot data. And, what I think is even more important, you don't need to go through that whole project of identifying which data is warm, and you don't need to move the data; you can just leave it as it is. It is simpler from an operations perspective, and you also

don't need to add an additional system like dynamic tiering or a second node for an external warm data tier. I explained the business continuity aspect already. That 12.5x improvement in restart, or rather data loading time, is a lab example, but we have actually proved it many times during the last year. We've done a couple of proofs of concept with customers, on real customer systems, and sometimes we've even seen

higher factors, up to a factor of 21. It depends a little bit on the actual size of the database, because the larger the system is, the more time you save by not having to load data, and the higher the factor becomes. On Google Cloud, we've been collaborating for a good part of the last year, and today Google is actually the first public cloud provider to offer virtual machines that use persistent memory. The instances in GCP's offering are 96 vCPU, 7-terabyte virtual machines: 7

terabytes of combined DRAM and persistent memory, of which 80% is actually Optane persistent memory, a DRAM-to-persistent-memory ratio of roughly 1:4. This brings you a higher overall capacity at a lower cost, or at the same cost as a smaller instance, if you compare it to a system with just DRAM.
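Breaking down that 7 TB instance, assuming exactly 80% of the memory is Optane persistent memory:

```python
# Memory composition of the 7 TB GCP instance described above.
total_tb = 7.0
pmem_tb = 0.8 * total_tb        # ~5.6 TB Optane persistent memory
dram_tb = total_tb - pmem_tb    # ~1.4 TB DRAM
print(pmem_tb, "TB pmem,", dram_tb, "TB DRAM, ratio", pmem_tb / dram_tb)  # ~4:1
```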

Okay, so I want to put a couple more numbers behind the promise we make in terms of restart, and also show you how HANA actually works and is designed, and what the startup time is composed of. There are two components that are pretty stable, like the HANA kernel startup and shutdown; these stay pretty much the same no matter what data size you have, and whether you use DRAM or persistent memory, so there's not too much you can improve there. But what is really, really important is the time it takes until your database is back for normal operations: the time

until you can use your database in the same way and with the same performance as you could before you shut it down. That is the set of bars on the right. With persistent memory, in this case, it took 23 seconds, compared to the DRAM-only system, which roughly corresponds to the factor that we've seen before, and this difference will of course increase the larger the systems get. HANA is available in sizes of up to 24 terabytes for a single instance, so we can expect even larger improvements if you are running a system at

that size. This slide is very similar to what Tim was describing: this is how the update, or planned downtime, of a typical SAP system actually looks. There's typically a phase at the very beginning to prepare the application, to make it ready and tell it that the database is going to be shut down. That takes a couple of minutes, so not too long. Then there's a pretty large downtime, about 45 minutes here, for the maintenance tasks that you need to execute on your database. The duration of that of course varies, but 45 minutes or so is pretty

typical for something like an operating system upgrade or a database upgrade. It's also time during which you cannot use your system, and this is just what it takes to get ready. Then the interesting part is when you start the host again. The upper bar shows the system with Optane persistent memory: just 4 minutes after you restart the database, it is back with the same performance as it had before. Compare that to how it looks with DRAM in this case.

The data being loaded is of course prioritized, so the most important data is loaded first and the system can come back as fast as possible. In this case it took 34 minutes to get the most important tables, about two and a half terabytes, ready so that your system can go back to work, and if you're curious, the time it takes for the full five terabytes is even longer, in this case almost an hour. You can imagine what that actually means for your system.
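Putting the downtime numbers from this example together; the preparation time is an assumed round figure, the other values are the ones quoted above:

```python
# Planned-downtime comparison for the example above.
prep_min = 5              # assumed application preparation phase
maintenance_min = 45      # OS / database upgrade window
reload_pmem_min = 4       # back to full performance with persistent memory
reload_dram_min = 34      # most important ~2.5 TB reloaded from storage (DRAM-only)

print("persistent memory:", prep_min + maintenance_min + reload_pmem_min, "min")
print("DRAM-only (partial reload):", prep_min + maintenance_min + reload_dram_min, "min")
```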

With that, I'd like to hand over to Rohit, who will tell you a little bit more about the roadmap. Thank you, Andreas. Sure, just to give you guys a quick overview of Google's journey in providing system sizes: Google has been providing all of these systems as virtual machines so far. You'll see on the roadmap that when we started off back in 2017 it was our first HANA system with 208 GB of memory, and we've come a long way. If you look at Q4 2018, we were able to release up to a 4-terabyte single-node

machine that's available now. And now, with the partnership between SAP, Google, and Intel, we're able to release the first 7 TB system running on Optane memory. Further down the roadmap, a couple of months down the road, we will be releasing the 6-terabyte and the 12-terabyte virtual machines, again single-node, that will be able to run in single-node scale-up mode, and then around the end of the year we will be releasing the 18-terabyte VM. So to conclude: today, thanks to

Intel and SAP, we were able to introduce you folks to the Intel Optane DC persistent memory based HANA system that will be available on Google Cloud. We have more information available, and you can always contact Tim, Andreas, and Jack with further questions.
