Duration 48:08

From Blobs to Tables: Where and How to Store Your Stuff (Cloud Next '19)

Google Cloud Next 2019
April 9, 2019, San Francisco, United States

About speakers

Gabriela Ferrara, Developer Advocate at Google
Gabi is a Developer Advocate at Google Cloud and a passionate software engineer. She likes simplifying complex systems, and believes abstractions are best when they can be understood through a real-life example. She's driven to go beyond DBA lingo to make database and storage technology more accessible to software developers.

Dave Nettleton, Director, Product Management at Google
A product management leader with a track record of delivering customer-focused products, from creating the vision through to ensuring customer and business success. I have a proven ability to learn new markets and technologies, understand customer needs, and deliver innovative products. People are an organization's greatest asset, and I focus on mentoring, building teams, and working across groups. Key strengths / expertise:
  • Understanding customer needs and product innovation.
  • Data/analytics products and cloud services.
  • Building high-performing teams.
  • Product evangelism in executive briefings, at events, and with industry press/analysts.
  • Track record in individual contributor and management roles, in small and large organizations.
Products: Google Cloud Platform, Microsoft Azure, SQL Server

Alok Pareek, Founder/EVP Products at Striim
I am a co-founder of Striim, an Intel-backed company that built STRiiM, a next-generation software infrastructure platform to address real-time data management challenges for fast, in-flight data. STRiiM helps enterprise Data/BI/EDW-Hadoop/Kafka teams with data ingestion, big data & cloud integration, operational intelligence, and in-flight data visualization. I am responsible for overall product development & strategy.

Codin Pora, Director of Partner Technology at Striim
Highly motivated and passionate entrepreneur with a strong technical background and a particular affinity for all things IoT. 10+ years of software engineering experience, from transactional data management to cutting-edge, beautifully designed consumer mobile applications. The last several years were spent founding and leading multiple startups. Most attracted by fast-paced projects where design, architecture, and attention to detail have a real positive impact on users' day-to-day lives. Specialties: team creation and organization, taking ideas to product, UI, UX and scalable architectures, mobile applications, real-time analytics, and developing robust enterprise-grade IoT platforms.

Tobias Ternstrom, Director, Product Management at Amazon Web Services
Product management.

About the talk

Google Cloud Platform offers many options for storing your data. From storage to databases to data warehousing, data is the foundation of any enterprise and any application. In this session, we’ll talk about what options exist, the strengths and tradeoffs of each choice for various workloads, and demo the different experiences across services when storing your first bytes.

Transcript

I'm going to give you a whirlwind overview of all of the different storage products that we have on Google Cloud Platform and give you some pointers as to what to think about when storing all the different types of data that you might have. To kick this off, and as we progress through the presentation, I'll invite some of my colleagues onto the stage. It's a sort of journey through time: I'll give a little bit of context on the different types of storage that have existed over the last 40 or 50 years and where we are today, then we'll go through some of our storage products in more detail, and then our database products as well. Alright, so first of all, a brief history of storage and databases. Let's start with storage. If you go back far enough, or just look under your desk, you'll find the original way storage was attached to a computer: it was just physically attached to the CPU. For the desktop machines that you have, and for the computers that came out long ago, you just bought a physical box, and your application, file system, and storage were all attached together as a single thing.

Storage was what was called direct-attached storage, just physically attached to the machine. That obviously became quite inefficient as the amount of data grew and the amount of CPU resources grew, and the idea of storage over a network evolved into two main categories: storage area networks and network-attached storage. In the first, storage area networks, the main idea is that the network separates the storage from a set of machines. You're effectively shipping blocks of storage, and the file system is local to the machine which is running the application. The reason this was important is that it allowed you to operate on locally-owned files on your compute and send large blocks of storage over the network, communicating in 8K blocks, 16K blocks, or 64K blocks. You pass big blocks of storage back and forward to the underlying storage infrastructure and then operate your application over the file system. That's pretty good for database-type applications in particular, which wanted to be able to do a lot of storage management locally and then hand off big blocks of storage back to the storage system.

So this was the era of SANs, storage area networks. The second category is network-attached storage, which is your typical kind of file storage or file share accessed over the network. The application sits on a set of machines, you have a network, and the file system sits remote; that remote file system deals with things like locking of files and remote access for the various systems that you have. More recently, object storage started to come to fruition. This was for a class of storage that was typically really, really large unstructured blobs, binary large objects of data, where you wanted to store a really large amount of data and just have very simple get/put kinds of operations on it, for which the semantics of file systems were too rich, and often too costly. This class of storage really only took off in the cloud, but there are plenty of on-premise object storage options available as well.

So that's a brief history of the storage systems that are out there, and Google Cloud Platform has an equivalent product for each of them: local SSD for direct-attached storage, Persistent Disk for block storage, Cloud Filestore for file storage, and Cloud Storage for object storage. On the database side, the history was actually pretty simple for 30 or 40 years. There were two or three major database vendors with proprietary OLTP systems capturing transactional records, and when you wanted to run queries and reports on them, you really didn't want to run a large-scale query against your transactional system. So you took all your data off in batches, ran it through a process called ETL, extract-transform-load, and stored it in a specific system called the data warehouse, where you modeled the data for the types of questions you knew you wanted to answer, using star schemas, snowflake schemas, and things like that. So transactional records were captured in a transactional-scale database system, the data was then ETL'd off into a data warehouse for analytics and reporting, and for 30 or 40 years that was the state of the art.

Then, around the time the internet emerged and everything got connected, the amount of data that could be collected really ballooned, both in volume and in variety. No longer were transactional records the only source of interesting data that organizations might want to capture: event data, stream data, clickstream and analytics data that is just really inefficient and not cost-effective to store in a transactional database. That data started getting dropped into basically unstructured log storage; folks just dropped lots of data onto cheap storage systems and then had this huge question of how to mine that data. So this is the ecosystem that came to fruition about 15 to 20 years ago, where you wanted to be able to capture huge amounts of data and then run jobs over that data that refine it in place and find the interesting insights, which you'd often then put into a data warehouse to run analytics on. So that's what happened on the analytics side. On the database side, two other very interesting things happened.

There was a need to store really wide tables, with thousands of different columns, beyond what relational database management systems at the time could handle, and wide-column databases emerged to support that, along with simple key-value lookup stores. Another one in this NoSQL category is the document database. You might be working with a JSON document: it has a structure, but not something that maps well onto a transactional system, where you more traditionally store, say, customers and orders with nice strong relationships between them, with foreign keys and other relational constructs. So, as a way to manage and support that kind of data, so-called NoSQL databases made a number of trade-offs in terms of the semantics that you got over the data. Traditional relational database management systems give you relational semantics: you have tables, relations between them, foreign keys, transactional consistency, the ACID properties of transactions. There's a very strong mathematical model behind those, which gives you very powerful constructs, but typically within an effectively scale-up architecture.

NoSQL databases were effectively willing to give up a couple of things in order to get the scale and the performance that they needed. In particular, they originally offered eventually consistent models that much more easily let you scale and get availability, but at the cost of the consistency of your data. More recently, scalable relational database management systems have come to market that have tackled some of those fundamental issues, this balance between scale and availability, and have brought some really interesting technology to bear. I think if we look forward 10 or 20 years, we'll see a number of these solutions sort of coalescing back together again, certainly in terms of the semantics that they provide on the backend, while still providing a really easy interface to deal with things like documents, etc. So that's a brief overview of the history of storage and databases.

We have a great set of offerings for each of those different classes of databases: Cloud SQL with MySQL and PostgreSQL, BigQuery for data warehousing, Cloud Dataproc and Cloud Storage for the large-scale Hadoop data lake use cases, Bigtable and Firestore for NoSQL, and Cloud Spanner as our scalable relational database management system. So that's where we are in the industry right now, and a summary of all the different products across SQL, data warehousing, and object, block, and file storage. Now let's go through some of the storage products in a little bit more detail, starting with Google Cloud Storage. If you have really large amounts of data to store, media content, or you want to run a large-scale data lake, Google Cloud Storage is the product for you. It's a fully managed object storage service. It has a set of different tiers, letting you tier your data from Standard through Nearline and Coldline, giving you progressively cheaper cost per gigabyte to store, and it gives you a single API to access all of that data.

By default, all of the data is encrypted. Any time you write data to it, anyone who's reading can immediately see that data: we have strong consistency on the metadata, which is a really big deal if you're building a data lake on the platform. It's a super powerful product. In terms of the main use cases, we see it used for content storage and delivery, for example Spotify storing huge amounts of media content and streaming it around the world; for data lakes, where customers store many hundreds of petabytes of data and run large-scale analytics jobs in the cloud over that data; and for data protection, backing up and archiving data to the cloud.
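For readers following along, here is a minimal sketch of that flow with the google-cloud-storage Python client: create a bucket with a storage class and location, upload an object, and read it back through the same API. The bucket name, location, and file names are placeholders, not values from the talk.

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client()  # uses Application Default Credentials

# Create a Coldline bucket in a chosen location (names are placeholders).
bucket = storage.Bucket(client, name="my-coldline-bucket-example")
bucket.storage_class = "COLDLINE"              # STANDARD, NEARLINE, COLDLINE
bucket = client.create_bucket(bucket, location="asia-southeast1")

# Upload an object; the API is the same regardless of storage class.
blob = bucket.blob("photos/puppy.jpg")
blob.upload_from_filename("puppy.jpg")

# Read it back; the write is immediately visible (strong consistency).
print(len(blob.download_as_bytes()), "bytes")
```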

Next, Persistent Disk. This is our block storage product. The classic use case here: if you're spinning up VMs and you want to attach a disk to them, then Persistent Disk is the product you'd use. For more advanced use cases, if you're running your own databases, Persistent Disk is what you'd attach for that as well; the same goes for general enterprise applications, and it's also used for some big compute applications that want to run on regular VMs. We have two tiers of Persistent Disk: standard, and a high-performance tier backed by SSD. One really nice feature of this product is that it's completely separate from the compute instances that you provision. You can use any compute instance to provision a disk; if you decide to increase the size of your disk, you can do that dynamically, and if you decide to change your compute instance, you can do that dynamically too. These things are decoupled, so you get lots of flexibility in building out your instances, your VMs, and also in sizing your disks.

Another great feature: any one disk can scale to 64 terabytes. So if you thought you'd need to stripe disks for better performance, you don't need to; we take care of all of that automatically under the covers. If you provision a 64-terabyte disk, we spread those writes out across hundreds or thousands of physical disks so you get really good, consistent performance for your application. Another cool feature we announced recently is scheduled snapshots, so you can take snapshots of those disks on a schedule. And a feature that's just coming through beta right now is regional persistent disk: if you're building a highly available database application, it gives you an active-active disk that's synchronously replicated between two zones. So I might have one VM in one zone and another VM in a second zone, and I can have a shared disk that is synchronously replicated between those two zones. That's a super key feature of Persistent Disk.
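As a rough illustration of that decoupling, here is a sketch using the google-cloud-compute Python client to create a zonal SSD persistent disk independently of any VM and later grow it in place. The project, zone, and disk names are placeholders, and the exact client calls should be checked against the library version you use.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"   # placeholders
disks = compute_v1.DisksClient()

# Create a 100 GB SSD persistent disk, independent of any VM.
disk = compute_v1.Disk()
disk.name = "demo-pd"
disk.size_gb = 100
disk.type_ = f"zones/{zone}/diskTypes/pd-ssd"
disks.insert(project=project, zone=zone, disk_resource=disk).result()

# Later, grow the same disk in place -- no striping or re-provisioning.
resize = compute_v1.DisksResizeRequest(size_gb=500)
disks.resize(project=project, zone=zone, disk="demo-pd",
             disks_resize_request_resource=resize).result()
```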

Next is Cloud Filestore. You might just want regular file storage, a file share over the network, and we can do that with Cloud Filestore. It's a classic product: many existing applications just expect file storage, and Filestore makes it available in the cloud. So you see it used for a lot of lift-and-shift applications and content management applications, again with low latency, flexible scale, and large capacity. Then we see customers who are shutting down data centers or moving media content archives. The Schmidt Ocean Institute, for example, is gathering oceanographic data on ships out in the ocean and needs a way to capture that data. For this we provide the Transfer Appliance, which is a rackable form factor that we ship to you: you rack it in your data center, bring power and network and plug it in, load it up with up to a petabyte of compressed data, and then ship it back to us. As I mentioned, we see a lot of customers doing this for really large-scale migrations of data to the cloud. All right, so that was a whistle-stop tour of the storage products. Let me ask Gabi to come up on stage and give you a quick demonstration of one of them. Thank you, Gabi.

Okay, I'll be creating a bucket first. Dave mentioned that we have Standard, Nearline, and Coldline storage classes for Google Cloud Storage, and I have a bucket in Coldline; retrieval from it usually isn't as fast as it would be from Standard. To create a bucket with any of those storage classes, you just give it a name, choose which class you want here, and pick the location. Location is important: you want to keep it as close to your applications as possible, but that may not be a hard requirement for you because of a feature that I'm going to show next. So I'm going to create this one as multi-regional, and I'm going to show you one picture that's going to make everybody go "aww", which is my puppy. Rather than wait for this bucket, I'm going to go to the one I created earlier with Coldline storage. I already have a picture here, which is public, and I'm going to upload a copy of the same picture. And it's uploaded, it's there, even though it's Coldline, and you can see here that I put it in Asia.

With these buckets you can actually get really low latency too, because everything stays inside Google's network; it's mostly one hop to get the data that you need to your application. So when you hit an API, or the storage API, you're going to see that it took only one hop from this server to the storage API, because we have over a hundred points of presence in the world, and you always hop to the nearest location to you. So how does that translate to the image that I just uploaded to Coldline? If you try to get that image, the copy I just uploaded, it takes almost one second to retrieve it. And if you try it again, it's also almost one second, because it isn't cached and the image isn't available publicly. However, if you try it with the public image that I showed you, which is also here in Coldline, it takes 0.019 seconds to get the image.

That's because when you go from private to public, you take advantage of the Google network, where that image is also available at the edge. It's in Asia, so if you ask for it from here, let's see how long it takes: a bit longer, 302 milliseconds, but once you do it again it's cached and it drops to 0.016 seconds to access it.
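A tiny sketch of the kind of timing check being done here, using Python's requests library against a public object URL (the URL is a placeholder): the second request is typically faster once the edge cache holds the object.

```python
# pip install requests
import time
import requests

# Placeholder URL for a public object in a Cloud Storage bucket.
url = "https://storage.googleapis.com/my-coldline-bucket-example/photos/puppy.jpg"

for attempt in (1, 2):
    start = time.perf_counter()
    resp = requests.get(url)
    elapsed = time.perf_counter() - start
    print(f"attempt {attempt}: HTTP {resp.status_code}, "
          f"{len(resp.content)} bytes in {elapsed:.3f}s")
```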

And with this, I'm going to hand back to Dave.

Thank you, Gabi. That was just a quick tour of the storage products, with a nice demo showing that when you use Google Cloud Storage you don't just get access to storing the data in our data centers in the cloud; you get access to our global network to bring that data to you, or to your users, wherever they may reside around the world, which is an incredibly powerful way to store and manage content with us. Let me pass over to my colleague, who will take us through the very interesting databases part of the talk.

Hello everyone. My name is Tobias Ternstrom and I lead product management for databases in Google Cloud. I'm very happy to be here.

I'm extra happy because they let me come on stage even though the talk also covers storage, and everything about storage is Dave's; the databases are really my part. When I think about our database portfolio, there are three main things that we want to accomplish for you. First, as you would expect, a bunch of databases are used widely today, and we want to make sure that we provide those databases as managed services in Google Cloud. The second one is that there are some problems that are tricky when it comes to databases; again, Dave doesn't think they're tricky, but at least I think they are, so we want to make sure that we solve those problems for you. One good example is Cloud Bigtable, Firestore, or Spanner providing truly scalable services, with different trade-offs depending on what you're trying to achieve. And then the other thing that we think is very important is to provide an integrated experience.

The work you have to do to load or glue these services together, like what Dave mentioned earlier, is something we want to minimize over time, so we make sure that the services actually talk to each other in a reasonable way. Now, there are tons of storage options and database options, and generally, when working with databases and storage, it's super important to spend some time looking at what your use case really is and what's really valuable to you. One thing that I at least find helpful is to think about the value of your data. You can think of that as a spectrum from super high value to lower value. You might ask what value means for data; I'd relate it to actionability: the more actionable your data is, the more valuable it is. It also tends to be that high-value data is not okay to lose, whereas for less valuable data you might be okay with, for example, lower durability or slightly longer latency, something like that.

An example would be an order: you want to track an order, or a customer. That's a very good example of super high-value data; it's super actionable, we need to ship something, we need to send an invoice, and so on and so forth. Whereas maybe you're collecting IoT information from some devices: some of that data might be actionable, but a lot of the data you collect may not be, yet you want to keep it around, because maybe in the future, as you do analysis of this data, or machine learning, and so on, it becomes valuable, and then you move it up the value chain. So that's something to keep in mind, but overall, when building applications, it's super important to be pragmatic and look at what your key requirements really are. Now, in terms of the database products that we have: Cloud SQL. Cloud SQL is our fully managed database service; today we offer MySQL and PostgreSQL, and maybe down the road something else that I can't talk about yet. Let me be very clear: MySQL and PostgreSQL as managed database services are super strategic to Google Cloud.

We know lots of you use them and they're super important to you, so we have to make sure that we offer a great experience for these managed services. You use them for lots of things, and there are lots of examples where something like MySQL or PostgreSQL is a much better solution than something like Cloud Spanner from us as well; there are trade-offs between all of our products. In the same family we offer Cloud Memorystore, which is fully managed Redis, for caching-type solutions. Again, it takes away the mundane tasks of managing, patching, installing, and keeping track of the data. So I would like to invite Gabi back on stage to give us a demo of using Cloud SQL.

Thanks. When you go to the SQL tab in the Google Cloud console, you have the option to create an instance, choosing MySQL or PostgreSQL, and when you deploy, you can select your zone.

One of the things that I find most amazing about Cloud SQL is the ability to choose the size of machine that you want, up to 64 cores, and if you go to 64 cores you get up to 416 gigabytes of memory, plus the number of connections that you get on your database, using SSD. And one feature that we have, that I haven't seen elsewhere yet, is automatic storage increase when your disk fills up. You can start with 10 gigabytes of storage, or about 50, or a hundred. And if you put it at a hundred, you'll query faster than with 10 or 50 gigabytes, because the disk performance scales with its size, so it's not just about how much you can store but also how fast you're going to query that data and how quickly you get responses from the data on the disk. To create the instance, you just click create and it gets created. While it's being created, I want to show you an application. Let me see here: this is a Python application, it's really small, using Flask, and I'm going to show it to you. It's running first on the local machine, with everything running against a local Postgres. Let me see, I think this is the right IP address.

I'm sorry about this. Okay, not working; that was supposed to show the app. Demo gods, why did you break the internet? So let's try to make it work on Cloud SQL instead. With Cloud SQL we have a thing called the Cloud SQL Proxy that allows you to connect from anywhere without having to worry about whitelisting IP addresses and that kind of security concern. So I'm going to start my proxy, because once you do that, it's a transparent connection: your machine is going to think that it's connected to localhost, but it's actually connected to the Cloud SQL instance. Right now my proxy is ready for new connections, and I'm going to start the application again. I'm starting it here; let me get the IP address, maybe I had the wrong one before. It is working now, and it's running against Cloud SQL right now. And as you saw, I didn't change anything in the connection string, because it still points at localhost, as you can see here.

Yet it's connected to Cloud SQL, in the cloud; you just use the Cloud SQL Proxy. And if you need to change permissions, who can or cannot connect to your database, you just use IAM to change them. With that, I want to hand back to Tobias.

Thank you, Gabi. If you use Cloud SQL, the proxy is super practical; it makes it super easy to switch between your local environment, which people use on their laptops, and Cloud SQL.
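Here is a minimal sketch of that pattern, assuming PostgreSQL, Flask, and psycopg2: the application keeps a localhost connection string and the Cloud SQL Proxy (v1 command-line syntax shown in the comment) forwards it to the instance. The instance connection name, database, and credentials are placeholders.

```python
# Run the Cloud SQL Proxy locally first (v1 syntax, instance name is a placeholder):
#   ./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
#
# pip install flask psycopg2-binary
import psycopg2
from flask import Flask

app = Flask(__name__)

def get_conn():
    # The app still "thinks" it talks to localhost; the proxy forwards the
    # connection to the Cloud SQL instance over an encrypted tunnel.
    return psycopg2.connect(
        host="127.0.0.1", port=5432,
        dbname="demo", user="demo_user", password="demo_password")

@app.route("/")
def index():
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        return cur.fetchone()[0]

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```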

Okay, so let's take a look at some of our other database products. Cloud Spanner, as Dave mentioned, is our scalable relational database, meaning that you can run it on one or many nodes, and it separates compute from storage and allows you to scale them completely independently. So for example, if you have a Spanner database set up, and again it's relational, regular tables, rows, columns, things like this, and you need more computing power, you can tell the system "I'm going to go from one node to a hundred nodes" and we scale you up, completely online. Now you're running on a hundred nodes, and you obviously have a lot higher throughput with a hundred nodes; your data doesn't need to move around, because the hundred nodes are, again, separate from your storage. You can run through the spike, and as soon as you don't need the capacity anymore, you scale back down to whatever number of nodes you need in order to run your workload. There is a trade-off, not in terms of price, but in terms of performance: generally a scale-out database doesn't give you as low a latency as, say, a PostgreSQL instance that is completely local to your machine, where the data is closer to the compute.

So this is one of those examples where you have to think about what is super important to you, and it also depends. Spanner gives you a couple of interesting features. One of them is that you can scale your data either within a region or across the world, and when you choose to replicate that data with Cloud Spanner, all of the data is always consistent, meaning that when you read the data, you know that whatever you read is the absolute latest committed, up-to-date data that we have. There are some things you can do to tune this, but as you can expect, if I use something like Cloud Spanner on a global instance, there's some penalty in terms of latency, since we have to make sure that it's consistent across the world. So instead of talking single-digit milliseconds, you may be talking something like a hundred milliseconds, depending on the use case.

Eventual consistency, on the other hand, is one of those things that tends to create bugs in applications: whenever you go down the eventual-consistency path, your app had better be smart enough to handle that sometimes the data you see isn't necessarily the latest and most up to date. Again, it comes at a cost, and you have to look at what's more important to you, maybe lower latency and higher throughput. And Spanner is relational, so if your data by definition doesn't fit the relational mold, it's probably not the right pick for you. But it is able to run a database from small up to literally petabytes in size, and from hundreds of queries per second up to many millions of queries per second, and again, with dynamic scaling, you make sure you pay for what you need in terms of compute.
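A short sketch with the google-cloud-spanner Python client of the two things just described: a strongly consistent read, and scaling the node count independently of storage. The instance and database names are placeholders, and the resize is a long-running operation the sketch simply waits on.

```python
# pip install google-cloud-spanner
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("demo-instance")     # placeholder names
database = instance.database("demo-db")

# Strongly consistent read: whatever we read is the latest committed data.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT 1"):
        print(row)

# Scale compute independently of storage: change the node count online.
instance.reload()
instance.node_count = 3
instance.update().result()   # long-running operation; the data does not move
```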

Now contrast that with Cloud Bigtable. Cloud Bigtable is our NoSQL key-value, wide-column store. Bigtable has instead focused on super low latency, and we currently have in beta global replication for Cloud Bigtable, meaning you can have replication between Bigtable clusters across the world. And this is a big difference between these two products: if I write a bunch of data to Cloud Bigtable, it's consistent in the cluster, as we call it, that I write to, whatever zone that cluster is in. But if I write to a cluster in North America and at the same time go and read from a cluster in Asia, I'm not necessarily going to see exactly the same data; there is some latency in the replication in between, whereas with something like Spanner it would be exactly the same data. And again, it's a trade-off made in favor of high availability.
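To make that model concrete, here is a small sketch with the google-cloud-bigtable Python client that writes one row and reads it back from the cluster it was written to. The project, instance, table, and column family names are placeholders that would need to exist already.

```python
# pip install google-cloud-bigtable
from google.cloud import bigtable

# Placeholders: the instance, table, and column family must already exist.
client = bigtable.Client(project="my-project")
table = client.instance("demo-instance").table("orders")

# Write one row; the write is consistent within the cluster we wrote to.
row = table.direct_row(b"order#0001")
row.set_cell("cf1", b"status", b"shipped")
row.commit()

# Read it back from the same cluster.
read_back = table.read_row(b"order#0001")
print(read_back.cells["cf1"][b"status"][0].value)
# A replicated cluster in another region sees this write only after
# replication catches up (eventual consistency between clusters).
```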

Cloud Spanner offers five nines of availability, but it depends on the Google network staying connected: all of the information for Cloud Bigtable, Cloud Spanner, and the next product I'm going to show, Cloud Firestore, flows over Google's backbone. But let's say, for example, someone cuts the cable to one of the regions, and it's the region which manages and owns the writes; even though when you read, everything is consistent all over the world, there is one region that owns the writes. In this case, Cloud Spanner wouldn't allow you to read or write from the non-primary regions, because the data could then become inconsistent. Whereas Bigtable, being eventually consistent, would allow you to keep writing, and at some point, when someone connects the cable again or whatever happened is resolved, things synchronize; you may have merge conflicts and things like that to deal with, potentially, depending on the app, but everything continues to move. These are fundamental decisions that you have to make, and it's important to know that even within one application you may want to make different trade-offs: you may want to use something like Bigtable for some parts of the application and something like Cloud Spanner for other parts. Like Dave mentioned, for example, the metadata for Cloud Storage is stored in Spanner to make sure that it's fully consistent.

Then we offer Cloud Firestore. Cloud Firestore is our NoSQL document store, and Firestore is optimized for a completely consistent view of the data: whatever you get back from the store is completely consistent. You can run it either within a single region or across multiple regions; within a single region we offer four nines of availability, and multi-region, same as Spanner, we offer five nines. Again, it's a document store, so if what you're fundamentally building fits the document storage model, then Cloud Firestore is a good choice. The other thing that's interesting with Firestore is that it's completely serverless. Cloud Bigtable and Cloud Spanner both offer you completely elastic scaling up and down, but you still have to tell the system "give me more nodes", "give me more CPU", or "I don't need this CPU anymore, please take it back". Firestore scales completely independently: you don't have to tell the system what you need, it scales with your workload.
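A minimal sketch of that with the google-cloud-firestore Python client: no capacity to provision, just write a document and read it back. The collection and field names are made up for illustration.

```python
# pip install google-cloud-firestore
from google.cloud import firestore

db = firestore.Client()   # nothing to provision; capacity follows the workload

# Store a document (collection and field names are just an example).
db.collection("orders").document("order-0001").set({
    "customer": "ada",
    "status": "shipped",
    "total": 42.5,
})

# Read it back; Firestore returns a strongly consistent view of the data.
snapshot = db.collection("orders").document("order-0001").get()
print(snapshot.to_dict())
```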

BigQuery, again, is our data warehouse. BigQuery is similar to Firestore in the sense that it's also a serverless model, meaning you don't have to provision compute: you use the data, you pay for storage, and you pay for the resources your queries consume. BigQuery also offers near-infinite scale, where you can store more or less any data size. Depending on your use case, this may very well be the right product for you: the more analytical your use case, the more interested you'll be in BigQuery.
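As a sketch of that serverless model with the google-cloud-bigquery Python client, here is an analytical query against a public dataset; nothing has to be provisioned up front.

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()   # serverless: no cluster to size or provision

# An analytical query over a public dataset.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```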

Again, it's very common to see people using all of these database products together: maybe Cloud SQL with MySQL for one thing, Spanner for another, Bigtable for another, and BigQuery for analytics. With that, I'd like to invite Alok and Codin up on stage to give us a demo of Spanner and their company, Striim. Striim is a strategic partner of ours that does replication and migration of data into Google Cloud. Thank you.

Thank you, Tobias. Today I'm going to show you a demonstration of how, now that you have these wonderful endpoints on Google Cloud, you actually use them: how do you move your data into them? I'm going to talk about how we move real-time data from your applications, from an on-premise Oracle database, into Cloud Spanner. Before I get into the demo, just a little bit about Striim. Striim is a next-generation platform that helps in three solution categories: cloud adoption, hybrid cloud data integration, and in-memory stream processing. Today I'm going to be focusing on the cloud adoption piece, specifically how to move data into Spanner. So with that, let's jump into the demo. What you see on the screen is the landing page; I'm going to keep this moving pretty fast. We're going to step into the apps part of the demo, and that's where the data pipelines are defined that help you move the data from on-premise to Spanner.

In this case, what you're seeing is two pipelines. One of them is meant to do an initial load, an instantiation of your existing data onto the Spanner tables, and the other one is meant to catch up the changes. While you're actually moving the data, you might have very large tables, for example, or massive volumes, and you want to not lose any data, with all of the consistency properties you heard about from Tobias earlier. It's also important that while you're moving the data you don't have disruption to your applications and your business. So let's step into the pipeline here. It's simple: at the top you have a database source, which in this case is Oracle running on premise. So there's a connection to this Oracle database; it has a line items table, where we're going to show you a movement of about a hundred thousand records, and it also has an orders table, where we're going to show you the change data capture.

The way this application is constructed is by using the components on the left side of the UI, in the flow designer: you drag and drop one of these components into the pipeline, and that's how you construct your data flow. We can also step into the Spanner target definition, which holds your service account and the connectivity configuration for this application, or pipeline. This is where you can see that I can actually run it, and the Striim platform can run either on-premises or in Google Cloud. There's nothing in the tables on the Spanner side yet; let's go ahead and execute a query against the line items table, and you're seeing that there are no records there, and you can take my word that there are a hundred thousand records on the Oracle side. In the interest of time we'll assume that, and let's go ahead and run the application. In the preview in the lower part of the screen you can now see the records running live; this is where we're reading the data and applying it into Cloud Spanner.

You can see that we have completed the hundred thousand records, and it was pretty fast. This morning I did a million records, and I was holding my breath, but that was pretty fast as well.
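The demo verifies the load by querying the Spanner table; as a small sketch, a row-count check with the Python client might look like this, with the instance, database, and table names assumed rather than taken from the demo.

```python
# pip install google-cloud-spanner
from google.cloud import spanner

# Placeholders assumed for instance, database, and table names.
database = spanner.Client().instance("demo-instance").database("demo-db")

with database.snapshot() as snapshot:
    rows = list(snapshot.execute_sql("SELECT COUNT(*) FROM line_items"))
    print("rows loaded:", rows[0][0])
```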

As I mentioned, the second phase is the change data capture. While the initial load query is consistent as of a specific snapshot at Oracle, there is also DML activity hitting the application. So this is the second pipeline; let's step into pipeline number two, which is already deployed. In this case we use a special reader that operates against the redo logs of the Oracle database and monitors them, so it doesn't have any impact on the production system; it's not running queries against the production tables, it's reading from the redo logs. We then reapply that as DML, as inserts, updates, and so forth, on the Cloud Spanner side. So let's go ahead and run this application. We're going to generate some DML using a data generator. There we go. Let's run the generator, and you'll see it issues a number of inserts, updates, and deletes against the orders table. Now let's switch over to the Cloud Spanner system and query the orders table there. As you can see, the data in the orders table has also just been propagated. So that's the two-phased, very fast way you get data from your own databases to Cloud Spanner, and of course this can work against the other databases that we support as well. This is available on Google Cloud. Thank you.

Thank you. Good to know that we can get the data into the databases as well. So, in conclusion: Dave gave us a history of both storage and databases, and hopefully you got a sense that we have a lot of different storage and database products in Google Cloud and that they very much target different use cases. Depending on what your primary use case is, you should evaluate them and find the right tool for the job. As Dave mentioned, for 30 years or so there was basically one way to do it: the relational database. I would say that's the hammer of database management, and you can use a hammer for almost everything, right? But some things are going to be tricky with a hammer, so also evaluate using a saw or a screwdriver or something like that; a screwdriver is very bad for cutting down trees. So always spend the time to evaluate the solutions and see which one actually works best for you and your specific use case. This is generally what takes quite some time, especially figuring out what is most critical to you.

Generally the hardest question to answer is: what is more critical, availability, for example, or consistency? These are the things you have to look at. It's also important to look at what you need today, what you're going to need over time, and how easy it is to transition between different databases and storage options. That's again one thing that we think is super important for us to provide you: a seamless experience to actually move between and use these products together, because we know things change: businesses change, applications change, priorities change, and what is the right product for the job today may not be the right product for you tomorrow. Thank you very much for coming, and we'll stick around here to answer any questions you may have.
