Tools for Migrating Your Databases to Google Cloud (Cloud Next '19)

Edward Bell
Solutions Architect at Striim
Google Cloud Next 2019
April 9, 2019, San Francisco, United States
Duration: 32:28

About speakers

Edward Bell
Solutions Architect at Striim
Alok Pareek
Founder/EVP Products at Striim
Codin Pora
Director of Partner Technology at Striim
Yair Weinberger
Co-Founder & CTO at Alooma

I am a co-founder of Striim, an Intel-backed company that's built STRiiM, a next-generation software infrastructure platform to address real-time data management challenges for fast, in-flight data. STRiiM helps enterprise Data/BI/EDW-Hadoop/Kafka teams with data ingestion, big data & cloud integration, operational intelligence & in-flight data visualization. I am responsible for overall product development & strategy.


Highly motivated and passionate entrepreneur with a strong technical background, and a particular affinity for all things IoT. 10+ years of software engineering experience, from transactional data management to cutting-edge, beautifully designed consumer mobile applications. Last several years spent founding and leading multiple startups. Most attracted by fast-paced projects where design, architecture and attention to detail have a real positive impact on users' day-to-day lives. Specialties: team creation and organization, taking ideas to product, UI, UX and scalable architectures, mobile applications, real-time analytics and developing robust enterprise-grade IoT platforms.


Passionate entrepreneur · Provide simple and viable solutions to complicated problems · Hire the best people and make them even better.


About the talk

The cloud is one of the biggest catalysts for database migration, and enterprises make the move using a variety of strategies. Some choose a lift-and-shift approach to take advantage of the fully managed services of GCP, and some choose a complete rebuild to move off legacy databases like Oracle and take full advantage of cloud-native databases. In this session, we showcase best practices for database migration and introduce tools that can simplify the migration process.


So I think by now I don't need to convince anyone that moving to the cloud is useful and important. The challenge many, many enterprises face today is how to do it. Usually we have this huge infrastructure of servers and thousands of different applications, and in order to understand how to move them, we need to identify what we want to move first: what is going to be the most useful, what is going to minimize disruption to our current applications, and, most importantly, how we can get there as fast as possible.

In general we talk about three different steps in a migration. First is assessment: understanding what difficulties we are going to face when we migrate, what the expected downtime is, and what we should communicate to our users. Then we do the actual migration. And after we finish migrating, we tune our applications to fit better to the cloud and to GCP. In the first step we even decide what we need to move, and what we can even move, because sometimes not everything can just be lifted and shifted; we are going to talk about this in a minute. What dependencies do we have? Is everything going to be shifted as is, or are we going to modernize something? What is important for the application? Do we have specific performance requirements that might not be met once we are in the cloud? Do we have latency requirements? Essentially that step tells us what the right path is. In terms of tooling, this is probably the part that is the most difficult today; it is the part with the least amount of tooling. We are going to show one partner that has built an assessment tool for migrating a specific type of application, but in general this is where we rely on knowledge, either from the engineers inside the company or from our own professional services organization, to understand what the impact of the migration is going to be. And in general we have different migration types; we need to understand how exactly we are going to migrate.

Probably the easiest migration type is the first one: retiring the application. If we don't need it any more, we just shut it down, and the migration is done. Most applications don't follow this pattern, though, and we actually need to do something more complex. We can also replace an application with a new vendor's offering; that's usually pretty easy: we first move to that vendor and then retire the old application. But when we actually need to move an application ourselves, we have a lot of options. The least disruptive option, but usually also the least useful, is just to rehost it: we take our servers as is and migrate those servers into the cloud. About six months ago Google acquired Velostrata, a company that does VM migration, and there are other Google partners that do that as well.

Then we start getting into the realm of actually changing something in the application when moving to the cloud. Again, this is usually more disruptive but also more useful. We can revise our application: maybe just move the VMs, but also maybe containerize some of them to be able to run them on a container engine, so just a little bit of change. Or we can actually split the application into different modules, where some modules move to new stuff: say we take our database, which we're going to talk about at length, and move it to Cloud SQL, and then we take the other part of the application and just move the VMs as is. And then there are the things that are usually hard to do: a complete rebuild, or completely rewriting the application to fit the cloud. Where we are going to focus in this session is on the refactor area, where we want to take some of our footprint, mostly the databases, and move it to cloud-native tools, while some of our front ends we'll probably just keep as is; they'll talk in the same way, with the same APIs, to the new databases in the cloud. And once we're done, of course, we are going to optimize: for cost, for speed, for latency. But that's only once we're done moving.

With GCP we have spent a lot of time and effort to make migrations better. There are the fundamentals that make the cloud itself better: the scale, the network performance that lets you actually transfer data at great speed, the service levels and the SLAs. Then there are the migration capabilities themselves: migration features, migration tools, and migration partners. And eventually, you're not alone in this: we have a very good professional services organization and support that will help you throughout the process. As I mentioned, it's usually during the assessment phase and the pre-migration steps that this organization is most important, compared to the tooling that exists today, but of course it's also very useful in the migration itself and in the tuning, because those are the people who actually know which tools are best and how to use them in the cloud. Let's talk about the different database migration paths.

What actually works very well today is the cloud-compatible path. Cloud SQL works for both MySQL and Postgres; Memorystore is API-compatible with Redis; and Bigtable migrations are likewise API-compatible. Depending on your application's dependencies, refactoring should be pretty easy, as you will not need to change any APIs or any drivers in your application: it works as is, exactly the same way it worked against your on-premises databases. So a very common migration path, and the one we are going to cover extensively in this session, is to move the databases themselves to Cloud SQL or to Memorystore, and then move the application, probably using VM migration; then you don't need to change anything in the application itself or in the drivers.
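
Because the path is API-compatible, "no driver changes" concretely means that only the connection endpoint moves. Below is a minimal sketch, assuming a Python application on the standard psycopg2 Postgres driver; the host addresses and credentials are placeholders.

```python
import psycopg2  # the application keeps the exact same driver

# Before: on-premises Postgres (placeholder address).
# conn = psycopg2.connect(host="db.internal.example.com", dbname="appdb",
#                         user="app", password="...")

# After: Cloud SQL for Postgres. Only the endpoint changes; queries,
# ORM code, and the driver stay exactly the same.
conn = psycopg2.connect(host="10.1.2.3", dbname="appdb",
                        user="app", password="...")

with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```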

Then there are the more interesting, but not as easy, migration paths, where we want to modernize a database. This usually happens for one of maybe two reasons. One reason: we need more scale, we need better availability; our MySQL instance can no longer fit the bill, and what we need is no longer achievable for our data. That's why we've built Spanner, a low-latency, highly available, scalable database system in the cloud. And it's very similar: most of the things you're able to do with MySQL and Postgres, most of your queries, will just shift as is. There will be some work on understanding what is not exactly compatible and then moving. The second reason: I have tools that I've been using for many years now and I want to modernize them. I'm using Oracle, for example, or Cassandra, and I want to move to Cloud SQL or to Cloud Bigtable. Usually the main reason we see for moving Oracle to Postgres in the cloud is to reduce cost. And with the Cassandra migrations we see, sometimes at a certain size it just becomes very difficult to manage your own Cassandra cluster and understand the production issues that happen, and moving to Bigtable usually takes away a lot of that load. Again, the tooling here is not as advanced, but we do have a lot of guidelines, built from a lot of migration experience, that will take you step by step through that kind of database migration.

As some of you may have heard, Alooma was acquired by Google Cloud; it actually became official last week. Alooma has been focused on data migration from the beginning, five years ago, and inside Google, myself and the entire Alooma team are going to be focused on delivering a great database migration experience and expanding the current database migration capabilities. On the other side, we don't think we can provide everything ourselves, and we don't need to do everything ourselves; that's why we have partnered with great companies like Striim, whose demo you'll see today, and the assessment tool migVisor, which helps with that process as well. So between the partner ecosystem, the in-house tooling we've built, and your own engineers, the goal is to make the migration as easy as possible.

Let's see some customer examples. Evernote moved over five billion user notes to GCP in about two months. Another customer migrated 200,000 websites, with their databases, from on-premises to Cloud SQL for MySQL in a couple of weeks. Another moved from Oracle to Spanner and saw a significant increase in database performance. These migrations are actually useful. You probably ask yourself: why would I enter such a painful migration project? It might take me a long time. Usually, it's worth it. Sometimes it's not as painful as you think it's going to be; existing tools can make it easier, and you can get it done in two weeks or a month. And sometimes the performance impact or the cost impact is so great that it's simply worth it anyway.

So with that, let's focus a little bit on migration into Cloud SQL and how to use these tools to do the migration, hopefully with minimal downtime and minimal effort. In general the recommendation is, of course, whenever possible, take the simple approach, and the simple approach is just a downtime migration. Say I'm doing a like-for-like MySQL migration: I have MySQL on-premises and I want to move it to the same engine in the cloud. Usually I would tell my users: hey, there's going to be a downtime, and that downtime is a function of the database size. Stop writing to the source database; perform a dump, with mysqldump or whatever, to move the schema and the data; spin up a new instance in the cloud; load the data into the cloud; switch DNS or redirect connections to the new server; and that's it, we're done. We re-enable writes, re-enable all of the clients against the new database. It's very simple. Unfortunately, it's not always possible to take downtime.
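
As a rough sketch of that dump-and-load flow, assuming the mysqldump, gsutil, and gcloud CLIs are installed and authenticated, with hypothetical host, bucket, and instance names:

```python
import subprocess

# Hypothetical names; substitute your own source host, database,
# Cloud Storage bucket, and Cloud SQL instance.
SOURCE_HOST = "onprem-db.example.com"
DB_NAME = "appdb"
BUCKET = "gs://my-migration-bucket"
INSTANCE = "my-cloudsql-instance"

# 1. With writes stopped, dump schema and data from the source.
#    --single-transaction takes a consistent snapshot of InnoDB tables;
#    -p prompts for the password interactively.
subprocess.run(
    f"mysqldump --host={SOURCE_HOST} --user=migrator -p "
    f"--single-transaction --databases {DB_NAME} > dump.sql",
    shell=True, check=True)

# 2. Stage the dump in Cloud Storage, where Cloud SQL can read it.
subprocess.run(["gsutil", "cp", "dump.sql", f"{BUCKET}/dump.sql"], check=True)

# 3. Import the dump into the Cloud SQL instance.
subprocess.run(["gcloud", "sql", "import", "sql", INSTANCE,
                f"{BUCKET}/dump.sql", f"--database={DB_NAME}"], check=True)

# 4. Verify the data, switch DNS / connection strings to the new
#    instance, and re-enable client writes.
```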

That can be for one of a few reasons. One reason can be that the database is just too big, so the downtime would be too long. One approach, again, is to apply some optimizations to reduce the downtime, and we actually have some great real-life tips for that. When importing the data, disable all foreign keys and all secondary indexes, so the database will not validate against them; there is no need to do that validation, because you already know you're moving an entire, consistent dataset. And when you import the data, import it in primary-key order; that usually has a very significant impact on the import time. In terms of the machine we're importing into: if the available RAM is larger than the dataset, the import will be much faster, and using SSD storage or high IOPS also greatly reduces downtime and increases migration performance during the import process itself. And of course, monitor how fast the storage grows, how fast the I/O is, whether there are read operations or paging happening, to understand why the import is taking time and how to reduce it. So these are great tips, but often they're still not good enough: often we still cannot take downtime, or the downtime will still be too long. If you have a one-terabyte or multi-terabyte MySQL instance, you just cannot take the downtime.
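
The first two tips translate to standard MySQL session switches. A minimal sketch, assuming the mysql-connector-python driver; the host and credentials are placeholders and the bulk load itself is elided:

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder target host and credentials.
conn = mysql.connector.connect(host="10.1.2.3", user="migrator",
                               password="...", database="appdb")
cur = conn.cursor()

# The dump is already a consistent snapshot, so skip per-row
# constraint validation during the bulk load.
cur.execute("SET SESSION foreign_key_checks = 0")
cur.execute("SET SESSION unique_checks = 0")

# ... bulk-load the data here, ideally in primary-key order so the
# engine appends to the clustered index instead of splitting pages ...

# Restore validation once the load completes.
cur.execute("SET SESSION unique_checks = 1")
cur.execute("SET SESSION foreign_key_checks = 1")
conn.commit()
```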

Or if you have a business-critical application, you need to keep writing; you cannot take downtime at all. That's why we have an option for minimizing the downtime when migrating; this is essentially zero downtime, and it's already available for MySQL-to-MySQL migrations in GCP. I'm not sure if you've ever seen the button that says migrate data, but this will actually do the migration with very low or zero downtime. The way it's done: our application is still reading and writing against the existing database. The process spins up a new instance and starts replication, binlog replication, running between your source on-premises and the database in the cloud, as if it were just another replica. At some point the data between the source and the target matches, and then we can pause writes; we barely take any downtime, maybe a minute or a few minutes: pause writes to the existing instance, move to the new instance, and resume. Because we had replication running, the data will be exactly the same, so we can move with minimal downtime. From our experience, minimal-downtime migration is sometimes the make-or-break for "will I ever migrate my application to the cloud, will I ever migrate my database to the cloud?" For the MySQL-to-MySQL use case, that option is already available.

We'll go in and show it real quick. Let's say I have a data source. I point at the address, username, and password of the source database; I choose the instance type I'm going to migrate to, tuned according to the tips we've shown earlier; and I point at a dump file, from a mysqldump I ran at some point in the near past, that I've put in Google Cloud Storage. From that dump file I'm creating a replica. After a few minutes the replica will be created and will have its own IP, and once it is created we can actually start the replication. All you need to do in your source database is give permission; once you've given the replica permission to connect, you'll see it start catching up. You can then monitor the CPU, the utilization, the storage. For this fairly big dataset it took about 30 minutes to migrate; the video is fast-forwarded to two minutes, but you can see the storage filling up, and then the latency being measured. The latency essentially tells you the difference between the current replica and the original server, in terms of how far behind it is. At the beginning you see it growing and growing; at some point the dump load is done and we start catching up on the backlog, catching up on the binlog. Once we've read all of the binlog from the source, we'll see the latency go down to zero. At this point you can click the button you see there, promote replica. Before we promote the replica, we stop writes to the source database; we're not writing anymore. We click promote replica; as you saw, that process takes about 30 seconds, after which it is no longer replicating from the primary server, it is the primary server. Once it is promoted, we can point the writes from the application directly to that replica. And that's it, we're done: a near-zero-downtime migration from MySQL to MySQL. I suppose we can go back to the deck.
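
The replication lag shown in the demo's UI can also be watched from any MySQL client. A sketch, assuming the mysql-connector-python driver and a MySQL 5.x-era replica where SHOW SLAVE STATUS reports Seconds_Behind_Master; the address is a placeholder:

```python
import time
import mysql.connector  # pip install mysql-connector-python

# Placeholder address and credentials for the replica.
replica = mysql.connector.connect(host="10.1.2.3", user="monitor",
                                  password="...")
cur = replica.cursor(dictionary=True)

while True:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    lag = status["Seconds_Behind_Master"]
    print(f"replication lag: {lag}s")
    if lag == 0:
        # In sync: pause writes on the source, promote the replica,
        # then repoint the application at the promoted instance.
        break
    time.sleep(10)
```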

Great. So after the migration, once the application is running, we start to tune it. We monitor the performance: do we have too many writes, do we have too many applications running against the database? To improve performance we can change the instance type if we need more CPU or more memory, and we can turn on binary logging in order to create our own replicas. Now that we again have MySQL instances in the cloud, we can create read replicas once binary logging is on for our Cloud SQL instance. And eventually we're ready to go: we have our application running against the database in the cloud. At this point we can keep running the application VMs on-premises, or take the application VMs and migrate them into the cloud as well, fully moving our application to the cloud. So this was the like-for-like MySQL-to-MySQL migration, and that's great.
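
For reference, enabling binary logging and creating a read replica are each a single gcloud command. A sketch with hypothetical instance names; the flags shown were current around the time of this talk, so verify them against your gcloud version:

```python
import subprocess

# Hypothetical instance names.
PRIMARY = "my-cloudsql-primary"
REPLICA = "my-cloudsql-replica"

# Binary logging must be enabled on the primary before replicas
# can be created (this restarts the instance).
subprocess.run(["gcloud", "sql", "instances", "patch", PRIMARY,
                "--enable-bin-log"], check=True)

# Create a read replica that follows the primary.
subprocess.run(["gcloud", "sql", "instances", "create", REPLICA,
                f"--master-instance-name={PRIMARY}"], check=True)
```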

But what happens if we want to do a more complex migration? Let's say we want to migrate Oracle to Postgres: we have a big Oracle instance on-premises, and we want to modernize, to move to Postgres in the cloud. Here the assessment is very important, and this tool, migVisor, provides you with information on how difficult it will be to migrate your Oracle instance to Postgres. You point it at the source and choose the target; in this case I've chosen Oracle to Postgres, and I have five or six databases. What's nice about this tool: it doesn't change the data, it just runs a set of read-only checks on the schemas and the current usage, to understand, for example: are you using very specific Oracle synonyms that don't exist in Postgres, and what do we need to do about it? Are you using PL/SQL extensively, so that you would need to convert it to PL/pgSQL? Eventually each of your databases gets a score for how complex it will be to migrate, which is the bar chart you can see here. The ones with the higher score are more complex migrations; the ones with the lowest score should migrate very easily, because you're not using anything specific or proprietary to Oracle. You can even see why. The resolution here is not good enough, I guess, but you can go deeper and ask: this one is very complex, why is it so complex? And you will see the specific features in use that are not compatible with Postgres, or that will require manual intervention in order to move to Postgres. You can even see which users are using those features, so you can say: my CRM app, maybe I'll keep it on Oracle for now because it uses some very specific features, but my billing app I might be able to move to Postgres. And the last thing it will do: it analyzes whether your Oracle usage is more of an analytics, OLAP type, or an OLTP, transactional type. If it's an analytics usage type, you might be better served migrating that specific application into BigQuery instead of Postgres. So it will tell you: hey, this application is running mostly OLAP queries, consider not moving it to Postgres but into a data warehouse designed for analytics, BigQuery; and these applications are doing mostly OLTP, mostly transactional load, so it's better to go to Postgres. It currently supports assessment of migrations from Oracle to Cloud SQL Postgres and MySQL, and coming soon are also migrations into Spanner.
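
A small taste of the inventory such an assessment automates can be reproduced by hand against Oracle's data dictionary. The following is a hand-rolled sketch, not migVisor itself, assuming the cx_Oracle driver and a user permitted to read DBA_OBJECTS; connection details are placeholders:

```python
import cx_Oracle  # pip install cx_Oracle

# Placeholder credentials and DSN.
conn = cx_Oracle.connect("assessor", "...", "onprem-db.example.com/ORCL")
cur = conn.cursor()

# Count PL/SQL and other Oracle-specific objects: the more of these an
# application uses, the more manual conversion a Postgres move needs.
cur.execute("""
    SELECT object_type, COUNT(*)
    FROM dba_objects
    WHERE object_type IN ('PACKAGE', 'PROCEDURE', 'FUNCTION',
                          'TRIGGER', 'SYNONYM', 'MATERIALIZED VIEW')
    GROUP BY object_type
    ORDER BY object_type""")
for object_type, count in cur:
    print(f"{object_type:20s} {count}")
```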

And with that, I want to call on Codin and Alok to show a demo of the actual migration with Striim. So: we finished the assessment, we decided we want to do the migration, we want to move from Oracle to Postgres, and one way of doing it is with Striim.

Thank you, and welcome, everybody. My name is Alok and I'm from Striim. I feel like the brain surgeon brought in after all the complexity has been diagnosed, so I'll get into the picture and tell you how to do the heterogeneous migration from Oracle to Postgres; that's my demo today. Just want to make sure my clicker is working here; let's go to the next slide. Before I get into the demo: Striim is a next-generation platform, and we focus on three different solution categories: cloud adoption, hybrid cloud data integration, and in-memory stream processing. The focus today is on cloud adoption, and I'm delighted to announce our partnership with Google on cloud adoption, especially in the area of database migration.

What I'm going to focus on today is the Oracle to Cloud SQL Postgres migration, and as was pointed out, downtime is a big problem, especially for mission-critical applications. You might have a number of different applications, from your CRM, billing, payments, core banking, et cetera, and these might not necessarily be able to take any kind of outage, but you still want to move ahead with your initiatives for cloud modernization or operational analytics. You might also have data on other clouds that you want to synchronize with newer applications you're deploying on Google Cloud. Those are all the benefits; of course, the critical problem here is the downtime part, so let's get into the demo and I will actually show you how Striim helps you do that. At this point I'd like Codin to help me with the demo.

This is the landing page of the Striim product. There are three parts here: there's a dashboards space; there are the apps, apps being the pipelines, so you'll hear me use apps and data pipelines interchangeably; and there's also where you connect to your sources before you get started. We're going to jump into the apps part of the demo; what you see here are pre-built pipelines. Remember, this is a zero-downtime type of scenario, so there are two critical phases. Phase one is instantiation: that's where you make a one-time copy of all of the data, which could be gigabytes, perhaps terabytes, or it could be less. That is followed by a catch-up or synchronization phase, which we achieve through some very specialized readers that help you catch the target up to the source, and ultimately the two sides are going to be in sync.

The initialization phase is a simple pipeline going from an on-premises Oracle database into a Cloud SQL Postgres database. The pipeline was built using a flow designer, by choosing a number of out-of-the-box components where you configure your sources and your targets. You can also transform the data along the way, which may be interesting because the data formats might be different, or you may want to realign some of the data as you go.

Let's step into the configuration of the Oracle database. Here's where you configure all of the different properties, and it's pretty flexible in terms of how you want to move the data. On the cloud side, here's where you provide your service account for the connection to the Postgres database. In this case we're going to move two separate tables for the purpose of the demo: we're going to attempt a live migration of a million records of the line-item table, and then in phase two we'll do some DML activity, with change data capture, for the orders table.

So let's go ahead and deploy the application, in the cloud or on-premises, it's up to you; it's pretty flexible. As we deploy, you're going to see that there's actually a deliberate mismatch that we introduced. You heard earlier about the assessment checks, so it's very important, before you actually do the migration, that the source and target schemas are compatible and there are no exceptions. Here there's one incompatible data type, so we go ahead and skip this specific column; we actually fix it in another flow. So we're going to jump into that initial load flow. Once it's deployed, it's ready to get started. You want to check first whether the tables are already empty: in this case, in the line-item table, you can see that the count is zero, and now we're going to run the application. That's the one that actually goes to the critical database, which is Oracle on-premises. As it runs we'll also preview the data along the way. You can see now that data is beginning to flow; there are a million records in the line-item table, and you can monitor the application's progress. You can see that so far we have 400,000 on the input, and the output tracks the input as this changes. Things like batch optimization, parallelism, event guarantees, and delivery semantics are all taking place behind the scenes, so the data is consistent when you actually complete the migration. Now I think we're done; why don't you take a look at the Postgres database to make sure. And there you go: your million-record table was just moved, pretty quickly; the performance is very impressive.

With that, remember this could be gigabytes or terabytes of data, so what you want to do is capture, during this time, all of the data that is actually changing in the source database. This is your actual activity, and it is moved using a technique called change data capture. Striim has a specialized reader for this, so let's step into that. Here's where the reader properties are; in the interest of time we can go a little faster. It's the same configuration for the target, obviously. In this case, what we do is take activity from the redo log of the Oracle database, and we effectively replay that on the target side to catch it up, thereby avoiding any kind of outage on the production database. We have a mismatch here again; it's more of a precision mismatch in this case, and I'll ignore it because I already know that all my data values fit within the target size. So we go ahead and ignore that, and we want to move the table; here again the preview option is selected. Once it's started, we generate some DML activity using a separate simulator; this is where you're going to see some inserts, updates, and deletes. You can see that it's running a number of DML operations against the orders table, and you can see that we're capturing not only the data but also the metadata: things from the Oracle database like the transaction ID, the system change number, all of the metadata of the table, and the operation type, insert, update, or delete. You can filter on things like this. And then you can finally log into the orders table and see that the rows are in fact present in the table; it was empty before. So this is the technique of instantiation followed by change data capture, which allows you to move your on-premises database into the cloud database at your own leisure. You can test both sides while the migration is ongoing. It may take a day or a week or potentially months, depending on the size of the database, but the key benefits are there: there's no outage, and you can keep operations continuous for your mission-critical applications.
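
Striim's pipelines are configured in its own flow designer, but the two-phase pattern demonstrated here, a one-time instantiation followed by replay of captured changes, can be outlined in code. The following is illustrative Python against hypothetical source and target interfaces, not Striim's actual API:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """One captured operation from the source's redo log / binlog."""
    txn_id: str   # transaction ID, as shown in the demo's metadata view
    scn: int      # system change number (log position)
    op: str       # "INSERT" | "UPDATE" | "DELETE"
    table: str
    row: dict

def migrate(source, target):
    """Phase 1: one-time instantiation. Phase 2: CDC catch-up."""
    # Record the log position *before* the bulk copy, so changes made
    # while the copy runs are not lost; they are replayed in phase 2.
    start_pos = source.current_log_position()
    for table in source.tables():            # phase 1: bulk copy
        target.bulk_load(table, source.read_all(table))

    # Phase 2: tail the change log from the recorded position and
    # replay every operation, in order, on the target.
    for change in source.read_changes(since=start_pos):
        target.apply(change)
        if source.in_sync_with(target):      # lag is zero
            break
    # Cutover: pause writes on the source, drain the last changes,
    # then point the application at the target. No bulk outage needed.
```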
