Duration 40:49

Develop, Deploy, and Debug Using Google Cloud Developer Tools (Cloud Next '19)

Nikhil Kaul
Product Marketing Manager - Developer Tools at Google
Google Cloud Next 2019
April 9, 2019, San Francisco, United States

About speakers

Nikhil Kaul
Product Marketing Manager - Developer Tools at Google
Russell Wolf
Product Manager - Google Cloud Developer Tools at Google

Nikhil is part of the DevOps team at Google. Prior to Google, Nikhil worked in various capacities, including product and engineering, in the software development and testing industry. Outside of work, Nikhil enjoys running and hiking.


Russell Wolf is a Product Manager at Google working on Cloud Source Repositories and other code-related tools to improve developer workflows.


About the talk

Come learn how Google Cloud provides an end-to-end workflow for developing, deploying, and debugging applications on services such as App Engine and others. We will also discuss how Cloud Source Repositories can be used with other Google Cloud tools to implement a continuous integration process and validate check-ins with an automated build and test. Also, watch us debug a live production application, so you can pinpoint problems without stopping or slowing your application.

Transcript

Hello everyone, thanks a lot for coming today. We are here to discuss how you can develop, deploy, and debug applications using Google Cloud Platform. I'm Nikhil Kaul, product marketing manager for developer tools at Google Cloud, and with me I have Russell. I'm Russell Wolf, and I'm the product manager for Cloud Source Repositories. So, as you might have seen in the previous slide, we have structured this presentation into three parts. In the first part, we specifically want to talk about developing applications.

In the second part we want to showcase how you can deploy those applications, and in the third part we want to deep dive into debugging an app. In the first part, on developing an application, we want to dive into how you can store your source code on Google Cloud Platform, the different integrations that are available to make you more productive as part of storing that source code, and finally the different features that enable you to build security in. The second part is essentially targeted at deployment: we want to showcase how you can use Cloud Source Repositories specifically to build, test, and deploy an app. And finally, in the third phase of the presentation, we want to showcase how you can use Cloud Source Repositories along with its Stackdriver integrations for debugging an application. Across these three phases we are leveraging Cloud Source Repositories along with the different integrations it provides.

To level set and make sure everyone in the audience understands what Cloud Source Repositories is, let's go to the next slide. Cloud Source Repositories is a generally available service on GCP that lets you host private Git repositories. Because those repositories are hosted on GCP, it is really easy to integrate them with other GCP services such as Stackdriver Debugger, Cloud Build, and Cloud Pub/Sub. There are three areas where Cloud Source Repositories really excels.

First is its high scalability. Google has invested in highly scalable Git infrastructure for many years to support the needs of two of the largest Git projects that exist, Chromium and Android. Cloud Source Repositories uses that same highly scalable Git infrastructure, which means you can host large files, large folders, and large repositories, and do it all for many users with very low latency. Next, Cloud Source Repositories has high reliability, because all repositories are hosted in multiple data centers across different regions. In theory, even if half the world's network were disconnected, all repositories would still be able to be served. We even isolate data centers and perform periodic disaster recovery tests just to be sure your source code is safe. And finally, to meet your organization's compliance needs, Cloud Source Repositories integrates with Identity and Access Management, meaning you get to use the same identities you use for the rest of your organization and the same consistent permission model used with the rest of your GCP resources.

It also features audit logging, which means that if something were to go wrong, such as code exfiltration, you can be sure you can do an investigation and figure out what happened, who did it, and when it happened. Cloud Source Repositories also does daily, globally redundant backups, which means that if some disaster were to occur your source code is safe. And finally, with automatic encryption in transit and per-repository encryption keys, your IP is protected. So, remember the three pieces of the presentation: develop, deploy, and debug.

Let's look at how Cloud Source Repositories can actually help you develop applications. There are two ways to check your source code in to GCP. First, you can do it by mirroring your code from an external Git provider such as GitHub or Bitbucket. You only need to set up that mirror once, and all of your source code is copied over; after that, anytime you make a new commit to GitHub or Bitbucket, that commit is copied over to Cloud Source Repositories as well. Now, you may be asking the question: what benefits do I get from copying my source code over to GCP? Well, first, you can take advantage of code search. We'll dive into that a bit later, but I can tell you that Googlers rely on it every day and I bet you'll find it a great tool too. And you can take advantage of a bunch of other integrations that Cloud Source Repositories has with other GCP services, for example Cloud Pub/Sub, Cloud Build, and Stackdriver Debugger; we'll show all of those later as well.

Alternatively, you can choose to host your source code directly on Google Cloud Platform in Cloud Source Repositories and have your entire development team collaborate there. We're also excited to announce an upcoming integration with GitLab, which means that you'll be able to host your repositories on GitLab and mirror all that source code into the cloud to take advantage of all of those integrations. This will be available in the next month.
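For the natively hosted option, a minimal command-line sketch looks roughly like this, assuming the gcloud source repos commands; the project and repository names here are made up:

    # Create a repository hosted directly in Cloud Source Repositories
    # (project and repository names are illustrative).
    gcloud source repos create my-guestbook

    # Clone it locally; gcloud wires up Git credentials for you.
    gcloud source repos clone my-guestbook

    # Or add it as an extra remote on an existing local repository and push.
    git remote add google https://source.developers.google.com/p/my-project/r/my-guestbook
    git push google master

Mirroring from GitHub or Bitbucket, as in the demo that follows, is set up from the Cloud Console instead.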

Another goal of providing Cloud Source Repositories is search. Developers spend a lot of time each and every day trying to find answers to questions like: how should this function be called? What does this class look like? Where does this class get instantiated? Why is something failing, or who changed the code that caused it to fail? Answers to those questions can be easily found through the code search functionality of Cloud Source Repositories. Code search allows you to search across your entire repository with a single query. This is especially useful when you don't have all of your organization's repositories cloned down from the server to your local machine, or when your local repos aren't up to date with the server. It's also super fast, so even if you are in a large organization with commits constantly coming in, you're always searching against up-to-date code. In fact, Cloud Source Repositories uses the same document indexing and retrieval mechanisms used by Google Search to query quickly across very large corpora of code. There are three areas where code search in Cloud Source Repositories really excels. First, there is support for regular expressions. Second, you're not just searching for text: you can also search for specific entity types, such as a class or a function. And finally, with result suggestions as you type, you can get answers to the questions you're looking into right away, without having to wade through search results pages.

Cloud Source Repositories also helps you keep your code secure. It's generally not a good idea to check service account credentials or security keys into a version control system, but sometimes you may do that accidentally, and that's where Cloud Source Repositories is there to help. With the push block feature enabled, if you try to push a commit to the server that contains a GCP service account credential or a security key, that push will be blocked and you'll be given information telling you which file was in violation and why it was blocked. If your commit was fine, it makes it to the server.

Let's go ahead and take a look at a demo of all this. We're first going to start by showing how you sync a repository from GitHub. Then we'll take a look at the source browser in Cloud Source Repositories and look at the contents of that repository. After that we'll do a global code search, and finally we'll see how push block works in action. This is the homepage of Cloud Source Repositories.

Here you can see your favorites and your recently viewed items, and these can contain repositories, branches, folders, and files. It helps you quickly jump back into the context of whatever you were last working on. We can go up to the top right and select Add Repository to add a new repository, and here we can choose between connecting an external repository or creating a new repository. We're going to choose to connect an external repository. We're going to add it to this project that I created specifically for Next, and we'll choose the GitHub provider. I'm going to authorize GCP to connect to GitHub. I've already signed in with my GitHub account on this computer, so I'm already in here, and I'm going to choose to connect this cloud-debugger Python repository that I have access to on GitHub. I just click Connect Selected Repository, and now Cloud Source Repositories is setting up a webhook on that GitHub repository; that way, whenever changes happen to that repo, we can mirror the changes over. It's also doing a one-time full clone of that repository, including all the branches and tags and everything else. It looks like that repo has been cloned over.

So, let's take a look. Here we can see the layout of the repository, and in this branch picker at the top we can see that it has copied over all the branches, tags, and commits. We can also see the readme file for the repository rendered here, and we can click into the change history to see the history of all of the commits that have happened in this repository. As we navigate through the source browser, through folders and even down to files, we can see that the change history updates for that scope. Once you're looking at a file, I can use blame to see which user was responsible for the changes on each line of code and see the associated commit and message that goes along with it. You can also go to the change history and select the diff between any two versions of a file, and you can click on a commit ID to view all of the changes that took place in that commit. From there you can compare that commit to other commits in your repository and see the difference of all the commits that happened in between, as well as all the changes to the files. Let's go ahead and take a look at the code search functionality now.

I'm going to change from searching in this repository to searching across everything I have access to, and as you can see, as I type individual characters I'm getting these result suggestions, and they show that it's finding variables, functions, and structs. I can go ahead and click on one of these suggestions and I'm taken directly to the line of code where that function exists. I can also go to the search box, just submit the query, and view all of the results across all of the repositories I have access to. You can go in and preview all the matches in a file to see if it's what you're looking for, and continue iterating through these previews until you find the exact match you were trying to get to. This really helps your developers be productive and quickly answer those questions Nikhil was talking about earlier.

Now, let's take a look at how push block works. I'm going to switch over to Cloud Shell. Cloud Shell is another GCP service that gives you a command line for managing all of your GCP resources, as well as a code editor, and all of this is hosted in a VM just for you, so you can access it from the browser anytime you want. So here I'm inside of this Cloud Functions repository, and earlier today I was actually testing out deploying my Cloud Function from a third-party CI service as an integration. Because I was doing that, I downloaded some GCP service account credentials to my machine; those are contained in this JSON file. Now, later in the day, I may go and make an innocuous change, such as fixing this small typo.

I'll stage it with git add, make a quick commit saying 'fixing typo', and push it to the server. Now, you see I made a mistake there: I was going quickly and just did a git add of everything, so everything in the repository that wasn't being tracked is now tracked. And here we can see that my push was rejected. Push block has said that the push has been rejected because it detected that it contains a private key, and to please check the following commits and confirm it's intentional. I can go ahead and execute this git show command and it will show the contents of the private key, which I'm not going to do on stage. And if I had checked and there was actually something in there that wasn't a private key, and this was a false detection, I could go ahead and just run git push with an override to say I don't want this check. With that, we've seen how push block in Cloud Source Repositories can help keep you from making these critical mistakes that can really put your organization's security at risk.
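A rough sketch of that exchange from the command line; the file name is made up, and the override uses the push option documented for Cloud Source Repositories:

    # Stage and push the change -- the accidental credentials file gets swept in too.
    git add .
    git commit -m "fixing typo"
    git push origin master    # rejected: the remote detects a private key in the push

    # Inspect what the error message points at (not shown on stage in the demo).
    git show HEAD:service-account.json    # hypothetical file name

    # If it really was a false positive, bypass the key detection for this one push.
    git push -o nokeycheck origin master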

And with that, let's go ahead and switch back to the slides, and Nikhil will talk us through the next part of our presentation, which is deploy. So, with that we've finished the first part of the presentation, which was essentially developing an app. When it comes to deploying an app, many of us are deploying to different environments: these could be VMs, containers, serverless environments, or mobile apps. Irrespective of whether you're using Google Compute Engine for VMs, Google Kubernetes Engine for deploying containers, serverless platforms such as App Engine or Cloud Functions, or mobile platforms like Firebase, the Cloud Source Repositories integration with Cloud Build can come in really handy here. Cloud Build is Google's continuous integration and continuous delivery platform, and Cloud Source Repositories integrates with it. As you will see in the demo, the good part about the integration is that there are no CI servers to set up, manage, or maintain; in a few clicks you will be able to achieve automation from source to production.

And the good thing is you can deploy your app to different environments. In fact, if you are someone who likes to manage configuration as code, you can use a cloudbuild.yaml file specifically for specifying build steps: you can specify the steps you want to perform as part of build, the steps you want to perform as part of test, and the steps you want to perform as part of deploy as well. The workflow looks something like this. As a developer, I write the code and I check it in to Cloud Source Repositories. From there on, Cloud Build takes over: it detects the change right away, because we have set up a trigger to notify Cloud Build that a change has been checked in. Next, Cloud Build can build both Docker artifacts as well as non-container artifacts; examples of the latter could be Gradle, Maven, Go, Bazel, and so on. Once the artifacts have been created, Cloud Build can run different types of tests, whether unit tests or integration tests. If you're working with containers specifically, the artifact management piece, which is the fourth point over there, can help you scan those artifacts for vulnerabilities as well; we'll talk more about that in the later part of the presentation. And finally, if everything looks great, you can deploy that app to different environments in the deployment phase.

Now, when you're working in a continuous integration and continuous delivery environment, you need notifications as part of that process: you might want notifications during the process as well as at the end, and that's where the notification piece comes in handy. Both Cloud Source Repositories and Cloud Build offer integrations with Cloud Pub/Sub. That means that when a new commit is checked in to your repository, a new build is kicked off, or a build completes, you can generate a Pub/Sub event to a Pub/Sub topic. You can then have a Cloud Function configured to monitor that Pub/Sub topic, watching for these new events coming in, and when a new event comes in and the Cloud Function triggers, you can have it invoke a third-party service such as Slack, email, or anything else you want to work with.

Before we go on to look at a demo of all this, I'm going to ask Nikhil to walk us through the workflow of what we'll be doing in the demo. So, we have set up a really simple demo to demonstrate a continuous integration and continuous delivery pipeline. That's the first phase of the demo, and the second phase of the demo is essentially targeted at getting a notification to Slack. Let's break down these two phases in more detail, one by one.

Both of these phases essentially start at the same place, which is a developer writing code and checking it in to Cloud Source Repositories. In the first phase, what happens is: as a developer writes code and checks it in to Cloud Source Repositories, Cloud Build gets a notification through the trigger we have set. Once the build gets triggered, Cloud Build deploys the app to the test version of App Engine and runs end-to-end tests. If the tests pass, Cloud Build deploys the app to the production version of App Engine. That's the first phase, the continuous integration and continuous delivery process. In the second phase, what we're doing is sending a notification to Slack. The steps look something like this: it starts, again, with a developer checking in the code; Cloud Source Repositories publishes an event to Cloud Pub/Sub; a Cloud Function subscribes to that Pub/Sub event, and it posts the message received in the event to Slack. Let's see both of these demos in action; let's switch over to the demo.

Here we can see an application already deployed to production, a very simple app: basically, it's a guest book. You can write a message in this box here, such as 'welcome to Next 2019', sign the guest book, and it's posted there along with the user who wrote it. You can switch over to other guest books as well, each with its own distinct set of messages, and you can log in and log out of the application. Now, the title 'App Engine guest book' maybe isn't the most suitable one, so let's go ahead and change it to 'Next 2019 guest book', just something easy for us to do here.

We'll find it first by searching across Cloud Source Repositories, since I actually have no idea where that title lives, and it found results in one of these repos. I see it's here, so I can just click Edit Code, and this is now going to launch an instance of Cloud Shell for me. It's going to make sure that this repository is mirrored into the VM, and I'll have a complete workspace in the cloud here. I already have it there, actually, so it just asks me to change directory and I'm at that line of code. I'm going to change it from 'App Engine' to 'Next 2019'. We'll do a git add, a git status to make sure I'm not checking in any security keys or anything like that, a git commit with the message 'changing title', and then finally I'll push it to the server. Now, as this commit is pushed to the server, just like Nikhil mentioned, we're kicking off a build, so let's switch over to Cloud Build now. We can refresh and see that a new build has been invoked. Drilling into it, we can see that there are three distinct build steps; we'll take a closer look at those in a minute. Here you can also see that we have real-time logs being produced from the build, so we can see what's going on.

Let's go ahead and take a look at the build trigger we talked about, the thing that told Cloud Build what to do when code made it to Cloud Source Repositories. First, we notice that there's a trigger type. The trigger type allows you to specify when you want this trigger to activate; you have two options, a branch or a tag.

Here I've chosen branch, and I can supply a regex for the branch names I want to match when I want the trigger to fire. In this case it's just '.*', because I only have one branch and I want to build off of it, but I could also specify an exact branch name, or a regex for a certain set of branches I want to kick off builds for. I can also choose to filter this build by changed files, and so here I can use a regex to say I only want to trigger builds if there are changes in my source folder, or that I don't want to trigger builds if it's just a change to a config file or a markdown file.
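A rough gcloud equivalent of that trigger configuration, assuming the beta builds triggers commands; the repository name and file pattern are illustrative:

    # Create a trigger that fires on pushes to any branch of the repo,
    # runs cloudbuild.yaml, and skips commits that only touch markdown files.
    gcloud beta builds triggers create cloud-source-repositories \
        --repo=guestbook \
        --branch-pattern=".*" \
        --build-config=cloudbuild.yaml \
        --ignored-files="**/*.md"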

Once you've decided when a build should be invoked, you then get to decide what you want to do when it is invoked. You do that inside the build configuration, by choosing between using a Dockerfile or a Cloud Build configuration file; in this case we use the Cloud Build configuration file, cloudbuild.yaml. Let's go take a look at that file. We can come back to Cloud Source Repositories; I could search for it, but it's right here. And so here we can see the first step that's declared in the YAML. This step says we're going to deploy the current code to a test version on App Engine without promoting it.

I'm declaring that I want to use this existing gcloud image stored in Container Registry, I'm saying perform the app deploy command, and I'm specifying the app.yaml for App Engine and the index.yaml for the backing Datastore. Then I'm naming that version and saying not to promote it to production. After that, if and only if that step completes, I proceed to the next step, which runs end-to-end tests against the test version on App Engine. So here I'm just using another image that's already stored in Container Registry, an Ubuntu image; I'm updating apt, installing some prerequisites, and then executing this run-tests bash script. In the script we can see I'm just declaring the URL for my test environment on App Engine and executing the end-to-end tests. If those tests all succeed, I proceed to the next step, which deploys the current code to a new version on App Engine that is promoted to serve production. Again I just use that gcloud image and call that same app deploy, but this time I'm not naming it as a test version and I am letting it promote to production. That's really all there is to setting up this automated test environment, running my end-to-end tests, and promoting to production if they succeed. I recognize many of you might not use App Engine; there are many other deployment runtimes. If you're deploying to Kubernetes, there are Cloud Build steps for that. If you want to deploy Cloud Functions, it's as simple as specifying your function name. And finally, there are even steps if you're trying to go to mobile and want to deploy to Firebase.
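For reference, a cloudbuild.yaml along these lines might look roughly like this; the version name, test script, and URL are illustrative rather than the exact file from the talk:

    steps:
    # 1. Deploy the current code to a test version on App Engine without promoting it.
    - name: 'gcr.io/cloud-builders/gcloud'
      args: ['app', 'deploy', 'app.yaml', 'index.yaml',
             '--version=integration-test', '--no-promote', '--quiet']
    # 2. Run end-to-end tests against that test version from a plain Ubuntu image.
    - name: 'ubuntu'
      entrypoint: 'bash'
      args: ['./run_tests.sh', 'https://integration-test-dot-my-project.appspot.com']
    # 3. If the tests passed, deploy again and let App Engine promote it to serve production.
    - name: 'gcr.io/cloud-builders/gcloud'
      args: ['app', 'deploy', 'app.yaml', 'index.yaml', '--promote', '--quiet']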

Now that we've seen the cloudbuild.yaml, that final piece defining what we want the build to do, let's return to the builds and see what happened. Here we can go back to the history, and it looks like the build succeeded. We can see that the first step, deploying to App Engine in the test environment without promoting it, took 17 seconds. Spinning up that Ubuntu image, installing a bunch of prerequisites, and then running our end-to-end integration tests took another 54 seconds, but they did pass, which is good. And finally, deploying to production and promoting it took another 30 seconds. Let's go ahead and check out our application in production. We can refresh this page, and we see that it's now updated to 'Next 2019 guest book'.

Now that we've seen how that automated CI/CD workflow works, let's take a look at the other part of the demo Nikhil talked about, which is posting messages to Slack automatically to inform your team. So here I have a Cloud Function already created, called csr-to-slack. It's watching this Pub/Sub topic I called csr, and I told Cloud Source Repositories to post Pub/Sub events to that csr topic whenever something happens, like a new commit. I'm pulling the source code for this Cloud Function from a different Cloud Source repository and executing this message-Slack function. Here we can again go to Cloud Source Repositories and take a look at what that looks like. It finds it right away for me, and we can see that in this function I'm going to message the next-2019 channel saying that this user's email just pushed a new commit to this repository. So let's go over to Slack, and we can see that it did indeed post a message to that channel. Whether you're trying to interact with Slack or any other third-party service, the integrations with Cloud Pub/Sub and Cloud Functions make it really easy.
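As a sketch of what such a function can look like in Python; the webhook URL, channel name, and payload field are assumptions, not the code shown in the demo:

    # main.py -- Pub/Sub-triggered Cloud Function that forwards commit events to Slack.
    import base64
    import json
    import requests

    SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # hypothetical webhook

    def csr_to_slack(event, context):
        """Triggered by a message published on the 'csr' Pub/Sub topic."""
        payload = json.loads(base64.b64decode(event['data']).decode('utf-8'))
        # The notification payload is assumed to carry the repository name under 'name'.
        repo = payload.get('name', 'a repository')
        requests.post(SLACK_WEBHOOK_URL, json={
            'channel': '#next-2019',
            'text': 'A new commit was just pushed to {}'.format(repo),
        })

A function like this could be deployed with something along the lines of: gcloud functions deploy csr_to_slack --runtime python37 --trigger-topic csr.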

Now that we've looked at this demo, Nikhil is going to tell us a little bit more about some deployment technologies we have that are specific to containers. So let's go ahead and switch back to the slides now. So far, what Russell showed in the presentation was really cool, but it was App Engine, and many of you in the audience might be using containers on a day-to-day basis to deploy applications. If you're working with containers, as part of Cloud Build we have something called vulnerability scanning. Vulnerability scanning allows you to identify security vulnerabilities early on in the software development process. Package vulnerabilities for Ubuntu and Debian are identified right as the containers are built. It also provides you detailed insight, with details such as the severity level of the vulnerability, its CVSS score, and whether a fix is available.
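As a rough sketch of pulling those scan results from the command line; the image path is made up, and this assumes the Container Analysis integration is enabled:

    # List the package vulnerabilities recorded for an image in Container Registry.
    gcloud beta container images describe \
        gcr.io/my-project/guestbook:latest \
        --show-package-vulnerability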

If you want to take this to the next level and further automate your toolchain, you can use vulnerability scanning with binary authorization, which will be coming soon. Binary authorization essentially lets you define policies based upon your organization's needs: as an organization, you can decide and define that you want to deploy only those containers which were actually tested. What this means is that if a developer deploys a container which was not tested as part of the CI/CD toolchain, that container will not get deployed. The image on the slide is essentially showing that: the container gets scanned by vulnerability scanning, binary authorization checks for a signature on the image from vulnerability scanning, and if a signature is found, the container is deployed to Google Kubernetes Engine; if a signature is not found, an audit log is provided explaining why that specific step failed.
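A minimal sketch of what such a binary authorization policy can look like, assuming an attestor created by your CI pipeline; the project and attestor names are made up:

    # policy.yaml
    defaultAdmissionRule:
      evaluationMode: REQUIRE_ATTESTATION
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
      requireAttestationsBy:
      - projects/my-project/attestors/built-and-tested-by-ci
    globalPolicyEvaluationMode: ENABLE

    # Applied with:
    #   gcloud beta container binauthz policy import policy.yaml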

With that, we have completed two-thirds of the presentation. We specifically looked at developing an app, and followed that up by showcasing the Cloud Build integration with Cloud Source Repositories in action. The last part is debugging. Often, when an issue shows up in production, you won't be able to replicate it in a local environment, and that's where the Cloud Source Repositories integration with Stackdriver really comes in handy for debugging your app in production. The first and foremost way is Stackdriver Debugger's debug snapshots. These come in really handy when you're debugging an application in production: in the following example, an issue is being faced by only one specific user, and as a result, reproducing that issue in your local environment becomes really difficult. This is where Stackdriver Debugger's snapshots give you detail: a snapshot captures the entire call stack and inspects the local variables, and to do this you don't even have to slow down or stop your app. The best part is that once you select the lines on which you want to set these debug snapshots, all your running instances of the app automatically start capturing them.

The other way of debugging with Stackdriver is logpoints. As Nikhil said, Stackdriver Debugger also lets you set debug logpoints. Debug logpoints allow you to set a custom log message on a particular line of code. That custom log message can reference the current application state, and it can also log only under certain conditions. For example, you may want to log the value of a variable only if that value is outside of an expected range. I like to think of it kind of like printf debugging, except you don't need a single print statement in your code, and since you're using Stackdriver Debugger in production, you don't even need to restart or stop your application. Finally, Stackdriver Debugger also integrates with Stackdriver Logging, which means that you can store, search, analyze, and alert on these debug events. Let's go ahead and take a look at debug snapshots and debug logpoints in a demo.
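Roughly the command-line equivalent of what gets set up in the UI below, using the gcloud debug commands; the file, line number, and condition are illustrative:

    # Capture the call stack and locals the next time this line runs for a specific user.
    gcloud debug snapshots create guestbook.py:74 \
        --condition="user.email == 'cloudsourcerepositories2019@gmail.com'"

    # Emit a log line, with an expression interpolated, every time the line runs.
    gcloud debug logpoints create guestbook.py:74 \
        "guestbook posted by {user.email}"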

Can we switch over to the demo, please? Here we can see the Stackdriver Debugger user interface. At the top we have a service selector; this allows us to choose between the various services, and versions of those services, that we have deployed. I'm going to go ahead and switch over to the one that we've now promoted to 100% and that is backed by the Cloud Source repository at the commit we just made. Stackdriver Debugger automatically pulls in that source code and allows me to navigate through the entire file tree. We're looking at guestbook.py now, which is where the bulk of the logic for this application lives. When a user goes in and posts a message to the application, that happens within this function down here called post. Now, I'm sure you've all been in situations where you've had a user running up against your production service: they're running into a problem, you try reproducing it locally, it doesn't repro, you try reproducing it on your production service as well, and you just can't repro it. Stackdriver Debugger can help a lot with those types of situations. So here, I can go into my snapshot panel and set a condition saying the user email equals cloudsourcerepositories2019 at gmail.com, which is the account I'm currently logged in with. And let's go ahead and set the snapshot on the greeting.put(), which is where we insert the greeting someone types into the guest book into the datastore. And now we have a snapshot waiting there. It may be in the next five minutes, it could be the next day, but when a user comes in with that email address it's going to capture the exact application state that that user was seeing, so we can find out what was happening for them.

We can also set debug logpoints. In the meantime, while we're waiting for that user, I kind of want to see what's happening for other users, so I can go in and add one on that same line. For now I'm just going to say 'if true', so this debug logpoint will fire every time someone posts a message, and I'm just going to log the user's email.

I can then choose to add that debug logpoint as informational, as a warning, or as an error, and as I said earlier, all of that integrates back with the rest of your logs in Stackdriver Logging. So I can add that logpoint, and now we just wait for something to happen. Let's go over to the application and make something happen. I'm going to sign the guest book again, saying 'thank you all for coming to our talk', and the guest book is signed, and we can return to Stackdriver Debugger.

Here we can return to the snapshot, and we can see that the snapshot was captured. This user posted in the default guest book, and that looks right; that's the one I currently have selected. And it matched the user email that I was specifically looking for. I can then dig into the other local variables, such as inside greeting: I can see the value of the message that was written, 'thank you all for coming to the talk', as well as the entire call stack of the application at that point, to see if maybe something was executed that I didn't expect to be executed. I can also go and look at our logs, and by refreshing the logs I can see that the logpoint was indeed hit, and we now have a log in our logging system saying this user posted; we'd get that for any other user who visited as well. So I hope you all enjoyed seeing Stackdriver Debugger. It's actually one of my favorite tools we have on GCP, and I highly suggest you give it a try. I'm going to hand it back to Nikhil now to wrap things up and explain the main takeaways from our talk, if we could switch back to the slides.

So today we saw three specific things. We spoke about developing applications using Cloud Source Repositories; during that stage we specifically saw mirroring source code from GitHub and Bitbucket into Cloud Source Repositories, and we spoke about how the GitLab integration is coming soon as part of that process. We also spoke about code search and how it lets you search across all the repositories you have access to, and we showcased the push block functionality, which ensures that security keys are not stored within source control management systems. That completed the first part. In the second part, we spoke about Cloud Source Repositories' integration with Cloud Build, which lets you automate the build, test, and deployment process of your pipeline. In case you're working with containers, we spoke about how you can leverage Container Registry vulnerability scanning along with binary authorization to set up specific policies. And finally, in the third part of the presentation, we spoke about how you can use Stackdriver to debug applications in production.

Thank you so much for coming. We're around for any questions, and we would really appreciate it if you could provide your feedback on this session. Hope to see you again. Thanks a lot.
