CI/CD in a Multi-Environment, Serverless World (Cloud Next '19)

Martin Omander
Program Manager (Developer Advocate) at Google
Google Cloud Next 2019
April 9, 2019, San Francisco, United States

About speakers

Martin Omander
Program Manager (Developer Advocate) at Google
Adam Ross
Senior Developer Programs Engineer at Google

Martin Omander works for Google in Mountain View, California. His job in the Developer Relations team is to help developers build better software, and improve Google's Cloud Platform to make it even better for that purpose. Before Google, Martin worked at a string of startups in Silicon Valley as a software engineer. In his spare time he is an amateur game programmer.


Adam is a software engineer in Developer Relations working to make technology simpler with serverless. Prior to joining Google he worked in many roles as an engineer, consultant, solutions architect, platform planner, and tools tinkerer.


About the talk

Your production release pipeline might have a number of stages for experimentation, validation, testing, approval, and release, but the integration and operation of those environments are tricky. We will build a continuous deployment system that shows how to automatically connect the dots from your git repository to the multi-stage quality ladder your releases must travel to reach production. You will learn how to use CI/CD tools to test and deploy apps across GCP serverless offerings such as Cloud Functions, serverless containers, and more.

Transcript

Welcome to this session about CI/CD in a serverless, multi-environment world. This is a topic that we are very passionate about, because we've seen that good CI/CD makes a big difference, makes all the difference, when it comes to dev velocity in an organization. My name is Martin and I'm a developer advocate at Google Cloud. And I'm Adam Ross; I'm an engineer in Developer Relations at Google Cloud. How many of you have ever deployed to production from a documented checklist? Raise your hands.

It's still a large number of people. Okay, how many of you have built a CI/CD pipeline to automatically run tests or deployments? Also a lot, excellent. Well, today we're going to explore one particular organization's journey into building their own pipeline for automating their serverless operations: Into the Woods Camping Supply. Before we get too far, you should know that this is a made-up company. They're imaginary, but they've got some real-world problems that we've seen actual organizations face. And as we go through what they've done,

their decision-making process and their solutions, you might see some things that you can bring back to your own organization as solutions to your problems. It's a shame they're imaginary; they have such a stylish catalog. Yeah, well, I don't know, that's the wrong shade of red. One of their problems is that they've had slower and slower release velocity, because they don't have a good automation pipeline in place. So they don't really know when things go wrong, and when they do, they need to go spelunking into the depths of their

code changes to find out how to fix it. Their existing architecture has one monolithic application and a number of microservices, each of them built as a serverless service, and they need to figure out: how do we approach these individual services and modernize how they're operated? We can go straight to a demo of what they end up doing by walking through a change. The new engineering manager has a friendly-failures initiative, which requires that all the error messages be friendly. So let's make a change.
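The demo's flow, change on a branch, commit, push, can be sketched as a few git commands. Everything here is illustrative: the branch name, file, and message are invented, and we stand up a throwaway local "remote" so the sketch is self-contained and runnable.

```shell
#!/usr/bin/env bash
set -eu
# Create a throwaway bare repo to stand in for the hosted remote.
remote=$(mktemp -d)
git init -q --bare "$remote"
# Create a working repo and wire it to that remote.
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email dev@example.com
git config user.name dev
git remote add origin "$remote"
# Make the "friendly failures" change on its own branch and push it.
echo 'Oops, something went sideways. Please try again!' > error-message.txt
git checkout -q -b friendly-failures
git add error-message.txt
git commit -q -m "Make error messages friendly"
git push -q origin friendly-failures
# List the branches now on the remote.
git ls-remote --heads origin
```

In the real demo the push to the hosted repository is what kicks off the pipeline.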

So here we go: I've made the change on a branch and pushed it up; let's watch conference Wi-Fi in action. That was really fast. Okay. And so now we're going to open up a merge request in GitLab. GitLab is the version control platform of choice for Into the Woods; GitHub and Bitbucket are also great products, this just happens to be what Into the Woods uses. Here we're going to open a merge request, which is roughly the same concept as a pull request. Now that I've opened this merge request, it immediately shows us a nice page that has the ability to open it in a web IDE and

check out the branch to explore it, but most particularly we see this blue circle here next to a pipeline. A pipeline is the set of jobs that are run to ensure quality and process, and it's automatically kicked off as soon as I open that merge request. We can dig into that and see what it's doing. So here we have a visualized display of the different stages for this pipeline, running from testing all the way up to deployment and acceptance, and we're going to go deeper into the individual stages later. Drilling into just one of these, we see that there's a linting job

for the payment process, and it's taking a little time to initialize the workspace, check out the code, authenticate gcloud... and I have an error. Well, that's unfortunate. I guess I should fix that problem. And I also left the comma in the wrong place. So now that I've made a couple of fixes here, I'll commit that, and we should see a new pipeline running through. I'm going to take a quick look and make sure it's running. Yeah, that looks good, so we'll go ahead

and move along. Can we go back to the slides? So, ops automation: one might think it's about computers, but I'm here to tell you it's about the humans, and we all have our different views of automation. Junior James is a pretty junior developer at Into the Woods. He just wants to avoid frustration and bureaucracy, and to have well-tested code at work. Process Paris, she's an SRE, and she's into processes and scripts. She has decided that

automation should be about reducing manual toil for the developers. Teamwork Tess focuses on teamwork; she wants everybody to have confidence in the code, and to bring relief to the team. And finally, the fourth person on the engineering team is the CTO. He wants to make it really clear to everybody what they're supposed to do as part of a release. These four engineers are going to discuss and build a CI/CD pipeline, and we'll see the final result at the end of the talk. They will have to go through

six major decisions, from repo layout all the way through to security, and we'll discuss the pros and cons of various approaches and arrive at a solution for each one. Repo layout is the first one, and it's foundational: before you decide this, you can't really do any of the other pieces. There are various ways you can do it. For example, Junior James has heard a lot about microservices and being super agile, and he says: one repo per microservice, right? That keeps everything light and

really, truly separated. But if these microservices all have tests, where would those live? Where would the API specs for them live? And Paris, rightly, notes that sometimes they need to run tools across the whole code base, and that's much easier with a mono repo, a single code base. It's also easier to share knowledge. So after some discussion, James is convinced: okay, let's try the mono repo approach. What would that look like? The layout might be something like this: some files in the root and a few top-level folders.
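A mono-repo layout along the lines they describe might look roughly like this; the specific file and directory names are illustrative, not taken from the talk's slides:

```
.gitlab-ci.yml        # pipeline definition lives with the code
README.md
test/                 # cross-cutting test helpers
monolith/             # the original application, still present
services/
  relay/              # Node microservice
  payment/            # Go microservice
  ...                 # more services as the catalog grows
```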

We have the monolith; they are moving from the monolith to microservices, but they haven't finished, so the monolith is still there. Then the microservices: there's one directory for the monolith and one services directory, where all the different microservices have their own subdirectories. They want to set themselves up for growth: right now there are five microservices, and there will be more in the future. Who reviews code when you check code in? Well, James has found a cool feature in GitLab, and it's in some other tools as well: we can add a CODEOWNERS file.
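A CODEOWNERS file is just a mapping from paths to reviewers. A sketch might look like this, with made-up usernames and paths:

```
# CODEOWNERS: GitLab assigns these users as reviewers for matching paths
/monolith/           @cto
/services/relay/     @james
/services/payment/   @paris
/.gitlab-ci.yml      @paris @tess
```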

The CODEOWNERS file tells the system who should be automatically assigned as reviewers for any particular code that's checked in. And if you're just browsing the code, you know who to ask about a particular piece of code. So the team decided to go with a mono repo. First decision made, one of six done, and it's time to move on to number two. Next we have continuous testing: getting test automation going on an ongoing basis, so that every change is carefully measured and found to be acceptable to

move forward. So your junior developers might commonly think: I can just run the tests locally and everything will work out just fine. But our senior engineers know that can lead to inconsistently running the tests, and to a lot of difficulty making sure that you're testing things the way they operate in production. Running end-to-end tests against cloud services can be a little tricky on your laptop. So they're going to take their existing work on building a testing pipeline and extend it to these new serverless services they've got. And maybe some of the automation engineers among you

would find that fun, like Paris. It does sound kind of fun, doesn't it? So here we have the master branch from the repo. The master branch is commonly used as a release-ready branch where all changes are merged. Now, shortly after the last release a bug fix was begun, and this branch has been in progress for a while. But in the interim, someone else came along, fixed the problem, and merged it to master. So when we look at this bug fix branch: it's ready to go, unit tests passing, and James is excited to merge it. Tess notes: wait a

second. Even though that test is passing and showing green, there's a logical conflict between what is now in master and the changes that James has been working on. Thank goodness there was an engineer on hand who had memorized all of the code changes that have gone in over the last several months. But maybe the solution here shouldn't be a special person; it should be the tests. If your automated tests aren't reliable pretty much all the time, they're not worth that much, because you learn to disregard the results of running those tests. So instead we want to look into continuous integration.

Continuous integration takes this continuous testing idea and focuses on how to make sure that, if you accepted that change into your main line of code, things would still be passing. This is done by merging the master branch into the bug fix branch before running those tests, performing a code integration (not to be confused with an integration test, which is a bit of a different beast). And as we explore needing to build that pipeline, what kind of tools should we use? One of the most important features that you might want for your CI

CD system would be a configuration-as-code approach: having the definition of those jobs and all the operations committed to your code repository, so that the entire development team, any ops engineers, and anyone else in that technical neighborhood can see those changes, weigh in on what they constitute, and make sure that you're practicing a good, integrated DevOps sort of culture. Now, as we've mentioned, this team is using GitLab, and they're going to continue to use it for these new services. But in the future maybe they'll outgrow GitLab; maybe

there will be some other considerations, and they will consider other products such as CircleCI, Travis, Jenkins, or Cloud Build, which is a product in GCP. A simple GitLab testing pipeline is made of YAML. I bet a lot of you here were thinking that in the serverless track we're safe from YAML, but not always. So here we have stages that include a test stage, the only one for now, but more will be added later. Every single job in the test stage runs in parallel, and our initial jobs are linting and unit testing. Linting is checking your code syntax, and unit tests are relatively

quick to run. But we see that both of these jobs are running Node; they're both testing some single implied service. So it's not necessarily a good fit for a mono repo: it's not testing a bunch of different services. Let's expand on that. Now we have two linting jobs: one is linting a Node service named relay, running the job inside the relay subdirectory of the repo, and the other is linting a service built in Go, running inside the payment subdirectory. And as James observes, this multiplication of jobs is going to result in some slowdown as more and more services are added to the repo.
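The per-service lint jobs being described might be sketched in `.gitlab-ci.yml` roughly as follows; the job names, image tags, and commands are assumptions, not the talk's exact file:

```yaml
stages:
  - test

lint:relay:
  stage: test
  image: node:10
  script:
    - cd services/relay
    - npm ci
    - npm run lint

lint:payment:
  stage: test
  image: golang:1.12
  script:
    - cd services/payment
    - go vet ./...
```

Every job in the same stage runs in parallel, so adding a service adds jobs rather than lengthening existing ones.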

Making the pipeline a bit more dynamic is going to help with that. Using almost any CI/CD system, you can introduce a little dynamic logic and take advantage of the fact that you're probably using version control: you can run a slightly complex command, such as this git diff here, to list all the files that have changed, and maybe only run the tests that apply to those files. And as it happens, GitLab has a trick for that in the configuration. If you are running your testing

pipeline for a merge request, you can specify to only run a job if a file changes in a particular subdirectory. So here this job is configured to only run if a file in the relay service has some kind of update. You might be thinking this is getting a little complicated, and that's a trade-off of going with a mono repo: the tooling is often a little behind compared to having a service per repository. Lastly, with the mono repo and having all these services in one repository, there might be a

tendency to have some of the operations leak into the top-level directory or into this configuration file. By having those operations defined in the world's best task runner, bash scripts, you can get them isolated into those same subdirectories, and make sure that those responsible for an individual service are also responsible for all the details of its operations. And perhaps in the future this Into the Woods dev team will look into moving to something a little more sophisticated than bash, maybe using Makefiles or Bazel to run their

jobs. So by the time all of these steps are built in, we have a few different stages coming together in just the testing pipeline: a precheck stage, shown here using ShellCheck, a utility that applies linting checks to bash scripts; a test stage, running linting and unit tests against each of the services; and a build stage, running builds for those services that need a build process. All right, that was the end of step two. Now, two down, four to go. The next decision the team at Into the Woods needs to make is how to

manage automatic deployments. There are a number of ways to do deployments, all the way from how the CTO says he did it when he worked at a startup 20 years ago, through scripted deployments that the junior developer might run by hand. The team, though, really wants to focus on continuous deployment and delivery rather than these sort of semi-manual approaches, because it leads to more reproducible deployments and higher velocity. So James, the happy junior developer, checks in a bunch of code; the tests are okay, and he

has them ready to go on the release train: let's deploy to production! Tess says: hold on a minute, how do we know that this won't break? Paris has a solution: we deploy to staging, we deploy all the time to staging, and we keep running tests there, and that way we can see if it would break if we were to deploy to production. Let's see what that would look like. Adding deployment to this whole pipeline is really just adding an additional stage to the end of what we've already built, and that stage will include some new deployment jobs.
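A deployment stage of this shape might be sketched as below. The service names come from the talk, but the variable names, runtime, region, and authentication shorthand are assumptions standing in for the real configuration:

```yaml
stages:
  - test
  - deploy

deploy:relay:
  stage: deploy
  image: google/cloud-sdk:alpine
  only:
    - master
  before_script:
    # Shorthand auth: decode a service account key from a CI/CD
    # variable and activate it (covered in the security section).
    - echo "$SERVICE_ACCOUNT_KEY" | base64 -d > /tmp/key.json
    - gcloud auth activate-service-account --key-file /tmp/key.json
  script:
    - cd services/relay
    - gcloud functions deploy relay --runtime nodejs10 --trigger-http --project "$STAGING_PROJECT"

deploy:payment:
  stage: deploy
  image: google/cloud-sdk:alpine
  only:
    - master
  script:
    - gcloud beta run deploy payment --image "gcr.io/$STAGING_PROJECT/payment" --region us-central1 --project "$STAGING_PROJECT"
```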

These jobs bring with them more YAML for us to look at. A job definition needs to run the deploy process, and that will probably involve using gcloud, the command-line SDK that lets you run operations against GCP. So we have a shorthand authentication for gcloud to Google Cloud Platform (we'll go into more detail on that later, in the security section of the talk), and then, under the script, we're changing directory into the relay service and running our functions deploy for that Cloud Function. That should probably be wrapped in a deploy script

itself, but it's clearer this way that it is not a very complicated script. Now, if you were instead going to deploy a Cloud Run service, which you may have heard about today across the keynote and other sessions, you know that you are dealing with serverless containers. They have a bit of a different deployment model, so you are deploying a container image from Google Container Registry to your Cloud Run service, but it is just the same one-line command to make it happen. So we have added the new deployment

jobs to the end of our pipeline, and on every change it will run through each of these stages, ultimately resulting in deployed code. All right, three down, three to go. The next step is environments: how do we deal with those at Into the Woods? Just a quick reminder: we have the architecture, that's the design, right? What compute is in a microservice, what storage is in a microservice. And then we have environments, which we use for various purposes like demo, testing, and so on. So the team

sits down to discuss this. Do all these environments live in one GCP project? James says: no, let's have them more isolated, let's have one environment per GCP project, so we have many, many projects. Well, we need to look at the pros and cons of each approach. So, if each environment is a namespace: here we have QA and stage for the payments microservice, so QA payments compute and QA payments data, and stage payments compute and stage payments data, all in one project. There are advantages; Paris is right about that. There is easy access management: you go into

IAM once and set that up. And it is easy to just list all your resources, because they are all right there in one project. However, there are some global project settings, which you might have seen around regions and the datastore and so on, that can be set only once per project, and it could be hard to make all the different environments work with those single global settings. It's also extra work to connect the compute and the data for each of these environments;

as a matter of fact, it could get so bad that you cross wires and there's cross-talk between the environments. So the team decides not to have everything in one giant GCP project, but one project per environment instead: we have a staging project and we have a QA project. One downside is that we need to set up user access for every single project; another is that we have to keep track of all these projects. What's good about this, though, is that we have really strong isolation, with no cross-talk between

environments. So the team decides to go with this approach. Now the question is: how do we avoid QA bottlenecks? In the old, pre-serverless days at Into the Woods there was one QA environment, and if you were working on a branch you really wanted to test your stuff there, and everybody was fighting to get their stuff in. Well, we do need to keep track of these environments, so maybe we should give each developer their own development environment and their own QA environment, rather than infinite

environments. Let's see what that would look like. So now we have a plan around the number of environments: we have production, staging, a QA environment for each developer, and a development environment for each developer so that they can experiment. Each of these environments is related to the code changes in some way: production is tagged off the master branch when releases are cut, the master branch automatically deploys to staging, and individual feature branches deploy to QA. One of the challenges with this arrangement

of associating different lines of code with different environments is how the CI/CD system understands how to route them. For that, we have a bit of bash magic here: if the branch is master, this first part of the code says this must be for stage. And after the else statement, we look up a variable from the environment based on the developer who pushed that last change, and that maps to the developer's environment. Now, you could hardwire that and say, oh, we're going to just use

the test environment today, but providing a little bit of separation gives space to expand this later, perhaps to multiple environments that might be routed to. Still using this arrangement, our earlier fix to the master branch will automatically deploy to the staging environment, and bug fix number one, as it goes through its troubleshooting cycles, will be deployed to James's QA environment. Once it is approved and merged to master, it too will be automatically deployed,

updating the staging environment to the current picture of what would happen if a production release were tagged. And of course, once a release is tagged, production will be deployed. All right, the team has decided on four of the six decision points; the next one is serverless testing. Now, this is a little different from regular testing: testing something you run in your own data center or on-prem is a bit of a different beast. There are different types of testing. The junior developer loves

unit testing; there's also integration testing, system testing, and acceptance testing. Some are really useful to developers, some are more useful to business people; we need a mix of all these different types of testing. How do we do that? So here again we have bug fix one, the branch James has created, and he needs to merge it in. Let's see what testing happens at each point in this scenario. Whenever he commits code, his code is linted and unit tested in the CI workspace; this still has nothing to do with

serverless, nothing to do with Cloud Functions or Cloud Run or anything like that. Then, when it is auto-deployed, the automatic system tests run in the context of serverless: they run on GCP, and they can use all the GCP services and databases and such, so they are bigger than unit tests. Then, when he merges into master, the lint and unit tests run in the CI system again, and it is auto-deployed to staging, where the system tests run just as they did in James's QA environment. There are also

more business-focused, end-to-end tests that run in staging, and all branches are mixed together in staging, so we test all of them together. So, broadly speaking, there are two types of testing environments here. Inside the CI/CD system is good for simple testing, like unit tests. But when you actually test things like serverless databases, making calls to them, you're exercising more of GCP, and you really need to run those tests on GCP. Now, there are several serverless compute options. You may have heard of App Engine; you may also know Cloud

Functions; and earlier today you heard about Cloud Run. Let's go through and see how you would test each of these three serverless compute options. First, Cloud Functions. You can test them locally, and there are various ways of doing it; if you want to know more about testing Cloud Functions locally, you should go to our co-worker Stewart's talk on Thursday afternoon, where he will dig into more details around this. For unit testing, you would craft a mock HTTP request, and for system testing, well, in that case you

just deploy to GCP, you gather the URL, and then your test system hits that URL. Also worth noting here, Tess points out that it's good to have your more complex logic in secondary functions that aren't directly called. That makes it easier to unit test and troubleshoot that logic in the local environment, before you deploy everything up to GCP. What happens with App Engine? With the modern runtimes on App Engine, you do your local development and testing using your regular tools and your regular frameworks, whatever you're using:

Flask, Express, et cetera. For unit testing, there are libraries that mock out things like task queues and databases, and for system testing it's just like Cloud Functions: you deploy to GCP, you get a URL, and your test system hits that URL. It's also worth noting that there are lots of articles and blog posts by Googlers, and on Google's site, about how to test App Engine apps. So that was two of the serverless compute options. What about the third one, which we heard about earlier today?

Well, since Cloud Run is new, let's talk a little about what it is, to set the groundwork for how we would test on it. Cloud Run bridges the worlds of serverless and containers, providing a way to run a containerized service serverlessly. This means that you can build a service in a pretty conventional way, wrap it in a container, and deploy it. If you have no ops team, as this team seems to have, you can use managed Cloud Run; otherwise, you can run Cloud Run on GKE, where you have a Kubernetes cluster underneath that your own ops team can

manage. With this arrangement, all the considerations of how you would test a container come into play. For local development and debugging, you might run your service locally using your standard tools, or you might choose to build that container locally and run it, so that you are more closely simulating the production environment. For unit testing, you could run your unit tests the way you normally would when building code locally, or you might build a container that's loaded up with a number of development dependencies on top of the same

production dependencies that you have on Cloud Run itself. In that case, you can run your unit tests there and have available any of the system packages or other resources that are part of your service container. For system testing, much as we were suggesting with Cloud Functions and App Engine, go ahead and deploy your service to Cloud Run: it'll only be running when you're testing it, and otherwise it will scale to zero and not cause you trouble. With all of the different jobs that we've been looking at in the CI

CD system, there is this workspace that is spun up, in which we're running the linting checks, running unit tests, and running gcloud to trigger deployments. One of the problems you might find is the performance cost of initializing that environment before running all the tests; waiting for tests is kind of a bummer. So one thing we can do is find ways to optimize just how big those workspaces are. Can we use a smaller container image as part of that CI/CD workspace to initialize the environment? The main Cloud

SDK Docker image that you might use in a CI/CD workspace is pretty fully loaded: it's based on Debian, and it has a number of emulators and other things that you might not need initially. So you might want to go with a more stripped-down approach, using an Alpine-based image (a lean distribution of Linux that's really popular in containers) and adding to it just the pieces you need, in this case for your Cloud Run work. And so the team here has built a slightly customized version of the Cloud SDK image: pulling the static binary for

the Docker client, adding it to the official Alpine version of the Cloud SDK Docker image, and installing the beta components, which you need in order to interact with the Cloud Run service. This can lighten the size of the workspace container from around 700 megabytes to 220, and that can save you a good minute of download time on each of those jobs.
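One way to sketch that slimmed-down image as a Dockerfile; the image tags and component choices here are assumptions, not the team's exact file:

```dockerfile
# Start from the Alpine flavor of the Cloud SDK image, which is much
# smaller than the Debian-based default.
FROM google/cloud-sdk:alpine

# Add the static Docker client binary for container work in CI.
COPY --from=docker:stable /usr/local/bin/docker /usr/local/bin/docker

# Install the beta components needed to drive Cloud Run at the time.
RUN gcloud components install beta --quiet
```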

Another problem that can emerge from the particular way we've outlined the system here is the tendency to only trigger the tests when a developer takes an action. For continuously evolving software that seems fine: you'll trigger the tests on an ongoing basis and they'll continually be exercised. But for a mature service that has already been built and is running happily in the background, without anyone the wiser, you won't notice if bit rot sets in and makes that service unstable when you try to deploy it or issue changes. So re-running the deployment and testing process for all your services periodically is a viable way to make sure that they are all staying in a healthy state, whether or not developers are actively working on them. Through

this, the team has made decisions in five of the six areas; they're almost there. There is one area that is very important and that you should not forget about, and of course that is security. So the team sits down to discuss security. Paris knows that her deployment scripts need some secrets to deploy to GCP. Tess notes that to hit the payments API their payment provider offers, she needs an API key. So we see two types of secrets in this discussion: on one hand, operational secrets, which deal with managing the builds and CI/CD and all that stuff,

and on the other hand, runtime secrets, which are used by the runtime when it does what it's supposed to do. At that point James comes in and says: well, we have secure source control, this is not an open-source project, so why not keep the secrets in source control? At this point Paris has to come in and gently guide James, and explain that this is not something one does. There could be accidental disclosure: we don't want developers to see the passwords and then accidentally enter them somewhere else. And from time to time,

people actually break into source control systems. These secrets need to be in a safer place than source control. How do we do this? Paris has a proposal: let's handle operational secrets separately from the runtime secrets. Everything starts with the repo, right? We have the source code there; no, James, no secrets in the source code. Then, beside it, the CI/CD system has the operational secrets in it. One operational secret is the service account key that is used to

deploy code to GCP. The other operational secret is the name of a storage bucket, a Google Cloud Storage bucket, set as an environment variable in the runtime environment so that the runtime environment can read it. What does the runtime environment do with the name of the storage bucket? When it needs a secret, like the API key for the payment API, it can go and read from that Cloud Storage bucket and get that runtime secret. What's cool about this is that now the

reading of the runtime secret is managed through IAM in GCP: we can lock down who can read it. James can't read it, for example; Paris can't read it; only the runtime environment can. So setting up this arrangement means handling operational secrets and runtime secrets separately. Let's talk about how those operational secrets are put into the CI/CD system. In GitLab it's easy: you just enter them in the web UI. This means there's no file sitting around with the secrets in it; as soon as you have such a file, it is very dangerous, because somebody might accidentally check it in. This way, somebody types

them into the web UI and nobody saves them anywhere. Once you type these in, let's see where they go next. If we look at the particular variables we have here, the third one down is the service account key. This is a key associated with a service account from IAM, and it's been entered here in a base64-encoded way, so that if anyone accidentally looks at it, takes a screenshot, or echoes it out to a terminal, they have not accidentally seen it. Nice security through obscurity. And using that, every single job that runs in the CI/CD system can

then expand on that key: taking that environment variable, decoding it, and using it to authenticate gcloud, so that it can go on to act as that service account as it runs any deployment or other commands that you need to run. Now, as Paris notes here, you might find that this is a little bit easier if you were running from Cloud Build, because with Cloud Build, instead of needing to manage this key separately, you can grant permissions to the Cloud Build service account directly and it will operate with that; no additional

keys needed. So once you have the service account, what exactly do you do to give it the access it needs, all the permissions to run builds and deployments and perform other operations? You could give it the project owner or editor role. That's the Easy Button way to do it, but in that case, if anything were to expose that key, the blast radius would encompass almost all the things that can be done with your GCP project. Yes, somebody could just take down your production environment with that key, right? Just delete it. So instead, a

least-privilege approach is much better: figure out exactly which roles and Cloud Storage bucket ACLs you need, and assign those to the service account. That way you know you've got the minimum exposure. What's more, since we started on this course of every environment being its own GCP project, we might want to have a central project for things like build artifacts and this service account, so that we can have one central source of authority and then deploy into each of those other environments. In that case, we're

going to need to collect all of the different roles and permissions for the deployment process, shipping resources such as Cloud Functions into staging and production, and give those to the service account on a cross-project basis. This can be done by identifying the email address of the service account and adding it, like a normal user, to the other projects. For runtime secrets, we've charted this course of using a Cloud Storage bucket as a kind of vault, which the runtime function will look up and pull secrets from as needed. Here we have a

little bit of Bash magic in the first couple of lines of the slide, which looks up a Cloud Storage bucket that is specific to an environment, using a little trick for dynamic variable names in Bash. This bucket name is then passed into the function, Cloud Run service, or App Engine app when you deploy it, so that when those things start up, they can load the secret and keep it in memory to reuse for further requests in the future. With all this talk of giving permissions and roles to a service account, with keys exported to another system, there's a

lot of access there that you're still exposing, even though you're trying to take a least-privilege approach. Being able to deploy to production is not a trivial thing, and there are a number of ways you might approach that. Since we're using GitLab, you might take the open-source version of GitLab and host it on GKE, and I believe there's a code lab session on Thursday showing how to do that. One approach that Paris and the team are exploring here is splitting up the responsibilities of what the different systems will do: GitLab can be responsible for

everything up through QA, whereas Cloud Build will handle all the operations for staging and production. Since so many of our operations are wrapped up in Bash scripts, they can be easily executed by any runner, and so GitLab is going to perform the same operations as Cloud Build through those scripts. As we build out the additional stages, we might even add deployment and acceptance stages for those end-to-end tests. Let's go ahead and see if that demo from the start of the talk has completed and we have successfully

deployed our service. So here is the GitLab CI pipeline that we started earlier, and now we see we have some green check marks. The last one here, with the exclamation mark, looks like an allowed failure, but I'm not too worried about that right now. I want to see: did my QA environment get updated from this change? Looking at the terminal output from the CI system, it looks like it ran through all the steps, figured out an environment, and deployed to a demo environment, and here we have a URL to look at our Cloud Run service.
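The base64 trick Adam described for the service-account key can be sketched like this. This is a minimal, illustrative example, not the team's actual scripts: the file paths, the variable name GCLOUD_SERVICE_KEY, and the dummy key contents are all assumptions.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the service account's real JSON key file (illustrative only).
printf '{"type": "service_account", "project_id": "demo"}' > /tmp/key.json

# One-time setup: base64-encode the key so it can be pasted into the CI
# system's web UI as a variable (called GCLOUD_SERVICE_KEY here).
GCLOUD_SERVICE_KEY=$(base64 < /tmp/key.json | tr -d '\n')

# Inside a CI job: decode the variable back into a key file...
echo "$GCLOUD_SERVICE_KEY" | base64 --decode > /tmp/decoded-key.json

# ...and authenticate gcloud as the service account (not run in this sketch):
#   gcloud auth activate-service-account --key-file=/tmp/decoded-key.json

# The encode/decode round trip is lossless:
cmp -s /tmp/key.json /tmp/decoded-key.json && echo "key survived the round trip"
```

Once gcloud is authenticated this way, every subsequent gcloud command in that job runs as the service account, which is what lets the pipeline deploy without any human credentials.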

Well, that command disappeared, didn't it? That's because this Cloud Run service deploys in an authenticated-only way, and I don't have an authenticated command handy, so we will move on. Go back to the slides, please. Alright, so now that they have the whole CD pipeline set up, what have they learned along the way? First, Junior James: he was worried it would be a lot of bureaucracy, but he's happy to see that CI/CD actually helps him. Then Paris, who writes a lot of these scripts: she sees that the automation is worth it, but also that secrets management is always tricky. And the tester is happy

that they can test in environments that are exact carbon copies of the production environment, and also that they can release software more predictably.
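The dynamic-variable-name trick mentioned earlier, for looking up the per-environment secrets bucket, can be sketched like this. The variable names and bucket names are illustrative assumptions, not the ones from the slides:

```shell
#!/usr/bin/env bash
set -euo pipefail

# One operational variable per environment, as they might be entered in the
# CI system's web UI (names here are made up for illustration).
SECRETS_BUCKET_qa="my-project-qa-secrets"
SECRETS_BUCKET_staging="my-project-staging-secrets"
SECRETS_BUCKET_production="my-project-prod-secrets"

# The pipeline has already figured out which environment it is deploying to.
ENVIRONMENT="staging"

# Bash indirect expansion: build the variable name as a string, then
# dereference it with ${!name} to get that environment's bucket.
varname="SECRETS_BUCKET_${ENVIRONMENT}"
BUCKET="${!varname}"

echo "deploying with secrets bucket: $BUCKET"

# The bucket name would then be handed to the deploy command, for example:
#   gcloud functions deploy my-function --set-env-vars "SECRETS_BUCKET=$BUCKET"
```

The deployed function reads SECRETS_BUCKET from its environment at startup, fetches the runtime secrets from that bucket, and caches them in memory, which is exactly the operational-versus-runtime split described above.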
