Chris is a jack-of-all-trades developer at TriggerMesh, with code contributions ranging across the stack from low-level operating systems to websites, as well as contributions to the Kubernetes ecosystem. He is currently working on Aktion as a part of Tekton, and is a contributor to GitLab, helping to bootstrap their serverless integration.
Andrew is a long-time Jenkins contributor, and is the creator of Declarative Pipelines in Jenkins. He's a software engineer at CloudBees, working on pipelines in both Jenkins and Jenkins X, and is a contributor to Tekton Pipelines.
Kim is a Product Manager at Google Cloud. She's focused on making developers' lives easier as their code travels to production. Prior to working at Google she wrote code for a number of startups.
Christie Wilson (she/her) is a software engineer at Google, currently leading the Kubernetes-native pipeline project. Over the past ten years she has worked in the mobile, financial, and video game industries. Prior to working at Google she led a team of software developers building load testing tools for AAA video game titles, and founded the Vancouver chapter of PyLadies. In her spare time she influences company culture through cat pictures.
About the talk
Deciding on a CI/CD system for Kubernetes can be a frustrating experience: there are a gazillion to choose from, and traditional systems were built before Kubernetes existed. We've teamed up with industry leaders to build a standard set of components, APIs, and best practices for cloud-native CI/CD systems. Through examples and demos, we will show off new, Kubernetes-native resources that can be used to get your code from source to production with a modern development workflow that works in multi-cloud and hybrid cloud environments.
So welcome everyone to our session on next-generation CI/CD. I'm Kim Lewandowski, and I'm a product manager at Google. Hey everybody, I'm Christie Wilson. I'm also from Google Cloud, and I'm an engineering lead on Tekton. So before we get started, a quick show of hands: how many of you are running Kubernetes workloads today? Wow, quite a few, awesome. And how many of you are practicing CI/CD? Okay, awesome. And then, using Jenkins? Nice. Something else?
Okay, cool. So today we're going to cover some basics, and we're going to talk about a new project called Tekton that we've been working on. We have two guest speakers joining us today to talk about their integrations with Tekton. We're going to briefly cover Tekton governance, and then, finally, what's in the pipeline for us. So we're here to talk to you about the next generation of CI/CD, and we think that the key to taking a huge leap forward with CI/CD is cloud-native technology. I for one found myself using this term
cloud-native all the time, but I realized that I didn't actually know what it meant. So this is specifically what cloud-native means: cloud-native applications are open source, their architecture is microservices in containers as opposed to monolithic apps on VMs, those containers are dynamically orchestrated, and we optimize resource utilization and scale as needed. The key to all of this is containers. Containers have really changed the way that we distribute software: instead of building system-specific binaries, and then installing those and all of their dependencies, we can package up the binaries with all of the dependencies and configuration that they need, and then distribute that. But what do you do if you have a lot of containers? That's where Kubernetes comes in. Kubernetes is a platform for dynamically orchestrating containers. You can tell it how to run your container, what other services it needs, what storage it needs, and Kubernetes takes care of the rest. And in addition to that, Kubernetes abstracts away the underlying hardware, so you get functionality like: if a machine that's running your container goes down, it'll automatically be rescheduled to a machine that's up. At Google Cloud we have a hosted offering of Kubernetes called Google Kubernetes Engine, or GKE. So this is what cloud-native ends up looking like for most of us: we use containers as our most basic building block, then we dynamically orchestrate those containers with Kubernetes and control our resource utilization. And these are the technologies that we're using to build cloud-native CI/CD. Cool. For those not familiar with CI/CD, and it sounds like most of you are, it's
really a set of practices to get your code built, tested, and deployed. CI pipelines are usually kicked off by a pull request workflow and determine what code can be merged into a branch; and then there's CD, which is which code changes you then deploy, either automatically or manually. And what we've learned is that there's not really a one-size-fits-all solution. There are projects that just want something simple, something that works out of the box, and then there are companies with really complex requirements and processes that they must follow as their code travels from source to production. So it's an exciting time for us in this new cloud-native world for CI/CD. CI/CD systems are really going through a transformation: CI/CD systems can now be centered around containers, we can dynamically orchestrate those containers, and by using serverless methodologies we can control our resource costs. And if we define common APIs, we can take advantage of that power and not be locked in. But in this new world there's a lot of room for improvement. Problems that existed before are still true today, and some are just downright harder: if we break our services into microservices, they inherently consist of more pieces, have more dependencies, and can be difficult to manage. And the terminology is all over the place: the same words can mean different things depending on the tool. And there are a lot of tools. It seems like every week a new tool is announced, so I can't even keep up with all of them. And I know that our customers are having challenges making their own tooling decisions. So it's great to have this many choices, but it can often lead to fragmentation, confusion, and
complexity. But when you squint at all these solutions, at their core they all start to look the same: they have a concept of source code, of artifacts, of pipelines, etcetera. And the end goal is always the same: get my code from source to production as quickly and securely as possible. So at Google we took a step back, and after a few brainstorming sessions we asked ourselves if we could do the same thing for CI/CD that Kubernetes did for containers. That is, could we collaborate with industry leaders in the open to define a common set of components and guidelines for CI/CD systems to build, test, and deploy code anywhere? And that is what the Tekton project is all about. Tekton is a shared set of open-source, cloud-native building blocks for CI/CD systems. Even though Tekton runs on Kubernetes, the goal is to target any platform, any language, and any framework, whether that's GKE, on-prem, multi-cloud, or hybrid cloud. You name it. So Tekton started as a project within Knative: people got very excited to be able to build images on Kubernetes, and very quickly they wanted to do more. They wanted to run tests on those images, and they wanted to define more complex pipelines. So we decided to move it out and put it into its own GitHub org, where it became Tekton. The guiding vision of this project is CI/CD building blocks that are composable, declarative, and reproducible. We want to make it super easy and fast to build custom, extensible layers on top of these building blocks, so engineers can take an entire CI/CD pipeline and run it against their own infrastructure, or they can take pieces of that pipeline and run them in isolation. The more vendors that support Tekton, the more choices users will have, and they will be able to plug and play different pieces from multiple vendors with the same pipeline definition underneath. So Tekton is a collaborative effort, and we're already working on this project with companies including CloudBees, Red Hat, and IBM, and we've made a super big effort to make it easy for new contributors to join us. Pipelines is our first building block for Tekton, and now Christie
will be diving deeper. So Tekton Pipelines is all about cloud-native components for defining CI/CD pipelines. I'm going to go into a bit of detail about how it works and how it's implemented. The first thing to understand is that it's implemented using Kubernetes CRDs. CRD stands for custom resource definition, and it's a way of extending Kubernetes itself. Out of the box, Kubernetes comes with resources like Pods, Deployments, and Services, but through CRDs you can define your own resources, and then you create binaries called controllers that act on those resources. So what CRDs did we add in Tekton Pipelines? Our most basic building block is something we call a step. This is actually a Kubernetes container spec, which is an existing type. A container spec lets you specify an image and everything you need to run it, like what environment variables to use, what arguments, what volumes, etcetera. And the first new type we added is called the Task. A Task lets you combine steps: you define a sequence of steps which run in sequential order on the same Kubernetes node.
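A Task along these lines can be sketched as follows. This is an illustrative example rather than a slide from the talk: the task name, images, and commands are hypothetical, and the field names follow the early tekton.dev/v1alpha1 API, which may differ in later Tekton versions.

```yaml
# Hypothetical Task: two steps, each an ordinary Kubernetes container
# spec, executed in order on the same node (v1alpha1-era field names).
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: test-and-build
spec:
  steps:
  - name: unit-tests          # step 1: just a container spec
    image: golang:1.12
    command: ["go"]
    args: ["test", "./..."]
  - name: build               # step 2: runs after step 1 completes
    image: golang:1.12
    command: ["go"]
    args: ["build", "./..."]
```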
Our next new type is called a Pipeline. A Pipeline lets you combine Tasks together, and you can define the order that those Tasks run in: that can be sequentially, it can be concurrently, or you can create a complicated graph. The Tasks aren't guaranteed to execute on the same node, but through the Pipeline you can take the outputs from one Task and pass them as inputs to the next Task. Being able to define these more complex graphs will really speed up your pipeline. For example, in this pipeline we can get some of the faster activities out of the way in parallel first, like linting and running unit tests. Then, while we run some of the slower steps, like integration tests, we can do some of our other slower activities, like building images and setting up a test environment for end-to-end tests. So Tasks and Pipelines are types you define once and use again and again. To actually invoke them, you use PipelineRuns and TaskRuns, which are our next new types. These actually invoke the Pipelines and Tasks, but to do that you need runtime information, like what image registry should I use, what git repo should I be running against. And to provide that, you use our fifth and final type, PipelineResources. So altogether, we added five CRDs: we have Tasks, which are made up of steps; we have Pipelines, which are made up of Tasks; we invoke those using TaskRuns and PipelineRuns; and finally, we provide runtime information with PipelineResources. This runtime information gives us a lot of power, because suddenly you can take the same pipeline that you use to push to production and safely run it against your pull request, or you can shift even further left, and suddenly your contributors can run that same pipeline against their own infrastructure.
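Concretely, the runtime wiring might look something like this. This is a hedged sketch, not from the talk: the repo URL and names are made up, and the v1alpha1-era fields may differ in later versions.

```yaml
# Hypothetical PipelineResource naming the git repo to run against...
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-source
spec:
  type: git
  params:
  - name: url
    value: https://github.com/example/app   # made-up repository
---
# ...and a PipelineRun that invokes an existing Pipeline with it.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: app-pipeline-run-1
spec:
  pipelineRef:
    name: app-pipeline        # a Pipeline defined elsewhere
  resources:
  - name: source              # matches the resource the Pipeline declares
    resourceRef:
      name: app-source
```

Pointing the PipelineResource at a different repo, say a contributor's fork, reruns the identical Pipeline against different inputs.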
So this is what the architecture looks like at a very high level: users interact with Kubernetes to create Pipelines and Tasks, which are stored in Kubernetes itself. Then, when the user wants to run them, the user creates the run, which is picked up by a controller that is managed by Kubernetes, and the controller realizes it by creating the appropriate pods and container instances. Cool. So today I'm excited to welcome engineers from CloudBees and TriggerMesh to talk about how they've been integrating with the Tekton project, and I want to highlight that they were able to do this very quickly because we put a ton of time and effort into onboarding new collaborators. First, I'd like to introduce Andrew Bayer on stage to talk to us about how Jenkins X is integrating with Tekton. Andrew is an engineer at CloudBees, working on pipelines in both Jenkins and Jenkins X. Thanks. So, that's a lot of people who are using Kubernetes, etcetera. In case you're not familiar with Jenkins X, let me try, and probably fail, to explain it very well, and then get corrected. Jenkins X is an opinionated CI/CD experience for Kubernetes, designed to run on Kubernetes and target Kubernetes. You can use it for building both traditional and cloud-native workloads. You can create new applications or import existing applications into Kubernetes, and take advantage of quickstarts and build packs that are designed to get the initial setup for the
project, etcetera, done without having to do it all by hand. It's fully automated CI/CD, integrating with GitHub via pull requests, with a lot of automation and GitOps promotions, etcetera, without you actually having to go click stuff by hand. It's got promotions; it's got staging, dev, and prod environment integration; and a whole lot of other magic. I'm here specifically to talk about how Jenkins X is using Tekton Pipelines. A user is not actually going to necessarily know that they're using Tekton Pipelines. Behind the scenes, we have our own ways of defining your pipelines in Jenkins X, either via a standard build pack or when you define your own pipeline using our syntax. Then at runtime, when a build gets kicked off, Jenkins X translates that pipeline into the CRDs that are necessary to run a Tekton pipeline, and then Jenkins X monitors the pipeline execution, etcetera. That means that, like I said, the user isn't directly interacting with Tekton; the user is interacting with Jenkins X. And that means that we can do a lot of things on our
side, without having to worry about exactly how the execution is going to work. So why are we using Tekton Pipelines? Like I said, I've been working on Jenkins pipelines for a while now, and what we've come to learn is that pipeline execution really should be standardized and commoditized. CI/CD tools all over the place have reinvented the wheel many times, and there's no reason for us to keep doing that, so I'm really excited about that. And we really like that we can translate our syntax into what's necessary for the pipelines to actually execute, so that we're still able to provide an opinionated and curated experience for Jenkins X users and pipeline authors, without having to worry about getting exactly the right syntax and verbosity, etcetera. And it gets us away from the traditional Jenkins world of a long-running JVM controlling all execution, which is, you know, good. But the best part is, as Kim mentioned, how great it is to contribute and get involved with Tekton Pipelines. I only got involved with this at all
starting in November, and we've been able to contribute significantly to the project, help with figuring out direction, fix bugs, integrate it with Jenkins X, and get this all to the point of being pretty much production-ready in just a few months, and that's phenomenal. It's just been a great experience and an incredibly welcoming community, and it's been a lot of fun. I don't have an actual demo, exactly, so what I wanted to show you here was just, quickly, how much of a difference there is between the syntax users are authoring and what Tekton actually needs to run, and why we think that's valuable. So this is roughly what an obviously brain-dead-simple pipeline in Jenkins X would look like: just, you know, 26 lines. And then we transform it into, well, a lot more than that, because we're able to generate that and not require the user to author it all by hand. We're able to inject Jenkins X's opinions about environment variables, about what images should be used, and a lot more.
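To give a feel for the gap between the two, here is an illustrative sketch of the user-facing side. This is not the slide from the talk, the exact jenkins-x.yml syntax has evolved over time, and the names and commands here are made up.

```yaml
# Hypothetical minimal Jenkins X pipeline definition: the user writes
# this short file, and Jenkins X expands it into the much larger set
# of Tekton CRDs at build time.
buildPack: go                  # inherit defaults from the Go build pack
pipelineConfig:
  pipelines:
    pullRequest:
      build:
        steps:
        - sh: make test        # one shell step; images, env vars, and
                               # other opinions are injected by Jenkins X
```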
It's been really great for us to be able to have the full power of Tekton Pipelines behind the scenes during execution, without needing to make the user worry about all those details all the time. That's been very productive for us. So next, I'd like to introduce Chris from TriggerMesh to talk about the work he's been doing. Thanks, Andrew. Hi everyone. My name is Chris Baumbauer,
a developer with TriggerMesh, and also one of the co-authors of Aktion, TriggerMesh's actions tool. This came about as a way of tying into GitHub Actions' workflow syntax, once it was announced last October, by creating a way to combine it with the Tekton pipeline approach. The idea being that we can translate that workflow into the various resources that Tekton Pipelines makes available, then feed that into your Kubernetes cluster and be able to experiment with it, or even create additional hooks so that it can actually receive external stimuli through something like the Knative Eventing service: things such as a pull request from something like Git, or maybe some other web form being filled out, that triggers that build, that workflow, in the background, provides the result, and ultimately allows you to run things from anywhere. So, a little bit of a mapping exercise for how the terminology lines up. With GitHub Actions, you have the concept
of the workflow, and the workflow would be the equivalent of the Tekton pipeline: this is where anything and everything runs, task-wise. And then a single GitHub action is the equivalent of a single step or task, where it's going to be that one container that runs that command and produces the output. And I do want to call out one of the other components within a GitHub action, which is called "uses": this is how you define the image that you want to run within your Tekton pipeline task.
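That mapping can be sketched roughly as follows. This is a hypothetical example, not output from the demo: the translation shown is approximate, and the v1alpha1-era field names may differ across Tekton versions.

```yaml
# Hypothetical translation of a single GitHub action into a Tekton Task:
# the workflow maps to a Pipeline, each action to a step/task, and the
# action's "uses" supplies the step's container image.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: hello-action           # one GitHub action becomes one Task
spec:
  steps:
  - name: hello                # the action's single container
    image: centos:7            # what the action's "uses" resolves to
    command: ["echo"]          # the action's "runs" entrypoint
    args: ["hello world"]      # the action's "args"
    env:
    - name: GREETING           # the action's "env" block
      value: hello
```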
Whereas the Tekton pipeline task expects a particular image, what we end up translating with the GitHub Actions support (or at least we will, once I finish my pull request) is the full support that GitHub has for their actions: being pointed at another GitHub repo, or at some other local directory within that repo that you have predefined, to be able to build the Dockerfile that you can then feed into the task to work the magic, as it were. So, as far as our little pretty picture of the TriggerMesh Aktion goes, we actually have two commands that handle everything. We have, down at the bottom, "create": this one creates the tasks and the associated pipeline resources, and you can also use it to create the TaskRun or the PipelineRun object, to create a one-shot invocation, usually if you want to test something out to see if you've got your workflow working just right, or if you wanted something else to call into this. And up above, we also create a
Knative Eventing sink, as well as the associated transceiver, which creates a serverless service within Knative to handle the input via the creation and invocation of that TaskRun object. So let me give you a quick demo of what we have. Let's see... perfect. What we're looking at right now is the customary hello world example. You'll notice up at the top we have our workflow defined, this one being more like our pipeline object: it indicates all of the actions that would be associated with our workflow, and the "on" indicates the type of event that would trigger it within your repo. And right below that, for the action, you'll have some kind of identifier or name, the fact that we're using CentOS as our base image, and of course we're just going to run hello, with a specification for the environment variable. We also see that they allow you to pass in either a string or an array of strings,
which is one of the nice things about their language, and as for our rough translation, we take care of that for you. So with the Aktion command itself, as mentioned beforehand, you have your "create" and you have your "launch". One of the things that we had originally started working on was our own implementation of the parser for the GitHub Actions workflow syntax, but since GitHub was kind enough to open-source theirs back in February, so that experimental projects such as this one can make use of it, we've started transitioning to that one. So now it acts as more of a sanity check on whatever workflows you feed in, to ensure that they do what they're supposed to do. As far as the common global arguments go, we do allow passing in your git repository when it comes to creating things such as the collection of tasks with the "create" command. It's used not only to create a pipeline resource that can be referenced by additional steps, in case you wanted to add things to it, but also in the case of specifying that local directory, so that we know which repo to pull the Dockerfile from
in order to build the image. So now, back to our hello world: we just feed everything in as is, and we have our simple Task object. You'll see our steps have actually been broken down: we have our CentOS image, we have our environment variables, and we have our new and improved name, which has been Kubernetes-ified to resemble their traditional naming scheme, and then we can pass the manifest in to apply it as an object. And then, everyone's favorite: hopefully it'll still talk to my Kubernetes cluster... and we have our objects created. And it looks like it just finished. So we have our true, we have our success, and we also have a pod name, so I can go here and take a look at any output, in the case of failures, or if there was something that you wanted to grab as part of any of the success messages, and other things along those lines. And then we'll look at "launch". The one thing it does require is that we pass in a task, and it also requires that you specify a GitHub repo. All right.
And with this one, here we have our Eventing source definition, specifying the GitHub repo to watch, the credentials to pull in, and also the task to create and fire. And that, I believe, is pretty much it. Awesome. So thank you Andrew and Chris again for sharing your work with us. Like I said before, we're not doing this alone, and Tekton is actually part of a new foundation called the Continuous Delivery Foundation. This is an open foundation where we're collaborating with industry leaders on best practices and
guidelines for the next generation of CI/CD systems. The initial projects of the CDF include Tekton, Jenkins, Spinnaker, and Jenkins X. Now, you've seen Tekton's integration with Jenkins X; we're also excited that we're starting to integrate with Spinnaker as well. And these are the current members of the CDF, and we're really excited to work with them on our mission to help companies practice continuous delivery and bring value to their customers as fast and securely as possible.
So if you want to learn more about the CDF, please check out cd.foundation to get involved, or to just kind of watch what's going on. Alright, so what's coming next? For the CDF, we have a co-located summit coming up at KubeCon Barcelona on May 20th. And if you're interested in what's coming down the pipeline for Tekton Pipelines, we're currently working on some exciting features like conditional execution, event triggering, and more. And for Tekton itself, we're looking forward to expanding the scope and adding more projects; we recently had a dashboard project added. So, for takeaways: if you're interested in contributing to Tekton, or you're interested in integrating with Tekton, please check out the contributing guide in our GitHub repo. It has information about how to get started developing, and also how you can join our Slack channel, our working group meetings, etcetera. If you're an end user, check out Jenkins X and Aktion, and watch this space for more exciting Tekton integrations in the future. And that's it. Thanks so much for listening to our talk.