Duration 39:05

RailsConf 2019 - JRuby on Rails: From Zero to Scale by Charles Oliver Nutter & Thomas E Enebo

Charles Nutter
Software Engineer at Red Hat
RailsConf 2019
April 30, 2019, Minneapolis, USA

About speakers

Charles Nutter
Software Engineer at Red Hat
Thomas Enebo
Senior Principal Software Engineer at Red Hat

Maintainer of the JRuby project, Java Champion and Ruby Hero; works at Red Hat


I have a variety of older programming experiences, but I primarily have been using Java for the last 20 years and Ruby for the last 15. I am co-lead of the JRuby project and spend a large amount of time in the guts of the JRuby runtime. Otherwise, I am playing with Ruby and writing useful libraries (at least to me). Specialties: JRuby, Ruby, Java


About the talk



JRuby is deployed by hundreds of companies around the world, running Rails and other services at higher speeds and with better scalability than any other runtime. With JRuby you get better utilization of system resources, the performance and tooling of the JVM, and a massive collection of libraries to add to your toolbox, all without leaving Ruby behind.

In this talk, we'll walk you through the early stages of using JRuby, whether for a new app or a migration from CRuby. We will show how to deploy your JRuby app using various services. We'll cover the basics of troubleshooting performance and configuring your system for concurrency. By the end of this talk, you’ll have the knowledge you need to save money and time by building on JRuby.


We're going to talk to you about JRuby. We're going to talk about why you might be interested. Before we get started: who is running JRuby for something today? Cool, a good part of the room. So we're going to show you basically how to get up and going with JRuby, show you some of the reasons why it would be interesting for you to consider it for deploying Rails or other applications, and hopefully draw you in so you'll try some stuff out. I am Charles. We have been the JRuby guys for 12 or 13 years now; we've been working on this, trying to make JRuby run well, make it do Ruby well on top of the Java platform.

Before we get going: one of our former contributors, Ola Bini, if you've heard of him before, was one of the core folks back in 2006 and 2007 who really helped us get Rails going. He wrote a whole bunch of the core libraries we still use today. Because of his connections with Julian Assange and the fact that he was living in Ecuador, he's been arrested, illegally we believe, and is being held for 90 days without a trial. It's a horrible situation. If you want to follow it, check out the FreeOlaBini Twitter account. We're trying to get some money together to help with his defense; hopefully we can get him out of this Ecuadorian prison he's trapped in. And with that, I'll let Tom take over.

Okay, I have a horribly large number of slides to get through. I have 24 minutes of slides, and this is going to happen in 20 minutes. So what is JRuby? It's a Ruby implementation first: you should be able to run any pure-Ruby code and it should just work. If it doesn't, file a bug, unless it uses something like fork, one of the few things that we can't support. We don't support native C extensions; we'll be talking about this quite a bit later in the talk.

We're built on top of the Java platform, so we get all the benefits, and the drawbacks, that come from that. There are tons and tons of tools; in this particular case we're showing some screenshots of VisualVM, where you can see the GC going, a live view of it. There are so many tools for the JVM for looking at the health of your application; this is just one of them, free and built into a lot of stuff. We have no global interpreter lock, which means our native threads can easily saturate and use all your CPUs. That's a good thing, and we'll be getting into it more later too. And there's a JVM or Java library for absolutely everything in the world, so if you're ever stuck because a Ruby gem doesn't provide something, there's a Java library you can lean on.

Today the current line is JRuby 9.2, which has Ruby 2.5 support; this is the only branch we're currently supporting. We had the JRuby 9.1 release, which was Ruby 2.3 compatible; that's end-of-life now. We do have a Ruby 2.6 branch, but we're wondering: can we skip putting that out and maybe put out Ruby 2.7 support next? We did this once before, for Ruby 2.4, and we didn't really have any complaints. The reason for this is to spend more time improving JRuby versus maintaining multiple releases; bootstrapping a new Ruby compatibility level is a lot of extra work, so we might push it back to next year. If you're using 2.6, check on our progress at the URL at the end of the talk; it shows you all the 2.6 features and what we have done. Okay, so what does getting started with JRuby look like?

If you've not used it before, there's really not a whole lot to it. Of course you can just go to the website: there are tarballs, zips, even a Windows installer, so you can unpack it, throw it onto your PATH, and you're pretty much ready to go. Basically all you need is to have some JDK. We still recommend Java 8; Java 9 and higher have various security restrictions, and they're locking stuff down, so there are warnings that might be printed out when you run on those. They work well, but it's just something to keep in mind. We're mainly testing on Java 8 right now.

Then install JRuby through one of these mechanisms: the tarball, or your regular Ruby installers out there, which have all taken JRuby into their management as well. That's really all there is to getting JRuby up and going; the only real prerequisite is having a JVM available, and then the usual ways to install. Here we're using rvm, so we switch to JRuby and we've got IRB. Here's a little bit of code actually calling into some of the core JVM classes, just like you would call any piece of Ruby code. Everything that's available at the JVM level is also available to Ruby.
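As a concrete sketch of that install flow (assuming rvm, as in the talk; rbenv and the other managers work similarly, and these commands obviously need rvm and a JDK already present):

```shell
# Install a JDK first (Java 8 was the speakers' recommendation), then:
rvm install jruby      # fetch the current JRuby release
rvm use jruby          # switch the current shell to it
jruby -v               # confirm the install
jruby -S irb           # an IRB session with full JVM access
```

From that IRB session, any class on the JVM is reachable as if it were Ruby.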

It's a nice way to get access to libraries that aren't otherwise available in the Ruby world. So why is startup a difficult problem for us? Why is this our perennial issue? Well, look at CRuby: everything in CRuby is already compiled native. When it boots, the parser is native, the compiler that constructs instruction sequences, the entire interpreter, all of the core classes. Whereas in JRuby all of those things are written in Java and compiled to JVM bytecode, so the JVM has to warm it up, has to decide what's hot, has to optimize it. Everything in JRuby starts cold, from parsing all the way through execution. The JVM eventually does optimize this, as we'll see in some of these benchmarks; it optimizes it very well and we get a very fast Ruby, but it takes time, and you don't have that at startup. For the first five or ten seconds, nothing is actually running as fast as it could be.

As an example of the worst case here: in blue is CRuby, and in green is JRuby without any tweaks. If you just start running JRuby normally, it's four, five, six times slower for common commands like listing gems or running rake tasks, and that's not what you want. As an example of how this isn't really our fault, necessarily (the fact that we're on the JVM is the main issue): this is the same run of gem list, but run repeatedly within one JVM so it can warm up and let all that code get hot, and it gets down basically to where CRuby is. So we're looking at various ways to improve this and jump closer to the end of that warm-up curve when we start up. Right now it's a bit of a problem, and there are three things we're looking at to help improve it.

The first thing to look at is the --dev flag. It just turns off our internal JIT and tweaks the JVM so it doesn't optimize as much, and this cuts off a good 30 to 40% of any major command's startup, getting it into at least a comfortable range. One thing to note: if you throw it into an environment variable, don't come to us and report terrible performance on a benchmark. We're just going to say, you know, what I do all the time: I run a benchmark, it runs terribly, and it turns out I've got this flag turned on. So that's the one downside, but it does make a big difference if you're running JRuby and startup time is driving you crazy.

Coming up, there are more JVM-level features. OpenJDK from 9 and higher has a feature called class data sharing. This basically lets us share all of that JRuby class information, all of our core code, across JVM runs, so there's less to boot up and we get going a little more quickly. This is starting to improve our startup situation. To do this, there's a jruby-startup gem you can install that comes with a generator; it sets up application class data sharing, you run it, and then we should pick it up and start up faster. We'll add more tools and tricks into this jruby-startup gem to fix things and speed them up. The JVM from IBM, called OpenJ9, has a quickstart feature: all you have to do is pass two flags to it, and not only will it share all of our core class information, it'll actually share compiled code across runs. As the application boots up, optimizes, and gets warm, it saves off native code, and that native code can then be picked up by future runs. That's getting us much closer to the end of that warm-up curve, and we're going to be pushing on this a little more to see what we can do. If we look at where our best times are, like I say, it comes down from five to six times slower than CRuby into the three to four times slower range. That's enough to be uncomfortable, but maybe not enough to be super annoying all the time.

The next big thing we're looking at: there are newer features on the JVM side for doing ahead-of-time compilation of JVM applications. One of these projects that has been very popular recently is called GraalVM, and the idea is that we can compile the core of JRuby and have all of that run native right away, but then continue to optimize the Ruby code and still load your stuff at runtime. Hopefully this will get us closer on startup. If we look at some comparative numbers with the TruffleRuby implementation, which is 100% GraalVM-based, their simple -e startup performance is certainly more like what we want; we want to get down closer to CRuby performance for starting up a simple command. But if you actually boot up Ruby code, it still ends up being a little slower.
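Pulled together, the startup tweaks above look roughly like this. The flag spellings are as described in the talk; the generate-appcds command name is my assumption about the jruby-startup gem's generator, so check the gem's docs:

```shell
# 1. Fast-boot mode: disable JRuby's internal JIT, tune the JVM for quick startup
jruby --dev -S gem list

# 2. OpenJDK (9+) class data sharing via the jruby-startup gem
gem install jruby-startup
generate-appcds            # writes a shared class archive reused by later runs

# 3. IBM OpenJ9: share classes AND compiled native code across runs
export JAVA_OPTS="-Xquickstart -Xshareclasses"
jruby -S rake -T
```

Remember the caveat: if --dev ends up in an environment variable, your benchmarks will look terrible, because it deliberately trades peak speed for boot speed.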

We want to try to get the best of both worlds together to improve startup time. So remember those options; if you start playing with JRuby, you'll be a little bit happier with the process. A couple of other tips for making a move to JRuby, or starting with it. Don't mix gem paths. Some of the Ruby installers will let you share gems across different versions of CRuby; that doesn't work so well with JRuby, because we have different platform gems, we have our own Nokogiri version, stuff like that. Usually this will be separated by your Ruby installer, but it's something to keep in mind.

Watch out for .ruby-version if you're working with an existing Ruby application: it's not going to switch to a JRuby that's compatible with Ruby 2.5.3, it's going to switch to CRuby. So you've got to keep that in mind if you're switching back and forth or testing an application; it catches me constantly. Usually I just delete the file, but if you keep it in mind, you'll probably be all right. Finally on getting started: if you need help, we have our GitHub project, of course, and we are now monitoring three different chat rooms. I would love to try and get this down to one.

But this is the world we live in now. We have our IRC channel, we have Gitter, and we're also playing around with JRuby on Matrix, the open source chat service. There's a mailing list if you're into that; not a whole lot of traffic, but there are folks monitoring it. Okay, now: JRuby on Rails. We talked a little bit about why JRuby and what we're getting out of it; let's talk about JRuby on Rails itself. We've been working on this for a long time, like I say, 13 years; we've been running Rails since one of the early 1.x versions. Last week we were at RubyKaigi in Fukuoka, Japan, and we asked whether people there run JRuby in production, and the answer is yes: there are lots and lots of companies out there, some running huge, hundred-machine clusters of JRuby.

A lot of times it's the last mile for scaling: they can't get CRuby to go any bigger without massive cost, so they can move to JRuby, stay with Ruby, and stick with it for longer, and like we say, benefit from the JVM and the libraries and all of that. We honestly believe that this is the best way to scale Ruby and Rails applications, in general and Rails in particular.

At some point any application is going to get to the level where it needs more. So what kind of differences are there at the application level? It's fairly minimal. This is based on a plain generated application. There are a couple of different gems in there: a different database driver, a different backend for doing the JavaScript stuff, things along those lines. The Spring preloader depends on fork to work, so that's not something we support at the moment. The database configs are minimal; we worked hard on this, so almost nothing has to change in there, but the JVM database libraries don't do Unix sockets, so you'll be connecting using a host and port.

And then finally, here is the Puma configuration, just to show that if you're scaling JRuby up, it's not via processes or workers: you can just throw as many threads as you want at it, and it will scale up across all the cores and saturate the whole system. I just wanted to add one thing about that very first picture of the Gemfile: when you actually generate an application with JRuby and Rails, the Gemfile it writes is already showing you what the differences are.
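The Puma point above can be sketched as a config fragment; the thread counts here are illustrative, not numbers from the talk:

```ruby
# config/puma.rb
# On JRuby, scale with real native threads in a single process.
# Forked workers are a CRuby pattern; JRuby does not support fork.
workers 0
threads 8, 32   # min, max threads; raise the max until your cores saturate
```

Because there is no global interpreter lock, those threads genuinely run in parallel, which is what lets one JRuby process use the whole machine.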

These are minor changes you need to make, but the standard generator does generate a JRuby-specific Gemfile, so out of the box Rails does support JRuby. All right, let's look at the current progress of how we support Rails. The first one to look at is 5.2.3: as you can see, we're passing at four nines, with just a couple of weird items on here. Action Cable doesn't actually run its suite in 5.2.3, but if we look forward to Rails 6 we can see that there are errors there, so I'm not really sure what's wrong in 5.2.3.

So we're looking forward to Rails 6 support. These results also used fork until like last Friday, so with rc2 the suite will no longer use fork for testing and we'll actually have valid results; I ran it locally and we nearly pass everything in real time. The errors that we do have are typically really weird ones, like getting a floating-point representation of a date and having it round slightly differently. We want to go and submit PRs to get rid of some of these; a lot of them are just bad tests that should be doing a within-a-range kind of match.

For the ActiveRecord support, these are our results for running Rails 5.2 against our own project, ActiveRecord-JDBC. It's more green because it's our own project, but we do have a bunch of excludes for tests that are pretty low value, like some date handling that differs slightly between the Java date system and the Ruby date system; actually, a lot of these aren't bugs at all. We run the PG-specific tests in ActiveRecord because they provide good value for us, but 20% of them are just super specific to PG internals and we don't care.

If we look at Rails 6, again we're doing quite well. There are a few more failures to work through, and a few more projects have errors. Ten of the errors happen to be that the pub/sub adapter for Postgres directly uses PG-specific APIs. We already sort of emulate PG a little bit in ActiveRecord-JDBC, so this should be pretty simple to fix, but we have to look at it. If we switch over to the ActiveRecord-JDBC port for Rails 6, we're basically there: we have 40 excludes, in PostgreSQL only, so we still have a little bit of work to do. Also, since version 6.0.0.rc1 is out, you should be able to just seamlessly try Rails 6 rc1 now. I want to give a shout-out to Daniel Ritz: he has been keeping this project up to date in real time for Rails 6, and we really appreciate it. Good job, Daniel. All right, let's change gears a little bit. Let's say you have a CRuby Rails application and you want to move it to JRuby.

I happen to be doing this right now: I'm trying to convert Discourse to JRuby so that it works. It's a massive application: it has over 520 gems in the lockfile and a quarter of a million lines of code. At this point I've had to work through approximately five gems that weren't supported by JRuby, and the app is working, except it's not displaying the forum posts, and that is not a Ruby problem: it loads gazillions of lines of JavaScript and there's some JavaScript issue to work through. But it's mostly working at this point. So let's use this as inspiration for working through your own application.

Using JRuby, you install the jruby-lint gem, cd into the app directory, and run it. What's going to happen is you're going to get a report. The first thing it does is look at the Gemfile and make suggestions when it knows you have a potentially problematic gem: for a gem that won't work on JRuby, it suggests a replacement. The data is actually sourced off of a wiki page.

So if you're a JRuby user and you look at this and you see something missing, please add it; you just have to update the table. The other thing it does is simple static analysis. In this particular example, we're or-equaling a time delta together; if this is happening across multiple threads at a time, you're going to be losing some of those updates. So you just need to audit each of these cases and make sure there isn't an atomic-update issue associated with it. In most cases there isn't: it's usually initialization that happens while the app is booting, in a class body or something like that.

The report also points out things we don't support, like fork, or things that no one should ever use, like ObjectSpace.each_object. At some point, once you've gone through the report, you just have to try doing a bundle install, and you may or may not run into a gem with a native extension we don't support. So you have to come up with a strategy for that. In some cases there's a JRuby-compatible variant of the gem you can use instead, or maybe the gem isn't that important to you, so you just put a platforms directive on it to get rid of it.

Another thing you could do is just rewrite it in Ruby. We run Ruby pretty fast; maybe that's acceptable. Although if it's a hot path, like it is in Discourse, then maybe you have to go back and use a native extension. This next one wasn't needed for Discourse at all, but there's a project called FFI, the foreign function interface, which allows you to bind to a shared library and call its functions or interact with its structs. The best part is that you use a Ruby syntax to do the binding, and all the implementations support it. Here's an idea of how this would work in code.

You make a module and extend it with the library methods from FFI; that gives you ffi_lib, which allows you to bind to the shared library, and then you just make a series of attach_function calls where you give each function's name, its input parameter types, and its return type, and then you can just call them. Pretty cool. Another feature we have is the ability to interact with Java classes and objects as if they were Ruby classes and objects: you can just import a Java library. Typically, to make the equivalent of an extension, you'll just write a little bit of boilerplate facade code to make it feel a little more Ruby-like.

I'm working on a port of mini_racer, which is a very minimal set of V8 bindings. I found a library called J2V8, which has a bunch of C code that allows Java programmers to access V8, and now I'm scripting that Java from Ruby, and it works great. Here's an idea of how that looks: you load the library you want to use, then you import the particular Java class you want; here I'm calling a class method and storing the result in an instance variable.

Then I'm just executing methods on it, and you'll notice execute_script with an underscore: in Java that would be executeScript, but we have a whole bunch of convenience aliases so that you can make Java classes feel a bit more like Ruby. The last strategy is to use our Java native extension API. It's an analog to C extensions in CRuby, but it's written in Java and it's our API. This will be the most work out of all the strategies, but it will also probably give you the best performance.
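A sketch of that snake_case aliasing. The helper below is my approximation of the name mapping JRuby applies; the guarded section only actually runs on JRuby, where the java interop layer exists:

```ruby
# Approximation of JRuby's method-name aliasing: a snake_case call
# like execute_script is dispatched to Java's camelCase executeScript.
def snake_to_camel(name)
  head, *rest = name.to_s.split('_')
  head + rest.map(&:capitalize).join
end

if RUBY_PLATFORM == 'java'
  require 'java'
  # On JRuby, both spellings reach the same Java method:
  puts java.lang.Runtime.runtime.available_processors
  puts java.lang.Runtime.getRuntime.availableProcessors
end
```

The aliases are pure convenience: the original camelCase names keep working, so code copied from Java examples still runs.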

I've been casually porting Oj for some time now. It's a surprisingly large gem for parsing JSON, about 20,000 lines of C, but it does absolutely anything you could ever think of doing with JSON, and about 20 or 30 more things that you haven't thought of, so it's super sophisticated. This is a taste of what the extension API looks like: at the top we have a Java annotation specifying that this Java class is the Ruby module Oj, and then on the load method we have another annotation specifying that this is a module method with one required argument, and rest equals true means it can accept additional arguments.

So admittedly, porting gems when there's a mismatch is some level of pain. Fortunately, most of the common gems have already been ported over the years, and I guess the silver lining is that once someone actually ports one and releases it, we all get it. We would definitely love to support people who have a particular gem in mind that they really want to use; usually you can do a line-for-line port of the C code and then you've got it on JRuby, so everyone benefits. We'd love to see more of that.

Okay, over to my slides. So now we have JRuby on Rails: we've done the migration, we've got this up and going. Most of the standard tools that you're used to will work. We recommend Puma as a server, which is the default Rails server, I believe. Most of the same deployment tools work, and we do have additional options because we're on the JVM: you can package the app as a single archive and then just run the archive, and that's your whole app, or you can deploy it into a Java application server if you want to do that. One thing to note here is that the JVM expects it's going to be the boss of the whole machine it's running on, and it's going to use as much memory as it possibly can unless you tell it not to. I'll show you in a little bit: when we run the benchmarks it kind of blows up to a large memory space it doesn't necessarily need, and you can choke it down.

Next: scaling Rails out. This is a classic problem on CRuby; there have been plenty of companies that have risen up, become very successful, and then disappeared again trying to get Ruby and Rails to scale. The problem is that having no concurrent threads on CRuby really limits what you can do. You can shunt some amount of I/O and some of the database work off and kind of run things concurrently, but at some level, if you've got a CPU-bound application rather than an I/O-bound one, you're going to run into this, and you're going to need to spin up another CRuby, and another CRuby, and another CRuby, for as many concurrent operations as you really want to happen. Even with copy-on-write, as you'll see in some of the memory graphs we've got later, you're going to waste a lot of resources. At the very least you've got multiple Ruby runtimes, all

with their own garbage collectors, all wasting CPU cycles on their own memory space, and that's not how garbage collection is most efficient. It's most efficient when it's one process with a large heap and everybody's working on that same heap. So we believe, of course, that JRuby is the answer to this: we can run a single JRuby instance with multiple threads and it's your entire site. In fact, you can run two if you want some failover in a second process; you're still going to use less memory and still saturate the entire system. A single process with an awesome garbage collector and an awesome JIT is just going to use resources better once you get to a certain scale. I'm not saying one JRuby instance versus one CRuby process will be smaller, but once you get to four or five, we win pretty clearly.

Also, one user story that we like to talk about. We hear these stories all the time, but this is one that really stood out to us. A very large Rails application was running on 40 XL instances on EC2; they'd set up 40 workers per server to make this thing scale out for what they needed. They were getting about 100K to 150K requests per minute, with 50 to 75 millisecond response times. Just fine; they wanted to at least maintain that, but hoped to improve it. They moved to JRuby and then made other changes in their system to take advantage of native threads, and the 40 XL instances went down to 10. That's a 75% cost reduction on their EC2 bill, and it was actually faster: consistently well over 150,000 requests per minute, better response times, and better use of memory, obviously, because we've got one quarter of the machines we had before. These are the kinds of stories that really show us

that this is the way to scale Ruby and Rails. So what are we trying to do here? Optimizing for straight-line performance in Rails is very difficult; it's only in the past year or so that we've really seen JRuby start to shine and be significantly faster than CRuby performance there, but now it's looking really good. For more information on this, Kokubun did his talk on the new CRuby JIT and how that runs Rails; we're all struggling with getting this kind of code to optimize. Rails was where JRuby ran a little bit slower, or roughly the same, until very recently, so we've been very excited to talk about this over the past year.

So, my little benchmark. I've just been using the basic Rails framework in a very simple application: a scaffolded blog application, the standard scaffold generate that you can do, running in a more realistic scenario on an EC2 extra large. This is Postgres-based; did I mention it's a Postgres-based application? I didn't throw the TruffleRuby numbers in here because they were really weird, and they're still

working on it. So CRuby versus JRuby is what we're going to see on these slides. With the benchmark drivers we have lots and lots of variability depending on how we run the benchmark and what server we use; Tom will talk more about that too. We're still trying to figure out what the exact right numbers are, but generally we seem to be faster. Like I said, it's not a whole lot faster, and this is not with a lot of tuning; really no tuning at all. We haven't looked into anything and we're already faster out of the box. Tom will show a bit later that with some work we can be faster than this and use a lot less memory.

To mention the caveat about the benchmark drivers: depending on whether we're using ab or wrk or Siege or whatever, the numbers keep changing; they're all over the map. There's something strange about the combination of those benchmark drivers and Puma that is giving us some odd results: keep-alive doesn't seem to be working properly in Puma at all right now, so that's something we're going to work on this week. Ideally, of course, you allow warm-up time, so we give our benchmarks a little bit of time to warm the application up. Typically, if you deploy JRuby to production, you'll run some number of common requests just to get things going; I don't think that's too unusual.

We decided to go and set up a real app, so we took rubygems.org. We didn't have many problems, or really any problems, getting it

set up. All the changes to the application were pretty much so that I could run it in production mode on my laptop; the original plan was to run it on EC2, but then we went into a benchmarking hell of sorts. Benchmarking is hard. Anyway, I ran on my laptop, an i7 with 16 gigs, a reasonably specced machine. I disabled HTTPS because I didn't want a self-signed certificate, and the difference between CRuby and JRuby with HTTPS isn't really different. I used Puma on both implementations for an apples-to-apples comparison, since JRuby doesn't run Unicorn, and all the stuff is running on the same machine. In a way this ended up being a good thing, because we want to saturate the CPU to show how well we can utilize the machine. And we ended up using Apache Bench in the end.

So when I first got the application going, I set up a user for myself and I was going to push a gem, and then I decided to benchmark it quickly, and then I went down into some benchmarking hell where I started using this tool called wrk, and every time I'd run it with additional threads it would show absolutely no improvement. After about four or five days someone said: oh, it's an NIO reactor; you don't actually have the concurrency you think you do. So I said screw that tool, and I switched to ab, and then I was using keep-alive, and then we discovered that Puma has a bug with keep-alive; if you want the details, go see issue 1565 in the Puma project. So we're only benchmarking accessing a profile page; it has quite a bit of logic and the views are pretty sophisticated.

Here's us running ab, doing a single synchronous request over and over again, where each connection closes. If we look at the yellow line first, this is CRuby running Puma in single mode. There are 20 threads here, but it's only getting one request at a time, so it's very flat. If we actually look at Puma in clustered mode, that is 20 workers, 20 forked subprocesses, you can see it's about six requests per second faster, and the reason for this, I believe, is that as the next request comes in, the previous worker is still closing the socket. So this is the overhead of closing your connection, shown in graph form. Then we look at the blue line: this is JRuby itself. You can see it slowly gains in performance, and this is the JVM doing its thing, doing its profiled optimizations and continually making it faster; by the seventeenth iteration we actually catch up to CRuby. We'd like this curve to be a little steeper than it is today, but we did manage the catch-up.

But this is a much more interesting graph for us. In this case, along the x-axis is concurrency, so at concurrency 10 we're sending 10 separate requests in parallel to the server; on the y-axis is the peak requests per second it ends up getting. Again, if you look at the yellow line, it kind of looks level, but it's actually decreasing, because the global interpreter lock and native threads don't mix, so it's not really getting any benefit there. If we shift up to the green line, this is Puma in clustered mode, and you see a nice curve as it scales out. It flattens out between 8 and 10 concurrent users, because by that point my laptop is on fire at 100% CPU.

at 100% But then if you shipped to the Blue Line, you'll notice that we actually outperformed see Ruby quite a bit here and that's because you're running in single mode with 20 threads instead of 20 subprocesses. Lawless wasted resources is an added benefit here. I really wanted to push me over the edge. So we started seeing failures and we actually survive longer before we start to give up the ghost. The next interesting thing and running this Benchmark. I just continuously kept running in each run has about 35 minutes long and I tracked the memory you

can see on the on the green boxes that it took about 11 runs for for jruby to stabilize. Siri has 20 workers. So 20 subprocesses. There's copy on write semantics for the cell processes. But we're looking at the actual resident memory that it's using right. You're still going to have a working set that's going to blow that up. You can see those percentages on top of the blue boxes. You can see that those percentages are going down. So I believe that MRI will eventually top out on on memory, but I didn't want to run at that

long in general had 20 workers were using about half the memory going back to Charlie's example with an extra large ec2 instance. We probably be running 40 threads. So that that that memory Gap would get keep getting larger. So that the takeaway is here is that we use less memory and this is just because we're using threads vs. Entire subprocesses copy and write does definitely helps you really but having a duplicate the entire Ruby run time and rail stack and your application in each subprocess. It's going to take some extra

memory even with copy on write were more stable over time as well. I actually don't understand why I see Ruby was growing I suspect it was just Pages getting hit from updates and copy on write. It's possible. There's some fragmentation that you seen the GC size is growing. I don't know I highly doubt it's a Memory leak there. They're pretty pretty stable. We're also more CPU efficient. Again. This largely comes down to us having threads and not having to have an entire runtime entire garbage collector going in each process. And as you saw

it, we we didn't we didn't fall over as quickly either under load. So we have one- and that minus is that we'd like to see the warm up time and prove we're actively looking at that to along with the start-up stuff similar start a problem. Okay, so wrapping up here. So a future future work for jruby. Would love to know what you think about 2.6 vs 2.7. I think at this point. We're probably going to be just doing to seven we've got a lot of the to 6 work done. But you know people are happy with 2.5 compatibility will stick with that for the next, you know, six to nine months
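The memory comparison above can be sketched as back-of-the-envelope arithmetic. All numbers here are hypothetical illustrations, not the measurements from the talk; they're chosen only to show how a "20 forked workers versus one threaded process" gap of roughly 2x can arise:

```ruby
# Hypothetical figures -- not measured values from the benchmark.
workers       = 20    # CRuby: Puma clustered mode, 20 forked workers
base_rss_mb   = 300   # assumed RSS of one fully loaded Rails process
unshared_frac = 0.5   # assumed fraction of pages that diverge after fork
                      # (copy-on-write sharing erodes as pages get written)

# Forked workers: one full copy, plus the unshared portion of each
# of the other 19 workers.
cruby_total_mb = base_rss_mb + (workers - 1) * base_rss_mb * unshared_frac

# JRuby: one process where all 20 threads share a single heap; the JVM
# baseline is assumed to be larger, but it's paid only once.
jruby_total_mb = 1600

ratio = cruby_total_mb / jruby_total_mb.to_f
# => cruby ~3150 MB vs jruby 1600 MB: roughly a 2x gap
```

Under these assumptions the threaded process ends up at about half the resident memory, consistent with the rough shape of the comparison described above; the real numbers depend entirely on your app's working set.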

We're also looking at ahead-of-time native compilation of JRuby, both on Graal and on the other native-compiling JVM technology that's out there. That will hopefully get us the startup time we're looking for; ideally we don't lose the warm-up either, so we can fix startup and still run fast too. There's a whole bunch of other upcoming stuff on the JVM side that we're looking at: having real native fibers, so we're not using threads to implement them; having a built-in foreign function interface for native libraries that the JIT knows about; and of course there are new GCs all the time, where we get the benefit of all of this just by being on the JVM.

Futures for JRuby on Rails: Rails 6 is working today. We actually expected we'd have to do a lot more work, but thanks to folks like Daniel and others out in the community, Rails 6 pretty much just worked out of the box, and folks keep adding to that. Rob is putting out Microsoft SQL Server support pretty soon; people still need that, so we're getting it back into our stack too. And we're here when you need us. We're not saying that you should immediately take your single-server, single-worker CRuby app and move it over to JRuby, though maybe there's something you need out of us even at that scale: the JVM itself, the libraries, the tooling, the whole ecosystem. But as your app grows, at some point it's going to get big enough that we can help you scale it out. So maybe you don't start with JRuby on an app, but at least keep us in mind for the future: think about what that migration might look like, and let us know. We're willing to help you out with it. Hopefully we'll see some of you up here.
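As one concrete piece of that migration picture: the Puma configuration difference between the two modes benchmarked earlier is small. Here's a minimal sketch (the thread and worker counts match the benchmark described above; everything else is an assumption, not a drop-in production config):

```ruby
# config/puma.rb -- minimal sketch of the two modes benchmarked.

if RUBY_ENGINE == "jruby"
  # Single mode: one process, with real parallel threads on the JVM.
  threads 20, 20
else
  # CRuby: the GIL keeps threads from running Ruby code in parallel,
  # so scale out with forked worker processes instead.
  workers 20
  threads 1, 1
  preload_app!  # load the app before forking to maximize copy-on-write sharing
end
```

The point is that moving between the two is a config change, not an application rewrite, which is what makes this kind of migration worth keeping in mind.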

This is a bunch of logos from current and past users of JRuby. We're thrilled to have folks up here like IBM, with their Watson Explorer data-intelligence app that runs on JRuby, and the BBC, which runs all of their election results through a JRuby-based application. There are just tons of folks out there using JRuby in big, important settings like that. Maybe you'll be up here soon too.

Help wanted: Rails tester, great opportunity. Just run the test suite; if you see something that looks fairly simple, even if it's just a test fix on the Rails side, that's a great way to get involved. We're always standing by on email, Twitter, Gitter, et cetera to help out with stuff. And if you happen to maintain a Ruby gem, please go try it and test it with us, report any problems you find, and add us to your CI. Do it.

So that's about it. Try JRuby, and let us know how it goes. I'll be hanging out here all week, I think, and I'm going to be around too. So give it a shot, and that's all we have. Thank you.
