RailsConf 2019 - Postgres & Rails 6 Multi-DB: Pitfalls, Patterns, Performance by Gabe Enslein

Gabriel Enslein
Senior Infrastructure Engineer at Heroku
RailsConf 2019
April 30, 2019, Minneapolis, USA

About speaker

Gabriel Enslein
Senior Infrastructure Engineer at Heroku

I am a multi-faceted software engineer whose passion revolves around distributed systems and researching methods for improving their performance at scale. I enjoy helping other engineers learn about new technologies and practices that improve upon reliability, speed and security.

About the talk

RailsConf 2019 - Postgres & Rails 6 Multi-DB: Pitfalls, Patterns, Performance by Gabe Enslein

This is a sponsored talk by Heroku.

Rails 6 has officially added Multiple database support, and the options are overwhelming. We're here to help you make the right architecture decisions for your app. In this talk, we will look at performance gains and pitfalls to some common patterns including: separation of concerns, high-load tables, and data segmentation. We'll talk about read replicas, eventual consistency, and real-time (or near real-time) requirements for a Rails application using multiple Postgres databases.

Transcript

I think we're getting started here, so thanks, everyone. Good morning. Welcome to Postgres & Rails 6 Multi-DB; I'll be talking about some pitfalls, patterns, and performance today. Just a quick show of hands: who uses Rails and Postgres together? Just about everybody in the room. Awesome. Who's on the latest and greatest Rails? The latest 5.x? All right. So with Rails 6, they added full support for multiple database abstractions. What that does is allow read and write databases

to be specified at every level of your Rails application: in the rake tasks, in your Active Record models, even filtering API requests down to the database selection. So how exactly does that interact with Postgres? Before we jump into it, I want to give a quick thank-you to the Rails contributors who worked on this project. There's a ton of work that went into making this possible, and a lot of refactoring. So thank you to the Rails contributors. Just a quick blurb about me: I've been a Heroku data engineer for about three years now. I've been primarily working on Sinatra

and Ruby on Rails applications, and I've worked with a slew of NoSQL and SQL databases. Over my tenure at Heroku I've focused on Postgres specifically, with high-stress traffic and large data sizes. Some phrases you'll hear me use in the talk will be familiar if you know Postgres. You may already know these, but if you need more information about them you can come see me after: vacuuming, partitioning, sharding, replication, statistics, caching, and connection pooling. Last year I gave a talk about Postgres 10

performance, and in it I talked about running a self-help life-coaching business. It's booming right now, right? It's doing great; tons of people are into all of that uplifting, motivational, success-in-quotes content and speaking. Building on that, we have customers with a marketplace for selling things like mugs with great quotes, pictures, framed images, things like that, plus shared areas of interest: favorite quotes, TV shows, movies, foods, you name it. So now we're up to millions of

users and hundreds of millions of updates per day. But now we're at a size where customers are doing a lot of complaining, because they've seen a lot of slowdown in the platform: profile updates take a really long time, they can't update their addresses for new shipping, and they want to upload their own quotes but those are taking forever. Doing some initial analysis, we found that the characters table, which we use to track favorite TV show and movie characters, is growing really fast. There are amazing shows out today

on every single platform, you name it, and we just can't keep up with the growth. So what can we do to speed this up? Well, we can talk about moving some of the more expensive read queries off to a replica. What that'll do is alleviate a lot of the worst of the traffic on the characters table, which seems to be causing everything to slow down. But there's a little bit of a hiccup here, which is that I want to use it in my Rails app right away. It feels like there's kind of a cap, so let's talk about that: specifically, the CAP theorem.

There are three specific tenets of the CAP theorem: consistency, availability, and partition tolerance. The theorem states you can achieve only two of these three at any given point in time. You can try to compensate for the lack of the third one, but you'll never be able to fully achieve all three simultaneously. What this does is help identify critical failure points and increase the overall responsiveness, longevity, and stability of your applications. So how do we translate that to Postgres? Well, this character wants to make it in there,

right? I'm pretty sure he wants to be included with all the other crazy characters. The characters-and-content problem here is hundreds of thousands of TV shows and movies, and each one can have many, many characters. Anyone watch Game of Thrones out there? That's hard to keep track of. Varying information about each character or each show can have a huge impact, and we primarily control that updating right now: a character's persona or a movie's details only get updated internally. So we upload that information, and

depending on each character's or show's popularity, that information gets queried more or less often. So what can we do to speed things up? Well, since Rails 6 introduced multi-DB support, we can have Rails intuitively understand how to split reads and writes. Let's actually jump into what that looks like. What do we need to change in the environment? What needs to be defined? How do I look at each individual record? To start, here's a base example of splitting out just the characters: say the characters database is a URL we've defined in the application's config vars. You can also create a replica in a similar fashion and define it in the database.yml file.
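
The talk's slide isn't reproduced in this transcript, but a minimal sketch of the Rails 6 three-tier database.yml for this setup might look like the following (the environment-variable and database names here are illustrative):

```yaml
production:
  primary:
    url: <%= ENV["DATABASE_URL"] %>
  primary_replica:
    url: <%= ENV["DATABASE_REPLICA_URL"] %>
    replica: true          # marks this connection as read-only for Rails
  characters:
    url: <%= ENV["CHARACTERS_DATABASE_URL"] %>
  characters_replica:
    url: <%= ENV["CHARACTERS_REPLICA_URL"] %>
    replica: true
```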

Okay, now we've got Rails knowing where to look, but how do we get Active Record to look at it? Thankfully, that's also been updated to be fairly simple via the database connection definition. Not only does it let you establish the connection on a single-table-inheritance base, it lets you inherit that connection strategy through your child classes. So TV show characters and movie

characters will now understand how to do that too. Having a base model define the connection strategy is a really robust and elastic solution, in the sense that child models retain that strategy unless they override it, and it enables us to prioritize workload very effectively. This gives us a lot of performance in prioritizing write throughput versus read throughput for querying that information back.
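
Here's a sketch of that base-model pattern using Rails 6's connects_to; the class names are illustrative, not taken from the talk's slides:

```ruby
class CharactersRecord < ApplicationRecord
  self.abstract_class = true

  # Writes go to the characters primary, reads to its replica.
  connects_to database: { writing: :characters, reading: :characters_replica }
end

# Child classes inherit the connection strategy from the base model.
class TvShowCharacter < CharactersRecord; end
class MovieCharacter < CharactersRecord; end
```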

We can also specify a role for a very specific call. If we want something like a random TV show character as the start of your day, you can ask for that single record and specify to only look at the reading role. If you know an operation is read-only, you don't even want Rails to go through trying to figure out the right database; you can tell it directly.
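
Presumably something like this, using Rails 6's connected_to block (the query itself is an illustrative guess):

```ruby
# Pin a known read-only operation to the replica explicitly.
character_of_the_day = ActiveRecord::Base.connected_to(role: :reading) do
  TvShowCharacter.order(Arel.sql("RANDOM()")).first
end
```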

But then some strange things started happening: the latest characters we've uploaded from our upload pipeline don't seem to actually be getting written to the database, or they don't seem to be coming back, and my customers aren't seeing their latest characters added to their favorites. So what's going on? Well, I think we've hit a pitfall. We can look at some interesting information here: you can get it from the Postgres internals, or if you're on the Heroku platform you can use the CLI, and we actually give you this information up front. We're behind by a large number of commits on this read replica. So what is that telling us? What is the issue with the read replica?
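
If you're not on Heroku (where the CLI reports how far a follower is behind), you can ask Postgres directly. A sketch against the standard statistics views, valid on Postgres 10 and later:

```sql
-- On the primary: how far behind is each replica, in bytes of WAL?
SELECT client_addr, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On a replica: how old is the last replayed transaction?
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```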

For those who may or may not be familiar, Postgres uses a write-ahead log (WAL) to replay the changes: the more traffic to the primary, the more write-ahead log gets generated and has to be sent to the replicas to replay. There are other factors that can go into slowing down the replay on the replicas themselves, too many to list here; it's a very difficult topic, which we'll start to digest as we go on. Some basic things we can look at up front: are we missing any indexes on the replica? What does our connection usage look like? How are the database resources?

Optimizing Postgres for reads versus writes, especially around indexing, looks really different. The size of the dataset can even change your query plans: you could have an index that works just fine for writing, but when you move it to the replica, the replica can slow down and do some strange things. Caching also changes the effect on the database's performance. So let's talk about EXPLAIN ANALYZE first. This is going to be a good way for you to identify problem queries; Postgres, in the

psql console, can actually show you what your Active Record SQL queries are doing and how they actually perform. As a basic first example, we want the characters for a given show ID, a pretty common query, right? We have a TV show we want to look at, and we want all the characters for that show, and we're missing an index here.
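
Roughly what that looks like; the table, query, and timing numbers below are illustrative stand-ins for the slide:

```sql
EXPLAIN ANALYZE SELECT * FROM characters WHERE show_id = 1234;

--  Seq Scan on characters  (cost=0.00..35811.00 rows=998 width=64)
--                          (actual time=0.019..412.332 rows=950 loops=1)
--    Filter: (show_id = 1234)
--    Rows Removed by Filter: 1999050
--  Planning Time: 0.110 ms
--  Execution Time: 413.001 ms
```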

You'll notice a few different things here. The number of rows scanned is a lot, almost the entirety of the table, because we're filtering on the show ID, not the character ID. Postgres has to sequentially run through the entire table, and depending on how big your table is, that's going to take a long time, which is why you see Seq Scan. Some things to look out for in EXPLAIN ANALYZE output are the sorting method and the scan type. A sequential scan is going to be the least performant of your types of scans; groupings will also affect the plan, and conditionals will also affect it, surprisingly enough, because there is internal ordering as Postgres

constructs the data set. So conditionals and groupings will affect it too, depending on how complex your queries are. The things you want to favor are bitmap index or bitmap heap scans, or index-only or index scans. Those are going to be faster, because indexes traverse in logarithmic time over a subset of the data to find the pointers rather than visiting the entirety of the table. A different example: when we've actually added the index, you'll see it's a huge improvement.
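
Again as an illustrative sketch, adding the index and re-running the same query might look like:

```sql
-- CONCURRENTLY avoids locking writes while the index builds.
CREATE INDEX CONCURRENTLY index_characters_on_show_id
    ON characters (show_id);

EXPLAIN ANALYZE SELECT * FROM characters WHERE show_id = 1234;

--  Index Scan using index_characters_on_show_id on characters
--        (cost=0.43..1248.91 rows=998 width=64)
--        (actual time=0.040..1.913 rows=950 loops=1)
--    Index Cond: (show_id = 1234)
--  Execution Time: 2.254 ms
```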

We've gone down to only a few hundred rows and a significant decrease in query time. There's a lot that can go into that, but the index isn't the only thing, right? I've checked my indexes and my lag is still there, or we've caught up now, but I'm nowhere near my connection limits and EXPLAIN ANALYZE doesn't show me anything really intuitive. So now what? Well, we might have hit another pitfall. With our databases and our customer base being so big, write-heavy primaries are just slowing things down. We're adding tons of quotes every day, or updating quotes every day, and customers want even more

interaction with our inspirational quotes and the favorite quotes of our characters. They're complaining that we don't even have every character they want a quote from, and they want to be able to add those themselves. So now we have to look at what's going on in the quotes table. The volume of heavy read queries can slow the replay, but the volume of write throughput can also do that: we're writing that much more into the write-ahead log, which just gives the replicas a larger volume of things to replay. Connection

churn or pool saturation as people disconnect and reconnect can also affect things, which means the actual throughput of the database can be affected. There's also a third thing, where general data size can start being an issue: even having the database reindex, if your indexes are not very efficient or performant, can cause a lot of problems. So we want more ways for users to interact with their quotes, and we want to find a way to segment quotes off, to

figure out what else we can do. Are quotes cached? Do people actually care about them or look at them? That brings us to the database cache. Yes, databases have caches; most of the time, when your Rails application is querying something, it's actually querying your database's memory, not going to disk. Only very rarely, if your database is set up correctly, should it be going to disk; every time you go to disk, it's an exponential decrease in your performance. Writing a lot of new data can also churn

a lot of your memory. Your database caches will evict information: the more new information you're putting in, the more you're reading out, the larger your datasets are, and the more time passes over your data queries, the more you keep churning through your memory, which means more opportunities for your queries to start doing what's called disk swapping. I won't get into that too much, but that's basically how memory in your database gets moved to temporary storage on disk, which slows everything down a lot. Indexes are still mostly

held in memory, but they're mostly pointers; they're not full pieces of data, so you still have to go back to the disk to figure out what you're looking at. And indexes on tables with a high rate of change can also affect the load, meaning your database has to write to the write-ahead log to maintain that index information as well. I can feel my own chest tightening a little bit about this. What can we do about that? What can we do to breathe a little easier? Well, thankfully Postgres has had pg_partman

available for a while, and with the advent of Postgres 10 and 11, partitioning is now native in Postgres, so we can start partitioning. Partitions are a good way to move less-accessed data off and let the database breathe. With Postgres 11, Postgres can do some auto-magic to infer the basic structure of your data using things like unique indexes, primary keys, and so on. Native partitioning with aggregation

can create deep parallelization increases too, which means it's a lot faster to jump between segments of the table itself, because Postgres treats them as different tables and then knows how to collate the results at the end of a query. As an example, we can attach a partition to the quotes table that we never had before, and from there we can add more of them. If we know we have an explicit range, say July and August of 2016 where we got a huge bump of quotes, we can partition that off, and we can partition a lot of the older quotes by year and month, and new values moving forward.
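
In Postgres 10/11 declarative-partitioning terms, that might look something like this (table and column names are illustrative):

```sql
-- Parent table, range-partitioned by creation date. In Postgres 11 a
-- primary key on a partitioned table must include the partition key.
CREATE TABLE quotes (
    id         bigserial,
    body       text NOT NULL,
    created_at timestamptz NOT NULL,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- A partition for the July-August 2016 spike.
CREATE TABLE quotes_2016_07_08 PARTITION OF quotes
    FOR VALUES FROM ('2016-07-01') TO ('2016-09-01');

-- Or attach a pre-existing table (with a matching schema) as a partition.
ALTER TABLE quotes ATTACH PARTITION quotes_archive
    FOR VALUES FROM ('2015-01-01') TO ('2016-07-01');
```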

And you don't necessarily have to drop old partitions; you can leave them. Postgres partitioning needs its own talk; that's a whole other topic and there's a lot of nuance there, but if you want to learn more about Postgres partitioning, come talk to me afterwards. So now that we have some ease of use on the quotes table, customers are still asking to be able to use their own uploaded characters and their own quotes. Since we

have a separate table for characters now, and a separate table for quotes with all the partitioning, how do we handle that? Characters have also been moved off to their own database, right? So now we want to see how quotes and characters interact from the primary and their replicas. What does that look like from a Heroku perspective? But, I mean, we're still having a little bit of sluggishness querying quotes. There are a lot of them. Yeah, that quotes table grew fast.

Yeah. Well, what if we do something like this? How can we handle that data growth? We can split quotes out. With Rails multi-DB you can actually have the table defined in its own database, which means we can move the entire table out. So we actually get some separation of concerns and DRY principles at the data level by sharding our databases for the pieces of data that are very large. It prioritizes the critical functionality of the main database for user access, logins, and

profiles, while still allowing all of the quotes and characters functionality we want to give users. Moving quotes with multi-DB also gives a lot of relief to quotes in the original Rails app: characters are now flowing in a consistent fashion, and the other tables are performing much more consistently, as expected. As an example of how this could be done, we can define a different quotes table in a different quotes database to move everything out, and in Active Record we can tell it to just create the quote and the character as usual.
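
A minimal sketch of that, assuming a `quotes` / `quotes_replica` pair has been added to database.yml (names illustrative):

```ruby
# The quotes table now lives in its own database.
class QuotesRecord < ApplicationRecord
  self.abstract_class = true

  connects_to database: { writing: :quotes, reading: :quotes_replica }
end

class Quote < QuotesRecord; end

# Creating records looks the same as before; Active Record routes each
# write to whichever database that model connects to.
Quote.create!(body: "Winter is coming.", character_id: character.id)
Character.create!(name: "Jon Snow", show_id: show.id)
```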

What that does is allow Active Record to intelligently split, prioritize, and redirect traffic to wherever the tables now reside across the different databases. But what's happening now? All of a sudden everything is going haywire, my queries are taking forever. What's going on? Well, I think we've hit another pitfall. Retrieval is slowing down because logic in the app can repeatedly end up in database loops. We're now making a lot of network hops because we're jumping to all these different databases: a different quotes database for

reading and writing, different characters tables for reading and writing, and user profiles on a third database. Joins have to be done at the Rails layer and not the DB layer, because our data is no longer co-located, and we're reconnecting to all of these different databases many times over, and that's really slowing everything down. There's also the issue of query caching: even the query cache, having to query the same table multiple times, keeps blowing away what it has cached, and how we

query from Rails out to our databases means the database caches are also churning, a different slowdown. Something to really consider is looking at your 95th- and 99th-percentile queries. Those are going to be the ones that stop your database in its tracks. Nine times out of ten, more often than not, you're going to see that those percentiles come down to one or two queries blocking the entire database, and that's going to bring everything to a screeching halt. Yeah, I'm getting exhausted. It feels like I'm playing whack-a-mole up here.

Network latency is a really hard problem to solve. It's not talked about in the CAP theorem, but it is covered in a theorem that came not too long after, called PACELC. The PACELC theorem says that when there is a partition, you can choose between availability or consistency; but else, when we're not considering critical failures of partitions, you have to choose between latency or consistency. When we look at it from that perspective, what that means for me is that latency is our biggest concern now. We've moved all of our

concerns down to the network layer; we're not even at the database anymore. How do we avoid all these round-trip network jumps? How do we minimize the networking issues and the slowdown from the network hops? And how do we maximize the size of our data and its usage given this limitation? Consistency is not our problem now; it's the speed at which we deliver information to our customers. So, some things to consider. Which connection pools are you talking about? Are you referring to the database connection pools or the Rails connection pools? Are the Rails servers

actually using connection pooling properly? Are they over their attempted limits on the app servers, or over the DB limits on the databases? There are a lot of cases where you can configure Rails to over-establish connections to any one database. Now, if you spread out your databases into five or six, you have a bunch of different databases, and you have to go hunting for which limit you're talking about. And what about the database connection pools: are they all going directly to Postgres? Postgres doesn't have native connection pooling. Do you

have connection pooling set up on your Postgres with something like PgBouncer? Is Rails saturating the DB limit, or is it a Rails-specific issue? Are your connection pooling settings set up correctly on your Postgres databases? Are they set up on all of them, including every replica? They may not be.
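
For reference, a minimal PgBouncer configuration sketch (hosts and sizes are illustrative; on Heroku this is provided for you by the pooling add-on discussed later):

```ini
[databases]
quotes = host=10.0.0.5 port=5432 dbname=quotes

[pgbouncer]
; transaction pooling: a server connection is held only for one transaction
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```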

Why do my caches seem useless? Well, which cache are you referring to: your query caches or your database caches? If you're referring to database caches, is it your writer database, or is it one of your read replicas? Which one is churning more, and which is causing the slowdown in your response times? Most people think their writer database is the one having the cache issues, but a lot of the time it's the read replica they're using, and it could be any one of them; you might have four or five set up for a single database, and if you're looking at one replica, make sure it's the right one. And if we talk about how big your caches are, we have to talk about data sizes. We have to address the size of your data in

relation to the caches on the database. I talked about disk swap earlier, and this is what I wanted to bring up again: Postgres will actually try to intelligently move some of the data you're querying temporarily to disk. It'll write temporary files, which it cleans up afterwards, and hold data there before returning the entirety of the data to you, and it may do this a lot. You need to know if you're using too much of your disk just to do the reads, before you realize you also need to optimize to avoid those types of scenarios. The

caches will cache whole records if you're not using indexes; it won't just cache a pointer, and if you're using swap, it'll point to an entire record written onto disk too. So you now have a problem where the disk usage on your database is also growing the more you query, which is an additional problem you may need to solve. Postgres can stop accepting connections entirely, or stop responding to requests even when you have a connection open; it'll say, no, I'm out of memory, too bad, or I may not be available, I'm doing something else, I'm too busy. I may even have fallen too

far behind on my replication, and I need to stop and recover. There are a lot of scenarios where your Postgres can shut down because you've overburdened it with traffic, or because of the way you have, or haven't, optimized for your workload. So what's another area to look at? How are the resources on your app servers, your Rails application servers, doing? Are they running out of memory? Are they running out of connections? Are they over-utilizing CPU? Are they all trying to use the same disk? Are they trying to save a bunch

of information separately elsewhere on the application server? You may not even have enough space on the app server to do what you want. The same goes for your databases: as I discussed before, memory is a hot commodity, followed closely by the I/O, how fast you can access that disk information. And CPU is going to be the other big tell in figuring out whether you're using swap. You'll see a lot of CPU wait, a statistic you can look at to see if your CPU is waiting on your disk. It's just going to sit there and spin-cycle, and it's

not going to accept anything else, so you're going to be burning CPU hours just waiting for your disk to free up. It's going to be extremely costly. And of course, last but not least, idle connections. So it feels like there are a lot of things to look at and a lot of things to consider, but there are some helpful ideas and some food for thought that we've developed over the years at Heroku working with Rails. Apps should have safety measures: things like maintenance modes, feature flagging, and circuit breakers, as in the sketch below.
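
The talk doesn't show code for this, but a minimal hand-rolled circuit breaker around a replica read might look like the following; all names and thresholds are hypothetical:

```ruby
class QuoteFeed
  FAILURE_THRESHOLD = 5

  def initialize
    @failures = 0
  end

  def latest_quotes
    return [] if circuit_open? # fail fast instead of piling onto a sick DB

    quotes = ActiveRecord::Base.connected_to(role: :reading) do
      Quote.order(created_at: :desc).limit(20).to_a
    end
    @failures = 0 # healthy response closes the circuit again
    quotes
  rescue ActiveRecord::ConnectionNotEstablished, ActiveRecord::StatementInvalid
    @failures += 1
    [] # degrade gracefully, e.g. serve cached or empty content
  end

  private

  def circuit_open?
    @failures >= FAILURE_THRESHOLD
  end
end
```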

Those are for when you find that things are going wrong; they're certainly things that help protect your uptime and keep you running. They're really important, especially for running experiments, rolling out new features, or even when you see other issues like critical hardware failures or parts of the network going down. It seems like every other day there's an issue with a public DNS provider that's having trouble, something like Dyn, or something out of AWS or any other cloud provider you're using,

so having circuit breakers to turn features off is going to be really critical to maintaining your uptime. Data services should have thresholds: you should be tracking that information and knowing you have a certain limit or a buffer zone where you need to react, tracking the utilization so you leave room for yourself in case you have traffic spikes, in case you have slowdowns, in case you're running a process that you don't expect to hurt you as much as it does. Think back to when we were uploading characters and TV shows and

content, even internally: you wouldn't expect that to cause an issue on the characters table, but it absolutely will, especially if you have to load a lot of data really quickly. It can cause a lot of problems for your customers down the road, and you need to be able to stop it. Web traffic spikes, if you're gaining popularity, can also have an equal effect; you need to account for some growth even as you're building your plan moving forward. Having a plan in place for when those events occur, when you're losing that buffer and hitting that threshold limit, is really

important as a first step to understanding what your options are. And of course, there are always new strategies coming out, always new ideas and new ways to account for all the changes in your traffic, whether your data is changing or your users are changing their behaviors. There are always ways to evaluate new strategies, and I would highly recommend you evaluate new ideas early and often. Architecting services to start from the principle of least privilege, which means they only have access to the things they really need, not everything and

the kitchen sink, is definitely a good start. Identify how you can lower the scope of your services to make them a little more lean. They may be doing too much, right? They may be doing a lot of things they don't need to be doing that you can segment off to the primary ownership of another part of your Rails app, or another service entirely. Some easier things you can do today: start breaking up your monolithic code, get rid of your God classes, get in there. If you're scared of a class, I highly recommend you go in and

start attacking it now; the more you fear your own codebase, the more that fear will make you avoid it and let it rot. Identifying critical real-time applications versus non-real-time or fault-tolerant services, things that can be eventually consistent, can also help you determine what's really urgent to go fix. I would highly encourage that idea: understand where you can live with a little bit of lag, or some caching at the content layer, or giving customers a view of something that may be five to six hours old. They don't need the up-front, freshly refreshed

data every time. And even consider refactoring your data's relational restrictions: as I talked about earlier, heavy conditionals, constraints, and groupings can affect your database performance. They may not need to be there anymore; they may be holding you back from achieving greater performance or even from enabling new features. So with that in mind, I'll talk a little bit about what Heroku is offering today, and then I'll open the floor for any questions. We've made Postgres 11 generally available; we made an announcement earlier in the year, but I

want to reiterate it here. Postgres 11 has a lot of really helpful features that you can use today: things like stored procedures to bring it into closer parity and compliance with the SQL standard, partitioning and optimization features, and advanced parallelization with huge, drastic improvements in performance. You can learn more about it in our blog. We also continue to offer our curated analytics, so we can query all that information you've seen in EXPLAIN ANALYZE and those caches, and we deliver it in a consumable, readable format. We also surface that

information in logs, and we give you guidance on how to look for it even just using the psql terminal. So please go look at those; they're out there for free, for anyone to use. And we still talk about PgBouncer when we talk about managing your database connections. We do still continue to offer the PgBouncer connection pooling add-on, which lives on the servers with the databases, to prevent exactly this problem of connection issues. Right now we only allow transactional connection pooling; session pooling has its limitations and

has some pitfalls. I know that Rails prefers session pooling, but there are problems with trying to get out of those 95th- and 99th-percentile issues that stall and shut down a database, so that's one of the reasons we picked transaction pooling. You can use it to achieve better visibility into your connection statistics and to add your guard rails. It's really great for asynchronous workloads, especially if you have a big queuing system or bulk data uploads you need to do. We still offer this on 9.6, 10, and 11, and we also continue to maintain the client-side PgBouncer buildpack, which you can

run on a separate runtime or set up on an application server in front of your database. So please go check these out, and please read up on them; they're free. I talked about adding guard rails and ways to fortify applications; a couple of our co-workers have openly blogged and talked at RailsConf and other conferences, and those are a couple of resources you can go check out. We've also added additional monitoring with data.heroku.com, a graphical way to look at that information on Postgres in Heroku, with all the analytics of what your slowest queries are. What

are the ones taking the most time? Please go check this out. And thank you so much for attending; I'll take any questions. So the question was: if you partition your table, do you have to make query changes? If you're querying the default table, the answer is no. If you're trying to optimize for something specific, like older datetimes, you would have to figure out a way to programmatically tell your database to go look at something that's a lot older. Rails does let you write raw SQL,

so you could tell it to look at the same database but at a different partition if you wanted to look at historical things. Say you wanted to do a quotes throwback: you would be able to use random generation to create that string and execute the raw SQL.
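
A hypothetical sketch of that; the partition name and query are illustrative:

```ruby
# Query one specific partition of the quotes table directly via raw SQL.
# Pick the partition name from a fixed allowlist, never from user input,
# to avoid SQL injection.
partition = "quotes_2016_07_08" # e.g. chosen at random for a "throwback"
throwback_quote = Quote.find_by_sql(
  ["SELECT * FROM #{partition} WHERE character_id = ? LIMIT 1", character_id]
).first
```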

Any other questions? Back there? Yeah. Okay, so the question was: what support does Active Record have for joining the tables at the programmatic layer? The short answer is that it's one of those things where you have to define it on your own. Because you're moving that specific relational information into your Rails app, Rails doesn't intuitively know how to do it up front. You would have to construct the information as you're querying it back, because your Postgres databases don't know how to do that, and Rails is writing raw SQL under the hood. You would have to get each dataset, filter it a certain way, and then collate the information at your application layer. It puts additional stress on your app, but you would have to do that at the Active Record layer, something like the sketch below.
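
A minimal sketch of such an application-layer join across two databases; the model names, columns, and input are illustrative:

```ruby
character_ids = [1, 2, 3] # assumed input for the example

quotes = Quote.where(character_id: character_ids).to_a        # quotes DB
characters = Character.where(id: quotes.map(&:character_id))  # characters DB
                      .index_by(&:id)

# Collate in Ruby instead of SQL, since the rows live in different databases.
quotes_with_characters = quotes.map do |quote|
  { quote: quote, character: characters[quote.character_id] }
end
```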

Over there: the question was whether we have any suggestions on how to load test. Okay, I'm going to try to reiterate, and correct me if I'm wrong. The question was how you would performance-test this, and how you would determine whether the issue is at the Rails layer or the database layer. There are a few different ways you can do this. One of them is to figure out the complexity of your Rails code, so how many times you have to jump through the same queries, and that's something

you can do via RSpec or even API integrations. There's a great gem called Airborne that can do that via the API layer, especially if you're using Rails API. If you're looking at load testing on your database, unfortunately, the only way to do this is in a mirrored environment, like a staging environment where the hardware setup is identical to your production environment, and you would want to have some similar setups of graphical interfaces with logging to see where the discrepancies are. I know there are a lot of tools and third-party vendors out there,

like Librato, which will do that log aggregation for you, or New Relic, which is very popular. But that would be the closest way to do it to give you an accurate representation, because doing it any other way is not really going to give you one-to-one results on how your production environment would behave. Does that answer your question? And if the question is whether, for reading and writing, you have to protect yourself and figure out all of these different strategies on your own: the answer is yes. Rails does not do anything more than help you define connection strategies; all these advanced topics, you are correct, are going to have

to be done by the engineering team and the development team. But thank you so much, everybody.
