About the talk
RailsConf 2019 - The New Hot(N+1)ness - Hard lessons migrating from REST to GraphQL by Eric Allen
Our desire to use new tools and learn new technologies backfires when we don't take the time to fully understand them.
In 2018, we migrated from a REST API to GraphQL. Patterns were introduced, copied, and pasted, and one day we woke up with queries taking 6s and page load times approaching 10s. Customers were complaining. How did we get here?
In this talk, we will discuss why we chose GraphQL, show practical examples of the mistakes we made in our implementation, and demonstrate how we eliminated all N+1 queries.
I'll answer the question, "If I knew then what I know now... would I stick with a REST API?"
Developers, developers, developers. My name is Eric Allen. I'm on Twitter at _djallday_; give me a follow, and I'll post my slides as soon as the talk is over. The title of my talk is "The New Hot(N+1)ness: Hard Lessons Migrating from REST APIs to GraphQL," and today we are going to talk about mistakes, and frustration, and how we as developers sometimes make decisions that have far greater consequences than we initially calculate. We're going to dive into some code, and I'm going to demonstrate how to eliminate N+1 queries in GraphQL. If you're not familiar with N+1s, that's okay. N+1 queries present themselves when we query the database for a collection of records and want to get back their associated objects. In this case, if we wanted to select 10 countries, 10 being the N, along with their cities, it would take us N+1 queries, or 11, to fetch the same data that we could easily fetch in one. We're also going to talk about how to think differently about making mistakes, and how we can make better decisions as engineers.

Back in January, when I wrote the abstract for this talk, I was pretty stressed out. Our team worked really hard in 2018. We had just completed a front-end rewrite, migrating from a six-year-old Backbone implementation to React, and at the same time we migrated from a REST API to GraphQL. On Monday, January 7th of this year, when we all showed up to work, I think we were all ready for a bit of a fresh start, and instead we realized that we had some pretty serious problems. That morning I logged into our New Relic instance, and our Apdex score was at 0.82, which is the lowest I'd seen it since I started at the company. We were spiking to about 2,500 requests per minute, and response times were anywhere from 2 to 10 seconds. This isn't catastrophic by any means, but we were receiving feedback from our customers that our application was unusually slow, and for a team that takes as much pride in our work as we do, that just felt flat-out unacceptable.

When we started to peel back the layers of the issues we were having, we discovered three key issues. The first was a memory leak in React. The leak is associated with password input fields: when they get rendered into the DOM, an event listener gets added, and that event listener prevents every node and every child node from being garbage collected properly. If you're so inclined, find the tweet on Twitter; we have a sample application up on GitHub that demonstrates the issue, and feel free to give a thumbs-up to the issue, because even though we were able to work around it in our application, the issue is still outstanding today. The second thing we discovered was that some of the architecture decisions we had made in our new React implementation were severely increasing throughput: we were making many more small requests that would ordinarily take, you know, 100 to 600 milliseconds or so, but the accumulation of so many requests was maxing out our database connection pool. So we had requests bottlenecking at the database layer, waiting in middleware, and a request that would ordinarily take 600 milliseconds was taking, like I said, anywhere from 2 to 10 seconds. And all of these problems were exacerbated by the fact that GraphQL had introduced N+1 queries everywhere in our app. So, how did we get here? How did that happen?
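To make the N+1 mechanics from the countries-and-cities example concrete, here is a minimal, framework-free Ruby sketch. `FakeDB` and its query counter are invented stand-ins for a real ORM such as ActiveRecord; the point is only the query counts.

```ruby
# In-memory stand-ins for two database tables.
COUNTRIES = (1..10).map { |id| { id: id, name: "Country #{id}" } }
CITIES    = (1..30).map { |id| { id: id, country_id: (id % 10) + 1 } }

# Every method call below stands in for one SQL query.
class FakeDB
  class << self
    attr_accessor :query_count
  end
  self.query_count = 0

  def self.countries
    self.query_count += 1          # one query for the collection
    COUNTRIES
  end

  def self.cities_for(country_id)
    self.query_count += 1          # one query PER country: the "+1", repeated N times
    CITIES.select { |c| c[:country_id] == country_id }
  end

  def self.cities_for_all(country_ids)
    self.query_count += 1          # one batched query: WHERE country_id IN (...)
    CITIES.select { |c| country_ids.include?(c[:country_id]) }
  end
end

# N+1 access pattern: 1 (countries) + 10 (cities per country) = 11 queries.
FakeDB.query_count = 0
FakeDB.countries.each { |country| FakeDB.cities_for(country[:id]) }
N_PLUS_ONE_COUNT = FakeDB.query_count

# Batched access pattern: 2 queries total.
FakeDB.query_count = 0
countries = FakeDB.countries
FakeDB.cities_for_all(countries.map { |c| c[:id] })
BATCHED_COUNT = FakeDB.query_count
```

The shape is the same whether the queries come from a REST controller or a GraphQL resolver; what differs, as the talk goes on to show, is how easy it is to notice.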
Before we move forward, I just want to pause for a brief moment and acknowledge that simply giving this talk puts me and my team in a bit of a vulnerable position. I didn't ask for their permission before I wrote the abstract, primarily because I never thought it would get accepted. The process of writing the talk has really forced me to confront a lot of my own insecurities and feelings of imposter syndrome throughout my career, because I didn't come from a traditional software background. I've been writing software for about seven years now, but I come from a finance background and have worked in sales and marketing, so having to stand up here and basically give a talk about mistakes was a little difficult for me. Luckily, the team that I work for is incredibly supportive, and we value learning over everything else. So to the people who have been supportive of me, both at CirrusMD and throughout my career, I just wanted to take a second to say thank you, because it means a lot to me.
So, like I said, I started out my career at a startup, moved on through a couple of other companies and on to Pivotal Labs, and now I work at CirrusMD. It's been a privilege to work around insanely talented engineers, but when you're surrounded by engineers of that caliber, the bar is set extremely high, and in the back of my mind I always swore that I would never give a talk myself until the subject matter was technically valuable enough, whatever that means. And, like I said, here I am giving a talk about mistakes. So when the talk was accepted, I kind of panicked, and I did what most people, I think, would do: I just Googled "how to give a technical talk." After much research, reading blog posts, and watching other people's talks, ultimately most people just say you have to figure out: why are you here, and why am I about to take up 40 minutes of your time? For me there are a few reasons. First and foremost, I'm really passionate about the work that I do, specifically performance and API design. I also really like decisions that are good for business.
I think a huge part of our job as engineers is to make quality, well-thought-out decisions that never put our customers or our own businesses at risk, and I think at times our judgment can be clouded because we want to learn new things and work with new tools; perhaps in those moments we fail to consider the full cost of our decisions to the business. I think that we as a community can do a better job of acknowledging and talking more openly about the fact that we're often forced to make really big decisions with limited amounts of information. We make these huge bets on technologies when it's impossible to calculate the short-term or the long-term cost of those decisions. And lastly, I wanted to do this talk because I care a lot about this community, and I really enjoy working with GraphQL, and I'm hoping that by sharing some of the struggles we had, I might make someone else's transition a bit smoother. Before we dig in, I want to paint a small picture of what our app does. I've done my best to strip away as much of the domain knowledge and context as I can, so that the code will just speak for itself, but I do think that having a basic
understanding of what we do will help the code come alive and make it easier to understand. I work for CirrusMD, where we have created a HIPAA-compliant medical chat platform that is focused on providing patients with barrier-free access to an unparalleled virtual care experience. So what does that mean, exactly? It basically means that anyone with access to our platform can log in and, within a matter of seconds, be chatting with a doctor, a pharmacist, a financial counselor: whatever your medical need dictates. This is a screenshot from a... apparently my screen decided that right now, in the middle of my talk, was a really good time to act up. Anyway, this is a screenshot from a chat that I recently had about my son Arthur, who wasn't feeling well. On login, a patient sees a stream, and a patient can have many streams. Like I said, a stream is tied to the type of care you're getting, so maybe it's a pharmacist, a doctor, or a financial counselor. When you log in, the plan is notated at the top, so you can see this is a CirrusMD employee plan. Each stream has many messages, and each message has an author, and that author can be either a patient or a provider. When the chat is completed, we sort of wrap all of those things up into what we call an encounter, and an encounter gives the chat a level of finality; it lets us know that it's finished. On the surface, this probably looks like a lot of other chat applications you have worked with, but in our space the object relationships can get fairly complicated. We have many streams, and a stream belongs to a plan, and a plan has operating hours. We also have our encounter object that, like I said, wraps all of those messages up, and then
we have our authors, which may be patients or providers, and then over here on your far right we have our credential object, which is the object used for authentication; it's where emails and passwords are stored. So when a user logs in, naturally they might hit their streams endpoint to go and fetch the collection of streams that belongs to their experience. But what happens when you're logging in and you're not on the web application? What if you're using our iOS client or our Android client? Those platforms are going to have different data needs, and their connections are probably going to be less performant than yours, simply based on their connection. And so if you've ever had to maintain a REST API, you probably understand the struggle of trying to write one API that serves the respective needs of every client, or the pain of trying to maintain multiple APIs, one specific to each client type. These problems are really, really costly for businesses: you have to write and maintain documentation, you have to worry about backwards compatibility, your feature delivery can be a lot slower, and the API team has to be in constant communication with each of the teams that maintain the clients, to make sure that they're getting back the data that they need, in the shape that they need it. And that's exactly why we wanted to move to GraphQL. For those of you who may not be familiar with it, GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
So GraphQL APIs are organized in terms of types and fields, not endpoints. With GraphQL you define a type system with queries and mutations, effectively setting up some guardrails around your data, and then each consumer has complete flexibility within those guardrails to request data however they want. The result of this is an introspective API that self-documents. Our perception was that the flexibility would allow for greater autonomy between our clients, we'd have faster cycle times, and there would be less coupling between our API and our clients. And then not maintaining the documentation was probably going to be my favorite part. On the surface, a GraphQL API doesn't actually look all that different from a REST API. In REST we have many controllers and actions: we have index actions and show actions for reading our data, and we have update and delete actions for modifying it. So in this example we have our streams index, which is obviously going to return the collection of our streams. In GraphQL we have queries to read our data and mutations for modifying data. In this particular case we have our streams resolver, and this is deliberate, since it returns the same data that our streams index action would return. In REST we have serializers; in GraphQL we have types. In both cases we're using these objects to determine the structure of the data that we're going to return from our API. In our serializers we define attributes and relationships; in our GraphQL type objects we define fields for both our attributes and our relationships.
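As a rough sketch of that serializer/type parallel (not the talk's actual code): the class names below are hypothetical, and the tiny `field` macro is a stub standing in for graphql-ruby's real DSL, just to keep the example self-contained and runnable.

```ruby
# Stub of a field-declaring DSL, standing in for graphql-ruby's `field` macro.
class BaseType
  def self.fields
    @fields ||= []
  end

  def self.field(name, type)
    fields << { name: name, type: type }
  end
end

# REST flavor: attributes and relationships declared on a serializer
# (ActiveModel::Serializers style, stubbed here the same way).
class StreamSerializer
  def self.attributes(*names)
    @attributes = names
  end

  def self.has_one(name)
    (@relationships ||= []) << name
  end

  attributes :id, :title
  has_one :active_encounter
end

# GraphQL flavor: everything is a field; relationships point at other types.
class StreamType < BaseType
  field :id, "ID"
  field :title, "String"
  field :active_encounter, "EncounterType"   # relationship -> another type object
end
```

Both objects describe the same thing, the shape of the data the API returns, which is why it is so tempting to assume they will behave the same way at query time.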
In our serializers, those relationships rely on other serializers; in our GraphQL type objects, those field relationships rely on other type objects. So why, with all of these similarities, when we're requesting essentially the same data, does GraphQL produce N+1 queries where REST does not? That brings us to our first lesson, and the first mistake that we made: eager loading. The first mistake we made when implementing our resolvers in GraphQL was assuming that eager loading would be respected the same way it would be in a RESTful controller action. But if you take a step back and think about it, the greatest fundamental difference between GraphQL and a REST API is exactly what the client asks for, because with REST it's predetermined ahead of time. We have a routes file, and that routes file tells us all the different options that we have, like a menu of endpoints, and every time we hit one of those endpoints we're always going to return the exact same data. So when you know that, it's really easy to write efficient queries and load all of that data in the most efficient manner possible. But in GraphQL we never know what the client might ask for. If you think about it, eager loading in GraphQL is sort of an anti-pattern, because even if it did work, you'd probably end up loading a lot more data than you necessarily need for a given request. As an example, this is a streams query, and in this example you can see we're requesting an active encounter, with its encounter object, the provider that's tied to it, and the patient's plan. This might be a typical request in GraphQL. But what happens if, for whatever reason, one of our clients only really needs a collection of our stream IDs? Well, they can use the exact same query string and just pare it down to only request the ID. So again, if we have no idea what the client is going to ask for, how can we make sure that we're efficiently loading those objects? Well, first things first: we remove all eager loading from our resolvers. So now, when we load a collection of streams, each one of the stream objects that we get back from that collection is going to be handed into a stream type object. As it stands right now, with no eager loading or anything, when each stream gets handed into this type object, we're going to be triggering an additional query for the active encounter, the patient, and the plan. So if we had, say, 10 streams, we'd be adding three additional queries per stream, not to mention any additional queries that would be triggered by the underlying object relationships in those types. This is very inefficient, and as you can imagine, we were not the first engineers to come across this problem. Lucky for us, the wonderful people at Shopify have created graphql-batch. graphql-batch provides an executor for the graphql gem which allows queries to be batched together.
We install the gem with a couple of lines of code, and then we define some custom loader objects: we have a RecordLoader and we have an AssociationLoader. These I pulled directly from their documentation, just as an example; you can also modify these objects if that makes sense for your application. So, in order to eliminate those extra queries, we go back to our type. We define methods for each of those object references: the active encounter, the patient, and the plan. Within these methods we leverage our newly defined loader objects.
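graphql-batch itself needs the gem and a schema to run, so here is a dependency-free sketch of the idea behind a record loader: each call to `load` immediately returns a promise-like object, the requested keys accumulate, and a single batched fetch later fulfills them all. `MiniPromise` and `MiniRecordLoader` are invented names; the real gem's API differs in detail.

```ruby
# Minimal stand-in for the promise objects graphql-batch returns from #load.
class MiniPromise
  def initialize
    @callbacks = []
    @fulfilled = false
  end

  def then(&block)
    chained = MiniPromise.new
    callback = ->(value) { chained.fulfill(block.call(value)) }
    if @fulfilled
      callback.call(@value)
    else
      @callbacks << callback
    end
    chained
  end

  def fulfill(value)
    @value = value
    @fulfilled = true
    @callbacks.each { |cb| cb.call(value) }
  end
end

# Collects requested ids, then resolves them all with ONE batched fetch.
class MiniRecordLoader
  def initialize(&batch_fetch)
    @batch_fetch = batch_fetch   # e.g. ->(ids) { a single WHERE id IN (...) query }
    @pending = {}                # id => MiniPromise, accumulated until #run
  end

  def load(id)
    @pending[id] ||= MiniPromise.new
  end

  def run
    records = @batch_fetch.call(@pending.keys)        # one query for all ids
    @pending.each { |id, promise| promise.fulfill(records[id]) }
  end
end

FETCHES = []   # records each "database" hit so we can count queries
loader = MiniRecordLoader.new do |ids|
  FETCHES << ids
  ids.to_h { |id| [id, { id: id, email: "user#{id}@example.com" }] }
end

EMAILS = []
[1, 2, 3].each do |id|
  # Resolver code would return this promise; .then runs once the batch resolves.
  loader.load(id).then { |record| EMAILS << record[:email] }
end
loader.run   # a single batched fetch fulfills all three promises
```

In graphql-batch the analogous call is roughly `RecordLoader.for(Model).load(id)`, with the gem's executor deciding when batches actually run.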
And then we need to do the same thing for every other one of our type objects: any object relationships in any of our GraphQL type objects need to have their relationships defined this way with the loader objects. Once we're done doing that, we go back, we run our streams query, we look at our logs, and it looks like we've eliminated some of our N+1 queries, but not all of them. That brings us to our second lesson: delegation. Here we see our stream type. We have a patient relationship with that stream.
When that patient object is handed into the patient type, we use our RecordLoader to ensure that we're batching all of those patient queries together, and everything seems good, right? Everything looks cool. So let's go take a look at the patient type object. Well, on first look at this object, my instinct is to think that there's no additional work to do here, because I don't see any object relationships; I don't see any other types being leveraged. So I'm scratching my head, thinking, where are these extra queries coming from? When we take a look a little deeper at our patient class, we're reminded that one of the biggest benefits of Rails is how easy it is to interact with complex object models and create a language around your domain that actually makes sense and is easy to read. So easy to read, in fact, that sometimes you forget where the data is actually coming from in the first place. You'll remember that patient credential object I mentioned earlier, the object we use to store emails and passwords. Both patients and providers have a credential relationship, but that credential object is not really meant to be interacted with directly from our API, and so we delegate to it. So when we call patient.email, we're actually calling patient.credential.email, and this subtle line of code, one that we use frequently in Rails, is an N+1 query waiting to happen in GraphQL. How do we solve this one? Well, we go back to our patient type, and we define a method for the email, just like we did for any other relationship. Notice the `.then` syntax: our loader object is actually returning a promise, and by calling `then` on the promise returned from our RecordLoader, we gain access to the credential and are able to grab the email property from it while avoiding extra queries, because we've now batched all of those credential queries together. We go back, we run our query, we check our logs again, and you can see that we've batched our credential queries together. We still have other N+1 queries elsewhere, though, which leads us to our third mistake, which was how we leverage our service objects and decorators. We use service objects and decorators when we need to present data in a specific way based on one or more factors. For our application, each plan has a predetermined set of business hours.
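Stepping back for a moment, the delegation trap from the second lesson can be reproduced in a few lines of plain Ruby. `Patient`, `Credential`, and the lookup counter below are contrived; the point is that `delegate`-style forwarding reads like a local attribute while quietly issuing one query per record.

```ruby
require "forwardable"

CREDENTIAL_LOOKUPS = []   # one entry per credentials-table "query"

Credential = Struct.new(:patient_id, :email)

class Patient
  extend Forwardable
  # Reads like a plain attribute, but forwards to another record; the Rails
  # equivalent would be `delegate :email, to: :credential`.
  def_delegator :credential, :email

  def initialize(id)
    @id = id
  end

  def credential
    CREDENTIAL_LOOKUPS << @id   # one query per patient: an N+1 in disguise
    Credential.new(@id, "patient#{@id}@example.com")
  end
end

patients         = (1..5).map { |id| Patient.new(id) }
DELEGATED_EMAILS = patients.map(&:email)     # looks free; actually five lookups
HIDDEN_QUERIES   = CREDENTIAL_LOOKUPS.size
```

Nothing in the type object hints at the extra queries; the fix described in the talk is to resolve the field through a batching loader instead of through the delegated method.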
If a patient tries to log in outside of those business hours, we need an intelligent way to inform the patient that a provider will get back to them as soon as we're open for business. So when we fetch a stream, we hand this message back with the active encounter. The encounter object then gets handed to the encounter type, and inside the encounter type we have a field defined for the patient queue status, and a method that calls out to our patient queue status object. As you can see, we've defined this with a method signature where we hand in our encounter object. So let's go take a look at this object. In real life this object is a lot more complicated, but we can see in this contrived example that even though I'm handing in the encounter, we also need the plan, and we need the stream. Now, in a REST API where eager loading is being respected, this might not be a big deal, but in GraphQL this is just one more N+1 query waiting to happen. So how do we fix this problem? Well, in order to hand back that queue status message without introducing an N+1 query, we need to load all of the objects that the patient queue status relies on ahead of time. We also need to update the method signature to allow us to hand in all of those objects together. To do this we can leverage our RecordLoader objects: we define a method for our stream object and our plan object. But in this case, because our queue status object needs the encounter, plan, and stream objects all to be present before it can be called, and we don't really know when those promises will resolve, we use the `Promise.all` syntax to ensure that all of those objects are present prior to calling our patient queue status object. With that last update, we check our server logs, and voila: all of the N+1 queries for our stream resolver are gone. Let's do a quick recap of what we've learned. Eager loading will not be respected with deeply nested objects in GraphQL, so we need to use loaders to query our objects efficiently. Method delegation to associated objects can trigger N+1 queries when you least expect them. And decorators and service objects that are called from your types need to explicitly preload all the data they rely on, or they will also introduce N+1s. So today, on that same instance, our Apdex score is 1.0, response times are about 600 milliseconds, and, as you can see, we're at around 550 requests per minute. So let's do a quick retro on our decision to go with GraphQL. What did we gain? I think the first thing, and probably the most important thing, is just an enormous amount of flexibility in how we interact with our API. I've noticed over time that we are making far fewer changes to our API when we introduce new features on the front end. I can't speak for the Android and iOS
clients, because we've just started the undertaking of migrating those clients over to GraphQL, but we are extremely excited about it, and we think it's going to be equally as flexible there. We also got free documentation, which is my favorite part; anyone who's had to maintain an API knows that maintaining documentation is extremely laborious, so that's been awesome. I also think we have to acknowledge that we took on some new dependencies, and what that means is that, from this point forward, we have to maintain those dependencies in our application, and we have to rely on the goodness of the open-source community to make sure that we stay up to date and that it's secure, which matters especially in our space. Let's talk about some things that we gave up. We gave up some confidence, to be honest, initially, both internally on our team and externally with our clients, albeit for a short period of time. We also gave up HTTP response codes. One thing I wasn't aware of when we first adopted GraphQL is that GraphQL always returns a 200, no matter what, unless you somehow spoof that. So even if you get an error, GraphQL responds with a 200, and then it has an errors key inside of the response. I could give an entirely different talk on how to handle error messaging in GraphQL and how that's different from a normal REST API. It seems like a simple thing, but when you're not getting a 401 or a 404, your error handling is completely different, and you have to think about it in a whole new way. We also lost some granularity in our performance monitoring. I think what happens sometimes is that, you know, our application is seven years old now, and over time you just sort of get used to the tools you're using and take them for granted. We had already been using the Bullet gem to identify N+1 queries in our REST API, and I think we just assumed at some level that a lot of the same tools would just work with GraphQL, and unfortunately they're not built for that. The same thing is true for New Relic: in our domain space, because we have to be HIPAA-compliant and HITRUST-certified, 80% of the New Relic API is basically off-limits to us, because we have to enable high-security mode. And when your entire app uses one endpoint with GraphQL, versus a whole catalog of endpoints, it makes it much more difficult, without adding a whole bunch of extra code, to see what the problem is inside of a transaction trace. I think we also have to acknowledge that whenever you migrate from one technology to another, there's an opportunity cost. We could have spent a lot of the time that we put into this migration from REST to GraphQL on other features, and I know that hindsight is always 20/20, but I do think we have to consider this when we're making our decisions. So, how can we approach making these types of decisions differently?
At the beginning of this talk, I said that I believe a huge part of our job as engineers is to make quality, well-thought-out decisions that never put our own businesses or our customers' businesses at risk, and I also said that I think we can do a better job as a community of acknowledging and talking more openly about the fact that we're often forced to make really big decisions with limited amounts of information. I've always sort of felt this way, and because of my non-traditional software background, I've always sort of struggled to articulate it with engineers who come from a CS background, who are really, really bullish on working with new technologies and learning new things, machine learning or whatever it happens to be at the time, and who don't necessarily, at least in my experience, always want to talk about how the decisions we're about to make will affect the business. That was until recently, when our CTO shared a podcast that featured Annie Duke, the former professional poker player, and she was talking about her book that had just come out, called Thinking in Bets. I was absolutely blown away by this podcast, because even though it has nothing to do with software, it has everything to do with making quality decisions. So let's talk about some of the ideas I learned from it for making better decisions.

How many people in this room have ever been in a meeting where somebody presented an opinion or an idea as a fact? Pretty much everybody. We have to remove opinions that are presented as fact. Presenting opinions or ideas as fact is a toxic behavior, and what it does is create an environment that's uncomfortable and that discourages other people from giving input. Because when an idea or an opinion is presented as a fact, or with 100% certainty, it immediately shuts other ideas down, because in order for someone else to contribute another viewpoint, they essentially have to imply that the idea that was just presented as a fact might actually be wrong. It forces us into conversations about right versus wrong instead of discussing the costs and benefits of a particular decision or idea. I also think we need to get a little more comfortable with saying "I don't know" or "I'm not sure." I feel like sometimes we're discouraged from saying "I don't know" because it's perceived as being evasive or vague, but getting comfortable with saying "I don't know" or "I'm not sure" is a vital step in becoming a better decision-maker. In getting comfortable saying "I'm not sure," we're essentially embracing uncertainty, and by embracing uncertainty we're better in our decision-making process, because good decision-making processes include accurately representing the level of uncertainty about a given decision or action. Instead of saying "GraphQL is the right decision," what if instead we said, "You know, I'm about 75% sure that GraphQL is the right decision"? Simply saying we're 75% sure, and acknowledging that uncertainty exists, opens the door for other people and makes it much easier for them to tell us what they know about the problem and contribute to the discussion.

Once the decision has been made, we need to acknowledge that what makes a decision great is not necessarily that it has a great outcome. Sometimes you make really great decisions and things just don't work out, for any number of reasons, and that doesn't necessarily make the decision a bad one. A great decision is the result of a good process: accurately evaluating the unknowns and making the best decision with the information we have. Drawing an overly tight relationship between a decision and its outcome is what Annie refers to as "resulting": working backward from results to figure out why things happened leads to cognitive traps, like assuming causation when there's only correlation, and it doesn't really leave room for the uncertainty that is always going to exist, and these traps lead to terrible decisions in the future. Speaking of terrible decisions: how about
we remove any bias from choosing a technology simply because we want to learn something new? Let's just call it out and be transparent about our own wants and needs, so that those wants and needs don't overshadow the decision and we don't end up making a decision that won't be good for the business. Lastly, I think we should embrace mistakes. It doesn't ever feel good to have to admit that we made a mistake, and our society teaches us that mistakes are shameful. But what if, instead of feeling bad when we have to admit that we made a mistake, that bad feeling actually came from the thought that we might be missing out on an opportunity to learn something, just to avoid the shame or the blame of having to say, "Hey, I messed up"? Shame or blame that, in my opinion, shouldn't exist in the first place. So I propose that instead we hold each other accountable for this, and we embrace people who admit making mistakes, so that we might learn something new. Let's rewire our brains to bring less certainty and more empathy to our conversations, and let's get excited when a teammate walks up and says, "Hey, I think we made a mistake." I did leave some time at the end for questions; I'll do my best to repeat them, if I remember to do that.

So, the free documentation: there is a tool with GraphQL called GraphiQL, and inside of your Chrome developer tools you can go in and use GraphiQL. What GraphiQL does is... actually, I'll go back to that slide in a second. You can see here that at the top of our patient profile query we have WebSocket stuff, and current dependents, and patient profiles, and streams, and so on. So basically what you see here is a menu of all the different query strings that we accept in our API, and then what you can do is use this tool to build out exactly the query that you want. In this particular case, you can see we have our stream, and that stream has all these different fields that you can query for, so maybe you need allergies, or their date of birth, or their history, or whatever. And if you don't actually need that data for a particular request, you can remove it; you don't have to ask for it. So essentially this is all of the API documentation you could ever need, because it's interactive, and anybody who's going to be consuming this, all they have to do is go here, look at the different queries that are available, and there you go. It's actually quite awesome and fun to play with, too. I hope that covers what the question was.

The next question was about the gems that are built on top of the graphql gem, and whether or not we've experimented with any of those. At this point the answer is no, we haven't yet. One of the best things about this community is how quickly things move and how many new gems keep getting introduced; even since I started writing this talk, I have discovered, like, two other tools. Obviously, what we have to do over time is make the best decision that we can with the information at the time and move forward with that, and I think that as we start to discover those tools, and they become more and more mature, we'll probably experiment with some of them. You know, I'm pretty conservative when it comes to choosing tools; that's one of the reasons I love working in Rails so much. I like things that make sense, that work, and that are well-maintained and well-supported, especially in our space, where we're dealing with health care, and so we have to be very, very judicious about choosing which tools to use, because we have to
make sure that our application is compliant. Yeah, I mean to take it back to a DHA said in his talk, you know, you know any time we choose it to us. Like we're relying on the goodness of the open source community. And I think you're my hope is that you know, it's part of the process of using graphql will be able to let you know just tell me some things and actually contribute back to that good so far. We've just been consumers. LOL, I knew this was coming.
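The field-trimming workflow from the GraphiQL answer above can be sketched as two query documents. This is only an illustration: the query and field names here (patientProfile, stream, allergies, dateOfBirth, history) are guesses based on the talk, not the application's real schema.

```ruby
# Hypothetical query documents, as a client might send them to a GraphQL
# endpoint. Field names are assumptions, not the real schema.

# A query that asks for everything under `stream`:
FULL_QUERY = <<~GRAPHQL
  query PatientProfile($id: ID!) {
    patientProfile(id: $id) {
      stream {
        allergies
        dateOfBirth
        history
      }
    }
  }
GRAPHQL

# If a screen only needs allergies, trim the selection set down.
# The server does less work and the response carries less data:
TRIMMED_QUERY = <<~GRAPHQL
  query PatientProfile($id: ID!) {
    patientProfile(id: $id) {
      stream {
        allergies
      }
    }
  }
GRAPHQL
```

This is the flexibility GraphiQL makes discoverable: every field in the menu is optional, and the client only pays for what it selects.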
Okay. The question was: I told you at the beginning, or at least in my abstract, that I would tell you whether or not I would do this all over again. The reason I didn't specifically address that in my talk is that I'm very hesitant to make any, you know, blanket statements or assertions, or present ideas as fact about whether or not GraphQL might be right, because I don't want to encourage or discourage anyone who's thinking about using GraphQL from doing so. I will say this: based on my personal experience, and in our space, where we have APIs that are consumed by multiple clients, the answer is yes, I would definitely do this again. If I were working on an application where we were only dealing with one client, I probably wouldn't do this, because the flexibility is really where the power comes in, specifically for all your different clients. That's my opinion. And I think also, if I were, you know, redoing it tomorrow, I would probably use GraphQL again, just because I really enjoyed the experience and, like I said, I have enjoyed the developer tools. So I won't say whether you should or you shouldn't, and if anybody has
questions about whether you should or shouldn't, I'm more than happy to have that conversation afterward. So the question was: how do we implement fine-grained access depending on attributes, where maybe one user has access to something and another user doesn't? GraphQL has support in the gem for authorization, and if you check out the graphql-ruby documentation, there is a lot of support in there for doing that type of thing. In our space, we kind of control all the consumers of our API, so this is not a public-facing API; we do have SSO integrations and stuff at the request of some of our customers, but we control that, so it's less of a concern for us. So I'm not as familiar with those tools, but for public-facing APIs, I know that the support is there. So the question is: because you always respond with a 200, how do we handle our error responses? That's a really good question. Basically, we do a lot of error handling; we use an interactor pattern in our application, and we've essentially engineered it so that, I mean, we hope we never actually get an error message, and we get very few of them at this point. But whenever we do, we have handling on the front end that understands, if an error happens to exist, what to do with it. And I think there's probably some room for improvement in how we handle those things, but I have been focusing the vast majority of my time on the back-end side of the stuff and less of our time on the React side. The question was:
what's my experience with mutations, not just queries? To me, it's pretty much tomato, tomahto; pretty much the same thing. Like I said, we use an interactor pattern, where we have objects that are specifically responsible for creating data or, you know, whatever. And actually, that's a great question, because one of the things that I learned about our application when we made this migration was how well we had architected it, and how well the people who came way before me architected it, because when we switched over from REST to GraphQL, there was very little that we needed to do in terms of interacting with the objects that were responsible for doing that work. Like, when we create a message, it has to do a bunch of additional stuff, right? We have push notifications that go out and all kinds of other things. But all those responsibilities were abstracted into that object, so all we had to do, instead of calling that object from our REST controller, was call that same action from our GraphQL mutation. And so it made the transition really smooth. The question is, do I use
GraphQL Pro, and if so, which features do I use? We do not currently subscribe to GraphQL Pro. We have considered it. One of the other things that I don't talk about in my talk is that, during the time of our migration from REST to GraphQL, the graphql-ruby gem underwent, like, a massive overhaul: it moved from a function-based syntax to a class-based syntax. And so, as part of that process, we were a little bit hesitant to go further with it until it was more, you know, fully baked and solid. And so, you know, I know that it's very stable today, and it's worked out great for us, but we haven't gone to that next step yet. That doesn't mean that we won't; we've definitely considered it, but not yet. I'm actually out of time, so if anybody else has questions, please come see me. We have some T-shirts and stickers and stuff for you. Also, thank you so much for your time. I really appreciate it. And if you have feedback.
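The interactor hand-off described in the mutation answer above can be sketched in plain Ruby. This is a minimal sketch with hypothetical names (CreateMessage, rest_create, graphql_resolve), not the application's real code; the point is that one object owns "create a message plus its side effects," and both entry points are thin wrappers around it.

```ruby
# One interactor owns creating a message and its side effects
# (push notifications, etc.). Names and internals are illustrative.
class CreateMessage
  Result = Struct.new(:message, :notified, keyword_init: true)

  def self.call(body:)
    # Persist the message (stubbed here as a plain hash).
    message = { body: body }
    # Kick off the additional side effects the talk mentions.
    notified = send_push_notification(message)
    Result.new(message: message, notified: notified)
  end

  def self.send_push_notification(_message)
    true # stand-in for the real notification call
  end
end

# REST controller action (sketch): delegates straight to the interactor.
def rest_create(params)
  CreateMessage.call(body: params[:body]).message
end

# GraphQL mutation resolver (sketch): calls the exact same interactor,
# which is why the REST-to-GraphQL switch touched so little code.
def graphql_resolve(body:)
  { message: CreateMessage.call(body: body).message }
end
```

Because both wrappers are this thin, swapping the transport layer leaves the business logic untouched.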