Test Leadership Congress
June 27, 2019, New York, USA
Priyanka Halder - Taming Your Dragon: From No QA to Fully Integrated QA

About speaker

Priyanka Halder
Head of Quality Engineering at GoodRx

Priyanka Halder has more than 12 years' experience in Quality Assurance, gained at GoodRx Inc., Heal, and Homeme Inc. She currently heads the Quality Engineering team at GoodRx, which serves more than 8 million Americans every month who use it to purchase their medication. Growing up in India, Priyanka today sees herself as a Californian and is building her career helping QA teams adopt technologies like visual validation, test stabilization pipelines, and CI/CD for each code deployment. She loves to tackle QA's technical challenges with the help of modern tools like Selenium and Applitools. She won the World Bug Battle Championship 2014 and was also a recent speaker at the Wonder Women in Tech conference. In her spare time Priyanka loves to travel, try out new foods, and spend time with her super-active 3-year-old daughter.


About the talk

Many companies struggle with their QA processes and think of them as bottlenecks to their releases. Join me on my journey of transforming QA and improving its reputation and reliability.

Transforming QA has required me to take deep dives into some of the technical challenges organizations face when they try to integrate an automation-focused QA team. I will be talking about 3 stages of QA adoption: strategies for smart learning from failure, pre-planning for success, and key action points for successful implementation (POC, tools, team structure).

I will also review how I created transparent and effective QA processes that helped gain trust in QA from other teams, such as product and engineering, as well as from senior management.

00:30 Intro: my team and I

01:27 Why the topic is Taming your Dragon

02:28 How we can help you achieve better quality, faster

03:18 What exactly does QA mean? What does a test engineer actually do in their day-to-day life?

04:09 Some key factors of QA adoption

04:26 QA Adoption. Factor 1: size

06:12 QA Adoption. Factor 2: bug tolerance limit

07:17 QA Adoption. Factor 3: various others

08:45 The formula: a 3-step success model

09:10 Learning from failures

12:11 POC and implementation

14:49 About the test pyramid

16:20 The famous story of the “3 little pigs” and what we can learn from it

17:20 The case study: GoodRx

17:49 What is our QA goal?

19:25 What are managers looking for when hiring testers?

20:56 Talking about Tech Stack

22:00 About timeline

23:40 CI/CD requirements / criteria for success

24:41 Let’s talk about shift left and risk mitigation strategies

27:16 The framework: key highlights

28:23 Browser base class

29:07 Page class

29:42 Element wrapper class

31:50 Various integrations

32:19 Finding the differences: examples

38:13 Visual testing

41:48 Q&A


These are my learnings about what is needed when you go to a new company and start a brand new team. 00:03 I will go through the strategies you should follow, and also a case study of what I have done at GoodRx in the last 9 months. 00:11 The later part will be about 00:17 our latest trend, visual regression, which we are using to scale our QA department. 00:23

A little bit about myself: that's my husband in the background and my 3-year-old daughter. I was born in Calcutta, and since 2010 00:30 I have been in the USA. I love traveling: 00:38 I have already traveled to 5 continents and am looking forward to the other two. 00:40 I love to work out. 00:44 I'm a workout-aholic; if you are on my team 00:46 you have to learn to listen to my workout stories, 00:49 almost every day. And I love to meet new people, so please 00:52 talk to me at the end of the session 00:56

and give me feedback on how I did. 00:59 This is the strategy for my team: I have a dedicated SDET team, a service team which helps with all our releases, 01:03 and I have an offshore team back in India 01:13 who do the testing while we are sleeping here, so I almost have a 24x7 cycle. The release QAs help with new features, 01:16 the SDETs write our code and tools and help with the releases, and our offshore testers help with the releases too. 01:27

It's like having more hands: people who can work faster and deliver faster. 01:34 So why did I choose this topic, Taming Your Dragon? Everybody must know that a dragon is like a serpent 01:41 which vents fire, right? In my career 01:50 I have found that a lot of companies think 01:53

QA is like a dragon: we only report defects, we are bottlenecks, we slow down releases. A lot of them think QA is not needed, and there are cases where 01:56 they can do without it 02:09 and be successful. QA 02:09 just comes across as strategists or coaches, 02:14 and I'm going to go through that. 02:17 But going back to the same cartoon movie that was on my first slide: if you utilize us towards your goal, 02:19

we can help you achieve better quality, and we can help you achieve it faster. 02:28 That's what the movie showed, right? I'm a diehard cartoon fan; 02:32 I learn from everything, and I have a 3-year-old, 02:36 so I'm forced to watch all the movies. 02:40 In the movie there is a dragon whose name is Toothless, and this guy called Hiccup is really different from all the other people: 02:42 everyone in his community thought they had to fight dragons. 02:50

But this little guy went and befriended the dragon 02:53 and utilized him for his own community. It's the same thing: if you want to integrate with QA, think of them as a friend, philosopher and guide, 02:56 and they're going to work along with you to help you achieve your goal faster and better. 03:06 I'm sorry, the quality of this picture is not great, 03:14 but I just want to touch on what exactly QA means, 03:18

and what a test engineer actually does in their day-to-day life. 03:22 It's not just looking at requirements and checking whether things match them or not. 03:26 We wear a lot of hats: sometimes we're designers, 03:32 sometimes we are strategists, 03:36 sometimes we're communicators. 03:36 Sometimes, 03:38 I would say, you have to be very political when talking to some 03:38 people on the team, so that you are not offending them. 03:44

Sometimes you have to just observe from afar whether they're getting offended. So in our day-to-day job 03:48 it's very, very important that we wear these different hats. I got this from LinkedIn, and I was very fascinated by how much one person does on every 03:55 team, and it's so true. 04:05

The next slides talk about some key factors. If you were to join a team tomorrow where there is no QA, and you had to integrate a brand new QA team, 04:09 what should you think about? The next few slides are going to cover that. 04:20 The first factor 04:26 I found is size: 04:26 you could end up working for different kinds of companies. 04:29 It could be a startup which is not yet live, where the risk is really low. 04:34

You don't have to support a lot of people, 04:40 but the budget might be an issue, 04:42 so you might be restricted to tools which are free. 04:44 You might even have to support 10 developers by yourself, and you have to come up with a strategy for how, with a minimal budget, you will be able 04:47 to integrate. One example 04:56 I will give is from when I was working for a startup called Homely: 04:57 there was no budget, so my whole team was crowdsourced. 05:01

I utilized a crowdtesting company called Applause, and I used to send all the releases to them, and they used to be my testers. 05:03 When I came up with that strategy, singlehandedly 05:10

I was able to support their app, web, and API. 05:13 So: how can I utilize other resources, 05:16 the crowdsourced people who can come and help us? The next kind is a startup that may not have started doing QA yet, so there your job is to introduce a process. They might not have budget issues, but 05:26 the risk can be very high in some cases, because when they are integrating QA they're expecting 05:36

that you will be giving them really good quality. 05:42 And then there are the big enterprises like Visa 05:46 and Walmart, where they have really big teams. I was interviewing 05:49

a person from Pandora, and he said his team is 37 people, and he has a dedicated person who goes and makes sure all the mobile devices are on. 05:53 Not everybody is as lucky to have that, so when you integrate a big team in a mature enterprise, your strategies will be completely different. 06:02 The next factor is the bug tolerance limit, i.e. the risk. 06:12

We all know that Facebook did not have any QA for a long time, right? The risk was really low, and it varies from product to product. 06:17 If it is a pre-release 06:27 app, then the risk will be really low because it's low volume. 06:29 But then there can be companies 06:33 like Heal, a doctor house-call company, where a doctor comes to your house and checks on you. 06:35

There, if your medical history is not working, if your health records are not working, if your IoT devices are not working, 06:43 then there are no questions: it cannot break in production. You have to be very sure that your strategies are working for that company. 06:50 Then come to my present company, GoodRx: 07:00 they serve almost 10,000,000 customers every month, 07:02 so even one bug means thousands of dollars. 07:05 So your strategy will vary in how you make sure 07:08

the quality is assured for that kind of company. 07:13 The next factor is the test environment. 07:17 When you end up being the first test manager, 07:19 sometimes you will be surprised to see that there is no QA environment: 07:22 there's only a dev environment, and they push directly to production. 07:27 So it's your duty to teach them that you need a deterministic environment to test what you are testing. 07:31

It's very, very important to have a test environment. 07:37 Another big factor is team culture: 07:40 QA is the responsibility of everyone. 07:42 I'll give the example of GoodRx, where everybody 07:44 tests the product: the developers, the QAs, my SDETs. 07:48 It's not like we are a catch-all, where they throw things over the fence and we're going to find everything; that does not help. The last factor is tools. 07:51

When you join as a QA manager, or if you are leading a company, 08:00 it's very, very important that you choose the right tools based on what kind of strengths you have. If you happen to have QA testers 08:05 who are not able to write Selenium WebDriver tests, you might choose plug-and-play 08:13 record-and-playback tools. But you also have to assess whether record 08:18 and playback 08:22 is going to work for your company or not. 08:22

Is your product complex enough that that kind of tool will not work? 08:26 These kinds of strategies really help you succeed. 08:30 There is a very simple quote 08:33 which says that the path to success and the path to failure are almost exactly the same, 08:35 and it's up to you which path you choose. 08:39 I really believe in this. 08:41 So 08:45 this talk is about the strategies I took when I joined my last 3 companies. 08:45 I came up with a formula of 3 steps. 08:50

First, in the first few days, just learn from the failures: why did the company not have QA? 08:55 Why did they fail before? Then plan what you want to use, do a proof of concept, then implement, and then scale. 09:01 When I started learning from failures, 09:10 these are some of the points that came up, and number one was imbalance. 09:12 Usually, when a company starts integrating QA, 09:17

there is a really low number of QAs, like one or 2, and they think they're unicorns who are going to support 09:20 everybody and find everything. As a QA manager, 09:26 I think we have to set the expectations right and say that that does not work: we have to scale, or come up with a strategy so that there is a balance 09:29 between 09:39 QA and the developers. Then, there is no process. 09:39 I think one of the key things you can do as a QA manager 09:43

is introduce processes: 09:47 how will the Jira workflow 09:50 work for your team? 09:52 How will you be doing automation in the CI 09:54 pipelines? Will there be manual 09:57 QA needed on production to do sanity checks? Do you run regression every night? 09:59 Those kinds of strategies. Then there is the fragile regression suite 10:04 which takes hours and days to run. As a manager, 10:07 I think it's very important 10:10

that you educate your team on the newest technology and make sure they know the industry norms: 10:12 how much time should it take to run your regression cycle? I have a few slides to go over that, and I'm going to show you what the 10:17 industry standards are. The major problem 10:28 I found in most companies is that there are no test hooks. 10:30 If you just join a company and you just start writing automation tests, 10:34

they will be very, very fragile, because if you don't have dedicated test hooks or IDs, 10:39 they will fail left, right and center. I'll give the example of companies like GoodRx: 10:45 we do a lot of A/B testing to make sure our users are getting accustomed to our features, and 10:50 if we don't have test hooks, our tests will not run; they will fail 11:00

every time we push to production, and we push almost 12 to 15 times a day. 11:05 Then there can be infrastructure issues as well. 11:12

When you land as the first test manager, you might see that there is no environment 11:15 where you can test end to end: no staging environment, 11:24 like pre-production, that you have to make sure is exactly like production so you can verify your tests there. Sometimes test environments are completely different from what production is: 11:33

they were not thought through, the databases are not as big, all those things. 11:38 And then automation: educating yourself on the latest technology and trends gives you an edge to scale your team really fast in a very small amount of time, 11:43 and I'm going to cover some visual regression testing in the later part of my talk. 11:53

So, what are some of the POC points that made me and my team very successful? Tool selection, again: when I joined GoodRx there was only one 11:59 QA automation engineer, and we jotted down what we needed for our team. 12:11 What is the return on investment we're looking for? 12:17 What will the tool be doing? Should we go for record and playback, or should we use Selenium WebDriver? Then, setting up realistic 12:21

expectations. One thing I see a lot in our industry: 12:30 when people try to do POCs, they just write a couple of tests. 12:33 They just pick up one tool, write two tests, and think, "Oh, 12:37 this is an awesome 12:41 tool, I'm going to use this one," and then later, when they start to scale with it, 12:41 they hit a roadblock, because that tool might not be helpful or scalable to thousands of tests. 12:47 So be realistic: 12:55

do at least 40 to 50 test cases when you do a POC, and do it for a month at least. 12:55 See how they are performing: are there any roadblocks? Is the tech support good? 13:01 Is there a community around it? Does that company innovate enough for you to stick with that tool? Take the example of Selenium WebDriver: 13:06 almost everybody 13:15 uses it as their automation tool. The same thing with Applitools: 13:15

they're the number one company for visual regression, and they are innovating at a very high speed; if you look at their data and product, they're inventing 13:20 every day. 13:30 They have developer advocates 13:30 who go and talk about 13:34 the cool things they're doing. Now I will tell a funny story: 13:36 how many of you know about a tool called MonkeyTalk? 13:41

Anyone? No one in this room knows it. 13:45 In 2012, when I was working for a company called TrueCar, 13:47 I took that tool, which 13:50 I don't even know where I found, and I thought I was going to automate all mobile automation with it: 13:52 "This is really cool, and they have a training program." 13:58 I went and did the training program, came back, wrote 3 14:01 or four tests, and after that 14:04 I hit a really bad roadblock, because they had no community support. 14:06

Not many people were using it, so I could not Google the blockers I had. 14:10 That was a really bad tool choice. 14:15 You might be getting a lot of emails from a lot of vendors saying, "Oh, we are the record-and-playback 14:18 tool, we have lots of users," but be very, 14:25 very cautious when you're introducing a tool to your team as a manager or a lead; check whether it really provides any value. 14:28

Maybe it's very simple to record and just play it back, 14:36 but will it be helpful in the long run? Will there be ROI on it or not? 14:39 Then came the scaling part. 14:46 I'm sure everybody here knows the test pyramid. 14:49 Usually, in all my companies, 14:52 the back-end developers are awesome 14:54 and have a huge collection of unit tests. When I joined as the first QA manager, 14:56 we had really long manual test cycles. 15:02

We did not have any API tests. Then we started building the automation layer on top of that, where we made sure that we 15:05 automated our core happy paths first, 15:13

then edge cases, and then the whole regression suite, and then we connected it to the pipelines. On top we use visual regression; that was something given to 15:16 me in my present company by the higher-ups, and it's a game changer. Think about having to check maybe 15:28 300 pages manually 15:40 every time a release goes live; that's impossible for a human to get right. 15:40

So we do visual regression on top; we started doing that in the first few months. 15:46 This was our pyramid, but it has changed today: 15:52 now we have a really large set of visual regression tests, and on top of that we do manual testing. 15:54 Just to tell you, I have some Easter eggs all over my slides: 16:02 if you happen to find any mismatch in the UI, or any spelling mistakes or anything like that, 16:06

you are entitled to a really cool bottle of water; 16:14 come to me at the end and grab one. 16:18 Now I want to pause for a second and tell you a story. I have a 3-year-old, 16:20 and she goes through all these cartoon stories; there is a very famous one called the Three Little Pigs. 16:26 It's about 3 little pig brothers whose mom 16:32 asks them to build a house. So they go out: 16:36 the first one chooses straw and builds a house within hours; 16:39

the second one chooses wood and builds his house 16:43 maybe in a day or 2; but the third 16:46 one takes days to build a house with solid brick. 16:49 Then a wolf came and tried to eat all of them. For the first one, 16:53 he could enter the house immediately; 17:00 the second one he could blow down; 17:02 but the third one 17:05

he could not destroy. So what we learn from this story is that any good thing takes time, and you need 17:05 to use the right tools to build something that lasts for the long run. 17:13 Coming to the case study: this is our website, GoodRx. 17:20 You go, 17:24 search for a prescription drug, and it shows you its pricing. 17:24

It looks very simple, but it's a very regulated industry, and you have to be very sure that when you are showing pricing, or showing information like warnings and FDA 17:29 updates, 17:39 they are top notch and there are no mistakes, because it can be very harmful if something is wrong. 17:39

Following the strategy that I showed you previously, we started by learning what our QA goals were. We 17:49 needed distributed QA to help a lot of developers immediately, so in the first month 17:56 we had the QA team's offshore option in place, and then ramped up onshore really quickly. Then we needed a dedicated SDET team who would not be asked to do any manual 18:08 QA. One problem 18:13 I see is that people today say, "We have QA engineers; 18:13 they automate 18:18

their tests and they do releases." 18:18 But the reality is, if you are supporting releases, 18:20 you hardly get time to do automation, 18:24 so that becomes a back-burner item and you don't get to succeed at it. 18:26 Then: choosing a robust framework and building a test stabilization pipeline. One thing that makes my team very successful is that we have a test stabilization pipeline using Travis, so every 18:31 time 18:43 anybody pushes code to master, 18:43

it runs against all the other tests using Travis, and then 18:45 it gives us a report that none of the other tests are affected, across browsers. 18:50 We support 8 different browsers 18:56 on 8 different devices, and it runs almost 500 tests to tell us whether the new test is affecting the other, older tests or not. 18:58 That's a great strategy 19:08 we're using here. And then the last goal was to automate 19:08

almost all of the regression suite 19:14 to reduce 90% 19:16 of the time it takes. 19:18 After we decided on our goals, 19:18 the next goal was to hire people, and to hire people we needed a strategy for who we would 19:25 be hiring and what skills we were looking for. So, to streamline 19:35 who we would be hiring: 19:39

these are some of the really, 19:40 really good points I found on LinkedIn. When you're hiring a QA analyst, their critical thinking has to be top notch, so when they are given a 19:43 project or feature, as a manager 19:53

you should not be hand-holding them; they should be independent enough to create their test cases, talk to the product owner or the developer, and work within the team. 19:57 These are some of the points that I really believe in when I hire somebody for my team: enthusiasm, team playing, and creative thinking. 20:07 These are really good points that you should take note of as well. 20:17

That's a very large topic, 20:26 but one thing I would say: suppose you're very frustrated while writing code, and you see that somebody has written a very bad XPath, and you just go to 20:27 the channel and say, "This XPath is bullshit." 20:37

It can emotionally harm the person who has written it, right? So you have to be intelligent enough to control your emotions 20:39 and talk to other people in such a way that they take your feedback constructively and work on it, and don't get emotional or defensive 20:46 when you are 20:56 putting across your point. 20:56 It's a very big topic, and there are a lot of readings online if you want to learn about it. 20:58

Then we selected our tech stack: 21:07 our back end is Python, and we use Selenium 21:09 WebDriver. We use 21:12 BDD, behavior-driven development, 21:12 so our QA 21:15 analysts can write the tests and our SDETs can do the automation. 21:15 We use BrowserStack as our cloud runner: 21:20 we use a hundred nodes, and our full regression suite takes less than 2 minutes. 21:23 Then Applitools for visual regression. 21:31

We have Jenkins and Travis: our pipelines run on Jenkins, 21:34 but the stabilization pipeline that I just explained, where every new test runs against our own tests to make sure it's not breaking the other tests, 21:38 uses 21:49 Travis. We use Jira for bug 21:51 tracking and sprint tracking, and we use TestRail for test case management. 21:54

After we decided who we were hiring and what the exact goals were, this is the timeline we laid out. In June 2018 there was one QA; then 22:00 we were able to scale 300%, 22:12 and we had a senior analyst and an offshore team by the end of December 2018. 22:15 We had all P0/P1 tests automated. 22:20 We had the Spinnaker pipeline working: the back end was pushing 5 to 6 times a day, and it was running our automated tests with cross-browser support. 22:22

Something cool we do is that our mobile web tests run on real mobile devices. I have seen people just resizing the web page to make it look like mobile 22:33 web, but one thing we take pride in is that we run our tests on real mobile devices. 22:42 Then in 2019 we scaled another 300%, 22:50 and we were able to cover our full regression suite. 22:53

We were able to cover edge cases, and today 22:57 we run feature automation test-first: when developers develop, our automation 23:02 engineers create the tests. How do we do that? 23:07 Creating test hooks is a process embedded in our development: 23:10 when the designers show us a design, 23:16 we go through their design files and we tell them, "OK, 23:18 this element requires this 23:22

ID." So when developers are developing, they're already creating a data-qa-id for us, 23:24 and we don't have to go back and ask them big questions; it's embedded in our SDLC cycle. 23:29 For the CI/CD success criteria, we decided that 23:40 we would take data from our Google Analytics 23:44 and make sure that all the top 23:47 browsers are covered in our regular 23:50 CI/CD pipelines and visual validation. 23:52
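The data-qa-id idea can be sketched in a few lines: the test framework builds every locator from the one attribute the developers add, instead of brittle XPaths. This is a minimal sketch; the helper name and attribute usage are illustrative, not GoodRx's actual code.

```python
# Build CSS selectors from the data-qa-id "test hook" attribute,
# so tests never depend on page structure or styling classes.
# (Illustrative helper, not the speaker's real framework code.)

def qa_id_selector(qa_id: str) -> str:
    """Return a CSS selector for an element tagged with data-qa-id."""
    return f'[data-qa-id="{qa_id}"]'

# With Selenium this would be used roughly like:
#   driver.find_element(By.CSS_SELECTOR, qa_id_selector("drug-price"))
```

Because every element is addressed the same way, an A/B test that reshuffles the page layout does not break the locators as long as the attribute survives.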

We also care about speed: because we release almost 12 to 15 times a day, we could not have a pipeline which takes hours or days. 23:53 Ours takes only 2 minutes, because we use a hundred parallel nodes and we have created the framework in such a way that 24:03 the tests are not dependent on each other, so we can just launch a hundred tests together. 24:11
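The point about independence enabling parallelism can be sketched without a browser: if each test owns its own state, a pool can launch them all at once. This is an assumption-laden toy (the `run_test` stub stands in for a real browser session), not the actual pipeline.

```python
# Independent tests can fan out across workers; shared state would
# force serialization. Each "test" here is a self-contained stub.
from concurrent.futures import ThreadPoolExecutor

def run_test(case_id: int) -> tuple[int, str]:
    # A real test would drive its own browser session on its own
    # node; this stub just returns its own result, touching no
    # global state.
    return case_id, "passed"

def run_suite(n_cases: int, workers: int = 10) -> dict[int, str]:
    # Launch every case concurrently; wall time approaches the
    # slowest single test rather than the sum of all of them.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, range(n_cases)))

results = run_suite(100)
```

The same shape scales to a hundred cloud nodes: the scheduler changes, but the no-shared-state contract is what makes the two-minute suite possible.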

This is how the pipeline runs today: we have an hourly run of the P0/P1 tests, then we have pipelines which check all the pushes to production, and 24:19 then we have nightly regressions: 300 test cases that run every night on all the different browsers. 24:30 Now I want to stop for a second and talk about shift-left and risk mitigation strategies. At every meetup 24:41

I go to, everybody's talking about shifting left and developers testing, but what is this about, and how can even we as QA embrace it? Some cool 24:49 things I learned at my present company: you can do shift-left strategies 24:59 with the QA team as well. 25:04 There are products which we ship without QAing them in the lower environments, 25:07 but then we use feature flags. When you use a feature flag, 25:11 the feature is only available via that flag. 25:16

It's a parameter that you put on the URL, and the feature will only be available to you, because no other person knows what the feature flag 25:18 is. So you can push stuff iteratively to production, test with the production database, 25:29 mimic an actual customer, and test it there, 25:33 so you're not hindering your team's speed. So you can talk to your development team or your managers and explain that 25:36

even with fewer QAs you can help by testing in production using feature flags. 25:44 Then, traffic allocation: 25:49 we use 25:49 something called Split. When we introduce a new feature in production, 25:51 we go with 5% of the traffic and make sure that our feature is stable 25:55

and is not creating any cases where our traffic or our conversion is dropping. Gradually increasing the traffic to the newer product helps 26:00 us a lot in making sure the quality is top notch: 26:12 if something was missed by QA or the other teams in the lower environments, it gets caught in production. And the third thing I'm really vocal about is dogfooding and 26:16 bug bashes. 26:27
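The 5%-then-ramp idea is usually implemented with stable, hash-based bucketing, so the same user always lands in the same group while the percentage grows. This is a generic sketch of that technique, not Split's actual algorithm.

```python
# Sketch of gradual traffic allocation: hash each user id into a
# stable bucket 0-99 and enable the feature when the bucket falls
# below the rollout percentage.
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically assign a user to the rollout cohort."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket per user
    return bucket < percent
```

Start at `percent=5`, watch conversion and error metrics, then raise the percentage; because the bucketing is deterministic, ramping up only ever adds users to the new experience, never flips existing ones back and forth.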

We use something called Fastly CDN. How it works is, when a big feature is ready, 26:27 we open it up to our internal users to test it, and it's only available at our headquarters and other locations, based on the IPs. So when they hit 26:35 goodrx.com, 26:46

they will always go to the new goodrx.com, with the help of the CDN, Fastly, because we're routing the new goodrx.com to our employees to make sure they're 26:46 actually testing the newest features. So these are three very cool practices 26:58 which you can introduce to your team to shift left and mitigate risk on big projects, and make sure you're doing due diligence when you ship. 27:03 Coming to some statistics for GoodRx: 27:16

we have 2.5K visual regressions and 300 tests right now. 27:19 These numbers are a little old, 27:23 but we have tested almost 49 unique browser and device combinations in BrowserStack, and we have run more than 1,000,000 tests in the last 9 months. 27:25 This is actually a screenshot from BrowserStack 27:37 which says that in the last 6 months 27:41 we have run 837,000 tests, and that is the graph of how we scaled from zero to 800,000. 27:43

I showed you some very big numbers, and they come with criticality and responsibility. 27:54 So I'm going to give you some key highlights of what we do in our framework that makes our automation so successful. 28:00 First and foremost is a browser base class: every little click, every little action that we do on our website 28:08 is wrapped in our browser base class, and all the other tests actually extend it. 28:17

So we have the same actions for mobile web and web, 28:23 but when it comes to mobile web, 28:27 you might have to do something different sometimes: 28:30 sometimes you have to make sure the native controller is enabled, or you have to click back. 28:35

All these things are taken care of in our browser base class. So if you are an automation 28:41 manager and you are coaching a new team, make sure you know about these things and you implement this kind of base class to make your automation super 28:48 successful. 28:58 The next piece is the page class. I did tell you that 28:58 creating data-qa-ids is embedded in our SDLC process. 29:03 This is the data-qa-id: 29:07 see, all the elements that we write 29:09
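The base-class idea above can be sketched with a stub driver: every action funnels through one method, and the mobile-web subclass overrides only what differs. A minimal sketch, assuming hypothetical class and method names; `FakeDriver` stands in for a real Selenium driver.

```python
# Browser base class sketch: one choke point for every action, with
# a mobile-web subclass layering in its platform quirks.

class FakeDriver:
    """Stub recording actions, in place of a real WebDriver."""
    def __init__(self):
        self.log = []
    def click(self, selector):
        self.log.append(("click", selector))

class BrowserBase:
    def __init__(self, driver):
        self.driver = driver
    def click(self, selector):
        # Single place to add waits, retries, and logging for web.
        self.driver.click(selector)

class MobileWebBase(BrowserBase):
    def click(self, selector):
        # Mobile web sometimes needs an extra step (e.g. dismissing
        # a native control) before the same shared action.
        self.driver.log.append(("dismiss-native", None))
        super().click(selector)

web = BrowserBase(FakeDriver())
web.click("#buy")
mobile = MobileWebBase(FakeDriver())
mobile.click("#buy")
```

Tests written against `BrowserBase` run unchanged on mobile web; only the base class knows about the platform difference.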

have a unique data-qa-id here. 29:13 The way we have implemented it 29:16 is called lazy initialization: 29:18 we declare all the elements before we even use them in our tests, and when you call an element, 29:18 behind the scenes it does some plumbing. It waits for the element: 29:29 if it is a button, it waits until it is activated; if it is a text box, 29:36 it waits for the cursor to be active. 29:42

This makes sure our tests are not fragile, and you don't have to call the same wait functions again and again at each step, 29:44 which slows tests down; it's taken care of in the framework itself, in the way we have written it. 29:51
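Lazy initialization of page elements is a standard descriptor trick in Python: declare the element up front, resolve it (with a type-appropriate wait) only on access. This sketch uses hypothetical names and a stub driver; it is not the speaker's actual framework.

```python
# Lazy page elements: declared at class definition time, but the
# wait + lookup happens only when a test touches the attribute.

class LazyElement:
    def __init__(self, qa_id, kind):
        self.qa_id, self.kind = qa_id, kind
    def __get__(self, page, owner):
        # Resolved on access: wait according to element type, then find.
        page.driver.wait_for(self.qa_id, self.kind)
        return page.driver.find(self.qa_id)

class FakeDriver:
    """Stub recording events; a real driver would poll until a
    button is clickable or a text box has an active cursor."""
    def __init__(self):
        self.events = []
    def wait_for(self, qa_id, kind):
        self.events.append(("wait", qa_id, kind))
    def find(self, qa_id):
        self.events.append(("find", qa_id))
        return f"<element {qa_id}>"

class SearchPage:
    search_box = LazyElement("search-box", "textbox")
    def __init__(self, driver):
        self.driver = driver

page = SearchPage(FakeDriver())
element = page.search_box  # wait + find happen only here
```

Because the wait is baked into the descriptor, no test ever writes an explicit wait, which is exactly what keeps the suite both fast and non-flaky.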

This is another wrapper class, for Amplitude. 30:00 We suggest everyone use wrappers when working with third parties, because sometimes they introduce a new version of their SDK which is not compatible with the 30:03 old version, and if you don't have a wrapper class you'll be lost: you'll have to rewrite everything from the beginning if they change the way a function works. 30:14 So it's super-duper important that you do this in your framework: use a wrapper class, and 30:24

log all the exceptions, making sure that 30:33 you do all this in one place. For iOS, for example, 30:37 you have to make sure the top header is cropped off, because that contains the time and date. All this plumbing is done here, 30:41 which makes our tests super fast and reliable. 30:51 One thing I want to mention is that we use the page object model. 30:54
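As an illustration of that wrapper idea, here is a small Python sketch: one class owns every call into a third-party SDK and logs exceptions in one place. The VendorSDK class here is a stand-in I made up; the talk's framework wraps real vendors such as Amplitude.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("analytics")


class VendorSDK:
    """Stand-in for a third-party SDK whose API may change between versions."""

    def send(self, name, props):
        if not isinstance(props, dict):
            raise TypeError("props must be a dict")
        return f"sent:{name}"


class AnalyticsWrapper:
    """The only class in the framework that touches the vendor API directly.

    If the vendor renames or re-signs send(), only this class changes;
    every test keeps calling track() unchanged.
    """

    def __init__(self, sdk):
        self._sdk = sdk

    def track(self, event_name, **props):
        try:
            return self._sdk.send(event_name, props)
        except Exception:
            # Exceptions are logged in one place instead of per call site.
            log.exception("analytics call failed for %s", event_name)
            return None


wrapper = AnalyticsWrapper(VendorSDK())
print(wrapper.track("search_performed", query="ibuprofen"))
```

The design point is isolation: a vendor SDK upgrade touches one file instead of every test.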

It helps us keep all the elements in one class, so if something changes 31:00 we go to one place 31:08 and change it there; 31:08 it's very easy to maintain. 31:11 Another cool thing we do: whenever our backend engineers push something, we get pinged in Slack. 31:15 We use the BrowserStack 31:21 APIs to tell us exactly what the failure is, so we don't need any other reporting. 31:21 You can just go to the BrowserStack dashboard and see exactly what failed and why. 31:31 It actually tells you here that we were expecting something in North Carolina: 31:35 it was supposed to be false, 31:39 but it's true right now, like that pricing is showing up. 31:41 And this is our pipeline, a Spinnaker pipeline, 31:46 which shows that our cross-browser integration tests are integrated there, and this is BrowserStack. 31:50 OK, so this is... 31:59

This is the visual testing that I'm going to talk about, and it's super cool. What it does: 31:59 it's like giving eyes to your automated tests; it automatically tests the look and feel of your website. I'm going to show you some more, 32:09 but let's do a fun exercise: 30 seconds, tell me the nine differences 32:19 you see in these two pictures, and whoever answers gets a prize today. 32:24 Oh. 32:31 Which one is it? 32:31 Right. 32:36

The copyright is different there. 32:43 One at a time, 32:43 please, let me choose. Yes. 32:50 This thing is different right here; 32:53 there are four there versus three. Yes. 32:55 It's not there? He's right. 33:00 Yes. 33:01 Yeah, it's actually missing, right. 33:01 The fingers are different, right. 33:09 I'm sorry, what is it? 33:09 The signature is not there on the left. Anyone else? 33:17 Yes, absolutely right. 33:24 Anyone? There's more. 33:24 33:30 Yes. 33:30 Yeah, 33:30 she's right. 33:43 Anything else? There's more. 33:43

Yes, already. 33:50 Yeah, 33:50 that's all of them, 33:53 right, because one of the fences was missing, so the bush... yeah. 33:53 Cool. So you saw it took 30 people 34:07 30 seconds to find nine or so differences. 34:10 Now just think about a poor QA analyst whose job is to do this day in, day out, 15 times a day; I think he or 34:13 she will go mad. 34:24 So this is actually the difference: 34:24 the leg is a little bigger, which nobody said. 34:28 This has a difference. 34:31

Here 34:31 the hand is actually a little up, 34:33 and this hand is actually a little longer; 34:36 those are other differences 34:39 nobody even said, because they were not visible to the naked eye; we could not spot them. 34:39 But if you do an actual pixel-to-pixel comparison, 34:49 you can see the differences: how the leg is shifted a little, how the hand is a little up or down, how that leg is missing. 34:53

So using automation, this is super easy. 35:05 You see within a few seconds 35:08 what the differences are on a website 35:11 that is going live in production, if there are any. When we introduced it, one of the advantages 35:14 we got is that we increased our release velocity like anything: from one or two releases a day, or weekly releases, we shot up to 10 to 15 35:21 times a day, because of visual automation. 35:33

And when it comes to visual automation, 35:35 you have to think about quite a few things. 35:38 Our sites are dynamic, the pricing changes, 35:40 so if you do pixel-to-pixel comparison 35:43 it will fail every time. We needed a tool where we could ignore some regions 35:46 and use different kinds of comparisons, and this tool actually gives us that: it has different match levels. 35:51 So if you say layout, 35:57

it's going to go to the website and just check the layout of that page; it will not check the content, and it will not check the exact values. 35:58

Same with content: if you just say content, it will only check that the content matches, 36:08 but it will not check other things. And if you do strict or exact, it will do pixel-to-pixel validation. You can also tell the 36:14 test to ignore colors, which is super cool, because we run a lot of experiments with colors and fonts, and sometimes we don't want to test those; we just 36:23 want to ignore them. And they have some artificial intelligence, 36:32 so every time you thumbs-up a failure, 36:36

their system learns it automatically: OK, 36:38 I'm not supposed to check this next time. So their system is learning continuously. Those are some of the features. 36:49 Now, coming to the fun part, the actual findings. In my introduction slide 36:49 I said my guilty pleasure is 36:59 finding bugs in different apps. 37:01 So you see the left side: 37:03 it's Macy's, and the error message is actually behind the button. 37:05

A normal functional test will not find this 37:10 bug, because the text is present; 37:13 it's just hidden behind a button. 37:15 Same thing: some random code snippets showed up on 37:19 GoodRx blogs; a normal automated regression test will not find it. And the right side is actually Amazon, 37:23 my real account: the product image and the descriptions became null one day, and if they had visual regression, 37:31

they could have caught it before it went to production. 37:39 And these are examples from our own pages that we found in a lower environment after integrating visual regression: 37:43 there is an extra 0 after the pricing in each row, 37:50 and some integration went wrong and the full page became black. That's how the tool shows 37:55 the comparison: what it was before, the baseline, and what the new one is. 38:00

Then here some random dot showed up, 38:05 and here one of the icons is missing. 38:08 So you see, there is a lot of value when you do this visual regression, and you can scale with it like nothing else, because a normal manual test would take a 38:13 long time. As a manager, you can introduce this tool, scale your team horizontally, and find these cool bugs 38:23 in lower environments. 38:32
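The ignore-regions idea from a moment ago is easy to see in miniature. This Python sketch is not how Applitools works internally; it is just a naive pixel diff over small grids, where a masked region (say, a dynamic price) is excluded so only real regressions surface.

```python
def diff_pixels(baseline, current, ignore=None):
    """Return (row, col) coordinates that differ, skipping masked cells."""
    ignore = ignore or set()
    diffs = []
    for r, (brow, crow) in enumerate(zip(baseline, current)):
        for c, (old, new) in enumerate(zip(brow, crow)):
            if (r, c) not in ignore and old != new:
                diffs.append((r, c))
    return diffs


baseline = [[0, 0, 7],
            [0, 1, 0]]
current  = [[0, 0, 9],   # (0, 2): a dynamic price, expected to change
            [0, 0, 0]]   # (1, 1): a real visual regression

print(diff_pixels(baseline, current))                   # both cells flagged
print(diff_pixels(baseline, current, ignore={(0, 2)}))  # price masked: only the bug
```

Without the mask, the dynamic price would fail the comparison on every run, which is exactly why strict pixel matching alone does not work on a dynamic site.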

Now some industry standards. 38:32 Sauce Labs, one of the key players among cloud 38:37 runners, recently published about continuous pipelines, and fortunately 38:41 I feel our team follows it well. Some of the quality 38:44 benchmarks are: 38:50 what percentage of your automation tests pass every time you run? 90% 38:50 of the time they should be passing. 38:57 What should the test runtime be? The average test runtime should be 2 minutes or less. 38:59

Test coverage: you should run at least 5 platforms on average. And what should the concurrency be? 39:06 Suppose you have bought 25 parallel nodes. 39:13 When you're running all your tests, 39:17 you should be utilizing 75% 39:20 of those 25 nodes; otherwise you're wasting money. 39:22 So when they did the survey with some of the biggest players in the market, like Walmart and Visa, 39:26

they came up with data showing that only 18.8% 39:34 actually fulfill this quality criterion, that 90% 39:38 of their tests pass when they run them. Only 35.4% 39:41 of companies in the industry 39:46 have tests that run in less than 2 minutes. 39:48 So it's very important to make your tests 39:52 individual and independent, so they are not taking a long time 39:56 and not waiting on another test. 39:59

Then cross-platform: only 62.5% 40:01 of users test on 5 or more platforms. Test concurrency is pretty good: 70.9% 40:04 of users are actually using what they're paying for. 40:11 But if you look at this number, 40:14 it is surprising: only 6.2% 40:17 of organizations achieve all 4 benchmarks. 40:19 So we have so much more to do here, as managers and leads in our companies, to get to that benchmark. 40:23
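The concurrency benchmark above is simple arithmetic; here is a quick sketch, with the node counts as illustrative numbers.

```python
# If you pay for 25 parallel nodes, a full run should keep at least 75% of
# them busy; anything less means you are paying for idle capacity.

def meets_concurrency_benchmark(busy_nodes, purchased_nodes, threshold=0.75):
    """True when a run uses at least `threshold` of the purchased nodes."""
    return busy_nodes / purchased_nodes >= threshold


purchased = 25
print(meets_concurrency_benchmark(19, purchased))  # 19/25 = 0.76, OK
print(meets_concurrency_benchmark(12, purchased))  # 12/25 = 0.48, wasting money
```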

These are again some of the slides where they said 40:34 only 35.4% of 40:39 users run under 2 minutes, and there are 7-plus-minute users as well, 40:39 which is 10.42%. 40:46 And this is true; I know it from my experience: 40:46 when you start doing automation, 40:52

we just use Chrome, and we don't want to do cross-browser because it's hard and it takes a lot of time. 40:54 So you see, 65% 41:02 of people are using only Chrome when they do cross-browser testing, and the other percentages are very small, around 6.2. 41:04

And only 8.1% of companies use real mobile devices, 41:12 which is really, really surprising, because if you want to test properly you have to test on a real mobile device; 41:16 resizing a browser doesn't help much. 41:25 There is a 3-minute demo, 41:30 but I think I'm almost 3 minutes away from finishing my talk, so 41:32 I'm going to give you guys a chance to ask me questions. 41:36

Right, we use BrowserStack. We have 10-device concurrency for real-device app automation, 41:48 but we have 100 concurrent devices with them for mobile web, 41:53 all the latest ones. 42:01 So that's how we handle it, 42:02 but we have an in-house device farm as well, 42:04 where we have 4 or 5 devices; 42:06 we don't want to buy everything to keep in-house and use only now and then. 42:09 Yeah, exactly, and it's hard to maintain. 42:15 Go back. 42:19

We don't. Today 42:19 we don't use Sauce Labs; 42:25 we use BrowserStack. 42:27 Yeah, so you mean, when we run the tests, is there any lag? 42:27 We don't see any lag when we run 42:32 the tests because, as I was showing you in the key points about our framework, 42:34 we have written it in such a way that there is no static sleep; all the plumbing is done in our base class and page classes, 42:39 so there is no lag introduced. And from the cloud runner 42:48

I haven't seen any lag. 42:52 Did not... 42:54 We do not have an API testing framework yet, 42:54 but in my last company 43:07 we had REST Assured in Java, and we used to create the data dynamically before running our tests; that's where we want to go. 43:08 We are only 9 months old in my present company, so this is a goal for quarters three and four: 43:16 we want to do that. 43:21 Right. 43:28 Right. 43:28 So by that time I had 3 SDETs and 3 43:28 QA analysts, so our QA 43:45

analysts wrote all the test cases, and our SDETs 43:50 rapidly created the edge cases, and by that time we had also started integrating with visual regression, so that helps. 43:50 So, 300 in total, but they're all data-driven; if I actually count the data, 44:02 it will come to around 6K. 44:06
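That 300-to-6K jump is what data-driven testing buys you: one test body fanned out over many data rows. With a runner like TestNG or pytest you would use a data provider or parametrize; this dependency-free Python sketch (with made-up example inputs) shows the multiplication effect.

```python
from itertools import product


def check_price_lookup(drug, quantity, zip_code):
    """Placeholder for one logical test body."""
    return f"{drug}/{quantity}/{zip_code}"


# Made-up example data; real suites would load rows from files or fixtures.
drugs = ["atorvastatin", "lisinopril", "metformin"]
quantities = [30, 90]
zip_codes = ["10001", "90210"]

cases = list(product(drugs, quantities, zip_codes))
results = [check_price_lookup(*case) for case in cases]

# One test body, 3 x 2 x 2 = 12 executions
print(len(results))
```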

No. So the SDETs are responsible for framework, tooling, and writing test cases, and for helping my release QA. I have a separate release QA team who do not do any 44:14 automation; they only write 44:26 the test cases involved: the normal steps, expected results, definitions. 44:26 So my SDET team and my offshore team enable those release 44:31 QAs to make releases faster. 44:36 It's like they have more hands, so they can work faster. 44:39 Yep. 44:46

We do not exactly do a point-by-point mapping of how the code is covered, no. 44:58 But we are integrated from the beginning of the development cycle, from design and planning: what tests we will be running, 45:01 what we will be automating, and what environments and IDs 45:10 we need. So we are integrated. And for monitoring purposes 45:16 we use Datadog. 45:19 So when we go live, 45:19

they monitor continuously for any 500s, 45:22 400s, and so on. 45:25 Yeah, 45:25 we had that for our UI tests as well, 45:42 but not for... 45:44 [inaudible] 45:49 Yes. 45:49 Right. 45:49 One of our T-shirts says "visually perfect". 45:49 Yep. 46:11 Passing, yes. 46:11 Yeah. 46:11

Right, so this benchmarking is with 4 or 5 very big companies, and I have spoken to people like that: when I interview, I ask open questions, and 46:28 they say they have 10,000 tests that run for 6 days, and then they sit down together as a whole team and debug why the failures happened. 46:37 Yeah, that does not help. 46:45 I don't know; you might as well just do it manually if it takes 6 days to run 46:46 all of it. 46:52

I think one thing people fail to understand is that you cannot automate everything, and you should not automate everything. 46:52 When they start automating everything, the suite becomes very large and the runtimes become super long, and then tests start failing. 46:59 [inaudible] 47:16 Yeah, so we have a unique strategy to handle that: we use cookies. 47:16 We have a specific cookie for our internal users. 47:30

It's called the GRX internal cookie, and we set it purposefully, so when our Fastly CDN checks the website, 47:34 it checks whether that special cookie is present in our call or not. So it whitelists us from BrowserStack, 47:41 and we are able to run tests in staging and lower environments. 47:48 So you can use a Fastly-style CDN and a 47:53

unique key, so you don't have to use their local testing. And to go back to his point, is there any lag? 47:56 Yes, if you use local testing there is a lag, and we don't use that; we use this cookie strategy to whitelist ourselves. 48:03 So it's like just opening a normal browser, 48:11 but setting the cookie in your baseline. 48:14 Yes. 48:23 Oh, you mean the code? Right, 48:23 this one 48:36 is a page object model, so these are the elements. 48:36 No. 48:53
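The cookie-whitelisting approach described above can be sketched in Python against a Selenium-style driver. The cookie name, value, and FakeDriver here are illustrative stand-ins, not GoodRx's actual internal cookie or CDN configuration.

```python
class FakeDriver:
    """Minimal stand-in for a WebDriver so the sketch runs anywhere."""

    def __init__(self):
        self.cookies = {}
        self.current_url = None

    def get(self, url):
        self.current_url = url

    def add_cookie(self, cookie):
        self.cookies[cookie["name"]] = cookie["value"]


def open_whitelisted(driver, url, secret):
    # Load the domain once so a cookie can be attached to it, set the
    # internal cookie the CDN inspects, then navigate to the page under test.
    driver.get(url)
    driver.add_cookie({"name": "internal_tester", "value": secret})
    driver.get(url)
    return driver


d = open_whitelisted(FakeDriver(), "https://staging.example.com", "s3cret")
print(d.cookies)
```

With a real WebDriver, `add_cookie` must be called after the domain is loaded, which is why the sketch navigates before and after setting the cookie; the result behaves like a normal browser session, just pre-authorized by the CDN.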

We do use a data provider today to drive our tests, but not for element creation. 48:53 Our pages do not drastically change; they will not be 100% 49:08 different, and if that happens, then 49:14 yes, we would have to rewrite it, but that hasn't happened yet. 49:15 Today 49:18 we do not use any data provider for that. 49:20 Right. 49:24 OK. 49:24 Thank you guys so much. 49:24 I have a few more things if you are interested; come take a look. 49:40

Thank you so much. 49:48
