Accelerated mobile QA and test automation
So please feel free to check out our blogs at blog.apexon.com. Great. So moving on. We will first walk through the mobile landscape and its testing challenges, and then talk about mobile test automation tool categories. So we'll look at the broad categorization of mobile test automation tools. And then we will dive into mobile automation best practices, which is the crux of this presentation: how does mobile test automation differ from normal desktop or web test automation, and what are some of the best practices around it? Finally, we'll walk you through a short case study on one of our bigger clients, where we automated the testing of a mobile web application on multiple devices.
This is an airline solutions web app. Finally, we'll open up to Q&A, so feel free to write your questions in the online webinar question-and-answer tool, and we'll be happy to answer them. So going right in, let's look at the mobile landscape and how it is reflected in the testing challenges that today's QA and test automation teams have to deal with. Mobile is definitely on the fast track. This is some of the data on how the mobile landscape is unfolding. As you can already see, the gap between the human population out there and the proliferation of mobile devices is closing really fast.
We already produce more devices every day than the world produces people, and this is counting just the smartphones and the tablets. So basically, the point here is that the spread of smartphone devices, and mobility in general, is happening faster than you and I can imagine. This slide and the next talk a little bit about how mobility is getting more and more prevalent in the enterprise space.
As we know, mobile applications have become more and more commonplace over the past two years, ever since the Apple and Android app stores took off, and we all know about the number of applications that get downloaded from these app stores. In the past year or so, there have been significant advances in mobile applications in the enterprise as well. So some of the content in this slide and the next talks about where enterprises are. As the second part of this slide shows, there are certain industries in which mobile applications and mobile solutions are very, very commonplace. Healthcare is one of the top ones.
Financial applications, as always among the early adopters of technology, are another. The travel and retail industries are big on mobility. So there is a lot of advancement in the enterprise world for mobile applications, and this gap is going to close as fast as it can. There is also an interesting data point, relevant as of 2009/2010, and I'm sure it is a lot more pronounced now: 40 percent of brands out there, meaning enterprise and consumer brands, have developed more than 30 applications. So if you work for a large company or an enterprise, or if you have anything to do with consumers, chances are that you already have more than one application being worked on.
It's also important to talk a little bit about the mobile development lifecycle, and how short the mobile app lifecycle is, in order to speak intelligently about why testing is important and how that lifecycle impacts testing. And there are obvious reasons for it. As innovation continues in the mobile space, there are a significant number of new use cases and new mechanisms for user interaction with mobile devices, and customers demand that your application keep up with them. All of these factors contribute to the very short lifecycle that mobile applications have.
Here is an example from the evolution of web browsers, one of the most prominent technologies we have known thus far to shape how consumers and end users interact with computing devices. Between 2005 and 2011, as the slide says, there were four or five releases of the prominent browsers. So Internet Explorer and all the leading browsers, Chrome, Safari, Firefox, etc., had four or five releases, which works out to a release cycle of roughly twelve to eighteen months. Now, contrast that with a single year's releases of the Android OS: we went from Version 2 to Version 4, all in the year 2011.
Compound that with the sub-releases of this OS, and just the level of activity that happens in the mobility space is huge. It's very significant, and it has a direct impact on how we look at testing these applications and how we automate them. The same behavior has played out across form factors and other operating systems. So the beginning part of this presentation, which I walked through just now, has a message. And the message is that if you are an enterprise, no matter what industry, obviously there are certain industries which are more prominent and certain ones which are less so.
But no matter what industry, whether you're a C-level executive who cares about a BI solution or not, you do care about your mobile app. You have unique kinds of challenges in your development and testing, and mobility is here to stay. The only way to tackle this is to think about how to operate in an agile world and how to deal with automation. So that sets the stage for the next part of the presentation, which dives deeper into mobile automation. As we look at mobile automation and the challenges around it, I think it's wise to divide the prominent automation tools into categories, or prevalent automation tool categories if you will, at the technology level.
And as we go into it, you will see why. The way we see it, there are three main broad categories of automation tools. One is a set of tools that work at the HTML level, that is, in the browser or in cross-platform apps. With these, we can automate much the way we have been automating websites and web applications, but on the mobile devices. The second broad category is tools that work on the native platform. And finally, there are third-party tools, which we call platform-independent mobile automation technologies.
And they are evolving very rapidly. How these tools worked six months ago is a lot different from how they work right now. We at Apexon take a very close look at the available tools and technologies out there, this being one of our prime activities, and we publish our findings on our blogs. So I welcome you to look at our blogs and see how we use some of these tools. Digging a little deeper into these categories: mobile HTML-based automation pretty much drives HTML and JavaScript. So whatever you have on your web page or in your cross-platform app, the tool is trying to drive that.
The tool will recognize every button, every link, every widget on your HTML-based presentation layer as a web control and drive it. The positives are that, by virtue of this, it automatically works across device platforms. Because the underlying mechanism of working with the user interface is object aware, it is very robust and resilient to changes in your underlying app. And of course, it is non-intrusive to the application: you don't have to change the application code in order to make it testable.
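To make this first category concrete, here is a minimal sketch in Java using Selenium's Android driver of that era; the URL and element IDs are illustrative, not from any real application.

```java
// Minimal sketch of category 1: HTML-level automation with Selenium WebDriver.
// The URL and element IDs are illustrative.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.android.AndroidDriver;

public class MobileWebLoginTest {
    public static void main(String[] args) {
        WebDriver driver = new AndroidDriver();   // talks to the WebDriver app on the device
        driver.get("http://m.example.com/login");
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();
        // the same script, pointed at a different driver, runs in any platform's browser
        driver.quit();
    }
}
```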
The downside is, of course, that this category is limited to web, cross-platform, and HTML5 apps; you can't test native apps using this category of tools. The second category, as we discussed, is native platform automation technologies: iOS has a built-in UI automation technology, and Android has a similar one. This is by far the most powerful, and most intrusive, approach you can get. You can pretty much test anything and everything in your application's user interface. However, it requires test code written specifically for each device platform.
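As a hedged illustration of this second category, here is what a native Android instrumentation test might look like; LoginActivity and the resource IDs are hypothetical, and an equivalent iOS test would have to be written separately against Apple's UI Automation.

```java
// Sketch of category 2: Android's built-in instrumentation framework.
// LoginActivity and R.id.login_button are hypothetical app classes/resources.
import android.test.ActivityInstrumentationTestCase2;
import android.test.TouchUtils;
import android.widget.Button;

public class LoginActivityTest extends ActivityInstrumentationTestCase2<LoginActivity> {
    public LoginActivityTest() {
        super(LoginActivity.class);
    }

    public void testLoginButtonIsClickable() {
        Button login = (Button) getActivity().findViewById(R.id.login_button);
        TouchUtils.clickView(this, login);   // drives the real widget on the device
        // assertions about the resulting screen would follow here
    }
}
```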
So if you have an app that is developed once for Android, once for iOS, and once for BlackBerry, you unfortunately have to stick to creating platform-specific automation: just like you develop your app three times, your test automation code will also be developed three times. Now, there is this interesting third category of platform-independent mobile automation. For my own benefit, I divide it into two types, Type A and Type B. Type A uses screen-based recording and screen-based recognition: OCR, optical character recognition, recognizes the contents of the screen, and that lets the test work with whatever content is showing on the screen through what the OCR sees.
And the benefit is that it genuinely works across platforms: since it operates at the screen level, it works across devices and across platforms. A side benefit is that it has access to the whole device. So if you have a scenario where you test something in an app, then send an email, then come back to the app and check the network, and so on, it will work across the whole device, not limited to your application. But on the downside, it has limited object awareness, and it relies on image and OCR capture on the device.
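Here is a minimal sketch of what Type A looks like in practice, using Sikuli as one example of the screen-image-based category; the .png file names are illustrative.

```java
// Sketch of Type A: screen-image-based automation (Sikuli shown as an example).
import org.sikuli.script.Screen;

public class ImageBasedSmokeTest {
    public static void main(String[] args) throws Exception {
        Screen screen = new Screen();
        screen.click("app_icon.png");          // matched visually, not by object ID
        screen.wait("login_button.png", 10);   // wait up to 10 seconds for the image
        screen.click("login_button.png");
        // works anywhere on the device, but breaks when the pixels change
    }
}
```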
Now, on top of this basic technology, several tools have recently come up that I call Type B, which take the screen-based approach and make it more object aware. Using one script, you can create test automation code that works across devices, and it works in an object-aware way, where your automation code is actually dealing with the widgets on the screen. Sometimes this can be intrusive, in that you need to create a special build, or compile the tool's library into your code, for the tool to be object aware of your application so that it can insert test hooks into the testing process. But it definitely is a lot more powerful than Type A.
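Conceptually, a Type B script might look like the following; the MobileDevice and Widget interfaces are a hypothetical API standing in for the tools in this category, defined here only for illustration.

```java
// Hedged sketch of Type B: one object-aware script drives all platforms.
// These interfaces are hypothetical, not a real library.
interface Widget {
    void type(String text);
    void tap();
    boolean isVisible();
}

interface MobileDevice {
    Widget widget(String objectId);   // resolved by object ID, not by pixels
}

class CrossPlatformLoginTest {
    void run(MobileDevice device) {   // the same script for Android, iOS, etc.
        device.widget("username").type("demo");
        device.widget("password").type("secret");
        device.widget("loginButton").tap();
        assert device.widget("welcomeBanner").isVisible();
    }
}
```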
The purpose of defining these broad categories is to help identify which tools will work for which kinds of applications, based on the application's characteristics, and how best to implement a given tool for a particular characteristic. So, getting into specific best practices and lessons learned around test automation, I am dividing these lessons into four main categories: how to select test cases and devices for mobile automation; what kinds of scripting challenges one has to deal with, and how you deal with the underlying problem of mobile automation, which is fragmentation.
Even on two different Android devices, you may have to put smarts into the script so that it can recognize the nuances between them and deal with it. Then, dealing with special device conditions: how do you address those? And finally, what happens when you actually execute test cases; what are some of the best practices you can build around test execution? So, device selection. Obviously, the question we are trying to answer here is: how do we maximize our test coverage at minimum cost and time?
Depending on the objective of your application and who the target audience is, the number of devices needed to provide good coverage can be outside the reach of any practical testing effort. You may have hundreds of devices on which you would need to test in order to provide good coverage, and that often is not practical. So one has to figure out how to maximize coverage, and what mechanism or technique to use to do it at minimum cost and time. These are some of the factors we look at when we build the device and test matrix that we use when we actually do the testing and automation.
A few of those characteristics: first, the type of the app. The basic nature, function, and job of the application determines what devices to use for testing and automation. If you are developing a game, certain types of devices will be more prominently used; if it's a business app, certain other types will be. A lot of times this information is available from the marketing department. We, at our company, track a lot of this information in our device repository and in our knowledge base, so that we can come up with the right matrix for the right problem.
And we typically do that routinely for our customers. User personas are another category: beyond the type of the app, who is using it? Is a teenager using it, a business traveler, your typical consumer? Is it a social type of app being used by a senior or by a mom? Depending on that, the devices matter. Geography, of course, is relevant, because different devices are released in different parts of the world, and one needs to account for that. What app functions are possible with a particular device? This means that certain devices have certain capabilities and other devices don't.
So if your app is streaming, or if your app is written for a specific screen, all of these factors have to be considered when you are identifying the devices. Also, device popularity: there is a lot of data that we mine and track, from which we can tell which devices are popular in a particular geography for a certain form factor and OS. Those are some other factors. The purpose of doing this analysis is to eventually come up with a device/OS test matrix. The idea is that we arrive at a set of devices and underlying OSes on which we will do a certain level of testing.
It will typically look like this: on the base OS of a device, we do a full test pass, and on all the later OSes, we may do a partial, smaller test pass, and so on and so forth. The idea is that, with this matrix in hand, when we get into the testing phase we know precisely what coverage we are getting and why, and that the coverage satisfies our business. Next, how do you select test cases? And this is particularly for automation. For manual testing, your test case selection would depend on the coverage you want and the type of application. But out of that universe of test cases you have identified, which are good candidates for automation?
That is the question we are typically trying to answer. Right off the bat, there are certain test cases which are not automatable: test cases that involve interaction with the system or with peripherals. Things that want you to take a picture, or a barcode-scanning type of application where you need to scan multiple different barcodes, may not be automatable in the normal usage form in which you or I would interact with the device. There may be other techniques to automate them, where you feed a barcode programmatically to the device or to the app.
But in the normal sense, it may not be automatable, so we need to identify that. Depending on what tool you use, interactions between multiple apps, between the OS and the application, or multi-domain types of test cases may not be automatable either. So again, keep in mind that if your test involves sending or receiving a text message while you are testing something else, it may not be a great candidate for automation. Special conditions, such as location-aware testing and field testing, are again types of test cases that you may not automate.
This boils down to: the test cases you can and should automate are functional regression tests. Those give you the most value from automation, and you should definitely look at automating them. Another factor is tests, or parts of the app, that are stable and going to change less. Again, this is a standard automation best practice that applies not just to mobile but to any presentation-layer automation: you want to automate the areas which are most stable, are least likely to change, and for which you have a good understanding of the business processes.
Also, a best practice we like to employ: we start with the regression test cases of medium complexity, then go to the high-complexity test cases, and then to the low-complexity ones. Similarly, in the smoke test category, we go with high-priority, then medium-priority, then low-priority test cases. So these are the different characteristics and best practices we use to identify which test cases to automate. Now, what are some of the scripting challenges we typically come across, and how do we deal with them? How do we deal with fragmentation?
As you know, there are genuine issues with doing automation at the user-interface level, and those get multiplied many fold when you talk about mobile automation. For example, this slide shows that, on the same base OS, Android 2.3, the same screen renders very differently on two different devices. If your automation script assumes it will be shown the same, you may be in for a surprise. This is an example of how fragmentation impacts you while doing automation. The level of difference that can exist purely because of the platform, the device, and the form factor can be significant as well.
So when you write automation scripts, it is very important to design with this problem in mind: one functional test case will, inside it, have to deal with many different variations in form factors, screen resolutions, and device types. If you don't design it like that, you may be in for a lot of surprises. Again, this is an example of how form factor impacts the content and how it looks; how it renders on two different Android versions and on an iPhone can be quite different. Another important point: do not do your automation only on desktop web browsers.
This is particularly applicable for cross-platform applications or mobile web applications, where you may feel that testing on a desktop web browser will suffice. In reality, it does not, and much the same applies to simulator-based tests: how an application looks on a desktop browser can be quite different from how it looks on an actual device. So how do you deal with fragmentation? When we design test cases, we develop our automation code in a layered approach.
What that means is that, at the level of the code which actually interacts with the devices, we implement methods that interact with a specific type of device, and we write multiple parallel methods like that as components that can be used by an overarching test case. So if a test case script has a requirement to press a button, that button-press method can be implemented differently for different device platforms. By layering our test code, with a component level that interacts with the devices and a test-case level that implements the business functionality, we can address fragmentation, maintain the code easily, and extend our test cases when new devices come in.
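A minimal sketch of this layered approach, with illustrative names, might look like the following: the component layer hides per-platform differences behind one interface, and the test-case layer speaks only in business terms.

```java
// Component layer: one interface, one implementation per device platform.
interface DeviceActions {
    void pressButton(String name);
    void enterText(String field, String value);
}

class AndroidActions implements DeviceActions {
    public void pressButton(String name) { /* Android-specific locator and tap */ }
    public void enterText(String field, String value) { /* ... */ }
}

class IosActions implements DeviceActions {
    public void pressButton(String name) { /* iOS-specific locator and tap */ }
    public void enterText(String field, String value) { /* ... */ }
}

// Test-case layer: pure business functionality; it never names a platform,
// so supporting a new device only means adding a new DeviceActions.
class CheckoutTest {
    private final DeviceActions device;
    CheckoutTest(DeviceActions device) { this.device = device; }

    void run() {
        device.enterText("promoCode", "SAVE10");
        device.pressButton("applyPromo");
        device.pressButton("placeOrder");
    }
}
```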
For example, if a brand-new device comes in, we may need to write completely new component functions to deal with that device's UI, which may differ from the other devices on hand. Next, dealing with special conditions. What do we mean by special conditions? As we know, apps misbehave in conditions that are not the optimal conditions in which we developed the application. These can include how much CPU is being used, how much free memory is available, what network conditions we are in, how other apps are interacting with the device, and what environmental conditions we are in.
Believe it or not, things like humidity, temperature, and so on have an impact on device behavior, and we have actually seen test cases misbehave or fail because of environmental conditions. If your app uses sensors, such as an ambient light sensor, or uses the camera, lighting conditions can matter as well. So how can we create automated test cases that reproduce some of these conditions on the device, and thereby exercise the app under those conditions? Before we get there, here are some handy tools we use frequently to assess what state the device is in.
For example, on Android, there are system-panel or task-manager apps. They are a really cheap investment that helps significantly in doing automation, or in doing manual testing. Similarly, on iPhone, there are system monitoring tools available for purchase from the App Store. So, coming back to creating these special conditions: to aid our automation, we have developed tools and components that get the device into a special condition so that we can then run the automation. We can parameterize a test case to include such a component, get the device to a certain state, and then run the functional test steps.
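As a hedged sketch of what such a condition component might look like, here is one way to apply memory pressure before a test runs, assuming the component executes on the device alongside the app under test; the approach and names are illustrative.

```java
// Hypothetical "special condition" component: hold memory until released.
import java.util.ArrayList;
import java.util.List;

class LowMemoryCondition {
    private final List<byte[]> ballast = new ArrayList<byte[]>();

    void apply(int megabytes) {
        for (int i = 0; i < megabytes; i++) {
            ballast.add(new byte[1024 * 1024]);   // consume one MB per iteration
        }
    }

    void release() {
        ballast.clear();   // let the memory go once the test has finished
    }
}

// Usage: condition.apply(256); run the functional steps; condition.release().
```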
Now, some best practices around test execution. One of the challenges with automation is that test cases will randomly fail, or your automation will stop abruptly while it is executing, for various reasons. There are many reasons test cases break. So across all of our automated test cases, we have invested in a robust test recovery system. What that means is that if we detect that a test case has failed, or more likely has aborted, we detect that and we can rerun the test case.
And that's very important because, as I said before, test cases will fail. So not only have we developed the robust test recovery system, we also do extensive logging from the test cases, so that when a particular test case fails, we can go in later and debug why it failed. This slide is an example of the amount of logging we do. Our report shows what the test case was doing when it was running, which specific sections passed, and which specific sections failed; the logs from our test cases are also reflected in the report. So we can do a ton of analysis of what a test case went through, and it is readable even by a layperson, by anybody who understands the application well.
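A minimal sketch of such a recovery-and-retry harness, with an illustrative logger and recovery hook, might look like this:

```java
// Hypothetical retry harness: log each attempt, recover, and rerun on failure.
import java.util.logging.Logger;

class RetryingRunner {
    private static final Logger LOG = Logger.getLogger("automation");

    static boolean runWithRecovery(Runnable testCase, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                testCase.run();
                LOG.info("passed on attempt " + attempt);
                return true;
            } catch (Throwable t) {
                LOG.warning("attempt " + attempt + " failed: " + t);
                recoverDevice();   // e.g., relaunch the app, reset device state
            }
        }
        return false;   // surfaced in the report for later debugging
    }

    static void recoverDevice() { /* device/app reset hook (assumed) */ }
}
```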
Moving on, continuous integration is another element of managing test execution well. What this implies is that it always helps to create a test automation management setup that lets you execute the test cases from a CI tool like Jenkins and run them on a periodic, continuous basis, so you can build some kind of trending across builds. CI is an effort with a very easy, very quick ROI; it will pay for itself in days.
The alternative is somebody executing these automated test cases manually, which in fact defeats the whole reason you automated in the first place. So invest in a continuous integration tool, and invest in a dedicated set of automation devices against which you can run the automation. We have most of our automation test cases running under CI with Jenkins, which allows us to monitor the test cases executing on a periodic, nightly basis. This brings me to the final part of the presentation, a quick walkthrough of the case study: a mobile web application, basically a travel application for airline solutions.
Our automation tools really helped us here; we used Selenium to automate it. We created a test lab with WebDriver installed on the individual mobile devices, and we could execute our test cases from a remote, Jenkins-based CI tool, running them through the WebDriver on these devices, collecting the results in a repository within CI, and reporting on them. So that is the overall test environment. Some of the technical challenges we came across during automation are listed here. The Selenium WebDriver for mobile was still maturing at the time.
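A minimal sketch of that setup, assuming the device's WebDriver server has been exposed to the CI host (for Android, typically via adb port forwarding); the port, URL, and element ID are illustrative.

```java
// Running a test from the CI host against a WebDriver server on the device.
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DeviceLabTest {
    public static void main(String[] args) throws Exception {
        // assumes something like `adb forward tcp:8080 tcp:8080` has been run
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:8080/wd/hub"),
                DesiredCapabilities.android());
        driver.get("http://m.example.com/flights");   // hypothetical app URL
        driver.findElement(By.id("searchButton")).click();
        driver.quit();   // results are collected and reported by the CI job
    }
}
```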
So the best practice we learned was to build the WebDriver code frequently, as frequently as required, because the project was constantly fixing problems and bugs in the WebDriver, and it helped to take the latest source and build it ourselves. Element ID discovery was another challenge: we needed to find novel and interesting ways to identify the elements against which we were automating, because commonly used locator techniques are not supported on all mobile devices. So we had to work with the development team to build some testability into the mobile web application they were developing.
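As a hedged illustration of that kind of workaround, a locator helper might fall back from the usual element ID to a testability hook agreed with the developers; the locators here are illustrative.

```java
// Fall back through locator strategies when element IDs are unsupported.
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

class Locators {
    static WebElement findCheckout(WebDriver driver) {
        try {
            return driver.findElement(By.id("checkout"));   // preferred locator
        } catch (NoSuchElementException e) {
            // fall back to an attribute the developers added for testability
            return driver.findElement(By.xpath("//a[@data-test='checkout']"));
        }
    }
}
```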
There were also differences between the Android WebDriver and the iOS WebDriver, which we had to recognize and specifically code for. So those were some of the technical challenges we came across. Lessons learned from this case study: for the automation team, it totally makes sense to have them co-located with the development team, particularly in an agile mode, where the automation team can interact directly with the developers. Like I said, there are several situations in which it is important to build testability into the development code.
And therefore, it makes sense to co-locate the automation and development teams. I talked about developing the scripts with fragmentation in mind: it pays a lot to design the scripts in a very component-oriented way, so that if things change, if a particular part of the application changes or a new device gets introduced, you are changing it in one place as opposed to changing the script in multiple places.
Also, it helped us a lot to get agreement up front on the set of devices we needed to support, because, as we found, doing automation on a brand-new device takes its own due course: we have to develop all of the specific, unique components that that particular device is going to need. And there is a lot of churn in Selenium and in the iOS and Android WebDrivers, so a good understanding there matters; it pays a lot to do the builds yourself so that you can pick up the evolving changes happening in this technology and use them in your automation code.
I will stop here for questions. You can write your questions in the WebEx tool, and I will take questions now. One question I get asked commonly is about those special conditions, interacting with peripherals: for test cases that actually need to use a peripheral, such as the camera, how do you automate those? There is no easy answer there, but you can design your automation test cases so that you can feed some of these interactions to the test case.
For example, if your camera is going to scan a barcode, and your test case is going to do something based on that barcode, you can design the test case so that, instead of really using the camera, it reads the barcode picture from the file system and runs its validations against that. There are several such smart techniques you can use in your test cases, letting your automation proceed without getting stopped by the challenges imposed by the automation technologies we have today. Okay.
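A minimal sketch of that technique, with hypothetical names for the seam in the app, might look like this:

```java
// Feed a barcode image from the file system instead of driving the camera.
import java.io.File;

interface BarcodeScanner {            // hypothetical testability seam in the app
    String decode(File image);
}

class BarcodeTest {
    void run(BarcodeScanner scannerUnderTest) {
        File fixture = new File("fixtures/sample_barcode.png");
        String decoded = scannerUnderTest.decode(fixture);   // bypasses the camera
        assert "0123456789012".equals(decoded);
        // ...continue with the business validations that depend on the code
    }
}
```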
I don't believe I have any other questions, so I will stop the presentation now. I shared my email address at the beginning of the presentation; please feel free to write your questions to me at that address, and I will respond to them. Thanks for attending this webinar. We look forward to getting your feedback and to having you attend our future webinars.