An exclusive insider’s view from an analyst at NelsonHall
Josh: Good afternoon, and welcome to our webinar on New Technologies that will Transform Testing for the Better. My name is Josh, and I will be your host today. Today’s presenters are Dominique from NelsonHall and Andrew Morgan from Apexon. Dominique is a practice director at NelsonHall with shared responsibility for IT services research globally, alongside David McIntire and John Willmott. Dominique covers IT services research in the areas of software testing, big data and analytics services, as well as IoT services.
He has been part of NelsonHall’s IT services analyst team since 2007, providing comprehensive and insightful coverage of IT services markets worldwide. In particular, he is widely known for his extensive knowledge and coverage of software testing, having recently examined digital testing and DevOps continuous testing. He assists both buy-side and vendor organizations in assessing opportunities and supplier capabilities across IT service lines.
Andrew is the director of product marketing here at Apexon. He’s an experienced leader in strategic analysis, opportunity assessment, and roadmap execution. With his guidance and expertise, he has helped companies expand their digital initiatives to groundbreaking levels. He has over 10 years of experience working with a wide range of companies, including global automotive, pharmaceutical, and technology manufacturers.
Additionally, he has directed the development of market firsts such as life science applications, customer engagement programs, and predictive analytics platforms. Welcome, everyone.
Andrew: Thank you, Josh.
Josh: Before we begin, let me review some housekeeping items. First, this webcast is being recorded and will be distributed to you via email, allowing you to share it with your internal teams or watch it again later. Second, your line is currently muted. Third, please feel free to submit any questions during the call by using the chat function at the bottom of your screen. We’ll answer all questions towards the end of the presentation. We will do our best to keep this webinar to the 45-minute time allotment, and at this time I’d like to turn things over to Dominique.
Dominique: Thank you, Josh. Welcome, everyone, to this call on QA and cognitive technologies. As just mentioned, my name is Dominique, and today we’ll try to go a bit deeper into what cognitive technologies can bring to QA, to software testing services. We’ll try to identify the few use cases that are now usable and can be implemented as part of QA operations, trying to go beyond the hype and get into a bit more detail, so to speak.
Let me quickly discuss NelsonHall, because the purpose of this call is not to talk about NelsonHall, but I’ll mention two things. We serve both end users and vendors; we look at both sides of software testing services, of QA. We think this is quite unique in the industry. Also, we like to be granular, we like to go into details, and we think this is a differentiator from our competition. But enough about NelsonHall.
Really, what I would like to talk about today is cognitive. When you look into cognitive technologies, it’s a wide concept with a lot of different technologies backing it. We’re going to discuss today three main elements of cognitive in the context of QA. The first one is use cases of AI for test automation: how do I use AI in a practical manner for automating testing?
Clearly, no surprise, this is where the bulk of the cognitive activity has been in the past three years or so, and certainly we’ve seen an acceleration in the usage of those use cases. We’ll come back to that quite a bit. The second part of our presentation today is going to be on RPA in the context of testing. In the past, we haven’t seen too much activity around RPA in QA; this is now emerging. We’ll talk briefly about the role of ISVs, of software vendors, in this space, but clearly this is a way forward, and we’ll identify one or two use cases for RPA.
Then the third element of our discussion today will be: it’s fine to use cognitive to automate testing, but how do I test cognitive, AI, and so forth? We will reflect a bit on this, again trying to take a practical angle as opposed to going into a lot of concepts. This is really what we want to talk about today, and the agenda reflects what I’ve just mentioned: AI-based automation, RPA-based automation, and testing cognitive systems.
Hopefully, this is crystal clear and fairly easy to understand. We’ll start by looking at the use cases for AI. By AI we’re taking, of course, a generic view of what AI is, whether it’s deep learning, machine learning, and so forth. I’m not here to give you a speech about AI, because I wouldn’t be able to anyway; I’m not an expert on AI, but I certainly know how to apply AI to testing, and this is what we’re looking at today.
If you think about the majority of the offerings today in AI use cases, they’re really around what we call enhanced analytics. You could describe them as analytics on steroids: making sense of the wealth of data you’ve got in quite a number of systems, whether you’re talking about defect management tools, the governance tools you’re using, production logs, ITSM tools, even stream data. You’ve got a wealth of data.
We’re starting to see AI now being used to make sense of this wealth of data. What we’ve seen so far is really more use cases around defect management data. That’s no surprise; this is where QA specialists turn first. We’ve seen two main analytics use cases for AI in the context of testing. The first one is defect analysis: defect analysis in terms of categorizing defects, trying to cluster them, trying to identify what those defects are related to.
It’s really a clustering approach, really the start of the journey, because it’s really [unintelligible 00:07:02] then go and learn more about the defects. Beyond defect analysis, defect prediction is quite interesting too. Again, it’s based on trying to analyze data according to a number of parameters: the complexity of the applications, change in the applications from one release to the other, who developed the applications, and so forth.
Also, from a testing perspective, who has been testing the application as well. You try to identify the main characteristics of an application or release, and then link them to the defects in the next release, trying to predict based on history. It sounds simple; it’s not yet as simple as I’d like it to be, but this is the way forward. Resource planning in that space follows: once you know the level of defects you’re expecting from one release to the other, it’s easier to plan resources, i.e., people and tool licenses, as well as timing and durations.
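To make the defect-prediction idea concrete, here is a minimal sketch under stated assumptions: it assumes historical release-level data with illustrative features (amount of change, complexity, team characteristics) and uses an off-the-shelf regression model. The feature names and numbers are hypothetical, not taken from the research.

```python
# Minimal defect-prediction sketch: learn from past releases, predict the next one.
# Features and values are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

history = pd.DataFrame({
    "changed_files":    [120, 45, 300, 80, 210],
    "code_complexity":  [3.2, 1.8, 4.5, 2.1, 3.9],
    "new_developers":   [2, 0, 5, 1, 3],
    "tester_tenure_mo": [18, 30, 6, 24, 12],
    "defects_found":    [34, 9, 88, 15, 60],   # observed in past releases
})

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(history.drop(columns="defects_found"), history["defects_found"])

next_release = pd.DataFrame([{
    "changed_files": 150, "code_complexity": 3.5,
    "new_developers": 2, "tester_tenure_mo": 20,
}])
expected_defects = model.predict(next_release)[0]
print(f"Expected defects in next release: {expected_defects:.0f}")  # feeds resource planning
```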
Defect analysis is really one of the main use cases, one of the main areas of work for AI. We’re also seeing the same type of approach and use cases for anything related to test cases: test case optimization, really trying to find duplicates across test cases based on image technology; test impact analysis; and test coverage, linking requirements and test cases.
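As one illustration of the test case optimization just mentioned, here is a minimal sketch of duplicate detection based on text similarity. Real tools may also compare screenshots or execution traces; the similarity threshold here is arbitrary.

```python
# Flag near-duplicate test cases by comparing their descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Login with valid credentials and verify the dashboard loads",
    "Log in with a valid user and check the dashboard is displayed",
    "Add an item to the cart and verify the cart total updates",
]

vectors = TfidfVectorizer().fit_transform(test_cases)
similarity = cosine_similarity(vectors)

for i in range(len(test_cases)):
    for j in range(i + 1, len(test_cases)):
        if similarity[i, j] > 0.5:   # arbitrary threshold for this sketch
            print(f"Possible duplicates: case {i} and case {j}")
```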
This is what we’re seeing. I probably could have added to these two main clusters of use cases sentiment analysis, based mostly on stream data and forums; as with defect analysis, it relies a lot on clustering. This is really the starting point of a journey that eventually will lead to root cause analysis. This is where we’re seeing most of the activity currently: again, enhanced analytics, or analytics on steroids.
I’ve gone a bit fast. The good news is we’re also seeing AI now being used for automation. That’s actually new, because it’s nice to have AI as enhanced analytics, but it’s even better to make testing and QA automated. Again, being very pragmatic here, here are the three or four use cases we’ve seen so far, mostly being used by vendors and also by end clients.
Again, this is really the starting point: converting English test cases into test scripts. Yes, it works, as long as the English is fairly standard.
Porting test scripts from, say, UFT to Selenium; I think Apexon is one of the vendors with IP that automates this. A bit of root cause analysis, or false positive analysis: this is where, when you think you had a false positive, the system will automatically rerun the script, because there probably was a synchronization issue between the servers, the test scripts, and the test engine, that kind of thing.
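A minimal sketch of that false-positive recheck: rerun a failed script once after a pause and treat a pass on retry as a likely synchronization issue rather than a product defect. The run_test callable is hypothetical, standing in for the real test execution.

```python
# Rerun a failed test to separate synchronization glitches from genuine failures.
import time

def classify_failure(run_test, retries=1, wait_seconds=5):
    if run_test():
        return "passed"
    for _ in range(retries):
        time.sleep(wait_seconds)          # give servers and the test engine time to sync
        if run_test():
            return "likely false positive (passed on rerun)"
    return "genuine failure (reproduced on rerun)"
```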
I have to stop here; this is all fine, but what really matters is the level of efficiency. If it runs at 80% or 90% efficiency, that’s fine, because it means you have no rework or minimal rework. The challenge is when, for example, you’re converting test cases into test scripts and you’ve got an error rate of 25% or 30%. In that case, you’ve got a lot of manual activity to correct the work that has been done.
With that much manual rework, the value is limited. So I guess what I’m saying is: I strongly believe in this AI-based automation, but the one thing we need to be careful with is the level of effectiveness and efficiency. If it’s high, that’s fantastic; if it’s low, then it’s probably not worth it yet. I have no doubt that these are going to be the leading technologies of the future.
To this slide, I probably could have added another bullet point around web crawlers. We’re seeing, more and more frequently, web crawlers identifying paths and transactions in a website and then converting those paths into test scripts. Again, that’s starting to appear, and that one as well is extremely exciting. If I had to leave one message on AI, it’s that this is an early start. There are quite a number of use cases already, and at the automation level this is quite exciting, because we’re now seeing very neat types of use cases.
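For illustration, a minimal sketch of that crawler-to-script idea, assuming the requests and BeautifulSoup libraries and a reachable base URL. Real AI-driven crawlers record whole transactions, not just links; this only shows the skeleton-generation step.

```python
# Crawl a site's links and emit skeleton Selenium tests, one per discovered path.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def discover_paths(base_url, limit=10):
    html = requests.get(base_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = {urljoin(base_url, a["href"]) for a in soup.find_all("a", href=True)}
    return sorted(links)[:limit]

def emit_test(url, index):
    return (
        f"def test_path_{index}(driver):\n"
        f"    driver.get({url!r})\n"
        f"    assert driver.title  # placeholder assertion for the crawled page\n"
    )

if __name__ == "__main__":
    for i, url in enumerate(discover_paths("https://example.com")):
        print(emit_test(url, i))
```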
I’m moving on to RPA-based automation. That’s an interesting one, because in the past we haven’t had much discussion around RPA in the context of QA. Where we have seen quite a bit of discussion is with software vendors, with ISVs, and it’s no secret that some RPA and QA software vendors are trying to go into each other’s fields, and there’s a good reason for it: some of the technology is UI-based, so it’s not that vastly different, and it can be applicable to one or the other.
I’m not sure whether there is a convergence at the software vendor level between the two, but certainly there’s a growing overlap. That raises the question of where RPA fits within QA. I think we need to step back a little and differentiate between two things. On the one side, we’ve got the bots: the chatbots and voice bots.
In that case, this is more of a human-to-machine interface, and we’re starting to see clients using those bots to access testing data, mostly from a reporting point of view, from an analytics point of view, but it’s only starting. Is this a technology of the future? Probably not in that sense; it’s nice to have, and it probably makes the UX, the user experience, of testers nicer. There’s probably not much more to it than that, although it could affect productivity and efficiency gains as well.
Then we look at workflows: automating a transaction or business process from beginning to end. Here we’re seeing clients and vendors working on several use cases, really trying to apply RPA in workflows on, no surprise, tasks that are human-intensive or that involve a lot of volume. Typically, where we’re seeing a number of clients using RPA is for test data management, for lack of a better tool, or when the test data management tool is too expensive, that kind of use case.
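A minimal sketch of what such a test data management bot might do where no dedicated TDM tool is in place: synthesize records and load them into a QA environment through an ordinary API. The endpoint and fields are hypothetical.

```python
# Seed a QA environment with synthetic customer records via a (hypothetical) API.
import random
import string
import requests

def fake_customer(i):
    return {
        "name": f"Test User {i}",
        "email": f"user{i}@example.test",
        "account_id": "".join(random.choices(string.digits, k=8)),
    }

def seed_test_data(base_url, count=50):
    for i in range(count):
        resp = requests.post(f"{base_url}/customers", json=fake_customer(i), timeout=10)
        resp.raise_for_status()   # stop if the environment rejects a record

# seed_test_data("https://qa-env.example.test/api")   # hypothetical QA endpoint
```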
I’ve mentioned volume, and I’ve mentioned human-intensive tasks. Where we think the use case of the future is going to be for RPA is when you’ve got business processes that run across different types of applications. Typically, in a telecom service provider scenario, when the client asks for a SIM card, that process runs from the SIM card order through to mailing and then activation.
That process goes across a lot of different types of applications with lots of underlying technologies, whether they’re web-based or mainframe-based and so forth. This is an area where we think RPA can be used, because it overcomes the traditional limitations of the testing tools; for example, Selenium is only for web-based applications. We think there’s potential here. If anything, this approach would be quite relevant to clients that are very process-dependent, standard-process-dependent, such as the telecom service provider industry or banking, core banking and retail banking.
This is where we’re expecting to see traction around the usage of RPA in the future. As I said, though, RPA is still nascent here; there are not too many clients that have used it, but we think that’s going to change with time. We’ve talked about AI, we’ve talked about RPA. The next discussion topic is really: that’s fine to use cognitive, but how do you test cognitive? How do you test AI? How do you test RPA?
Let me try to respond to these very difficult questions, because we’re really just starting to learn how to do this. I think the key challenge for testing cognitive systems, whether AI or RPA, is not so much how do I test it, but how do I introduce automation to test it? That clearly is the challenge for the future. Let me take a non-scientific, non-academic approach to what AI is. At a high level, you’ve got deterministic systems, or algorithms, which are old friends.
A deterministic system is where an algorithm, for a particular input, provides the same output, the output you know, the one you expect. My input is A, so I can check it because I know the output has to be B. That’s what testing has been all about in the past. A non-deterministic system is a very different type of thing: algorithms that, even for the same inputs, can exhibit different behaviors on different runs. That’s the theory. From a testing perspective, my input is A, but I don’t know what the output is going to be. Is it going to be B or C or D? I don’t know.
The typical example of this is fraud management: when the tax authorities, for instance, or banking clients are trying to identify fraudulent persons, they certainly know what they’re looking for, but they don’t know who they’re going to find. Testing that is quite complex. Let me get back to deterministic systems. I mentioned these are old friends in a sense: the process is what we expect, it’s meant to be straightforward, and really the challenge is about new tools, new methodologies, and investing in things we’re not used to testing.
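To make the contrast concrete, a minimal sketch of the two assertion styles: an exact check for a toy deterministic function, and property and statistical checks for a random score standing in for a non-deterministic model. Both functions are purely illustrative.

```python
# Deterministic code gets exact assertions; non-deterministic models get property checks.
import random

def tax_due(income):                  # deterministic: same input, same output
    return round(income * 0.2, 2)

def fraud_score(transaction):         # stand-in for a non-deterministic model
    return random.random()

def test_deterministic():
    assert tax_due(1000) == 200.0     # input A must give the known output B

def test_non_deterministic():
    scores = [fraud_score({"amount": 50}) for _ in range(1000)]
    assert all(0.0 <= s <= 1.0 for s in scores)   # property, not an exact value
    assert sum(scores) / len(scores) < 0.9        # statistical expectation
```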
Let me take the example of a bot. You know what the output is going to be, so much of the attention these days is on using different languages and utterances, like mine, to make sure the bot actually understands and provides the right answers. A lot of the activity is at the utterance and input level. But to some degree, this is something that automation will solve; give it some time, and that’s fairly reasonable and achievable.
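A minimal sketch of that utterance-level checking, written as pytest tests. The classify_intent function here is a trivial keyword stub standing in for whatever NLU layer the bot actually uses.

```python
# Feed several phrasings of the same request and expect the same intent back.
import pytest

BALANCE_UTTERANCES = [
    "What's my account balance?",
    "How much money do I have?",
    "Show me my balance please",
]

def classify_intent(utterance):
    # Stand-in for the bot's real NLU service; a crude keyword rule for this sketch.
    text = utterance.lower()
    return "check_balance" if "balance" in text or "money" in text else "unknown"

@pytest.mark.parametrize("utterance", BALANCE_UTTERANCES)
def test_balance_intent(utterance):
    assert classify_intent(utterance) == "check_balance"
```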
The challenge really is when you’ve got those non-deterministic systems where you don’t know what the output is going to be. This is where we’re still learning. First of all, one thing still being discussed is training versus testing data. These AI algorithms are all about having enough data. There’s always a tendency to use as much data as possible for training, but then it leaves you not enough data for testing. The rule of thumb currently is 80% to 20%. Is that the right percentage, the right balance? Should it depend on the type of AI you’re testing? Lots of questions, and also a lot of different ways of looking at it.
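For reference, here is the 80/20 split in code, a minimal sketch using scikit-learn on a placeholder dataset. Whether 80/20 is the right balance for a given model is exactly the open question raised here.

```python
# Hold out 20% of the data for testing the model rather than training it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)    # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)           # 80% train, 20% test

print(len(X_train), len(X_test))   # 800 / 200
```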
Let me take an example here. Explainable AI is the topic of the day; everyone is saying, “I need to understand how an AI comes to a certain decision,” if only for regulatory purposes. That is certainly coming, so I’m expecting this kind of discussion to be bundled with QA, with testing. We’re years away from finding the right approach to this, so this is more work in progress.
I want to close my part by trying to look into the future. First of all, there are a lot of use cases being created around AI; every week or so, I learn about new ones, even in activities like crowd testing. We’re finding more use cases for trying to analyze the expressions and emotions of people on video, or trying to identify duplicate defects reported by crowd testers. There’s a wealth of use cases coming, and that’s fantastic; this is for the better.
The second conclusion I wanted to make is that UX testing is the big area where I expect AI to make a difference. As I mentioned, trying to understand and identify emotions in a video or in a voice recording, and so forth: this is where we see a lot of activity. Finally, I wanted to leave a last comment on IoT and edge devices. Certainly, there’s a lot of effort these days to put more processing power at the device level.
That’s fine, but we’re getting into new waves of complexity now, trying to test with devices that are remote and face quite a number of constraints in terms of battery life, bandwidth, and processing power, although that’s increasing. If anything, I would say this is the start of cognitive in the context of QA, so do not think this is just hype; I think it’s here to stay. The leading vendors, like Apexon, who are investing in AI are, I think, doing the right thing. That’s what I really wanted to talk about today. I’ll leave the floor now to Andrew.
Josh: Excellent. Thank you very much, Dominique. Just for some context for our broad audience, can you give us a little background on the research you did for your report, the type of companies that you talked with or interviewed? Just so everyone knows it wasn’t a quick little SurveyMonkey exercise, but an actual, in-depth research process.
Dominique: Sure. In the past four months, we’ve looked into– Let me rephrase. Every year, we look into QA services. This year, we looked specifically into the topics of AI, UX testing, and mobile testing, but we really spent a lot of time on cognitive and its context. We interviewed, I’m trying to remember, probably 70 clients and also 30 vendors. Don’t quote me, but that gives you the general direction.
Josh: Got it.
Dominique: We keep on doing this. We’ve been doing this for 11 years, and we’ll continue doing it. Josh, does that answer your question?
Josh: Yes, that was great. Excellent. Perfect. Now we’ll jump into the next portion of our webinar.
Andrew Morgan: My name is Andrew Morgan. Thank you very much, Dominique, for giving us the insight into your research and what you’ve seen within the market. Josh, thank you again for setting this up and bringing us together for this event. I know we’ve seen a lot within the market in terms of how innovation has really gone, how it’s affecting companies, how it’s impacting enterprises, not just from a development standpoint, but from the testing operations side as well.
At Apexon, as most of you know, what we really do is look at the digital initiatives you’re trying to achieve, not just from the next six-month roadmap but two, three, four, or five years down the road: what is the real end goal you’re trying to achieve for your business, and what are those initiatives? Just to give you some insight into our innovation and what we’ve really done in terms of capturing the technical requirements for these enterprises, these startups, these leading companies across many markets.
If you look at how we can help them with our expertise, this gives you an idea of where we’ve gone throughout our own evolution. As you can see, over the last almost 10 years our high focus has been on application development, cloud, and the IoT, big data, and artificial intelligence that Dominique was referring to. There’s still a lot of, as some people say, old wine in new bottles going on with technology, in terms of what we saw as predictive analytics now being considered AI with machine learning capabilities, and where it’s really going to go next, and how much it’s actually accomplishing versus what it says it’s going to do.
Same thing with RPA, robotic process automation. There are great capabilities there, but a lot of these things mean different things to different people, and it’s really about the context in which you’re able to apply them and the benefit they’re actually bringing to your use case. In terms of the market and the outlook we have, as it says here: more devices, more connections, more possibilities. Also, more problems if you don’t have your back-end systems assured, if you’re not following best practices and protocols.
When we think about digital, as we see here, digital transformation is the top priority, with companies ranging from currently underway all the way up to completed. But what can you, as a company or an organization, really consider completed? We talk about being done, but there isn’t really a “done” state; it’s more about how you’re now achieving your latest goals. When we talk about all these new devices coming to market, when we talk about Nest, or potentially even Uber’s automated drones flying around different parts of the world, or parts of cities, these are a lot more connections.
I know even in New York, a couple of weeks ago, a pilot unfortunately crashed because of air conditions; it wasn’t a favorable situation. When you have all these devices and all these connections, it means there are a lot more possibilities of what can occur, but we also need to make sure we’re developing things that make sense for how we want to achieve our goals.
One example of these use cases, which we’ve actually seen a lot within healthcare, is connecting different medical devices and back-end systems to bring the end benefit to the patient. For example, this is the example of great UX where biometric authentication opens up an application, you then use the camera functionality to scan a prescription, and then in augmented reality you can reorder your prescription, look at side effects, or contact your doctor about what you’re actually experiencing while taking the medication.
No doubt it sounds great from a user perspective, especially as we talk about how technology is really aiding elderly patients or people with chronic illnesses. But when we think about it from an infrastructure standpoint, an IT standpoint, a testing standpoint, you have medical authorization, you have electronic medical records, you have shipping, you have point of sale. Then you have the inventory adjustments, which need a specific chain of custody and different regulatory compliance built in, as well as all the different payment and insurance integrations.
Now, there are even companies trying to put all of this type of interaction on a blockchain network. How do we even test all those capabilities and interactions without it being on a blockchain network? This is just to give an example: as technology improves, and there are all those devices and all those possibilities, there are also almost exponentially more elements in the back end that we have to test for and assure, to make sure that quality is being brought to the patient or to the consumer.
Just some insights that we’ve seen: a very interesting tidbit we found is that consumers now own 3.2 digital devices on average, anything from phones, smartwatches, iPads or tablets, computers, and desktops, to even Wi-Fi in your car now. What’s considered a device is expanding quite broadly. Now, businesses that are able to adopt an omnichannel strategy, where you’re actually integrating a seamless experience across all these channels, are seeing 91% greater year-over-year customer retention.
In this age where digital is king, and you’re able to really adapt and broadcast what you’re doing to your consumers, and they’re expecting immediate gratification, you need to be able to offer that seamless experience. For example, you’re watching something on your phone on Amazon Prime, and you add an item to your cart that you saw in the show; then you go back on a desktop, and that item is still in your cart waiting for you to check out, along with similar items you may want to buy.
Or you want to jump back in where you paused that show: maybe you didn’t have your credit card handy, or saved in the back of your phone, so you wanted to finish the order online and paused the TV. Well, now you have a seamless experience through Amazon where you’re not restarting the program or re-adding those items to your cart. This is very beneficial in terms of how we’re actually seeing consumers stay brand loyal.
Now our customers are asking, from the IT perspective, how to achieve these components, how to achieve that omnichannel strategy, how to excel at broadcasting their digital applications across all those devices, as well as how to test in the complex infrastructures I detailed before. These are some of the questions that are commonly seen and constantly being asked in terms of shifting left.
As you can see here from left to right, it’s about being able to shift their approach, what they need to prioritize, the different costs as they evolve, and really how they’re adapting in terms of real-time data and cloud capability. Those last two, as we’ll talk about a little later, are really about what the consumer is experiencing and how you’re able to adjust for those nuances.
Just like the Amazon example we talked about: not only understanding what they last watched or added to their cart, but where are they geographically? What recent history or purchases do we need to add to that suggested list, not just newly featured items, but ones related to whatever they added to their shopping cart? What else that we know about the consumer can we use to show we’re going to provide a better buying experience, without making it seem like we’re looking into everything they’re doing?
As we go on, we see the different challenges of shifting left. I’m not going to read through all of these, but as we know, agility, quality, and efficiency are the main factors in what we do from a testing perspective, a development perspective, and a quality perspective: making sure that what we’re doing is able to bring all these tools to market, as we’ve seen even in the way Google Maps has really excelled.
They’re able to integrate whatever you’re playing on your phone into your map, so you can change the song right then and there. Well, that’s pretty impressive, but what tools do we really need to incorporate into our testing infrastructure to make sure we’re able to test for that? How do we then incorporate that into Apple CarPlay to make sure we have that syncing and not a faulty user experience, or, now that cars are so digital, that it doesn’t cause any other issues under the hood, per se?
As you see here, there are a lot of issues our customers face, that we’ve seen in the market, in terms of understanding what they need to do internally to bring these products to market. What we’ve done is look at this intelligent testing across the entire life cycle; we don’t call it just testing but intelligent testing, looking at it holistically, looking at it from a cognitive standpoint, and understanding what elements we can enhance and make better.
You see here we start with our business objectives and user experience. We want to know what we’re trying to achieve from a PDLC standpoint, what we want to achieve from a business initiative standpoint. Then, by setting up the requirements and use cases, we can go into automating some elements. There are tools right now in the market, and I’m sure most of you know them, with which we can immediately automate the functional and UI components of the testing process.
Now, when you need an API, those are a little trickier, and you don’t always want to automate those right away or straight out of the box, because we want to make sure we have the right connections, and the capabilities and tools that already exist in the market aren’t as mature as those on the functional and UI side. There’s still a lot of discovery going on to make sure we’re not just implementing the latest and greatest tool without validating it for our clients.
Now, the second component is creation. There are two ways we look at this. At the top you’ll see what we call the left-to-right approach, where we have these test cases identified and we want to create them by putting them in a system so they can be automated, have them in our test bed, and make sure they’re able to actually run within any regression cycle.
The idea is that we use model-based testing within the use case to lay out all the possibilities of that test case. We use ATDD or BDD to automatically write the code for that test case, and then we can implement, as you see there, ATGH on top of that. Now, that’s not a new acronym, don’t worry; it’s not going crazy in the market. It stands for auto test case generation and healing, components like Mabl or Functionize; there just wasn’t enough space on the slide, so don’t get too worried. It’s the capability to then add tools on top of the application so you’re able to automatically generate and adjust existing test cases as development changes or business initiatives change, so as the UI or other capabilities change.
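A minimal sketch of the "healing" half of that idea: if a primary Selenium locator breaks after a UI change, fall back to alternative locators and report which one worked. Tools like Mabl or Functionize use far richer signals; the locators shown are hypothetical.

```python
# Try an ordered list of locators for one element so a renamed id does not break the test.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """locators: ordered list of (By.<strategy>, value) pairs for a single element."""
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            print(f"Located element via {strategy}='{value}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

# Usage (hypothetical checkout button whose id was renamed):
# find_with_healing(driver, [
#     (By.ID, "checkout"),
#     (By.CSS_SELECTOR, "button.checkout"),
#     (By.XPATH, "//button[contains(text(), 'Checkout')]"),
# ])
```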
That’s similar to what Dominique was referring to earlier with web crawlers: looking at components within our design and development and adjusting the already automated test cases accordingly. Now, the second part, which we actually see quite often, is more for existing test cases or existing applications already in the market. On the bottom of the create stage, you see a right-to-left approach.
Here we look in the middle, actually at the backlog. We need to look at the existing test case repository, which test cases have been automated and which ones are still being done manually, and so we work backwards and do a gap analysis using that same model-based testing technique to understand where we’re missing coverage, which test cases are actually obsolete and don’t even need to be in the system, and where we can componentize those test cases to make them much more optimized and efficient.
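A minimal sketch of that right-to-left gap analysis, using simple set operations over an illustrative test case repository; the IDs and mappings are made up.

```python
# Compare the repository against automation status and live requirements.
all_cases      = {"TC-001", "TC-002", "TC-003", "TC-004", "TC-005"}
automated      = {"TC-001", "TC-003"}
still_required = {"TC-001", "TC-002", "TC-003"}   # still mapped to live requirements

manual_backlog = (all_cases & still_required) - automated   # candidates to automate next
obsolete       = all_cases - still_required                 # safe to retire

print("Automate next:", sorted(manual_backlog))
print("Retire:", sorted(obsolete))
```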
The benefit of this is that, while there’s still a lot of work to be done to optimize everything that’s already in place, we can add those same auto test case generation and healing tools on top of the application, since it’s already in the market. As opposed to the approach above, where we’re trying to get to market, this gives us the capability to be more efficient in market.
Now, this all sets up our continuous testing foundation, and this is really about achieving maximum agility, quality, and efficiency. We look at an entire infrastructure for achieving testing in the most efficient and effective manner. As you see here, test data management, environment management, service virtualization, and intelligent continuous testing are the main components we need to consider when establishing our fabric.
This gives us the ability to do risk-based testing: for example, which test cases changed since the last release, and what failed last time that we need to prioritize? We want to make sure we’re prioritizing our testing so that if something fails, or has a high probability of failing, we can shut down the rest of the operations and fix those components and applications before we run through the rest of the testing. Otherwise we’re just wasting resources; there’s no point in running successful tests if we know other ones are potentially going to fail.
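A minimal sketch of that risk-based selection: score each test by whether the code it covers changed since the last release and by its recent failure rate, then run the riskiest first. The weights and data are illustrative.

```python
# Rank tests so the riskiest ones run (and can stop the pipeline) first.
def risk_score(test):
    return 0.6 * test["changed_since_last_release"] + 0.4 * test["recent_failure_rate"]

tests = [
    {"name": "test_login",    "changed_since_last_release": 1, "recent_failure_rate": 0.10},
    {"name": "test_checkout", "changed_since_last_release": 1, "recent_failure_rate": 0.45},
    {"name": "test_profile",  "changed_since_last_release": 0, "recent_failure_rate": 0.02},
]

for test in sorted(tests, key=risk_score, reverse=True):
    print(f"{test['name']}: risk {risk_score(test):.2f}")   # execute in this order
```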
Then, in the fourth part, we go to our execution phase; this is our normal automation environment. This is where you see a lot of service providers in the industry talking about testing, about the application under test and what needs to be done, everything from functional, performance, security, and so on. We also have a thing in here called unsolved testing. Again, we weren’t quite sure what to call this component, but it’s where a lot of our customers actually come to us because they don’t know how to automate some of these test cases.
There are some very intricate API levels that they don’t quite have a library for yet, or they’re not quite sure how to set up and execute the automation of those test cases. That’s where we come in: we automate a lot of these existing test cases they’ve been doing manually, that they haven’t quite figured out how to write the exact code for.
Now, on top of our capability to automate a lot of these test cases, you have the ability to analyze. Once you’re automating all these activities, you’re collecting data, you’re gathering information, and you can do a lot of analysis on top of the data you’re automatically gathering. As you see here, we have auto build analysis, automated test acceptance, visual verification in terms of the UI and UX, as well as other components around dynamic page test profiling.
All this still feeds back into risk-based testing, but also intelligent test selection: not just what changed since last time, but what we can start looking at in terms of having a higher probability to fail or to pass, based off these high-level analytics. As you see here, once we get into our optimization stage and create that feedback loop, we’re able to take a more predictive and prescriptive approach.
This means we’re actually looking at what we recommend, from low to medium to high priority, as well as what level of resources need to be involved and where it should occur within the testing lifecycle: almost a roadmap or playbook, per se, of what testing activities need to be done. As you see at the top, this can integrate with all of our log and app analytics in terms of what’s actually happening in real time. As we see applications pass or fail, we want to look at what’s actually performing in real life: if it says it went to a cancel screen, did it actually cancel, or did it just stop or close out?
Did it actually log in with verification, or did it simply go to the account screen? Sometimes it looks like it’s doing the right thing from a coding and testing standpoint, but in a real-world scenario that’s not always the case. The more we incorporate those real-world statistics and analytics into the reporting, the more we’re able to adjust, influence, and enhance these business objectives and consumer experience components for our company.
From a testing standpoint, this often looks like new user journey flows to be testing, new test cases based on how users are specifically using the application. It’s not just model-based testing, but also what we can influence and prioritize within our own testing activities. Let me just step back real quick. In terms of making all this intelligent, it’s not just our expertise; we’ve opened this methodology up to include not only some internal IP that we’ve created, but also the best tools on the market, that external IP.
From an internal standpoint, these are some of the components we have. You see, in the top right, in test optimization, we have toe-bot. This works in the creation phase, in that right-to-left approach of really enhancing the backlog. At the bottom, you see the automation accelerator and acceptance components; those are really around what we’re doing from an analysis and execution standpoint.
On the left, you see the result analyzer and predictive QA, really in terms of what that feedback and optimization loop provides: not only more intelligent results, but also more intelligent insights as we go forward. Then there are the other tools that exist in the market: examples of model-based testing like Hexawise, visual verification like Applitools, as you see there, and then Functionize, in the bottom right, and Mabl, which are some of those auto test case generation and healing tools that sit on top of your application.
Now, some of those are still coming to market and really expanding their capabilities; some of them are better suited for desktop applications. As we see AI evolve and really come to fruition, the possibilities are going to be limitless. In terms of how we align testing to your business goals, I just want to touch on this briefly. As you saw before, we talked about the different components of agility, quality, and efficiency, and about our intelligent testing life cycle and how that can impact your entire business. Based on what you’re trying to achieve, we can then program the best roadmap and execution needed for you.
Then we even consider the next step, AI testing AI, in terms of robotic cognitive automation, bot testing, and cognitive computing. What is that next step? How do we set out the best testing roadmap, the testing evolution roadmap, to get you there? To do that, we use things like this maturity model. This gives you an example of one element; this one isn’t so much for intelligent testing, but more for the execution of quality engineering.
As you can see, there are different components here, and we use different models to help walk you through what we need to help you achieve based on your goals as a company. Lastly, just to go over intelligent testing: it doesn’t just mean AI. We talked a lot about the RPA and cognitive components, but as you’ve seen, there is still a lot of possibility, which also means there is still a lot of maturing that needs to happen for these technologies.
For that to happen, you need people who can adjust, train, and help mature these different models. In the same way, you don’t tell your three-year-old just once not to touch the stove because it’s hot and expect him never to touch it again; you have to reinforce some of these common-sense rules, and build in confidence throughout the entire process, to make sure the models are learning correctly, can adjust to new scenarios, and can provide the best results going forward.
Thank you, everyone. Let me pass this back to Josh.
Josh: Thanks, Andrew, and thanks, everyone. I know we have a few minutes left, and I want to make sure we get some time in for questions. As I mentioned before, you can simply click on the chat window and enter a question, and we will ask it on the call. Let’s see, we have a couple of questions coming in. The first one is for Dominique. Dominique, if you could quickly answer this, it would be great: what is the main challenge for testing cognitive and related technology?
Dominique: Okay, thank you, Josh. As I mentioned, the big challenges are on non-deterministic systems. The challenge is not only to introduce testing, because we’re in a world where we don’t have a lot of experience, but the second, most important challenge is introducing automation. If you introduce manual testing for testing non-deterministic AI, that’s not going to work long term. My response is: we need to build the experience, and we also need to work out how to introduce automation in that space. That would be my answer to this question.
Josh: Okay, great. Thank you. Christopher asks Andrew: what verticals would be most impacted by the shift? You talked about shifting left, in particular in healthcare; are there any other industries that would be impacted by this?
Andrew: Yes. I think telecommunications as well; we can talk about 5G. We’ve seen 5G start to really roll out in a few cities. The devices are still adapting, but the possibilities are really going to be- it’s not just going to benefit consumers and other high-tech products, but also medical, when we talk about the ability to transfer that information and have them involved in the ecosystem to make sure we’re providing the best end result.
As these aren’t necessarily hardwired items, they need some sort of telecommunications built into their product roadmap as well. Healthcare for sure, as I mentioned, with the devices’ ability to bring new technologies to market and the analytics that are going to run on top of that; but telecommunications will also be one of those components, and not just healthcare but also other industries looking to do similar transformative initiatives.
Josh: Okay, great. Thank you. A question here for Dominique: how expensive and time-consuming is applying cognitive to QA, in your estimate?
Dominique: Thank you for this. I guess a lot of the use cases there are incremental steps, not huge projects. That’s fantastic, because these are short-term projects involving few people, few experts, up to three or four max. We’re talking small, very manageable projects lasting three to four months. That’s the ideal case, where the data quality for training is good.
What gets a bit more complex is when the data is not good, when the data is coming from different systems and you need to enhance its quality. That becomes a different type of discussion. But provided the data is standard, you’re talking very short projects, three to four months, few people, and that’s fantastic, really; it’s quite compelling.
Josh: Great. Related to this, we got a question similar to the one I just asked you. Someone was saying that there are lots of different applications under test, and obviously there’s some upfront cost, both in training and in purchasing tools. The question is: is it better to apply this technology to a subset of applications end to end, or is it better to apply some AI to all applications under test and see where we get the most bang for our buck? I’ll ask that to Dominique, and then Andrew if he wants to follow.
Dominique: From my perspective, the incremental approach with limited scope is better, because then you simplify your project and make it easier to understand. So from my perspective: incremental steps, start small, a more limited scope, and you know what you’re doing. That would be my preferred approach.
Andrew: I’ll definitely agree. I think it’s also important to understand what you’re trying to achieve and what you potentially want the AI to do for you. Is it something where you’re trying to get a reduction of your backlog across most applications, so you can apply it that way? Maybe, but it’s similar to Dominique’s point: start at a very small scale and see where you can apply it. It definitely depends on what you’re trying to achieve, and I would say that will really help determine the roadmap.
Josh: Looks like we have room for one last question, for Andrew: when it comes to what you’re sharing on intelligent testing, would what you proposed help or hurt companies in how they focus on DevOps initiatives?
Andrew: I think, again, it’ll actually probably help. With DevOps initiatives it’s always: are you following best practices, are you doing your initiatives, what are you really trying to achieve? I think the ability we have here with intelligent testing is to look at everything holistically and make everything a lot more efficient and optimized, and the whole goal of DevOps is to be a lot quicker to market and a lot more efficient in your own operations, and that’s the whole goal of intelligent testing as well.
It’ll definitely aid you. We just have to make sure there’s a clear understanding, just like we saw with cloud about six or seven years ago, when everyone said, “Let’s go to the cloud,” and then everyone said, “Let’s do DevOps.” Well, what does that really mean? What are you really trying to achieve? If we can understand that and create the most effective outcome and user model, that’s often the best result, not only for the company but for the consumer as well.
Josh: Great. Thank you. It looks like that’s all the time we have. I just want to wrap up and say be sure to check out and subscribe to DTV, a new digital transformation channel that brings in industry experts. I want to especially thank Dominique and Andrew, and thank you, everyone, for joining us. You will receive an email in the next 24 hours with the slide presentation as well as a link to the webcast replay. If you have any questions, please contact us at info@apexon.com or call 1-408-727-1100 to speak with a representative. Thank you all, and enjoy the rest of your day.