The problems with script-based test cases

I don’t like script-based testing! My first thought is that script-based testing means checking, not testing! And that is very boring and uninspiring work for a passionate tester, isn’t it?

A few days ago I stumbled over the website of the context-driven testing school. After reading some articles on that page I thought again about the problems of script-based test cases.

I’m referring here to the type of script-based test cases that have to be executed by hand in order to create test logs in a test (management) tool. And I want to invite you to read about 5 problems I see when writing and executing script-based test cases.

Those test cases have normally been added to the test management tool for ALL people who are involved in manual testing and also for those people who manage the test process.
And here I think we have the first problem!

problem 1: „Creating well-defined test cases and keeping them up to date“
Remember my sentence above: test cases should be written for ALL people – especially also for those new employees working as testers whose (first) job it is to „execute“ them.
But in daily practice those people often find script-based test cases which are not up to date, or ones which have to be rewritten completely due to missing, simply wrong or nonsensical information. And often the problem is that the requirements were not completely clear when the test cases had to be written!

problem 2: „Misunderstanding/misinterpreting the context“
People involved in manual testing (I will call this group of people „testers“) have to execute the test cases. If they want to do their work well, they need to know exactly what has to be tested in every single test case.
And then there are the people who want to know whether the test cases have passed or failed, how many test cases have been executed so far, and so on. These people (I will call them „test managers“ and stakeholders) have a tendency to look at the hard data, which means the number of test logs!

The problem here (in my opinion) is that it is very hard to write script-based test cases in a way that meets the needs of both groups.

Let me explain this with an example:

Imagine the following situation 1:
A GUI of an application which contains 4 radio buttons and a list, in which new data can be added, modified and deleted via a button, should be tested in an agile environment straight after it has been developed. There are no dependencies on other data structures or other GUIs. So the focus when writing the test case(s) in this environment is on functional/module/GUI testing, which should also consider some special „boundary“ tests, for example.
A script-based test case for testing this GUI „well“ should consist of several test steps, or even of more than one test case!
On second thought I would say that only one test case is clearly too little – in situation 1 we have to do functional testing!
So there should be the following test cases:

One test case could look, for example, like this:
TC1: GUI X test, radio button 1
precondition: GUI is available
step1: set radio button 1
step2: add new entry
step3: click on button to save
step4: check that the entry with radio button 1 has been saved correctly and now appears in the list

TC2: GUI X test, radio button 2
precondition: GUI is available
step1: set radio button 2
step2: add new entry
step3: click on button to save
step4: check that the entry with radio button 2 has been saved correctly and now appears in the list

TC3: GUI X test, none of the radio buttons set
precondition: GUI is available
step1: make sure that no radio button is set
step2: check: adding a new entry should not be possible (list disabled)
step3: check: clicking on the button to save should not be possible (button disabled)

TC4: GUI X test, arbitrary radio button set, boundary: entry contains all ASCII characters
precondition: GUI is available
step1: set an arbitrary radio button (e.g. radio button 2)
step2: add a new entry with a character string of all ASCII characters
step3: click on button to save
step4: check that the entry with radio button 2 has been saved correctly and now appears in the list

TC5: GUI X test, arbitrary radio button set, boundary: entry is „blank“
precondition: GUI is available
step1: set an arbitrary radio button (e.g. radio button 2)
step2: add new entry: type in a space („blank“)
step3: check that the button to save is not enabled

Here is a short summary of how the test cases for situation 1 have been written:
1) The focus is on functional/module/GUI testing, because the tests should be executed right after development of the GUI in an agile environment.
2) Therefore: the description of the test cases must be very detailed – here: several test cases, and every single test case with a suitable number of detailed test steps.
3) Execution of the test cases will result in 5 test logs (after execution of all the test cases I would say: the main functions of the GUI have been tested).
4) Why 5 test cases and not one tall test case with several steps? I would say: because the focus is to figure out whether really all elements (functions) of the GUI work well. If not, test cases for that GUI on higher test levels should not be executed!
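The checks from situation 1 could even be sketched as small automated assertions against an in-memory model of the GUI. The `EntryForm` class below is an invented stand-in for the real GUI, not an actual framework; it only illustrates the logic that TC1, TC3 and TC5 describe:

```python
# Minimal sketch of the TC1/TC3/TC5 checks from situation 1, assuming a
# hypothetical in-memory GUI model (EntryForm is invented for illustration).

class EntryForm:
    """Toy model of the GUI: 4 radio buttons, an entry field, a list."""

    def __init__(self):
        self.selected_radio = None  # none of the 4 radio buttons is set yet
        self.pending_entry = ""
        self.saved_entries = []

    def set_radio(self, n):
        assert 1 <= n <= 4
        self.selected_radio = n

    def type_entry(self, text):
        self.pending_entry = text

    def save_enabled(self):
        # Save is disabled if no radio button is set (TC3)
        # or the entry is only a blank (TC5).
        return self.selected_radio is not None and self.pending_entry.strip() != ""

    def save(self):
        if not self.save_enabled():
            raise RuntimeError("save button is disabled")
        self.saved_entries.append((self.selected_radio, self.pending_entry))
        self.pending_entry = ""


# TC3: no radio button set -> saving must not be possible
form = EntryForm()
form.type_entry("some data")
assert not form.save_enabled()

# TC5: radio button set, but entry is only a blank -> save stays disabled
form.set_radio(2)
form.type_entry(" ")
assert not form.save_enabled()

# TC1: radio button 1 set, entry added and saved -> appears in the list
form.set_radio(1)
form.type_entry("new entry")
form.save()
assert (1, "new entry") in form.saved_entries
```

Whether such a sketch is worth automating depends, again, on the context – but it shows how precise a „well-defined“ functional test case has to be.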

And now keep this information in mind and switch to another situation, situation 2:

In situation 2 the GUI has to be tested on a special test environment with a focus on integration testing. Integration testing, because the main focus is to check whether the GUI can be shipped to the customer, i.e. whether it works when a user works with the whole application (not only the GUI). So there is a mass of dependencies: another environment, maybe another test database, maybe another configuration of master data, rights and system settings which influence the GUI; the user can’t use the GUI directly – a special workflow/scenario (use of other GUIs) has to be performed before the new GUI can be used, and so on…

So, as a result of this, the script-based test case(s) have to be written in a way that considers all these issues concerning „integration“ and „workflow“.
But it can be assumed that the test cases from situation 1 have been executed and all passed.

As a tester in „situation 2“ I would write the following script-based test cases:

TC1: Use of the GUI after user login
– GUI is available and well integrated
– rights for the login user and the GUI are set
step1: log in to the application with the user
step2: add a new arbitrary entry in the GUI
step3: save the entry
step4: check whether the entry has been saved correctly
step5: switch to GUI 2
step6: edit the entry from step 2 after reopening the GUI
step7: save the edited entry
step8: check step 7
step9: reopen the GUI
step10: remove the entry
step11: save
step12: log out
step13: log in again with the user
step14: check that the entry from step 10 has been removed

TC2: Use of the GUI after login, several user switches (one user has no rights for the GUI)
– GUI is available and well integrated
– rights are set only for the first login user
description: With this test case it will be checked that it is only possible to access and use the GUI
with a user which has the corresponding rights configured.
step1: log in user1 to the application
step2: switch to the GUI and add arbitrary new data
step3: save
step4: switch users (log out user1, log in user2, which has no rights to access the GUI)
step5: switch to the GUI with user2
step6: check that the GUI can be seen by user2, but everything on the GUI is disabled (user2 can’t do anything, the new entry from user1 is not available for user2)
step7: switch users again
step8: open the GUI with user1 again
step9: remove the entry from step 2
step10: save
step11: switch to another GUI
step12: reopen the GUI and check that the entry has been deleted correctly
step13: log out with user1

Ok, that will be enough for the example of situation 2. Let’s summarize situation 2:
1) The focus is on integration testing with all the issues which belong to that.
2) Therefore: the description of the test cases is no less extensive, but in another way: the steps of the test cases don’t expand on any details of „function/GUI“-related things, but they consider the behaviour of the „integration“: dependencies and influences of other GUIs, user workflows, and so on.
3) Depending on the use cases and user workflows, there might be no fewer test cases than in situation 1.
4) Also here the question: why not one big test case with several steps? I would say: because the focus is to figure out how the new GUI has been „integrated“ under different dependencies, user workflows and use cases. And that’s the core point to test in situation 2.

But the question is: does it make sense to write and execute TC1 from situation 1 in situation 2? Or does it make sense to write and execute TC2 from situation 2 in situation 1?
I think not! And this can lead to the following:
The „test manager“ thinks: all „integration“ test cases have been executed and passed! The reality is: the „tester“ has executed the „GUI/function-based“ test cases!!!
Or vice versa: the „test manager“ thinks: all „GUI/function-based“ test cases have been executed and passed! But actually the „tester“ has only created and executed the „integration“-based test cases, and the customer gets problems with boundaries which have never been tested!!!

I will keep this problem in mind as „context of testing not considered when writing script-based test cases“, or simply „misunderstanding the context of test case writing“ and „misinterpreting test logs“.
And now, when comparing the summaries of situation 1 and situation 2, I can say that it depends on the focus and the test level (as a fan of the context-driven testing school, I would say: it depends on the context) how to write the script-based test cases.
To come back to my thesis from above, where I wrote that „it is very hard to write script-based test cases in a way that meets the needs of both groups of people“:
Do the members of every group really know what kind of test cases have been written and executed? Does the „tester“ really execute an „integration“ test case when the „test manager“ has the task to manage the „integration testing“? Or vice versa, does a „test manager“ think that all „GUI-based/functional“ tests are ok when he gets test logs where 90% of the executed test cases for the „GUI“ resulted in „passed“? I think it is NOT always and for every involved person clear what type of tests has to be written and executed.
Sometimes it’s also a problem that, for example, „integration“ tests have to run even when NO „GUI/functional“ testing has been done – actually a no-go, but due to missing resources unfortunately the reality in daily testing practice!

So what I want to say is: simply looking at the numbers of test logs can suggest that the system under test has been tested well, without any problems! But indeed that isn’t so! In my opinion the „test manager“ also has to ask the following: under which circumstances did which test cases pass? Has any test case been executed twice (several executions of the same test case mean that something went wrong and now works correctly in the SUT)? At which context, which test environment, which test level are we looking? Can all these questions really be considered and well interpreted from the number of test logs?

problem 3: „What is better: several test cases or several test steps in fewer test cases?“
Imagine again situation 1 and situation 2 described as examples for problem 2 above. For a module test of a new GUI it might be the better idea to write more test cases to check every single element on the new GUI – e. g.

tc1: input in textfield 1 and press ok button to save, tc2: edit textfield 1 (after successful execution of tc1), tc3: delete data of textfield after pressing the „delete“ button, tc4: set checkbox 1 and press the „ok“ button, and so on.

The question is – and here we are again in a different context – does this approach of writing test cases make any sense when we have to write test cases for that GUI for integration tests with some other GUI or some special workflow? In my opinion, in most cases not. The general way is that we don’t need such „elementary“ module test cases for every single element on the GUI at higher test levels. We should assume that those elementary test cases have already been run when we do e. g. integration testing.
In reality we would then write, for example, a test case with the following content:

TC1: add, edit, delete data in the GUI, navigate to GUI 2
step1: enter data in the mandatory fields of the new GUI
step2: save and check step 1
step3: navigate to GUI 2
step4: check that the data have been transferred correctly
step5: navigate back to the GUI and edit a field in the GUI
step6: repeat step 3 and step 4
step7: navigate back to the GUI and delete mandatory data
step8: save and check that the removal of the data works correctly, but navigation to GUI 2 should now not work

So what can we say? When writing test cases for „elementary“ tests, it is a good way to write one test case for every single issue to check.
When testing on higher test levels (workflow test, integration test, acceptance test, …), it is a good thing to „integrate“ several basic test cases into fewer test cases, but with several steps.
It’s the same thing as with writing software: first write single functions/methods in the code. Later „integrate“ these single functions/methods into classes.
And isn’t it right when I say: when we work on test levels like „integration“ testing, we test several „classes“ of code and check the different places in the SUT where those functions are called? A keyword in modern software development (at least in object-oriented software development) is „encapsulation“. So why shouldn’t we use „encapsulation“ to write our script-based test cases for higher test levels?
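One way to carry this „encapsulation“ idea over to test cases is to wrap the elementary steps (log in, add, check, delete) in reusable helper functions and let the higher-level integration test case simply compose them. The sketch below uses an invented `Application` stand-in, not a real test framework:

```python
# Sketch: "encapsulating" elementary test steps as reusable helpers,
# then composing them into one integration-level test case.
# Application and all names are hypothetical stand-ins.

class Application:
    def __init__(self):
        self.current_user = None
        self.entries = {}

    def login(self, user):
        self.current_user = user

    def add_entry(self, key, value):
        self.entries[key] = value

    def delete_entry(self, key):
        del self.entries[key]


# --- elementary steps (the "module test" level) -------------------
def step_login(app, user):
    app.login(user)
    assert app.current_user == user

def step_add_and_check(app, key, value):
    app.add_entry(key, value)
    assert app.entries[key] == value

def step_delete_and_check(app, key):
    app.delete_entry(key)
    assert key not in app.entries


# --- integration-level test case composed from the steps ----------
def tc_integration_workflow():
    app = Application()
    step_login(app, "user1")
    step_add_and_check(app, "field1", "some data")
    step_delete_and_check(app, "field1")
    return "passed"


print(tc_integration_workflow())  # prints "passed"
```

The integration test case stays short and readable because the detailed checks live inside the encapsulated steps – the same trade-off as in the script-based variant.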

problem 4: „Writing script-based test cases: well structured with fixed test data and fixed steps versus well structured but dynamic“
When I started my career as a software tester I liked standards: when I found a requirement for which to write a test case which says „in textfield x, only numbers from 0 to 9 are allowed“, I thought: „Oh, fine! Just type the range 0 to 9 (maybe also -1 and 10 to check the boundaries) into the test step of my test case.“

It’s no problem to work with that test case, but a couple of releases later the requirement has changed: „in textfield x, only numbers from -1 to 8 are allowed“.
So I had to change my test case so that the requirement is covered correctly again. But here the problem is: I have to do this every time the values in the requirement change.
So wouldn’t it be a better idea to simply write into the test case „step x: type in numbers from requirement x, check also the boundaries“ and link the test case step with the requirement document? I think this sounds better and makes for less effort.
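The „link the step to the requirement“ idea can be sketched like this: the allowed range lives in one place (here a dict standing in for the requirements document), and the boundary values are derived from it, so a changed requirement changes the test data automatically. All names are illustrative assumptions:

```python
# Sketch: deriving boundary test data from the requirement instead of
# hard-coding it into the test step. REQUIREMENTS is a stand-in for a
# link to the real requirements document.

REQUIREMENTS = {
    "textfield_x": {"min": 0, "max": 9},  # "only numbers from 0 to 9 are allowed"
}

def boundary_values(req):
    """Valid boundaries plus the first invalid value on each side."""
    lo, hi = req["min"], req["max"]
    return {
        "valid": [lo, hi],
        "invalid": [lo - 1, hi + 1],
    }

def is_accepted(value, req):
    # Stand-in for the real input validation of textfield x.
    return req["min"] <= value <= req["max"]

req = REQUIREMENTS["textfield_x"]
data = boundary_values(req)

for v in data["valid"]:
    assert is_accepted(v, req), f"{v} should be accepted"
for v in data["invalid"]:
    assert not is_accepted(v, req), f"{v} should be rejected"

# If the requirement later changes to "-1 to 8", only REQUIREMENTS
# needs an update; the boundary values follow automatically.
```

The same principle applies to a manual script-based test case: reference the requirement instead of copying its values into the step.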

Another example: a special type of GUI has to be tested, and imagine that every release there is something „new“ on the GUI: for example further textfields or more special filter options. Now the problem is: due to the small „changes“ to the GUI it doesn’t make sense to create a new test case for every release – at least when the context of testing is „integration testing“. In the case of functional/module testing it would of course make sense to cover it with a new test case!

So these changes have to be considered in the one and only test case for the GUI (assuming we have one test case for „integration“ testing of the GUI).
Then the test case should be written in a dynamic style:
Instead of:
step1: add entry to selectbox 1
step2: save and check entry from step1
step3: add entry to selectbox 2
step4: save and check entry from step3
step5: add entry to….
step6: ….
step7: and so on and so on

… it would be more comfortable to write simple:
step1: add entry to next gui element
step2: save and check entry from step1
step3: repeat step1 and step2 for all elements of the GUI
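The dynamic variant could be sketched as a loop over whatever elements the GUI currently has, so elements added in later releases are covered without rewriting the test case. The `Gui` model and element names below are invented for illustration:

```python
# Sketch of the "dynamic" test case: one loop over all current GUI
# elements instead of one hard-coded step per element.
# The GUI model and element names are hypothetical.

class Gui:
    def __init__(self, element_names):
        self.elements = {name: None for name in element_names}

    def add_entry(self, element, value):
        self.elements[element] = value

    def saved_value(self, element):
        return self.elements[element]


# In a later release the GUI may gain elements; the test case below
# does not change, it just iterates over whatever is there.
gui = Gui(["selectbox1", "selectbox2", "textfield1"])

for element in gui.elements:
    gui.add_entry(element, f"entry for {element}")             # step 1
    assert gui.saved_value(element) == f"entry for {element}"  # step 2
# step 3: the loop itself repeats steps 1 and 2 for all elements
```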

problem 5: „Is there a standard for writing script-based test cases?“
In general: yes. Try to search for it on the web and you will find enough of them. Too many? Ok, and that’s the problem. I think every standard which is available somewhere is usable only for its own needs. I guess the one standard for writing your test cases does not exist. And if you compare the different ways of writing script-based test cases for manual testing from the examples above, it depends on the context, focus, test level and so on.

After reading the article „Frustration of Test Case Debates – Taking the Positive out of Online Forums“ by Brian Osman in the November 2012 edition of „Testing Circus“ magazine, I’m glad that other people also discuss the problems of writing and executing test cases. The issues mentioned in this article are well known to me and it was a pleasure for me to read it.

So what is my overall verdict concerning „writing script-based test cases“?
1) The most important thing is that you as a tester should know your testing model and your testing mission. If you are new to a testing project you should get to know your stakeholders, developers and testing colleagues so that you can ask them what and how to test for the first time. And as time passes, you know more and more about your testing area and you can add more and more information to your testing model and your testing mindset. And I think it’s a good idea to do some „exploratory testing“ at first. With this you can also cover some „functional/module“ test cases and get more and more information about your testing range. And with more knowledge about your testing area you can easily cover the „interfaces“ between different modules under test, which means that you know the workflows better and better and that you are well prepared for higher test levels like „integration testing“.
But if you are new to your testing realm, you can also learn from script-based test cases when – and that’s important – when they are well written with a „step-by-step“ description (compare problem 1 mentioned above)! So what’s my message and experience here? Use script-based test cases to learn from them when you are new to a testing project. But also use them when they are not well written. Over time you can enrich them with valuable information so that those test cases become really useful! Doing this is not only useful for you but also for other people who may have to execute those test cases.

2) Consider the current context of testing when writing and executing test cases! The best way to do so, I think, is to find something between strictly writing test steps in your test cases like „do this, do that, check that“ and keeping some „exploratory testing“ in mind during test case execution. It’s a different view of the test case when you are executing it, for example, during a „sanity check“ or later as a regression test case on an „updated test environment“.
Also pay attention to the different views of „test levels“, „test environments“ and so on. It all depends on the context! And if you have to „log“ the executions of test cases in different contexts, make sure that your „test managers“ will take the right look at the „right“ test logs (compare problem 2 mentioned above).

3) Last but not least, here is my favorite suggestion: communication is everything!
Testing is about questioning. If there’s something you don’t know about how to write or execute your script-based test cases, or of course if you think you might have found a bug in the system under test but you don’t feel certain about it, then just ask. Try to get to know the people who can answer your questions and ask them. Ask your project owners, ask your developers, ask your test managers, ask other testers and stakeholders and so on. Writing and executing test cases is only valuable
when every person involved in the test project knows what he/she is talking about!

4) I think there will never be a perfect „manual“ for creating and writing down test cases. You could also ask for the perfect „manual“ for writing code, couldn’t you? (compare the problems 3, 4 and 5 mentioned above)

I hope this post was inspiring to you, my testing friends! And I want to invite you to write down your own opinion about writing and executing script-based test cases. Also feel free to comment on this post!

kind regards from Germany

