Monday, 20 August 2018

D, E, F’s of Software Testing

We always talk about the A, B, C's of this and that, so for a change, here are the D, E, F's of Software Testing.


D for Data
Data Testing - Test the input and output data extensively, and provide coverage in terms of the below.

Test the data - for legal usage and for infringement of the right to data privacy. What data can be used, when, by whom, and why?
Test the data - how is data collected, stored/saved, transmitted, used and disposed of.
Test the data - for correctness and reliability. Not everything on the Internet is from a trusted source.
Test the input data - for idiosyncratic patterns.
Test the output data - if it's spewing out relevant output for the input fed.
Test the data - input fields for the invisible forces attacking them.
Test the data - for its provenance and lifecycle: who prepared it, who the end user is and whether they are authorized, where we got the data from, where and how we are storing it, how we could store it better and more securely, how else we can use it, how legitimate the data in transit is, how we distribute it across systems (third party and otherwise), whether users are aware of how their personal/valuable information is stored and used, who or which tool generated the input data, whether that tool is licensed, how the data is generated, who the makers of that tool are, who else can access the data, and how the data is disposed of.
Test the data - for its quality.

The question for us to think about here is:
Why are we collecting this data from a user?
Answering this question can help in shaping the software for apt usage, and can help reduce the cost incurred in designing, implementing, and testing the solution, and after it is released to users.
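Some of the checks above lend themselves to automation. As a minimal sketch (the record fields, the e-mail rule, and the age range below are hypothetical, not from any real system), data-quality testing of input records might look like:

```python
# A minimal sketch of automated data-quality checks on input records.
# The field names, the e-mail rule, and the age range are hypothetical.
records = [
    {"user_id": 1, "email": "a@example.com", "age": 34},
    {"user_id": 2, "email": "not-an-email", "age": 34},
    {"user_id": 3, "email": "c@example.com", "age": -5},
]

def quality_issues(record):
    # Collect every quality problem found in one record.
    issues = []
    if "@" not in record.get("email", ""):
        issues.append("email: missing '@'")
    if not 0 <= record.get("age", -1) <= 130:
        issues.append("age: out of plausible range")
    return issues

for r in records:
    problems = quality_issues(r)
    if problems:
        print(r["user_id"], problems)
```

Checks like these answer only the narrow "is the data well-formed" question; the provenance and privacy questions above still need a human tester asking them.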

E for Evidence
Evidence collection of the testing performed is of utmost importance to testing. It is also significant to understand the value-addition of testing evidence produced and provided to the client.

Testers need to be trained to use tools to capture and produce evidence. From the project initiation phase to completion and beyond, testers file evidence of the testing done, coverage provided, scope altered, tools used, and checklists filled, for consumption by an end user. The test lead needs to ensure that the test evidence reaches the consumer (the client) easily and in time.

Be punctilious about the testing approaches you learn and use, as that creates value. This includes the context-driven test approach, which sets up the tester to form and base the testing on a cautious understanding of the context. Failing to establish the context prior to testing and during execution can leave the product with lousy quality.

With evidence collection, testers learn the fine distinction between testing and providing proof. Without proof, testing is deemed valueless.

The question for us to think about here is: 
What steps do you take, or have you taken, to gather evidence?

F for Fact
Lest we forget the value addition of reviews: root cause analysis of bugs reveals that a lack of fact-checking can be fatal to any project.

Acting as fact checkers can ultimately make testers better contributors to quality.

Fact checking as part of testing can trigger a tester to be hawk-eyed during the whole process and make one quality conscious. Artefacts worth fact-checking include:

The statement of work
Collated product and testing requirements
Use cases referred to and used
User stories created and suspended
Agreed user acceptance criteria
Test ideas, charters designed for execution with valid and invalid inputs
Product understanding and bug review
Test reports and graphs generated
Risks encountered and mitigation planned
Experience and closure reports

A side effect of doing this is that it helps a tester better track the time spent on the testing activities one is involved with, and to raise relevant questions related to estimation - an area where missteps are common when planning for testing.

All testing-related activities need to be fact-checked. Testers, practice checking for facts before filing a bug or sending out a test report. The vocabulary a tester uses, when testing and when not testing, plays a prime role in doing so.

The question for you here is:
Dear testers, have you fact checked today?

Wednesday, 1 August 2018

Understanding The Context

Establishing the context is essential when conversing, writing and especially when testing.

Reasons why establishing the context prior to testing can be missed out:
  • Ignorance or unawareness – testers/leaders are not yet aware of the science behind establishing a context.
  • Code and test enthusiasts hastening to verify and validate the product against set requirements.
  • By not encouraging the idea of questioning at every stage of software development, one could dampen the outcome of testing. Relevant and well-timed questioning leads to challenging the status quo and to learning new dimensions and the changing contexts that are integral to testing.
  • Change in requirements requires a change in context, which is seldom accommodated in ardent and uncompromising work cultures.

The above scenarios can occur in instances when:
  • The team has not set clear team and individual objectives prior to testing.
  • The code delivered to be tested is delayed and has passed the test execution start date. 
  • When the manager is pushing for testing to be completed and delivered at the earliest. 
  • Every member of the team wants to be in the good books of everyone else and is not asking for lost time to test.
  • A mentor is missing whose guidance could help in fire-fighting.

What is the context?

*Context - The set of facts or circumstances that surround a situation or an event. 
*Definition via the WordWeb.

De-constructing Context
  • Making sense of the system under test is itself the beginning of testing and is the first step in knowing the system.
  • Understanding the circumstances that surround a situation or an event - in this context, the system under test needs to be understood in the environment it operates in, in order to test the system in an interactive mode.

Why is it important to establish the context when testing?

It is required in order to construct solutions for testing problems and to understand the system’s isolated and interactive behavior. 

Testing is to learn and to know the system under test.
For further reading, I suggest Chapter 1 of the book 'An Introduction to General Systems Thinking' by Gerald Marvin Weinberg, on how to understand, to know, to extend, and to limit the learning/testing, and how far we can or need to go when subjecting the system under test to its context.

An analogy
Recently, I was asked to prepare interview questions on logical reasoning, and the request came as a one-liner. I went ahead without questioning (or establishing the context) and spent a considerable amount of time defining and redefining the task, learning about questions and the various types and modes of questioning, and learning the definitions of logic, reasoning, and facts, and about ways of thinking, asking, and framing questions. I found various online resources on questioning and interviews, and finally submitted the 20 questions I prepared, some of them original.
It was only later, when I asked for a review, that I learned the context was missing from the written communication (the e-mail request). The email lacked the required contextual information, I had assumed the context from my own current understanding, and the whole effort - however original - was rendered a waste of valuable time for all involved.

Key Takeaways
  • Do not hesitate to ask relevant questions.
  • Establish context whenever needed.
  • This tweet from JeanAnn Harrison sums up the other lesson learned.

Optional reading

Thursday, 26 July 2018

A few points I learned about running a workshop online.

The simple answer is Don't do it.

Don't run a workshop online if there are no representatives or a co-presenter on site.
No matter the amount of preparation, the confidence in the content, and the research done on the exercise involved, do refrain from running a workshop online when:

`The exercise designed demands high interaction, and/or you crave it.
`The audience is new to this style of presenting.
`The speaker is new to this style.
`The audience and the speaker are new to each other.
`The exercise designed requires both the speaker and the audience to be on mute.
`The exercise designed requires the audience to focus and work in a silo.
`The speaker cannot see and/or interact with all of the audience.

Do online workshops only if there is no other option for getting the audience to attend offline.
And if:

`A co-presenter is at the onsite and acts as a binder between the presenter and the audience.
`It is a group activity, then do try this out but only if the audience is well-versed in the exercise.
`Otherwise, at all times, hold an introductory class, explain the exercise, and get to where the audience can see you and you can see the audience. Then attempt it.
`And learn whether this is the style that suits you and the audience, then decide.

I recently tried experimenting with a few audiences, and I learned that this style of running a workshop online, with a new exercise and a new turnout, doesn't suit my style of delivering a presentation.

I crave audience energy and interaction, and at times derive my energy from the audience.
Understanding the pulse of the audience, meeting their eyes, and getting immediate feedback is my style. Knowing the assemblage helps me.

Thursday, 14 June 2018

Generating Test Ideas

It is interesting to analyze what we subscribe to and are involuntarily subscribing to, especially when we are in the beacon-range with active signals.

In one such encounter, I overheard two people conversing on the pathway. Person A to B says, ‘with an additional 10000 rupees neither are you going to become rich nor I will become poor by lending it to you’.

It dawned on me how false this analogy is for testing: each and every idea generated while testing can greatly improve the quality of the product. The quest for test ideas is unending for anyone who is test-obsessed.

Generation of ideas is limited to the current know-how we have of the domain, the technology, the hardware supported, and the implementation technique used. Straightforward testing can be performed with this limited knowledge. To deep dive, a tester needs to expand that know-how and generate more ideas by integrating teachings from various schools of testing, referring to the knowledge work of expert testers, and contributing to this body of work themselves - and thus add value to the product being tested.

Idea Generation - Is an intellectual and a fun activity.

  • Idea generation is a brainual activity.
  • It is an individual activity, generate ideas with the knowledge you have thus far gained.
  • It is a group activity, generate ideas as a group learning activity.
  • Not every idea that occurs needs to be tested, but only those that fit the context.
  • Such ideas can be stored away for later and for relevant usage.

How - Idea Generation?

  • Brainstorm with a group of friends during a recess.
  • Start generating ideas to break a set pattern.
  • Throw a die or two and, based on the outcome or the occurring patterns, generate ideas for that topic.
  • Role play to generate new test ideas.
  • Set a limit, and go beyond the set limit.
  • Roll the dice yourself, or throw a dart on the testing dart board.
  • Ask your friend/colleague to throw you a testing challenge.
  • Find a mentor to practice testing exercises with. Gather ideas.
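The dice and dart-board prompts above can even be scripted. As a toy sketch (the topic list below is hypothetical - substitute the test approaches and techniques relevant to your product):

```python
import random

# A toy "testing dart board": the topics below are hypothetical examples;
# replace them with the approaches and techniques you want to practice.
TOPICS = ["boundary values", "invalid inputs", "concurrency",
          "localisation", "data in transit", "error handling"]

def pick_topic(rng=random):
    # One "throw of the die": pick a topic to generate test ideas for.
    return rng.choice(TOPICS)

print(pick_topic())
```

Pinning the random source (`rng`) as a parameter makes the picker itself testable with a seeded `random.Random`.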

When - Idea Generation?

  • When testing.
  • When not testing.
  • During the test preparation phase, the testing phase, and afterward.
  • Generate test ideas to test requirements, design, raw data, processed data, data in transit, unit test, integration, system testing and beyond.
  • When in a reflective, analytical or critical mode.
  • When learning to test, and when unlearning.
  • When preparing the test charter.

Where - Idea Generation?

  • At your desk. Invite a friend to carry out this learning activity.
  • At the breakout area. Hang a dart board or place a bowl with different types of testing, test approaches, and techniques to choose from. Pick one as you move around. It serves as good physical exercise and can help you learn to test on the go, too.
  • Before you start the day/ week/month, make a mental note to learn about a certain testing trend and generate ideas using that approach.

Why - Idea Generation?

  • Save a copy of all the ideas generated for self-use or share it with anyone else in the team.
  • Share it with the world of testers - who may have had a formal software testing education or not.
  • Create charters, maps, heuristics, and tools to aid and ease your testing activity.
  • Build your network by learning and sharing with like- and unlike-minded testers - they all have ideas to contribute.
  • Give and take credit for the work done. Build your credibility as a tester.

Over-thinking can cause fatigue easily, so it is better to know when to STOP.

  • Slow down - Slow down to distract yourself, to allow new ideas to flow in, or to break a set pattern.
  • Take a break - Break when you feel fatigued; it is common for brainual activity to drain you and stall the continuous flow of ideas. Invite other learners to drive the flow of thoughts; learn with peers or pair up for some time.
  • Output - Analyse the output/outcome thus far; take credit and celebrate without waiting for others to pat your back. Studying the outcome thus far can guide us greatly on how to proceed further, and keeps a check on how far you have come since you started.
  • Pull over - Make do with the current status quo for now.

STOP and allow yourself to have Eureka moments - Ideas need you.

Not everyone needs an inspiration. Ideas flow to some as their know-how grows, and some do need a trigger to act, and to stop.

Monday, 28 May 2018

Word, Character Counter, and You

Recently I tried four different word counters available for free, plus the 'Word Count' browser extension on Google Chrome, to count the words and characters of the same content across all of them.

The algorithms used by these free online word and character counters apparently differ in accuracy: 3 out of the 4 tools yielded 3 different word counts, and all 4 of them yielded 3 different character counts, both with and without white spaces. The Word Count Google Chrome extension yielded the same result when used on top of all 4 tools.

Tools that I tried out and the results ([word count] / [character count with spaces] / [character count without spaces]) are as below:
  • Tool 1 - 448 / 2664 / 2202
  • Tool 2 - 451 / 2630 / - (interestingly, I tried the Grammarly app, for the first time, on top of this tool and it yielded a word count of 453)
  • Tool 3 - 453 / 2663 / 2202
  • Tool 4 - 448 / 2663 / 2202

The Word Count Chrome extension yielded this same result on all 4 pages - 446.

One or all of the tools above could be accurate, depending on the rules used to calculate. And one of them did come with this disclaimer: We strive to make our tools as accurate as possible but we cannot guarantee it will always be so.
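Differing counting rules alone can explain such discrepancies. A minimal sketch (the two rules below are hypothetical examples, not the algorithm of any tool I tried):

```python
import re

# Sample text with the kinds of tokens that trip up counters: a slash,
# a hyphenated word, doubled spaces, and a standalone dash.
TEXT = "A/B testing, e-mail and  double  spaces - edge cases."

def count_split(text):
    # Rule 1: a "word" is any run of non-whitespace characters.
    return len(text.split())

def count_word_chars(text):
    # Rule 2: a "word" is any run of letters, digits, or apostrophes,
    # so "A/B" counts as two words and so does "e-mail".
    return len(re.findall(r"[A-Za-z0-9']+", text))

def char_counts(text):
    # Character counts with and without white space.
    return len(text), len(re.sub(r"\s", "", text))

print(count_split(TEXT))       # 9 under the whitespace rule
print(count_word_chars(TEXT))  # 10 under the word-character rule
print(char_counts(TEXT))       # (53, 43)
```

Neither rule is "wrong"; each is a different, defensible definition of a word - which is why asking a tool what it counts matters more than trusting the number.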

Lessons from this exercise:
  • Provide proper spacing.
  • Type appropriate words wherever required; e.g., A/B, A-B, A and B, A or B, A to B can each be expanded aptly.
  • Beware of auto-correct and basing a/any report on unreliable sources.
  • Follow language specific guidelines that can ease interaction with the tool.
  • Despite these corrections, there will be no guarantee of the accuracy provided by these tools.
  • Try using reliable apps when dealing with important terms/clients.
  • Read the disclaimer.
  • To tool owners: carry out accuracy and reliability testing, and provide options for the user to reach out. Provide a disclaimer, help, and a manual about the product, which the reader can use to understand the product better.
Reach out to the tool provider if there is scope to learn about the issue - help them, help you.

Tuesday, 22 May 2018

Note to freshers/engineers

Friends from the engineering fraternity who are fresh out of college/university need to know this as you begin to look for a career in the corporate world.
The number of seats a recruiting company allocates to freshers, via campus recruitment and walk-in interviews, depends on a number of factors such as the size of the company, sales, projects in the pipeline, client budget allocation, and the need for fresh graduates. If a selection is not made via these modes, then there are other approaches, and effort needs to be put into training yourself to reach out to the company.

Here are a few ways that you can try out before approaching seniors or friends to apply via a referral program. Based on the approach that I have taken, I have shared some tips below that can help you take on this path independently and know the do's and don'ts.

Courses / Education

  • Continuous learning in a topic/subject of interest helps build a strong profile while you wait.
  • Pick a topic and enroll in free/paid courses on Coursera, Udemy, YouTube, or other learning platforms.
  • Share the knowledge with knowledge seekers on a chosen medium, and in turn help your work reach the talent acquisition team or someone on the lookout for skills matching yours. Help the right job find you. Highlight your current learning and the projects you have worked on.
  • Take time to work on a pet project which can be added to your profile as independent work. Many multinational firms look favourably on pet-project credits.

Resume / Profile building

  • Add - New courses taken, new interests developed and projects contributed to.
  • Remove - What is not relevant today.
  • Build - Learn the current trends and build your profile accordingly.
  • Get the profile reviewed by a professional.
  • Take the effort to build your personal brand on a reliable medium. Do not share information on unreliable sites.

Open source contribution

  • Enroll to contribute to open source projects. 
  • Sources such as Quora and Medium carry articles on how to begin contributing, how to prepare to be an intern and contribute, and how to get paid for contributing; apt search criteria will yield results based on where you wish to make a contribution.
  • Open source contribution is one of the key factors that can help influence your recruiter.


  • Networking with reliable sources can benefit you.
  • It can help find a mentor to learn more from.
  • Subscribe/Enroll in forums that are reliable in this regard.
  • Seek help to apply for open job postings. 
  • Importantly, know that there is no return favour that is needed from you.

  • After applying via the referral program, wait for the call.
  • From here on, HR should be able to take it forward.
  • The referring employee may not have a direct connection with the proceedings of the hiring process.
  • The employee via the referral program benefits if you are selected and stay with the firm for a set period of time.
  • If at any point in time you wish to leave, and it is before the employee receives the benefit, you can leave cordially.
  • The employee has no right to pressurise you to stay in order to reap the benefit. Know that if you do wish to stay, then the employee does benefit.
  • At times, the HR department may have missed sharing the benefit with the employee; the referring employee can then reach out to HR.
Best wishes.

Quote: The most meritorious level of charity is helping someone to become self-supporting ~ Judaism.

Monday, 23 April 2018

DISSCOH ~ Heuristic Approach To Bug Reporting

This article focuses on the approach to bug reporting with a closer look at the rejected and deferred bugs. 

It was originally published in 

What is a heuristic?
The word heuristic is derived from the Greek word 'heuriskein' meaning 'to discover'.
A commonsense rule/set of rules intended to increase the probability of solving some problem (definition via the WordWeb 5.52).

I was introduced to the heuristic approach of solving software (testing) problems when I first met Pradeep Soundararajan.
Based on my personal experience with bug reporting, I have collated the points and the examples below to ponder upon while reporting bugs.




Consider the word 'Error' being misspelled as 'ERORR'. The misspelled word causes you discomfort and needs to be logged as a bug. It is possible that some may not see it as discomfort, and the bug, if logged, could get fixed, rejected, or deferred.

To understand what a mis-spelt word could cost, take a look at this picture in the link below. 
And that's the price Chile paid for the mis-spelt word. 

When in doubt about whether to log a bug, remember to:
  • Question with an intention to gather information.
  • Keep in mind that when we know better, we test better.
  • Use the information gathered to understand the impact the bug might cause if left unattended.

Refer to the wishing wand chapter from the book – 'More Secrets of Consulting' authored by Gerald M. Weinberg on questioning with an intention to gather information.
Question who the users, target audience and the customers are. 
Know that your definition of discomfort can be different from the others.
Question the stakeholders, business owner, product owner, yourself, the users, and everyone involved in order to gather information to answer arising questions.
If it seems clear to you that a bug needs to be fixed then log it and move ahead with logging other bugs, which you think could add value to the software being developed.

Points to ponder:
Have you noticed how quickly (or slowly) a misspelled word is handled on a very popular website/standalone application versus a not-so-popular one?
Have you observed whether adding value by logging other bugs along with the misspelling got both bugs handled, versus the misspelling issue alone being ignored?
What factors are affecting the decision makers? 
Have you questioned the decision makers? 
If not, it is worth finding answers from the decision makers.

Consider reading the below excerpt from an article by Michael Larsen on his blog 

“Spelling errors are often items that really get an executive irritated, especially in things like End User License Agreements. I learned this a number of years ago when, because of misspellings and typos in the legal agreement, the net result was the fact that we had created a loophole where we were making a tacit agreement that was never intended (and that we would be liable if something were to happen if the users did not do the steps necessary). 
These areas are not glamorous, and sometimes they can be quite tedious. While that may be true, these are also areas that are considered most likely to impact revenue, and therefore will be given close scrutiny by executives. If we are taking the time necessary to look in these areas and report on what we see, we will probably get more traction and movement on these issues. Why? Because these are the issues that really matter. It may not seem fair or intuitive, but you can almost never go wrong reporting and championing bugs that the CEO has mentioned will be problematic for them if they were to be  released.”


Users must verify that information found on the Internet is authentic, especially if/when producing that information as proof for a bug raised.

Consider the link below as an example of probing.

The information related to the prefix list 985-989 was missing in the tabular column at the time the issue (the missing information) was reported.
Post the fix, the readers now have access to this information: Prefixes not listed above are reserved by GS1 Global Office for allocations in non-member countries and for future use.

This information is VITAL and is essential to any retail organisation.

Optional further reading:

Question with an intent to gather the RIGHT information.

Who are the contributors to Wikipedia? 
Views expressed on Wikipedia or on the Internet could be an individual's perspective and/or an organisation adding its own view.

Diligently work on providing evidence and be thorough with the investigation. Build your first and subsequent investigation reports with references showing why the information on the Internet is or is not correct.
Also know, when to stop probing.


Consider a bug description like the below: 

On 'Nokia Lumia 520' device, in some cases the user is unable to slide down to the whole list of paired devices which are connected via the Bluetooth on that device.

Wait! In most cases the users and/or the programmers might ignore this bug (as there is a workaround), so why fix it at all?

Avoid the usage of words like 'some cases' and 'most cases' in the testing vocabulary.
What are some cases?
What are most cases? 
Who is the judge of such cases? 
Is there a way to concretely confirm what some and most cases are? 
If the answer is No - then avoid such misleading words in verbal and non-verbal communication and while reporting bugs.
Instead, it helps if some/most cases are expanded to include the details of what it really means.


Consider this bug rejection reason - "Suggestion rather than a bug"
The bug validator conveys that the raised bug is not a bug but a suggestion.

Dear Testers, 
Provide reason/s for logging the bug, with appropriate reasoning, and ask for a concrete understanding of why a bug was deferred rather than fixed.

A tester could have logged the bug because he/she did not have all the required/sufficient information to judge the failed test as a suggestion rather than a bug. This in itself could be due to insufficient documentation.

It could be a missing requirement in the requirements document.
The scope of testing may not have been specified, or may be limited.
It could be any of these or other reasons. 

Hence it is essential that the validator too provides a reason for marking the bug as a suggestion. Ask for and receive very specific rejection reasons.


Failures can teach a lifetime's lessons.
Care to defend the rejected bug if required.
Read the rejection reason when a bug is rejected.

If/when there is no reason for rejection, ask for it in written communication. 
Re-open a closed bug if you find it detrimental to the current requirement, or if you foresee it as a potential candidate for becoming a deferred bug.

And remember this when trying to recreate that assumed irreproducible bug.
Image courtesy: The above cartoon is designed by Andy Glover.


Have you heard this rejection reason before? 

Originally, on my machine, the requirement is/was satisfied and hence this is not a bug.

Know your test environments.
a) Did the programmer/tester say this only to dodge further questions? 
b) 'Originally' is in consideration to what? 
c) Avoid usage of words like: Unorganised, primarily, to begin with, earlier and works on my machine when logging and rejecting a bug.

In agile, requirements don't always stay true to their original description. 
Ask what 'originally' means.


Learn the history of the product. 
While testing, try not to be biased by the launch date, versions and previous state of the product. Bias and un-bias if required. Test your assumptions regularly.

Find the answer to:
Should I be performing sympathetic testing as this product was launched 80 minutes/hours/days ago?
  • Irrespective of any such preconceptions, log the bug.
  • Remember your right to information. Question whenever required.
  • If there is no scope to question or there is a block recognise it. Try to unblock.
  • There could be a case when a bug remains unaddressed/in a new state, or has not been fixed. Follow up on such bugs, be persistent, and log bugs *dispassionately.

How do you use the DISSCOH heuristic when logging a bug or approaching a rejected bug?
I would like to suggest that you try it out and check its usage in your context when reporting a bug.


*RIMGEA: A Bug reporting Heuristic


My gratitude goes to Carsten Feilberg and James Marcus Bach who mentored me on bug logging and bug titling respectively.

Andy Glover and Michael Larsen, for granting permission to use the cartoon and the excerpt from their blogs respectively.
Other mentions and further optional readings about Heuristics include: 

Parimala Hariprasad

Lynn McKee

Ben Simo

Jason Coutu

Happy and efficient bug logging.

Friday, 2 February 2018

Product Image Testing Ideas

An attempt at collating testing ideas for the image/s inducted for a product. There is scope to add ideas relating to relevant image association, handling missing images, accessibility, and securing and addressing the quality of the added image.
Terms of use: if you are working on image induction (single, bulk), image testing, or image search on an e-commerce site, this can be used (only) as a reference. Feel free to adapt it to various other contexts.
Happy testing.
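As one minimal sketch of the "handling missing images" and accessibility ideas above (the sample markup, file names, and alt texts are hypothetical, not from any real site):

```python
from html.parser import HTMLParser

# A minimal sketch of a missing-image/alt-text audit on product HTML.
# The sample markup below is hypothetical, not from any real site.
SAMPLE_HTML = """
<div class="product">
  <img src="shoe-front.jpg" alt="Front view of the shoe">
  <img src="shoe-side.jpg">
  <img src="" alt="Back view">
</div>
"""

class ImageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        # Flag <img> tags with an empty/missing src or no alt text.
        if tag != "img":
            return
        attrs = dict(attrs)
        if not attrs.get("src"):
            self.findings.append("img with empty/missing src")
        if "alt" not in attrs:
            self.findings.append(f"img '{attrs.get('src', '')}' missing alt text")

audit = ImageAudit()
audit.feed(SAMPLE_HTML)
for finding in audit.findings:
    print(finding)
```

A static check like this only covers well-formedness; whether the image actually matches the product, loads in time, and renders correctly still needs a tester.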