Testing

Why we should all be embarrassed by the WebAIM Million Report

TL;DR

Digital accessibility refers to designing and developing websites and digital tools in a way that makes them usable by people with disabilities. Unfortunately, the WebAIM Million report, which analyses the accessibility of the top one million websites, shows that progress towards web accessibility has been slow. This is a cause for embarrassment because accessibility is a basic human right and failure to ensure it on the web excludes millions of people from accessing information and services online. 

It’s just not good enough, folks. We can make excuses that we don’t have control over the design or brand colours, but we have to keep raising these issues until the noise is so loud that people start listening. 

What is this article about? 

The WebAIM Million reports on the accessibility of the top one million websites ranked by Alexa. While there have been some small improvements in accessibility since the report began in 2019, there are still too many instances of basic issues, especially on shopping and entertainment websites. 

What is Digital Accessibility? 

Digital accessibility refers to the ability of people with disabilities, including but not limited to visual, auditory, physical, speech and cognitive impairments and neurodivergence, to access and use digital content and technology.

In practice, digital accessibility involves designing websites, applications, and digital content in a way that makes them usable by as many people as possible, regardless of their abilities. This includes providing alternative formats for visual and audio content, ensuring that content can be navigated by keyboard and used with assistive technologies like screen readers and speech recognition software, and making sure that colour contrast and font sizes are appropriate for people with visual impairments.

The goal of digital accessibility is to remove barriers that prevent people with disabilities from fully participating in and benefiting from the digital universe. If we make digital content and technology more accessible, we can ensure that everyone has equal access to information, services, and opportunities online.

What is the WebAIM Million Report? 

The WebAIM Million project conducts an accessibility evaluation of the home pages of the top one million websites. The evaluation is conducted using the WAVE stand-alone API and the results provide an overview of the current state of web accessibility. The report, which can be found at https://webaim.org/projects/million/, covers detected errors, page complexity, the most common error types and more. 

While it has shown some positive trends over the years, progress towards web accessibility has been painfully slow and there is still so much more work to be done. 

Here are some key trends from the report since it began:

  • In the first report, published in 2019, very few of the top one million websites had no detectable accessibility errors. This number increased in each subsequent report, indicating some progress in web accessibility. But at the current rate of progress, with the current technology, it could be nearly a hundred years before all sites are accessible. How sad is that? 

  • Alternative text for images has shown consistent improvement over the years. However, as there are valid reasons for not having alt text (for example, on a purely decorative image), it is hard to tell from these figures how bad the situation really is. 

  • Empty links are down 8% since the first report, but 50% still fail due to ambiguous text such as ‘click here’, ‘more’, or ‘continue’. 

  • Colour contrast is a persistent issue, usually due to design or brand choices. It is the highest-ranking issue in the report: the latest figure of 83.6% of home pages having at least one contrast issue is barely down from the 85.3% recorded in 2019. 
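Colour contrast is also one of the easiest issues to check programmatically. As a minimal sketch (the formula follows the WCAG 2 definitions of relative luminance and contrast ratio; the function names are my own), a check might look like this:

```python
# Minimal WCAG 2 contrast-ratio check (formula from the WCAG 2 definitions).

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as 0-255 channel values."""
    def linearise(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1.0 (identical) to 21.0."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG 2 level AA requires at least 4.5:1 for normal-sized text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # black on white → 21.0
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # #777 on white → 4.48, just under AA
```

A check this small is no substitute for a proper tool, but it shows why contrast failures have no excuse: they are trivially detectable before a design ever ships.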

Overall, the report serves as a useful tool for tracking progress and identifying areas where more attention and resources are needed.

What does this all mean? 

While the WebAIM Million report has shown some improvements in web accessibility over the years, there is still a long way to go to ensure that the web is fully accessible to people with disabilities. The report highlights the urgent need for more attention and resources to be devoted to digital accessibility. We should all be embarrassed by the slow progress towards making the web more accessible and take action to ensure that everyone, regardless of ability, has equal access to online information and services.

Digital accessibility is a human right

Being an Accessibility Advocate - and why you should be too

Following a post on LinkedIn about side hustles, where I mentioned in the comments that I was working on an accessibility workshop, I was asked ‘What’s an accessibility advocate?’ I say it often and had an internal understanding, but had never properly thought about defining it for myself or anyone else.

This was the beginning of my reply.

My definition is someone who promotes, talks about and actively brings accessibility into conversations.

That felt about right, but I saw enough similarities with other disciplines and their advocates that I wanted to provide some context. A distraction from preparing to deliver my very first accessibility workshop at Sky in Leeds, but a worthwhile one I think.

So I continued:

Just as security is seen by some as a scary, pure specialism, many see accessibility the same way. For both, there is so much we can do:

• Ask if they have been considered early on in the process.

• What strategies are we using?

• Include basic tests in our exploration

Considered at the design stage, rather than as hard-to-add extras, they become design choices and are therefore much cheaper. Considered after something is built, they are much more costly!

I was fairly comfortable with my answer now, but I couldn’t quite drop the thought. I kept thinking about my immediate reply and wondering whether it was really deep enough. Did I understand what I wanted to achieve by doing this as well as I assumed I did? Should I, as any good tester would, question my assumptions for a clearer understanding? After some thought, I came to the conclusion that yes, yes I should question myself. It didn’t take long to identify that it is those things, but there’s more to it.

I make it very clear at the beginning of conversations, talks and even, recently, in a workshop given at Sky in Leeds and a meetup in Nottingham, that I am in no way an accessibility expert. But that doesn’t mean I can’t teach others what I know, spread the message and amplify others’ voices in the space. It never ceases to amaze me how much knowledge we take for granted as commonplace that isn’t widely known. Simple things like adding alternative text (alt text) to images seem an obvious way to allow people who need to use screen readers to navigate. But they only become obvious once you consider those users.
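As a small illustration of how mechanical some of this checking is, here is a toy sketch in standard-library Python that flags images with no alt attribute and links with ambiguous text. It is nowhere near a real audit tool like WAVE, and the ambiguous-text list is just my own examples:

```python
from html.parser import HTMLParser

AMBIGUOUS = {"click here", "more", "continue", "read more"}

class AccessibilitySniffer(HTMLParser):
    """Toy audit: flags <img> tags missing alt, and links with vague text."""

    def __init__(self):
        super().__init__()
        self.images_missing_alt = []
        self.ambiguous_links = []
        self._in_link = False
        self._link_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Note: alt="" still counts as present - that is how decorative images are marked.
        if tag == "img" and "alt" not in attrs:
            self.images_missing_alt.append(attrs.get("src", "?"))
        elif tag == "a":
            self._in_link, self._link_text = True, ""

    def handle_data(self, data):
        if self._in_link:
            self._link_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False
            if self._link_text.strip().lower() in AMBIGUOUS:
                self.ambiguous_links.append(self._link_text.strip())

sniffer = AccessibilitySniffer()
sniffer.feed('<img src="hero.png"><img src="logo.png" alt="Acme logo">'
             '<a href="/terms">Click here</a><a href="/pricing">See pricing</a>')
print(sniffer.images_missing_alt, sniffer.ambiguous_links)  # → ['hero.png'] ['Click here']
```

Real checkers handle far more than this (ARIA attributes, labels, contrast, document structure), which is exactly why tools like WAVE exist.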

Over the last few years I’ve looked at different ways to spread the message.

• Invented a visual heuristic (https://www.thebigtesttheory.com/blog/2019/5/13/my-first-experiences-with-accessibility-testing).

• Shared information.

• Been a co-host of an accessibility power hour (https://club.ministryoftesting.com/t/power-hour-accessibility-testing/26064) on the Ministry of Testing Club.

• Created a quiz I believe is unique (and so far very well received) that I’m still improving. It is deliberately very visual, as that suits the target audience, but I want to make sure blind and low-vision people can also take part by providing a fully accessible version online. Although I suspect they know most, if not all, of the answers already!

The next stage is developing a longer workshop to help people conduct a basic accessibility audit of their own sites and apps. Learning this will allow attendees to have those conversations up front and (hopefully) influence the design. Essentially, I’m hoping to inspire others to become accessibility advocates themselves. I already have an hour-and-a-half workshop and am close to submitting a half-day workshop to conferences, so watch out for it coming to a conference near you soon. I’m also considering offering this to companies in-house, just covering my time and expenses, i.e. less about making a profit and more about spreading the message.

The next logical question is why? What makes someone want to be an accessibility advocate? Personal experience? Or just that it’s the right thing to do? While it is the right thing to do, the best explanation I can give comes from someone else.

Four-time U.S. Paralympic medallist Tucker Dupree used to do a lot of public speaking during his competitive swimming days. His talks would often challenge the audience to think differently about people with disabilities.

I’d always open my speech with, ‘As a person with a disability, I belong to one of the largest minorities in the world, and on top of that, it’s a minority that anyone in this room can become a part of at any point in their life. You can acquire a physical disability at any point in your life, and disability comes in every culture and in every colour…’

One thing I hold to be true in everything I’ve learned is that the general perception that Accessibility = Disability is not quite correct. In the majority of cases, things that affect people with disabilities can equally affect people without. While people with disabilities remain important, accessibility is more about inclusion, so it becomes wider than just conformance to the guidelines.

So overall I’m happy with my contribution, although I know it is only a drop in the ocean, and it will take more to convince those with the power that this is something we have to do. No easy task, but a worthwhile pursuit, I feel.

Near the end of writing this blog post, WebAIM published their re-analysis of the top one million web home pages, which can be found at https://webaim.org/projects/million/update

They categorise errors as:

Errors are accessibility issues that are automatically detectable via WAVE, have notable end user impact, and are likely WCAG 2 conformance failures.

The other reason everyone should be an accessibility advocate is what they found: a 98% failure rate. And that’s only based on automatically detectable errors! We can and should do better, but only if we look for and call out these issues.

My First Experiences with Accessibility Testing

This post has been written as part of the Ministry of Testing Bloggers Club Sprint 13 https://club.ministryoftesting.com/t/bloggers-club-sprint-13-new-timelines/24995

The brief was: ‘Your first experiences with accessibility testing. How you started, where your learning began and any assumptions you had to question, change or drop completely.’

Reflections from TestBash Brighton - Testing is?

For my talk at TestBash Brighton Essentials (see refs below for slides and more), one of the things I covered was what I believe testing is. Over the last six months or so I have been mentoring a new tester on their journey, and to start that journey I had to explain what testing is.

There are many, many attempts to explain testing, and I even referenced one of my favourite descriptions in my presentation.

Testing is the infinite process of comparing the invisible to the ambiguous so as to avoid the unthinkable happening to the anonymous.

James Bach

Over time, the more I looked at how different people went to great lengths to explain the craft, the techniques and the mindset from different perspectives, the more I thought there must be a simpler way of beginning the conversation. It is my opinion that when trying to explain what ‘testing is…’ we do ourselves a disservice and answer instead ‘what testers do…’, or a close variation.

My belief is that when we do this, we confuse ourselves and others, and we would really help our craft and each other if we simply said:

Testing is part of risk mitigation for the product or system.

Now I know that might sound overly simplistic when we are sometimes put in the position of defending our profession and craft. But we should follow up with:

And we do that by…

This way, when we talk about critical thinking, observation, problem identification and solving; explain why automation helps but isn’t a solution in and of itself; offer advice about bias, empathy and inclusion; offer opinions on observability and testability; and ask unexpected questions from the ‘but that would never happen’ category, there is a clear concept that all these things help us identify and reduce risk for the product or system we are helping develop.

I would be really happy to hear others’ opinions on this way of thinking. Please let me know in the comments or follow up on Twitter @CricketRulz

Refs:

https://www.ministryoftesting.com/events/testbash-essentials-brighton-2019

https://www.slideshare.net/adystokes/test-all-the-things-with-the-periodic-table-140052297

Talk link to be added later, may be Pro or attendees only, to be confirmed

If – Ady Stokes 2017

‘If’ by Rudyard Kipling was written in 1895.  Here’s my version, if Rudyard were a tester. Please excuse the artistic licence! 

If – Ady Stokes 2017

If you can keep your head and explain the quality of the product that’s due, 
When all about you are losing theirs, and blaming it on you. 
If you can trust yourself when others are doubting you, 
But make allowance for their doubting too. 
If you are treated worse than you really should, 
Or there’s misunderstanding of what you said, 
And that the future doesn’t look too good, 
Or being told that your art is dead. 

If you can advocate your worth and believe in testing’s value, 
And think of every possible practical scenario that could transpire. 
If you can dream of all the things the customer might do, 
Then analyse the results to take your testing higher.   
If you can meet with Triumph and Disaster,
And treat those two impostors just the same.
If you can script, but not make scripts your master, 
And make discovery of truth your aim. 

If you can force your heart and nerve and sinew,
To serve your turn long after they are gone.
And so hold on when there is nothing in you, 
Except the Will which says to the team: 'Hold on!'  
And explain simply your deductive and inductive reasoning, 
That was led by curiosity and guided by your intuition.  
If you can hear bias arguments but smile and keep on listening, 
Then use soft skills and diplomacy to keep everyone on mission. 

If you can talk with crowds and keep your virtue,
Or walk with Management - not losing the common touch.
If neither foes nor loving friends can hurt you,
If all stakeholders count with you, but none too much.
If you can fill the unforgiving minute, (or 99 seconds)
With sixty seconds' worth of testing value. 
And find a great community who’s in it, 
To share their knowledge and is truly diverse too. 

If you can treat everyone equally,
No matter their difference to you. 
If you can treat their gender, origins and history, 
As valuable to offer a different view. 
If you can be an ally to all those who need, 
And defend their right to be just who they are. 
Support everybody’s rights to succeed, 
And mentor their journey so they may go far. 

If you can see scenarios that will make the testing great,
As well as the risks that others do not spot.
If you can see the value in which tests to automate,
And more, the value in which to not. 
If you can use tools to add value to your quest, 
But understand their value to aid and not replace. 
If you’re not afraid of change and always do your best, 
And can interrogate a database. 

If you can be the personas and advocate for all, 
And the voice of the customer in use and value too. 
And advocate and test for your product to be accessible to all. 
Explore to make discoveries because that’s what you do. 
If you accept that those discoveries, 
Will change what you thought you knew. 
Look for threats and system recoveries,
And help defend against those too. 

And then know all these things here, 
Are just a part of what you do. 
And be brave and show no fear, 
Accepting our learning will never be through. 
If your passion for testing means that you will never quit,
You will be here to the end. 
Yours is the Earth and everything that's in it,
And - which is more - you'll be a tester, my friend!

Introducing a 7th Thinking Hat

While I make absolutely no claims to be anywhere near the level of a genius like Edward De Bono, I’ve found adding my own ‘hat’ to the Six Thinking Hats (http://www.debonogroup.com/six_thinking_hats.php) to be useful in making the technique more relatable to modern applications. 

Introducing, the 'Hard Hat' 

De Bono’s 6 hats is something I return to regularly. Some time ago I added a 7th ‘hat’ that I feel brings an element of the modern digital/mobile world to it.

A purple hard hat to represent where the work/workloads are. For example: how is memory and CPU usage affected? What puts the most work on the system and needs to be monitored? I believe this helps me think about things in a more digital/SCAMI way when using this technique.
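To put numbers against the purple hat’s questions, even a few lines of standard-library Python can show where the work is for a piece of code under test. The build_report function here is just an illustrative stand-in for whatever you are examining:

```python
import time
import tracemalloc

def build_report(rows):
    """Stand-in for some feature under test."""
    return [{"id": i, "value": i * i} for i in range(rows)]

def measure(func, *args):
    """Return (result, seconds elapsed, peak bytes allocated) for one call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) since start()
    tracemalloc.stop()
    return result, elapsed, peak

result, elapsed, peak = measure(build_report, 10_000)
print(f"{len(result)} rows in {elapsed:.3f}s, peak memory {peak / 1024:.0f} KiB")
```

Running checks like this at different input sizes is one concrete way to answer ‘what puts the most work on the system?’ while wearing the hard hat.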

I did try researching whether anyone else had added their own hats. There are some variations, with a gold hat for customers and a grey hat for consequences/cycles, but I’ve found nothing like my purple one that pulls the digital/mobile world into the technique.

I've always found the mind map below useful when applying six hats and have added the purple hat. I hope Paul doesn't mind! I also hope someone else might find this useful. If you do please let me know. Thanks.

Session Based Testing, Exploratory Testing and my Questions technique

SB – Session Based Testing – Technique Element

Sub section – Approaches

Since Jonathan Bach and James Bach (satisfice.com/sbtm) documented their Session Based Test Management approach, combining exploratory testing, risk management and ‘management’ oversight, there has been a lot written about it.  Hopefully by now most people know the benefits of exploratory testing and some of the various methods of recording that activity.  In this article I hope to share a brief overview so we are on the same page, a list of the main benefits and some minor drawbacks, and finally the question technique I apply when using this in my day-to-day work.

Overview:

The Session Based Testing approach was developed for a project to allow their test team to ‘organise their work without obstructing the flexibility and serendipity that makes exploratory testing useful’.  They needed a way to keep track of what was happening, so they could in turn report back to a ‘demanding client’, while ensuring the time spent created the biggest return on investment. 

Essentially, this is structured exploratory testing to help organise thoughts, capture questions and insights, and allow rapid feedback.  Key elements of this approach include:

  • Each session is chartered (associated with a specific mission)
  • Uninterrupted (as much as is possible)
  • A template is used to record the details of the mission and findings
  • Reviewable (a ‘report’ is produced to document findings and questions, and the tester is ‘debriefed’)
  • Time-boxed (with flexibility, but generally no more than 2 hours)

In my opinion, there are a number of flexible points in the approach and tips worth being aware of, especially if you’re doing this for the first time:

  • I don’t think it matters whether you call it a charter, mission or focus, as long as you generally stick to your subject, although picking one term might help with consistency when sharing.
  • Interruptions should be avoided if possible.  On occasion I’ve shut down Outlook and put my headphones on for these types of sessions.  At one time I even had a red flag on my desk which indicated do not disturb unless it was urgent.
  • There are templates available, or you can create your own like I did.  Again, it’s useful for consistency to stick with one you’re happy with.
  • Reviewable.  A lot of focus is on ‘management’ reviews, but team, peer or even self-review is fine, as long as what you find generates actionable insights rather than getting filed away never to add value.
  • Time-boxed.  Starting small with something very specific is a good way to get a feel for this technique and learn to focus.  I can sometimes be like the dogs in ‘Up’ and be distracted by squirrels!  Learn to note where the squirrels are and why you need to look at them later.

Question technique and template:

I admit that I often use this as a mental reminder, rather than something to populate, as my preference is to speak to a developer on my team immediately after a session to investigate or question.  (I don’t raise bugs, I describe behaviour, and in writing this I realise that’s probably what my next post will be on.  I’ll add a link to it here when it’s done.)  Only if this isn’t possible due to availability will I actually fill things in from the notes I have taken during sessions.  For me, this is a disposable document with a short shelf life used to capture, discuss, resolve (or not), and most importantly discard.

I’ve reproduced the template in bullet form rather than embedding a PDF or Word document; that way I hope it will be easier for you guys to take away and make your own.  When you get to the questions you might find, as I quite often do, that you remove some before you start as not applicable, or that you haven’t filled some in by the time you’ve finished.  It’s supposed to be flexible like that, but you should take a moment to understand why they are not populated or applicable to the session, as that may prompt some other thoughts.

The template:

  • The Basics: date; app/function under test (brief description); any other useful information depending on your context
  • Any dependencies vital to the testing (connections, files, data, hardware etc.; this helps make sure you have them before you start)
  • Any information that is useful, such as materials/learnings from previous sessions, personas to use, environments, tools etc.
  • Test strategy (a consideration of techniques you might use; a flexible plan is often more useful than no plan, but don’t be afraid to improvise, as that’s half the fun and discovery may make your plan obsolete quite quickly)
  • *Metrics (see rant at the end of this post)
  • The questions: (with a brief reasoning for each)

    o What do I expect? (even if it is something brand new I always have some expectations)
    o What do I assume? (sets a context that I can query as I go)
    o Are there any risks I should be aware of? (to execution or the system; helps anyone else reading have context)
    o What do I notice? (behaviour; usability)
    o What do I suspect? (things that I feel, not always based on facts, but that I don’t want to lose)
    o What am I puzzled by? (behaviour that doesn’t feel right)
    o What am I afraid of? (high priority concerns about the item under test)
    o What do I appreciate/like? (always good to have some positive feedback)
  • Debrief (originally between the tester and a manager; there’s a checklist of questions on satisfice.com/sbtm).  My version is more often a conversation with the developer about questions or queries, but it can also be with the product owner or a stakeholder depending on what I find and the context.  I’m not saying don’t do this, rather do it only where it’s going to add value.
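For anyone who prefers a structured artefact over paper notes, the template above maps naturally onto a small data structure. The sketch below is just one possible shape of my template in Python, not part of the original SBTM materials:

```python
from dataclasses import dataclass, field

@dataclass
class SessionSheet:
    """A lightweight, disposable session record based on the template above."""
    date: str
    charter: str                      # charter / mission / focus
    dependencies: list = field(default_factory=list)
    useful_info: list = field(default_factory=list)
    strategy: str = ""
    # The question prompts, filled in (or deliberately left blank) per session.
    questions: dict = field(default_factory=lambda: {
        "What do I expect?": "",
        "What do I assume?": "",
        "Are there any risks I should be aware of?": "",
        "What do I notice?": "",
        "What do I suspect?": "",
        "What am I puzzled by?": "",
        "What am I afraid of?": "",
        "What do I appreciate/like?": "",
    })

    def open_questions(self):
        """Prompts still unanswered at session end - worth a moment's thought."""
        return [q for q, a in self.questions.items() if not a.strip()]

session = SessionSheet(date="2017-05-09", charter="Explore login error handling")
session.questions["What do I expect?"] = "Clear messages for bad credentials"
print(len(session.open_questions()))  # 7 prompts still unanswered
```

The open_questions helper mirrors the advice above: blanks are fine, but take a moment to understand why each one stayed blank.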

This post is getting a bit longer than I’d hoped but I feel it’s important to summarise the benefits and possible drawbacks of using this method so there’s a balanced view.

Pros:

  • Allows control to be added to an ‘uncontrolled process’
  • Makes exploratory testing auditable
  • Testing can start almost immediately
  • Makes exploratory testing measurable through the metrics gathered
  • Flexible process that can be tailored to multiple situations
  • Biggest and most important issues often found first
  • Can help explain what testers do to clients, stakeholders and the uninformed

Cons:

  • Can be harder to replicate findings as full details are not captured
  • As a ‘new’ technique it has to be learnt
  • Recording exploratory testing (rather than brief notes) can break focus/concentration if you’re more worried about the recording than the testing
  • Time is required to analyse and report on metrics
  • Time is required to discuss/give feedback, potentially with the ‘wrong’ person

Given all the above, if you have to justify exploratory testing (notwithstanding that you should be looking for a new job!), then using session-based test management, either in its original form or some hybrid version, could be the convincer you’re looking for.  ‘Management’ will generally only see the Pros, which cover a lot of the things ‘they’ will worry about.  But seriously, look for a new job!

*Metrics: I personally don’t think these are useful for virtually anything (oh, more controversy!), but if you absolutely have to report back to someone, for example a manager who knows little or nothing about what testing really is, here are some examples: time spent (start/end times) actually executing testing; time blocked; time spent recording findings; actionable insights; questions/queries; potential problems; bugs; obstacles; and screenshots or other recordings or documents that show issues and help replicate them.  

Using Personas and the Relationships Between PToT Elements


PS Personas – Technique Element  
Sub section – Approaches

There are lots of relationships we have to consider in testing.  In this post, I’ll briefly discuss those relationships and how the Periodic Table of Testing can be used to map them, then share a real-life example of how using the personas ‘thought technique’ led to using other elements on the table.

Any idea, technique or approach can only take you so far without some view of those things surrounding it.  Even a Hermit (a person wanting no contact with others), no matter how isolated, has relationships that need to be considered such as their surrounding environment. 

Understanding relationships can often be instrumental in identifying appropriate scope that helps ensure we deliver quality in our products.  The Periodic Table of Testing is exactly the same.  A Technique Element can have a relationship with a Testing Element, and in turn a Testing Element can lead to (have a relationship with) a Technical (or any other) Element.

Real example:
Below is an example of how a Technique Element led our team to a Testing Element that helped describe our relationship with our Customers.  By creating Customer Tours or Journeys we could then mimic the Customer’s behaviour, particularly when using Personas to navigate the system in a particular way.  Those Tours and Journeys then lent themselves to a Technical implementation through automated tests and the creation of Living Documentation.


Working on a project to create a customer portal for accessing mortgage account information was a great opportunity to introduce personas.  I’d read a lot about personas, but the main takeaway for me was how they could be used to highlight key differentiators.  For our project, the key differentiator was the account’s status at the point of use.  Some of what I’ve read and seen on personas recommends highly detailed and complicated outlines.  For me, a lot of the detail in those was superfluous and didn’t add any real value.  For projects lasting years they could hold some worth, but for me they were distractions from the main point.

Back to the project and our main differentiator.  Mortgage accounts have several status variations, including the account being up to date, in arrears, with an arrangement, in litigation, in possession and so on.  We used different personas to represent those different states. 

With input from the team we even used the names of the personas we created to represent variations in surnames, to see how they would be displayed in the UI.  And so Sally Steadman, Adam Thompson-Pritchard and Olivier O’Connell, amongst others, were ‘born’.  While the personas had genders, ages and key personality traits, their development didn’t go much further, as the status of their accounts was the key differentiator required.  Once we had the personas and a shared understanding of what each one meant, we expanded the idea to other elements.  As well as creating customer journeys for them and noting the different information and help items they would see, we wrote feature files for them that became both our automated tests and, in turn, our living documentation. 

Thanks to our shared understanding we were able to create a ‘Preview’ version of the site with fake services.  This meant you could register and sign in as one of the personas and explore or complete user journeys just as the Customer would.  We used these to execute our automated UI tests, giving us stable responses.  Cool stuff, I thought! 
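As a rough sketch of how personas can flow into automation, the snippet below runs a journey check per persona, keyed on account status. The persona names come from the project described above, but the status-to-content mapping and function names are illustrative, not the real portal logic:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A minimal persona: the account status is the key differentiator."""
    name: str
    account_status: str

# Illustrative personas; the real set covered every status variation.
PERSONAS = [
    Persona("Sally Steadman", "up to date"),
    Persona("Adam Thompson-Pritchard", "in arrears"),
    Persona("Olivier O'Connell", "with an arrangement"),
]

def help_items_for(status):
    """Stand-in for the portal logic that varies content by account status."""
    extra = {
        "in arrears": ["Talk to us about repayments"],
        "with an arrangement": ["View your arrangement details"],
    }
    return ["View statements"] + extra.get(status, [])

# One journey check per persona, mimicking what the feature files automated.
for persona in PERSONAS:
    items = help_items_for(persona.account_status)
    assert "View statements" in items, persona.name
print(help_items_for("in arrears"))  # → ['View statements', 'Talk to us about repayments']
```

Parametrising checks over personas like this keeps the shared understanding executable: add a persona and the journey checks grow with it.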

However we conduct testing, and wherever our starting point, the relationships of different techniques and methods must be considered in our quest to investigate and add value to the best of our abilities. 

References:
Generic Testing Personas: http://katrinatester.blogspot.co.uk/2015/01/generic-testing-personas.html (great example of minimal personas)

Leeds Testing Atelier May 2017

The 4th edition of the free test conference took place at Wharf Chambers in Leeds on the 9th of May.  The website is here.  The two-track conference had speakers, workshops and panels and was well attended by both developers and testers.  Some of the conference was filmed, so hopefully some of the sessions will be available at some point. 

Below is the schedule and notes on the sessions I attended and a little feedback gathered from discussions. 

Track 1 – Hipsters

09.30 – What not to do, a guided tour of unit testing – Colin Ameigh
Colin took us on a journey through unit tests, using PHP, as the base of the testing pyramid.  In his experience a lot of unit tests were rotten and not maintained, or not even unit tests at all.  He pointed to the absence of the single responsibility principle as the main culprit.  As a minimum, tests should run everywhere and also in isolation.  Another insight was that where TDD was used, the third step, ‘refactor’, was often missed, meaning that over time the tests themselves lost their value. 

10.00 – Testing the waters – Rosie Dent-Brown
While I didn’t attend, there was a ‘buzz’ around Rosie’s use of Agile at home and assigning ‘roles’ to those around her!  Sounds fascinating!

11.00 – Colleagues to Community – Ady Stokes
My talk on our journey building a test community included some of the test challenges we had done, sharing our experience to hopefully inspire others and encourage them to come and help us grow.  The title has a link to the deck on SlideShare and here's the link to the talk on YouTube

13.00 – TDD using Excel – Dave Turner

14.30 – Panel – Generalising Specialists
This was an interesting discussion on both T-shaped and Pi ("π")-shaped testers.  While the discussion covered many things, below I’ve tried to summarise the key points made.
  • Generalising skills can make you more valuable to a company
  • Generalisation shouldn’t be at the expense of your ‘deep’ skill
  • The role of ‘pure’ specialist is still valuable (E.g. Penetration; Performance; UX/UI; Accessibility)
  • ‘Pure’ specialist as a consultant or service was expressed as the most powerful use



Slightly away from the core topic, but still valuable to share I think, was a discussion of the value a tester adds to a team or project.  When testers are involved through the whole process, from ideas to delivery, ‘testers are the glue that binds the stages together’ was one statement expressed.  I thought this was an interesting metaphor, along with ‘testers can also be the conscience of the team’, making me think about my role in a different way. 

15.30 – Defend the Indefensible & PowerPoint Karaoke
This was an interesting idea to say the least.  An ‘indefensible’ statement was put up on the screen and the victim, sorry, volunteer, had 30 seconds to defend it.  As an example, one of the statements was: ‘Testing is dead and pointless as everything valuable can be checked by automation and users!’ 

Track 2 – Nerds
09.30 – Docker as a tool for testers – Serena Wadsworth

10.00 – Power of pairing – Lee Grubb
Lee offered us his experience of the power of pairing.  He went through the traditional techniques of set-up and Driver/Navigator roles.  After explaining the benefits through some of his experiences, he covered some of the other types of pairing including Strong Pairing, where the navigator explains their ideas and the driver interprets.  Understanding your own thinking so well that you can bring it to life through someone else struck me as a powerful tool.  Although there are lots of styles and combinations, he mentioned dev/dev, dev/tester and dev/DBA amongst others, his final piece of advice was not to limit yourself and to experiment with what works for you.

11.00 – Testing is DevOps – Alex

13.00 – Testing without Testing – Algirdas Rabikauskas, Kristina Valiune and Peter Ferguson
This workshop sought to show us some of the exercises they had done in their peer community.  The time was split into two challenges, the first being to identify an object through clues, or to ‘understand requirements’.  This took the form of being given a single word, with the premise that a ‘rock star’ had a requirement / rider for ‘something’.  You had three minutes to come up with questions and one minute with their agent, who could only answer yes or no.  Our word was ‘stick’ and after a couple of rounds, where we established it was made of wood, 30 cm long, narrow and cylindrical, we finally reached drumstick.
The second challenge was spot the difference with a twist.  There were three duplicated images: a city bridge, a paragraph and a dice grid.  For two of those, the second image had been inverted or turned upside down.  We chose a method of peer review (code review), assessing an image individually, then passing it on.  After the disappointing news that we had missed one difference, we mobbed the remaining image until we found the final item. 
In summary, Algirdas said that these games were helpful in reducing assumptions, thinking critically about problems and developing team techniques and bonding.  Having found the activities both challenging and fun, and enjoyed the interactions with my newfound team, I’d have to agree with those comments.

14.30 – Panel – Continuous Delivery

15.30 – Games including Dysfunctional Scrum, DevOps ball game, TestSphere 

The Periodic Table of Testing, an introduction and history

Firstly, thank you for taking the time to read my blog.  If you have any comments, feedback or questions I'm eager to hear them, so please get in touch. 

I'll be using the blog to share my thoughts on testing, feedback on events I attend and to share the things I find most useful. 

But the primary reason for the blog is to document my investigations and journeys through the world of testing using my Periodic Table of Testing. 

Periodic Table of Testing, a representation of the elements of testing in the style of the periodic table

Over time I hope to navigate through the table as I ask myself: do I understand what this is, how it works, and how/when to implement it in the projects I work on?  After all, theory and ideas are all well and good, but if you can't then apply them, what good are they? 

The table is an ongoing work in progress and I expect it to change over time.  It could grow, have elements removed or even have new sections added.  For example, I'm not sure if interpersonal skills should have their own column or section; there are elements in the table already, but they are so important that perhaps I should highlight them? 

The table takes its inspiration from many sources.  I originally created the Periodic Table of Data back in 2011 while working in a Business Intelligence role as a way to see how new data could fit into our existing framework.  It was also a way to understand what we already had. 

The idea was published in the Testing Planet in March 2012 and the article is available on the Ministry of Testing website. 
Periodic Table of Data, a representation of the elements of data in the style of the periodic table

While I've been playing around with this idea for some years, there have been a number of recent influences I'd like to highlight in spurring me on to finally publish.  I attended NWEWT, the North West Exploratory Workshop on Testing, in March 2017.  The workshop was on growing testers, and I thought the table could in some small way help testers navigate the world of testing, or even be used as a visual heuristic of considerations for projects.  Another contributor was Ash Winter's Wheel of Testing and how he used it as a tool for the testers he managed.  

There are so many more influences I'd be here all day, but a quick mention to Chris Pearson and Andy Lawrence at Computershare for supporting my crazy ideas to do stuff and beginning my education on all things Agile, respectively.  

So that feels long enough for an introduction, please leave any comments or thoughts below.  Thank you.