My neighbour Margit is a teacher in a local school. A couple of days ago, we met in the front yard and chatted about getting started after the summer break, and she told me that she was getting introduced to the concept of challenge based learning. Curious as I am, I had to search for that on the web afterwards to find out more.
As described on www.digitalpromise.org, Challenge Based Learning (CBL) provides an efficient and effective framework for learning while solving real-world challenges. The framework is collaborative and hands-on, asking all participants (students, teachers, families, and community members) to identify big ideas, ask good questions, identify and solve challenges, gain deep subject area knowledge, develop 21st century skills, and share their experience with the world.
The framework is divided into three stages:
- Engage: Identify the big idea and start generating a large variety of essential questions that reflect the interests and needs of the community, eventually arriving at the definition of a challenge based on one of the essential questions.
- Investigate: Based on the challenge, the learner conducts content- and context-based research to create a foundation for a solution. This begins with identifying a number of guiding questions related to the challenge – things we need to learn in order to find the right solution. We then use the questions to identify the activities and resources necessary to answer them; these can include research, simulations, games, questionnaires, etc. As part of the investigation, we also align questions and activities to any given standards – in school terms, this could be connected to subject areas like science, math, arts, etc. Eventually, when all questions have been answered and all activities have been conducted, we analyze the data, identify themes, and develop a clear conclusion as a foundation for the solution.
- Act: Here the evidence-based solution is developed and implemented, and the result of the implementation is evaluated. “Act” is split into two parts: solution development, where the foundation from the investigation phase is used to develop a solution in an iterative cycle using prototypes, experiments and tests; and implementation, where we put the solution into practice and evaluate the outcome – reflecting on what worked and what didn’t. When the implementation is complete, we can either iterate on the solution again or create a completion report sharing the work with the rest of the world.
Reading about the model, I had two thoughts: “This sounds like a kind of iterative development model lifted out of an IT context”, and “Could we maybe get inspiration from this to plan and execute testing in an agile environment?”
If we dive into the latter thought: maybe we could imagine a situation where the team uses the concept of challenge based learning to identify and plan testing, execute the exploratory charters, and collect the knowledge gained in order to communicate the quality of the system under test.
Let’s take a look at the steps described by the framework and try to describe what each would mean in a testing context.
Big ideas: When you start a project, you need to begin by defining the purpose of the system the team is going to build. What value should it give to the customer? I think this is crucial knowledge whether you are a tester or a test manager: you need to understand why we are building this system.
Essential questioning. For me, this could be two different activities on two different “levels”. It could be equivalent to a product risk analysis – in whatever shape or form you choose to do it – brainstorming in the team to get a common understanding. But it could also be used at epic or story level: brainstorm questions for the epic to gain a better understanding and clear acceptance criteria.
Challenges. This is where you define your “test project” – collecting the information from the product risk analysis, the questions, etc. and defining the scope of the test: the test assignment. How to document it is up to your context, whether it is visualized in a drawing, written in a wiki, or captured in an IEEE 829-formatted document.
When it comes to the investigate part of the model, I see it on two different levels – planning the testing and preparing an exploratory test session. I will try to describe both in the following.
Guiding questions. From a planning perspective, the guiding questions could be a tool for the test manager: asking a number of questions to clarify scope, risk and test approach for the project – all the questions needed to learn about the project and to test it as efficiently as possible. From a tester’s perspective, on the other hand, I would use the guiding questions to identify everything I need to know and understand in order to test a feature in the best possible way.
Guiding activities and resources. Seen again from two perspectives, this activity would be used by the test manager and team to investigate options for the test strategy: automation frameworks, virtualization of test environments, etc. I don’t necessarily mean documented in accordance with IEEE 829 or TMap, but a visualization of what the strategy and plan for testing this system is – creating a common understanding of the testing challenge at hand. From the tester’s perspective, this is where I would start identifying potential themes or areas of interest: where do we need to focus, and how should we address the testing for this feature?
Curriculum and standard alignment. Do we need to comply with any standards – FDA regulations, military standards, etc.? If so, we need to ensure that we check for compliance with those requirements. This is checking rather than testing as such, but it is a necessary activity all the same.
Analysis. Based on the activities and questions, we can now specify the test strategy – and as mentioned before, this could be in whatever form fits your context: a drawing, a wiki, a document, etc. The tester is now ready to identify the charters and sessions needed to fulfil the testing challenge – the strategy for the project. Based on the collected knowledge and challenges, the challenge can be broken down into separate sessions. For large systems, there might be a need to do this together as a team, facilitated by the test manager.
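To make that breakdown concrete, here is a minimal sketch in Python of how a testing challenge could be split into session charters, one per theme or area of interest. All names and fields are my own invention for illustration, not part of any standard or tool:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """One exploratory test charter derived from the testing challenge."""
    mission: str                                  # what this session should explore
    guiding_questions: list = field(default_factory=list)
    time_box_minutes: int = 90                    # a typical session length

def break_down(challenge: str, themes: dict) -> list:
    """Split a testing challenge into one charter per theme of interest."""
    return [
        Charter(mission=f"{challenge}: explore {theme}",
                guiding_questions=questions)
        for theme, questions in themes.items()
    ]

# Hypothetical example challenge and themes:
charters = break_down(
    "Verify the payment flow",
    {"input validation": ["What happens with a negative amount?"],
     "timeouts": ["How does the UI behave if the gateway is slow?"]},
)
print(len(charters))  # 2
```

The point is only that each theme identified during investigation becomes a separate, time-boxed session – the structure itself can live in a wiki or on a whiteboard just as well.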
When it comes to the acting part, I see a good mapping to session-based exploratory testing.
Solution development is when the testers conduct the sessions; this is where prototypes are created, experiments are conducted, and an iterative design cycle is executed. That cycle would then be the cycle of simultaneous learning, test design and test execution as defined by James Bach. This is where the knowledge we gained by questioning and challenging pays back – in the sense of finding bugs, learning about the system and questioning the solution.
From a test management perspective, this is where we follow up, mitigate risks, support the team. There might be activities to improve the environment, introduce new tools based on experimentation etc.
Implementation and evaluation are the de-briefing of the sessions. This is where we reflect on the session just completed, evaluate the outcome (using the PROOF mnemonic – Past, Results, Obstacles, Outlook, Feelings; see satisfice.com for details) and evaluate any need to supplement with new charters, etc. This is also where the test manager prepares any reporting needed for the project as a whole – so again, two different perspectives. When it comes to the reflective part, the test manager might revisit both sets of questions used earlier in the project, during both engage and investigate, to see whether the answers were found and the test assignment was met.
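Such a de-brief could even be captured as a small structured record. The sketch below is purely illustrative; the field names simply mirror the de-brief mnemonic (Past, Results, Obstacles, Outlook, Feelings) and are otherwise my own:

```python
from dataclasses import dataclass

@dataclass
class Debrief:
    """De-brief record for one exploratory session."""
    past: str       # what happened during the session
    results: str    # what was achieved (bugs, coverage, learning)
    obstacles: str  # what got in the way
    outlook: str    # what still needs to be done
    feelings: str   # how the tester feels about the area under test

    def needs_follow_up(self) -> bool:
        """New charters are suggested if anything remains in the outlook."""
        return bool(self.outlook.strip())

# Hypothetical de-brief after a session:
d = Debrief(past="Explored login edge cases",
            results="Two bugs around empty passwords",
            obstacles="Test environment reset twice",
            outlook="Re-test after fix; try locked accounts",
            feelings="Uneasy about session handling")
print(d.needs_follow_up())  # True
```

A record like this gives the test manager raw material for both the follow-up charters and the project-level reporting mentioned above.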
What did I learn?
What do I like about the above? Most of all the questions: using questioning deliberately and in a structured manner to learn and gain information as a foundation for test planning and for test specification and execution. I regularly see test managers sitting in an ivory tower when specifying their test strategy, involving others only on a very limited scale. And often we have to base our test strategy on a limited amount of documentation, but lack a good approach to collecting the information we need – maybe this could be one? With more and more projects being agile, maybe this challenge based learning concept could be used as a collaborative way to create a team-owned test strategy?
The challenge based learning method actually operates with a three-level question approach; could this add value for the test manager in planning and controlling the testing in the project? Maybe by using the levels like this:
- First level of questions to understand the assignment at hand (Engage – essential questioning)
- Second level of questions to understand what the approach needs to be in order to support the assignment (Investigate – guiding questions)
- Third level of questions to evaluate whether we indeed fulfilled the assignment (Act – evaluation)
Could there maybe even be groups of questions that we could reuse as a foundation for inspiration – at least for the first two levels of questions? One could see them as heuristics for test planning, or maybe even as question patterns – q-patterns as originally described by Vipul Kocher (e.g. at EUROSTAR 2008: http://www.slideshare.net/EuroSTARConference/vipul-kocher-software-testing-a-framework-based-approach):
A set of inter-related QUESTIONS grouped together to:
• Relate to some aspect of user or software requirements
• Provide various alternatives to arrive at a solution
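As a thought experiment, such reusable question groups could be kept as simple named collections that a team pulls from when starting a project. Everything below is illustrative – the group names and questions are my own, not an established catalogue:

```python
# Hypothetical reusable question groups ("q-patterns") for test planning.
Q_PATTERNS = {
    "engage/essential": [
        "What value should the system give the customer?",
        "Who are the users, and what do they fear going wrong?",
    ],
    "investigate/guiding": [
        "Which standards or regulations apply?",
        "What test environments and data do we need?",
    ],
}

def questions_for(stage: str) -> list:
    """Return the reusable questions for a given stage, or an empty list."""
    return Q_PATTERNS.get(stage, [])

print(questions_for("engage/essential")[0])
```

Context would of course dictate which questions apply and which new ones are needed – the collection is a starting point, not a checklist to be followed blindly.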
There are questions that we repeat every time we start a new project; why not gather these and get a head start when engaging in a new project and investigating the challenge at hand? Of course, the context dictates a number of things that will differ from time to time, but other questions are always relevant.
Well, this is all well and good, but what did it change that I tried to map challenge based learning onto agile testing? I didn’t revolutionize the world of testing, that is true… but I think it is a small evolution towards a structured yet flexible and context-driven approach to test planning and execution. And just the process of stopping, looking at other models, and trying to see how my own world fits in is of value – continuous learning and improving my own world of testing.