Showing posts with label TDR.

June 10, 2009

Dojo TDR @ XP Day France 2009

This is some late feedback on my session:

+ good attendance, room was full but not overcrowded, great :)

? this was my fifth experiment with this type of dojo; a colleague and I ran the first one last November, and I still had high expectations about the output

+ the subject (Blackjack) was an excellent base to work on; thanks to Gojko, from whom I stole the idea

+ thanks to the XP Day goodies, I got a much better timer for timekeeping than the one I used to have

- I proposed the FitNesse wiki to record the tests; it is not a very efficient tool for authoring, but that's all I had on my laptop

+ the audience was very dynamic in trying to help the keyboard holder

? I started the 5-minute turns by intentionally choosing a test that was not at the heart of the problem, just to check whether the participants would come back to the core topic; they did not, so I'm not sure that was a good idea

- the participants started identifying interesting tests quite late; it was actually halfway through the session when I asked them whether the tests roughly illustrated the Blackjack rules

+ towards the end, one participant got rid of FitNesse and grabbed a pencil to draw some tests on the flip chart, which helped a lot in sharing an understanding of the Blackjack rules

+ the conclusion I offered the audience was something like: "if you want to learn about the domain and the requirements, don't pay too much attention to the tools, they're distracting you from the objective"

+ finally, I received very good feedback about the session from the participants; if you attended the workshop and have more feedback, please let me know

August 9, 2008

The material of the TDR workshop at Agile 2008

I'm posting here the (electronic) material that I created for my workshop at Agile 2008, Test-Driven Requirements: Beyond Tools. Feel free to reuse it to play the game yourself; I just ask that you keep me informed of the outcome.

Game 1:
I created this game after an experiment I participated in at the University of Linköping (Sweden) back in 1999. I found the concept interesting enough to draw a parallel with software engineering. I named the two main players "Product Owner" and "Developer", and I added a couple of roles, the User and the Tester, to add a bit more spice to the game and make it closer to a software product development process.
To play the game, you need two identical sets of wooden blocks, a table, and a splitter.


Here are the instructions of the game:
- Audience instructions
- Players instructions

Game 2:
I made this one up completely (or so I think, until someone tells me it's an old game they played at Agile 1975...). In this game, groups of 3 to 9 people try to uncover the requirements of a vague specification that a user (one of the group's members) has read just beforehand. The objective is not so much to uncover real requirements or needs, but rather to use tests to explore the needs and formalise a specification. The users don't know anything concrete, so the specification should emerge from the group's collaboration.


The game starts with a short presentation reminding the audience of tools and techniques for using tests as specifications. Then one user in each group is given a Memo to read to get to know the context of the system the group will work on. I let the groups work for 20 minutes and then ask them to write the result of their collaboration on a sheet of white paper. I will soon post the output of the workshop (I have to upload a report to the Agile2008 submissions website).

May 13, 2008

Agile 2008

My session proposal for Agile2008, the world conference on agile software development, has been officially selected! This is great news and means I will be flying to Toronto at the beginning of August to attend this exciting event. I'm scheduled for Tuesday the 5th at 10:45 AM, so I will have the pleasure of kicking off the conference sessions just after the keynote (Monday is reserved for research topics and the icebreaker). I now have to prepare the details of my session and make sure I can get the material I need to run the workshop smoothly and make it a successful experience for the participants. I will post updates on my preparation as it progresses.

April 29, 2008

Test-Driven Requirements versus functional test automation

I am updating the case study of the TDR course that I recently developed for Valtech Training. During the first sessions of the course, it turned out that the trainees had a hard time understanding the differences between Test-Driven Requirements and GUI functional tests. So I decided to improve the case study with real examples of automated GUI functional tests.

The case study is a simple webapp that is supposed to be an online banking system (greatly simplified, of course). I am the product owner of this app, and a Valtech consultant developed it for me (because I can hardly code anything these days). I built the functional specifications with FitNesse, using test tables to interact with the developer and discuss the functional requirements. In turn, he used the fixtures in a TDD manner to drive his development. Below is an example of the functional requirement that describes an immediate money transfer:

As you can see, we mixed textual descriptions in a use-case style with an example to illustrate the description. The example is actually a test, specifying preconditions, actions, and verifications. We used a RowEntry fixture to specify the preconditions, i.e. to populate the database with test data; then we used an Action fixture to describe the actions, and a Row fixture to verify the results. The example is in French. If you cannot read French, you just need to know that we spent some time choosing appropriate fixture names, so that the test tables do not feel like automated tests or freaky technical stuff inserted into plain-text specifications. We really wanted the test tables to read like real examples, so we chose fixture names like "There exists the following accounts" or "Consult account details", which read more naturally than the freaky names we often see in FIT/FitNesse examples, such as "com.myapp.test.fixtures.ConsultAccount". We also put some effort into maturing a domain-specific testing language (DSTL), so that we could reuse the expressions for preconditions or verifications across different requirements. For example, we used the fixture "There exists the following accounts" a lot to set up initial test data.
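For readers who have never seen such a page, here is a rough sketch of what the mix of tables can look like, with the fixture names rendered in English and the account data invented (the actual course material differs):

```
'''Preconditions'''
|There exists the following accounts|
|account number|owner|balance|
|0001|Alice|1500.00|
|0002|Bob|250.00|

'''Actions'''
|Transfer money immediately|
|from account|to account|amount|
|0001|0002|500.00|

'''Verifications'''
|Consult account details|
|account number|balance()|
|0001|1000.00|
|0002|750.00|
```

The first table populates the database, the second describes what the user does, and the third checks the resulting balances; nothing in the page reads like test automation plumbing.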

Now let's move on to GUI testing. I made use of a Selenium-FitNesse bridge (similar to WebTest) to integrate my automated GUI tests with my specifications. The bridge provides a testing mini-language that allows non-technical people like me to write automated test cases without knowing Selenium scripting. I created a couple of test cases for the webapp; below is an example of an automated test case that traces to the specification given above:
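To give an idea of the shape of such a test case, here is a hypothetical sketch in that kind of mini-language (the keywords and locator names are invented; the actual bridge's vocabulary may differ):

```
|open             |/bank/transfer  |                  |
|type             |field_amount    |500.00            |
|type             |field_to_account|0002              |
|clickAndWait     |btn_submit      |                  |
|waitForPageToLoad|5000            |                  |
|verifyText       |msg_result      |Transfer completed|
```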



In this example, we see that the flow of actions and verifications is overwhelmed with interface details and technical instructions (wait for the page to reload, etc.). The testing mini-language gives better readability than the equivalent Selenium script would (some test specialists like to call this "keyword-driven testing"), but still, the test case is not good at serving the specification side. Both types of tests are complementary and should co-exist. The first, FIT-like type serves to drive and support the specification process; it also helps build a regression safety net. The second, Selenium-like type serves to test the application and make sure every piece of the software fits together properly. In the end, we should get something like this:

The screenshot above shows the organisation of my FitNesse wiki for the webapp. I created a topmost suite that contains two sub-suites: one is the specification artefact, containing the FIT-like tests; the other is the acceptance test artefact, containing the Selenium-like tests. I used links to trace tests from the second artefact to the first, so that I can get an idea of the impacted GUI tests when I change the specifications. I can run the specification tests and the acceptance tests separately, or both with a single click. It should be clearer now that TDR and GUI functional testing serve different purposes. I used FitNesse and Selenium because they are open source, but the same kind of approach can be followed using proprietary test management tools like QualityCenter in combination with test robots and requirements management tools. It won't be the same price, though.
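The keyword-driven style mentioned above boils down to a table interpreter: each row names an action, and a dispatcher maps it to a driver call. Here is a toy sketch of that mechanism (all names are invented; this is not the actual bridge, and the fake browser merely records what a real Selenium driver would do):

```python
# Toy sketch of "keyword-driven testing": test rows are data,
# and a dispatcher maps each keyword to a driver method.

class FakeBrowser:
    """Stands in for a real Selenium driver in this sketch."""
    def __init__(self):
        self.url = ""
        self.fields = {}
        self.clicks = []

    def open(self, url):
        self.url = url

    def type(self, locator, text):
        self.fields[locator] = text

    def click_and_wait(self, locator):
        # In a real GUI test, "wait for the page to reload" instructions
        # like this one clutter the flow of actions and verifications.
        self.clicks.append(locator)

def run_keyword_test(driver, rows):
    """Dispatch each (keyword, *arguments) row to the matching driver call."""
    keywords = {
        "open": driver.open,
        "type": driver.type,
        "clickAndWait": driver.click_and_wait,
    }
    for keyword, *args in rows:
        keywords[keyword](*args)

# A test case written purely in the keyword vocabulary.
transfer_test = [
    ("open", "/bank/transfer"),
    ("type", "field_amount", "500.00"),
    ("type", "field_to_account", "0002"),
    ("clickAndWait", "btn_submit"),
]

browser = FakeBrowser()
run_keyword_test(browser, transfer_test)
print(browser.fields)  # {'field_amount': '500.00', 'field_to_account': '0002'}
```

The point of the indirection is that test authors manipulate only the keyword vocabulary, while the Selenium-specific scripting stays hidden behind the driver.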