As it turns out, Python's generators are great for testing user input. This is just one approach of many. Previously I've used a scenario-based scheme: the idea is to set up a multi-line string that mimics the user input and the expected output. The test runner then asserts that against the actual implementation.
That can be seen as a form of acceptance testing. The solution I'm going to present here works at the unit level. I guess the two methods could complement each other in some situations.
Originally I wanted to test the code below:
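The original listing isn't shown here, but based on the description it was along these lines. This is a hypothetical reconstruction: the function names beyond `_choice` and `_boolean`, the prompts, and the return values are my guesses, and Python 3's `input` stands in for the `raw_input` the post uses.

```python
def main():
    def _choice(options):
        # Keep asking until the user types one of the allowed options.
        while True:
            answer = input('Choose one of %s: ' % ', '.join(options))
            if answer in options:
                return answer

    def _boolean(question):
        # Interpret y/yes (any case) as True, everything else as False.
        return input('%s [y/n]: ' % question).lower() in ('y', 'yes')

    action = _choice(['create', 'delete'])
    if _boolean('Are you sure?'):
        return action
    return None
```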
As you can see, it's not very testable. First of all, the functions I wanted to test are nested inside another function. Secondly, they have some nasty dependencies (mainly raw_input). To move forward, I separated "_choice" and "_boolean" into another module (questions.py) and dropped the _ prefix. This solved the first problem. Now what to do with raw_input? Let's extract it!
Here's the finished "questions" module:
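The module itself isn't reproduced here, but the shape it needs is clear from the description: each question function takes its input source as a parameter, defaulting to the real one. A sketch, with signatures and prompts that are my guesses (and Python 3's `input` in place of `raw_input`):

```python
# questions.py -- a sketch; the original's exact signatures may differ.

def choice(question, options, ask=input):
    # 'ask' is any callable that takes a prompt and returns a string.
    # It defaults to real user input; tests can pass something else.
    while True:
        answer = ask('%s (%s): ' % (question, '/'.join(options)))
        if answer in options:
            return answer

def boolean(question, ask=input):
    # Interpret y/yes (any case) as True, everything else as False.
    return ask('%s [y/n]: ' % question).lower() in ('y', 'yes')
```

Since `ask` is just a callable, a test can hand in the `next` of a generator and drive the whole dialogue without ever touching stdin.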
And its specification as defined per Speccer syntax:
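The exact Speccer spec isn't reproduced here, so the same idea is written below as plain test functions instead. The `choice` and `boolean` stand-ins are inlined so the example runs on its own; the `values` generator is the one discussed in the rest of the post.

```python
def values(items):
    # A generator: each next() call yields the next canned "user input".
    for item in items:
        yield item

# Minimal stand-ins for the questions module, inlined for self-containment.
def choice(question, options, ask):
    while True:
        answer = ask('%s (%s): ' % (question, '/'.join(options)))
        if answer in options:
            return answer

def boolean(question, ask):
    return ask('%s [y/n]: ' % question).lower() in ('y', 'yes')

def test_choice_reasks_until_valid():
    feed = values(['bogus', 'no'])
    assert choice('Continue', ['yes', 'no'], lambda p: next(feed)) == 'no'

def test_boolean_accepts_yes():
    feed = values(['YES'])
    assert boolean('Sure', lambda p: next(feed)) is True
```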
The original module I separated "questions" from now looks like this:
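The refactored caller isn't shown either; presumably it's now a thin layer that just uses the question helpers. Something like the following (a guess at the shape; the helpers would normally come from questions.py but are stubbed inline here so the snippet is self-contained):

```python
# Inline stand-ins for the extracted questions module.
def choice(question, options, ask=input):
    while True:
        answer = ask('%s (%s): ' % (question, '/'.join(options)))
        if answer in options:
            return answer

def boolean(question, ask=input):
    return ask('%s [y/n]: ' % question).lower() in ('y', 'yes')

# The caller itself: no nested helpers, no direct input/raw_input calls.
def main(ask=input):
    action = choice('What to do', ['create', 'delete'], ask)
    if boolean('Are you sure?', ask):
        return action
    return None
```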
Way neater! What's going on here? As it happens, generators give us a way to pass both test data and actual user input to the functions. If you haven't used generators before, they might look a bit weird at first. It may help to think of them as functions with state. The "values" function in the specification, for instance, yields the next item of the list on each successive call. This property makes it easy to generate all sorts of series.
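To make the "function with state" idea concrete, here is the pattern in isolation (a minimal illustration, not code from the post): execution pauses at `yield` and resumes on the next `next()` call, so the generator remembers its position between calls.

```python
def values(items):
    # Pauses at yield; each next() call resumes where the last left off.
    for item in items:
        yield item

feed = values(['first', 'second', 'third'])
print(next(feed))  # first
print(next(feed))  # second
```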
I know I could have left raw_input in its original place and simply mocked it. I'm not convinced that would have been a good solution in this case, however: it would have made my test code more convoluted while leaving the mess in place. Extracting the input dependency seems like the better idea.
I hope this post gave you some idea of how to test user input, particularly at the unit level. Generators seem to fit the bill well. They are useful far beyond testing, of course.