Another good turnout at the lab meeting today (sadly Lesley sent her apologies) for Ben du Boulay talking about a grant proposal he’s been working on over the summer. The proposal is for research into “Mental models and the learning of programming”, and has been developed in partnership with Saeed Dehnadi and Richard Bornat (both from Middlesex University).

The work continues what Saeed Dehnadi did for his PhD, in which the 12-question Dehnadi test was constructed as an attempt to identify an innate aptitude for programming among people starting a computer science course. The questions are written in a sort of pseudo-code, and participants are given no instructions on how it works before they start (see figure below). They are asked to select the values taken by variables assigned in the code from a multiple-choice list of possible answers. The responses are then marked into one of three outcomes: the person has interpreted the questions consistently and correctly, consistently but incorrectly, or inconsistently and incorrectly. These results were then compared with the end-of-course exam results: students who started out in either consistent group were significantly more likely to pass the end-of-term exam, while those who started out in the inconsistent group were significantly more likely to fail.

[Figure: a sample question]
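
The figure isn’t reproduced here, but the items follow a pattern along these lines. Below is a sketch in Python of what one question and its answer options might look like; the exact wording, values and options are illustrative reconstructions, not the actual test content:

```python
# A reconstruction of the style of a Dehnadi test item. The respondent
# sees a short pseudo-code fragment, with no prior explanation of what
# assignment means:
QUESTION = """
int a = 10;
int b = 20;
a = b;
"""

# ...and picks the new values of a and b from a multiple-choice list.
# Each option embodies a different "mental model" of what a = b does:
OPTIONS = {
    "A": {"a": 20, "b": 20},  # b's value is copied into a (the conventional model)
    "B": {"a": 20, "b": 0},   # the value *moves* from b to a, emptying b
    "C": {"a": 10, "b": 10},  # the copy goes the other way, a into b
    "D": {"a": 30, "b": 20},  # the two values are added together
}
```

A respondent who applies the same model to every question ends up in one of the consistent groups, whichever model it is.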

This suggests that consistency (taken as evidence of a systematic understanding, even if it isn’t the understanding the testers expected) matters more than correct answers in predicting which students will do well, and that there is an underlying construct the inconsistent group is missing and needs to be taught. The two groups were labelled algorithmic (consistent) vs. non-algorithmic (inconsistent).

One study was also done where the students re-took the test after their final exam. Interestingly, some who had passed the exam still did not exhibit a consistent approach to the Dehnadi test, which might raise questions about the depth of their understanding (and just possibly about the teaching and exam-setting!).

The research proposal aims to understand the underlying problems that dog some of the people trying to learn to program, particularly so as to contribute to the current debate around learning programming at school and university. They do not plan to offer any solutions to the problems. The current goals are:

  • Gather more data from the start of the course to correlate with the end-of-term results.
  • Gather more data on students re-taking the Dehnadi test after the final exam.
  • Possibly explore different teaching methods with respect to the results of the Dehnadi tests at the end of the course (no suggestion as to which teaching methods would be best, just an exploration).
  • Further develop the test (which currently deals only with variable assignment) to cover more complex constructs such as loops and conditionals.
  • Include some longitudinal work, with interviews after the tests to discover what each person’s approach was, perhaps identifying strategies that were consistent but didn’t appear so to an outsider, and continue to follow people through their course to gauge how their understanding evolves.

The test was originally paper-based, but is now online and a program has been written to “mark” the test. This means that the data is more amenable to different sorts of processing and could even be re-evaluated as more of the issues are understood. The proposal is currently with ESRC.
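
The talk didn’t go into the marker’s internals, but the core classification step is simple enough to sketch. The Python below is a minimal illustration under assumed rules: the model labels, the 8-of-12 consistency threshold and the `classify` helper are all hypothetical, not the real program:

```python
from collections import Counter

def classify(responses):
    """Classify one respondent across all 12 questions.

    `responses` holds one mental-model label per question: the model
    implied by the option the respondent chose (e.g. "copy", "move",
    "swap"). This encoding is an assumption made for the sketch.
    """
    model, count = Counter(responses).most_common(1)[0]
    # Treat the respondent as consistent if (say) 8 of the 12 answers
    # share a single model; the threshold the real marker uses may differ.
    if count < 8:
        return "inconsistent"
    return "consistent-correct" if model == "copy" else "consistent-incorrect"

# A respondent who applies the (wrong) "move" model throughout still
# lands in a consistent group:
print(classify(["move"] * 12))  # -> consistent-incorrect
```

The point such a scheme captures is that consistency and correctness are scored independently: a wrong-but-stable model is distinguished from no stable model at all.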

There was some discussion afterwards about the risk of discovering that all our teaching methods were ineffectual! Ben pointed out that there has been a lot of work done on teaching programming over the last 50 years, but students were still failing, so there was obviously more work to be done. He did admit to being just a smidge worried about how many friends and colleagues they risked upsetting, though! One point raised was that there has been a lot of work done on “Threshold Concepts”, which seems very applicable to programming: if you don’t understand the first step, you can’t build on top of it. The ‘bi-modal’ results often seen in Computer Science suggest that this is a useful way to think about it.

Many questions were raised about the format of the questions: were the multiple-choice answers necessary (we decided yes, for total non-programmers), the use of “int” instead of “integer” or something less programmer-like, etc. Pejman raised some questions about the motivation of the participants: were they really trying to get the right answer, or just trying to finish fast? Eric asked if any comparisons had been done against other tests, but Ben pointed out that if it’s hard to motivate participants to complete one test, it’s going to be even harder to get accurate data from two!

Next week we have Pejman talking about some of the work he did over the summer in Canada. 11-12 in the Interact as always…
