Archives for the month of: December, 2012

We finally managed to get Trevor Nesbit to stay in one place long enough to give a talk to the lab group this week. He’s been visiting from the University of Canterbury since the start of December, but has been disappearing off to conferences and on visits hither and thither since.

Trevor’s research stems from his teaching experience. He started out teaching small classes of fewer than 50 students, and then more recently moved to teaching classes with 200+. He found that some of the techniques that were extremely useful in smaller classes (e.g. small group work, where one member of the group feeds back to the rest of the class) really didn’t work in these larger settings. In fact, as a baseline for his later work he ran an experiment, with a group exercise just before a 10-minute break in the 2-hour lecture that was supposed to be reported back on after the break. Of the 300 students in the class, only 80(!) turned up after the break.

So Trevor has been looking at ways to use what he calls “Prevailing Personal Social Communication Technologies” to overcome some of these difficulties and increase engagement in larger lectures. He started by creating a system that allowed students to use free text messages to anonymously send responses to the lecturer, who can select which to share. This gets past the problem of inappropriate responses! Trevor has used this system in a number of tests, from asking for just names and favourite colours, through to working out numerical answers to problems and generating an “ask the audience”-style frequency chart, and on to reporting back from small groups.
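The numeric-answer exercises essentially boil down to tallying the moderated responses into a frequency chart. A minimal sketch in Python of that tallying step — the function name and the pass-through moderation hook are invented for illustration, not part of Trevor’s actual system:

```python
from collections import Counter

def frequency_chart(responses, moderate=lambda text: True):
    """Tally anonymous free-text answers into an "ask the audience"-style chart.

    `responses` is a list of message strings; `moderate` stands in for the
    lecturer's choice of which responses to share (here: share everything).
    """
    counts = Counter(r.strip().lower() for r in responses if moderate(r))
    for answer, n in counts.most_common():
        print(f"{answer:>10} | {'#' * n}")
    return counts

# Six hypothetical numeric answers texted in by students.
chart = frequency_chart(["42", "42", "17", "42", "17", "3"])
```

Swapping in a stricter `moderate` function would reproduce the lecturer’s filtering of inappropriate responses before anything is displayed.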

He and his collaborators have noticed some interesting long-term effects of using this kind of anonymous feedback mechanism:

  • the inappropriate responses tailed off. They weren’t high to start with at around 4 out of 70+, but (possibly because they weren’t getting through) they stopped as the term continued.
  • they started getting other feedback that was useful, such as “please speak up” and “you’re going too fast”. These aren’t things that people are likely to be comfortable with saying in front of 300 of their peers, but the system allowed them a way to communicate.
  • after a few weeks using the system the groups appear to be happier to participate in general, not just via their mobiles. Hands-up style votes often only get around 20% of the group participating, whereas all the lecturers using the system saw an increase.
  • there was anecdotal evidence that the students were helped by seeing the feedback from other students, particularly when they were asked to explain how they had reached a solution.

The phone-based technology is slightly clunky and difficult to use, and an obvious change is to use smartphones. A group of third year students have developed a web app to do roughly the same thing in a less clunky manner. Trevor has plans to use this with classes in the coming year, with more formal measures of class engagement now that the system is more established. He also mentioned that care needs to be taken to design the participation exercises carefully – just as in smaller classes.

Ben wondered if there was any way to link the students’ results to their use of the tool, but Trevor said that as the experiment is run on subsequent cohorts there may be some underlying differences between the cohorts that could affect the results. There is already a well-documented link between engagement and improved performance, so that is less important than proving an increase in engagement. Ben suggested that maybe you could compare the students who sent messages to the students who didn’t, but Liz pointed out that this gets into the debate about the value for lurkers, who may gain a lot from observing the participation of others without participating themselves. The group suggested that there could be a lot of value in the anonymity of the system, not only for asking “stupid” questions, but also for the people who are seen as always sticking their hands up!

Trevor said they’d had a “bit of hassle” with earthquakes in Christchurch. Fingers crossed for some smooth sailing to pursue this research (and maybe rebuild!).

This was the final meeting of term, and so it was only fitting that Edgar provided us with some of his fabulous food. Cochinita sandwiches all round. Truly delicious, thanks Edgar! And lo, the frozen cakes survived the term. Who knows, maybe some desperate PhD student roaming the lonely corridors over the Christmas period will be sustained by finding these goodies… We’ll find out in the new year.

Merry Christmas all!

First up, congratulations to Pejman who has had both a workshop and a paper accepted for this year’s CHI. He’s had a busy year.

This week we had another guest visiting. Lennart Nacke from UOIT is Pejman’s co-supervisor, so Pejman managed to convince him to come and talk to us while he was on this side of the pond. Pejman is an incredibly persuasive (as well as busy) man! The format was much more of an informal session, with Lennart telling us about his interests and what his lab group (the HCI and Game Science Group) at UOIT are up to and us interrupting/commenting/asking questions. This post gives a very high-level view of that.

Lennart is looking at game experience and particularly how to increase immersion. A corresponding interest is therefore how to quantify and measure these in order to know when they’ve been improved. Games are all about learning, so how do we design a game that continues to be interesting once we’ve mastered the basics?

Lennart likes to play with various sensors to try to measure the experience of the player, but industry links have fed back that shelling out thousands of dollars for expensive sensors is not going to happen. So, with his lab group, he’s been investigating the uses of cheaper “toy” sensors, particularly things like Neurosky and Emotiv, to see what information he can get out of them and whether these novel devices can be used successfully as supplementary game controllers. Games are a perfect environment for using these less expensive sensors, as minor inaccuracies are not important in the “safe” environment of the game world. He’s managed to get his undergraduates to produce some really interesting work using them, but says we are still very much at the beginning of understanding the brainwave data, and still a long way from being able even to reliably describe the same state each time. There’s a case for some interesting work with games designers and cognitive scientists here!

The other area Lennart has been looking at is using games to increase exercise. Although the Wii was heralded as doing this, the video above suggests it is only somewhat successful! Lennart again had undergraduates experiment with Half-Life and a bike input, using the bike pedalling to affect the offensive and defensive capabilities of the player. Apparently this was really effective, although the guy doing the programming could hardly walk the day after. Our group came up with some interesting ideas on how to safely limit the exercise levels, or maybe move the threshold depending on heart rate etc. The opportunities for collaborative gameplay with these kinds of games are also really interesting, where maybe one person does the pedalling and another works the more normal controller.
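As a rough illustration of the pedalling mechanic, here is one way cadence could be mapped onto offensive and defensive stats — the numbers, thresholds and function name below are invented for the sketch, not taken from the students’ actual Half-Life mod:

```python
def pedal_to_stats(cadence_rpm, base_damage=10.0, base_armour=10.0,
                   rest_rpm=30.0, sprint_rpm=120.0):
    """Map pedalling cadence to player stats (illustrative numbers only).

    Effort between resting and sprinting cadence scales both damage dealt
    and damage resistance, from 50% of base up to 150%, capped at sprint_rpm.
    """
    # Normalise cadence into [0, 1] between resting and sprinting pace.
    effort = max(0.0, min(1.0, (cadence_rpm - rest_rpm) / (sprint_rpm - rest_rpm)))
    return {
        "damage": base_damage * (0.5 + effort),
        "armour": base_armour * (0.5 + effort),
    }
```

The cap is one crude answer to the group’s safety question; the suggestion of moving the threshold with heart rate would amount to making `rest_rpm` and `sprint_rpm` functions of the player’s measured heart rate.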

A fun and wide-ranging discussion. We also sorted out what to get Ben for Christmas.

We again managed to avoid the frozen cakes, with a selection of fairy cakes, mince pies (apparently these are really British – who knew?) and Roses chocolates on offer. Anyone would think it was the end of term! However, we do have one more lab session before Christmas: next Tuesday in the Interact lab it is the turn of our other end of term guest, Trevor Nesbit. Looking forward to it!

This Tuesday Edgar presented an overview of his PhD research, mainly focussing on his current task derived from his last experiment carried out in Mexico. The aim of the project is to explore the usefulness of natural language (NL) descriptions for small algorithms when novices are learning the basic concepts of programming (input/output, assignment and loops).

Programming environments, such as Scratch or Alice, are based on graphical manipulation interfaces and error-free syntax to support novice programmers. However, these environments are still missing a way to “translate” or express programs in a language that is familiar to novice programmers, so users cannot easily verify whether their current solution matches their intention. Edgar’s research explores the usefulness of providing a second representation to support a visual language (in this case a flowchart representation) designed for novice programmers. He aims to assess the benefits of a second representation for program comprehension, creation and debugging skills.

The main inspiration for exploring natural language as a tool to support novice programmers comes from Flip, a system that combines a visual language based on interlocking blocks with natural language support.

flip screenshot

Flip interface: (a) scripting editor top left, (b) natural language description bottom left, (c) toolbox of scripting instructions on the right side.

This research uses a tool called Origami, which is used to teach basic programming concepts to novices in Universidad Autonoma de Yucatan, Mexico. This tool enables students to create, test and debug programs using a visual language based on flowcharts. The programs produced are syntactically error-free; however, students may input instructions into each flowchart block that are semantically incorrect. Origami logs interaction data during the problem-solving process: the time taken to complete an exercise, the blocks used, the instructions within each block, and any compilation messages. Each interaction is associated with a timestamp that preserves the chronology of the problem-solving process.
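Going by that description, each Origami log pairs interactions with timestamps so the chronology can be reconstructed. A rough sketch of what such a record might look like — all class and field names here are guesses for illustration, not Origami’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Interaction:
    """One timestamped action during problem solving (hypothetical fields)."""
    timestamp: datetime
    block: str                       # flowchart block used, e.g. "loop"
    instruction: str                 # the instruction entered inside the block
    compiler_message: Optional[str] = None

@dataclass
class ExerciseLog:
    exercise_id: str
    interactions: List[Interaction] = field(default_factory=list)

    def record(self, block, instruction, message=None):
        self.interactions.append(
            Interaction(datetime.now(), block, instruction, message))

    def time_taken(self):
        """Elapsed time between the first and last logged interaction."""
        return self.interactions[-1].timestamp - self.interactions[0].timestamp

# A hypothetical two-step session on one exercise.
log = ExerciseLog("ex-01")
log.record("input", "read n")
log.record("loop", "for i = 1 to n")
```

Keeping the raw timestamp on every interaction, rather than just a total, is what lets measures like time-to-completion be derived after the fact.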

The Origami tool was improved using some design ideas from Flip, and now combines a visual language based on interlocking blocks with natural language descriptions.

origami screenshot

Origami + Flip-ish = Origami + Natural Language Descriptions

Last August Edgar headed back to Mexico to carry out his experiment. The experiment was a comparative study with three conditions: the earlier version of Origami (no secondary representation), a version with a secondary natural language description, and a version with a secondary pseudocode representation. Some details about the experiment:

  • Participants: 75 first-year Computer Science students (63 males and 12 females), aged between 17 and 21. Three groups were formed and were taught the same material by two tutors.
  • Materials: The materials used in the study were a programming pre-test and post-test, the Origami tool (3 versions), and 3 sets of algorithmic exercises. The pre-test, post-test and each set of algorithmic exercises consisted of three types of exercise (completion, debugging and tracing), each at three difficulty levels (simple, medium and complex). The pre- and post-test items were multiple choice questions (debugging and tracing) and open questions (completion), and were administered via the online course delivery tool.
  • Method: The experiment was carried out over a period of approximately 6 weeks. In addition to the practical work there were a total of 12 sessions (24 hrs), in which the basic concepts of programming (variables, sentences, conditions, loops) were taught. The pre-test was administered in the first week. The 3 sets of algorithmic exercises were assessed as coursework during weeks 2 to 5. Each set was divided into two subsets: (1) simpler exercises and (2) medium & complex exercises. Finally, the post-test was administered during the last week of the experiment.
Experimental design

Experiment: Three conditions, Pre and Post Test, and three sets of exercises.

Finally, after some really hot weeks in the south of Mexico, Edgar came back to the UK with a bag full of Mexican sauces and lots of data. Currently he’s working through the sauces, trying to keep warm, and looking at the pre- and post-test analysis of the explanation exercises from the students, where the students were asked to explain the solution for a maths-related problem in terms of an algorithm. He’s using a more detailed rubric, and at the same time detecting errors using an omission/commission classification based on previous work by Judith, Katy and Keiron.* Lots of analysis to do!

The session was full of suggestions about different and interesting kinds of analysis. Ben suggested (and Judith strongly agreed) that the analysis of the pre- and post-tests should be done blind to which groups the individuals were in. Ellie asked if the basic Origami system has ever been compared with the paper version it replaced, which made Edgar realise that the people using the basic Origami were still asking for paper to scribble on, while the other two groups weren’t. There appears to be an early indication that the group who had the natural language representation were better at explaining their solutions in the post-test, which caused the group to suggest that maybe this was equipping the students with the language to talk about what they have done.

We are back in our normal Interact Lab next week, at the normal time. Hopefully we’ll get a full house, as we have a special guest appearance!

*Good, J., Howland, K., & Nicholson, K. (2010). Young People’s Descriptions of Computational Rules in Role-Playing Games: An Empirical Study. In 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (pp. 67–74). IEEE.