Archives for the month of November 2013

This post got somewhat delayed by various events. Apologies to both Pejman and Reza. 

Crucially, at this lab meeting we had to start by congratulating Pejman on his successful viva! This was done in style, with many cakes from what appears to be the lab group’s favourite, Patisserie Valerie. Ignore the healthy-looking apples in the middle there – we certainly did!

Celebratory cakes

Once we had finished with the cakes, the rest of the lab session was a guest presentation from Reza Rawassizadeh, who is currently working at the Open University in the UK. Reza is looking at the problem of personalising public transport, with the grand aim of persuading people to leave their cars at home.

The project he is working on looks at buses, and the various mobile applications supplied by bus companies. In particular he is focussing on buses in Milton Keynes in the UK, and Lisbon in Portugal. The buses in Lisbon are apparently pretty well networked, with automated fare collection in operation. Analysing this fare data has allowed them to show that most people are creatures of habit, riding pretty much the same route at the same time on a given day.
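As a rough illustration of the kind of analysis this enables, here is a minimal sketch in Python – with made-up records and field names, not the project’s actual data format or code – that measures how habitual each passenger’s travel is by grouping smart-card taps by person and day of the week:

```python
from collections import defaultdict

# Hypothetical smart-card taps: (passenger_id, weekday, hour, route).
taps = [
    ("p1", "Mon", 8, "728"), ("p1", "Mon", 8, "728"),
    ("p1", "Mon", 8, "728"), ("p1", "Mon", 9, "735"),
    ("p2", "Tue", 17, "760"), ("p2", "Tue", 18, "728"),
]

# For each passenger and weekday, count how often each (hour, route) occurs.
journeys = defaultdict(lambda: defaultdict(int))
for passenger, weekday, hour, route in taps:
    journeys[(passenger, weekday)][(hour, route)] += 1

# A passenger is a creature of habit on a given day if a single
# (hour, route) combination dominates their journeys that day.
for (passenger, weekday), counts in journeys.items():
    total = sum(counts.values())
    (hour, route), top = max(counts.items(), key=lambda kv: kv[1])
    print(f"{passenger} {weekday}: {top}/{total} trips on route {route} at ~{hour}:00")
```

With real fare collection data, the same grouping run over months of taps is what lets you say that most riders repeat the same journey at the same time week after week.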

He has found that the apps focus very much on timetables, which gives passengers only a limited basis for choosing which bus to catch. There are often several ways to get between two points, and which is best may depend on the traffic, the time of day, the amount of crowding on the bus, or perhaps even the temperature on board. He also found that the apps on offer were poorly designed: his early studies identified that the main use-case is whilst walking towards the stop, yet most apps assume a user who is concentrating fully on the application.

He produced a number of alternative application designs, attempting to integrate bus crowding data with arrival times. These were then shown to bus users in Milton Keynes, who were asked for their feedback.

The design that Reza found was most popular featured an animation of the bus and where it currently was on its route. The problem with that is that the data shown is only an approximation, as the bus companies do not have precise data on where their buses are at any given time. It would be interesting to take this design beyond the mock-up stage and give it a more thorough test, to see what implications the lack of accurate data has for usability and how much bus users like it in day-to-day use.
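If no live tracking exists, one simple way such an approximate position might be derived – purely a sketch under assumed data, not necessarily what the mock-up did – is to interpolate the bus’s progress between its scheduled stop times:

```python
from datetime import datetime

# Hypothetical timetable for one bus: (stop name, scheduled arrival time).
schedule = [
    ("Station",     datetime(2013, 11, 1, 8, 0)),
    ("High Street", datetime(2013, 11, 1, 8, 10)),
    ("Campus",      datetime(2013, 11, 1, 8, 25)),
]

def estimated_progress(now):
    """Estimate how far the bus is between two stops, purely from
    the timetable (so any delay makes the estimate wrong)."""
    for (stop_a, t_a), (stop_b, t_b) in zip(schedule, schedule[1:]):
        if t_a <= now <= t_b:
            fraction = (now - t_a) / (t_b - t_a)
            return stop_a, stop_b, fraction
    return None  # outside the scheduled journey

print(estimated_progress(datetime(2013, 11, 1, 8, 5)))
# -> ('Station', 'High Street', 0.5): shown as halfway between the stops
```

The weakness is obvious: any delay means the animation smoothly shows a bus that isn’t where the screen says it is, which is exactly the usability question a study beyond the mock-up stage could answer.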

It was great to hear about this kind of research, and understand some of the challenges that come about due to the design of the bus system in a given country. The auto-ticket functionality in Lisbon provides much more complete information about who travels where and how many people are on the bus at any given time, whereas UK buses make no record of when or where someone gets off the bus. The fact that the buses cannot be accurately located at a given moment means that one of the big questions (how long until the bus gets to this stop?) cannot be answered as accurately as users might like. He also highlighted some of the trials and tribulations faced by a researcher in this area – the passengers were all too willing to share their frustrations with the entire bus system!

We wish Reza all the best with this challenging project, and hope he succeeds in improving the lives of bus passengers everywhere (and maybe specifically in Brighton…).


In today’s lab meeting we were discussing another paper, this time on the emotions experienced by novice programmers during their very first programming lesson. It was presented by Bosch et al.* at the 2013 Artificial Intelligence in Education conference. The study participants were undergraduate students from the Psychology Student Pool who had no experience of programming. These students underwent a three-stage process over a period of two hours: first they completed a series of “scaffolded” problems, with hints and help available. Then they completed two “fadeout” exercises, with no help available. Finally the students reviewed videos of themselves completing the first two exercises and selected, from a subset of Pekrun’s taxonomy of academic emotions, what they had been feeling at various points.

The authors found that four emotions were experienced more frequently than the others: flow/engagement, confusion, frustration and boredom. The proportions of these apparently varied depending on which of the first two stages of the experiment (scaffolding/fadeout) the students were performing. They also correlated the emotions with the success of the students at completing the tasks, and attempted to systematically link them to three student behaviours (running the code, idling, or constructing the code).

As ever, having a paper to start the conversation produced a lively debate! One of our big problems as a group centres on the idea of flow/engagement being the desirable state for learning. As teachers and learners, we have found that periods of frustration are not altogether bad, and the joy of passing through that frustration to understanding is sometimes one of the key highs of learning. Equally, what externally looks like flow/engagement may very well not be. There have been other studies the group knew of where people looked like they were in this flow/engagement state, but when the researchers examined what they had produced it was rubbish. The group felt a better approach might be to examine the routes students take through the different emotional states, and correlate those routes with different outcomes. The challenge then becomes how to identify that someone is on a destructive pathway and, crucially, what (pedagogically) to do about it, rather than reacting instantly to prevent frustration.

The use of the thirteen emotions also raised some issues for the group. By limiting the responses, in a sense the researchers are pre-determining their outcomes. There also didn’t appear to be any agreement with the subjects on what the terms actually meant to them. Jim’s experience in counselling suggests that when dealing with emotions you sometimes have to explain what a word refers to before people understand what they are feeling. Equally, some of the group remembered working on a project with Madeline Balaam where the terms used to report the emotions were negotiated with a group of students, rather than predetermined. The group also thought it would have been interesting to use the same techniques that Pejman used, where biometric data highlighted the moments worth talking about with a player, rather than asking them for feedback on blocks of around 20 seconds at a time.

We also came up with some interesting ideas for follow-up studies. Obviously in this experiment the emotions were related to results within the same session. However, we felt it would be valuable to do a slightly more longitudinal study, relating the emotions in the first lesson to outcomes after an entire term of study. This would help to demonstrate the importance (or perhaps unimportance) of the first lesson. We also thought it would be interesting to try the same study with a group of computer scientists, or at least with students who had expressed some interest in learning to program. Comparing the emotions experienced by the two groups could help explore the role motivation plays in both emotion and outcome.

The group were thoroughly engaged by this paper, which touched on so many of our areas of expertise. An entertaining and enjoyable discussion.

*Bosch, N., D’Mello, S., & Mills, C. (2013). What Emotions Do Novices Experience During Their First Computer Programming Learning Session? In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial Intelligence in Education (Vol. 7926, pp. 11–20). Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-39112-5

It is that time of year. The ACM SIGCHI reviews are out, and the lucky recipients have one week and 5000 characters in which to rebut them, trying desperately to get their beautiful papers into CHI 2014 and earn themselves a trip to Toronto. In today’s lab meeting we spent a little time unpicking some of the reviews and how they might be responded to. This post is not about the detail of the papers submitted, though (and all lab group names will be left out, just in case!) – this is more about the underlying process, and how best to go about constructing a sensible, non-sweary rebuttal. (It may take a day or two to get to the point where this is possible.)

The key part of a rebuttal is understanding the reviews. Each paper gets three reviews, plus a meta-review pulled together by the Associate Chair (“AC”). The meta-review is crucial: it gives you some insight into whether the AC is “on your side” and leaning towards including your paper or not. Pay particular attention to the rating the AC gives you. If it is lower than the average of the three main reviews (if, say, reviews of 3.5, 3.0 and 4.0 average 3.5 and the AC has scored you 2.5), you’re going to have to work pretty hard to convince them to include the paper.

So, then you need to work out the key points you have to address. The focus should be on clarifications of your position, not major reworking of your arguments. As Hyungyoung Song points out, agreeing to a major rewrite of your key arguments suggests that you accept there are underlying problems, and massive rewriting usually means you may be better off submitting to a later conference. Instead you need to focus on areas where the reviewers have misunderstood, so that you can clarify and help. Although it may not feel like it right now, this may, in the long run, even help your paper.

Part of identifying the points that need clarifying is understanding the reviewer’s form for this conference. There are certain items a reviewer is asked to look for, and it may well help to actively signpost those in your rebuttal – the key contribution of the paper, for example, is a good thing to highlight. Two of our lab group are working as ACs this year, so they were able to cast some light on this for people in the group who haven’t done it yet.

The format of the rebuttal was also discussed. The advice from one lab member was to be very specific: say where you will add sentences to clarify, but also where you will remove something to stay within the page limit. The people making the decisions are trying to judge whether your responses are strong enough to answer your critics, but also whether the promised changes are feasible. This kind of structure makes it much easier for them to see that they are, and given the workload on the ACs, anything that makes things easier for them is likely to count in your favour.

The best advice by far was not to give up at this point. The 5000 characters are worth the effort. Don’t even think about what else you could do with your paper until after the final rejections have come in. There’s still hope!

Good luck to all who are going through this at the moment…

Starting this week with the news updates: the HCT lab is part of one of four larger sub-groups within the Department of Informatics at Sussex. Until recently this group has been known as Interactive Systems, but Judith reported a proposal to rename it “Interactive Design and Applications for Society”, or IDeAS. This met with general approval from the lab group members, and we will wait to see if it is adopted by the larger group.

Ben reported that he has been made “President-elect” of the International Society for Artificial Intelligence in Education, with his term of presidency due to start in 2015. Much kudos and congratulations for that, and we’ll have to hope the power doesn’t go to his head.

The lab meeting this week was used as a reading group, with Judith leading the discussion on “Uncomfortable Interactions” by Benford et al.* This paper discusses the role of uncomfortable interactions within HCI, making the point that traditionally HCI has sought to reduce the pain of interactions, whilst in other areas (notably art, film and occasionally education) painful experiences can be extremely powerful. The authors identify different kinds of discomfort: physical, yes, but also cultural discomfort and discomfort around control. This is an idea that some members of the group working on the African Farmer Game are becoming familiar with, as the game does tug on the players’ emotional strings, frequently leaving them in a less-than-ideal situation. Without this outcome, the power of the gaming experience for the entire group of players would be much reduced.

The discussion covered many points. The perception of what is discomforting varies wildly from individual to individual, whether via an interface or in a social situation – for example, the group’s experience of working with children on the autistic spectrum, whose need for calm and predictability can make an otherwise unremarkable interaction extremely uncomfortable. This in turn leads to questions about the ethics of the situation, which were covered in the paper but obviously sparked many personal viewpoints within the group. We also noted a lack of educational examples – the authors appeared to be firmly embedded in the performance side of uncomfortable interactions, but as mentioned above there is a lot of scope for a controlled level of discomfort in learning situations. And finally, the group felt it would be good to get a clearer view of the type and degree of discomfort being designed for, and how those aims influenced the interface design decisions made.

Overall, the group agreed that this was a very well-written paper that began to explore some very interesting points.

*Benford, S., Greenhalgh, C., Giannachi, G., Walker, B., Marshall, J., & Rodden, T. (2012). Uncomfortable interactions. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI ’12), pp. 2005–2014. doi:10.1145/2207676.2208347

TERENCE was a three-year project that aimed to produce an adaptive learning system to help children learn to comprehend what they read. The University of Sussex were involved in evaluating both the learning outcomes from the system and the usability of the system itself. Nicola Yuill and Eric gave us an update on their findings, with particular emphasis for our group on the usability.

Due to the various constraints on the project, the main methods for evaluating the usability of the system were a series of pre-deployment expert-user focus groups and, after the pedagogical intervention, a series of interviews with the users themselves (both students and teachers). The project had essentially three GUIs: one for the experts creating the content, one for the educators, and one for the learners. This evaluation was aimed squarely at the learner interface.

TERENCE was deployed across four hearing schools and four hearing-impaired schools in the UK, covering 83 hearing students and 24 hearing-impaired students. The interviews with the hearing-impaired students were carried out with the help of an interpreter. The system was also tested in Italian schools, but the outcomes of those tests were analysed by a different team.

Eric said that several themes had come through in the discussions with both the teachers and the learners. There were a few technical difficulties with this ambitious project, resulting in a number of usability problems with logging in and glitches while the learner was reading. There were also some issues with the level of the content: the stories were too difficult for some of the students, which led to them getting bored and just trying to click their way through without actually comprehending the content. However, after the end of the study, Nicola said she had a number of phone calls from the schools involved asking how they could continue to use the software, suggesting that the usability issues were not enough to dissuade the learners from wanting to use it.

The group as a whole were interested to hear about the system logs, which had not been used much in the analysis because their content was poorly suited to the questions being asked. As several of us within the group have written systems that include logging, we all recognised how hard it is to design logging that can help answer unforeseen questions! Some kinds of information may also be difficult to log – e.g. if a login fails due to a network error, that request may never reach the server to be logged, and the time spent on a given task is hard to track if the person can open the task, walk away, make coffee, chat to three people, then come back 40 minutes later and complete it.
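To make those pitfalls concrete, here is a minimal sketch – entirely hypothetical, not TERENCE’s actual logging code – of a client-side logger that keeps undeliverable events locally and records explicit task boundaries plus periodic heartbeats, so failed logins and long idle gaps can still be reconstructed afterwards:

```python
import json
import time
import urllib.request

LOG_URL = "http://example.com/log"  # hypothetical logging endpoint
local_buffer = []  # events we could not deliver to the server

def log_event(event, **details):
    """Send an event to the server; keep it locally if the network fails,
    so failures (e.g. a broken login) are not silently lost."""
    record = {"event": event, "t": time.time(), **details}
    try:
        data = json.dumps(record).encode("utf-8")
        urllib.request.urlopen(LOG_URL, data=data, timeout=2)
    except OSError:
        local_buffer.append(record)  # replay later, when back online

# Explicit boundaries and "still active" heartbeats let an analyst
# distinguish 40 minutes of coffee from 40 minutes of genuine work.
log_event("task_opened", task="story_3")
log_event("heartbeat", task="story_3")
log_event("task_completed", task="story_3")
```

Even a simple scheme like this needs the analysis questions in mind up front: without the heartbeat events, the coffee break and the slow-but-steady reader look identical in the log.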

The TERENCE project is now beyond its 36-month implementation period, and it was really interesting to see how far they got in that time. Many thanks to both Eric and Nicola for coming along to share this with the group.