Archives for the month of October 2012

Today’s lab meeting featured Ellie Martin talking about a workshop she attended at Fun and Games 2012, called “Conceptualising, Operationalising and Measuring the Player Experience in Video Games” (run by Peta Wyeth and Daniel Johnson). She’s written it up herself!

The aim of this presentation was primarily to continue a discussion we started at the workshop, building on the pictures and notes we took on the day. The presentation therefore centred on sharing what we did, and allowed ample space for the lab group members to comment and discuss as we went through.

The workshop was only one day, so to cover the most ground the participants split into two groups. One group looked at ways to measure the experience of the player, and the other looked at modelling the player experience as we understood it. I joined the modelling group, so in this presentation I chose to look at measuring first.

The measuring group began by listing all of the measuring techniques they could think of. They then looked at whether each method was quantitative or qualitative, based in the lab or the field, and first person or third person.

A first person method is one where the participant or player gives the data consciously, and has an input into the data they give (e.g. diaries, interviews). A third person method is one where the tester reads all of the meaning from data gathered unconsciously from the player (e.g. observation, biometrics). The measuring group also talked about the idea of a “2.5-person” view, which sits somewhere between the two. In fact, in the lab group we had a discussion about how many of the methods listed actually slot neatly into one or the other anyway.

The modelling group came out with two diagrams:

Player experience model

This was an overall picture of the player experience. We started with the game and the player. The game experience was felt to sit in the space where those two didn’t quite meet, with player choice going one way into the game and game feedback coming back to the player. We chose to encircle the whole thing in a nice, continuous context (indicated by the pink shading), and included the designer as separate from the player and game but within the context. Researchers were added (faintly) across both players and designers. The meta-game (things like fansites, player walkthroughs, reviews etc.) sits outside the context of the gameplay.

The context was seen as being too vague, and it could be argued that the designer sits in a different context from the player. It was suggested, as it was on the day, that we could do with being more concrete about the elements of the context that affect the various components. Liz recommended looking at it from a systems point of view, which would allow us to define more clearly which areas were inside the system and which sat outside it.

We then expanded on the interplay between player and game, using a loop someone remembered from Don Norman.

Player Experience model detail

Ben commented that our diagram could actually very easily model any digital system. I agreed, and pointed out that I felt it could equally well describe someone playing a boardgame too. This raises questions about what the differences in a video game may be, and where in our model we have missed them. (No answers on that yet!)

The final bit of the presentation asked what the point was. This stemmed from a late arrival to the workshop discussion, who looked at what we had drawn and just said “Very nice, but what’s the point?”. I had interpreted that as “Why is this useful?”, particularly with relevance to games designers, but we had an interesting discussion around whether the question was really “why model at all?” rather than “why model this particular bit?”. Katy suggested that the model was useful for creating a shared language to talk about the different areas and aspects, and Jim suggested that it was mostly useful when something goes wrong (e.g. if a game isn’t successful it could help to highlight areas to look at to find out why). Gareth felt we had been rather ambitious in trying to describe all of it, but agreed that having a shared concept of player experience would be very useful.

A sidenote:

There was also a secondary aim with this presentation: testing a new presentation app! I used Haiku Deck, a fairly simple app that only allows you to create slides with a picture and up to two lines of text. It uses the text you put on the slide to generate keywords to search for suitable pictures from the many free pictures available online (you may also use your own pics, or have just a plain background). The presentation can be seen online here.

Sadly I didn’t have the adapter to actually run the presentation from my iPad, but it was good to prove that the app (and I) could cope with that too. I ended up exporting the presentation and emailing it to myself, which produces a PowerPoint file that I could then run as normal on the lab PC. The feedback was positive, and people liked the pictures! My next challenge would be to use it to talk about numeric/tabular data and to work out how to present that within the confines of the app.

Next week Ellie might remember the cake she promised and forgot this week (don’t worry, it hasn’t been made yet!). We will be looking at a paper on Threshold Concepts. Tuesday, 11am, Interact Lab…

This week the lab meeting featured Pejman talking about his work at UOIT over the summer, and his ongoing work with biometric storyboards for his thesis. Sadly we had apologies from Lesley, Eric and Judith today, but we were joined by Nick Collins.

Pejman started by introducing us to some of the work that he did over the summer. The group he was working with are part of the GRAND research network in Canada, so he was immediately involved in submitting to and presenting at the GRAND 2012 conference in Montreal. From there he went to CHI in Austin, where he helped to run the two-day workshop on games user research (GUR), which was apparently one of the largest workshops run this year.

Along with this he was also involved with some of the undergraduate projects at UOIT. Apparently they have a system where postgrads have to pitch exciting projects to the undergrads and get them to work on them. He showed us two extremely polished videos produced by the undergraduates he was working with, detailing a couple of the games they made. They looked seriously impressive, with one group integrating a USB bicycle into a first-person shooter, and the other using brainwave sensors to control their gaming environment. Pejman says that these sorts of innovative controllers, and the way they change the experience for players and the models of joint play, are something he would like to return to, and that maybe we should look at bringing this kind of event to Sussex.

(He also says that he did get to do a little travelling, and that Canada is beautiful. I think I need to book flights.)

Of course, he was also working on his PhD thesis at the same time as all of this. His great motivation when he started was to help understand users, with the overriding aim of making better games. To further this noble goal, he started out by using biometrics to get feedback on games in development. He uses biometrics because they give a more continuous record of the gameplay experience than surveys, and capture more data than an interview at the end of play (when players have already forgotten many incidents). Rather than try to interpret the emotion of the player from the feedback (which is tricky), he uses it to guide his post-play interview, incorporating video of the game to help jog the participant’s memory.

Using this method in conjunction with other, more conventional HCI methods, he has demonstrated that a greater range of problems can be diagnosed. However, his main problem has been how best to report this extra data to developers without overwhelming them with the masses of data he gathers, while still highlighting the actual problem areas. His ‘biometric storyboards’ are the result – a kind of composite feedback graph of the game, broken down by game areas/events called ‘game beats’, showing positive and negative emotion.

Biometric Storyboard
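
For anyone reading this without the figure to hand, here is a minimal, purely illustrative sketch of the idea – emphatically not Pejman’s actual tool or data, just hypothetical game beats annotated with an aggregate positive/negative response and plotted as a simple step chart:

```python
# Illustrative only: invented beat names and values, not real biometric data.
import matplotlib.pyplot as plt

beats = ["Tutorial", "First jump", "Boss fight", "Checkpoint", "Final level"]
response = [0.2, 0.6, -0.5, 0.3, 0.8]  # > 0 positive, < 0 negative (hypothetical scale)

plt.step(range(len(beats)), response, where="mid")
plt.axhline(0, color="grey", linewidth=0.8)  # neutral baseline
plt.xticks(range(len(beats)), beats, rotation=30)
plt.ylabel("Aggregate emotional response")
plt.title("Illustrative biometric storyboard")
plt.tight_layout()
plt.show()
```

The real storyboards are of course far richer than this; the point is simply that each game beat gets its own positive or negative reading rather than the game getting a single overall score.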

One of his main challenges over the summer was to design and implement an experiment that showed the effectiveness of this kind of feedback. He used a platform game and ran a user study with it. He then prepared two reports: a biometric storyboard and a classic usability report. Six games designers were then asked to recommend game changes based on either the usability report, the biometric storyboard, or no user feedback at all. The game was then developed into three branches using the recommendations, and these were re-tested. He has some interesting results, not just from the testing of the new versions of the game but from observing the way the designers interacted with the data they were given. (I don’t want to give away his findings – he has papers in the works!)

Ellie asked (first, to let Ben off for once!) whether this focus on how to report back to developers was new. Pejman could name a book chapter on how to give feedback on websites and other user experiences, but nothing on games specifically.

Ben had a concern about some of the sensors that Pejman is using, but also felt that he seemed to be highlighting negative moments that may actually be part of building to an overall high at a later point. Pejman’s answer to this was extremely interesting. He says he isn’t attaching any kind of value judgement to the feedback he gives the designers; he is just trying to show them the user experience in a way that allows them to determine whether it is as they expect. So frustration may be fine, if that is what the designer was trying to achieve at that point.

Edgar asked whether some game types are better represented using biometric storyboards than others. Pejman felt they might not be ideal for puzzle games, but said they had worked for a wide range of other kinds of games.

Katy wanted to know whether Pejman was happy with the final iteration of the storyboard, given that it was much more complex than the early versions. Pejman has run tests with designers on the first three iterations, and has shown that they do prefer the last of the three. He has yet to test the very final version (which is generated by a tool rather than produced by hand), but overall he is happy.

Next week we continue on with the game theme, with Ellie talking about the workshop she went to as part of Fun and Games on player experience in videogames. As always, Tuesday, 11-12, Interact Lab. We’ll have to see if she can match Pejman’s cake…

Another good turnout at the lab meeting today (sadly Lesley sent her apologies) for Ben du Boulay talking about a grant proposal he’s been working on over the summer. The proposal is for research into “Mental models and the learning of programming”, and has been developed in partnership with Saeed Dehnadi and Richard Bornat (both from Middlesex University).

The work is a continuation of Saeed Dehnadi’s PhD research, in which the 12-question Dehnadi test was constructed as an attempt to identify an innate aptitude for programming among people starting a computer science course. The questions are written in a sort of pseudo-code, and the participants are given no instructions as to how it works before they start (see figure below). They are asked to select the correct values for variables assigned in the code from a multiple-choice list of possible answers. The responses are then marked into one of three groups: the person has interpreted the questions consistently and correctly, they have been consistent but incorrect, or their responses have been inconsistent. The results of these tests were then compared with the end-of-course exam results. Students who started out in either of the consistent groups (correct or incorrect) were significantly more likely to pass the exam, while those who started out in the inconsistent group were significantly more likely to fail.

A sample question

This suggests that consistency (understood as indicating a systematic model of how the code works, even if it is not the model the testers expected) is more important than correct answers in predicting which students will do well, and that the inconsistent group is missing some underlying construct that needs to be taught. The two groups were labelled algorithmic (consistent) vs. non-algorithmic (inconsistent).

One study was also done where the students re-took the test after their final exam. Interestingly, there were some who had passed their exam but still did not exhibit a consistent approach to the Dehnadi test – which might possibly raise questions about their deeper understanding (and just possibly the teaching and exam-setting!).

The goal of the research proposal is to try to understand the underlying problems that dog some of the people trying to learn to program, particularly aiming to contribute to the current debate around learning programming at school and university. They do not plan to offer any solutions to the problems. The current aims are:

  • Gather more data from the start of the course to correlate with the end-of-course results.
  • Gather more data on re-taking the Dehnadi test after the final exam.
  • Possibly look at different teaching methods with respect to the results of the Dehnadi tests at the end of the course (no suggestion as to which teaching methods would be best, just an exploration).
  • Further develop the test (which currently deals with variable assignment) to cover more complex things like loops, conditionals etc.
  • Include some longitudinal work, with interviews after the tests to discover what the person’s approach to the test was, perhaps identifying strategies that were consistent but didn’t appear so to an outsider, and continuing to follow people through their course to gauge how their understanding evolves.

The test was originally paper-based, but is now online and a program has been written to “mark” the test. This means that the data is more amenable to different sorts of processing and could even be re-evaluated as more of the issues are understood. The proposal is currently with ESRC.
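
As a very rough illustration of what automated marking of this kind might involve – a sketch only, with the mental-model codes, threshold and example data invented for the purpose rather than taken from the actual test or marking program – a set of responses can be classified by checking whether one interpretation of assignment dominates a participant’s answers, and whether that interpretation is the conventional one:

```python
# Hypothetical sketch of the three-way consistency classification, not the real marker.
from collections import Counter

CORRECT_MODEL = "copy_right_to_left"  # reading "a = b" as "a takes b's value"

def classify(answers, threshold=0.8):
    """answers: one mental-model code per question (e.g. 12 entries)."""
    model, count = Counter(answers).most_common(1)[0]
    if count / len(answers) < threshold:  # no single model dominates
        return "inconsistent"
    return "consistent_correct" if model == CORRECT_MODEL else "consistent_incorrect"

# A participant who consistently reads "a = b" as swapping the two values:
print(classify(["swap"] * 11 + ["copy_right_to_left"]))  # -> consistent_incorrect
```

The appeal of having the test online is exactly this: once each multiple-choice option is coded, the grouping can be recomputed under different criteria as understanding of the underlying issues develops.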

There was some discussion afterwards about the risk of discovering that all our teaching methods were ineffectual! Ben pointed out that there has been a lot of work done on teaching programming over the last 50 years, but students were still failing, so there was obviously more work to be done. He did admit to being just a smidge worried about how many friends and colleagues they risked upsetting though! One point that was raised was that there has been a lot of work done on “Threshold Concepts”, which seems very applicable to programming – if you don’t understand the first step, you can’t build on top of it. The ‘bi-modal’ results often seen in Computer Science suggest that this is a useful way to think about it.

Many questions were raised about the format of the questions: were the multiple-choice answers necessary (we decided yes, for total non-programmers)? Should it use “int” rather than “integer” or something less programmer-like? Pejman raised some questions about the motivation of the participants – were they really trying to get the right answer, or just trying to finish fast? Eric asked if any comparisons had been done against other tests – but Ben pointed out that if it’s hard to motivate participants to complete one test, it’s going to be even harder to get accurate data from two!

Next week we have Pejman talking about some of the work he did over the summer in Canada. 11-12 in the Interact as always…

The AWARE event took place in the University of Edinburgh’s Informatics Forum on 25th September 2012. There were 39 delegates at the event, representing a wide range of academic institutions, commercial organisations, and organisations and individuals from within the autism community. The meeting had several goals:

  • to share academic and commercial expertise on developing technology to support people with autism spectrum conditions (ASC).
  • to model best practice in software development, involving end users in design and working from a strong theoretical and evidence base.
  • to explore ways of making academic research output available to the public for whom it was created.

The morning session comprised four presentations on the software development process. Thusha Rajendran gave a presentation on psychological theories of autism as a grounding for design; our own Judith Good discussed the participatory design process, drawing on her design work on the ECHOES project; Kate Ho discussed the implementation process from the perspective of a commercial technology developer; and Sue Fletcher-Watson tackled the thorny issue of evaluation.

I thought these presentations were excellent and fitted together very well. Thusha gave a concise and helpful summary of our current psychological understanding of ASC and how this can guide technology design; Judith’s presentation, emphasising the importance of giving end-users a voice and the need for us, as researchers, to become better observers, reminded us why we do the work. I sensed that Kate Ho’s description of the implementation process, which included a discussion of technology options and finance, provided many with a useful insight into the practicalities of developing and distributing commercial software. In her presentation, Sue acknowledged the challenges of evaluating products that traverse the divide between the academic and public arenas – balancing a responsibility to the community of users with a recognition that the cost and timescales of a rigorous evaluation might not be appropriate for apps that are often inexpensive and ephemeral. She suggested adopting appropriate outcome models and expectations, and building assessment tools into platforms, as steps toward addressing this difficult issue.

The morning presentations were followed by a chaired panel discussion with representatives from the autism community. The discussion focused on the panel members’ experience of using technology to support persons with ASC: how persons with ASC often have an affinity for technology, and how it can be empowering, providing them with a safe space for self-expression and learning – though common sense needs to be exercised in managing its availability and use. Also discussed were the need for software that is age-appropriate (i.e. software that acknowledges an individual’s chronological age with respect to social and cultural interests while accommodating specific cognitive difficulties) and the recognition that some of the best software is not autism-specific. Not for the first time, the issue of heterogeneity was raised – and hence the requirement that software can be customised for the particular needs of the individual. Also considered were the difficulty of finding the right app when there are so many available, the issue of consistent and appropriate categorisation and, again, the problem of evaluation.

During the lunch break we had an opportunity to check out various interesting demos including ECHOES, the NAO programmable robot, Zhen Bai’s augmented reality research system and artist Wendy Keay-Bright’s ReacTickles software. I was hooked up to an EDA sensor for the afternoon, courtesy of Ilumivu.

In the afternoon session we had three case study presentations: Ofer Golan (Mindreading and The Transporters), Sarah Parsons (COSPATIAL) and Wendy Keay-Bright (ReacTickles and Somantics). I thought Wendy Keay-Bright’s work particularly inspiring.

This session was followed by a second panel discussion, with panel members from the academic, commercial and charitable sectors. Here the focus was on getting the technology developed in academic institutions into the public domain. Questions of access to technology, platform and version support, and the costs of development and support were discussed in a sometimes intense debate. Concrete suggestions included collaborating with commercial organisations and tapping into one’s academic institution’s intellectual property and enterprise development expertise. Towards the end of the session we touched briefly on the fact that the standards and requirements of academia may be at variance with the demands of the commercial sector – e.g. with respect to project timescales and product testing & evaluation criteria.

Beyond the interesting presentations and lively discussion panels, the AWARE event was a great opportunity for networking. I was struck by the enthusiasm and commitment of the people who attended the event, and for many I spoke with, autism is clearly more than a purely academic interest.

This is the start of what will hopefully become a new series on the blog, where we keep track of the links that have been flying around on the HCT mailing list. It is after all easier to search a blog than an email inbox! So with no further ado, we present this week’s links:

First up, from Michelle: Special Issue of Simulation & Gaming journal, on engagement, simulation/gaming and learning

Could be relevant for a few of the lab members.

One from Lesley: Interactive Technologies and Games Conference 2012 – Call for papers

A small, friendly conference looking at the latest interactive technologies. Well worth a look (and quite possibly a submission).

Another from Lesley: Foundations of Digital Games 2013 – Second call for papers

“The goal of the conference is the advancement of the study of digital games, including new game technologies, capabilities, designs, applications, educational uses, and modes of play.” – again, could be relevant to a few lab members. (The fact that it’s in Crete is neither here nor there, obviously…)

Lesley actually sent a batch of links around mid-week, so this is her commentary on them:

http://www.vancouversun.com/technology/Digital+Doctor+your+phone/6532253/story.html

What a great behaviour modification tool for kids… I was thinking that something like this might be very motivating for kids with Asperger’s if you related it to their emotional behaviour / communication goals rather than to physical movement…

http://www.vancouversun.com/business/Digital+Doctor+your+phone/6532253/story.html

and interesting apps for quantified self / wellbeing too…

The Digital Doctor is in … your phone

Video game makers join forces with medical experts to design apps for improved well-being

http://www.washingtonpost.com/blogs/ezra-klein/wp/2012/09/28/the-economics-of-video-games

economics of video games

http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

david deutsch on AI

http://store.makerbot.com/replicator2.html

makerbot

http://www.bbc.co.uk/news/education-19647938

coursera

http://www.wired.com/gadgetlab/2012/09/cosmo-the-god-who-fell-to-earth/all/

cosmo the hacker

http://www.ncrm.ac.uk/training/show.php?article=3821

public lecture on multimodality and learning

http://2012.uxbrighton.org.uk

UX Brighton, Nov 2nd – Tickets are now on sale for the 3rd annual UX Brighton conference, to be held on 2nd November 2012. This year’s theme is Past & Future Interactions – a mix of practical and theoretical, commercial and academic talks from a range of speakers including Alex Wright of the New York Times, Mike Kuniavsky of Adaptive Path and ThingM, and Sri Subramanian of Bristol Interaction and Graphics.

Katy sent along this link to a really interesting project from Brighton University: http://www.wired.co.uk/news/archive/2012-10/09/music-memory-box

It uses RFID tags on objects that have meaning for a person with dementia to trigger music that links to a person or memory. A really nice use of tangibles.

And finally, this is one Ellie didn’t actually send round to everyone, but seems an appropriate reminder that feedback/criticism can be painful (warning, contains swearing): http://penny-arcade.com/comic/2012/09/28

Good turnout at the lab meeting this week (sadly with apologies from Lesley and Jim). We had a couple of Masters students join us as well, and Judith supplied cake to maintain our energy levels throughout.

This week Judith presented an expanded version of a talk that she gave at the AWARE event in Edinburgh a couple of weeks ago, titled “Autism Software: Design(ing with users)”. AWARE was set up to bring together software developers, researchers and various others with an interest in creating and using software for children with autism spectrum conditions (ASC).

Judith’s talk focused on the design of this software, but rather than laying out hard and fast rules about what works or doesn’t, she concentrated on a design process using participatory design (PD) and on what she has learnt from using this process on the ECHOES project. The aim of the ECHOES project was to create an environment in which children aged 5-7, both with ASC and typically developing, could explore social interactions such as shared attention. It featured a large multi-touch screen, shown below:

Giving feedback on ECHOES

Judith started by framing the problem as “how do we design software that children with ASC want to use?” She felt that this meant software that was not only enjoyable but also empowering, and that in order to understand how to achieve this, these users really ought to be involved in the design process.

One of the tenets of PD* is that the people who use a product (or service) should have a voice in its design. When designing for children in particular, people assume they know what children like and want, but actually we adults don’t really get it. This is even more important with children with ASC: their experiences are likely to be quite different, as a group they are incredibly heterogeneous, and the aspects of voice and control can be incredibly important to them.

There are many methods available for involving people in PD, including playing games, telling stories and making things like paper prototypes. However, many of these can be challenging with children, never mind children with ASC! The processes are unfamiliar and may be unintentionally stressful. Judith cautions against seeing users solely as “repositories of knowledge”, focussing instead on how to build a rapport, on understanding people more broadly, and on what it really means to “have a voice”.

So, with all that as a background, Judith then shared some of the “lessons learned” from ECHOES. She said that they didn’t find out what they set out to learn, but that the things they did learn had a far greater impact on the understanding of the project than those initial questions would have had.

She showed many video examples of moments of unexpected insight, patterns spotted that seemed inconsequential in a single user but showed up unexpectedly and repeatedly. There were things that came through clearly in the body language and actions, but were never verbalised. The tools that were built to solicit feedback from the children were seldom used in the way they were expected to be, but were still able to facilitate a discussion around the design.

The main lessons learnt were that prototypes were an excuse to engage people early, and gave the team something to build a rapport around and to scaffold the interaction. It is important to become better observers of people, particularly of the process of interaction, since much is never verbalised. By all means start with questions in mind, but be prepared to ditch them and learn something else from your research!

The lab group had a good discussion at the end of this talk, around the possibility of using the process without really having an “end product” – focussing on the communication and interaction mechanisms of the process itself rather than on the outcome. Judith and others have begun looking at what it is about the software that helped to create this opportunity for interaction (indeed, some of the problems with it sparked the most communication), with Liz noting that conversations with children mediated through a separate object are not uncommon in therapeutic situations. Eric pointed out that it would be important to include conversation starters in your prototypes – which Ellie suggested could let developers off the hook with bug-fixing!

Next week we have Ben telling us about a grant proposal he’s been working on over the summer. 11-12 in the Interact Lab, and you never know, there may even be more cake…

*(yes, I’m afraid that’s a Wikipedia link. Please suggest better links in the comments!)