Archives for posts with tag: learning

…or a book or a journal article, anyway! This was the question facing the group this week, in the first meeting focussing on a lab member’s goal. Ben has been trying to write something quite specific in content, but the format has been a little unclear and that has been causing him issues with the writing. 

Ben’s initial task for this week was to produce a 2-page book blurb, to send around prior to the meeting for us all to read. Without giving too much away (he is trying to publish this, after all), he wants to produce an overview of current research in the area of intelligent tutors. He presented his ideas to us, sketching in some of the detail around why he wants to do this. Ben has been interested in the general area of intelligent tutors for a long time now (as he kept emphasising by referring to work he did in the last century!) and feels that the area currently needs this overview, looking at how the various pieces of the current research fit together but also showing where the gaps are. He says he isn’t really interested in providing the filler for these gaps. Ben imagines his reader as a would-be PhD student looking for an interesting hole to plug. 

The discussion about what format the piece should take was interesting. Everyone agreed there was enough material for a book. Some felt that Ben didn’t need to write the whole book himself, but should ask for chapters from people currently researching in the various areas. The resounding group opinion was that Ben was in the perfect position to introduce the subject and pull all of the threads together. The big difference of opinion was around the journal paper. Some felt that an initial journal paper should also be written, almost as a call-to-arms for possible contributors. Ben actually rather liked that idea, and felt that quite apart from anything else the journal paper would be a less daunting proposition! 

So the process has hopefully given him a good, clear next step. He even thought he might be able to commit to finishing the journal paper by mid-summer. In management-speak he now has a SMART objective, rather than a vague goal. Witness the power of presenting to the group! 

In other news, Jim and Ellie’s African Farmer Game has been attracting attention, with contact from a journalist reporting for El País recently. Also, Eric let us all know that he has managed to find a new job with Rica working on consumer research for older and disabled people, which fits right into his recent research interests but sadly means he won’t be around much. Still, we have him for a couple of weeks yet. 


Again, this is a much-delayed write up of a meeting that happened earlier in term. Many apologies to Ben. 

Occasionally at our lab meetings we are privileged enough to get sneak previews of talks that our lab members are going to be giving elsewhere. Some may call us the guinea pigs. This was one of those times, with Ben giving us a first run through a talk he had been asked to give on “the past, present and future of educational technology”.

Ben started his talk with the past technology that he had started his educational career with – slate tablets(!), log tables, and a leather ferula for motivation. He reminisced happily about the slide rule he had at university, before moving on to his PhD, showing us a picture of a darker-haired Ben with possibly the first ever Logo turtle made of Meccano. However, rather than a screen and a programming language for people to work with, they built a box with buttons that controlled the turtle and allowed them to record subroutines and build them into programs. The widespread use of computer screens happened sometime later.

Ben du Boulay

Ben sans turtle.

What he found with that turtle was that technology can facilitate communication, whether between teacher and student, or between the students themselves. Technology is also very good at providing information – which is as distinct from education as a library is distinct from a school or university. This is one of the central ways that technology is used currently, with the internet providing more information than ever before. The third way that technology is increasingly being used is tutoring. It can help to increase the skills and develop the knowledge of the users in a variety of ways.

Ben went through a list of different pedagogies, and the various ways that people have used them to inform the design of online tutors. He then moved on to an area of future research that he is particularly interested in. He has noticed a trend in the area over recent years of measuring the affect of the learner, of trying to automatically recognise the state that the learner is in. His feeling is that whilst this is an interesting area there has been little work done on what to do with this information. How should it inform the pedagogical approach of the tutoring system? Perhaps even more basically, what is the ideal state for learning? We have seen in papers reviewed by the group that some people feel that flow may be that state, but flow is poorly defined and a level or period of frustration would also appear to be beneficial. Ben also feels that the approach to the emotions of the learner has focussed very much on their use of the tutoring system (understandably), but as teachers and learners we know that the attitude and emotion that someone brings to learning may have little or nothing to do with either the subject or the type of learning, but everything to do with external events. Can the writers of these systems cope with a learner whose attitude changes between every session?

These are things that the very best human teachers grapple with on a daily basis. Creating online systems that can approach this is still some considerable way off, if it is even possible, but it’s a fascinating area to investigate! Many thanks to Ben for sharing this with us.

TERENCE was a three-year project that aimed to produce an adaptive learning system to help children learn to comprehend what they read. The University of Sussex were involved in the evaluation of both the learning outcomes from the system and the usability of the system itself. Nicola Yuill and Eric gave us an update on their findings, with particular emphasis for our group on the usability.

Due to the various constraints on the project, the main methods for evaluating the usability of the system were a series of pre-deployment expert-user focus groups and, after the pedagogical intervention, a series of interviews with the users themselves (both students and teachers). The project had essentially 3 GUIs – one for the experts creating the content, one for the educators, and one for the learners. This evaluation was aimed squarely at the learner interface.

TERENCE was deployed across 4 hearing schools and 4 hearing impaired schools in the UK. There were 83 hearing students and 24 hearing impaired. The interviews with the hearing impaired students were carried out with the help of an interpreter. The system was also tested in Italian schools, but the outcomes of those tests were analysed by a different team.

Eric said that several themes had come through in the discussions with both the teachers and the learners. There were a few technical difficulties with this ambitious project, resulting in a number of usability problems with logging in and glitches when the learner was reading. There were also some issues with the level of the content – the stories were too difficult for some of the students, and this led to them getting bored and just trying to click their way through without actually comprehending the content. However, after the end of the study, Nicola said she had a number of phone calls from the schools involved asking how they could continue to use the project, suggesting that the usability issues were not enough to dissuade the learners from wanting to use the software.

The group as a whole were interested to hear about the use of the system logs, which hadn’t been used much in the analysis because they hadn’t been designed with these questions in mind. As several of us within the group have written systems that include logging, we all recognised how hard it is to design a system that is able to help answer many unforeseen questions! Also some kinds of information may be difficult to log – e.g. if a login fails due to network error, that request may never have reached the server to be logged, or the time spent on a given task may be difficult to track if the person can open the task, walk away, make coffee, chat to three people, then come back 40 minutes later and complete it.
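One common workaround for that last problem is to estimate time-on-task from the gaps between logged events, discarding any gap long enough to look like a coffee break. A minimal sketch (the five-minute cutoff is an assumption for illustration, not anything from the TERENCE logs):

```python
from datetime import datetime, timedelta

IDLE_CUTOFF = timedelta(minutes=5)  # assumed threshold for "walked away"

def active_time(events):
    """Estimate time-on-task from a sorted list of event timestamps,
    ignoring gaps longer than IDLE_CUTOFF (coffee, chats, etc.)."""
    total = timedelta()
    for prev, cur in zip(events, events[1:]):
        gap = cur - prev
        if gap <= IDLE_CUTOFF:
            total += gap
    return total

events = [datetime(2014, 5, 1, 10, 0),
          datetime(2014, 5, 1, 10, 2),
          datetime(2014, 5, 1, 10, 45),   # 43-minute gap: treated as idle
          datetime(2014, 5, 1, 10, 47)]
print(active_time(events))  # 0:04:00
```

Even this crude rule needs the logs to record fine-grained events in the first place, which is exactly the kind of design decision that is hard to get right before you know the questions.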

The TERENCE project is now beyond the 36 months for implementation, and it was really interesting to see how far they got in that time. Many thanks to both Eric and Nicola for coming along to share this with the group.

At this week’s meeting we had Kate Howland telling us about what she’s been up to recently. Kate has been working as a lecturer in Informatics since successfully completing her PhD last year, and this was a look at the research work she has been doing.

She initially started out working on the TERENCE project, which is a large EU project supporting reading comprehension in children aged 7-11, with and without hearing impairments. The project is a web-based game-like tool using an adaptive learning system to match the difficulty level to the skill of the child. Kate was involved with Nicola Yuill in conducting an initial small scale evaluation with users and teachers. Sadly due to the timing of the study (mid-July, just as all the schools go on summer holiday), they were unable to find the numbers of child participants they had been hoping for, so a lot of the feedback they got was from the teachers. This threw up some interesting results in itself. The game is designed to be used by an individual child, and the adaptive learning system models that child and their progress. However, some teachers said that they would be far more likely to use it in a full-class situation, discussing the solution with the class as a whole before responding. This is somewhat counter to the design of the application, and a useful thing to find out.
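The idea of matching difficulty to skill can be sketched as a simple staircase rule: step the level up after a run of correct answers, down after a run of errors. This is purely an illustration under assumed thresholds, not the actual TERENCE adaptation algorithm:

```python
def next_level(level, last_three, lo=1, hi=10):
    """Staircase sketch: adjust difficulty from the last three
    comprehension answers (True = correct). Thresholds and level
    range are illustrative assumptions."""
    if all(last_three):        # three right in a row: make it harder
        level = min(level + 1, hi)
    elif not any(last_three):  # three wrong in a row: make it easier
        level = max(level - 1, lo)
    return level               # mixed results: stay put

print(next_level(4, [True, True, True]))    # 5
print(next_level(1, [False, False, False])) # 1 (already at the floor)
```

The teachers' full-class use breaks exactly this loop: the model adapts to whichever "child" is answering, which with whole-class discussion is really the class plus the teacher.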

Another project that Kate was involved with was the Expertise scoping study, alongside Caroline Bassett. This was looking at the concept of expertise in digital technologies, particularly as they intersect with cultures and communities. A series of interviews was carried out with members of a wide variety of different communities to build up a picture of the way different people view their ability with digital technologies and what they perceived as the barriers to them becoming “expert”.

Kate was involved with a local school, and rather than conducting interviews in isolation ran an activity with them to see them actually using their digital skills. She started with a taster session where they came to campus and used the green screen technology to create some short film clips. Then she ran a series of after-school events, leading the children through the process of making their own stop-motion animations. She showed us one of the resulting animations, which was really fun! She felt that this followed on from her thesis work, and allowed her to examine the way that children constructed a narrative and could be supported in doing so by digital means. She found it generated a lot of interesting data, but needs a bit more time to analyse it properly.

The third project Kate has been involved in this year is the Space Invaders project with the Centre for Innovation and Research in Childhood and Youth. This has two parts: a competition for young people to submit short videos on their experiences of social media and online gaming (as opposed to adults discussing what they think happens with children in these situations), and a live debate as part of the Brighton Fringe where the top 10 videos will also be shown.

As well as all that, Kate is also part of a project that has secured funding and is due to start in September. Face 2 Face (too new to have a web page yet!) is a collaboration with the School of Education and Social Work, the University of Brighton, and the Open University. It will examine the ubiquitous nature of screens in our lives these days, and the effects that this is having on children and families.

So combined with teaching, it’s been a spectacularly busy year so far for Kate! There’s some really interesting stuff to follow up on with the work she’s been doing, and we look forward to seeing what happens next.

We finally managed to get Trevor Nesbit to stay in one place long enough to give a talk to the lab group this week. He’s been visiting from the University of Canterbury since the start of December, but has been disappearing off to conferences and on visits hither and thither since.

Trevor’s research stems from his teaching experience. He started out teaching small classes of less than 50 students, and then more recently moved to teaching classes with 200+. He found that some of the techniques that were extremely useful in smaller classes (e.g. small group work, where one member of the group feeds back to the rest of the class) really didn’t work in these larger settings. In fact, as a baseline for his later work he ran an experiment: a group exercise was set just before the 10 minute break in a 2 hour lecture, to be reported back on after the break. Of the 300 students in the class, only 80(!) turned up after the break.

So Trevor has been looking at ways to use what he calls “Prevailing Personal Social Communication Technologies” to overcome some of these difficulties and increase engagement in larger lectures. He started by creating a system that allowed students to use free text messages to anonymously send responses to the lecturer, who can select which to share. This gets past the problem of inappropriate responses! Trevor has used this system in a number of tests, from asking for just the names and favourite colours, through to working out numerical answers to problems and generating an “ask the audience”-style frequency chart, and on to reporting back from small groups.
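The core of such a system is a moderation queue: submissions arrive with no sender identity attached, and nothing reaches the projector until the lecturer approves it. A minimal sketch (class and method names are made up for illustration, not taken from Trevor's system):

```python
class FeedbackChannel:
    """Sketch of an anonymous lecture-feedback queue: students submit
    free text, the lecturer reviews each message and chooses which to
    display, so inappropriate responses never reach the screen."""

    def __init__(self):
        self.pending = []
        self.displayed = []

    def submit(self, text):
        # No sender identity is stored, so responses stay anonymous.
        self.pending.append(text)

    def review(self, approve):
        # approve: a lecturer-supplied predicate deciding what to share.
        while self.pending:
            msg = self.pending.pop(0)
            if approve(msg):
                self.displayed.append(msg)

channel = FeedbackChannel()
channel.submit("please speak up")
channel.submit("something rude")
channel.review(lambda m: "rude" not in m)
print(channel.displayed)  # ['please speak up']
```

For the "ask the audience" use, the same queue of approved answers can simply be tallied into a frequency chart.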

He and his collaborators have noticed some interesting long-term effects of using this kind of anonymous feedback mechanism:

  • the inappropriate responses tailed off. They weren’t high to start with (around 4 out of 70+ responses), but (possibly because they weren’t getting through) they stopped as the term continued.
  • they started getting other feedback that was useful, such as “please speak up” and “you’re going too fast”. These aren’t things that people are likely to be comfortable with saying in front of 300 of their peers, but the system allowed them a way to communicate.
  • after a few weeks using the system the groups appear to be happier to participate in general, not just via their mobiles. Hands-up style votes often only get around 20% of the group participating, whereas all the lecturers using the system saw an increase.
  • there was anecdotal evidence that the students were helped by seeing the feedback from other students, particularly when they were asked to explain how they had reached a solution.

The phone-based technology is slightly clunky and difficult to use, and an obvious change is to use smart phones. A group of third year students have developed a web app to do roughly the same thing in a less clunky manner. Trevor has plans to use this with classes in the coming year, with more formal measures of class engagement now that the system is more established. He also mentioned that care needs to be taken to design the participation exercises carefully – just as in smaller classes.

Ben wondered if there was any way to link the results of the students to their use of the tool, but Trevor said that as the experiment was to run on subsequent cohorts there may be some underlying differences between the cohorts that could affect the results. There is already a well-documented link between engagement and improved performance, so that is less important than proving an increase in engagement. Ben suggested that maybe you could compare the students who sent messages to the students who didn’t, but Liz pointed out that this gets into the debate about the value for lurkers, who may gain a lot from observing the participation of others without participating themselves. The group suggested that there could be a lot of value in the anonymity of the system not only for asking “stupid” questions, but also for the people who are seen as always sticking their hands up!

Trevor said they’d had a “bit of hassle” with earthquakes in Christchurch. Fingers crossed for some smooth sailing to pursue this research (and maybe rebuild!).

This was the final meeting of term, and so it was only fitting that Edgar provided us with some of his fabulous food. Cochinita sandwiches all round. Truly delicious, thanks Edgar! And lo, the frozen cakes survived the term. Who knows, maybe some desperate PhD student roaming the lonely corridors over the Christmas period will be sustained by finding these goodies… We’ll find out in the new year.

Merry Christmas all!

We had a new attendee this week. We have Trevor Nesbit visiting from Christchurch, New Zealand until the end of December. Hopefully we will find out a little more about his work in a later lab meeting, but for this meeting we let him off with a quick introduction and let him settle in and listen to Judith.

Judith was talking to the group about the results of a course she taught last year – Technology Enhanced Learning Environments (TELE). A core component of the course was a group project, and to help to make this project more interesting and relevant Judith asked contacts from a group called Digital Education Brighton to get involved. Three different schools came forward, and the groups were asked to go through the process of gathering requirements by a range of means and producing a motivating learning experience for secondary school children around programming. Judith had three of the finished projects that she wanted to share with us.

Project 1 was a very nicely produced set of resources for teachers, including YouTube tutorials and a PowerPoint self-guided learning system that led the students through creating a small game in Scratch. The graphics were fantastic, although Judith felt that the material itself was somewhat behaviourist and not particularly ground-breaking.

Project 2 was a great little game called Blobs, which made use of a puzzle-solving mechanic not unlike Lemmings to explore the differences between classes and objects with properties and methods. A series of different types of blob with a variety of different properties and methods had to be used together to solve levels. Pejman had a little difficulty with the third level, so this is by no means straightforward! Again, high production values and a lot of thought had gone into this, but it would be interesting to see whether this did actually help children understand coding. Ben asked if in later levels they introduced actual code, which would be an obvious next step that hadn’t been completed.

Project 3 was awesome. Rather than teach programming (the school they were working with already did rather well at that) they looked at software design, with a particular emphasis on user-centred processes. It was a Flash program (I’m hesitating to call it a game) that students could go through, collecting requirements from a school that needed a particular piece of software building. Different parts of the scenario required different data-gathering methods – e.g. a recorded message from the headmaster, a questionnaire for the parents, focus groups etc. There were extra bits of information available on all the methods in the program, which was very polished. In addition to this, they had prepared lesson plans for the teacher, work sheets to go with the lesson plans, presentations for the teacher to go through for each lesson… It was extremely complete, and a very interesting angle to take on the problem.

Judith’s main problem is that she wants to make these resources available (along with others from other years), either for teachers to use as is, or for further development/modification. The obvious way to do this is via the web, and the group had a variety of suggestions for how that could be achieved (many featured undergraduate labour – if you are an undergraduate who fancies building this let us know!). It was good to hear about the course and the projects.

Next week features a change of venue, as the Interact Lab is being used for some teaching. We will therefore be colonising the IDEAs lab from 11-12 next Tuesday instead. We remembered biscuits for this week, so those frozen cakes still remain and will be out of reach next week. There’s a good chance the cakes will make it to Christmas at this rate!

Another good turn out at the lab meeting today (sadly Lesley sent her apologies) for Ben du Boulay talking about a grant proposal he’s been working on over the summer. The proposal is for research into “Mental models and the learning of programming”. The proposal has been developed in partnership with Saeed Dehnadi and Richard Bornat (both from Middlesex University).

The work is a continuation of the work that Saeed Dehnadi did for his PhD. In that work, the 12-question Dehnadi test was constructed as an attempt to identify an innate aptitude for programming among people starting a computer science course. The questions are written in a sort of pseudo-code, and the participants are given no instructions as to how this works before they start (see figure below). They are asked to select the correct values for variables assigned in the code from a multiple choice list of possible answers. These are then marked, with three options – the person has correctly and consistently interpreted the questions, they have been consistent but incorrect, or they have been totally inconsistent and incorrect with their responses. The results of these tests were then compared with the end of course exam results. There was a significant chance of passing the end of term exam if you started out in the consistent and correct or consistent and incorrect group, and a significant chance of failing the exam if you started out in the inconsistent group.

A sample question

This suggests that consistency (understood as indicating a systemic understanding even if it’s not the understanding the testers expected) is more important than correct answers in predicting which students will do well, and suggests there is an underlying construct that is missing and needs to be taught for the inconsistent group. The two groups were labelled as algorithmic (consistent) vs. non-algorithmic (inconsistent).
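That three-way marking can be sketched as follows. The idea (not the actual Dehnadi marking program) is to tag each answer with the mental model of assignment it is consistent with, then see whether one model dominates; the model names and the consistency threshold here are illustrative assumptions:

```python
from collections import Counter

def classify(responses, correct_model="M0"):
    """Three-way marking sketch: 'consistent correct' if (almost) all
    answers follow the expected model, 'consistent incorrect' if they
    follow some other single model, 'inconsistent' otherwise.
    Model labels ('M0', 'M1', ...) are hypothetical."""
    counts = Counter(responses)
    model, n = counts.most_common(1)[0]
    if n < 0.8 * len(responses):      # assumed consistency threshold
        return "inconsistent"
    return "consistent correct" if model == correct_model else "consistent incorrect"

print(classify(["M0"] * 12))                   # consistent correct
print(classify(["M2"] * 11 + ["M0"]))          # consistent incorrect
print(classify(["M0", "M1", "M2", "M3"] * 3))  # inconsistent
```

The interesting twist in the findings is that the first two outcomes predict similar exam success: it is the third group, with no dominant model, that struggles.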

One study was also done where the students re-took the test after their final exam. Interestingly, there were some who had passed their exam but still did not exhibit a consistent approach to the Dehnadi test – which might possibly raise questions about their deeper understanding (and just possibly the teaching and exam-setting!).

The goals for the research proposal are to try to understand the underlying problems that dog some of the people who are trying to learn to program, particularly aiming to contribute to the current debate around learning programming at school and university. They do not plan to offer any solutions to the problems. The current goals are:

  • Looking for more data from the start of the course to correlate with the end of term results.
  • More data looking at re-taking the Dehnadi test after the final exam.
  • Possibly to look at different teaching methods with respect to the results of the Dehnadi tests at the end of the course (no suggestion as to which teaching methods would be best, just an exploration).
  • Further develop the test (which currently deals with variable assignment) to look at more complex things like loops, conditionals etc.
  • Include some longitudinal work, with interviews after the tests to discover what the person’s approach was to the test, maybe to identify strategies that were consistent but didn’t appear so to an outsider. Also to continue to follow people through their course to attempt to gauge the evolution of their understanding.

The test was originally paper-based, but is now online and a program has been written to “mark” the test. This means that the data is more amenable to different sorts of processing and could even be re-evaluated as more of the issues are understood. The proposal is currently with ESRC.

There was some discussion afterwards about the risk of discovering that all our teaching methods were ineffectual! Ben pointed out that there has been a lot of work done on teaching programming over the last 50 years, but students were still failing so there was obviously more work to be done. He did admit to being just a smidge worried about how many friends and colleagues they risked upsetting though! One point that got raised was that there has been a lot of work done on “Threshold Concepts”, which seemed very applicable to programming – if you don’t understand the first step, you can’t build on top of it. The ‘bi-modal’ results often seen in Computer Science suggest that this is a useful way to consider this.

Many questions were raised about the format of the questions – were the multiple choice answers necessary (we decided yes, for total non-programmers), the use of “int” instead of “integer” or something less programmer-like, etc. Pejman raised some questions about the motivation of the participants – were they really trying to get the right answer, or just trying to finish fast? Eric asked if any comparisons had been done against other tests – but Ben pointed out that if it’s hard to motivate participants to complete one test it’s going to be even harder to get accurate data from two!

Next week we have Pejman talking about some of the work he did over the summer in Canada. 11-12 in the Interact as always…