You are currently browsing the category archive for the ‘e-Learning design’ category.

Last Friday I did a presentation on the use of Google Reader & Delicious in creating a personal learning environment for the Facilitating Online Communities course.  I talked about how these tools work well together and how they can be used in education, using the evolution of our massage programme’s online communication structure as an example.

Here’s a link to the session – Creating a personal learning environment using google reader & delicious.

(It was held in Elluminate.  If you haven’t used this environment before, you should run through the setup process on the Elluminate support page to ensure that your computer is configured correctly.)

After my first trial of using the blogging rubric, I’ve decided that the rubric and the process both need tweaking.

In a post I made last December, I talked about our process of assessing blogging.  I decided that we would have two submission dates.  On the first submission date, the students would submit a draft, and I would give them feedback on whether they had met competency (based on their demonstrated knowledge of the subject area).  They would then have a chance to polish their post, and I would regrade it at the second submission date.  Sounds complicated?  Well, surprisingly enough, it is.  It seemed like a good idea at the time, but after running through it once I’m going to revert to our standard approach: allow them to submit an assessment, mark it completely, and then anyone marked as not competent is allowed one resubmission.  Simpler for the students.  Easier for me.  (I don’t know what I was thinking.)

The other thing that needs tweaking is the actual rubric.

After using it once, I’ve decided that the grading of community involvement is over-weighted.  In fact, requiring it has turned a natural process into an unnatural one.  It hasn’t seemed to increase authentic community involvement at all; rather, it has led to a few students incorporating references into their blogs and making comments on others’ blogs which are fairly pointless apart from the marks gained (I know Leigh, I know).

Another problem is that the use of reflection isn’t particularly relevant to this assessment, so I’ve modified the rubric accordingly.

Oh well, one step at a time.  We’ll get there in the end.  😉

I’m getting into the specifics of designing my assessments.  Last night I was thinking about how to structure my blogging assessment.  In my research methods class, I want to use blog posts in the early days to assess the students’ developing research knowledge and skills.  To do this, I want them to make four posts:

  1. Describe the research process (Week 9)
  2. Describe how information from different sources may vary in quality and how to differentiate good quality information from poor quality (Week 10)
  3. (Given the choice of several topics)  Describe your search process including the creation of your search query, databases accessed, sources found and information quality (Week 12)
  4. (Given several research articles of different types)  Assess the quality of the research findings in each case (Week 13)

I think these four posts will help to scaffold them into the task of performing first a joint literature review, then an individual literature review (more on the joint literature review later).

So that’s all fine, but when considering our assessment policies I realised that for every assessment, our students have the opportunity to resit the assessment if they’re marked as not competent on the first attempt.  At first glance, I thought that this was going to create a monster, however with a bit of thinking I’ve come up with a solution which I think might work.

The plan is to give the students two submission dates, one week apart.  To meet competency, the students will need to make a post on the topic and have that post graded at a minimum of 2 on the blogging rubric.  The marker will briefly review every student’s post, record key points of misunderstanding, and provide individual feedback on the blogs of students who have not met the competency requirement.  They will then create generalised feedback for the class as a whole which clarifies the main areas of misunderstanding.
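A minimal sketch of that first-submission pass (the student names and scores here are invented; “competent” simply means a rubric score of at least 2):

```python
# Hypothetical sketch of the first-submission pass: flag any post scoring
# below 2 on the blogging rubric so its author gets individual feedback.

COMPETENCY_THRESHOLD = 2

def needs_feedback(graded_posts):
    """graded_posts maps student name -> rubric score for their draft post.
    Returns the students who have not yet met competency."""
    return sorted(s for s, score in graded_posts.items()
                  if score < COMPETENCY_THRESHOLD)

# Invented example data:
drafts = {"Ana": 3, "Ben": 1, "Caro": 2}
print(needs_feedback(drafts))  # ['Ben']
```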

The students will then have a week before their final assessment to read the posts of other students, to develop their understanding, and update their original post if they like.  My hope is that this period of reflection will help to stimulate cross-fertilisation of ideas.  At the end of this week the blog post will be graded using the complete rubric.  This rubric has been updated based on the feedback of Whitney & Leigh – thanks guys.  Here is the updated version.

This process will be reasonably time-intensive, but I think it should be manageable.  It strikes me as a teaching model much more along the lines of George Siemens’ curator.

A curatorial teacher acknowledges the autonomy of learners, yet understands the frustration of exploring unknown territories without a map. A curator is an expert learner. Instead of dispensing knowledge, he creates spaces in which knowledge can be created, explored, and connected. While curators understand their field very well, they don’t adhere to traditional in-class teacher-centric power structures. A curator balances the freedom of individual learners with the thoughtful interpretation of the subject being explored. (Siemens, 2007)

Siemens talks about the role of the curator being to locate and structure an “exhibition” of learning objects or resources which the students are then free to explore.  The teacher as a guide rather than the font of all knowledge.

Carrying on from my last post, I’ve developed a rubric for assessing the blog posts of my students.

Initially I intended that the rubric should motivate the students to

  1. Develop understanding of key subject areas
  2. Act in ways which will support the development of a learning community

However as I got into the process of nutting-out how this was going to work, I realised that it’s also important that it motivates the students to write well, reflect on their process, and develop good scholarly habits (i.e. referencing and referring to sources outside of the ones provided in class).

The rubric is a work-in-progress rather than a finished product.  It contains 5 categories with a total of 20 marks.

One of the problems I’ve identified is that understanding of the subject of the blog post is perhaps not weighted heavily enough.  I think it should probably have a weighting of 2 or so, but I do like that nice round number 20 as a total, so I’d need to either merge two of the other categories or weight two of them with ½ weights.

Perhaps the second option is best.  It would give me a certain degree of flexibility.  If I were using this rubric in a course where reflection was particularly important, I could weight writing quality and scholarship with ½ weights.  If in another course scholarship and writing quality were particularly important, I could weight community involvement and reflection with ½ weights.
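To sanity-check the arithmetic, here’s a quick sketch of one of the weighting options above (it assumes each category is scored out of 4, which fits the 5-category / 20-mark structure; the particular ½-weight choice shown is just one of the options discussed):

```python
# With five categories each scored out of 4 and unit weights, the maximum
# total is the nice round 20.  Weighting understanding at 2 and halving
# two other categories keeps the weights summing to 5, so the maximum
# total stays at 20.

MAX_CATEGORY_SCORE = 4

def rubric_total(scores, weights):
    """Weighted sum of per-category scores."""
    return sum(scores[c] * weights[c] for c in scores)

weights = {
    "understanding": 2,
    "community involvement": 1,
    "reflection": 1,
    "writing quality": 0.5,
    "scholarship": 0.5,
}

top_marks = {c: MAX_CATEGORY_SCORE for c in weights}
print(rubric_total(top_marks, weights))  # 20.0
```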

What do you think?

I’m getting into writing assessments for next year, and it’s clear that some aspects of our assessment model need to change. The main drivers for me are the need to increase engagement in online learning activities, workload reduction, and improving feedback.

Assess them and they will come

In my review of how things have gone this year, one of the things that’s really stood out for me is the fact that the level of participation in the learning activities that I set for my students this year was not even close to a level that I would be satisfied with. It’s clear to me that their learning has been impaired as a result (or at least their learning of the material that I wanted them to learn), and I’m pretty sure that the one thing that would have led to more participation would be more assessment.

Taming the workload beast

But we already spend too much time marking assessment! In a recent staff meeting, we talked at length about workload reduction. One thing that takes up a considerable amount of our time is marking assessment. I’m sure that I can design assessments to involve less workload for the assessor.

Anderson (2008) describes a range of methods that may reduce assessment-related workload for teachers:

  • Automated assessment processes – ranging from formative tests (simple) to virtual labs and simulation exercises (complex)
  • Online automated tutors
  • Use of neural networks & other artificial intelligence methods
  • Peer review (of either students within a specific course, or students within a network of similar courses)
  • Student creation of open educational resources which are then assessed by lifelong learners who are using the resources (Farmer, 2005 as cited by Anderson, 2008)

Formative tests are fairly straightforward to implement. They take some time to set up, but then they’re there to use year after year. I have thought about creating a simulated clinical environment in Second Life, but at this point the creation of automatically marked simulations is well out of my financial ballpark, so I’ll move on.
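For a sense of how simple the automated end of that spectrum can be, here’s a hypothetical sketch of an auto-marked formative question with immediate feedback (the question, options, and feedback text are invented for illustration, not drawn from any real test bank):

```python
# Hypothetical sketch of an automated formative test item: the response
# is marked on submission and feedback is delivered immediately.

QUESTIONS = [
    {
        "prompt": "Effleurage is best described as:",
        "options": {"a": "a percussive stroke", "b": "a long gliding stroke"},
        "answer": "b",
        "feedback": "Effleurage is a long gliding stroke.",
    },
]

def mark(responses):
    """Mark a list of responses against QUESTIONS, returning the score
    and the immediate feedback for each item."""
    score, feedback = 0, []
    for response, q in zip(responses, QUESTIONS):
        if response.strip().lower() == q["answer"]:
            score += 1
            feedback.append("Correct.")
        else:
            feedback.append("Not quite: " + q["feedback"])
    return score, feedback

print(mark(["b"]))  # (1, ['Correct.'])
```

Once written, a bank of items like this costs nothing to re-run each year, which is exactly the appeal.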

The next two are also a bit too high tech, and high budget.

The last two options are possible if the students of the course are part of a learning network (Anderson, 2008). One of my goals for the future is to develop this network, but I think it’ll take at least a couple of years of students moving through the programme before this happens to any great degree.

Feedback

Feedback is crucial to the learning process, and this is something that we can definitely improve on. Formative tests that provide feedback directly following the student’s performance provide a wonderful development opportunity for students, and I believe that this is one of the real strengths of online education. According to Shepard (2000 as cited in Caplan & Graham, 2008), and Wiggins (2004), providing detailed feedback as close as possible to the performance of the assessed behaviour enhances student learning.

We should strive to “create assessments that provide better feedback by design” (Wiggins, 2004). I was inspired last year by the way in which Montessori school activities are based on this principle. Learning activities can be designed to provide feedback to students in the absence of the teacher. This can be facilitated through instructional design (Wiggins, 2004) or through social networks (Anderson, 2008). In my experience, when the courses I’ve been engaged with have required blogging, a community of learners has developed in which the learners support each other in their learning.

3 phase assessment process

After considering all of this, I’ve come up with a three-phase assessment process that I think would be fairly ideal for most of our online courses. Phases 1 and 2 test different grades of knowledge (simple and moderately complex) and overlap in time.

  1. Automated formative testing of discrete chunks of knowledge.
    Facilitator’s role: establish tests, monitor results
  2. Reflective blogging on key concepts in the first ½ of the course. Students are required to post on each topic and are rewarded for commenting, for updating their work based on later learning, and for referencing.
    Facilitator’s role: monitor class activity, encourage engagement, provide generalised feedback
  3. Final theoretical assessment which integrates learning.
    Facilitator’s role: mark assessment, provide feedback & opportunity for resubmission

Students are therefore rewarded for acting as good community members, are given feedback on their developing understanding & are assessed for their integration of knowledge.

The one slight issue with this model is that, if anything, I can see myself doing more assessing than I was previously. However, the formative assessment in the early stages of the courses will be integrated with my teaching, so in effect I believe I could save time with this approach.

What do you think? Can you see any big holes in my thinking here?

References

Anderson, T. (2008). Towards a theory of online learning. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 45-74). Canada: AU Press, Athabasca University.

Caplan, D., & Graham, R. (2008). The development of online courses. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 245-264). Canada: AU Press, Athabasca University.

Wiggins, G. (2004). Assessment as feedback. Retrieved December 11, 2008 from http://www.newhorizons.org/strategies/assess/wiggins.htm.

Research Aims

The research project has a number of related aims.

1. To review on an ongoing basis the experience (satisfaction vs. dissatisfaction) and achievement of students in the blended programme

2. To implement changes to improve student experience and achievement.

3. To determine whether blended learning is effective in massage education.

Background


In recent years an increasing number of educational institutions have begun to offer their courses by online or blended delivery. Massage educators have been slow to adopt these contemporary approaches to learning, but there are now a number of educational institutions offering massage therapy education either purely online or with a blended style of delivery (Remedial massage, 2008; How can you, 2008). Within New Zealand a number of educational institutions are considering the exploration of educational options within this area (J. Morgan, personal communication, June 14, 2008; B. Bernie, personal communication, June 14, 2008; H. Lofthouse, personal communication, June 29, 2008; T. Rodgers, personal communication, June 14, 2008). Many massage education providers consider online and/or blended delivery of massage therapy education to be inferior to traditional classroom-based delivery models (P. Charlton, personal communication, June 14, 2008; T. Rodgers, personal communication, June 14, 2008; A. Palmer, personal communication, June 14, 2008).

The online environment is rapidly changing, and a course which aims to utilise the richness of contemporary online applications may often use a technology in a way which has not been documented previously. An experimental style of educational delivery is therefore called for, in which the teachers involved in online education trial the use of an online application with a group of students in a particular way, then assess how effective the educational experience has been. The integrated group of technologies used to deliver the course is described here as the online learning environment.

The Otago Polytechnic massage therapy programme has recently undergone the transition from a purely face-to-face delivery style to a blended delivery style. The programme’s delivery makes use of contemporary online applications such as wikis, blogs, collaborative document editing, and voice-over-internet-protocol applications (such as MSN Messenger and Skype). This is new ground for massage therapy education, and in many ways for education in general. The department feels that there is a need to monitor students’ experience and achievement in this new context and to make changes to improve that experience over time.

Literature review

The online component of our course has an email forum (a Google group) for the year 1 students, which is intended to have a function analogous to the discussions we have in the classroom.

Because of the increased amount of time available for the students to reflect on the questions asked of them (due to the asynchronous nature of these discussions), I have expected that the majority of students would participate.  Accordingly, I have made the topics of discussion fairly important, or even central to the students’ learning in some cases.

The problem I’m having is low participation.  In a discussion topic posted last week, two students out of a class of 16 posted a response.  This discussion topic, while not directly assessed, is based around developing an assessment instrument that the students will use in their major piece of assessment for the course.  I’ve been fairly disappointed with the response rate as a result.  I’ve extended the period of the discussion and heavily pushed the point that this is a critical discussion for us to have, and that has led to contributions from 2 more students so far.

Maybe I’m expecting too much?  I guess if I compare this to a classroom discussion, I might get a similar response to some questions.

Does anyone have any ideas on how I can improve the response rate (short of making the students’ contributions an assessment item)?

Despite my efforts to make the course interface as simple as possible, I’m getting feedback that students are finding it difficult to find their way around and to get all of the information they need.  A couple have mentioned that they would prefer something a bit more like Blackboard.  😉

I’m not quite sure what to do about this.  I’m going to run a tutorial with some of the students who are having difficulty sometime in the next couple of weeks, and that should clarify exactly what the issues are (as it seems like it should be straightforward enough to me).

I’ve been having a good time with Survey Monkey – Thanks Leigh. 🙂
Monkey picture comes courtesy of S.A.M. Licensed under CC-BY. http://www.flickr.com/photos/s-a-m/401970009/

Survey Monkey

I just set up my online experience survey on the web this morning, and am already getting the kind of information I need. The questions are:

  1. Which of the learning modules have you started and which have you completed?
The Study Skills course (which is what I’m assessing at present) is composed of a series of learning modules built on the WikiEducator platform (still a work in progress).
  2. How would you rate the difficulty of your online learning experience so far?
  3. What if anything are you finding difficult about studying online?
  4. What if anything are you finding enjoyable about studying online?

I’ve just checked in & 10 students have replied. 10/23 – I’m almost happy to treat that as a representative sample. I’m very happy with the class progress so far, and the quality of the information.

It’s funny – I had the impression that much of the class was struggling, and I see now that it’s only because most of the class have been able to get on with it without needing to talk with me much. There is always a risk that the students who are less familiar with computers will lag behind the others in getting to the survey, so I might need to touch base with them over the next couple of days to check.

I’m hoping to use this type of surveying as a type of action research, but I’ll need to go through the ethical approval process before embarking on it.

For now, it’s functional. I’m planning to run through an Introduction to Sustainability module soon. Before they get to the module they must be able to search effectively for information on the internet, and to communicate effectively online (as each group will include a distance student). Luckily it looks like most of them are already there. 🙂

Last week I did a presentation for the new DFLP students, outlining what I’m doing with what I’ve learnt over the last year or so. The sound quality is pretty poor, but it provides a basic overview along with questions & answers.