Last Friday I did a presentation on the use of Google reader & delicious in creating a personal learning environment for the Facilitating Online Communities course.   I talked about how these tools work well together, how they can be used in education, and I used the evolution of our massage programme’s online communication structure as an example.

Here’s a link to the session – Creating a personal learning environment using google reader & delicious.

(It was held in elluminate.  If you haven’t used this environment you should probably run through the process on the elluminate support page to ensure that your computer is set up correctly.)

As part of Otago Polytechnic’s recent staff development mini-conference, Tim Brazier presented as the keynote speaker to the self-care panel. His presentation was based on a model which he uses to motivate his clients (top-level triathletes) towards the achievement of their training goals. While there was nothing particularly innovative about the model, I found it interesting to think about using this or a similar model to support and reinforce the motivation of our students.

There are parallels between tertiary students and athletes. Study at a tertiary level can be fairly demanding, and if students wish to achieve at a high level they need to be able to perform consistently. Motivation is a key factor in performance in any field, so identifying strategies that might help our students stay motivated towards their study goals could potentially aid achievement. While facilitating intrinsic sources of motivation is probably preferable in most cases, extrinsic motivators probably also have their place.

The model which Tim presented includes six stages:

1. Inspired

2. Inspire

3. Plan

4. Commit

5. Monitor

6. Recover

Inspired/Aspire

In order to enrol in a course of study, a student must be inspired by something. What do our students aspire to? Why do they want to be a massage therapist?

Any student is likely to have in their mind a picture of what working as a massage therapist entails. This picture will be based partly on what that term means to the influential people in their life (their family and friends), and partly on their own experiences. Their expectations will probably include the type of work that a massage therapist does, the scope of effectiveness of a massage therapist, and the potential pay-rate of a massage therapist.

Students coming into a course of study typically have fairly unrealistic or incomplete expectations of study and the realities of the profession they intend to enter. Ongoing discussion of these realities with the students should help to mould these expectations towards a set which is more reflective of reality (James, Baldwin, & McInnis, 1999). While this process of shaping expectations may not help to motivate students, it should help to prevent the dissatisfaction, and therefore de-motivation, that occurs when expectations are not met.

Inspire

How can we as staff inspire our students towards the achievement of this aspiration? Helping our students to clarify their reasons for study, and more specifically their learning goals, is a good first step, but currently this is where we stop.

The question that we now need to answer is this: once we have identified the desire that is motivating our students to study, how can we as staff motivate them towards the achievement of their goal(s)?

Some ideas

Regular review & possible revision of goals (the source of inspiration).

Identification of barriers & strategies to overcome those barriers.

These overall goals are likely to change over the course of study. This needs to be kept in mind.

Can we as staff model what the students are aiming for? This is probably partially provided by staff sharing stories of work-based experiences.

How can we make use of the second year students to inspire first year students? More interaction between first and second year students should help first year students to see where they are going.

  • Second year students providing massages to first year students in the massage clinic.
  • Peer tutoring.
  • Massage swaps in the classroom.

Plan

Once the goals have been set, a plan needs to be made which describes how these goals are to be achieved. This plan should include both long-term and short-term priorities as well as the time which should be committed to each element of study.

It is generally expected that students will be able to manage the planning and time management requirements of tertiary study. My experience is that this is often not a realistic expectation. It is therefore advisable to embed tuition in planning and time management within the course of study.

Currently we teach time management as an element of our study skills programme, which is simpler from a teaching point of view, but not ideal from a learning point of view (Wingate, 2006). Next year we plan to embed this within the programme by regularly putting aside time where the students are guided to plan their study. This guidance should be fairly directive in the early stages of the course, and should taper off as the students progress, to enable the students to internalise the skills of planning and time management.

Commit

Once the plan has been made, the student needs to commit to it. This commitment is largely in the hands of the student, however all of the interventions discussed so far will contribute to building the motivation which underpins consistent commitment.

We could support this commitment further by reminding students to review the previous period to see if they have met the goals which they set.

Monitor

Once a programme is in motion, the performance of the student needs to be monitored.

In education we typically monitor performance through assessments. While this is fairly effective in providing motivation through compulsion, it does not tend to provide very useful feedback on the intrinsic motivation of students. In fact, compelling learning in this way seems to gradually drain the interest in learning about the subject material which students often begin with, the natural result being a reduction in intrinsic motivation.

It could be argued that if we are really interested in the achievement of our students then monitoring motivation is of similar importance to monitoring achievement. How could this be done?

One of my colleagues has a regular practice of a one-to-one meeting with each of her students several times each year. She describes this as an invaluable way of building a relationship with each of her students. She gains an in-depth understanding of the challenges that each of them faces, and is able to act as a learning mentor as a result. While this is undoubtedly a fairly expensive exercise in terms of time, she believes that the time is well spent in terms of pastoral care and student retention.

I’m interested in any other ideas that any readers of this blog may have.

Recover

After any period of exertion, the athlete (student) needs to recover. I believe that the standard term & semester breaks fulfil this role adequately. There is often a tendency amongst academic staff to see the term breaks as a chance for students to complete assessments and study, however it is my belief that the aim of the programme coordinator should be to allow the students a period of time to recover from the demands of study so that they can more effectively apply themselves in the next period of study.

References

James, R., Baldwin, G., & McInnis, C. (1999). Which university? The factors influencing the choices of prospective undergraduates. Melbourne: Centre for the Study of Higher Education.

Wingate, U. (2006). Doing away with ‘study skills’. Teaching in Higher Education, 11, 457-469.

After my first trial of using the blogging rubric, I’ve decided that the rubric and the process both need tweaking.

In a post I made last December, I talked about our process of assessing blogging.  I decided that we would have two submission dates.  On the first submission date, the students would submit a draft, and I would give them feedback on whether they had met competency (based on their demonstrated knowledge of the subject area).  They would then have a chance to polish their post & I would regrade it at the second submission date.  Sounds complicated?  Well surprisingly enough it is.  It seemed like a good idea to me at the time, but after running through it once I’m going to revert to our standard approach, which is to allow them to submit an assessment, mark it completely, and then, if anyone is marked as not competent, allow one resubmission.  Simpler for the students.  Easier for me.  (I don’t know what I was thinking).

The other thing that needs tweaking is the actual rubric.

After using it once I’ve decided that the grading of community involvement is over-weighted.  In fact, requiring this has just turned a natural process into an unnatural one.  It hasn’t seemed to increase authentic community involvement at all, but rather has led to a few students incorporating references into their blogs, and making comments on others’ blogs which are fairly pointless apart from the gaining of marks (I know Leigh, I know).

Another problem is that the use of reflection isn’t particularly relevant to this assessment, so I’ve modified the rubric accordingly.

Oh well, one step at a time.  We’ll get there in the end. 😉

I’ve just completed the first stage of my research project, looking at student satisfaction/dissatisfaction with the online aspects of their experience as  students within our programme.  A summary of these results follows.  Please note that I haven’t spent much time in preparing it for reading.  It’s really just a summary of the data with a few reflections thrown in.


Overall

The first years are much more satisfied with the course than the second years.

  • First year:  100% very satisfied/ satisfied

  • Second year:  80% satisfied / 20% dissatisfied

  • Possible reasons

    • Second-time delivery is always considerably smoother than first-time delivery.  Second year material is often being developed just-in-time which can lead to a lack of clarity at times.

    • With the implementation of the OP IT Induction process this year, and with more integrated support from the learning centre, the first year students have experienced an increased level of support for IT skill development & academic skill development.

    • The first year of online delivery was not smooth sailing.  We had a huge learning curve, which, combined with a lower level of funding than was anticipated, resulted in a lot of change, which the students found frustrating.  In 2009, this group seems particularly resistant to changes, and is fairly unforgiving of any perceived shortcomings in their course of study.

Level & quality of interaction with staff

First year – Mostly satisfied (6.3% dissatisfied)

Second year – 60% satisfied / 40% dissatisfied

The concerns in this area are largely based around delayed scheduling for the clinic and Bioscience classes which has impacted on some students who are working part-time. As a staff group, we’ve decided to finalise all scheduling before we finish up at the end of the year to minimise this type of effect.

Some of the other concerns raised are duplicated in the section on clarity of direction, and are discussed there.

Level and quality of interaction with other students

Generally the students seem happy with this.

Quality and frequency of feedback on your progress

Generally the students are fairly happy with this.

How often do you feel clear on what you need to do to progress in your course work?

First year & second year: 50% always/usually, 50% sometimes/not often

Even though I’ve worked to improve the clarity of information structures and processes (being aware that this was a problem with last year’s cohort), clarity remains the area of most concern for both first and second year students.  Clarity of direction is clearly key to being an effective student.

Common themes are

  • Difficulty with assessments – locating the assessments, knowing assessment due dates, not receiving assessments soon enough

  • Some of the first year students would like more explanation of where everything is and how to access it (information structures and processes). A screen movie would be the best way to efficiently provide for this need.  We can then cover this material at the start of the course, and students who are struggling to understand can view the movie repeatedly if need be.

  • Elluminate class times have been scheduled generally (e.g. this time is set aside for elluminate sessions). It would be useful for part-time students to have specific classes scheduled on the timetable.

  • Some complaints of delayed communication from lecturers. It’s somewhat difficult for lecturers to be highly responsive when they work for the Polytechnic part-time. I do what I can to improve responsiveness when it seems important, but there is probably not much more we can do about this complaint at present.

  • Some students have expressed a preference for a simpler structure (i.e. Blackboard), but for reasons previously discussed, this is not an option that we are entertaining at present.

  • No reports of problems with clarity of instructions from lecturers.

It seems that the structure which is in place is workable with a few improvements. Making better use of the course calendar by embedding assessment dates and scheduling specific elluminate classes should be very helpful. Also, providing first year students with a little more opportunity to become familiar with the structure of the learning environment in the early stages should pay dividends.

Support for computer use

Students were fairly happy with computer support in the following areas – use of a computer, email, elluminate, using Microsoft products, Internet searching & Other computer use.

Two areas where a considerable number of students were dissatisfied were support for Blackboard & use of Google docs.

Blackboard use
Year 1: 31.3% dissatisfied/very dissatisfied
Year 2:   No dissatisfaction

Use of Google docs
Year 1:  31.3% dissatisfied/very dissatisfied
Year 2:  20% dissatisfied/very dissatisfied

Students seem to perceive that there is a lot of support available, but it’s better in some areas than others. Blackboard & Google docs could be supported better.  We planned to run a session on Google docs in our first practical block, but I decided not to because I believed that the students were getting overloaded.

The community learning centre environment may not be ideal for high need students who are suffering from the convergence of the need to improve their computer literacy & the demands of their study (“Staff in the clcs will help but not that willingly,only one thing at a time and you 3 have to wait 10mins for them to come over to you”).

Suggestions range from requests for more IT tutorials to a comment that “the polytechnic has excellent resources to ensure anyone can understand computer use- people just need to use them!”

We plan to initiate a peer tutoring programme next year using some of the second year students.  This  should help.

Confidence with computer use

First year: 20% sometimes confident/not confident
Second year: 20% sometimes confident

In the first year group, the feedback suggests that the technical difficulty of the computer work is not too high; it seems to be well within the capabilities of most of the class.

The second years have a high level of confidence with the use of any computer applications used previously, but are lacking in confidence with blogging. In particular, issues around privacy & sharing thoughts/work openly have been discussed. Some students believe that the poorer students will coast through on the work of the better students.

Computer self-efficacy has been shown to improve with exposure to computer use, and this finding does seem to be reflected in my data so far.   I plan to adopt a wait and see approach – I expect that these figures will change by the next survey date.

Ability to avoid distractions and concentrate on studies

First years: 64.3% always/usually | 35.7% sometimes/not often
Second years: 50% always/usually | 50% sometimes/not often

These percentages are fairly high, and they may be considered a problem, but it would be interesting to compare these results to similar results from other tertiary environments where students have a reasonable amount of flexibility (e.g. university-style lectures).

There were fairly consistent messages from both groups about the reasons for distractions – good weather, socialising, family commitments, noises in the environment, tv, conflict in the student group, a lack of interest in the subject. Some people alluded to juggling study with other commitments (training/work/hobbies).

One student stated that it was distracting when people were talking/typing in elluminate while the teacher was talking. This is distracting, and does take some time to get used to, but there are benefits to having those two communication channels going.

Summary of the summary

It seems that the area that most needs development work is in clarity of student direction.  To achieve this I plan to

  • Embed assessment dates within the course calendar
  • Schedule specific elluminate classes within the calendar
  • Work on minimising delays in student-staff communication
  • Create a blog page which contains links to the main course areas for the second year students (i.e. something which they can access through Google reader which provides links to everything they need).  This has been requested specifically by the class reps.

Now it’s just a matter of finding some time to do this in the mayhem of my life!!

Thomas Scherz has contacted me recently with some questions regarding the programme.  I spent a bit of time writing my response to his most recent email, and I thought it might be useful to some readers of this blog, so here it is….
Thomas’ queries (reformatted)
  • In general, could you explain what you expected to achieve by setting up such a programme?
  • What is your overall impression of the programme? Does the Blended Learning Style work well?
  • What do you mean by saying you encourage students’ digital information literacy?
  • How do you use case studies?

My answer

We have traditionally taught only students who are based in Dunedin city.  In 2007, I talked to quite a few people who were interested in studying with us who were based in our region, but not in Dunedin.  I had to tell them that it was not possible to study our programme, but I started to think – why?  At the time I was studying online learning because I was intending to enhance our face-to-face programme using online learning, and could see that this barrier didn’t need to be there.
Around the same time, we received news that the national standards (NZQA unit standards) for massage would be deregistered (there’s a very long & involved story around that).  I needed to redevelop the programme to remove the unit standards, and figured that we might as well redesign it with flexibility in mind.  As a result we moved to a blend of block practical courses and online theory.
I have to say that in terms of making the programme available to students, it has been a dramatic success.  I haven’t sorted out any statistics on this, but I estimate that 1/4 – 1/3 of our students this year are studying either from outside of Dunedin, or are studying and working part-time.  These are students whom we would not have been able to teach previously.
In terms of academic success, in general the students appear to be benefitting from the blended style.  There is a big learning curve involved with moving to online teaching, and the first year of students bore the brunt of this.  Some aspects of their course were not ideal, and their learning has suffered in some areas, however in other areas they are performing at a higher level than previous students.  I was disappointed with the engagement with online activities, however I believe that I understand this & have blogged about it.  This year we are implementing some changes to the way our online programme runs which I believe will dramatically improve engagement & academic results.  Throughout this year, I will be conducting an in-depth assessment of the efficacy of the blended programme, and will post results to this blog as they are produced and analysed, so watch this space…
When I speak of digital information literacy, I am talking primarily of the ability of our students to source and be critical of information sourced through the internet.  I’m not happy with the level of digital information literacy which our students developed through 2008, and will be making some changes to improve this.  As an example, in 2008 all of our courses ran through blogs, but I managed the work of aggregating the feeds for all of the students, thinking that I wanted to make life easier for them.  I realised later that because I did this for them, they ended up at the end of the year not able to do it for themselves.  This year I intend to get them to set up an RSS feed reader, then subscribe to each of the course blogs so that they become able to make use of this type of information aggregation without me.  There are some challenges with this type of approach (e.g. how do I ensure all students are receiving all course information), but I’m sure that we can work them out.  There were other aspects of our students’ digital information literacy which I wasn’t totally happy with, and I intend to make changes to improve these aspects as well.
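For readers who like to see the mechanics, the aggregation itself is simple. Here’s a minimal sketch of roughly what a feed reader does behind the scenes, assuming the Python feedparser library and some made-up course blog URLs (the students, of course, will do this in Google reader rather than in code):

    # A rough illustration of feed aggregation: pull the latest posts from each
    # course blog. The URLs below are hypothetical placeholders.
    import feedparser

    course_blogs = [
        "http://example-anatomy-course.blogspot.com/feeds/posts/default",
        "http://example-clinic-course.blogspot.com/feeds/posts/default",
    ]

    for url in course_blogs:
        feed = feedparser.parse(url)
        print(feed.feed.get("title", url))
        for entry in feed.entries[:3]:  # the three most recent posts per blog
            print("  -", entry.get("title", "untitled"), entry.get("published", ""))

The point for the students isn’t the code, it’s that once they can subscribe to feeds themselves, adding a new course blog becomes a one-step job rather than something they have to wait for me to do.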
I’m not a fan of LMSs because they are closed environments.  I like the idea of having the majority of our course openly available, so that external people can dip in and out of content/discussions/etc.  However at this stage of the move to being an open-education course, having a locked-down environment is useful because it provides us with the ability to use copyrighted images.  Quality Creative Commons-licensed anatomy & bioscience images are hard to come by on the web, and there will need to be some time & money put into moving to an open-education platform at some stage (not my top priority right now).
Re: Case-based learning
We use case studies in a number of different ways.  We are building a library of massage-relevant case-studies over time which can be used to support teaching, or in examinations.
Many of our assessments particularly at the later stages of the programme require our students to apply the theory they’ve learnt to work with a particular population (pregnancy, elderly, injuries, chronic pain, etc.), then to reflect on how effective their treatment has been with their client.  This process is a particularly rich way of encouraging students to integrate theory with practice.

I’m getting into the specifics of designing my assessments.  Last night I was thinking about how to structure my blogging assessment.  In my research methods class, I want to use blog posts in the early days to assess their development of knowledge and skills of research.  To do this, I want them to make four posts:

  1. Describe the research process (Week 9)
  2. Describe how information from different sources may vary in quality and how to differentiate good quality information from poor quality (Week 10)
  3. (Given the choice of several topics)  Describe your search process including the creation of your search query, databases accessed, sources found and information quality (Week 12)
  4. (Given several research articles of different types)  Assess the quality of the research findings in each case (Week 13)

I think these four posts will help to scaffold them into the task of performing first a joint literature review, then an individual literature review (more on the joint literature review later).

So that’s all fine, but when considering our assessment policies I realised that for every assessment, our students have the opportunity to resit the assessment if they’re marked as not competent on the first attempt.  At first glance, I thought that this was going to create a monster, however with a bit of thinking I’ve come up with a solution which I think might work.

The plan is to give the students two submission dates, one week apart.  To meet competency, the students will need to make a post on the topic, and have that post graded at a minimum of 2 on the blogging rubric.  The marker will need to briefly review every student’s post, record key points of misunderstanding, and provide individual feedback on the blogs of students who have not met the competency requirement.  They will then create generalised feedback for the class as a whole which clarifies the main areas of understanding.

The students will then have a week before their final assessment to read the posts of other students, to develop their understanding, and update their original post if they like.  My hope is that this period of reflection will help to stimulate cross-fertilisation of ideas.  At the end of this week the blog post will be graded using the complete rubric.  This rubric has been updated based on the feedback of Whitney & Leigh – thanks guys.  Here is the updated version.

This process will be reasonably time-intensive, but I think it should be manageable.  It strikes me as a teaching model much more along the lines of George Siemens’ curator.

A curatorial teacher acknowledges the autonomy of learners, yet understands the frustration of exploring unknown territories without a map. A curator is an expert learner. Instead of dispensing knowledge, he creates spaces in which knowledge can be created, explored, and connected. While curators understand their field very well, they don’t adhere to traditional in-class teacher-centric power structures. A curator balances the freedom of individual learners with the thoughtful interpretation of the subject being explored. (Siemens, 2007)

Siemens talks about the role of the curator being to locate and structure an “exhibition” of learning objects or resources which the students are then free to explore.  The teacher as a guide rather than the font of all knowledge.

Carrying on from my last post, I’ve developed  a rubric for assessing the blog posts of my students.

Initially I intended that the rubric should motivate the students to

  1. Develop understanding of key subject areas
  2. Act in ways which will support the development of a learning community

However as I got into the process of nutting-out how this was going to work, I realised that it’s also important that it motivates the students to write well, reflect on their process, and develop good scholarly habits (i.e. referencing and referring to sources outside of the ones provided in class).

The rubric is a work in progress rather than a finished product.  It contains 5 categories with a total of 20 marks.

One of the problems I’ve identified is that understanding of the subject of the blog post is perhaps not weighted heavily enough.  I think it should probably have a weighting of 2 or so, but I do like that nice round number 20 as a total, so I’d need to either merge two of the other categories or weight two of them with ½ weights.

Perhaps the second option is the best.  This would provide me with a certain degree of flexibility.  If I was using this rubric in a  course where reflection was particularly important, I could weight writing quality and scholarship with ½ weights.  If in another course scholarship and writing quality were particularly important, I could weight community involvement and reflection with ½ weights.
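To make the arithmetic concrete, here’s a rough sketch of the half-weight option (the per-category mark allocations are illustrative only; I’m assuming each of the current five categories carries 4 marks):

    # Illustrative only: double-weight understanding and half-weight two other
    # categories so the rubric still totals 20 marks.
    base = 4  # assumed marks per category in the current rubric (5 x 4 = 20)

    weighted = {
        "understanding": 2 * base,          # 8
        "writing quality": base,            # 4
        "scholarship": base,                # 4
        "community involvement": base / 2,  # 2
        "reflection": base / 2,             # 2
    }

    print(sum(weighted.values()))  # 20.0

Swapping which two categories get the ½ weights is then just a matter of changing two lines, which is exactly the flexibility I’m after.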

What do you think?

In response to my recent post on my online assessment strategy for 2009, my colleague Leigh has commented “could you describe your ideas for assessment 1 and 3 more? I’m pretty familiar with blogging for assessment in the way you describe, but need a better picture of how you plan to do the other two. Hopefully with a clearer idea, I might be able to suggest something.”

Funnily enough, I’ve got a pretty clear idea about how I could use automated formative assessment (Phase 1) & final summative assessments (Phase 3).  We’ve been using similar assessment strategies in our face-to-face classes for years.  The thing I’m not really familiar with is assessing reflective blogging (apart from a couple of experiences as a student in recent years).

After a fairly superficial exploration of this topic using the net, it seems that most assessment of student blogging is based on the normative assessment model rather than a competency-based model.  Here are two rubrics for assessment of student blogging that I think have potential for our course, although neither of them provides direct motivation for reading & commenting on the blogs of other students.

  1. Blog reflection rubric courtesy of San Diego State University Edweb
  2. Designing for flexible learning practice – courtesy of Otago Polytechnic’s Educational Development Centre.

Our course has a mixture of competency and normative-like assessment.  Do you know of any examples of competency-based assessment for reflective blogging?

Any other suggestions for the design of assessment of reflective blogging?

I’m getting into writing assessments for next year, and it’s clear that some aspects of our assessment model need to change. The main drivers for me are the need to increase engagement in online learning activities, workload reduction, and improving feedback.

Assess them and they will come

In my review of how things have gone this year, one of the things that’s really stood out for me is the fact that the level of participation in the learning activities that I set for my students this year was not even close to a level that I would be satisfied with. It’s clear to me that their learning has been impaired as a result (or at least their learning of the material that I wanted them to learn), and I’m pretty sure that the one thing that would have led to more participation would be more assessment.

Taming the workload beast

But we already spend too much time marking assessment! In a recent staff meeting, we talked at length about workload reduction. One thing that takes up a considerable amount of our time is marking assessment. I’m sure that I can design assessments to involve less workload for the assessor.

Anderson (2008) describes a range of methods that may act to reduce assessment-related workload for teachers:

  • Automated assessment processes – ranging from formative tests (simple) to virtual labs and simulation exercises (complex)
  • Online automated tutors
  • Use of neural networks & other artificial intelligence methods
  • Peer review (of either students within a specific course, or students within a network of similar courses)
  • Student creation of open educational resources which are then assessed by lifelong learners who are using the resources (Farmer, 2005 as cited by Anderson, 2008)

Formative tests are fairly straightforward to implement. They take some time to set up, but then they’re there to use year on year. I have thought about creating a simulated clinical environment in Second Life, but at this point, the creation of automatically marked simulations is well out of my financial ballpark, so I’ll move on.
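To give a sense of how simple the underlying logic is, here’s a rough sketch of an auto-marked formative question with immediate feedback (the question, the marking logic and the feedback text are purely illustrative; in practice this would be built in whatever quiz tool the course platform provides):

    # Purely illustrative: a tiny auto-marked formative quiz that gives feedback
    # immediately after each answer.
    questions = [
        {
            "prompt": "Which is the longest bone in the human body?",
            "options": ["Femur", "Tibia", "Humerus"],
            "answer": 0,
            "feedback": "The femur (thigh bone) is the longest bone in the body.",
        },
    ]

    score = 0
    for q in questions:
        print(q["prompt"])
        for i, option in enumerate(q["options"]):
            print(f"  {i}. {option}")
        choice = int(input("Your answer (number): "))
        if choice == q["answer"]:
            score += 1
            print("Correct.")
        else:
            print("Not quite. " + q["feedback"])  # immediate, specific feedback

    print(f"Score: {score}/{len(questions)}")

Most of the up-front time goes into writing good feedback for each question, but as I said, once it’s set up it’s there to use year on year.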

The next two methods on Anderson’s list are also a bit too high tech, and high budget.

The last two options are possible if the students of the course are part of a learning network (Anderson, 2008). One of my goals for the future is to develop this network, but I think it’ll take at least a couple of years of students moving through the programme before this happens to any significant degree.

Feedback

Feedback is crucial to the learning process, and this is something that we can definitely improve on. Formative tests that provide feedback directly following the student’s performance provide a wonderful development opportunity for students, and I believe that this is one of the real strengths of online education. According to Shepard (2000 as cited in Caplan & Graham, 2008), and Wiggins (2004), providing detailed feedback as close as possible to the performance of the assessed behaviour enhances student learning.

We should strive to “create assessments that provide better feedback by design” (Wiggins, 2004). I was inspired last year by the way in which Montessori school activities are based on this principle. Learning activities can be designed to provide feedback to students in the absence of the teacher. This can be facilitated through instructional design (Wiggins, 2004), or through social networks (Anderson, 2008). In my experience when courses I’ve been engaged with have required blogging, a community of learners has developed, where the learners have begun to support each other in their learning.

Three-phase assessment process

After considering all of this, I’ve come up with a three-phase assessment process that I think would be fairly ideal for most of our online courses. Phases 1 and 2 test different grades of knowledge (simple/moderate complexity) & overlap in time.

  1. Automated formative testing to test knowledge of discrete chunks of knowledge.
    Facilitator’s role: establish test, monitor results
  2. Reflective blogging on key concepts in the first ½ of the course. Students required to post on each topic, rewarded for commenting, updating the work they’ve done based on future learning, and referencing.
    Facilitator’s role: Monitor class activity, encourage engagement, provide generalised feedback
  3. Final theoretical assessment which integrates learning.
    Facilitator’s role: Mark assessment, provide feedback & opportunity for resubmission

Students are therefore rewarded for acting as good community members, are given feedback on their developing understanding & are assessed for their integration of knowledge.

The one slight issue with this model is that, if anything, I can see myself doing more assessing than I was doing previously. However, the formative assessment that I’ll do in the early stages of the courses will be integrated with my teaching, so in effect I believe I could save time with this approach.

What do you think? Can you see any big holes in my thinking here?

References

Anderson, T. (2008). Towards a theory of online learning. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 45-74). Canada: AU Press, Athabasca University.

Caplan, D., & Graham, R. (2008). The development of online courses. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 245-264). Canada: AU Press, Athabasca University.

Wiggins, G. (2004). Assessment as feedback. Retrieved December 11, 2008 from http://www.newhorizons.org/strategies/assess/wiggins.htm.