I’m getting into writing assessments for next year, and it’s clear that some aspects of our assessment model need to change. The main drivers for me are increasing engagement in online learning activities, reducing workload, and improving feedback.

Assess them and they will come

In reviewing how things have gone this year, what has really stood out for me is that participation in the learning activities I set for my students was nowhere near a level I would be satisfied with. It’s clear that their learning has suffered as a result (or at least their learning of the material I wanted them to learn), and I’m fairly sure that the one thing that would have led to more participation is more assessment.

Taming the workload beast

But we already spend too much time marking assessment! In a recent staff meeting, we talked at length about workload reduction, and marking is one of the things that takes up a considerable amount of our time. I’m sure I can design assessments that involve less workload for the assessor.

Anderson (2008) describes a range of methods that may reduce assessment-related workload for teachers:

  • Automated assessment processes – ranging from formative tests (simple) to virtual labs and simulation exercises (complex)
  • Online automated tutors
  • Use of neural networks & other artificial intelligence methods
  • Peer review (of either students within a specific course, or students within a network of similar courses)
  • Student creation of open educational resources, which are then assessed by the lifelong learners who use the resources (Farmer, 2005, as cited in Anderson, 2008)

Formative tests are fairly straightforward to implement. They take some time to set up, but then they’re there to use year on year. I have thought about creating a simulated clinical environment in Second Life, but at this point the creation of automatically marked simulations is well outside my budget, so I’ll move on.
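To make the formative test idea a little more concrete, here’s a rough sketch of the kind of automated test with immediate feedback I have in mind. It’s a hypothetical, stand-alone Python example: the question content and the run_quiz function are invented for illustration, and in practice this logic would live in the quiz tool of whatever learning management system the course runs on.

    # A hypothetical sketch of an automated formative test with immediate
    # feedback. The question content is invented for illustration; a real
    # course would use the quiz tool built into its learning management system.

    QUESTIONS = [
        {
            "prompt": "Which type of assessment is primarily intended to support learning rather than to grade it?",
            "options": {"a": "Summative", "b": "Formative", "c": "Norm-referenced"},
            "answer": "b",
            "feedback": "Formative assessment feeds information back into the learning process.",
        },
        # ...more questions, one or two per discrete chunk of knowledge...
    ]

    def run_quiz(questions):
        """Ask each question and give feedback immediately after each response."""
        score = 0
        for q in questions:
            print(q["prompt"])
            for key, text in q["options"].items():
                print("  " + key + ") " + text)
            response = input("Your answer: ").strip().lower()
            if response == q["answer"]:
                score += 1
                print("Correct. " + q["feedback"])
            else:
                print("Not quite; the answer is " + q["answer"] + ". " + q["feedback"])
        print("Score: " + str(score) + "/" + str(len(questions)))

    if __name__ == "__main__":
        run_quiz(QUESTIONS)

The appeal is exactly what I noted above: once the question bank exists, it can be reused (and refined) year on year.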

Online automated tutors and AI-based methods are also a bit too high tech, and high budget, for now.

The last two options are possible if the students on the course are part of a learning network (Anderson, 2008). One of my goals for the future is to develop this network, but I think it’ll take at least a couple of years of students moving through the programme before it happens to any great degree.

Feedback

Feedback is crucial to the learning process, and this is something we can definitely improve on. Formative tests that give feedback immediately after the student’s attempt offer a wonderful development opportunity, and I believe this is one of the real strengths of online education. According to Shepard (2000, as cited in Caplan & Graham, 2008) and Wiggins (2004), providing detailed feedback as close as possible to the performance of the assessed behaviour enhances student learning.

We should strive to “create assessments that provide better feedback by design” (Wiggins, 2004). I was inspired last year by the way Montessori school activities are based on this principle. Learning activities can be designed to provide feedback to students in the absence of the teacher, whether through instructional design (Wiggins, 2004) or through social networks (Anderson, 2008). In my experience, when courses I’ve been involved with have required blogging, a community of learners has developed in which the learners support each other’s learning.

A three-phase assessment process

After considering all of this, I’ve come up with a three-phase assessment process that I think would suit most of our online courses well. Phases 1 and 2 test different grades of knowledge (simple and moderate complexity) and overlap in time.

  1. Automated formative testing of discrete chunks of knowledge.
    Facilitator’s role: establish the test, monitor results
  2. Reflective blogging on key concepts in the first half of the course. Students are required to post on each topic, and are rewarded for commenting, for updating earlier posts in light of later learning, and for referencing.
    Facilitator’s role: monitor class activity, encourage engagement, provide generalised feedback
  3. Final theoretical assessment which integrates learning.
    Facilitator’s role: mark the assessment, provide feedback & an opportunity for resubmission

Students are therefore rewarded for acting as good community members, given feedback on their developing understanding, and assessed on their integration of knowledge.

The one slight issue with this model is that, if anything, I can see myself doing more assessing than I was previously. However, the formative assessment I’ll do in the early stages of a course will be integrated with my teaching, so in effect I believe this approach could still save me time.

What do you think? Can you see any big holes in my thinking here?

References

Anderson, T. (2008). Towards a theory of online learning. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 45-74). Canada: AU Press, Athabasca University.

Caplan, D., & Graham, R. (2008). The development of online courses. In T. Anderson (Ed.), The theory and practice of online learning (2nd ed., pp. 245-264). Canada: AU Press, Athabasca University.

Wiggins, G. (2004). Assessment as feedback. Retrieved December 11, 2008, from http://www.newhorizons.org/strategies/assess/wiggins.htm
