
How does assessment design shape the learning space of a distance course?
Irina Elgort

What are the consequences of encouraging students to develop an independent learning
path through a course? Is the learning space of a course shaped by the type of assessment
chosen? How do students interact with feedback? These questions are addressed in a pilot
action research study of a postgraduate course in computer assisted language learning
offered in a distance delivery mode. The paper details course design underpinned by the
view of assessment as a key driver and vehicle of learning, with particular attention to the
role of feedback in student learning. The outcomes of this approach are reviewed in the
light of the analysis of student submissions, their engagement with feedback on assessment
and student perceptions of the course.
Keywords: assessment, feedback, CALL.
Students who enrol in master’s degree courses (especially in applied disciplines) are usually practitioners
returning to university to advance their careers and discover the theoretical underpinnings of good practice.
For this reason, there is a need to deliver academically sound programmes that are also relevant to the
diverse contexts and learning goals of these mature students. This already challenging situation is further
complicated when courses are offered in a distance learning mode, because of a reduced sense of the
learning cohort and limited time students are able to devote to their study due to work and family
commitments. Moreover, if students live and work in different countries their professional practices are
often informed and constrained by social and cultural practices and beliefs of these countries.
All of these considerations played a role in designing a postgraduate course in computer assisted language
learning (CALL) which is the focus of the present paper. The main aim of the course was to engage
participants in critical examination of theoretical and practical issues associated with using computer
technologies in teaching and learning a second or foreign language. Offered for the first time as part of a
Master of Arts programme in Applied Linguistics and TESOL, this course piloted an approach to course
design that would allow students to create their own path through the syllabus in order to make learning
more relevant to their real-life needs as language teaching practitioners. Because assessment sits at the
very core of student learning (Brown & Glasner, 1999; Bryan & Clegg, 2006; Heywood, 2000; Ramsden,
2003; Rust, 2007), this approach was realised by designing and implementing course assessment as a
learning experience (Carless, 2009).
Course participants
All course participants (n=7; median age = 47.5) had prior experience of teaching English as a second or
foreign language. At the time of the course, four were part- or full-time ESOL teachers, one was teaching
on a university preparation programme, and two worked as material designers/developers for private
providers of ESOL courses. Two participants lived and worked overseas (in Korea and Saudi Arabia);
three were national students who did not live in the same city as the university that offered the
programme; and two participants lived in the same city, but were in full-time employment. All students
were native speakers of English.
A survey conducted in the first week of the course revealed that all students enjoyed working with
computers, and six out of seven had prior experience with using computers to teach English. Their self-rated
level of confidence in using computers, on the other hand, ranged from ‘expert’ to ‘below average’.
Only three students had had prior experience in using wikis or creating multimedia resources, and only
two were familiar with scripting or programming or had created CALL activities themselves prior to this
course.
Assessment structure
Course assessment was designed to influence the quantity, quality and focus of studying (Gibbs, 2006; Gibbs
& Simpson, 2004-5). Because assessment was used both as a vehicle and a driver for learning, students
were explicitly encouraged to use their previous submissions, and feedback on these submissions, as
stepping stones for their later pieces of work. Students were also prompted to ask questions regarding
assessment rubrics, which provoked a couple of interesting exchanges in the course discussion boards.
One such debate focused on the topic of affordances in relation to using generic computer tools in
language learning.
Course assessment comprised three discrete tasks (contributions to a CALL knowledge base wiki, an
evaluation of an existing CALL unit and a CALL project) worth 10%, 30% and 40% of the final course
mark, respectively, and an ongoing task – critical reading assignments (CRAs) – incrementally building
towards 20% of the final course mark. In fortnightly CRAs, students were required to submit a
commentary on one or two key points or questions from core or additional course readings. To facilitate
deep learning and more thoughtful engagement with the literature, participants were encouraged to relate
ideas from the readings to their own teaching experiences and beliefs. CRAs were submitted
electronically through the University LMS, and students received their marks and individual feedback in
the same manner within a week of the submission date. In addition, summaries of the most interesting
points made in student submissions were posted to the online course journal.
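To make the weighting concrete, the sketch below (Python, purely illustrative) shows how the four components could combine into a final course mark. The weights (10%, 30%, 40%, 20%) come from the course design described above; the component names and example scores are assumptions made for the example, not the course's actual marking tool.

```python
# Illustrative sketch of the assessment weighting; the weights come from
# the course design, while component names and example scores are assumed.

WEIGHTS = {
    "wiki_knowledge_base": 0.10,   # contributions to the CALL knowledge base wiki
    "call_unit_evaluation": 0.30,  # evaluation of an existing CALL unit
    "call_project": 0.40,          # final CALL project
    "cras": 0.20,                  # fortnightly CRAs, built up incrementally
}

def final_mark(scores):
    """Combine per-component scores (each on a 0-100 scale) using the weights."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical student scores per component:
example = {
    "wiki_knowledge_base": 85.0,
    "call_unit_evaluation": 78.0,
    "call_project": 82.0,
    "cras": 90.0,
}
print(round(final_mark(example), 1))  # -> 82.7
```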
A course in CALL is a complex undertaking because of the multidisciplinary nature of this field (Levy &
Stockwell, 2006; Chinnery, 2008). In addition to the key focus on ESOL learning and teaching, common
to all MA papers, participants in this course also have to come to terms with more general educational
research, in particular that in educational technology, flexible learning and instructional design, as well as
with elements of software design and development. In addition, they need to have an in-depth
understanding of the area of linguistics they choose to focus on (such as phonology, socio-linguistics,
corpus analysis, etc.). Since it is virtually impossible to cover all of these additional areas in a single
ten-week course (with a two-week break in the middle), the choice is either for the course designer to select
a limited number of areas for the whole cohort to focus on, or to allow students to specialise, within the
main framework of the course, on areas most relevant to their interests and professional needs. This
decision has an important effect on the course dynamics and the nature of learning interactions. Because
course participants were mature professionals, the latter approach was predicted to better address their
diverse needs and contexts. Furthermore, course assessment had to be designed in a way that did not
penalise students who were less confident computer users, while allowing experts to utilise their
computing skills. For this reason, in the final project students were required to design (but not implement)
an original CALL unit of teaching or learning, while any steps towards its implementation were treated as
a bonus. In addition, students were asked to complete a number of non-assessed practical tasks
(individually or in groups) throughout the course. These tasks aimed to familiarise them with a range of
computer software, tools and environments that could be adapted for CALL. After completing these tasks
students were encouraged to reflect on their experiences in an online course journal.
To encourage students to create their own paths through the course, they were asked to select an area of
interest or relevance for their individual in-depth investigation. After completing introductory readings in
the first three weeks, students had to identify and introduce their chosen CALL topic in a wiki page. A
range of possible topics was offered, but students could also pursue their own topic, having discussed it
with the lecturer. In the second assignment, students evaluated an existing CALL program, tool or web
site of their choice related to their identified area of interest. Finally, their CALL project was an
opportunity for students to think creatively about the application of ideas they discussed in the course and to
develop CALL design skills. This highly individualised assessment structure was made possible by
carefully devising assessment rubrics that reflected core skills and knowledge needed to complete each of
the assessed tasks, rather than focusing on the subject matter of the submissions.
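The principle of topic-agnostic rubrics can be illustrated with a small sketch. The criteria and weights below are hypothetical, invented for illustration; only the design principle (criteria describe core skills and knowledge rather than the subject matter of the submission) comes from the course described here.

```python
# A sketch of a topic-agnostic rubric as a data structure. Criteria and
# weights are hypothetical; the design principle is the one discussed above.

from dataclasses import dataclass

@dataclass
class Criterion:
    skill: str     # what the marker looks for, independent of the chosen topic
    weight: float  # proportion of the task mark carried by this criterion

CALL_PROJECT_RUBRIC = [
    Criterion("grounding of design decisions in the literature", 0.30),
    Criterion("coherence of the pedagogical rationale", 0.30),
    Criterion("quality and originality of the CALL unit design", 0.30),
    Criterion("consistent and accurate referencing", 0.10),
]

def task_mark(ratings):
    """Combine per-criterion ratings (0-100) using the rubric weights."""
    assert len(ratings) == len(CALL_PROJECT_RUBRIC)
    return sum(c.weight * r for c, r in zip(CALL_PROJECT_RUBRIC, ratings))

# Because no criterion names a linguistic area or technology, the same
# rubric can mark projects on phonology, corpus analysis, or any other topic.
print(round(task_mark([80, 75, 85, 90]), 1))  # -> 81.0
```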
Experienced teachers (and ESOL teachers are no exception) generally have a well-established set of
personal theories and beliefs about teaching and learning (Bush, 2008; Ertmer, 2005; Meskill & Anthony,
2007; Warschauer, 1999), whether overt or covert. In a course where teaching practitioners are required
to critically engage with the literature on approaches to using technology in teaching and learning, and to
evaluate and design CALL units themselves, these theories and beliefs are likely to influence learning
outcomes. For this reason, early in the course participants were given a task of describing their own views
about what works and what does not work in language teaching and learning, and why (Hewer & Davies,
Module 2.1). Students were encouraged to refer to and reflect on these entries in their course submissions.
Provision of feedback
A key ingredient of implementing assessment for learning is good feedback (Hattie, 1987; Gibbs, 2006;
Gibbs & Simpson, 2004-5; Nicol & Macfarlane-Dick, 2006). The conceptualisation of “good feedback” in
this course was based on the seven conditions proposed by Gibbs & Simpson (2004-5) and the seven principles
of good feedback practice outlined by Nicol & Macfarlane-Dick (2006). Students were given regular, detailed and
timely feedback on their course work. Clear performance criteria were communicated through assessment
instructions and rubrics, and students were given opportunities to ask for clarification using discussion
forums. They were also encouraged to reflect on the points raised in their submission and feedback using
a shared course online journal. Each piece of individual feedback started with a summary of strong points,
and group feedback spotlighted the most insightful and thought-provoking parts of student submissions.
Course assessment was designed to allow students to use feedback on their previous submissions in order
to improve their following submission, creating the feedback loop needed for “learning-oriented
assessment” (Carless, 2009). Information from student submissions was also used to guide teaching notes
and comments posted to the course web site, in particular, in terms of clarifying misconceptions and
addressing technological issues.
Provision of individual feedback to students, however, can be a time-consuming process, and it is a
challenge to achieve this goal without significantly increasing the teaching workload. Although it was
less of a problem in the present course because of its small size, this approach can create a bottleneck
when working with larger courses. For this reason, time to engage with individual students through
feedback was made available by minimising the amount of time spent on ‘traditional teaching’. This
meant no formal lectures or tutorials (real or virtual). Teaching notes were posted for most (but not all)
topics, and always after students had submitted their critical reading assignments for that topic, in order
to encourage students to develop their own interpretations of the course readings. Electronic submissions
of assignments through the LMS provided a tool for timely individual feedback, while the course journal
served as a place for publishing group feedback. The journal was also intended to give students a vehicle
to engage with this feedback in their own journal entries.
Student submissions
Students’ fortnightly submissions showed a clear trend towards increased critical engagement with the
readings and maturing reflective thinking. Initially, feedback coached students in finding a balance
between expressing a personal opinion and grounding it in the readings [“it would have been helpful to
see your own understanding of this framework”; “I would have liked to see more support from the
literature for the assumptions you are making”]. This point was reiterated in the group summary
feedback, “… best results were achieved by those who were able to ground their writing in the literature,
while articulating a perspective that was distinctly theirs. I also rewarded submissions where key ideas
were clearly articulated, well-argued and supported by evidence (either from the literature or by including
examples from practice).” Early in the course, students also had to be reminded about being consistent
and accurate in using references (even though referencing was one of the rubric criteria).
In later course submissions this type of feedback was minimal. CRAs submitted by students in the second
half of the course were mostly of two types: either critiques of points made in the literature from the
perspective of their personal beliefs about or experiences of teaching, or analyses of examples from
personal practice from the standpoint of the theory and research covered in the literature. This important
change in the nature of student submissions also affected the nature of feedback; it was now possible to
engage more with students’ approaches to critiquing, analysing and evaluating ideas and information,
suggesting further readings, resources or directions for future research or projects [“A proposition that …
is worth further consideration, and I am glad you and others have picked up on this”; “I found myself
wanting to engage in an academic debate with you about …”; “I may disagree with you on … but you
certainly know how to engage and challenge your readers”; “I believe the increase in the use of … in your
example demonstrates two points …”]. However, this change did not occur in everyone’s submissions to
the same degree. One student continued struggling with the concept of critical commentary, as opposed to
a summary of important points from the literature, while another focused too much on personal views and
experiences per se, rather than reflecting on them in the light of course readings. Interestingly, one
student’s misconception about the CRA task was not picked up until halfway through the course; this
student interpreted the word “critical” in CRA as having to disagree with or criticise something in the
readings. It was his engagement with the feedback that revealed this misunderstanding.
The wiki knowledge base served as a vehicle for students to research aspects of CALL they were
interested in and introduce them to their classmates. Contrary to what was expected, students did not
really engage with each other’s work in the wiki (only one student left a thoughtful comment about a
claim made by a different student in the wiki). Nor did they make changes to their own wiki pages after
receiving teacher feedback. However, all students incorporated at least some points they made in the
knowledge base wiki into their final CALL project. In fact, students referred to ideas and points
expressed in their own earlier submissions throughout the course [“I myself am a pragmatist ...” – a
reference to the personal teaching theory task; “As mentioned in the CALL review I submitted earlier
…”]. This suggests that assessment design in this course succeeded in engaging students in iterative
rather than one-shot approaches to learning (Gibbs, 2006).
The high quality of student CALL design projects was viewed as another indication of the overall success of
the assessment for learning approach adopted in this course. All students showed evidence of deep
engagement with both CALL and applied linguistics literature, and met expectations in the areas of
design and development approaches, and reflective critique. One of the key achievements of the course
was that each student designed a CALL unit of teaching or learning that was directly relevant to and
grounded in their teaching or work context. One student wrote, “This [the final project] was a challenge,
but very useful. One of the most practical assignments I've done during my Masters studies.”
Student feedback
A short survey of student perceptions was undertaken anonymously early in the second half of the course,
using the LMS survey tool. The survey was constructed using qualitative open-ended questions, and
student responses were coded into four themes: assessment/feedback, interest/motivation,
content/structure, and technology. Responses on what students liked about the course related mostly to
interest/motivation (n=6) [“directly connected to an area of professional interest”; “given me some ideas
for a future career path; sense of achievement…”; “changed my views…”] and assessment/feedback
(n=6), with feedback mentioned five times [“practical and timely feedback”; “feedback … is extremely
good!!”; “the feedback is much more useful than…”; “individual attention”], while assessment was
mentioned once [“CRAs … promote a personal interaction with the readings”].
Among the things that students liked the least, technology was mentioned more often than other aspects
(n=4) [“difficult to read online”; “assembling the required information via Blackboard is problematic”;
“no help (officially) is given re computer skills”], closely followed by the workload, mentioned three times
(content/structure). Assessment was also mentioned three times [“I would like the tasks to be assessed”; “I’d
rather discuss aspects of the readings with others through the discussion board than CRA’s”; “insufficient
modelling of … assessments”]. Under suggestions for improvement, technology [“synchronous
discussions”; “discussions instead of blogs”] and content/structure [“explore more tools and
applications”; “more time coming to grips with the theory”; “a visual overview”; “printable weekly class
notes”] were mentioned the most. Students also suggested leaving the course website available after the
end of the course and setting up an ongoing discussion forum. This suggests that pursuing an individual
learning focus did not prevent these students from perceiving themselves as a learning cohort, after all.
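The kind of theme tallying used in this analysis can be sketched in a few lines. The four theme labels come from the survey coding described above; the excerpt-to-theme assignments below are illustrative, drawn from the quoted responses.

```python
# A sketch of tallying coded open-ended survey responses by theme. Theme
# labels come from the paper; the sample assignments are illustrative.

from collections import Counter

# Each coded response is a (verbatim excerpt, assigned theme) pair.
coded_responses = [
    ("practical and timely feedback", "assessment/feedback"),
    ("directly connected to an area of professional interest", "interest/motivation"),
    ("difficult to read online", "technology"),
    ("printable weekly class notes", "content/structure"),
]

counts = Counter(theme for _, theme in coded_responses)
for theme, n in counts.most_common():
    print(f"{theme}: n={n}")
```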
Conclusions
It appears that assessment for learning can form an effective foundation for creating a highly personalised
learning experience within a common course syllabus. What are the consequences of encouraging
students to develop an independent learning path through a course? Students’ course work, their
perceptions and experiences seem to suggest that this approach to assessment design enabled students to
complete a personal journey of discovery that had real-life value while creating awareness of theoretical
and pragmatic foundations underpinning decisions and actions. This point is well illustrated by one of the
student comments, “I have found this course to be a big eye opener, in making me look further at the
theories behind things... It … indicated that the path to practical implementation starts a lot further back
from the finished product than I ever imagined.”
Even though feedback was perceived as one of the most useful aspects of the course, students did not
engage explicitly with assessment feedback. It is possible that the workload, perceived as high by the
students, was one of the reasons for this outcome. Another possible reason is that the course failed to
create the level of trust (between the students and the lecturer, and among the students themselves)
sufficient to approach a rather sensitive issue of feedback on assessment in the shared online space of a
course web site. Could this have been the result of low student interest in each other’s work due to course
design that encouraged them to pursue their own areas of interest? At this stage there is not enough
evidence for a definitive answer. Further research is needed to test if creating a more close-knit learning
community would encourage better student engagement with assessment feedback.
Finally, one of the key limitations of this research is its small group size. It may be reasonable to ask whether
this approach to course design is scalable. The answer is twofold. On the one hand, the provision of regular, detailed
and timely feedback is, no doubt, a resource-intensive enterprise, and, if used in the same form with large
courses, would require additional teaching staff. On the other hand, a postgraduate paper in CALL attracts
diverse student populations, including experienced practicing ESOL teachers, material developers and
computer enthusiasts. It may be possible to orchestrate and channel this student expertise in such a way
that a significant proportion of course feedback is provided as peer-feedback. This approach will be
piloted in the next iteration of the course.
References
Brown, S. & Glasner, A. (1999). Assessment matters in Higher Education. Buckingham: Open University
Press.
Bryan, C. & Clegg, K. (Eds). (2006). Innovative assessment in Higher Education. London: Routledge.
Bush, M.D. (2008). Computer-assisted language learning: From vision to reality? CALICO Journal,
25(3), 443-470.
Carless, D. (2009). Learning-oriented assessment: Principles, practice and a project. In L. Meyer, S.
Davidson, H. Anderson, R. Fletcher, P. M. Johnston, & M. Rees, (Eds.), Tertiary assessment & higher
education student outcomes (pp. 79-90). Wellington: Ako Aotearoa.
Chinnery, G. M. (2008). Biting the hand that feeds me: The case for e-language learning and teaching.
CALICO Journal, 25(3), 471-481.
Ertmer, P.A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology
integration? Educational Technology Research and Development, 53(4), 25-39.
Gibbs, G. (2006). How assessment frames student learning. In C. Bryan & K. Clegg, (Eds), Innovative
assessment in higher education (pp. 23-36). London: Routledge.
Gibbs, G. & Simpson, C. (2004-2005). Conditions under which assessment supports students’ learning.
Learning and Teaching in Higher Education, 1, 3-31.
Hattie, J. A. (1987). Identifying the salient facets of a model of student learning: A synthesis and
meta-analysis. International Journal of Educational Research, 11, 187-212.
Hewer, S. & Davies, G. ICT4LT, Module 2.1. http://www.ict4lt.org/en/en_mod2-1.htm#_Toc475508131
[viewed August 2009].
Heywood, J. (2000). Assessment in higher education: Student learning, teaching, programmes and
institutions. London: Jessica Kingsley.
Levy, M. & Stockwell, G. (2006). CALL dimensions: Options and issues in computer-assisted language
learning. Mahwah, NJ: Lawrence Erlbaum Associates.
Meskill, C. & Anthony, N. (2007). Learning to orchestrate online instructional conversations: A case of
faculty development for foreign language educators. Computer Assisted Language Learning, 20(1), 5-
19.
Nicol, D. J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model
and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
Ramsden, P. (2003). Learning to teach in higher education. London, UK: Routledge.
Rust, C. (2007). Towards a scholarship of assessment. Assessment & Evaluation in Higher Education,
32(2), 229-237.
Warschauer, M. (1999). Electronic literacies: Language, culture, and power in online education.
Mahwah, NJ: Lawrence Erlbaum Associates.
