Friday, February 23, 2007

ASSIGNMENT 3 (Noraziah)


"The tree which moves some to tears of joy is in the eyes of others only a green thing that stands in the way. Some see nature all ridicule and deformity ... and some scarce see nature at all. But to the eyes of the man of imagination, nature is imagination itself."-- William Blake

"Nature is always lovely, invincible, glad, whatever is done and suffered by her creatures. All scars she heals, whether in rocks or water or sky or hearts."-- John Muir Photobucket - Video and Image Hosting Photo Sharing and Video Hosting at Photobucket Photo Sharing and Video Hosting at Photobucket Photo Sharing and Video Hosting at Photobucket

1) Individual Assignment- Noraziah Bt Mohd Amin- 0325974 (Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study.)

Summary of the Research
One of the most popular forms of computer-mediated communication (CMC) is the online posting, which is widely used in online learning settings. Online postings are messages sent to and displayed on an online facility such as an internet newsgroup, a bulletin board system, or some other public discussion group. An exploratory study was carried out by a group of professors and students from Purdue University in 2005 to find out how significant peer feedback is in boosting the quality of students’ online postings, and how far such feedback could assist students’ learning in a modern, computer-based, online environment.
In this study, 15 graduate students (10 females and five males) served as the subjects within a case study framework, and the Taxonomy of Educational Objectives, often called Bloom's Taxonomy (a classification of the different objectives and skills that educators set for students), was used to evaluate the participants’ postings. The research questions concerned the effects of peer feedback on producing, preserving and enhancing the quality of the messages students posted, as well as the students’ views on the importance of giving and receiving peer feedback compared with tutor feedback. The subjects, who were students in an online technology integration course, were studied throughout the spring semester of 2005 by nine researchers who were familiar with the scoring rubric; each researcher analyzed and compiled data from two participants. Data were gathered quantitatively and qualitatively from the students’ postings, surveys and interviews.
Regarding the research procedures, the students, who interacted with each other and the instructor within a WebCT environment, were required to make two postings: one in response to the weekly discussion question (for example, “What do you think is the role of technology in learning?”) and one in response to their peers’ reactions to the topics given. Before the students applied the process themselves from week seven to week twelve, the instructors explained how feedback evaluation should be done and modelled giving and receiving feedback for several weekly questions. Postings were scored from zero to two marks: zero points for non-substantive comments, one point for postings that demonstrated understanding and application of the topics, and two points for messages that indicated analysis, synthesis and evaluation. The researchers used this scoring scheme as an indicator of the quality of thinking in the online discussion.
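To make the rubric concrete, here is a minimal Python sketch (my own illustration, not the researchers’ instrument) of how the zero-to-two mapping might be expressed in code, assuming each posting has already been labelled with a Bloom’s-taxonomy level; the level names and the function are purely hypothetical.

# Minimal sketch of the 0-2 rubric described above (illustrative only).
BLOOM_TO_SCORE = {
    "non-substantive": 0,                            # no real engagement with the topic
    "comprehension": 1, "application": 1,            # understanding / application
    "analysis": 2, "synthesis": 2, "evaluation": 2,  # higher-order thinking
}

def score_posting(bloom_level: str) -> int:
    """Return the rubric score (0-2) for a posting labelled with a Bloom level."""
    return BLOOM_TO_SCORE[bloom_level.lower()]

print([score_posting(level) for level in ("application", "non-substantive", "analysis")])  # [1, 0, 2]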
Students reviewed and scored feedback by first reading their peers’ responses to the discussion questions, evaluating those responses using Bloom’s taxonomy, and adding remarks to support the score or scores they had given, before submitting their scores to the instructor, who reviewed the feedback, removed the reviewers’ (students’) names, compiled the feedback and emailed it to the students. Participation points were awarded to students who gave feedback on their peers’ postings, and both peer and teacher feedback ultimately contributed to students’ grades.
All of the students’ postings on 17 discussion questions were rated by two researchers using the same rubric the students had used. First, one discussion question was chosen at random and its 59 postings were evaluated independently by the two researchers, ten postings at a time, until all 59 were rated; the two raters reached 86% agreement on their final scores. The survey, which included 13 Likert-scale items and five open-ended questions, captured students’ views on the significance of aspects of feedback such as timeliness by the end of week five, while their ratings of the relative importance of peer and instructor feedback were completed in week 16. For more detail on individual opinions about the peer feedback process, telephone interviews were held with students. All data were then analyzed using t-tests and the NUD*IST qualitative analysis software.
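As a rough illustration of the kind of quantitative analysis described above, the Python sketch below computes percent agreement between two raters and runs a paired t-test on early versus late posting scores; the score arrays are invented for the example, and only the 86% agreement figure comes from the paper itself.

# Illustrative sketch only: inter-rater agreement and a paired t-test.
# The scores below are invented; the paper reports 86% rater agreement and
# no significant improvement in posting quality over the course.
import numpy as np
from scipy import stats

rater_a = np.array([2, 1, 1, 0, 2, 1, 2, 1, 0, 1])   # hypothetical rubric scores, rater A
rater_b = np.array([2, 1, 0, 0, 2, 1, 2, 1, 0, 1])   # hypothetical rubric scores, rater B
agreement = np.mean(rater_a == rater_b) * 100         # percentage of identical ratings
print(f"Percent agreement: {agreement:.0f}%")

early = np.array([1.2, 0.8, 1.0, 1.4, 0.9])           # hypothetical mean scores, early weeks
late = np.array([1.3, 0.9, 1.1, 1.3, 1.0])            # hypothetical mean scores, later weeks
t_stat, p_value = stats.ttest_rel(early, late)        # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")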
In terms of how students viewed the value and impact of peer feedback, the results showed that the participants considered feedback somewhat more essential in an online environment than in traditional learning, and that peer feedback should be timely and of high quality. The results also indicated that students viewed instructor and peer feedback as roughly equal in importance and value for maintaining the quality of postings, and these perceptions had strengthened by the end of the course. However, students still favored instructor feedback, and this preference did not change by the end of the semester. With respect to the importance of giving and receiving peer feedback, the survey showed that students perceived both as significant. Overall, only 53% (8 of the 15 students) believed that peer feedback offered many advantages, and there was no evidence of significant improvement in students’ postings from the beginning to the end of the course.

Reflection on the Research
There are a number of issues worth raising here. First, the quality of the students’ postings improved somewhat in their own view, but there is no evidence that the quality of their assignments also improved; for example, there is no discussion or evidence that students incorporated peer feedback into their assignments. Another point is that we are given a reasonable amount of information and evidence on how peer feedback could, to some extent, enhance the quality of students’ postings (since that was the purpose of the research), but very little is said about whether high-quality postings actually improve students’ learning. Apparently the quality of the feedback was the researchers’ concern, rather than the effect of the feedback on students’ learning. The students valued feedback from both peers and the instructor, but they valued instructor feedback more highly. This is not surprising, since the instructor’s feedback indicated what the instructor felt was important.
We need to remember that the subjects were postgraduate students who presumably had quite strong critical skills; the question is whether we can expect the same from students at other levels, particularly lower ones. Apparently there was no quantitative improvement in the postings, so we can also question whether there was any qualitative improvement. Surely the point of doing this research was to improve the overall quality of the students’ work, but no evidence is presented that this was the case. Another open question is whether the outcome of using CMC is equal to or better than giving feedback in the normal classroom. Of course, to a certain extent students may feel a sense of anonymity when using CMC, so the interaction and feedback is not so ‘in your face’, so to speak.
According to the research, one advantage of CMC-based peer feedback is that it can reduce the instructor’s workload. That perhaps misses the point: instructors may benefit from the fact that much of the feedback is peer-generated, but if that feedback is wide of the mark, or irrelevant, then the instructor does not benefit. As noted in the article, students often have reservations about the ability or competence of peers to give meaningful and appropriate feedback. If CMC is used, it should lead to a demonstrably better outcome than traditional methods, since the whole point of adopting a technology is to improve on existing practice. It appears that the main benefit from the students’ point of view was simply that they found the process useful, but the connection between that sense of usefulness and the actual outcome of the activity is not clear.
It appears that students could increase their score by making additional postings, yet the paper suggests they had little motivation to follow up with additional ‘high quality postings’. It is not quite clear what is meant by ‘high quality’ here; it may refer to the use of Bloom’s higher-order skills in the postings. Arguably that misses the point: the purpose of peer feedback is to improve the product being commented on, not to improve the quality of the feedback itself, and the question of what the feedback is actually for seems barely addressed. We may also have some reservations about using statistics to demonstrate ‘significant’ differences or improvements in the students’ evaluations, since what is statistically ‘significant’ may translate into very little difference in the real world.
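To illustrate that last point, the short Python sketch below (my own, with invented data) shows how a very small difference in mean rubric scores can come out statistically ‘significant’ when the sample is large, even though the effect size remains trivial.

# Illustrative sketch only: statistical vs. practical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(loc=1.00, scale=0.5, size=5000)   # hypothetical rubric scores, start of course
after = rng.normal(loc=1.05, scale=0.5, size=5000)    # mean only 0.05 higher at the end

t_stat, p_value = stats.ttest_ind(before, after)
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
cohens_d = (after.mean() - before.mean()) / pooled_sd  # effect size

# p is tiny, yet Cohen's d is only about 0.1 -- a negligible real-world difference.
print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")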
Another point worth mentioning is the results themselves. They did not convincingly support the researchers’ initial expectation that peer feedback would increase the quality of students’ online postings. Even though students were graded on the feedback they gave to their peers’ postings, this did not motivate them to aim for the highest mark (two points) and thereby produce high-quality postings. We cannot be sure whether students would have produced what was considered high-quality feedback, that is, feedback reaching the two-point standard, had they not been scored for doing so. It is apparent that what counts as a high-quality posting here simply means a posting that scored high marks. Perhaps Bloom’s taxonomy was not well suited to assessing students’ postings, and the research might have produced more meaningful results with more than 15 participants and without limiting students to only two postings.
In an online learning environment where students and the instructor do not communicate face to face, it is important for them to have an integrative interaction. Online postings, as a form of CMC, are more practical for distance learning since they save travelling costs. However, because this kind of communication is computer-based, it lacks a human touch, as the students in the paper themselves mentioned: when marking their peers’ responses to the discussion questions, they tended to feel as though they were rating strangers, since the postings were electronic and anonymous.
Since online learning requires students to be computer and technology literate, it is best suited to students at higher levels of education, which suggests this research chose appropriate subjects. Even though peer feedback was thought essential to increasing the quality of students’ online postings, the majority of students still perceived tutor feedback as far more effective. This is understandable, given that instructors are generally more knowledgeable than students, so the reliability of instructor feedback is less open to question; the quality of students’ postings should therefore not be judged by peer feedback alone. Using peer feedback as part of learning can move students towards a learner-centred approach, since they have to participate actively in the learning process rather than taking a peripheral role, and they are given the opportunity to express their views, which is valuable. As this research suggests, peer feedback does have the potential to contribute to the quality of students’ postings and should not be dismissed.
In conclusion, this research was a good attempt to discover the impact of peer feedback on improving the quality of students’ postings, and the paper can serve as a useful reference for other researchers who want to learn more about the relationship between peer feedback and high-quality postings.

References
Ertmer, P.A., Richardson, J.C., Belland, B., Camin, D., Connolly, P., Coulhard, G., Lei, J., & Mong, C. (2005). Using peer feedback to enhance the quality of student online postings: An exploratory study. Retrieved January 21, 2007, from http://jcmc.indiana.edu/vol12/issue2/ertmer.html
