I found it a bit challenging to incorporate all of the articles into my blog post this week; however, I learned quite a bit from them, even the ones that didn’t make it into the original post. Kohn’s (2007) point that “cheating is relatively rare in classrooms where learning is genuinely engaging and meaningful to students and where a commitment to exploring significant ideas hasn’t been eclipsed by a single-minded emphasis on ‘rigor’” really stood out to me. I think we, whether in K-12 or higher education, tend to place all the blame for cheating on students and do not take the time to reflect on how the environment may be encouraging the behavior. As an instructional designer, I can see that engaging, relevant courses tend to be far less plagued by cheating. Further, I think students excel, and competition is less of a problem, in collaborative environments that focus on learning for its own sake, take advantage of peer feedback and support, and require students to perform and demonstrate usable skills.
Another takeaway came from the Eberly Center’s Grading vs. Assessment (n.d.) article. I’d like to move away from grades that combine outcomes with behaviors or activities, such as participation, and focus solely on outcomes. By doing so, my students will receive feedback on specific areas of strength and weakness, and I will be able to monitor overall class performance, “[tracking] the impact of instructional or curricular changes on specific learning objectives.”
I was able to interact with Cherie, Jule, and Kendra on their blogs and with Gerald on mine. I love how Cherie is using Google Forms to pre-test her students. I asked whether she intended to use the survey as a performance-based assessment of students’ ability to create a sentence demonstrating proper use of prepositions or prepositional phrases. She is doing a great job of differentiating for her students; however, I suggested she consider how she will use formative assessment to provide necessary feedback along the way. I encouraged Jule to revisit Popham’s explanation of criterion-referenced assessment, as her description suggested she wanted to move away from a criterion-referenced test. Given her explanation of the activity and her comments in class, I believe she will, in fact, use a criterion-referenced analysis of the data. Kendra and I discussed the nature of formative assessments and the concept of extra credit. While I agree with how she uses it, I encouraged her to consider a different term, as “extra credit” often drives our students’ grade-centric perspectives and doesn’t really reinforce the concept of learning for the sake of learning. Finally, Gerald and I bemoaned the fact that it’s hard to make the study of assessment interesting. That said, we both like the idea of performance-based authentic assessments, though these can be difficult to create in environments that are so content driven and tend to rely more on breadth than depth.
Eberly Center for Teaching Excellence. (n.d.). Grading vs. assessment of learning outcomes: What’s the difference? Whys & hows of assessment. Retrieved from http://www.cmu.edu/teaching/assessment/howto/basics/grading-assessment.html
Kohn, A. (2007). Who’s cheating whom? Phi Delta Kappan. Retrieved from http://www.alfiekohn.org/article/whos-cheating/