When I read education research and literature from before the web era, I am struck by how much of it is still relevant today. In his 1990 book, “Learning Theories: An Educational Perspective”, Dale H. Schunk presents a table of the different methods used to assess student learning. I find it interesting to see what each of these looks like in today’s academic context.
The table below captures them briefly.
As a way to reflect upon my practice as an educator, I thought of jotting down some observations on how each of these has materialized in my classrooms.
As Schunk rightly points out, a student not “displaying” the right body language or behavior may or may not mean learning has happened. I have observed kids at work in CS classes a lot and have seen a wide range of misinterpretations. One kid may be absolutely convinced that they are doing the right thing, only to realize they were not, while another kid, who I thought was not even paying attention to what was being said, goes ahead and gets it right. So while observation can be a good starting point, it can’t be the prime basis for assessing learning.
A popular mode of gauging student learning in any classroom is written work. In Computer Science, writing takes many forms, from code and programs (including the attention given to internal documentation) to pieces on theoretical topics involving the use of technology in the world. Given such a range, getting a student to write critically for an examination question is a journey. With Google Docs becoming a big part of education over the past decade, collaborative work on written responses has definitely been a step up. The tools and triggers may have changed, but the cognitive load and student agency required to write critically remain the same. The factors Schunk underlines as causes for misinterpretation still hold true, especially when he says that “even when students have learned” they may not exhibit it in written form. Why not?
One of the things I quickly learnt in the classroom is that oral responses have big limits. Working in an international community often means having students in class who do not use English as a first language, which makes the whole premise of oral assessment pointless. The only time I have found it slightly useful is during in-class discussions that require kids to opine on a concept or task. Even then, I have them write down their thoughts first and then use oral means to expand on them.
Ratings by others
While I can’t say I have used this method of rating a student, the closest parallel I have worked with is standards: “How well does X demonstrate mastery in using Y?” and so forth. Focusing on the formative rather than the summative has been a major way to see how students are progressing during a course. Rating of this sort, I think, does no justice to the process a student undertakes in internalizing learning. It does not reflect the minutiae of what he or she has to undergo, in terms of both cognitive load and language processing, to attain sufficient comfort with any given concept.
Entry and exit tickets aside, self-evaluation has been a big presence in my classes. Students are often given the chance to reflect upon their work, either individually or in teams, and to critically assess its worth. The IB Learner Profile offers some guidance in this area. Inquiry-based learning is at the core of all IB subjects, and Computer Science is no different. From learning how to code (using a language such as Python or Java) to figuring out what kind of network security is required for a given location, students are encouraged to constantly think hard about the choices they make. The self-reporting Schunk refers to – as a combination of various other ways of assessing learning – is the typical hybrid collection we see today with the aid of ed tech.
For those interested, here are some more related links I found: