Friday, January 19, 2018

Unit 1 Reflection: Dirty Assessments and Definitions

This unit opened my awareness to the complexity of assessment as it has traditionally been practiced in K-12 learning environments. I oversee a job training program where assessments are used first to qualify students for the training, and then formatively throughout training to gauge students' grasp of the lesson content. We also use summative reporting to view the class as a whole and identify areas where we can improve the delivery of this blended/online learning environment.

Newton (2007) challenged my perception of "formative" and "summative" assessments, asserting there is no unified definition or use of either; rather, the intention behind an assessment is what drives its modality, delivery, and the use of its results.

I spent ten or so years in clinical research, where we begin with a hypothesis or theory and then test it using well-understood methodologies, many involving psychometrically validated instruments. Through this process we know that the results we draw from the assessments have greater predictive value for their intended use in addressing the stated problem.

In education, assessments may or may not have undergone rigorous testing. When a validated instrument is used outside the context of its originally intended population, the results can be skewed. I spent a good year poring over Nunnally & Bernstein's book on Psychometric Theory (1979) for a research project I developed:

https://books.google.com/books/about/Psychometric_theory.html?id=WE59AAAAMAAJ

It is heavy in constructivist/behavioral theory, referencing Dewey and Skinner on predictive responses to set questions that relate to test takers' behaviors and perceptions.
What I learned from this book, and from applying some of its methodologies, is how variable responses are based on the test taker's perspective and understanding of the test questions themselves! It is important that the tests used and the results gathered serve their intended purpose. Policy makers who summarize results from population-targeted studies and apply them with a broad brush to larger groups are very misleading! In a strict sense, such broad conclusions are invalid and should not be used.

On another point, there is a real practical challenge in tying assessments to an iterative learning and teaching dynamic in a way that is meaningful in real time. In research, we have great ideas about the information we would like to collect, but by the time it is gleaned, cleaned, analyzed, and written up, the results are often obsolete at publication. This is where technology can help, with rapid feedback to instructors and students. Automated testing with autograding can help reinforce learning for students. Of course, no one solution fits all: some assessments may need to be tailored to student-specific needs, such as learning and testing difficulties.
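The autograding idea above can be sketched very simply. This is a minimal illustration only (the question bank and feedback strings are made up; a real system would draw items from an LMS), but it shows the key point: feedback reaches the student the moment they answer, not after a grading cycle.

```python
# Hypothetical question bank for a short formative quiz.
QUIZ = [
    {"prompt": "2 + 2 = ?", "answer": "4",
     "feedback": "Review the addition unit."},
    {"prompt": "Capital of France?", "answer": "paris",
     "feedback": "Revisit the geography module."},
]


def grade(responses):
    """Score responses and return immediate per-item feedback."""
    results = []
    for item, given in zip(QUIZ, responses):
        correct = given.strip().lower() == item["answer"]
        results.append({
            "prompt": item["prompt"],
            "correct": correct,
            # Targeted feedback points the student back to the lesson.
            "feedback": "Correct!" if correct else item["feedback"],
        })
    return results


for r in grade(["4", "London"]):
    print(r["prompt"], "->", r["feedback"])
```

Even a sketch like this surfaces the caveat from the paragraph above: exact-match grading suits only some item types, and students with testing difficulties may need differently catered assessments.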

Ideally, I wish to use assessments that feed an ongoing, iterative process. Technology allows for timely feedback when the systems used provide reliable data that applies accurately to the student.

3 comments:

  1. I think it is so true: "...applying some of the methodologies is how variable the responses are based on the test taker's perspective and understanding of the questions themselves!" It is hard to know exactly what is going on when a question is answered in a couple of different ways. When I see this happening, I have to step back and figure out how they interpreted my question. Can I figure out what they were thinking? Was it the way I worded a problem that was misleading, or was it a misconception that was not cleared up?

  2. I liked your comment about "the variables in responses based on the test taker's perspective and understanding of the test questions." I have found that I sometimes wonder where a student comes up with an answer; once they talk to me, I can see what they were thinking. It is hard to know what the test taker was thinking. I also wonder whether assessments are being used for their intended purpose. This is true of standardized tests: what is their true purpose? Is it to test the knowledge the students have, or to test what the teacher has taught them? Assessments applied for their intended purpose, whatever that may be, can be a great tool for measuring success or failure. But assessments used for a purpose other than the one they were intended for don't have much validity.

  3. Very good point, Jeff. Technology does help us make ongoing, embedded, real-time assessments. There are many such tools, and from my observations, many educators already use them. Yet my question is to what extent we are assessment literate: do we really make the most of assessments to support the teaching dynamic at all?
