Unit 1 Reflection
This unit opened my awareness to the complexity of assessments as they have traditionally been practiced in K-12 learning environments. I oversee a job training program where assessments are first used to qualify students for the training, and then used formatively throughout training to assess students' grasp of the lesson content. We also use summative reporting to view the class as a whole and identify areas where we can improve the delivery of this blended/online learning environment.
Newton (2007) challenged my perception of "formative" and "summative" assessments, asserting there is no unified definition or use of either; rather, the intention behind an assessment is what drives its modality, delivery, and the use of its results.
I spent ten or so years in clinical research, where we begin with a hypothesis or theory and then test it using well-understood methodologies, many involving psychometrically validated instruments. Through this process, we know that the results drawn from the assessments have greater predictive value for the intended use in addressing the stated problem.
In education, assessments may or may not undergo rigorous testing. When a validated instrument is used outside the population for which it was originally intended, the results can be skewed. I spent a good year poring over Nunnally & Bernstein's book Psychometric Theory (1979) for a research project I developed:
https://books.google.com/books/about/Psychometric_theory.html?id=WE59AAAAMAAJ
The book is heavy in constructivist/behavioral theory, referencing Dewey and Skinner on predicting responses to set questions that relate to test takers' behaviors and perceptions.
What I learned from this book, and from applying some of its methodologies, is how much responses vary based on the test taker's perspective and understanding of the test questions themselves! It is important that the tests used, and the results gathered, serve their intended purpose. Policy makers who take results from population-targeted studies and apply them with a broad brush to larger groups are deeply misleading! In a strict sense, such broad conclusions are invalid and should not be used!
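One standard tool from psychometric theory for checking how consistently a set of items hangs together is Cronbach's alpha, an internal-consistency coefficient discussed at length in texts like Nunnally & Bernstein's. As a rough sketch (the respondent data below is invented for illustration):

```python
# Sketch: Cronbach's alpha, a classic internal-consistency measure
# from psychometric theory. The response matrix is hypothetical data.

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item scores."""
    k = len(scores[0])  # number of items
    n = len(scores)     # number of respondents

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents answering four Likert-style items (made-up numbers)
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

A high alpha suggests respondents answer the items consistently, but it says nothing about whether the instrument is valid for a new population, which is exactly the misuse described above.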
On another point, there is a real practical challenge in tying assessments to an iterative learning and teaching process so that the feedback is meaningful in real time. In research, we have great ideas about the information we'd like to collect, but by the time it is gleaned, cleaned, analyzed, and written up, the results are often obsolete at publication. This is where technology can help by providing rapid feedback to instructors and students: automated, autograded testing can reinforce learning for students. Of course, no one solution fits all, as some assessments may need to be tailored to address student-specific needs, such as learning and testing difficulties.
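To make the autograding idea concrete, here is a minimal sketch of the kind of immediate, per-item feedback loop described above; the quiz items, answer key, and feedback messages are all invented for this example:

```python
# Minimal autograder sketch: scores a short quiz and returns immediate
# per-item feedback instead of a delayed summary report.
# The items, answer key, and feedback text are hypothetical.

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

FEEDBACK = {
    "q1": "Review the section on formative assessment.",
    "q2": "Revisit the purpose of summative reporting.",
    "q3": "See the unit on validated instruments.",
}

def autograde(responses):
    """Return (score, feedback notes) for a dict of student responses."""
    score = 0
    notes = []
    for item, correct in ANSWER_KEY.items():
        if responses.get(item) == correct:
            score += 1
        else:
            notes.append(f"{item}: incorrect. {FEEDBACK[item]}")
    return score, notes

score, notes = autograde({"q1": "b", "q2": "c", "q3": "a"})
print(f"Score: {score}/{len(ANSWER_KEY)}")
for note in notes:
    print(note)
```

In practice, a learning management system would handle this, but the principle is the same: the student gets targeted feedback the moment the assessment is submitted, rather than weeks later.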
Ideally, I wish to use assessments that feed into an ongoing, iterative process. Technology allows for timely feedback when the systems used provide reliable data that accurately reflects the student.