One way to gauge student learning more frequently

I just posted about how doing a pretest/posttest assessment isn’t going to help students enrolled in a course now. So I thought I’d share one educational technology solution that nearly anyone can use in their courses. I’m going to frame this post in the context of a large-scale course with close to 300 students, but it is just as applicable in a smaller setting.

A simple way to check for understanding is a quick out-the-door survey for students to complete about the lesson. I would set this up as a very short survey in Google Forms, SurveyMonkey, or even Poll Everywhere if you wanted. I’d have two Likert-type scale questions with the usual strongly disagree, disagree, neither agree nor disagree, and so on. Here are my questions:

  1. I understand the topic of today’s lesson.
  2. I still have questions about the topic of today’s lesson.

Then I’d have one open-ended question:

  1. Please list any questions you have about the topic of today’s lesson.

The KISS principle really comes into play here. Keep it quick, easy, and to the point. There’s no need for in-depth surveys or other assessments if you are just trying to make minor adjustments to your teaching so you are more effective and the students learn more.

Will this take time? Yes, it will take some time, but not that much, and you really shouldn’t devote much time to this unless there’s a problem. The good thing about using these technologies is that each has a built-in analytics tool that makes analyzing that much data really easy.
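If you’d rather crunch the exported responses yourself, a few lines of Python will do the tally. This is just a minimal sketch with made-up column names and sample data; the actual headers in your export will depend on how you word the questions and which survey tool you use:

```python
import csv
import io
from collections import Counter

# Hypothetical CSV export; real column headers come from your survey tool.
sample_export = """understand_topic,still_have_questions
Agree,Disagree
Strongly agree,Strongly disagree
Neither agree nor disagree,Agree
Agree,Disagree
"""

def tally(csv_text, column):
    """Count how often each Likert response appears in one column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

understand = tally(sample_export, "understand_topic")
still_questions = tally(sample_export, "still_have_questions")
print(understand)       # "Agree" is the most common response in this sample
print(still_questions)
```

With a real export you would read the file instead of the inline string, but the idea is the same: a response count per Likert option tells you at a glance whether the class followed the lesson.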

Does this mean I shouldn’t do a pretest/posttest? No, still do that. It’s data, and it can help you improve over time. Just don’t base all your decisions on those data points. That’d be like making major decisions based on student standardized test scores…oh, right…can’t win ’em all…


Gauging student learning in higher education…I think we’re missing the point

I just read an article in The Des Moines Register about a new requirement for the three Iowa regents universities. The law, according to the article, requires professors to give a pretest/posttest assessment in courses with over 300 students, with that threshold dropping to 100 in fall 2015. These universities will then need to develop a continuous improvement plan for these courses and implement it. Sounds good…right?

Overall, I agree with the sentiment of the law. We should be paying more attention to students, especially in large courses. However, I think the law misses the point, or perhaps it’s the interpretation of the law that’s off. A pre/post assessment isn’t a bad idea, but it shouldn’t be the only data point. Rather, all teachers, whether they teach a large course or a small one, should be engaging in formative assessment more frequently, not just summative. A pre/post assessment is good for longitudinal data, to see whether the course is improving in the long term. But I think the spirit of the law is to help students learn better in the near term, and a pre/post assessment doesn’t do that. At least not as described in the article.

As described in the article, the professor will give a pretest on day one and a posttest on the last day, with neither score counting toward the final grade. Only then can the instructor make changes to the course. However, the professor also has the ability to collect data from students through a variety of formative assessments that would allow them to make changes during the same semester, perhaps even the same week if needed. The idea that we can’t change how we teach until the end of the semester is a fallacy at best and ridiculous at worst. Students are changing and will continue to change, so if we are only adjusting our teaching over the long term, then I think we’re missing the point. If that’s how it plays out, this new law isn’t going to do anything productive and will likely, as some have suggested, just feed the bureaucratic machine.

What’s this standards based reporting thing?

Over the past few months it seems every time I go to a conference, a meeting, or check in with my PLN, standards based reporting keeps coming up.  At first I didn’t pay much attention to it.  My focus recently hasn’t been on PK-12 student assessments as much as it has been on teacher evaluations and teacher effectiveness.  However, with the extra time in the summer to catch up and refocus, I thought I’d look into this standards based reporting thing a little more before I get back into the full swing of the school year.  So I sent a message to @mctownsley and @russgoerend to ask for some help starting off.  Russ sent me a link to what his district is doing with standards based reporting and I am surprised how much I learned.

Standards based reporting, based on my very limited research (though I’ll take Russ and the Waukee CSD as reliable sources), is about assessing students on their progress toward grade-level content standards.  The kicker is that the normal grading system isn’t used; in its place is a scheme that describes the level at which a student is performing on each standard.  What I quickly discovered I liked about standards based grading is that students aren’t being assessed on their behavior, but rather on what they know.  Behavior is still important, but instead of being grouped with the content standards it has its own separate place, which seems to make sense.
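As a rough illustration of that separation (my own sketch, not Waukee CSD’s actual scheme), a standards-based record might keep content standards and behavior in distinct buckets, with a descriptive performance level per standard instead of a single letter grade. The 4-point scale, standard names, and behavior traits below are all hypothetical:

```python
# Hypothetical 4-point performance scale; real districts define their own.
LEVELS = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}

student_record = {
    "name": "Sample Student",
    # Content standards are scored on what the student knows...
    "standards": {
        "Math.5.NBT - Place value": 3,
        "Math.5.NF - Fractions": 2,
    },
    # ...while behavior is reported separately, not mixed into the grade.
    "behavior": {
        "Completes work on time": "Consistently",
        "Works well with others": "Sometimes",
    },
}

def report(record):
    """Format a simple per-standard progress report for parents."""
    lines = [f"Report for {record['name']}"]
    for standard, level in record["standards"].items():
        lines.append(f"  {standard}: {LEVELS[level]}")
    for trait, rating in record["behavior"].items():
        lines.append(f"  {trait}: {rating}")
    return "\n".join(lines)

print(report(student_record))
```

The point of the sketch is structural: because behavior lives in its own section, a “Developing” on fractions says something about fractions, not about missing homework.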

So why does all this matter?  First and foremost, teachers are better able to design and target their efforts, since each student more or less gets an individualized report of where they are in acquiring a particular concept or skill.  This means a shift from large group instruction, where only a fraction of students will benefit, to more small group and individualized activities.  By grouping students or through individualized instruction, the type of “teaching” students receive can be more targeted to what they need, rather than trying to meet the needs of anywhere from 20 to 30 students at once.  Standards based reporting also matters because communication with parents can be clearer and more open.  Whether a student is struggling, succeeding, or has behavioral considerations, teachers, parents, and students have a much more concise way of looking at the evidence, allowing for better decision making.

However, I do have a couple of questions.  My first concerns how college admissions would react to standards based reporting.  As I think about it a little more, though, if transcripts reflect the progress students have made toward each standard, admissions offices would still be able to judge success in different subjects and determine whether the student would be a good candidate.  In fact, they would likely be better informed than with a traditional report card.  I’d wager the real problem would be how a change at the LEA level would affect the process at the IHE level.  In the end, are colleges and universities going to refuse students from a certain school because of the type of grading system it uses?  I highly doubt it.  It comes down to change, which is a long and difficult process for some.

My other burning question is: at what point is a student deemed ready to move on to the next level?  At what point do they pass the course?  Does it require satisfactory completion of all standards, the majority, or some other indicator?  I wasn’t sure what to think about this one.  I’d imagine the answer would be unique to each school.

This may seem off topic for an edtech blog, but it really isn’t.  When it comes down to educational technology, the only thing that truly matters is the education part and how technology can support it.  That means we need a logical assessment system for students so we can better design instruction with appropriate technology.  As I think about all the 1:1 initiatives in the state, having a solid assessment system in place, so stakeholders can determine whether gains are being made in each content area, is crucial.  Standardized tests are too unreliable in my opinion, while standards based reports show some promise.

Your thoughts?  Did I miss something or misrepresent a key component of standards based reporting?