I think I've earned all of the above after this semester. It was dreadful. But I want to reiterate: it was not the chemistry modeling curriculum I found dreadful. Rather, it was the combination of a lousy group of especially unmotivated students, attempting a new curriculum, and a number of other factors out of my control.
I'm not going to lie, I was truly worried about my students' EOC scores. As I have said before, a large portion of our annual teaching evaluation comes from student growth on their state exams. Across the board, my students were not showing the strong mastery I usually see.
The EOC results:
Spring 2015 - Modeling Curriculum
58 total students (all standard)
Average Score: 83.5
Median Score: 84
Lowest Score: 66
Highest Score: 96
Failures: 3
2013-2014 - Traditional Instruction
79 total students (all standard)
Average Score: 84.0
Median Score: 83
Lowest Score: 66
Highest Score: 94
Failures: 3
As you can see, the differences are minuscule. My class average was half a point lower, but my median was a point higher than last year. My lowest scores were the same, but my highest score was higher. It's worth noting that last year's junior class was collectively one of the highest-performing groups of students we've ever had at this school. Overall, they were a bright and motivated bunch. This year's juniors are collectively considered the polar opposite of the class that preceded them. I only say that to illustrate why I'm not kicking myself over the half-point drop in class average.
What I am kicking myself about is the failures. I had the same number of failures this year, but with fewer students: 3 out of 58 is roughly a 5.2% failure rate this school year vs. 3 out of 79, about 3.8%, last school year. If that wasn't bad enough, 2 of those 3 failures were complete surprises to me. I had targeted a list of about 8 students I was seriously concerned about. Of those 8, only 1 failed, which I had unfortunately anticipated. But my other 2 failures were students with B and C averages in class. They performed well on my assessments and on the practice EOC. I am dumbfounded as to why neither passed. Neither was a child prodigy, but I had zero indication that either was at risk of failing. One of them actually scored higher on her practice EOC than on the actual EOC, and she's not one I would even suspect of cheating. Consider me stumped.
I gave the Chemistry Concepts Inventory as my final exam. This is a hard test geared toward college-level students, so I don't base their "grade" on their score. Instead, I give it to them as a pre-test/post-test, and they earn a 100 on the final if they show improvement (I don't tell them their pre-test scores).
I plugged their data into another homemade Excel data tracker, hoping to identify some trends on where my students are weakest and strongest:
[Screenshot: Pre-test data, with the unit/topic listed at the top of each question column. Blue indicates a topic I felt was strongly covered. Yellow marks students I had no data on because they transferred into my class late.]
[Screenshot: Post-test data, including the difference in score for each student. The student with a -8 was absent and hasn't taken the post-test yet. The -3 difference on another student is correct, though. The #VALUE entries are students I didn't have data on or who were exempt from the post-test due to school activities.]
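Out of curiosity, here is a rough sketch of what the same per-question analysis could look like in Python instead of Excel. This is not my actual tracker: the file name, column layout, and answer key below are all made up for illustration.

```python
import pandas as pd

# Hypothetical layout: one row per student per sitting, a "test" column
# ("pre" or "post"), and columns Q1..Q22 holding the answer each student chose.
responses = pd.read_csv("cci_responses.csv")   # made-up file name

# Made-up answer key -- the real CCI key would go here.
answer_key = pd.Series({f"Q{i}": "A" for i in range(1, 23)})

# True wherever a student's choice matches the key.
correct = responses[answer_key.index].eq(answer_key)

# Percent of students answering each question correctly, pre vs. post.
pct_correct = correct.groupby(responses["test"]).mean().mul(100).round(1)
print(pct_correct.T)   # rows = questions, columns = pre/post percent correct
```

That last table is essentially what I was eyeballing in the spreadsheet: one column of pre-test percentages and one of post-test percentages for each question.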
The data:
Pre-Test
54 students
Average Score: 5.6 out of 22 questions
Highest Score: 11 out of 22
Lowest Score: 0 out of 22
Post-Test
52 students
Average Score: 6.8 out of 22 questions
Highest Score: 12 out of 22 (not the same student who had the highest pre-test score)
Lowest Score: 1 out of 22
Average Gain: 1.2
Highest Gain: 6
Number of Students with Gains: 35
Number of Students with No Change: 10
Number of Students with Regression: 7
One student (whom I've had severe behavioral/disciplinary issues with all year) actually scored 6 points worse on the post-test than the pre-test. Lovely.
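If you wanted to reproduce the gain summary above from a spreadsheet export, it's a few lines of the same kind of Python. Again, just a sketch; the file and column names are invented:

```python
import pandas as pd

# Assumed layout: one row per student, with total "pre" and "post" scores out of 22.
totals = pd.read_csv("cci_totals.csv")   # made-up file name
totals["gain"] = totals["post"] - totals["pre"]

print("Average gain:", round(totals["gain"].mean(), 1))
print("Students with gains:", (totals["gain"] > 0).sum())
print("Students with no change:", (totals["gain"] == 0).sum())
print("Students with regression:", (totals["gain"] < 0).sum())
```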
I tried tracking the questions to find the strongest and weakest topics and patterns in what students answered correctly and incorrectly, but the data was all over the place. I couldn't identify many real patterns. Overall, I think the majority of my students were completely guessing both times they took this test. Even the students who had no change in score didn't answer the same questions correctly both times.
The only two questions where I saw a noticeable change in results were #7 and #8, which are related. Question #7 is a true/false question about whether matter is destroyed when a match burns, and question #8 asks for the reason behind the answer to #7. 75% of students answered those questions correctly on the post-test, compared to 46% and 52% respectively on the pre-test. Nice gains, although I have to shake my head at the 25% who still managed to answer them wrong...