CompSci 249, Spring 2017: CS Education Research Seminar

Schedule

Session 1 (1/11): Introduction to Peer Mentoring & Teaching

Session 2 (1/18): Helper Hours: Self-Regulation, Self-Efficacy, Growth Mindset, and Goal Orientation

Session 3 (1/25): Helper Hours: Emotional Intelligence

Session 4 (2/1): Emotional Intelligence and Feedback

Session 5 (2/8): GPS Syndrome, Helper Hours, and Discussion Sections

Notes

Session 6 (2/15): Evaluating Student Work

Notes

Session 7 (2/22): Mock 1-on-1 Reviews

Session 8 (3/1): Active Learning Modules

Session 9 (3/8): No class

Session 10 (3/22): Active Learning

Session 11 (3/29): Evaluating Teaching

Postponed

Session 12 (4/5): Active Learning Modules (ALMs)

Session 13 (4/12): Active Learning Modules

From SRI's Performance Assessment Links in Science (PALS) Information on Rubrics

Technically sound rubrics are:

  1. Continuous: The change in quality from one score point to the next must be "equal": the degree of difference between a 5 and a 4 should be the same as that between a 2 and a 1.
  2. Parallel: Similar language should be used to describe each level of performance (e.g., low skill level, moderate skill level, and high skill level), as opposed to non-parallel constructions (e.g., low skill level, understands how to perform some of the task, excellent performance).
  3. Coherent: The rubric must focus on the same achievement target throughout, although each level of the rubric will specify different degrees of attainment of that target. For example, if the purpose of the performance assessment is to measure organization in writing, then each point on the rubric must be related to different degrees of organization, not factual accuracy or creativity.
  4. Highly Descriptive: Highly descriptive language, rather than merely evaluative language ("excellent," "poor") or comparative language ("better than," "worse than"), should be used to clarify each level of performance, helping teachers and raters recognize the salient and distinctive features of each level. Such language also communicates performance expectations to students, parents, and other stakeholders.
  5. Valid: The rubric permits valid inferences about performance to the degree that what is scored is what is central to performance, not what is merely easy to see or score, or based on factors other than the achievements being measured. The proposed differences in levels of performance should a) reflect the key components of student performance, b) describe qualitative, not quantitative, differences in performance, and c) not confuse merely correlative behaviors with authentic indicators of achievement (e.g., the clarity and quality of the information presented should be a criterion in judging speaking effectiveness, not whether the speaker used note cards while speaking). Valid rubrics reduce the likelihood of biased judgments by keeping raters' attention on the achievement being measured rather than on students' gender, race, age, appearance, ethnic heritage, or prior academic record.
  6. Reliable: In traditional assessments, such as multiple-choice tests, where a student selects a response from among several options, the reliability of the score has to do primarily with the stability of the test score from one testing occasion to another in the absence of intervening growth or instruction. Establishing the reliability of a rubric for a performance assessment, however, is more complex: a reliable rubric enables different raters to assign the same score to the same piece of work (inter-rater reliability) and a single rater to score consistently across pieces and occasions (intra-rater reliability). One common way to quantify inter-rater agreement is sketched after this list.
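As a concrete illustration of the reliability criterion, inter-rater agreement is often quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The minimal Python sketch below assumes two raters scoring the same set of submissions; the TA score lists are hypothetical and stand in for two graders applying the same 1-5 rubric to ten pieces of student work.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
        n = len(rater_a)
        # Observed agreement: fraction of items the raters scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: computed from each rater's marginal score distribution.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical scores: two TAs applying the same 1-5 rubric to ten submissions.
    ta_1 = [5, 4, 4, 3, 5, 2, 4, 3, 1, 4]
    ta_2 = [5, 4, 3, 3, 5, 2, 4, 2, 1, 4]
    print(f"kappa = {cohens_kappa(ta_1, ta_2):.2f}")  # prints kappa = 0.74

Values near 1 indicate agreement well beyond chance, while values near 0 suggest the rubric's levels are not being applied consistently across raters; by the commonly cited Landis and Koch benchmarks, 0.61-0.80 counts as substantial agreement.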

Session 14 (4/19): Project Flash Talks

Session 15 (4/26): LDOC (Last Day of Classes)


Last updated: Wed Jan 11 12:17:14 EST 2017