CompSci 249, Spring 2017: CS Education Research Seminar
Schedule
Session 1 (1/11): Introduction to Peer Mentoring & Teaching
- Introductions & Goals
- Qualities of Good UTAs
- Course Syllabus and Details
- Readings
Session 2 (1/18): Helper Hours: Self-Regulation, Self-Efficacy, Growth Mindset and Goal Orientation
- Administrative:
- Discussion assignments
- Helper hours
- Helper hours czar
- Git/assignment maven?
- Discussion Prep:
- Eclipse issues
- Where to get help with Java?
- WOTO - How to lead groupwork?
- CirclesCountry (see the sketch after this session's outline)
- Tool: ASCEND remote collaboration
- Reflection: Qualities of a good UTA/peer mentor?
- Have you had a peer mentor? What was positive or negative about that experience?
- Have you ever been a peer mentor? What made you successful or unsuccessful?
- What qualities should a peer mentor have?
- What might make peer mentoring difficult? Does it depend on the mentor? The mentee? The environment?
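For the CirclesCountry discussion prep above, here is a minimal Java sketch of the classic solution, assuming the usual TopCoder-style statement (count the circle borders that must be crossed to travel between two points); the leastBorders signature follows that problem, and the names are otherwise illustrative:

    public class CirclesCountry {
        // Circles are given by centers (X[i], Y[i]) and radii R[i]; the trip
        // runs from (x1, y1) to (x2, y2).
        public int leastBorders(int[] X, int[] Y, int[] R,
                                int x1, int y1, int x2, int y2) {
            int crossings = 0;
            for (int i = 0; i < X.length; i++) {
                boolean containsStart = inside(X[i], Y[i], R[i], x1, y1);
                boolean containsEnd = inside(X[i], Y[i], R[i], x2, y2);
                // A border must be crossed exactly when the circle encloses
                // one endpoint but not the other.
                if (containsStart != containsEnd) crossings++;
            }
            return crossings;
        }

        // Strictly inside the circle; comparing squared distances avoids
        // sqrt and floating-point rounding.
        private boolean inside(int cx, int cy, int r, int px, int py) {
            long dx = px - cx, dy = py - cy;
            return dx * dx + dy * dy < (long) r * r;
        }
    }

The point to draw out in groupwork is that no geometry beyond a point-in-circle test is needed once students see the crossing condition.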
Session 3 (1/25): Helper Hours: Emotional Intelligence
Session 4 (2/1): Emotional Intelligence and Feedback
- Tools:
- Viewing student code on Git
- Sakai Gradebook for entering discussion scores
- Gradescope for entering assignment scores
- My Digital Hand for managing helper hours
- Issues
- High wait times
- Long interactions
- Fairness
- Unequal distribution of instruction time
- Starvation
- Data collection
- Interaction model
- Students: be ready.
- UTAs: keep it focused and brief.
- UTAs: suggest next steps.
- Matching mechanism: encourage rapport.
- Assessment: Emotional Intelligence Self-Assessment
For each of the following scenarios, consider these prompts:
- What factor might be contributing to this situation?
- How does this situation make you feel?
- What might be the best way to react?
Sample emotional intelligence scenarios:
- A student shows up late to an appointment. They seem flustered and unfocused. When you begin discussing some of the weak points in their work, they begin to cry.
- A student gets increasingly frustrated when listening to your feedback on his recent work. Eventually he has a hostile outburst, blaming your teaching skills for his difficulty with the assignments.
- A student appears less motivated than before and is not taking the work as seriously. When you meet with her, she makes it clear that she does not feel challenged and can finish her work much faster than her classmates.
Conclusions:
- How are emotional intelligence and self-efficacy related?
- Even when students complete programming assignments successfully, they may still perceive their own competence negatively. Why is emotional intelligence important in this context?
- In what ways might you encourage emotional intelligence and self-reflection in your students?
- Discussion Prep: Discussion 3 draft
Session 5 (2/8): GPS Syndrome, Helper hours, and Discussion Sections
Notes
Session 6 (2/15): Evaluating Student Work
Notes
Session 7 (2/22): Mock 1-on-1 Reviews
Session 8 (3/1): Active Learning Modules
Session 9 (3/8): No class
Session 10 (3/22): Active Learning
- Reading
- Active Learning Material from Mt. Holyoke
- Assignments:
- Topic selection for:
- Active Learning Modules
- Java Basics: Objects & Classes
- Java Basics: Collections & Arrays
- Java Basics: Functions & Parameters
- Java Basics: Static vs. Dynamic
- Debugging strategies
- Union-Find algorithms (see the sketch at the end of this session's outline)
- Linked lists
- Recursion
- Recognizing recurrences
- Code tracing
- Big-Oh from code
- Binary Trees
- Others?
- Projects
- Create an assignment for CompSci 201
- Create a tool to support automation of an activity for CompSci 201
- Create a proposal for a new topic of study OR a new way to approach a current course topic
- Other?
- Reflection: Discussion Reflection
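For the Union-Find topic above, a minimal Java sketch of the structure an ALM might build toward; this assumes the standard weighted quick-union with path compression rather than any particular CompSci 201 starter code:

    public class UnionFind {
        private int[] parent; // parent[i] is i's parent; a root points to itself
        private int[] size;   // size[i] is the tree size when i is a root

        public UnionFind(int n) {
            parent = new int[n];
            size = new int[n];
            for (int i = 0; i < n; i++) {
                parent[i] = i;
                size[i] = 1;
            }
        }

        // Find the root of x, halving the path as we walk up.
        public int find(int x) {
            while (x != parent[x]) {
                parent[x] = parent[parent[x]]; // point to grandparent
                x = parent[x];
            }
            return x;
        }

        // Merge the sets containing a and b; the smaller tree joins the larger.
        public void union(int a, int b) {
            int ra = find(a), rb = find(b);
            if (ra == rb) return;
            if (size[ra] < size[rb]) { int tmp = ra; ra = rb; rb = tmp; }
            parent[rb] = ra;
            size[ra] += size[rb];
        }

        public boolean connected(int a, int b) {
            return find(a) == find(b);
        }
    }

Weighting keeps the trees shallow and path compression flattens them further; tracing a few union calls by hand makes a natural active-learning exercise.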
Session 11 (3/29): Evaluating Teaching
Postponed
Session 12 (4/5): ALMs
- Reflection: Discussion Reflection
- Classwork:
- ALMs
- Project elevator pitches
- Need: What problem have you observed in CompSci 201?
- Solution: How do you solve this problem?
- Impact: How will this impact students or UTAs in the course?
- Plan: How will you make it work?
- Administrative:
- Autocomplete
- MDH reminder
- Test grading and scanning
Session 13 (4/12): Active Learning Modules
From SRI's Performance Analytics in Science: Information on Rubrics
Technically sound rubrics are:
- Continuous: The change in quality from score point to score point must be "equal": the degree of difference between a 5 and a 4 should be the same as between a 2 and a 1.
- Parallel: Similar language should be used to describe each level of performance (e.g., low skill level, moderate skill level, and high skill level), as opposed to non-parallel constructions (e.g., low skill level, understands how to perform some of the task, excellent performance).
- Coherent: The rubric must focus on the same achievement target throughout, although each level of the rubric will specify different degrees of attainment of that target. For example, if the purpose of the performance assessment is to measure organization in writing, then each point on the rubric must be related to different degrees of organization, not factual accuracy or creativity.
- Highly Descriptive: Highly descriptive evaluative language ("excellent," "poor") and comparative language ("better than," "worse than") should be used to clarify each level of performance in order to help teachers and raters recognize the salient and distinctive features of each level. It also communicates performance expectations to students, parents, and other stakeholders.
- Valid: The rubric permits valid inferences about performance to the degree that what is scored is what is central to performance, not what is merely easy to see or score, or based on factors other than the achievements being measured. The proposed differences in levels of performance should a) reflect the key components of student performance, b) describe qualitative, not quantitative, differences in performance, and c) not confuse merely correlative behaviors with authentic indicators of achievement (e.g., clarity and quality of information presented should be a criterion in judging speaking effectiveness, not whether the speaker used note cards while speaking). Valid rubrics reduce the likelihood of biased judgments of students' work by focusing raters' attention on factors other than students' gender, race, age, appearance, ethnic heritage, or prior academic record.
- Reliable: In traditional assessments, such as multiple-choice tests, where a student selects a response from among several options, the reliability of the score has to do primarily with the stability of the test score from one testing occasion to another in the absence of intervening growth or instruction. Establishing the reliability of a rubric for a performance assessment, however, is more complex. A reliable performance assessment rubric enables:
Session 14 (4/19): Project Flash Talks
Session 15 (4/26): LDOC
Last updated: Wed Jan 11 12:17:14 EST 2017