CSE 151A Intro to Artificial Intelligence: Statistical Approaches
undergraduate course, University of California San Diego, 2019
Two offerings (Summer Sessions 1 and 2) of a five-week introductory machine learning course.
Background
This was my first time serving as the Instructor of Record for an undergraduate course at UCSD. It was supported by the Summer Graduate Teaching Scholars program.
Curriculum
This course focused on supervised learning and discriminative classification models. The beginning of the course also included a review of linear algebra topics, such as the various interpretations of the dot product.
- Training and Testing Discriminative Models.
- Training Error and Test Error.
- Validation and Hyperparameter Selection.
- Overfitting and Underfitting.
- Formal Definition of Bias and Variance.
- k-Nearest Neighbor Model.
- Decision Trees.
- Entropy and Information Gain (see the sketch after this list).
- Pruning.
- Interpretation of the Learned Model.
- Perceptron.
- Geometric and Algebraic Interpretations of the Update.
- Formal Proof of Mistake Bound Given the Margin.
- Kernel Trick for Extending Beyond Linear Models.
- Ensemble Learning.
- Weak vs Strong Learners.
- Boosting.
- Bonus Topics (as Time Permits).
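As a concrete illustration of one topic on this list, the following is a minimal Python sketch of the entropy and information gain computations used to score candidate decision tree splits. This is illustrative only, written for this page rather than taken from course materials:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a collection of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    """Reduction in entropy from partitioning `parent` into `splits`."""
    n = len(parent)
    remainder = sum(len(s) / n * entropy(s) for s in splits)
    return entropy(parent) - remainder

# Toy example: a 50/50 parent node partitioned by some candidate feature.
parent = ["yes"] * 5 + ["no"] * 5
left = ["yes"] * 4 + ["no"]    # mostly "yes"
right = ["yes"] + ["no"] * 4   # mostly "no"
print(information_gain(parent, [left, right]))  # ~0.278 bits
```

A greedy decision tree learner evaluates this quantity for each candidate split and chooses the one with the highest gain.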
Presentation Style
This course was presented as a chalkboard talk in a large lecture hall. Student interaction was encouraged by frequently posing questions to the audience during the steps of a worked algorithm example, as well as open-ended questions about the nature of data collection and modeling choices.
Handwritten notes for each lecture were prepared and distributed to students.
Experience with a Flipped Classroom
During the second week of the first five-week session, I had to travel to an academic conference. To accommodate this, I designed that week as a spin on the idea of a flipped classroom. These decisions were communicated clearly to students at the beginning of the course.
Because the course material is segmented (a variety of distinct classification models), the alternative style could be applied just to the content on decision trees. Normally, the course met twice per week for lectures. That week, the first lecture was cancelled, and students were instead asked to read through the lecture notes on their own. During the second scheduled lecture, my faculty mentor gave a guest lecture on decision trees, moving quickly through the basic setup and spending more time on the trickier aspects of the setting.
This represented a sizable reduction in direct instructional hours on this material, but it was still effective in conveying the content. Time in the third week was spent reviewing and wrapping up the material.
Feedback from students was mixed, but overall supportive and understanding of the circumstances. The course was designed such that poor performance on this week's content would not, by itself, lead to a poor overall grade. Discussion sections and office hours also gave plenty of opportunities to solidify this material before exams and deadlines.
Feedback from the guest lecturer was positive. They felt the students were more engaged and curious than in their past experience with this material. The lecture went well and covered a sizable portion of the material in the reduced time.
A confounding variable here is that this week also saw a number of students drop the course. This week coincided with the normal drop deadline for a summer course, and it is not unusual for students to drop summer courses for reasons ranging from personal circumstances to the shock of transitioning from a ten-week schedule to a mere five weeks. I do not have access to the reasons behind the drop decisions, so I cannot assess how much this flipped experience contributed. However, the second session did not flip its second week, and proportionately fewer students dropped.
Reflecting on this experience, my personal teaching philosophy on flipped classrooms is that they can be highly effective when they work, but they likely have a disparate impact on different learners. Not all students have lives and spaces outside the lecture room conducive to digesting lecture material on a regular basis. I might consider exploring these techniques again in a segmented course, but I actively avoided them during my remote 2020 courses, where the later material builds heavily on the early material. I felt that trying to flip a classroom on foundational material poses too great a risk of putting additional strain on the less prepared students. When there is intrinsic motivation, and students are picking and applying topics of their own choosing later in a course, I do think such flipped tactics are more fruitful.
Classroom Observation
I was observed by a qualified observer from the Engaged Teaching Hub. They attended one of my lectures in the middle of the course, in which I presented the formal mathematical definitions for the bias and variance of a model's concept class. This was some of the drier, more abstract technical material: a formal definition is presented, and then examples are worked to see how the definition applies.
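For reference, one common formulation of these definitions is the bias-variance decomposition of expected squared error at a fixed input; the lecture's exact notation may have differed:

```latex
% Bias-variance decomposition at a fixed input x, where y = f(x) + \varepsilon,
% \operatorname{Var}(\varepsilon) = \sigma^2, and \hat{f}_D is the model trained on dataset D.
\mathbb{E}_{D,\varepsilon}\!\left[ \big(y - \hat{f}_D(x)\big)^2 \right]
  = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[ \big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2 \right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```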
Along several metrics, I was observed demonstrating effective teaching practices where applicable. A notable shortcoming was in employing classroom technology to further encourage peer discussion and engagement, which I factored into my post-2019 teaching methods.
Experience with Learning Reflections and Participation Credit
As part of the course, I implemented online “exit ticket” surveys, asking students to reflect on the material of that day's lecture and to explore which aspects seemed unclear. The surveys also gave me continual feedback on my lecturing style.
With two offerings of the course, I had the opportunity to change things for the second session based on my experience with the first. Unfortunately, the quick timeline of the summer makes massive changes impractical; the one area I did change was how participation in these tickets was handled.
In the first session, the tickets were optional but repeatedly encouraged. The reflections not only benefit the students by exercising their metacognition skills, but also give me, as the instructor, a feeling for how the material is being perceived. In the second session, a small portion of the overall grade was assigned to participation in the exit tickets.
The first noticeable difference was the quantity of submissions. The first session had meager participation at the beginning, and it dwindled further as the course progressed; the second session had a much broader count of submissions. The quality of the submissions also differed, with the second session containing more “low effort” reflections. All of this was roughly in line with expectations.
The unexpected impact was on the midcourse feedback survey, which anonymously asks students to evaluate the course and give direct suggestions and criticism of the course structure. This survey is purely optional, but contains a minor incentive: students can suggest bonus topics for the final week of the course (thankfully, artificial intelligence is a topic that tends to garner student interest). In the second session, participation in this survey dropped drastically, along with the quality of the submissions. I attribute this to a form of fatigue: after being required to complete the exit tickets, this survey (despite differing from the tickets in content and type of questions) was no longer seen as worthy of a student's time.
When invited to speak at a roundtable for summer instructors, I presented my experience with this minor course adjustment. It also led me to reassess the design and presentation of reflection assignments in the courses I taught afterward.