CSE 150A Intro to Artificial Intelligence: Probabilistic Reasoning

undergraduate course, University of California San Diego, 2020

An academic-year offering of a 10-week introductory course on probabilistic modelling.

Background

Following my successful instruction of CSE 151A in the summer of 2019, I again served as the instructor of record in the winter of 2020. This was my first experience teaching a 10-week course.

Curriculum

This course focuses on probabilistic modelling using belief networks. A heavy emphasis is placed on following probability axioms to their logical conclusions in order to implement models that complete useful tasks. The start of the course also contains a review of probability basics. A few brief, illustrative code sketches of selected topics appear after the outline below.

  • Using Probability to Handle and Track Uncertainty.
    • Interpreting Probabilities as Beliefs.
    • Conditional Independence and Conditional Dependence (d-separation).
  • Inference on Models to Accomplish Tasks.
    • Defining Belief Networks.
    • Computing Conditional Probabilities on Belief Networks.
    • Enumeration Strategy for Inference.
    • Variable Elimination Strategy for Inference.
    • Naive Bayes Model.
    • Markov Chain Model.
    • Viterbi Algorithm for Hidden Markov Models.
  • Learning Parameters to a Model from Data.
    • Data Collection.
    • Maximum Likelihood Estimation.
    • Training Error.
    • Different Parameterizations of a Model.
  • Learning Parameters to a Model from Incomplete Data.
    • Hidden Variable Models.
    • Partially Observed Data.
    • Expectation Maximization.
    • Forward-Backward Algorithm for Hidden Markov Models.
  • Using Reasoning to Choose Actions and Strategy.
    • Markov Decision Processes.
    • Policy Iteration.
    • Value Iteration.
    • Q-Learning and Reinforcement Learning.
  • Bonus Topics (as Time Permits).
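
To give a flavor of the inference topics above, here is a minimal sketch of inference by enumeration on a tiny belief network. The network structure (Rain → Sprinkler, with both feeding WetGrass) and all probability tables are invented for illustration and are not course materials.

```python
# A minimal sketch of inference by enumeration on a tiny belief network.
# All probability tables below are illustrative numbers only.
from itertools import product

# CPTs: P(Rain), P(Sprinkler | Rain), P(WetGrass | Sprinkler, Rain)
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # given Rain=True
               False: {True: 0.40, False: 0.60}}  # given Rain=False
P_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.00}  # keyed by (Sprinkler, Rain)

def joint(rain, sprinkler, wet):
    """P(Rain=rain, Sprinkler=sprinkler, WetGrass=wet) via the chain rule."""
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet_true = P_wet[(sprinkler, rain)]
    return p * (p_wet_true if wet else 1.0 - p_wet_true)

def query_rain_given_wet():
    """P(Rain=True | WetGrass=True), summing the full joint over Sprinkler."""
    numer = sum(joint(True, s, True) for s in (True, False))
    denom = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return numer / denom

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) ~= {query_rain_given_wet():.3f}")
```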
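
For fully observed discrete data, the maximum likelihood estimation unit reduces to estimating conditional probability table entries from counts. A toy sketch with made-up data, for a binary node Y with a single binary parent X:

```python
# A minimal sketch of maximum likelihood estimation for the CPT of a binary
# node Y with a binary parent X: the estimates are empirical count ratios.
from collections import Counter

data = [(1, 1), (1, 0), (1, 1), (0, 0), (0, 0), (0, 1), (1, 1), (0, 0)]  # (x, y) pairs

counts_x = Counter(x for x, _ in data)
counts_xy = Counter(data)

# ML estimate of P(X = 1) is the fraction of samples with x = 1.
p_x1 = counts_x[1] / len(data)

# ML estimate of P(Y = 1 | X = x) is count(x, y=1) / count(x).
p_y1_given_x = {x: counts_xy[(x, 1)] / counts_x[x] for x in (0, 1)}

print(f"P(X=1) ~= {p_x1:.2f}")
print(f"P(Y=1 | X=0) ~= {p_y1_given_x[0]:.2f}, P(Y=1 | X=1) ~= {p_y1_given_x[1]:.2f}")
```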
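
For learning from incomplete data, a classic small illustration of expectation maximization is a mixture of two biased coins, where which coin produced each batch of tosses is hidden. The data and initial guesses below are invented; the loop alternates an E-step (posterior responsibilities) with an M-step (re-estimating each coin's bias).

```python
# A minimal sketch of expectation maximization for a mixture of two biased
# coins. Each entry of `flips` is the number of heads in 10 tosses of one of
# two coins; which coin was used is the hidden variable.
from math import comb

N_TOSSES = 10
flips = [5, 9, 8, 4, 7]     # heads observed in each batch of 10 tosses (toy data)
theta = [0.6, 0.5]          # initial guesses for each coin's heads probability

def likelihood(heads, p):
    """Binomial likelihood of seeing `heads` heads in N_TOSSES tosses."""
    return comb(N_TOSSES, heads) * p**heads * (1 - p)**(N_TOSSES - heads)

for _ in range(50):
    # E-step: expected heads/tails attributed to each coin, weighted by the
    # posterior responsibility of that coin for each batch.
    heads_w = [0.0, 0.0]
    tails_w = [0.0, 0.0]
    for h in flips:
        lik = [likelihood(h, theta[k]) for k in (0, 1)]
        total = sum(lik)
        for k in (0, 1):
            r = lik[k] / total
            heads_w[k] += r * h
            tails_w[k] += r * (N_TOSSES - h)
    # M-step: re-estimate each coin's bias from the expected counts.
    theta = [heads_w[k] / (heads_w[k] + tails_w[k]) for k in (0, 1)]

print([round(t, 3) for t in theta])
```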
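
Finally, for the decision-making unit, here is a small sketch of value iteration on an invented two-state MDP, repeatedly applying the Bellman optimality update until the value estimates stop changing.

```python
# A minimal sketch of value iteration on a toy two-state MDP.
# transitions[s][a] is a list of (next_state, probability) pairs,
# rewards are a function of (state, action), GAMMA is the discount factor.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = ["stay", "move"]

transitions = {
    0: {"stay": [(0, 1.0)], "move": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "move": [(0, 0.8), (1, 0.2)]},
}
rewards = {(0, "stay"): 0.0, (0, "move"): 1.0, (1, "stay"): 2.0, (1, "move"): 0.0}

def value_iteration(tol=1e-6):
    """Apply the Bellman optimality update until the values converge."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {}
        for s in STATES:
            V_new[s] = max(
                rewards[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in transitions[s][a])
                for a in ACTIONS
            )
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

if __name__ == "__main__":
    V = value_iteration()
    print({s: round(v, 3) for s, v in V.items()})
```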

Presentation Style

The primary material for this course was presented with slides, aided by digital drawing tools and supplemented with lengthy computations performed on chalkboards. An in-class student response system was used for formative assessment exercises.

Compared to the summer courses, I greatly appreciated the ability to space out the content across the 10 weeks of instruction.

Adapting to the 2020 Pandemic

This course was held in person from January to March 2020. During the final weeks of instruction, the presence of COVID-19 in southern California was becoming apparent, and teaching plans needed to adapt on short notice.

Within the span of a week or so, a series of decisions from central campus offices was handed down to instructors, often replacing decisions made just days prior. It was a time of great uncertainty, but it was becoming more and more apparent that having students physically in the classroom was no longer feasible.

For lecture material, attendance could no longer be expected. This was easy to work around for this course, since the final week was mainly bonus topics, and the course was already using UCSD’s robust podcasting infrastructure for recording and distributing the lectures.

The bigger obstacle was handling the final exam. I was making use of course materials from previous instructors, who had varying policies on the digital distribution of exam materials. Prior to UCSD eventually cancelling all in-person exams, I proactively met with other instructors to discuss how examination integrity should be maintained in an online setting.

Ultimately, I created a comprehensive take-home exam with a lengthy completion window. Students were undoubtedly stressed during these weeks by the state of current events, and I did my best to clearly communicate my decision making as an instructor to the students. Although the take-home format opened up the possibility of academic integrity violations, it is my firm belief that learners during such a stressful time deserved the benefit of the doubt, and that excessive exam-control measures would have harmed good-faith test takers.

Experience with In-Class Response Systems

This course was also my first use of in-class technology for polling students. I actively avoided requiring students to purchase additional hardware to complete the course.

Although such systems have limitations, I was pleased with how the system worked, and students expressed appreciation for it as well. It was valuable to see which distractors students were choosing, while also being able to move a little faster when all respondents chose the right answer. Some care was still needed in the latter case, since not all students would respond (especially those consuming the lectures via the podcast system).

I chose to make the questions no-credit and anonymous. Based on my experience so far, I do not see sufficient reason to change these policies. Participation was still high (among attending students), and following up with specific students on specific misconceptions seems better handled by other assessment tools.

I see why such systems are popular and often encouraged as a teaching practice. I just wish I had access to better, more fluid ways of polling a classroom without requiring additional buy-in from students.