# New? Start Here

The easiest way to find something you’ll be interested in is to browse the archives, where the titles are (I hope) descriptive. The built-in Google site search is also useful.

If you’re interested in knowing more about **graduate-level classes at
Berkeley**, I write reviews on all the ones I have taken. Here they are:

- CS 267, Applications of Parallel Computing
- CS 280, Computer Vision
- CS 281A, Statistical Learning Theory
- CS 287, Advanced Robotics
- CS 288, Natural Language Processing
- CS 294-112, Deep Reinforcement Learning
- CS 294-112, Deep Reinforcement Learning (self-study)
- CS 294-115, Algorithmic Human-Robot Interaction
- CS 294-129, Deep Neural Networks (GSI/TA)
- CS 294-131, Special Topics in Deep Learning
- EE 227BT, Convex Optimization
- EE 227C, Convex Optimization and Approximation
- STAT 210A, Theoretical Statistics (Classical)
- STAT 210B, Theoretical Statistics (Modern)

When I was preparing for **the AI prelims at Berkeley** (required for PhD
students), I wrote a lot about AI topics. I also wrote a “transcript” of my
prelims.

- My Prelims *[Transcript]*
- Miscellaneous Prelim Review (Part 1)
- Miscellaneous Prelim Review (Part 2)
- Markov Decision Processes and Reinforcement Learning
- Perceptrons, SVMs, and Kernel Methods
- Notes on Exact Inference in Graphical Models
- Closing Thoughts on Graphical Models
- The Least Mean Squares Algorithm
- Expectation-Maximization
- Hidden Markov Models and Particle Filtering
- Reading Russell and Norvig
- Stanford’s Linear Algebra Review

I also write a lot about other technical areas, and I am attempting to write more about my thoughts on various research papers.

**Generic Technical Guides**, not including those related to my prelims study:

- Basics of Bayesian Neural Networks
- Mathematical Tricks Commonly Used in Machine Learning and Statistics
- Going Deeper Into Reinforcement Learning: Fundamentals of Policy Gradients
- Going Deeper Into Reinforcement Learning: Deep-Q-Networks
- Going Deeper Into Reinforcement Learning: Q-Learning and Linear Function Approximation
- Higher Order Local Gradient Computation for Backpropagation in Deep Neural Networks
- Understanding Generative Adversarial Networks
- Independent Component Analysis — A Gentle Introduction
- Ten Things Python Programmers Should Know

Notes on **Specific Research Papers** (for others, see this GitHub repository):

- Read-Through of Multi-Level Discovery of Deep Options
- Learning to Act by Predicting the Future
- Minibatch Metropolis-Hastings (a paper that I wrote)
- Understanding Deep Learning Requires Re-Thinking Generalization
- Notes on the Generalized Advantage Estimation Paper
- Some Recent Results on Minibatch Markov Chain Monte Carlo Methods

If you are interested in **what it’s like being deaf** then, while I obviously
can’t claim to speak for everyone with hearing impairments, here are a few
posts that might be informative:

- The Obligatory “Can I Lip Read?” Question
- The BVLC (BAIR) Retreat: Disaster Averted!
- Advocate for Yourself
- After a Few Weeks of CART, Why do I Feel Dissatisfied?
- The Problem with Seminars
- My Pre-College Education as a Deaf Mainstreamed Student
- New Closed-Captioning Glasses
- Hearing Aids: How They Help and How They Fall Short in Group Situations
- Technical Term Dilemma
- Why Computer Science is a Good Major for Deaf Students

Finally, I sometimes write about **the books I read**, such as in the following:

- All the Books I Read in 2017, Plus My Thoughts
- All the Books I Read in 2016, Plus My Thoughts
- Thoughts on How to Win Friends and Influence People
- Alan Turing: The Enigma
- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
- My Three Favorite Books I Read in 2015