My Blog Posts, in Reverse Chronological Order


The Same Sex Marriage Ruling, and a Paradox

Jun 27, 2015

Yesterday represented a historic moment for America as the Supreme Court legalized same sex marriage nationwide. I'm happy about and proud of this result, because I have supported same sex marriage and gay rights ever since I first learned about the issue. That was back in 2007, before prominent Democrats such as President Obama and the soon-to-be 2016 presidential candidate Hillary Clinton explicitly pledged their support.

As I ponder the ruling and its various consequences, I'm noticing from Facebook and other sources (e.g., Scott Aaronson's blog) that support for same sex marriage is practically universal among computer scientists in academia. To this day, not one has told me that he or she opposes same sex marriage.

Yet this signals an interesting paradox.

What’s one of the biggest concerns regarding diversity in computer science? That there are too many Caucasian (and Asian) males1!

And what’s one of the most notable characteristics regarding the Republican party — now infamously known as the anti-gay party2? That there are too many Caucasian males3!

So how come academics support same sex marriage?

I am a biracial, White and Asian male, and view myself as a moderate Democrat, so I am probably one of many examples of this paradox. Perhaps I can offer some opinions on why it exists:

  • Academia is liberal. This is not controversial, with notable political figures stating that universities are liberal bastions. As Michael Bloomberg commented in his Harvard 2014 commencement speech, ninety-six percent of all faculty campaign donations for the 2012 U.S. presidential election race went to Obama4. Computer science just happens to be one subfield of academia, and there is no obvious reason why we should be more liberal or less liberal than other fields.

  • Computer science is also a subfield of, well, science, and being a well-educated scientist is inversely correlated with religious fervor (and positively correlated with atheism), which is in turn positively correlated with support for same sex marriage. Richard Dawkins, in his thought-provoking book The God Delusion, eloquently dissects these observations and their consequences. By the way, I highly recommend his book.

  • Here's a reason that's specific to our field: one of the founders of computer science was Alan Turing, who was arguably one of the most important gay figures in history5. His story, that of being the most important British code-breaker during World War II and one of the pioneers of computer science … and then being prosecuted for displaying homosexual behavior in private (really?) and committing suicide, is heart-wrenching to digest. The Imitation Game, while not the most factually accurate account of his life, shows how our opinions of homosexuals have changed over the past few decades. I don't think it's a coincidence that some prominent Republicans who support same sex marriage have a gay relative6. Perhaps computer scientists feel an obligation to respect the father of the field.

There's a lot more that I'd like to cover, but for now I'm taking pleasure in the current ruling and thinking about its consequences. Hopefully we'll see other countries continue to follow suit. I'm really wondering what will happen in Japan, the country of origin for my father's family. There's a really nice map of LGBT rights by country or territory on Wikipedia, but there's not a whole lot of dark blue (I think that's the color for marriage … I'm colorblind) around Asia. I don't know why Asian countries seem to lag behind the curve on gay rights. Still, given how much attitudes have changed in recent times, perhaps it's not too far-fetched to suggest that within fifty years, same sex marriage will be legal in Japan (as well as China and South Korea). Actually, one might be able to make a reasonable argument for every country except North Korea and the Middle Eastern ones.

We’ll see what the future holds.


  1. The issue of diversity in computer science and other STEM fields has been well-documented, and a quick Google search will lead to articles such as this one.

  2. And yes, Governor Jindal, staying firm against gay marriage will only continue to damage the image of the Republican Party and deter young voters like me. Ironically, Governor Jindal is Indian-American, and Asians tend to vote Democratic.

  3. The Pew Research center is one possible source for learning about the breakdown of party affiliation among various demographic categories.

  4. By the way, Michael Bloomberg also delivered the 2014 Williams College commencement speech, so I saw him in person. He did not mention the issue of liberal academia; instead, he talked about cracking down on the illegal gun market.

  5. Seeing lists of important gay people like these perplexes me. Such lists should only contain people whose sexuality is known without a doubt. Alan Turing fits this criterion; guys like Leonardo da Vinci (really?) do not.

  6. I can immediately think of several right off the bat: Rob Portman, Charlie Baker, and Dick Cheney.










Reading Russell and Norvig

Jun 22, 2015

I had previously mentioned that the classic AI textbook by Russell and Norvig (2010) was fairly easy reading compared to most computer science textbooks. Now that I've recently gone through the first half of the book (about 500 pages) in the span of two weeks, I stand by my claim. Reading all those pages, however, does not necessarily mean I absorbed the material as well as I would like, so in this post, I'll give a brief overview of what's covered in the first half of the book.

The first two chapters serve as an introduction to AI: a review of how the field came to be, and of how we wish to design AI agents that are rational, meaning they make decisions that "make sense" according to some utility definition. There isn't that much to see there.

Part II: Problem Solving

This part encompasses chapters 3 through 6 and is about problem-solving. Yes! Now we’re onto something that’s interesting, and something that’s also covered in every AI course. And every algorithms course, because what’s in chapter 3? Search algorithms on graphs!

Chapter 3: Solving Problems by Searching

The following list outlines the most important search algorithms to know:

  • Breadth-First Search (BFS), a strategy where we start from a root node, expand it to generate its children, and then put those children in a queue (i.e., FIFO) to expand them later. This means all nodes at depth d of the tree get expanded before any node at depth d+1 gets expanded. The goal test is applied as soon as nodes are generated (i.e., before adding them to the queue), because there's no benefit to waiting until they are expanded. BFS is complete and optimal (when step costs are uniform), but it also suffers from horrible space and time complexity.

  • Uniform Cost Search (UCS) is like BFS, except that it orders the nodes to be expanded in a priority queue based on a path cost function g(n). One would want to use UCS rather than BFS when the step cost (i.e., the cost of traversing from one node to another) is not uniform. Technically, this means that without uniform step costs BFS isn't optimal, but usually we are smart enough not to apply BFS in those situations. UCS applies the goal test when a node is expanded, i.e., when it is pulled off the queue, which is later than when BFS would check. UCS is complete and optimal (so long as step costs are strictly positive, to prevent infinite loops of zero-cost actions), but it can suffer from the same complexity issues as BFS.

  • Depth First Search (DFS) expands the deepest node in the search frontier, so it stores the frontier as a LIFO stack. Unfortunately, this means that DFS can follow one really long path forever without ever going back to check the other unexpanded nodes near the beginning, so it's clearly non-optimal. The real savings for DFS comes with space complexity, because a node can be removed from memory once all of its descendants have been fully explored.

  • Depth-Limited Search is like DFS, except that nodes at a depth limit are treated as if they had no children. This can avoid DFS spiraling off in wild directions, but it also means that we will never reach the goal if the shallowest goal is beyond the depth limit.

  • Iterative Deepening Search (IDS) is another version of depth-first search: here we run depth-limited search multiple times, increasing the depth limit by one each time so as to gradually get closer to the goal. It's not as slow as one might think, because even though the shallow levels get re-expanded on every iteration, most of the nodes in a tree live near the bottom, so the repeated work near the top is comparatively cheap.

  • Bidirectional Search means that we run two searches, one forward from the starting state and another backward from the goal state. The real challenges are how to make the two search frontiers meet in the middle and how to search backward at all, which requires being able to compute predecessors. In the n-queens problem, for instance, it's not clear how to search backward from a goal we haven't constructed yet.

  • Greedy Best-First Search uses a heuristic function h(n) (explained later) to choose which node to expand, so nodes are stored in a priority queue according to h(n). This may seem a lot like UCS (and it is), but here h(n) is the estimated cost of reaching the goal from node n, not the path cost g(n) accumulated so far.

  • A-Star Search fixes the problems with greedy best-first search by supplementing h(n) with the path cost seen so far, g(n). So here, nodes are stored in a priority queue based on f(n) = g(n) + h(n), where the first term is the known cost so far and the second is our estimate of the future cost. A-Star Search is probably the most widely used form of best-first search.

For problems that use heuristic functions (i.e., h(n) in the above notation), one would like heuristics that are admissible and consistent, because those properties make A-Star search complete and optimal. Admissible means that the function never overestimates the cost of reaching the goal, and consistent means that the function obeys a form of the triangle inequality1: h(n) ≤ c(n, n') + h(n') for every successor n' of n. Every consistent heuristic is also admissible, but the reverse is not true. Probably the canonical example of a consistent heuristic in route-finding problems is the straight-line distance from one city to another (well, assuming that our travel speed would be uniform across all possible routes).

If one has multiple admissible heuristics for a problem and none dominates the others, then we should take the max of them at each node, h(n) = max(h_1(n), …, h_m(n)). Do not take the sum if we want the result to remain admissible!
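
To make the f(n) = g(n) + h(n) bookkeeping concrete, here is a minimal A* sketch over an explicit graph; it is only an illustration (the graph, heuristic values, and node names are made up), not the book's pseudocode.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: dict mapping node -> list of (neighbor, step_cost).
    h: dict mapping node -> heuristic estimate h(n) (assumed admissible).
    Returns (path, cost), or (None, inf) if the goal is unreachable."""
    frontier = [(h[start], 0.0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:                           # goal test on expansion, as with UCS
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                               # stale queue entry
        for neighbor, step in graph[node]:
            g2 = g + step
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

# Toy example with made-up numbers; h plays the role of a straight-line distance to 'D'.
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 5)], "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 4, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))   # (['A', 'C', 'D'], 5)
```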

Chapter 4: Beyond Classical Search

Amusingly enough, this chapter is about trying to extend the previous one to bring it closer to the "real world," and admittedly, yes, it is more like what most people would actually use. The first part of the chapter gives a very brief introduction to the field of optimization. Hill-climbing search is a local search algorithm that repeatedly moves in the direction of increasing utility value, and it is a more general version of the commonly used gradient-descent idea, which applies only in continuous domains. The main problem with these local algorithms is that they can get stuck in local minima (or maxima, depending on whichever is most convenient for the problem description), so one should run the algorithm multiple times with random starting positions. Alternatively (or in addition), one can use simulated annealing. The way to think about how that works is to imagine a local-minimum problem where we have a small ping-pong ball on a curvy, bumpy surface and are trying to get it to rest in the deepest crevice. The ball would easily get stuck in a shallow local minimum due to gravity, so simulated annealing is like "shaking" the surface enough to shoot the ball out of a small valley, but not out of the actual global minimum, which would be like a deep, giant pit. I find this analogy a lot easier to understand than most descriptions of simulated annealing, by the way.
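
As a rough sketch of that "shaking" intuition, here is simulated annealing on a one-dimensional made-up objective; the geometric cooling schedule and Metropolis-style acceptance rule are common default choices, not anything prescribed by the book.

```python
import math, random

def simulated_annealing(f, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f starting from x0.  Worse moves are accepted with probability
    exp(-delta / T), so early on the 'ball' can escape shallow valleys; as the
    temperature T cools, the search settles into a deep one."""
    x, fx, t = x0, f(x0), t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = candidate, fx + delta
        t *= cooling
    return x, fx

# A bumpy made-up surface with many local minima and its global minimum near x = 0.
bumpy = lambda x: x**2 + 2.0 * math.sin(5 * x)
print(simulated_annealing(bumpy, x0=3.0))
```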

The fact that we bring up gradient descent is interesting, because the search problems in Chapter 3 cannot handle continuous domains due to the infinite branching factor. Said another way, a human has an infinite number of ways to walk in a specific direction along the 360-degree circle; how would we design DFS to help a robot do that? One way to find solutions in continuous spaces is to discretize the problem, so going back to my "human walking" example, we might limit the search directions to be anywhere from 0 up to 30 degrees, anywhere from 30 up to 60 degrees, and so on. Another, of course, is to use the gradient and update the current state according to x ← x + α∇f(x). We could also have constrained optimization problems, of which the best-known and most easily solvable ones are of the linear programming variety.

Remember that in Chapter 3, the environment is assumed to be deterministic and fully observable. It is also interesting to ask how we would design an agent to search when actions are nondeterministic and when the agent has only partial or even no (!) observations of the world. The key idea is that we have to make use of the agent's percepts, which help inform it which state it is in, so that we can say things like "if X happens, do Y, else do Z." We can still design search trees and traverse them to reach the goal, but the trees have different flavors. In the nondeterministic case, a node needs edges to all the children that could result from an action. In the partial-observation case, the tree's nodes are belief states, so each node is actually a set containing the possible states the agent could be in, and gradually moving around in the search space narrows down that set. The book's vacuum cleaner example shows how even an agent with no observations can still tell which state it is in, given that it executes a specific sequence of actions and knows their consequences. In fact, a sensorless agent can be advantageous when it would be expensive to pin down an exact state, which is why doctors tend to prescribe a broad-spectrum antibiotic rather than perform a detailed analysis of the patient to decide on an incredibly specific drug.
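
To make belief-state search concrete, here is a tiny hypothetical sketch (the two-cell vacuum world, the state encoding, and the function names are made up for illustration): a belief state is just a set of possible physical states, prediction takes the union of possible outcomes, and a percept filters the set down.

```python
def predict(belief, action, results):
    """belief: frozenset of possible physical states.
    results(state, action): set of states the action could lead to
    (a singleton in the deterministic case).
    Returns the new belief state after acting without observing."""
    return frozenset(s2 for s in belief for s2 in results(s, action))

def update(belief, percept, consistent):
    """Filter the belief state by a percept: keep only the states the
    percept is consistent with (the 'if X happens, do Y' branch point)."""
    return frozenset(s for s in belief if consistent(s, percept))

# Made-up two-cell vacuum world: state = (robot_location, dirt_left, dirt_right).
def results(state, action):
    loc, dl, dr = state
    if action == "Right":
        return {("R", dl, dr)}
    if action == "Suck":                          # cleans whatever square it is on
        return {(loc, dl and loc != "L", dr and loc != "R")}
    return {state}

start = frozenset({("L", True, True), ("R", True, True)})  # sensorless: location unknown
print(predict(start, "Right", results))  # belief collapses: the agent is now on the right
```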

As an interesting side note, I was reading sections 4.3 and 4.4 and noticed the similarities between the graphs provided there and the finite automata I learned about in my undergraduate theoretical computer science course. We have notions of a state, a transition function, actions, and final (i.e., goal) states. Section 4.5, on online search, also uses these graphs. The nice thing about the book's organization is that the search algorithms from Chapter 3 can be applied to the graphs of Chapter 4, along with additional problem-specific restrictions.

Chapter 5: Adversarial Search

We spent a lot of time in my undergraduate AI course on adversarial search. This is like what we consider in Chapters 3 and 4, except that the agent is no longer alone, and its actions are in conflict with other agents. The simplest abstraction of adversarial search is a two-player game with a single overall game score. The players are named MAX and MIN because they wish to maximize and minimize, respectively, that score. In a normal search setting, a player named MAX (who by convention tends to move first) would just search for and follow a sequence of moves that reaches a terminal state. Unfortunately, MIN can prevent this in some cases. So the best strategy for MAX is the minimax algorithm, because it minimizes the worst-case scenario2. The easiest way to view this algorithm is to draw the game tree with its various scores: levels alternate between MAX and MIN, and MAX traces through the entire game tree, recursively backing up the best score it can get at each node assuming that MIN plays optimally.

The problem with minimax search is that nontrivial games have far too many possible states; the amount of work is exponential in the depth of the tree. By using alpha-beta pruning to prune away branches that cannot possibly affect the final decision, we can cut the exponent roughly in half and still return the same move that minimax would. For any given state at any point in the search, alpha is the value of the best choice found so far along the path for MAX, and beta is the value of the best choice found so far along the path for MIN. Alpha starts at negative infinity and tries to go up, whereas beta starts at positive infinity and tries to go down. It's easiest to see how this works by tracing through some game trees.
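
Here is a compact depth-limited minimax with alpha-beta pruning, sketched against a generic two-player game interface; the game object and its methods (terminal, evaluate, actions, result) are assumptions for illustration, not the Pacman project's API, and the depth cutoff anticipates the discussion a bit further below.

```python
def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Depth-limited minimax with alpha-beta pruning.  Returns (value, best_action).
    `game` is assumed to expose terminal(state), evaluate(state) (a heuristic
    evaluation function used at the cutoff), actions(state), and result(state, a)."""
    if depth == 0 or game.terminal(state):
        return game.evaluate(state), None
    best_action = None
    if maximizing:
        value = float("-inf")
        for a in game.actions(state):
            child, _ = alphabeta(game, game.result(state, a), depth - 1, alpha, beta, False)
            if child > value:
                value, best_action = child, a
            alpha = max(alpha, value)
            if alpha >= beta:        # MIN will never let the game reach this branch
                break
    else:
        value = float("inf")
        for a in game.actions(state):
            child, _ = alphabeta(game, game.result(state, a), depth - 1, alpha, beta, True)
            if child < value:
                value, best_action = child, a
            beta = min(beta, value)
            if alpha >= beta:        # MAX already has a better option elsewhere
                break
    return value, best_action
```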

I remember these algorithms well because my undergraduate AI course used the Berkeley Pacman assignments3, which involved heavy use of minimax search and alpha-beta pruning. I remember that our problem involved four agents: Pacman (us) and three ghosts who wanted to eat him. With more than two players, we can associate a vector of values with each node, and I think that's what we did in the assignment, since the description says:

Now you will write an adversarial search agent in the provided MinimaxAgent class stub in multiAgents.py. Your minimax agent should work with any number of ghosts, so you’ll have to write an algorithm that is slightly more general than what you’ve previously seen in lecture. In particular, your minimax tree will have multiple min layers (one for each ghost) for every max layer.

If you've checked the AI project description, you'll also see that we only run the minimax algorithms to a limited depth, sometimes as small as just two layers. This is necessary due to the exponential explosion in the number of states in the Pacman maze. Another way to speed up the search (but again, at the cost of optimality) is to treat nonterminal nodes at a given level as terminal nodes and create a heuristic evaluation function for their values. (Yes, this is very similar to Chapter 3 material!) After all, this is what humans do when they play games. I can't calculate 20 moves ahead in a chess game, but I can reason that capturing my opponent's queen, while not putting any of my pieces in danger in the process, will have a higher utility for me.

One can also use minimax algorithms in games involving chance, which means the game tree has chance nodes in addition to the normal MAX and MIN nodes. To make correct decisions here, we have to change our analysis to consider expected values.
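
For chance nodes, the backed-up value is an expectation rather than a max or min; a minimal expectimax sketch over a hand-rolled tree (the tree encoding is made up) might look like this:

```python
def expectimax(node):
    """node is ('max', children), ('chance', [(prob, child), ...]), or ('leaf', value).
    MAX nodes take the best child; chance nodes take the expected value."""
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "max":
        return max(expectimax(child) for child in payload)
    return sum(p * expectimax(child) for p, child in payload)     # chance node

tree = ("max", [
    ("chance", [(0.5, ("leaf", 10)), (0.5, ("leaf", -10))]),      # expected value 0
    ("chance", [(0.9, ("leaf", 2)),  (0.1, ("leaf", 3))]),        # expected value 2.1
])
print(expectimax(tree))   # ~2.1: the second move is better in expectation
```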

Chapter 6: Constraint Satisfaction Problems

Now we'll switch gears and focus on problems that have more sophisticated notions of a "state." The reason for doing this is that algorithms like DFS, BFS, etc., treat states as black boxes; there is no domain-specific part to those search algorithms4. With constraint satisfaction problems (CSPs), we represent each state as a set of variables X_1, …, X_n, and a problem is solved when each variable has a value that satisfies all the constraints imposed by the problem formulation. The example used in the book is about coloring the seven regions of Australia so that no two bordering regions have the same color. To formulate it as a CSP (a minimal encoding in code follows the list), we

  • define seven variables to be the seven regions
  • define the domain for each variable, which consists of three colors for us
  • define the constraints, which means listing all the color inequalities between bordering regions
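
Here is one minimal way to write that formulation down in code; the region names and adjacency list follow the book's Australia map, and the backtracking sketch later in this post reuses this encoding.

```python
# Variables: the seven regions; domains: three colors; constraints: adjacent
# regions must have different colors.
VARIABLES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
DOMAINS = {v: ["red", "green", "blue"] for v in VARIABLES}
NEIGHBORS = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],                      # Tasmania borders nothing
}

def consistent(var, value, assignment):
    """Binary inequality constraints: no assigned neighbor may share our color."""
    return all(assignment.get(n) != value for n in NEIGHBORS[var])
```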

Just to be clear, why do we want to use CSPs? Here are a few reasons:

  • It is nice to have a single solver for a CSP. We can then solve a problem by converting it to a CSP, and then running our CSP solver. This is what a lot of theorists do when they reduce problems to known ones.
  • There is no need to develop a detailed, problem-specific heuristic.
  • CSPs can eliminate large portions of the search space all at once by quickly identifying variable assignments that violate constraints.

It's worth discussing that last point in more detail. In the search problems of previous chapters, all our algorithms could do was search. With a CSP, we can also perform a kind of inference called constraint propagation, which uses the constraints to reduce the number of legal values for variables. As the book delightfully points out, Sudoku is a problem that has "introduced millions of people to constraint satisfaction problems." Constraint propagation is obvious here: if I see a column that has every value other than 3 and 7 filled in (i.e., two empty squares), I can pick one of those empty squares and constrain its possible values from nine down to two. If I then see that the square lies in a row that already has a 3 in it (but not a 7, unless something went wacky), then that further constrains the value to be 7 … and I will obviously put a 7 there.

But while Sudoku problems can be solved by inference over constraints, sometimes we just have to search for a solution, and here is where backtracking search comes into play. This is a depth-first search that assigns values to variables one at a time, and if it reaches a point where a variable has no legal values left to assign, then the current partial assignment cannot be completed, so it backtracks to an earlier variable and tries a different assignment. Because variable assignments are commutative (the order in which we make them does not change the final solution), we only need to consider a single variable at each node of the search tree, which cuts down on the branching factor.

To make backtracking search more efficient without using problem-specific knowledge, we should decide on solid heuristics for the following:

  • What is the order in which we should choose to assign variables, and in what order should the possible assignments be done?
  • What inferences should be performed at each step in the search?
  • When the search arrives at an assignment that violates a constraint, how can we avoid repeating this failure?

For choosing the variable ordering, one option is the minimum-remaining-values (MRV) heuristic: choose the variable that has the fewest legal values left, because then we can detect failure more quickly. Once we have chosen a variable and need to assign it a value from its remaining possibilities, it actually makes sense to follow the least-constraining-value heuristic: choose the value that rules out the fewest choices for the neighboring variables in the constraint graph (a graphical representation of the CSP), because it (ideally) leaves open the possibility of more solutions down the road. So: most constrained when choosing a variable, and least constraining when assigning a value to that chosen variable.

To perform inference, we can do forward checking after we assign a variable: for each unassigned variable adjacent to the newly assigned one in the constraint graph, delete any value that is inconsistent with the new assignment, which establishes arc consistency for those arcs. However, forward checking will miss some inconsistencies, for example right after we have assigned a variable a color in a problem where we have to two-color the graph, because it can't reason about arcs that don't directly include the currently assigned variable.
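
Putting the pieces together, here is a rough backtracking solver with MRV variable ordering and forward checking, reusing the Australia encoding sketched earlier (least-constraining-value ordering and backjumping are omitted to keep it short):

```python
import copy

def backtrack(assignment, domains):
    if len(assignment) == len(VARIABLES):
        return assignment
    # MRV: pick the unassigned variable with the fewest legal values remaining.
    var = min((v for v in VARIABLES if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        if not consistent(var, value, assignment):
            continue
        assignment[var] = value
        pruned = copy.deepcopy(domains)
        pruned[var] = [value]
        # Forward checking: remove this value from each unassigned neighbor's domain.
        ok = True
        for n in NEIGHBORS[var]:
            if n not in assignment and value in pruned[n]:
                pruned[n].remove(value)
                if not pruned[n]:          # a neighbor has no legal values left
                    ok = False
                    break
        if ok:
            result = backtrack(assignment, pruned)
            if result is not None:
                return result
        del assignment[var]                # undo and try the next value
    return None

print(backtrack({}, copy.deepcopy(DOMAINS)))   # one valid three-coloring of Australia
```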

When we violate a constraint, we can backtrack one step up in the DFS tree as in normal DFS (chronological backtracking), but that tends to work poorly: if we have an inconsistency, the real mistake may have been made much earlier in our sequence of variable assignments, so we want to jump back further than the most recent decision point. We can design a backjumping method by maintaining a conflict set for each variable, in other words, the set of earlier assignments that are in conflict with that variable's remaining values. The backtracking process then jumps back to the most recent assignment in the conflict set. However, forward checking already supplies the conflict set (check this yourself!), and so "simple" backjumping as just described is redundant in a search that uses forward checking or arc consistency.

Instead, we can use the more sophisticated conflict-directed backjumping. Instead of backjumping once we detect a failure based on conflict sets, we should backtrack all the way before that to the point where the branch “gets doomed.” Clearly, this is a more challenging task, and we do this by redefining what it means to be a conflict set: for a variable, its conflict set is the set of preceding variables that caused this one, together with any subsequent variables, to have no consistent solution. These conflict sets are computed by an ingenious method of “absorbing” other nodes’ conflict sets.

Once we have our constraint graph, we can also apply some local search techniques from Chapter 4 (e.g., simulated annealing) to CSPs.

The previous material is very general, but honestly, if you look at a problem and can figure out something that is "obviously" going to make it easier, do it! In the Australia coloring problem, Tasmania is not connected to the mainland, so it obviously should not have been part of the original coloring problem at all; splitting the graph into connected components would have been a smart tactic. Another way to make a problem easier is to reduce its constraint graph to a tree, because any tree-structured CSP can be solved in time linear in the number of variables. We can assign values to a few variables so that the remaining ones form a tree, or we can do a more sophisticated tree decomposition, where each node is now a set of variables (a variable can belong to multiple nodes) and represents its own subproblem.

Part III: Knowledge, Reasoning, and Planning

This part of the book is a little dry. It is about how one can design "languages," or various formalisms, that let agents reason and plan about the world by drawing on a knowledge base. In my undergraduate AI course, we barely covered this part at all, and only towards the last week of class, when attendance was half the normal level because we didn't have a final exam. I'm not sure how important this part is to AI research nowadays, since AI tends to be treated as synonymous with machine learning these days. But maybe it shows up in some parts of robot motion planning?

Nevertheless, I still decided to read the entirety of it as there might be some important stuff here.

Chapter 7: Logical Agents

In which we design agents that can form representations of a complex world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do.

This somewhat boring chapter5 introduces the class of logic known as propositional logic, which lets agents represent the world through a series of statements and provides inference techniques for drawing conclusions. This is an upgrade over the agents in previous chapters. Why? When we tell Pacman to perform DFS to determine where it should move in the game, Pacman doesn't really know anything about the game. A human can deduce a number of facts from the world, such as that Pacman should avoid heading toward dead ends when a ghost is behind it and there is no power-up available there, but from the point of view of the DFS search agent, that knowledge is irrelevant. Said another way, search agents only know the world in a very narrow, inflexible sense, and they can't make real conclusions. They cannot reason. Constraint satisfaction problems alleviate this knowledge block by changing the representation of states from atomic to a set of variables, which allows for more efficient inference techniques (arc consistency, etc.), so there is a little bit of reasoning going on. Here, in Chapter 7, we take another step by representing the world not through a set of states and variables, but through logic. Remember that throughout this chapter (and the subsequent chapters), the overall theme is representation.

A few terms are in order to review:

  • A knowledge base (KB) is the central component of our agents and contains the set of sentences (each represented in a specific syntax) that are known to the agent. A logic is monotonic if the set of entailed sentences only increases as new information is added; otherwise, it would be as if the knowledge base were changing its mind.
  • But wait, what is entailment? It is the idea that a sentence follows logically from another sentence. By writing α ⊨ β, we state that α entails β, so that in every model in which α is true, β is also true. The relation implies that α is a stronger assertion than β (check this yourself).
  • An inference algorithm is one that uses existing logical sentences and derives logically valid conclusions from them. If our knowledge base contains A ⇒ B and A, then we should be able to conclude somehow that B holds. Algorithms that are sound derive only entailed sentences (this is a good thing), and algorithms that are complete can derive any sentence that is entailed by the KB. The slowest complete inference algorithm (assuming finite spaces) is model checking, which enumerates all models to check for entailment (a tiny sketch of this enumeration follows the list). This is not a scalable solution.
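
The enumeration idea fits in a few lines; in this sketch, sentences are encoded as plain Python functions of a model (a dict from symbol names to booleans), which sidesteps writing a parser. It is a toy version of the idea, not the book's TT-Entails pseudocode.

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """kb, alpha: functions taking a model (dict symbol -> bool) and returning a bool.
    Enumerate every model over `symbols`; entailment holds if alpha is true in
    every model where kb is true.  Exponential in the number of symbols."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False
    return True

# KB: (A implies B) and A.   Query: B.   (Made-up symbols.)
kb = lambda m: ((not m["A"]) or m["B"]) and m["A"]
alpha = lambda m: m["B"]
print(tt_entails(kb, alpha, ["A", "B"]))   # True
```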

Propositional logic includes the following:

  • atomic sentences, which consist of a single symbol
  • the not connective (¬) to negate an expression
  • the and connective (∧) to join two expressions together (producing a conjunction)
  • the or connective (∨) to join two expressions together (producing a disjunction)
  • implies relationships: α ⇒ β
  • if and only if relationships: α ⇔ β

The semantics of these relationships are what one would expect, i.e., directly based on your discrete math or math logic class.

How do we use these facts to perform inference, and to do it efficiently6? In other words, the ultimate question is deciding whether KB ⊨ α (equivalently, whether the sentence KB ⇒ α is valid) for some sentence α. For this, we do some theorem proving. An important rule is Modus Ponens, which states that whenever α ⇒ β is true and α is true, then β has to be true. (This makes sense because α ⇒ β is only false when α is true but β is false.) There are a few others, such as and-elimination, which reduces α ∧ β to (without loss of generality) α, but I generally apply these rules by directly appealing to what I remember about logic rather than by trying to remember rule names and their exact syntax. This must be why I hate resolving logic by hand.

A rule like Modus Ponens is sound but incomplete. For a complete rule, we want to use resolution7, which operates on pairs of clauses and cancels complementary literals that cannot contribute to the resulting truth value. If we have (A ∨ B) ∧ (¬B ∨ C), then resolving the two clauses yields A ∨ C, because B and ¬B are complementary: they cannot both force their respective clauses to be true. This is resolution, and it applies to pairs of arbitrarily long clauses ANDed together. The key fact is that a resolution-based theorem prover can, for any sentences α and β in propositional logic, decide whether α ⊨ β. Why? (A small code sketch of the resolution step itself follows this list.)

  • Every sentence of propositional logic is logically equivalent to a conjunction of clauses, or said another way, every sentence can be converted into CNF. This is important for resolution because the rule relies on having disjunctions of literals. (Again, a sentence is in CNF if it is an AND of ORs, and a clause is a disjunction of literals.)
  • Resolution-based theorem provers work by contradiction. To show that KB ⊨ α, we show that KB ∧ ¬α is unsatisfiable. Starting with that sentence in CNF, we apply the resolution rule to pairs of clauses to produce (potentially) new clauses. We continue until either no new clauses can be added (in which case KB does not entail α) or two clauses resolve to yield the empty clause (in which case KB ⊨ α).
  • Termination of the above procedure follows because there are only finitely many distinct clauses over the finite set of symbols in the knowledge base, so long as redundant copies of literals are removed (by factoring) throughout the clause-formation process. The proof of completeness for resolution is the ground resolution theorem.
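
The core resolution step can be sketched roughly like this, with clauses represented as frozensets of string literals ('B' and '-B' are complementary); it is a toy helper, not a full prover, which would also convert KB ∧ ¬α to CNF and loop until the empty clause appears or no new clauses can be added.

```python
def negate(lit):
    """'B' <-> '-B'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(ci, cj):
    """Return every clause obtainable by resolving ci with cj on one
    complementary pair of literals.  Clauses are frozensets of literals."""
    resolvents = []
    for lit in ci:
        if negate(lit) in cj:
            resolvents.append((ci - {lit}) | (cj - {negate(lit)}))
    return resolvents

# (A or B) and (not B or C) resolve on B / -B to give (A or C).
print(resolve(frozenset({"A", "B"}), frozenset({"-B", "C"})))   # one resolvent: {A, C}
```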

While resolution is complete, sometimes we do not need its full power, or it is too slow. A Horn clause is a disjunction of literals of which at most one is positive, and if all our clauses are of this form, we can perform more efficient inference using forward-chaining and backward-chaining algorithms. Forward chaining starts with the known facts, draws whatever conclusions it can (e.g., using Modus Ponens), and propagates the inference through the AND-OR graph; backward chaining does this in reverse. These algorithms decide entailment in linear time. And they are easy to describe to humans. Yay. Of course, we need Horn clauses for them to apply.
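
A minimal propositional forward-chaining sketch for definite clauses, where each rule is a list of premise symbols plus a conclusion (the symbols A, B, C, D are made up):

```python
from collections import deque

def forward_chain(facts, rules, query):
    """facts: set of known symbols.  rules: list of (premises, conclusion) pairs,
    i.e. definite clauses p1 and ... and pn => q.  Returns True if `query`
    becomes derivable."""
    count = {i: len(premises) for i, (premises, _) in enumerate(rules)}
    inferred = set(facts)
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1                       # one fewer unsatisfied premise
                if count[i] == 0 and conclusion not in inferred:
                    inferred.add(conclusion)
                    agenda.append(conclusion)
    return query in inferred

rules = [(["A", "B"], "C"), (["C"], "D")]
print(forward_chain({"A", "B"}, rules, "D"))   # True
```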

Moving away from resolution and back to model checking (remember how inefficient that is?), we can devise several improvements, such as backtracking search and hill-climbing search. Backtracking search here is a depth-first enumeration of possible models with early termination, a pure-symbol heuristic, and a unit-clause heuristic. Hill-climbing search is a seemingly crazy way of doing inference: it randomly picks an unsatisfied clause and flips a symbol in it. Obviously, this may go on forever if we get unlucky in our draws, but if we do find a satisfying assignment, then we know for sure that one actually exists!

Chapter 8: First-Order Logic

To design an agent based on propositional logic, as in the Wumpus world, one has to perform cumbersome steps to take care of variables representing the same world object at different times. The next upgrade of logic, known as first-order logic, alleviates this nuisance because of existential (∃) and universal (∀) quantifiers. The following rule:

∀x King(x) ⇒ Person(x)

means "For all x, if x is a king, then x is a person." More formally, first-order logic assumes that the world consists of facts, objects, relations, and functions, while propositional logic only assumes the world consists of facts (or propositions). As for syntax, constant symbols represent objects, predicate symbols represent relations, and function symbols stand for functions. Functions are a special type of relation in which each input is related to exactly one value (the standard way we think of functions). In the above rule, x is a variable; terms with no variables are called ground terms.

A model in first-order logic consists not only of objects, but also of interpretations of each constant, predicate, and function symbol. If our world consists of three constant symbols and two objects, there are multiple ways we could map those symbols: all three could name the first object, or one could name the first object while the other two both name the second, and so on. Due to the number of ways one could assign symbols to objects or change the definition of a relation, model checking for entailment (which must consider all possible models) is much slower in first-order logic than in propositional logic.

Going back to universal quantification, we say that ∀x P is true in a given model if P is true in all possible extended interpretations constructed from the interpretation given in the model. This is a fancy way of saying that if our model has three objects (e.g., Richard the Lionheart, King John, and Richard's left leg), then we had better be able to plug each of those objects in as the variable x in the rule and have the resulting statements be true. For existential quantifiers, we just need the statement to be true in at least one extended interpretation for ∃x P to be true.

First-order logic also includes equality, which is convenient when we need two variables to be unequal. Consider the following rule:

∃x ∃y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y)

This states that Richard has at least two brothers. If we removed the ¬(x = y) part, then we could assign x and y to be the same person. But even if x and y referred to two different names (e.g., Daniel and Darius), those names could still refer to the same symbol/object. Thus, to make things easier for our brains, we will follow the unique-names assumption. We can also invoke the closed-world assumption, in which atomic sentences not known to be true are assumed false.

Chapter 9: Inference in First-Order Logic

One way to perform inference for first-order logic is to convert a first-order knowledge base to a propositional one, and then apply the propositional inference algorithms from Chapter 7. (Yes, I know you can tell that this will be crazily inefficient, but it might be useful to see how that works.) There are two techniques that help:

  • Universal instantiation means we can substitute a ground term for a variable in a universally quantified rule. The rule is that if ∀v α is true, then so is Subst({v/g}, α), where g is the ground term8.
  • Existential instantiation means that in an existentially quantified rule, we can replace the variable with a single new constant symbol that does not appear elsewhere in the knowledge base. If ∃v α is true, then so is Subst({v/k}, α), where k is that new symbol. The new symbol is a Skolem constant and is part of a general process called skolemization.

These two methods help us to discard universal and existential quantifiers, respectively. (The former requires us to make many new rules; the latter requires only one new rule.) There's more to this technique of propositionalization, but the point is that we can transform first-order inference queries into propositional form while preserving entailment. Unfortunately, the question of entailment is semidecidable: we will be able to prove entailment for every entailed sentence, but we cannot refute entailment (in layman's terms, "say no") for every non-entailed sentence. The reason for this is that functions can construct infinitely many ground-term substitutions, and we found out earlier that propositional inference algorithms (i.e., resolution) terminate precisely because we are guaranteed to have finitely many terms.

Despite this somewhat sorrowing news, there is better news to be had with regards to how efficiently we can “propositionalize.” For this, there are two techniques we can draw from: Generalized Modus Ponens and Unification.

  • Generalized Modus Ponens is an inference rule which states that for atomic sentences p_i, p_i', and q, if there is a substitution θ such that Subst(θ, p_i') = Subst(θ, p_i) for all i, then if the following premises hold:

    p_1', p_2', …, p_n'  and  (p_1 ∧ p_2 ∧ ⋯ ∧ p_n ⇒ q)

    then the conclusion is that Subst(θ, q) is true. So what does this mean in English? The conclusion is q after we have applied the substitution θ that made each p_i' match its corresponding p_i. This is helpful when the p_i are rule premises containing variables (e.g., King(x)) and the p_i' are knowledge-base sentences (e.g., King(John)). By applying Generalized Modus Ponens with an appropriate substitution (like {x/John}), we can avoid the unnecessary extended interpretations of the universally quantified rule.

    Before moving on, it's worth connecting this rule to Modus Ponens from Chapter 7. It's obvious that there are some similarities: we make the conclusion q, which is the result of an implication p_1 ∧ ⋯ ∧ p_n ⇒ q. But why is this called the generalized version? It's because we "generalize" this rule from propositional logic to first-order logic by introducing variables and substitutions, which we know are not present in propositional logic. The book uses the term lifted for this, but it seems a bit arbitrary to me.

  • Our next rule relates to the previous one. Remember that we have to find a θ that makes Subst(θ, p_i') = Subst(θ, p_i), and doing so naively requires a lot of comparisons. Unification is the hugely important process of making different logical expressions have identical meanings. For two sentences p and q, Unify(p, q) returns θ, a substitution for their variables that makes them identical, if one exists.

    Unification may require standardizing apart variables to avoid name clashes. Also, more than one unification may be possible for a pair of statements, so it is logical to pick the one that places the fewest restrictions on the variables (the most general unifier).

    A naive (but sound) algorithm for unification recursively explores the two expressions and builds up a unifier along the way, but it has complexity quadratic in the size of the expressions being unified. More sophisticated unification algorithms can run in linear time. (A toy version of the recursive algorithm is sketched below.)
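
That naive recursive unifier might look roughly like the following; the conventions here (variables as lowercase strings, compound terms as tuples, no occurs-check) are made up for the sketch and are not Prolog's algorithm.

```python
def is_var(t):
    """Variables are lowercase strings like 'x'; constants like 'John' are capitalized."""
    return isinstance(t, str) and t[:1].islower()

def unify(x, y, theta=None):
    """Return a substitution (dict: variable -> term) that makes x and y identical,
    or None if no unifier exists.  Compound terms are tuples, e.g. ('Knows', 'John', 'x').
    No occurs-check, to keep the sketch short."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def unify_var(var, term, theta):
    if var in theta:
        return unify(theta[var], term, theta)
    if is_var(term) and term in theta:
        return unify(var, theta[term], theta)
    return {**theta, var: term}

# Unify Knows(John, x) with Knows(y, Mother(y)):
# returns {'y': 'John', 'x': ('Mother', 'y')}, i.e. y/John and x/Mother(y),
# so x ultimately stands for Mother(John).
print(unify(("Knows", "John", "x"), ("Knows", "y", ("Mother", "y"))))
```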

Let us now briefly discuss three families of first-order inference algorithms: forward chaining, backward chaining, and resolution. These should be familiar from propositional logic, since all we are doing here is extending them to fit in the framework of a first-order logic system, but it is important to understand where exactly the extensions occur. We will clearly have to use rules like Generalized Modus Ponens and Unification here.

Forward chaining in first-order logic applies Generalized Modus Ponens repeatedly to add more atomic sentences to the knowledge base until no further inferences are possible. This is similar to propositional logic, where forward chaining would repeatedly apply Modus Ponens. But remember how propositional forward chaining required Horn clauses? In first-order logic, forward chaining requires first-order definite clauses, which are disjunctions of literals of which exactly one is positive. Many (but not all) knowledge bases can be converted into a set of definite clauses, which acts as a preprocessing step. Then, as stated earlier, we apply Generalized Modus Ponens, ideally until we've answered our query or reached a fixed point. Again, it's similar to the propositional logic version, except that here we include universally quantified atomic sentences. Forward chaining is sound and complete for definite clauses, but entailment remains semidecidable.

There are three sources of inefficiency in the naive forward-chaining algorithm: (1) unification involves searching through too many sets of facts in the knowledge base, (2) the algorithm rechecks every rule on every iteration to see whether its premises are satisfied, and (3) the algorithm may generate facts that are irrelevant to the goal.

Backward chaining in first-order logic means we work backward from the goal: we look for rules whose conclusion unifies with the goal and then try to satisfy the premises on the left-hand side, which form a list of conjuncts that must all be proved with consistent substitutions, possibly requiring additional backtracking. The naive backward-chaining algorithm is a depth-first search, so it suffers from some standard problems with DFS (e.g., lack of completeness) that forward chaining avoids.

Backward chaining is used in logic programming, a technology in which systems are constructed and draw conclusions using processes similar to those of first-order logic. Prolog is an example of a logic programming language, but it is incomplete as a theorem prover for definite clauses. To avoid redundant computation, backward chaining should memoize solutions to subproblems.

We can extend resolution from propositional logic to create a complete inference procedure for first-order logic. As before, the first step is to convert the first-order sentences into inferentially equivalent CNF sentences, which is always possible and forms the basis for subsequent proof-by-contradiction resolution procedures. The conversion process isn't too difficult, though we need to eliminate existential quantifiers via skolemization (briefly mentioned earlier), which may involve creating Skolem functions to capture how an existentially quantified variable depends on the universally quantified ones.

The resolution inference rule is a generalization of the propositional resolution rule to handle variables. Two clauses that have been standardized apart (i.e., share no variables) can be resolved if they contain complementary literals, and the complementary pair is dropped from the resolvent because it cannot affect the outcome. In first-order logic, complementary literals are those in which one unifies with the negation of the other; remember that in propositional logic, we just had to worry about straightforward negations.

Resolution is refutation-complete in the following sense: if S is an unsatisfiable set of clauses, then applying a finite number of resolution steps to S will yield a contradiction (the empty clause). Resolution cannot, however, generate all logical consequences of a set of sentences.

Chapter 10: Classical Planning

This chapter introduces a representation for planning problems in single-agent, deterministic, observable environments that scales far better than the earlier search agents of Chapter 3 and the logical agents of Chapter 7. As a step toward analyzing and standardizing such representations, AI researchers have introduced PDDL, the Planning Domain Definition Language. It can describe the things we need to define a search problem:

  • the initial state, or states in general, which are conjunctions of ground (i.e., no variables or functions) atoms called fluents.
  • the actions, which have variables, preconditions, and effects. They are only applicable in a state if the state satisfies the preconditions.
  • the list of actions at each state, and the result of applying each action.
  • the goal test, which is again a conjunction of literals, though these may contain variables. The goal is to find a sequence of actions that lead to a goal state.

There are several straightforward examples of PDDL in the book. They are intuitive descriptions of various problems written in a structured framework, though there is some trickery involved (e.g., with inequalities).
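
As a hypothetical, stripped-down rendering of the idea (not real PDDL syntax, which has variables and a richer action schema), an already propositionalized action with preconditions, add effects, and delete effects might look like this in code; the fluent strings are made up:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # fluents that must hold for the action to be applicable
    add_effects: frozenset     # fluents the action makes true
    del_effects: frozenset     # fluents the action makes false

def applicable(state, action):
    return action.preconditions <= state

def result(state, action):
    return (state - action.del_effects) | action.add_effects

# Made-up, already-propositionalized blocks-world-ish fluents.
move_a_onto_b = Action(
    name="Move(A, Table, B)",
    preconditions=frozenset({"On(A, Table)", "Clear(A)", "Clear(B)"}),
    add_effects=frozenset({"On(A, B)"}),
    del_effects=frozenset({"On(A, Table)", "Clear(B)"}),
)
state = frozenset({"On(A, Table)", "On(B, Table)", "Clear(A)", "Clear(B)"})
if applicable(state, move_a_onto_b):
    print(result(state, move_a_onto_b))
```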

PDDL maps planning problems to search problems, and we can solve these by searching forward or (you must know what's coming…) backward through states. Forward search needs heuristics, because as stated earlier, the branching factor is too large to apply one of the Chapter 3 or 4 search algorithms directly. Backward search avoids many irrelevant states, and PDDL makes it easy to represent the backward (regression) step, but it requires reasoning about sets of states and does not lend itself to easy heuristics.

To get an admissible heuristic, we can relax the problem to make it easier and use the cost of a relaxed solution as the estimate for the original problem. The corresponding search graph has states as nodes and actions as edges. Some ideas for heuristics:

  • Ignore preconditions, so every action becomes applicable in every state.
  • Ignore delete lists, applicable if all goals and preconditions are positive literals. This removes all negative literals from all actions, and the problem is easier now because no action will undo progress made by another action.
  • To reduce the number of states, ignore some of the fluents.
  • Assume subgoal independence, so if the goal is a conjunction G_1 ∧ ⋯ ∧ G_n, take as the estimate the maximum estimated cost over all the G_i, or sum up the estimated costs for each subgoal (note: the sum is inadmissible).

A special data structure called a planning graph can provide better admissible heuristics than the ones previously suggested; we build the graph, and then search over it (this search is called "GraphPlan" in the book). The planning graph is a directed, leveled graph of literals and actions, and it is a polynomial-sized approximation to the exponential-sized tree that consists of all the possible paths taken from the starting state, with the goal of finding a path to the goal state. It consists of alternating state levels S_0, S_1, … and action levels A_0, A_1, …. A state level S_i contains the literals that might be true (since it's an approximation) at that point9, and the action level A_i contains the actions whose preconditions are satisfied by literals in the previous state level. Literals that show up later in the graph (i.e., farther away from the starting state) are "harder" to achieve, so the true states that contain those literals are "harder" to reach.
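
A delete-relaxed version of this idea (ignore delete lists, no mutexes) is easy to sketch for computing level costs, reusing the Action representation from the Chapter 10 sketch above; it is only an approximation of the planning graph, not GraphPlan itself.

```python
def level_costs(initial_facts, actions):
    """Return {fluent: first level at which it can appear} in a relaxed planning
    graph that ignores delete effects (so the fact layers only ever grow)."""
    costs = {f: 0 for f in initial_facts}
    facts, level = set(initial_facts), 0
    while True:
        new_facts = set()
        for a in actions:
            if a.preconditions <= facts:
                new_facts |= a.add_effects - facts
        if not new_facts:                  # the graph has leveled off
            return costs
        level += 1
        for f in new_facts:
            costs[f] = level
        facts |= new_facts

# Example with the earlier made-up action: level_costs(state, [move_a_onto_b])
# assigns level 0 to the initial fluents and level 1 to "On(A, B)".
# As the text notes, taking the max level cost over the goal literals keeps the
# estimate admissible, while summing them generally does not.
```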

A key property of state and action layers is that they contain mutual exclusion or mutex links between literals and actions, respectively. A mutex relation holds between two actions if they have inconsistent effects, interference, or competing needs. A mutex relation holds between two literals if one is the negation of the other, or if the pairs of actions that could lead to those literals are mutex.

Now that we’ve constructed a planning graph, how do we use it? As stated earlier, we can estimate the cost of achieving any goal literal as the level in which it first appears in the planning graph, i.e., the level cost. If the goal state is a conjunction of literals, which is the normal case anyway, then some ideas are:

  • Take the maximum level cost over all literals. This is admissible.
  • Take the sum over all literals. This is not admissible, though it might work if the individual literals are reasonably independent of each other.
  • Take the level cost of the first level in which all the goal literals appear in the planning graph without any pair of them being mutex. This is admissible and also clearly dominates the maximum-level-cost heuristic, because the level it returns must contain all the literals.

As an alternative, we can run the GraphPlan algorithm to search for a plan directly on the planning graph. The algorithm repeatedly adds levels to the planning graph and tries to extract a solution once all the goals show up as non-mutex in some state level. If the graph levels off (consecutive levels stop changing) without a solution being found, the algorithm returns failure.

The downside of planning graphs is that they only work for propositional planning problems, though if we wanted, we could obviously convert a first-order logic encoding of a plan into propositional logic. They also fail to detect unsolvable problems with three-way mutex relations but no two-way mutex relations.

There are a few other classical planning paradigms:

  • We can treat this as a theorem-proving problem by transforming the PDDL description into a form that can be processed by a SAT solver. This step involves propositionalizing the actions and the goal, as well as adding in more axioms to handle successor states and mutual exclusions.
  • We can use first-order logical deduction rather than PDDL. Rather than tie time directly to fluents, we can use situation calculus and create new rules to apply to our states. The downside of this approach is that it’s hard to get good heuristics.
  • We can transform the problem of finding a plan of bounded length k into a constraint satisfaction problem (from Chapter 6), similar to the encoding for a SAT problem.
  • We can also create partially ordered plans, which are useful for problems with independent subproblems. We can create such plans by searching through the space of plans rather than the state space. Unfortunately, partial-order planning doesn't represent states easily, and it fell out of favor (after the 1990s), replaced by planners that search through states with strong heuristics.

Chapter 11: Planning and Acting in the Real World

This rather amusingly-titled chapter now brings us into the "real world" by requiring that our agent representation handle not only planning but also a changing environment. Here are some of the new assumptions we make about our world:

  • We may have to deal with time and resource constraints.
  • We may have to organize plans in a hierarchical fashion.
  • We may have to handle nondeterminism and uncertainty in our environment.
  • We may have to deal with multiple, competing agents.

Let’s briefly discuss how we design an agent to handle these four cases.

To deal with time and resource constraints, we augment the language of the problem formulation to include amounts of certain resources (e.g., Inspectors(9) would mean there are nine inspectors available in a car inspection problem), as well as Consume and Use keywords to indicate whether a resource is used up or becomes available again after use (inspectors, for instance, are usually available again afterward). We can represent the problem with a directed graph that obeys the time and resource constraints, and then find the critical path through the graph, which is the path with the longest total duration and thus the "limiting factor" in the schedule. A heuristic for building a minimum-duration schedule: on each iteration, schedule the action that has all of its predecessors satisfied and the least amount of slack.
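
The critical-path computation itself is just a longest-path pass over the dependency DAG; here is a rough sketch with made-up jobs and durations (loosely in the flavor of the book's car-assembly example):

```python
def critical_path(durations, deps):
    """durations: {job: duration}; deps: {job: [jobs that must finish first]}.
    Returns (earliest finish time of the whole schedule, {job: earliest start}).
    Assumes the dependency graph is acyclic."""
    earliest = {}
    def start_time(job):
        if job not in earliest:
            earliest[job] = max((start_time(d) + durations[d] for d in deps.get(job, [])),
                                default=0)
        return earliest[job]
    for job in durations:
        start_time(job)
    finish = max(earliest[j] + durations[j] for j in durations)
    return finish, earliest

durations = {"AddEngine": 30, "AddWheels": 30, "Inspect": 10}
deps = {"Inspect": ["AddEngine", "AddWheels"]}
print(critical_path(durations, deps))   # (40, ...): Inspect can only start at t = 30
```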

In order for AI systems to think the way humans do, they will need to make plans at a higher level of abstraction by forming hierarchies. The classic example is organizing a plane trip to Hawaii. The high-level plan is: take the BART to the airport, find the gate, and so on. Humans do not think: first, open the door carefully, then take seven paces to the right, then walk down this way for 500 steps, then put the ticket inside the BART gate, etc. That kind of detail would lead to too much combinatorial explosion in AI systems. So AI agents must use high-level actions (HLAs) that can be refined later.

One way to solve such hierarchical problems is to start with one or more HLAs that solve the problem and then refine the plan (e.g., in a BFS fashion) until we get a sequence of primitive actions that accomplishes the goal. This can be substantially faster than plain BFS over the space of primitives. To get an even greater (potentially exponential) speedup, what we would like to do is search only through the space of HLAs, and once we find a sequence of HLAs that works, refine that one plan into primitives, since we know it works. To do this, we give each HLA preconditions and effects, and the state space is a set of fluents (as it was in many parts of Chapter 10). We can define a search problem known as angelic search that utilizes reachable sets of HLAs: pessimistic and optimistic reachable sets can prune away refinements that have no chance of reaching the goal.

Our agents will also have to deal with nondeterminism and uncertainty in the environment. With no observations, we can perform sensorless planning; with partial observations, we can perform contingency planning; and for unknown environments, we can do online learning. Some of this material is similar to what was presented in Chapter 4, and the main difference here is that we have a far richer state representation (fluents rather than atomic states), so we can represent belief states compactly as conjunctions whose length is linear in the number of fluents (well, assuming that our belief states are 1-CNF). It can still be tricky to update belief states after an action. An example problem with fluents might be that we have to paint two chairs to be the same color, and a sensorless agent could solve the problem by just dumping a can of paint on both chairs, without knowing the color of the paint at all. A contingent agent can also solve the problem, and it often does so more efficiently.

Finally, we can consider the multiagent case, either when one “super” agent controls multiple smaller agents, or when there are multiple, separate agents who only control themselves (and whose goals may be in competition with one another). If the agents are loosely coupled, it makes sense to decompose the transition model into independent subproblems to avoid an exponential branching factor.

Chapter 12: Knowledge Representation

In which we show how to use first-order logic to represent the most important aspects of the real world, such as action, space, time, thoughts, and shopping.

Wait, shopping? All right all right, let’s see what’s in store here in the final chapter of the Knowledge, Reasoning, and Planning aspect of AI. The previous chapters have come up with the technology (e.g., first-order logic and its various inference methods) for knowledge-based agents, but now, we have to figure out the content to put in an agent’s knowledge base.

I don’t think there is much material in this chapter that I need to know, and a lot of it is common sense. But here are the highlights regardless:

  • Ontological engineering is the process of representing abstract concepts of events, time, physical objects, and beliefs in various domains. We clearly have to do something like this to design an agent! One way is to describe an upper ontology of the world by listing some general things first, and then moving to more specific items down the tree (this is what we do in object oriented programming).
  • We need to represent the following: categories, objects, and events. We can represent categories by using straightforward predicates (e.g., the category of basketballs could be the predicate Basketball(b)) or by reification10, a.k.a. thingification, which means representing the category itself as an object, Basketballs, so that we can write statements like b ∈ Basketballs. To reason about categories, we can use the graphically appealing semantic network framework, or appeal to formalism with description logic11. Objects should be arranged in a hierarchy of categories with subclassing and inheritance, kind of like (again) how we do it in object-oriented programming. For events, rather than the situation calculus we saw earlier, we should use event calculus to deal with continuous time; event calculus reifies fluents and events.
  • Sometimes, we may wish to represent mental beliefs with modal logic rather than first-order logic, because modal logic lets us take sentences (rather than just terms) as arguments, and it allows us to represent a set of possible worlds of beliefs. The notation K_A P means that agent A knows P.
  • At the end of the chapter, there is a shopping mall example, which I find to be amusing but not that educational.

Whew! Reading the above ten chapters, as well as Chapters 1 and 2 in the book, and Chapter 13 (which is about basic probability, nothing to see here!) brings me to the 500-page mark out of 1000 pages. Now I only have 500 pages to go!


  1. Said another way, it never overestimates the cost of reaching a given state.

  2. It is also possible that the MIN player is dumb and doesn’t play optimally, but the MAX player following the minimax strategy would perform even better if that were the case.

  3. These assignments are a great way to see more complicated applications of algorithms from the textbook.

  4. Various evaluation functions (e.g., heuristics) are of course domain-specific, but the point is that the search algorithms are not domain-specific. It’s worth thinking about this often.

  5. There’s also a nice Wumpus example in this chapter that makes it twice as much fun to read. That part is not dry.

  6. The book states that propositional entailment is co-NP-complete, so every known inference algorithm for propositional logic has a worst-case complexity exponential in the size of the input. Still, this is “only” worst case complexity.

  7. Technically, resolution is complete if it is coupled with a complete search algorithm.

  8. The notation SUBST(θ, α) denotes the result of applying the substitution θ to the sentence α. A substitution should be of the form {v/g}, where v is a variable and g is a real (ground) term that we want to plug in.

  9. A literal cannot appear too late in a state level, because that would be like over-estimating the cost of that literal and the states it belongs to, resulting in inadmissible heuristics.

  10. One of the advantages of reification is that by doing so, we represent what we want in terms of objects; then we can add an arbitrary amount of information about them. For instance, reifying events means we can add in as many descriptions about the event as we want by adding in more conjuncts.

  11. Representing categories these ways also helps us to establish default values for categories. Unlike in previous forms of knowledge representation, semantic networks and description logics make it easy for us to have exceptions for objects. For example, most tomatoes are red, but some can be purple. Thus, the category of tomatoes should have a default color attribute of red, with individual objects potentially overriding those values. The connection between this and programming languages is once again obvious. Also, note that if we allow overriding, then this violates monotonicity of the logic. Monotonicity means that if KB entails a sentence α, then KB conjoined with any additional sentence β still entails α.










Advocate For Yourself

Jun 20, 2015

Advocate For Yourself

I remember hearing those words fifteen years ago when I was in elementary school. I was in a classroom where the few Deaf and Hard of Hearing (DHH) students of the school were bunched up to get tailored advice from staff members who knew sign language. The speech teacher who said those words was reminding us that as we get older, we would need to take the initiative to secure accommodations.

Fifteen years later, my mind consistently replays that phrase, and I am amazed at the importance of advocating for myself now that I successfully (by some standards) went through college and now have a “real” job. (I know being a graduate student doesn’t count, but please pretend it does.)

If I had to evaluate myself on my ability to advocate for myself on a scale of one to ten, where a one means that I’m so shy that I need my parents to conduct every form of communication, and a ten indicates that I’m so good that my inbox is stuffed with other DHH students begging me to advocate on their behalf … I’d rate myself a five. I’ll explain why.

Part of advocating for myself is, really, to state that I am deaf. This is obviously priority number one, since I have to clear that hurdle before getting additional benefits such as interpreting services. I don’t generally have a problem telling people that I’m deaf, because unlike some people with hidden disabilities who have great incentives to hide them, there is almost no reason for me to hide my deafness. Not revealing it puts me at an immediate disadvantage.

At the same time, I don’t want those words to be the first sentence that I’m telling people. This raises the key question:

When is the optimal time for me to tell people that I’m deaf?

I think the optimal time can be captured in a curve, and it would bear a shape like the hypothetical email productivity curve, with the x-axis indicating the time when I tell people I’m deaf (and make the usual requests, e.g., please talk to me clearly, turn off that blaring music, and please do not make me attend a seminar right away), and the y-axis indicating the overall joint utility that the other party and I gain.

Here’s why. At time = 0, equivalent to me telling someone I’m deaf immediately, I’ll have made things clear from the start, or at least more clear than it would have been had I not made that proclamation. That results in some utility for me.

But when two hearing people meet for the first time (e.g., during graduate student orientation), I can’t imagine that they talk about such personal things right away. They probably begin with their names, where they are from, their interests, and other small-talk fodder. Plunging right into deaf-related topics means that we would talk about something deeply personal to me, but that other person would have no knowledge about it, and may be struggling to determine if his or her immediate questions are offensive.

This is why I generally like to start conversations about “normal” things. Then when the time is right, I’ll be at the top of the joint utility curve. That is when I will tell the other person that I’m deaf. He or she may or may not gain much utility as compared to time = 0, but I know that I will have much more utility at that time, which explains the rise in the curve.

The problem with my approach is that I really have to tell the person early, because the curve quickly levels off (or can dip below zero, indicating negative (!) utility). If I keep hiding my deafness, and only reveal it after the 347th meeting with that person, then I’ll be angry at myself for not advocating for myself early, and the other person will be incensed that I didn’t tell them why I missed some information during earlier, wasted meetings. Oops.

I wish I could say that I conveyed the information regarding my deafness at optimal times to everyone important in my life. Unfortunately, I do not, and there have been several unsatisfactory events in college and in Berkeley that I probably could have avoided had I made things clearer earlier. The classic example is when I show up to meetings with at least two other people. These are a problem for me even without background noise, since it is difficult to understand two people when they are talking to each other, rather than directly to me.

Hence, the five I rate myself.

To work on my internal advocacy rating, in the future, I will no longer agree to take part in a group meeting without me making it clear that I will need some assistance. It makes things so much easier in the long run, at the cost of a little initial awkwardness that I have to learn to ignore.










I Did Not Request ASL. I Requested Transliteration.

Jun 17, 2015

Before the start of the spring 2015 semester, I had a meeting with Randy Jordan, who works in Berkeley’s Disabled Students’ Program and specializes in providing services for Deaf and Hard of Hearing students. After a terrible first semester, I talked to him about having American Sign Language (ASL) interpreting accommodations for my classes, rather than captioning.

Randy smiled and agreed. In fact, he had already made the appropriate requests for the semester. But to my surprise, he also mentioned that he requested transliteration sign language interpreting, which I assume he meant to be Signed Exact English (SEE). He said:

I did not request for ASL. I requested sign language for transliteration. That’s based on my professional opinion of you. I don’t want you to be in a class and not understand what’s going on.

I did not respond, and our meeting concluded shortly after. It’s clear that Randy thought my sign language skills with regards to American Sign Language were not up to speed.

I’ve thought a lot about what he said to me. I’m not angry at all — Randy is a great guy. In fact, I’m happy he brought this up, because to me it’s a reminder that my ASL skills are raw and unrefined. I am far better at reading, writing, and speaking English as compared to ASL. When Randy and I speak in our meetings, we usually sign. (He is the child of deaf adults.) During our conversations, Randy must have observed a tendency for me to lean towards SEE over ASL style.

Looking at my background, this shouldn’t be a surprise. While I learned sign language when I was two or three years old, I don’t know how much of it was formal ASL, or how much of it was just a sequence of memorize-this-sign-then-memorize-this-sign. I don’t remember formally studying ASL grammar, such as how one should manage facial expressions when signing particular questions.

As I progressed through my education, I also distanced myself from other deaf students (I’m excluding the hard of hearing ones now, since they usually had very limited sign language skills). At the start of elementary school, I might have had multiple classes a day where all the students were deaf or hard of hearing. But as I got older, both (1) the frequency of such classes and (2) the ratio of deaf-to-hearing students would jointly decrease. Then I went to college, where I was the only deaf student for four years, and now I’m in graduate school.

Who would I sign with? My brother, sometimes, but we don’t see each other that often now, and we were always sloppy signing to each other. It’s one of the interesting things about knowing someone for so long: even when he signs sloppily, I can still understand what he’s saying due to years of practice.

Throughout college, I did have interpreters who used ASL, and I think I understood them reasonably well (and my academic performance might offer supporting evidence). It’s clear that I am better at reading ASL than speaking it. Unfortunately, the only way I’ll be able to get better at my ASL is if I can practice with others.

Any volunteers?










If you are Deaf and Know Sign Language, but are not using Video Relay Services, you are Missing Out

Jun 6, 2015

Video Relay Services is a federally-subsidized service that allows deaf and hard of hearing users to have conversations over the phone with the assistance of a sign language interpreter. If the deaf person uses video relay to call another person, then he or she will be calling using a computer or a television, and there will be a sign language interpreter shown on the screen who follows the conversation. The sign language interpreter signs what the hearing person says, and the deaf person can see the translation and then respond by communicating in English or sign language. In the former case, no additional action is needed from the interpreter, as the callee should hear the deaf person. In the latter case, the interpreter watches what the deaf person says, and then verbally relays the information to the other party. To the callee, it is like having a normal phone call.

This sounds like a tremendously beneficial service for deaf people, and I admit it: I should have been using Video Relay Services a long time ago. I signed up to use Video Relay in February, and have used it four times so far. Since it’s June now, that might not sound like a lot, but for someone who used to make about one phone call a year to a non-family member, it’s significant.

The reason why I took so long to embrace Video Relay had to do with initial perceptions. I learned about Video Relay in 2008 when a staff member from Sorenson VRS gave a presentation about it to the deaf and hard of hearing students in my high school. I soon had it installed at home, but probably made only three calls. At that time, I had to use a specialized television screen with a specialized camera and a complicated remote, and the video quality left much to be desired.

Nowadays, it’s much easier because there are applications that allow me to use Video Relay from my laptop. I use Video Relay like I use Skype: I open the software on my computer, sign in, make sure my camera is working, and dial the desired number. The software will not call the number immediately, because it first has to connect to an available sign language interpreter, who will then make the actual call to the callee. It should not take more than five minutes to connect to an interpreter.

As I explained earlier, one can use video relay by signing or speaking to the interpreter. I use the latter case, which is technically called Voice Carry Over. This is my preferred communication mechanism, because my English is better than my ASL, and signs can turn choppy and distorted across a monitor.

To get started with using Video Relay, I signed up with Z5 Desktop and downloaded their free application for my Macbook Pro laptop. I had to first verify my address and related information with a staff member via video relay. We talked for a little while (in sign language) and then he officially gave me the go-ahead to start using the service. The whole setup was much easier than I expected. Surprisingly, he never asked me for documentation regarding my hearing loss, and as far as I know, only deaf and hard of hearing people are allowed to use Video Relay.

I suppose the lack of accessible software was a valid reason for avoiding Video Relay Services, but again, I should have used it once I went to college and had my own laptop. I can remember many cases when a simple phone call might have made things so much clearer for me. Yet, I would often settle for wading through a mountain of documentation and sending emails with long turnaround times. As far as technical difficulties are concerned, I have not had any problems so far. My one concern with accommodating the communication needs of deaf people is that there are some who do not know sign language. They would still struggle in cases when people only give out a phone number for contact information. But for someone with my background, Video Relay Services takes care of a lot of my needs, and I am thankful for that.










Review of Computer Vision (CS 280) at Berkeley

May 31, 2015

Last semester, I took Berkeley’s graduate-level computer vision class (CS 280) as part of my course requirements for the Ph.D. program. My reaction to this class in three words: it was great.

Compared to what happened in the classes I took during my first semester, there were a lot fewer cases of head-bashing, mental struggles, and nagging doubts in CS 280. One reason for this favorable outcome is that I eschewed captioning in favor of sign language interpreting, which is the accommodation type I’m most used to experiencing. What may have played an even bigger role than that, however, was the professor himself. The one who taught my class was Professor Jitendra Malik, a senior faculty member who’s been at Berkeley since 1986 and was recently elected to the National Academy of Sciences (congratulations). I realized after the first few classes that he really takes his time when lecturing. He talks relatively slowly, explains things at a high level, and repeatedly asks “Does this make sense?” He is slowest when dissecting math he’s scribbled on the whiteboard. In fact, I was often hoping he would speed up when he was strolling through basic linear algebra review that should have been a prerequisite for taking CS 280. If someone like me wants a faster class pace, that means everyone else must have wanted the same thing!

For obvious reasons, the slow lecture pace was well-suited for the two sign language interpreters who worked during lectures. They sat near where the PowerPoint slides were displayed, which made it relatively easy to move my eyes back and forth across the front of the room. As usual, I hope no one else got distracted by the interpreters. Unfortunately, the seat next to mine (I sat on the edge of the first row, no surprise) would remain suspiciously empty for a long time, and would only fill up once all the other seats were occupied save for a few hard-to-reach center ones.

I should warn future CS 280 students: this is a popular graduate-level course, for reasons that will be clear shortly. The auditorium we had was designed to seat about 80-90 people, and we probably had over 100 when the class began. It did eventually drop to 80 — aided by some forced undergraduate drops — but before those drops, the tardy students had to sit on the floor. This is a real problem with Berkeley EECS courses, but I guess the university is short on funds?

Anyway, let’s discuss the work. While the lectures were informative, they did not go in great detail on any topic. Professor Malik, when faced with an esoteric concept he didn’t want to explain, would say “but I won’t explain this because you can read the [academic] paper for yourself, or Wikipedia.” Well, I did use Wikipedia a lot, but the thing that really helped me understand computer vision concepts were the homework assignments. We only had three of them:

  • Homework 1 was a jack-of-all-trades assignment which asked us questions on a wide variety of subjects. The questions were related to perspective projection, rotation analysis (e.g., Rodrigues’ formula), systems of equations, optical flow, and the Canny edge detector, which we had to implement. I think it is too hard to implement the Canny edge detector from the original 1986 paper, and I’m pretty sure most of the students relied on a combination of Wikipedia and other sources to get the algorithm pseudocode. Overall, this homework took a really long time for me to finish (I blame the perspective projection questions), but we did have two weeks to do this, and we all got an extension of a few days.

  • Homework 2 was a smaller assignment that focused on classifying hand-written digits using SVMs (part 1) and neural networks (part 2). The first part was not too bad, as we just had to write some short MATLAB scripts, but the second part, which required us to use the open-source, Berkeley-developed caffe software, probably took everyone a longer time to finish. To put it politely, research software is rarely easy to use1, and caffe is no exception. I could tell that there were a lot of headaches based on the complaints on Piazza! Oh, and I should also mention that caffe had a critical update a few days before the deadline, which broke some of the older data formats. Be warned, everyone. To no one’s surprise, we all got an extension for this homework after that surprise update.

  • Homework 3 was another small assignment, and it was about reconstructing a 3-D scene using points measured from multiple cameras with different centers. We had to implement an algorithm that would match up the different coordinate systems to determine the true coordinates of a point. Our online textbook went through the algorithm in detail, so it wasn’t too bad to read the textbook and apply what was there (and possibly supplement with external sources). Unfortunately, this homework’s due date was originally set to be on the same day as the midterm! After some more Piazza complaints (none from me!), we all got yet another extension2.

One thing I didn’t quite like was that Homework 1 took significantly longer than the second one, which took significantly longer than the third one. There wasn’t much balance, it seems. On the other hand, we could work in groups of two, which made it easier.

Aside from the homeworks, the other two aspects of our grade were the midterm and the final project. The midterm was in-class, set to be eighty minutes, but the administrative assistant was a little late in printing the tests, so we actually started ten minutes late. Fortunately, Professor Malik gave us five extra minutes, and told us we didn’t have to answer one of the questions (unfortunately, it was one that I could have answered easily). As for the midterm itself, I didn’t like it that much. I felt like it had too many subjective multiple-choice questions (there were some “select the best answer out of the following…”). I don’t mind a few of those, but it’s a little annoying to see thirty percent of the points based on that and to see that you lost points because you didn’t interpret the question correctly. The average score appeared to be about 60 percent.

With regards to my final project, I did enjoy it, even if what I produce almost always falls short of my initial expectations. I paired up with another student to focus on extracting information from videos, but we basically did two separate projects in parallel and tried to convince the course staff that they could be combined in “future work” in our five-page report. What I did was take YouTube video frames from Eclipse, Excel, Photoshop, or SketchUp videos and trained a neural network (using caffe, of course) to recognize which of those four applications a given frame came from. Thus, the neural network had to solve a four-way classification problem for each frame. The results were impressive: my network got over 95 percent accuracy!

The final project was also enjoyable because each group had to give a five minute presentation to the class. This is better than a poster session because I don’t have to go through the hassle of going to people and saying “Hi, I’m Daniel, can I look at what you did?” One amusing benefit of these class presentations is that I get to learn the names of people in this class; it’s definitely nothing like Williams where we are expected to know our classmates’ names. When I see names that are familiar from Piazza, I think: whoa, that was the person who kept criticizing me!

To wrap it up, I’d like to mention the increasing importance of computer vision as a research field, which is one of the reasons why I took this class. Computer vision is starting to have some life-changing impacts on real life. It has long been used for digit recognition, but with recent improvements, we’ve been able to do better object detection and scene reconstruction. In the future, we will actually have automated cars that use computer vision to track their progress. It’s exciting! (One of my interpreters was terrified at this thought, though, so there will be an “old guard” that tries to stop this.) A lot of progress, as Professor Andrew Ng mentions in this article, is due to the power of combining the ages-old technique of neural networks with the massive amounts of data we have. One of the things that motivated this line of thinking was a 2012 paper published at NIPS, called ImageNet Classification with Deep Convolutional Neural Networks, that broke the ImageNet classification record. It spawned a huge interest in the application of neural networks to object detection and classification problems, and hopefully we will end up seeing neural networks become a household name in the coming years.


  1. Moses, for statistical machine translation, is no exception either. It is extremely hard to install and use, as I discussed last November.

  2. If there had been a fourth homework assignment, I would have been tempted to say the following on Piazza: “To the TAs/GSIs: I would like to ask in advance how long the incoming homework extension will be for homework four?”










The Joy of Talking To Others

May 30, 2015

I had a vision of what I wanted to be like before I entered graduate school. Some things have worked out, and others haven’t. One thing that hasn’t — and not necessarily in a bad way — is my changing opinion of how I want to structure my schedule so as to talk to others.

Originally, I wanted to be someone who could hunker down at his desk for sixteen hours a day and tenaciously blast his way through a pesky math or programming problem. I wanted to possess laser-sharp, Andrew Wiles-level focus, and channel it to work on computer science all day without a need to have others hinder my progress with meetings and various requests.

That vision has not become reality. The key factor? I really want to talk to people.

For most of my life, I never viewed myself as “normal.” This was largely a consequence of being deaf and being isolated in social settings. But I am normal in the sense that I thrive on talking and socialization.

My isolation in recent years has made me hungrier and hungrier to socialize, and when I don’t get that opportunity and see people my age establishing new friendships on a regular basis, I relentlessly beat myself up for failing to take the necessary initiative. Is there something they do that I should be doing? Am I not painting the correct impression of myself?

As a result, sometimes those “uninterrupted hours” that I’ve gone through during work have really been “interrupted” by my brain1, which is constantly telling me that I should socialize. Somehow.

My brain will often go further than that, in a peculiar way. I don’t know how common this is with people, but my brain is constantly creating and envisioning fictional social situations involving me. A typical scene will be me and a few other people socializing. Interestingly enough, I will be participating in these conversations much more often than is typical for me, and the other people will be more engaging towards me than usual. That’s it — those are the key commonalities in these scenes. I don’t know … is my brain trying to form what my hope would be for a normal social situation? Is it trying to compensate for some real-life deficiency? This kind of hypothetical scene formation, for lack of a better way to describe it, happens more often when I am in bed and trying to sleep. I will usually go through cycles of social scenes, with an intriguing rotation of settings and conversationalists2.

During the day, I find that if I go too long without talking to someone, these thoughts may appear in a similar form as those that occur in the evening. Recently, they seem to begin when I think about how I’m missing out because I don’t know many people and don’t always have the courage to talk to others. Unfortunately, there’s a paradox: most of the time, when I have tried attending social events, I tend to feel worse. Huh.

One-on-one meetings, of course, are the main exception to this rule. Even if such meetings are not strictly for social purposes (e.g., a student-advisor relationship), I usually feel like they have served a social purpose, and that they fulfill my minimum socialization goal for the day. It’s no surprise that after meetings, my mood improves regardless of the outcome, and I can get back to my work in a saner state.

I think it’s more important for someone like me to have meetings and to talk to others while at work. The rationale is that I don’t get to talk to many people, so any small conversation in which I do participate provides more utility to me as compared to that other person, because he or she will have had more social opportunities throughout the day.

As a result, I now try to stagger my schedule so that, instead of having three days completely free and one day with four meetings, I’ll have one meeting per day. Having just one half-hour meeting can completely change the course of a day by refreshing my “focus” and “motivation” meters so that I can finish up whatever task I need to finish.

You know, for someone who doesn’t socialize much, I sure do think about socialization a lot! Case in point: this short essay! The reason why I am just now writing this post is a recent visit to the Rochester Institute of Technology (RIT) to discuss some research with several colleagues. While I was at RIT, I had an incredibly easy time talking with my colleagues in sign language.

More than ever, I appreciate the enormous social benefit of RIT to deaf students. While I may have issues with how RIT handles academic accommodations, it has one thing that no other university can boast: a thriving, intelligent deaf population of computer scientists and engineers. Wouldn’t that be my ideal kind of situation?

My visit to RIT made me again wonder how my life would have been different had I decided to pick RIT for undergrad. Life, however, is full of tradeoffs, and whatever social benefit I would have gained from going to RIT would have been countered by a possible reduction in my future opportunities. If I had attended RIT, I doubt I would have gotten into the Berkeley Ph.D. program, because the reputation of one’s undergraduate institution plays a huge (possibly unfair) role in determining admission.

I don’t want to suggest that all deaf people can follow my footsteps. I know that the only reason why I had that kind of “choice privilege” to decide between a hearing-oriented versus a hearing-and-deaf-oriented school is that my level of hearing (with hearing aids) and speech are just good enough so that I can thrive in a hearing-dominated setting. By thrive, I mean academically (in most cases); I am always at the bottom of the social totem pole.

When I think about my social situation, I sometimes get angry. Then I react by reminding myself of how lucky I am in other ways. With the exception of hearing, I have a completely functional body with excellent mobility. My brain appears to be working fine and can efficiently process through various computer science problems that I face in my daily work. I live in a reasonably nice apartment in Berkeley, in an area that is reasonably safe. I have a loving family that provides an incredible amount of support to me.

In fact, almost every day since entering college five years ago, I’ve reminded myself of how lucky I am in these (and other) regards. I wish I could say I did this every day, but I’ve forgotten a few times. Shame on me.

I’m under no illusions. I am really lucky to be able to have conversations with hearing people, and I treasure these moments to what may seem like a ridiculous extent. Not all deaf people can do this on a regular basis. I do get frustrated when I communicate with others and don’t always get the full information.

But it could be a lot worse.


  1. Or maybe I should say that it’s my mind that’s mentally communicating with me? For simplicity, I’ll just term my collective consciousness as “brain” here.

  2. Most of the time, I know the people that are in my fictional conversations. They do not always know me.










Finished the Wordpress-to-Jekyll Migration

May 22, 2015

In my last post, I talked about the process of migrating my Wordpress.com blog into this Jekyll blog. I finally finished this process — at least, to the extent where nothing is left but touch-ups — so in this post, I’d like to explain the final migration steps.

The Manual Stuff

Let’s start with the stuff that I put off as long as possible: checking every single post to fix every single blog-to-blog post link and every single image. With 150 posts, this was not the most fun activity in the world.

Due to the transition to Jekyll, the old blog-to-blog links1 that I had in my blog posts were no longer valid. To fix those up, I went through every single blog post, checked if it referenced any other blog posts as links, and changed the link by replacing seitad.wordpress.com with danieltakeshi.github.io. Remember, this is only valid because I changed the Wordpress.org way of permalinks to match Jekyll’s style. If I had not done that, then this process would have involved more editing of the links.

I also fixed the images for each post. For each post that uses images, I had to copy the corresponding image files from my old folder of Wordpress.com images and paste them into the assets folder here. Then, I fixed the image HTML in the Markdown files so that it looked like this:

<img src="http://danieltakeshi.github.io/assets/image_name" alt="some_alt_text">

By keeping all images in the same directory, they have a common “skeleton” which makes life easier, and image_name is the file name, including any .png, .jpg, etc. stuff. The some_alt_text is in case the image does not load, so an alternative text will appear in place of the image.

The Interesting Stuff

Incorporating LaTeX into my posts turned out to be easier than expected. Unlike in Wordpress, where I had to do a clumsy $latex ... $ to get math, in Jekyll + MathJax (which is the tool that gets LaTeX to appear in browsers) I can do $$ ... $$. For example, $$\int_0^1 x^2dx$$ results in the nicely rendered integral. It does require two more dollar signs than usual, but nothing is perfect.

Note: to get MathJax, I pasted the following code at the end of my index.html file:

<script type="text/javascript" 
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>

There were a few other things I wanted to do after incorporating math, and they also required modifying index.html:

  • To have all the post content appear on the front page rather than only the titles. The latter case forces the user to click on the title to see any of the content.
  • To add pages on the home page so that the first page (the default one) would display the 20 most recent posts, then the next page would list the next 20, etc.

To get the post content to appear, in index.html where it loops through the posts in the site, there is a place where one can display the content by using the post.content variable2. To get the pages, see this blog post, which explains that one needs to add the paginate: X line in the _config.yml file, where X is the number of posts per page. Then, one can loop through the paginator.posts variable to loop through the posts. To get a page counter at the bottom that lets you jump from page to page, you need to use the paginator.previous_page, paginator.page, and paginator.next_page variables. Again, see the linked blog post, which explains it clearly, as well as the minimal sketch below.
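For concreteness, here is a minimal sketch of what that part of index.html might look like, assuming a paginate: 20 line in _config.yml (the variable names come straight from Jekyll’s paginator; the surrounding markup is just illustrative):

{% for post in paginator.posts %}
  <h2><a href="{{ post.url }}">{{ post.title }}</a></h2>
  <p>{{ post.date | date_to_string }}</p>
  {{ post.content }}
{% endfor %}

<!-- page counter at the bottom -->
{% if paginator.previous_page %}
  <a href="{{ paginator.previous_page_path }}">Newer</a>
{% endif %}
Page {{ paginator.page }} of {{ paginator.total_pages }}
{% if paginator.next_page %}
  <a href="{{ paginator.next_page_path }}">Older</a>
{% endif %}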

The Future

There are certainly a bevy of things I could do to further customize this blog. I’ll probably add some Google Analytics stuff later to track my blog statistics, but I don’t view that as a high priority. Instead, I would like to start writing about more computer science topics.


  1. Just to be clear: these are when blog posts link to another blog posts in the text, such as by saying “click HERE to see my earlier statement”, where “HERE” is replaced with the link.

  2. It’s also possible to use post.excerpt which will only display part of a post on the home page, but I found that it messed up the links within the post.










Seita's Place has Migrated from Wordpress to Jekyll!

May 14, 2015

A New Era

This post marks the beginning of a new era for my blog. For almost four years (!), which saw 151 posts published (this post is #152), Seita’s Place was hosted by Wordpress.com. Over the past few months, though, I’ve realized that this isn’t quite what I want for the long run. I have now made the decision to switch the hosting platform to Jekyll.

Many others (e.g., Vito Botta and Tomomi Imura) have provided reasons why they migrated from Wordpress to Jekyll, so you can read their posts to get additional perspectives.

In my case, I:

  • wanted to feel like I had more control over my site, rather than writing some stuff and then handing it over to a black-box database to do all the work.
  • wanted to write more in Markdown and use git/GitHub more often, which will be useful for me as I continue to work in computer science.
  • wanted to more easily be able to write code and math in my posts.
  • wanted to use my own personal text editor (vim is my current favorite) rather than Wordpress’s WYSIWYG editor.
  • was tired of various advertisements underneath my posts.
  • wanted to be more of a hacker and less like the “ignorant masses,” no offense intended =).

Jekyll, which was created by GitHub co-founder Tom Preston-Werner1, offers a splendid blogging platform with minimalism and simplicity in mind. It allows users like me to write posts in plain text files using Markdown syntax. These are all stored in a _posts directory inside the overall blog directory. To actually get the site to appear online, I can host it on my GitHub account; here is the GitHub repository for this site2. By default, such sites are set to have a URL at username.github.io, which for me would be danieltakeshi.github.io. That I can use GitHub to back up my blog was a huge factor in my decision to switch over to Jekyll for my blog.

There’s definitely a learning curve to using Jekyll (and Markdown), so I wouldn’t recommend it for those who don’t have much experience with command-line shenanigans. But for me, I think it will be just right, and I’m happy that I switched.

How Did I Migrate?

Oh boy. The migration process did not go as planned. I was hoping to get that done in about three hours, but it took me much longer than that, and the process spanned about four days (and it’s still not done, for reasons I will explain later). Fortunately, since the spring semester is over, there is no better time for me to work on this stuff.

Here’s a high-level overview of the steps:

  • Migrate from Wordpress.com to Wordpress.org.
  • Migrate from Wordpress.org to Jekyll
  • Migrate comments using Disqus
  • Proofread and check existing posts

The first step to do is one that took a surprisingly long time for me: I had to migrate from Wordpress.com to Wordpress.org. It took me a while to realize that there even was a distinction: Wordpress.com is hosted by Wordpress and they handle everything (including the price of hosting, so it’s free for us), but we don’t have as much control over the site, and the extensions they offer are absurdly overpriced. Wordpress.org, on the other hand, means we have more control over the site and can choose a domain name to get rid of that ugly “wordpress” text in the URL. Needless to say, this makes Wordpress.org extremely common among many professional bloggers and organizations.

In my case, I had been using Wordpress.com for seitad.wordpress.com, so what I had to do was go to Bluehost, pay to create a Wordpress.org site, which I named seitad.com, and then I could migrate. The migration process itself is pretty easy once you’ve got a Wordpress.org site up, so I won’t go into detail on that. The reason why I used Bluehost is that it’s a recommended Wordpress provider, and on their website there’s a menu option that you can click to create a Wordpress.org site. Unfortunately, that’s about it for my praise, because I otherwise really hate Bluehost. Did anyone else feel like Bluehost does almost nothing but shove various “upgrade feature XXX for $YZ” messages down our throats? I was even misled by their pricing situation, and instead of paying $5 to “host” seitad.com for a month, I accidentally paid $71 to host that site for a year. I did notice that they had a 30-day money back guarantee, so hopefully I can hastily finish up this migration and request my money back so that I won’t have to deal with Bluehost again3.

To clarify, the only reason why I am migrating to Wordpress.org is because the next step, using a Wordpress-to-Jekyll exporter plugin, only works on Wordpress.org sites, because Wordpress.com sites don’t allow external plugins to be installed. (Remember what I said earlier about how we don’t have much control over Wordpress.com sites? Case in point!) But before we do that, there’s a critical step we’ll want to do: change the permalinks for Wordpress to conform to Jekyll’s default style.

A permalink is the link extension given to a blog post after the end of the site URL. For instance, suppose a site has address http://www.address.com. It might have a page called “News” that one can click on, and that could have address http://www.address.com/news, and news would be the permalink.

Modifying permalinks is not strictly necessary, but it will make importing comments later easy. The default Wordpress.org scheme just appends something like ?p= followed by an integer (the post ID). We want to change it to match Jekyll’s default naming scheme, which is /year/month/day/title, and we can do that by modifying the “Permalinks” section in the Wordpress dashboard.

[Image: permalinks]
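For concreteness, the custom structure to enter on that permalinks page should be something along these lines (the %...% pieces are standard Wordpress structure tags):

/%year%/%monthnum%/%day%/%postname%/

which matches the /year/month/day/title style that Jekyll produces by default (and which can also be set explicitly in _config.yml with a line like permalink: /:year/:month/:day/:title/).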

Now let’s discuss that Wordpress-to-Jekyll exporter I recently mentioned. This plugin, created by GitHub staff member Ben Balter, can be found (you guessed it) on GitHub. What you need to do is go to the “Releases” tab and download a .zip file of the code; I downloaded version 2.0.1. Then unzip it and follow the instructions that I’ve taken from the current README file:

  1. Place plugin in /wp-content/plugins/ folder
  2. Activate plugin in WordPress dashboard
  3. Select Export to Jekyll from the Tools menu

Steps (2) and (3) shouldn’t need much explanation, but step (1) is the trickiest. The easiest way to do this is to establish what’s known as an FTP connection to the Wordpress.org server, with the “host name” field specified by the URL of the old site (in my case, seitad.com). What I did was download FileZilla, a free FTP provider, and used its graphical user interface to connect to my Wordpress.org site.

[Image: filezilla]

Note that to connect to the site, one does not generally use his or her Wordpress.org’s login, but instead, one needs to use the login information from Bluehost4! Once I got over my initial confusion, I was able to “drag and drop” the wordpress-to-jekyll exporter plugin to the Wordpress site. You can see in the above image (of Filezilla) that I have the plugin in the correct directory on the remote site. Executing steps (2) and (3) should then result in a jekyll-export.zip file that contains the converted HTML-to-Markdown information about blog entries, as well as other metadata such as the categories, tags, etc.

All right, now that we have our zip file, it’s time to create a Jekyll directory with the jekyll new danieltakeshi.github.io command, where danieltakeshi should be replaced with whatever GitHub username you have. Then take that jekyll-export.zip file and unzip it in this directory. This should mean that all your old Wordpress posts are now in the _posts directory, and that they are converted to Markdown, and that they contain some metadata. The importer will ask if you want to override the default _config.yml file; I chose to decline that option, so _config.yml was still set to be what jekyll new ... created for me.

The official Jekyll documentation contains a tool that you can use to convert from Wordpress (or Wordpress.com) to Jekyll. The problem with the Wordpress.com tool is that the original Wordpress.com posts are not converted to Markdown, but instead to plain HTML. Jekyll can handle HTML files, but to really get it to look good, you need to use Markdown. I tried using the Wordpress.org (not Wordpress.com) tool on the Jekyll docs, but I couldn’t get it to work due to missing some Ruby libraries that later caused a series of dependency headaches. Ugh. I think the simplicity and how the posts actually get converted to Markdown automatically are the two reasons why Ben’s external jekyll plugin is so popular among migrators.

At this point, it makes sense to try and commit everything to GitHub to see if the GitHub pages will look good. The way that the username.github.io site works is that it gets automatically refreshed each time you push to the master branch. Thus, in your blog directory, assuming you’ve already initialized a git repository there, just do something like

$ git add .
$ git commit -m "First commit, praying this works..."
$ git push origin master

These commands5 will update the github repository, which automatically updates username.github.io, so you can refresh the website to see your blog.

One thing you’ll notice, however, is that comments by default are not enabled. Moreover, old comments made on Wordpress.org are not present even with the use of Ben’s Wordpress-to-Jekyll tool. Why this occurs can be summarized as follows: Jekyll generates static pages, but comments are dynamic. So it is necessary to use an external system, which is where Disqus comes into play.

Unfortunately, it took me a really long time to figure out how to import comments correctly. I’ll summarize the mini-steps as follows:

  • In the Admin panel for Disqus, create a new website and give it a “shortname” that we will need later. (For this one, I used the shortname seitasplace.)
  • In the Wordpress.org site, install the Disqus comment plugin6 and make sure your comments are “registered” with Disqus. What this means is that you should be able to view all comments in your blog from the Disqus Admin panel.
  • Now comes the part that I originally missed, which took me hours to figure out: I had to import the comments with Disqus! It seems a little confusing to me (I mean, don’t I already have comments registered?), but I guess we have to do it. On Disqus, there is a “Discussions” panel, and then there’s a sub-menu option for “Import” (see the following image for clarification). There, we need to upload the .xml file of the Wordpress.org site that contains all the comments, which one can obtain without a plugin by going to Tools -> Export in the Wordpress dashboard.

[Image: disqus_image]

  • You will also likely need to do some URL mapping. Comments in Disqus are stored relative to a URL, and the default URL is obviously the source from where it was imported! But if we’re migrating from source A to source B, doesn’t it make sense to have the comments default to source B’s URL instead of source A’s? In my case, I used a mapper in the “Tools” menu (in the above image) to convert all the comment links to be based on the current site’s URL; a sketch of the mapping format appears after this list. That way, if the original source (i.e., the Wordpress site) gets deleted, we still retain the comments7. If you made sure the permalinks match, then this process should be pretty easy.
  • Finally, the last thing to do is to actually install Disqus comments in the code for the Jekyll site. For that, I went to the “Universal Code” option for Disqus, and pasted the HTML code there into the _layouts/post.html file.
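If I recall the URL mapper correctly, it works off a CSV in which each line pairs an old comment-thread URL with its new URL, roughly like this (the post slug below is made up purely for illustration):

http://seitad.com/2015/04/25/some-old-post/, http://danieltakeshi.github.io/2015/04/25/some-old-post/

Since the permalinks already match, producing these lines is essentially a search-and-replace on the domain.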

After several restarts due to some random issues with Disqus/Wordpress having problems with deleted material, I was finally able to get comments imported correctly, and they had the same names assigned to the commenters! Excellent! The trackback comments, which are created by Wordpress when one blog post links to another blog post, did not get copied over here, but I guess that’s OK with me. I mostly wanted the human comments, for obvious reasons.

Whew! So we are done, right? Oh, never mind – we have to proofread each post! Since I had 151 posts from Wordpress to import, that meant I had to proofread every single one of them. Ben’s importer tool is good but not perfect, and code- or math-heavy posts are especially difficult to convert correctly. Even disregarding code and math, a common issue was that italicized text wouldn’t get parsed correctly. Sometimes the Markdown asterisks were “one space too far ahead”, e.g., if the word code needs to be italicized, the Markdown syntax for that is *code*, but sometimes the importer created *code *, and that dangling space can create some ugly asterisks visible in the resulting HTML.

Even after basic proofreading, there are still additional steps one needs to take in order to ensure a crisp blog. One needs to

  • fix the links for the images, since the images by default are set to the original Wordpress address. The Wordpress-to-Jekyll plugin will put the images in the wp-content folder, but I (and the official Jekyll documentation) recommend copying those images over to an assets folder. The default wp-content folder contains too many folders and sub-directories for my liking, but I guess it’s useful if a blog contains thousands of images.
  • fix the post-to-post hyperlinks in each post to refer to the current Jekyll version. In vim, this should be easy as I can do a bunch of find-and-replace calls to each file. Ensuring that the Wordpress permalinks follow Jekyll-style permalinks makes this task easier.
  • incorporate extra tools to get LaTeX formatting.

I haven’t been able to do all these steps yet, but I’m working on it8.

Whew! The best part about the migration is that you only have to do it once. Part of the problem is that I had to rely on a motley collection of blog posts to help me out. The Jekyll documentation itself was not very helpful9.

Post-Migration Plan

In addition to the actual migration, there are some sensible steps that users should take to ensure that they can extract maximal utility from Jekyll. For me, I plan to

  • learn more Markdown10! And in addition, it makes sense to use a text editor that can handle Markdown well. I’m using vim since it’s my default, but it’s actually not that useful to me, because I set the syntax mode off (using :syntax off) and by default vim does not have a Markdown highlighter. I’m sure someone has created a Markdown syntax add-on to vim, so I’ll search for that.
  • actually make the site look better! I don’t mind the simplicity of default Jekyll, but a little more “pizzazz” wouldn’t hurt. I’d like to at least get a basic “theme” up and running, and to include excerpts from each post on the front page.
  • make a site redirect from my old Wordpress.com site, so that it redirects users to this site. I’d rather not delete the old site all of a sudden (even though I will delete it eventually). But I will get rid of that Wordpress.org site that I had to pay to create, all just to help me migrate to Jekyll.

Incidentally, now that we’ve covered the migration pipeline, I thought I should make it clear how one would go about using Jekyll. To add new posts, one simply adds a file in the _posts directory that follows the convention YYYY-MM-DD-name-of-post.ext and includes the necessary front matter, which contains the title, the date, etc. Looking at the raw Markdown code of sample posts is probably the easiest way to learn.
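As a skeletal example, a file such as _posts/2015-05-14-name-of-post.md might begin with front matter along these lines (the title is obviously a placeholder, and layout: post refers to the default post layout that jekyll new generates):

---
layout: post
title: "Name of Post"
date: 2015-05-14
---

Everything after the closing --- is just the post body, written in Markdown.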

One could update the site with each edit by adding, committing, and pushing to GitHub, but probably a better way is to update locally by running jekyll build; jekyll serve. This will create a local copy of Jekyll that one can have open in a web browser even if one doesn’t have Internet access. Each time one saves a post, the server will update, so by refreshing, we can see the edit. It won’t catch all edits — I had to push to GitHub and then update the local copy to get images to show up correctly — but it is useful enough that I thought I’d share (and suggest) it. Furthermore, if the website is public, it’s best to update/push polished versions of posts rather than works-in-progress.

Hopefully these personal notes prove useful to future Wordpress.{com,org}-to-Jekyll migrators. In the meantime, I’m going to fix up the rest of this site and prepare some new entries that accentuate some of Jekyll’s neat features.


  1. By the way, saying something like “GitHub co-founder Tom …” is the computer programming equivalent of the legal world saying “Yale Law School graduate Bob …”. The fact that he co-founded GitHub immediately heightens my opinion of him. Oh, by the way, do you like how Jekyll does footnotes? Just click the little arrow here and you’ll be back up to where you were in the article!

  2. If you have experience using GitHub, then you can even fork my repository on GitHub to serve as a launching point for your own site or to test out Jekyll.

  3. Just to be clear, if you host a site on a public GitHub repository, then it’s free. That’s yet another reason to use Jekyll/GitHub!

  4. This username information should be included in the first email (or one of the first, I think) you got from Bluehost. The password should be the password you use to log into Bluehost to get to your user control panel.

  5. If you’re wondering how I was able to get a code block highlighted like that, I wrap the commands with three tildes (~~~) before and after the text. This is with the kramdown Markdown scheme.

  6. Fortunately, you can find this plugin by searching in Wordpress directly; there’s no need to engage in fancy FTP stuff.

  7. Actually, I haven’t tested this yet. I hope this works.

  8. Interestingly enough, the Jekyll docs for migrating from Wordpress.com to Jekyll currently link to an external blog post from March 2011. I found that blog post to be reasonably helpful, but it didn’t really do what I needed, which tends to be a problem when following such guides.

  9. To add to the complexity, there are several different versions of Markdown. My site is currently using the kramdown style; another popular one (which GitHub Pages uses) is redcarpet, but that style messed up my footnotes, so I avoided it.










Why It’s Difficult for me to Drop Classes

Apr 25, 2015

At this time, many Berkeley students are selecting their tentative courses for the Fall 2015 semester. I’m doing the same as well. I’m thinking of taking EE 227BT, Convex Optimization, which is a math class describing the wonders and treasures of convexity, and maybe CS 287, Advanced Robotics, which pertains to the math behind robot motion and decision making. In a few weeks, I’ll need to let Berkeley’s Disabled Students Program (DSP) know about my courses so that they can make arrangements to secure semester-long services.

I have to make such course decisions early and I have to be sure about what I am taking. The reason is that it is difficult for me to add or drop a class once a semester starts.

Most students do not have this problem. Schools usually have an add/drop period during the beginning of the semester. In that time, students can show up to a variety of classes and decide to drop a few that turned out not to be what they expected. (The overwhelming reason why students drop a class is because it demands more work than they can handle.) Depending on the class policies, students can also enroll in new classes within this period even if they didn’t show up to the first few lectures.

For me, I don’t have that luxury because class accommodations require weeks of advance preparation. To start, I must inform Berkeley’s Disabled Student Program about the classes I am taking so that they can make the necessary preparations. Securing a semester-long CART provider or sign language interpreter is not automatic because availability varies; I have experienced cases where I got accommodations with a day’s notice, and others where I couldn’t get any despite a week’s notice or more.

Those were for one-time events, though. It takes a longer time to secure CART providers or interpreters for semester-long jobs, and if I were to show up to a class for a few weeks and decide to drop it when there were still eight weeks to go, then those people would effectively lose up to eight weeks’ worth of money. (Replacing the funding with other interpreting jobs is not always easy, because demand occurs at varying times and locations.) In fact, when I was in my second semester at Williams, I enrolled in a class’s lab section that met on Thursday afternoons. I quickly secured accommodations for that lab session … and then just before the semester began, I decided to switch to having that session meet on Wednesday afternoons, because it greatly simplified my schedule.

It was a routine switch, but doing so cost that Thursday interpreter about $600 worth of payment in a month. While I did secure a different interpreter for that lab session, the original one did not work for me again in my remaining time at Williams, and I constantly regret my choice to switch sessions. He obviously had the opportunity to work for me in later semesters, but because I dropped that lab session on short notice, he (understandably) did not want to take the risk of losing more money. Furthermore, Williams is isolated and does not have an interpreting workforce, so the interpreters I did have (from Albany, New York) had to drive long distances to get to work. Thus, a one-hour commitment at the school could have easily taken up four hours in a day, which reduces the chances of finding other interpreting work in the same day. This is one reason why I often tried to schedule consecutive classes to maximize the monetary benefit for my interpreters.

As a result of that experience, I did not drop any Williams classes other than that lab session, which barely counts since it was part of the same general course. It does mean I have to “tough it out” in classes that turn out to be really difficult or boring, but I was a good student so this usually was not an issue. This line of thinking is carrying over to Berkeley, where I aim to complete all classes I enroll in and to minimize sudden schedule changes. I really want to maintain a good relationship between Berkeley’s DSP and the San Francisco agency that provides interpreting services.

Nevertheless, it’s important to retain perspective and realize the best case, most probable case, and worst case scenarios. Having hassles relating to adding and dropping classes is better than not getting any accommodations.










The Missing Deaf American Politician

Apr 11, 2015

[Image: clinton]

It’s time to gear up for the 2016 United States Presidential election race! Ted Cruz, Rand Paul, Hillary Clinton, and Marco Rubio — in that order — have all announced or will announce that they will be running for president.

Now marks the beginning of the inevitable wait until Rand Paul becomes the next president. But in the meantime, I wonder whether the United States has ever had a prominent deaf politician. Anyone in the Senate? How about the House of Representatives? Or even a member of a state legislature, or a mayor of a large city? Given their average age, I’m sure we have had some slightly hearing-impaired politicians, but those don’t count to me. I’m talking about someone who was born deaf or became deaf at a young age, who knows sign language, and who has strong connections to the Deaf community. Here, I’m using the capital “D” to indicate association with the community.

Unfortunately, to the best of my knowledge, America has never had one. On Wikipedia, there are currently two relevant pages: “Deaf politicians” and “List of Deaf People”. (I know there are politicians who don’t have Wikipedia pages, but the simple existence of such a page indicates that there is some prestige to the position to which the politician is elected.)

The “Deaf politicians” page currently lists 14 relevant names. What’s interesting to me is that none of these people are or were American politicians: four are British, two are Hungarian, and there is one each from France, Austria, Greece, Belgium, Iceland, Canada, South Africa, and New Zealand.

It’s also intriguing that the list is dominated by Europeans. A natural follow-up question is whether deaf people face additional barriers to political office in non-European countries compared to European ones. I’m particularly curious about the treatment of deaf people in Asian countries.

The second page, “List of Deaf People”, does not add any deaf politicians beyond those on the first page.

Thus, it looks like America has lacked a prominent deaf politician for its entire existence. From my investigation, the closest we have to one is the Canadian, Gary Malkowski, who spoke in American Sign Language while on the job. (Here is a biography of him, and another one on lifeprint.com, which is also an excellent resource for getting up to speed on ASL.) Mr. Malkowski was probably the first truly elected deaf politician in the world, serving in the Legislative Assembly of Ontario from 1990 to 1995 and becoming one of the world’s foremost advocates of rights for people with disabilities. Not being Canadian, I don’t have a good sense of how prestigious his position was, but I imagine it’s comparable to a seat in an American state legislature. His background includes a Bachelor’s degree in Social Work and Psychology from Gallaudet University.

While it is disappointing that the deaf American politician has yet to appear, I am confident that within the next thirty years we will see at least one such person, given how many barriers have eroded over the years to allow a more educated deaf population. I’m guessing there will be some debate over the “level of deafness” of whichever candidate shows up. I would bet that if this future politician has a background in American Sign Language and even a weak connection to the Deaf community, he or she will win the vote of most of the severely hearing-impaired population (which includes the Deaf community). The main question, of course, is whether the general population can provide the necessary votes.

To be clear, a deaf person should not automatically vote for a deaf politician, akin to how a black person should not automatically vote for Barack Obama or a woman should not automatically vote for Hillary Clinton. But such demographic information is a factor, and people can relate to those who share similar experiences. For instance, being deaf is key for positions such as the presidency of Gallaudet University.

To wrap up this post, here’s my real prediction for the two ultimate candidates for the 2016 U.S. Presidential Election: Hillary Clinton and Scott Walker. Jeb Bush is not winning his party’s nomination since voters will (possibly unfairly) associate him with his brother.

I’ll come back to this post in a little over a year to see if I predicted correctly. By then, hopefully there will be a deaf person who is making a serious run for a political position, but I doubt it.










Do I Inconvenience You?

Apr 4, 2015

Like many deaf people, I often have to request assistance or accommodations for events ranging from meetings to social gatherings in order to benefit from whatever they offer. These accommodations may be in the traditional realm of sign language interpreters, note-taking services, and captioned media, but they can also be more informal, such as asking a person to talk in a certain manner, or securing a person who will stay with me at all times throughout a social event. (Incidentally, I’ve decided that the only way I’ll join a social event nowadays is if I know for sure that someone else there is willing to stay with me the entire time, since this is the best way to prevent the event from turning into a “watch this person talk about a mysterious subject for thirty seconds and then switch to watching another person” situation.)

On the other hand, when I request assistance, I worry that I inconvenience others. This is not new for me (I wrote about this a year and a half ago), but with the prospect of having to attend more group meetings and events in the future, I worry about whether others will view me as a burden, if they do not think so already.

Unfortunately, I preoccupy myself with whether I inconvenience others far more often than is healthy or necessary. For instance, I often wonder if sign language interpreters distract other students. I remember my very first class at Williams (wow, that was a long time ago…) where the professor remarked that a lot of the students were exchanging glances at the sign language interpreters (though to be clear, she was not saying this in a derogatory manner, and I have never had another professor say this in any other class). So other students do notice them, but for how long? For the sake of their own education, I hope the novelty wears off in the first few minutes and then it is as if they were in a “normal” lecture without sign language interpreters. Now that I think about it, I really should have asked the people who shared many classes with me whether the interpreters affected their focus. I also wonder how this affects whoever is lecturing. My professors have varied wildly in how much they interact with the interpreters, both during and outside of class.

Sign language interpreting services are the most prominent reason why I worry about inconveniencing others, because they are so visible. Another, possibly less intrusive accommodation is captioned media. I use captions as much as possible, but hearing people don’t need them. If they are there, are they an inconvenience? Captions with white text on a black background can obscure a lot of the screen. This is why, even though I’ve only used them twice, I am already a huge fan of closed captioning glasses. They provide the best-case scenario: high-quality accommodations with minimal hassle to others.

The vast majority of people do not express overtly negative reactions when my accommodations are present, but likewise, I have had few direct reassurances from others that I do not inconvenience them. I remember exactly one time where a non-family member told me I was not inconveniencing her: a few years ago, a Williams professor relieved me of a few concerns when she told me that having extra accommodations in lectures was not distracting her at all.

While this blog post might convey a bleak message, there is, oddly enough, a very simple yet hard-to-follow method to ensure that you don’t feel like you are inconveniencing others, especially in workplace situations.

That method is to do outstanding work. If you can do that, and others are impressed, then you know that you’ve been able to overcome the various minor hassles related to accommodations and that you’re an accepted member of the community. If not, then either the situation doesn’t fit in this kind of framework, or it might be necessary to re-evaluate your objectives.










Another Hearing Aid Fails to Live Up to Its Water Resistant Label

Mar 7, 2015

[Image: hearingaid]

Today, I played basketball for the first time since I arrived in Berkeley. It was a lot of fun, and I was at Berkeley’s Rec Sports Facility for 1.5 hours. Unfortunately, I also received a sobering reminder that my water resistant hearing aids are not actually water resistant.

My Oticon Sensei hearing aids worked great for about half an hour … then I heard that all-too-familiar beeping sequence in both ears, and a few minutes later the hearing aids stopped working. So I didn’t have any hearing and had to rely on various body language cues and last-resort tactics (honed over the years) to understand what others were saying. Fortunately, in basketball, communication among players in game situations tends to be blunt and simple, and from experience I’ve learned what players typically say to each other.

It is not uncommon for my hearing aids to stop working while I’m engaged in some physical activity. In fact, I get surprised if my hearing aids last through a session of pickup basketball. So I already knew that I would have to reduce the amount of sweat near my hearing aids; I tried using my shirt and the gym’s towel to absorb some of it, but those can only help so much.

I understand that water resistant does not mean waterproof, but I just cannot fathom how a water resistant hearing aid stops functioning after half an hour of physical activity. Out of curiosity, I re-checked the manual, and it states that the Oticon Sensei has an IP57 classification. The second digit of that rating means the device kept functioning after being immersed in water for 30 minutes at a depth of 1 meter (the first digit, 5, covers dust protection).

I am somewhat surprised, because 30 minutes is about the time it took for the hearing aids to stop working after playing basketball. Oh well. At least I have a functional hearing aid dryer. Within a few hours after arriving home, I had them working. But it’s still incredibly annoying. Honestly, the biggest problem with hearing aid breakdowns is not the lack of communication on the court, but what happens off the court. Between pickup games, players are constantly talking to each other about who should be playing the next game or what they want to do after basketball’s over. A more important issue is that I drive to the gym, and driving without working hearing aids is something I would rather avoid.










Make the Best Peer Reviews Public

Feb 28, 2015

The annual Neural Information Processing Systems (NIPS) conference is arguably the premier machine learning conference, along with the International Conference on Machine Learning (ICML). I read a lot of NIPS papers, and one thing I only recently found out is that NIPS actually makes the paper reviews (somewhat) public.

As I understand it, the way NIPS works is:

  1. Authors submit papers, which are limited to eight pages of text plus a ninth page for references. Unlimited supplementary material is allowed, with the caveat that reviewers do not need to read it.
  2. The NIPS committee assigns reviewers to peer-review the submissions. These people are machine learning faculty, graduate students, and researchers. (It has to be that way because there is no other qualified group of people to review the papers.) One key point is that NIPS is double-blind: reviewers do not know the identity of the authors of the papers they review, and authors do not know the identity of the people reviewing their papers.
  3. After a few months, reviewers make their preliminary comments and assign relative scores to papers. Then the original authors can see the reviews and respond to them during the “author rebuttal” phase. Naturally, during all this time the identities of the authors and reviewers remain secret, though I’ve seen cases where people post submitted NIPS papers to arXiv before acceptance or rejection, and arXiv requires full author identity, so I guess it is the reviewer’s responsibility to avoid searching for the identity of the authors.
  4. After a few more months, the reviewers make their final decision on which papers get accepted. Then the authors are notified and have to modify their submitted papers to include their actual names (papers in submissions don’t list the authors, of course!), any acknowledgments, and possibly some minor fixes suggested by the reviewers.
  5. A few months after that (yeah, we’re getting a lot of months here), authors of accepted papers travel to the conference where they discuss their research.

This is a fairly typical model for a computer science conference, though possibly an atypical one compared to other academic disciplines. But I won’t get into that discussion; what I wanted to point out here is that NIPS, as I said earlier, makes its reviews public, though the identity of the reviewers is not shown. Judging by the list of NIPS proceedings, this policy of making reviews public began in 2013 and continued in 2014. I assume NIPS will keep the policy. (You can click on that link, then click on the 2013/2014 papers lists, click on any paper, and then there’s a “Reviews” tab.) Note that the author rebuttals are also visible.

I was pleasantly surprised when I learned about this policy. This seems like a logical step towards transparency of reviews. Why don’t all computer science conferences do this?

On the other hand, I also see some room for improvement. To me, the obvious next step is to include the names of the reviewers who wrote those reviews (only for accepted papers). NIPS already gives awards to people who write the best reviews. Why not make it clear who wrote them? It seems like this would incentivize reviewers to do a good job, since their reviews might be made public. Incidentally, those awards should be made more prestigious, perhaps by announcing them at the “grand banquet” or wherever the entire crowd gathers.

You might ask, why not make the identity of reviewers known for all reviews (of accepted papers)? I see several problems with this, though none seem insurmountable, so it might not be a bad idea. One is that the current model in computer science seems to assign people too many papers to review, which necessarily lowers the quality of each individual review; I am not sure it is fair to penalize an overworked researcher by making his or her token reviews public. Another is that it is a potential source of conflict between future researchers: I could imagine someone obsessively remembering a poor public review and using it against the reviewer in the future.

These are just my ideas, but I am not the only one thinking about the academic publishing model. There has been a lot of discussion on how to change the computer science conference model (see, for instance, “Time For Computer Science to Grow Up”), but at least within the current model, NIPS got it mostly right by making reviews somewhat public. I argue that one additional step towards transparency would be helpful to the machine learning field.










Review of Natural Language Processing (CS 288) at Berkeley

Feb 14, 2015

[Image: siri]

This is the much-delayed review of the other class I took last semester. I wrote a little bit about Statistical Learning Theory a few months ago, and now I’ll discuss Natural Language Processing (NLP). Part of my delay is due to the fact that the semester is well underway now and I have real work to do. But another reason could be that this class was so incredibly stressful, more so than any other class I have ever taken, that I needed some time to pass before writing about it.

Before I get to that, let’s discuss what the class is about. Natural Language Processing (CS 288) is the study of natural languages as they pertain to computers. It applies knowledge from linguistics and machine learning to develop algorithms that computers can run to perform a variety of language-related applications, such as automatic speech recognition, parsing, and machine translation. My class, being in the computer science department, focused on the statistical side of NLP, where we care about the efficiency of algorithms and justify them probabilistically.

At Berkeley, NLP seems to be offered every other year to train future NLP researchers. Currently we only have one major NLP researcher, Dan Klein, who teaches it (Berkeley’s hiring this year so maybe that number will turn into two). There are a few other faculty that have done work in NLP, most notably Michael Jordan and his groundbreaking Latent Dirichlet Allocation algorithm (over 10,000 Google Scholar citations!), but none are “pure” NLP like Dan.

CS 288 was a typical lecture class, and the grading was based exclusively on five programming projects. They were not exactly easy. Look at the following slide that Dan put up on the first day of class:

[Image: cs288 (the slide from the first lecture)]

I come into every upper-level computer science class expecting to be worked to oblivion, so this slide didn’t intimidate me, but seeing that text gave me an extra initial “edge” to make sure I was focused, doing work early, and engaging in other good habits.

Let’s talk about the fun part: the projects! There were five of them:

  1. Language Modeling. This was heavy on data structures and efficiency. We had to implement Kneser-Ney smoothing, a fairly challenging algorithm that introduced me to the world of “where the theory breaks down.” Part of the difficulty came from the strict performance criteria we had to meet, so naive implementations would not suffice.
  2. Automatic Speech Recognition. This was my favorite project of the class. We implemented automatic speech recognition based on Hidden Markov Models (HMMs), which provided the first major breakthrough in performance. The second major breakthrough came from convolutional neural networks, but HMMs are surprisingly a good architecture on their own.
  3. Parsing. This was probably the most difficult project, where we had to implement the CYK parsing algorithm (a minimal sketch of the core chart recursion appears after this list). I remember doing a lot of debugging and checking matrix indices to make sure they were aligned. There’s also the problem of dealing with unary rules, a special case that’s not commonly covered in textbook descriptions of the CYK algorithm (actually, the concept of “special cases not described by textbooks” applied to most of the projects we did…).
  4. Discriminative Re-ranking. This was a fairly relaxing project because a lot of the code structure was built for us and the objective was intuitively obvious. Given a candidate set of parses, the goal was to find the highest-ranking one. The CYK parser can produce a set of (say) the top 100 parses, and we then run more extensive algorithms on those to pick the best of them, hence the name “re-ranking.”
  5. Word Alignment. This was one I had some high-level experience with before the class. Given two sentences in different languages that mean the same thing, the goal is to train a computer to determine the word alignment. So for an English-French sentence pair, the first English word might be aligned to the third French word, the second English word might be aligned to no French word, and so on.
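To make the parsing project concrete, here is a minimal sketch of the CYK chart recursion as a recognizer for a grammar in Chomsky Normal Form. This is not the course’s starter code or my submission; the names (`cyk_recognize`, `lexical_rules`, `binary_rules`) and the toy grammar are made up for illustration, and a real parser would also handle unary rules and track scores and backpointers.

```python
from collections import defaultdict

def cyk_recognize(words, lexical_rules, binary_rules, start="S"):
    """Return True if `words` can be derived from `start`.

    lexical_rules: word -> set of nonterminals producing it, e.g. {"dog": {"N"}}.
    binary_rules: (B, C) -> set of nonterminals A with a rule A -> B C.
    """
    n = len(words)
    chart = defaultdict(set)  # chart[(i, j)] = nonterminals spanning words[i:j]

    # Base case: length-1 spans come from the lexical rules.
    for i, w in enumerate(words):
        chart[(i, i + 1)] = set(lexical_rules.get(w, ()))

    # Fill the chart bottom-up over increasing span lengths.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):              # split point
                for B in chart[(i, k)]:
                    for C in chart[(k, j)]:
                        chart[(i, j)] |= binary_rules.get((B, C), set())

    return start in chart[(0, n)]

# Toy grammar: S -> NP VP, VP -> V NP, NP -> Det N
binary = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}, ("Det", "N"): {"NP"}}
lexicon = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}
print(cyk_recognize("the dog chased the cat".split(), lexicon, binary))  # True
```

The triple loop over spans and split points is exactly where all the index bookkeeping lives, which is why so much of that project came down to staring at matrix indices.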

I enjoyed most of my time thinking about and programming these projects. They trained me to stretch my mind and to understand when the theory would break down for an algorithm in practice. They also forced me to brush up my non-existent debugging skills.

Now, that having been said: while the programming projects were somewhat stressful (though nothing unexpected given the standards of a graduate-level class), and the grading was surprisingly lax (we got As just for completing the project requirements), there was another part of the class that really stressed me out, far beyond what I thought was even possible. Yes, it was attending the lectures themselves.

A few months ago, in the middle of the semester, I wrote a little bit about the frustration I was having with remote CART, a new academic accommodation for me. Unfortunately, things didn’t get any better after I had written that post, and I think they actually worsened. My CART continued to be plagued by technical issues, slow typing, and the rapid pace of lecture. There was also construction going on near the lecture room. I remember at least one lecture that was filled with drilling sound while the professor was lecturing. (Background noise is a killer for me.)

I talked to Dan a few weeks into the course about the communication issues I was having in the class. He understood and thanked me for informing him, though we both agreed that slowing down the lecture rate might reduce the amount of material we could cover (for the rest of the students, of course, not for me).

Nonetheless, the remaining classes were still insanely difficult for me to learn from, and during most lectures I found myself completely lost within ten minutes! What was also distressing was knowing that I would never be able to follow the question-and-answer discussions that students had with the professor in class. When a student asks a question, remote CART typically inserts an “(inaudible)” marker due to poor audio reception and the relatively quiet voices of the students. By my own estimate, this happened 75 percent of the time, and that doesn’t mean the remaining 25 percent produced perfect captions! CS 288 had about 40-50 students, but we were in a small room, so everyone except me could understand what students were asking. By the way, I should add that while I do get some hearing from my hearing aids and can sometimes understand the professor unaided, that ability virtually vanishes when other students are asking questions or engaging in a discussion.

This meant that I didn’t have much confidence in asking questions, since I probably would have embarrassed myself by repeating an earlier question. I like to participate in class, but I probably spoke up in lecture perhaps twice the entire semester. It also didn’t help that I was usually in a state of confusion, and asking questions isn’t always the ticket towards enlightenment. In retrospect, I was definitely suffering from a severe form of imposter syndrome. I would often wonder why I was showing up to lecture when I understood almost nothing while other students were able to extract great benefits from them.

Overall verdict: I was fascinated by the material itself, reasonably liked the programming projects, and thought the course staff was great. But the fact that the class made it so hard for me to sit comfortably in lecture caused way more stress than I needed. (I considered it a victory if I learned anything non-trivial from a lecture.) At the start of the semester, I was hoping to leave a solid impression on Dan and the other students, but I think I failed massively at that goal, and I probably asked far more questions on the class Piazza forum than I should have. It also adversely affected my CS 281a performance, since that lecture was right after CS 288, which meant I often entered CS 281a in a bad mood.

Wow, I’m happy the class is done. Oh, and I am also officially done with all forms of CART.










Harvard and MIT’s Lack of Closed Captions

Feb 14, 2015

In the future, I will try not to discuss random news articles here, because the subject is often a fad that fades into obscurity. Today, I’ll make an exception for this recent New York Times article about how Harvard and MIT are being sued over a lack of closed captions. The lawsuit itself will probably be forgotten by most people soon, but the overall theme of missing captions and accessibility is a recurring news topic. Online education is real, and accommodations for those materials will be necessary to ensure a maximal range of potential beneficiaries.

I don’t take part in online courses or video resources that much since there’s already plenty that I can learn from standard in-person lectures, and the material that I need to know (advanced math, for instance) is not something that I can learn from MOOCs, which by their very definition are for popular and broadly accessible subjects. For better or worse, the concepts I do need to know inside-out are embedded in dense, technical research papers.

Fortunately, the few online education resources I have experience with provide closed captions. The two that I’m most familiar with are MIT OpenCourseWare and Coursera, and both are terrific with captions. Coursera is slightly better, being more “modern” and allowing the video to be paused and sped up natively, while for MIT OCW one needs to use external tools, but both are great.

Using MIT OCW and Coursera (and sparingly at that) has probably led me to forget how many online sources lack closed captions. It’s especially frustrating since, in the few cases when I do want to watch videos, I have to rely on extensive rewinding and judicious pauses to make sense of the material. I think in the next few years, I may need to employ those cumbersome tactics when I watch research talks.

It’s nice to see that captions are getting more attention, and I believe this issue will continue to reappear in the news in the near future. Perhaps the brand names of “Harvard” and “MIT” are playing a role here, but I don’t view that as a bad sign: if they can take the initiative and be leaders in accessibility, then other universities should try to emulate them. After all, those universities want Harvard and MIT’s ranking…










Day in the Life of a Graduate Student

Feb 14, 2015

I was recently thinking about my daily routine at Berkeley, because I always feel like I am never getting enough work done. I wonder how much of my schedule is common among other graduate students (or among people in other, vastly unrelated careers). Let’s compare! Here’s my typical weekday:

5:45am: Wake up, shower, make and eat breakfast, which is usually three scrambled pastured eggs, two cups of berries, and a head of raw broccoli. Pack up a big-ass salad to bring with me to work.

6:45am: Leave for work. I usually drive — it takes ten minutes at this time — though at least one day of the week I’ll take the bus.

7:00am: Arrive at Soda Hall. Angrily turn off the lights in the open areas outside of my office after finding out that the people there last night left them on after leaving. Put my salad in the refrigerator. Unlock the door to my shared office, turn on laptop, pull out research and classwork notes. Check calendar and review my plan for the day.

7:15am to 9:15am: Try to make some headway on research. Check latest commits on github for John Canny‘s BID Data Project. Pull out my math notes and double-check related code segment from last night’s work to make sure it’s working the way it should be. Make some modifications and run some tests. Find out that only one of my approaches gets even a reasonable result, but it still pales in comparison to the benchmark I’ve set. Pound my fist on the table in frustration, but fortunately no one else notices because I’m still the only one on this floor.

9:30am: Realize that a lecture for my Computer Vision class is about to start. Fortunately, this is Berkeley, where lectures strangely start ten minutes after their listed time, but I need to get there early to secure a front row seat so I can see the sign language interpreters easily. (I can always ask people to move if I have to, and they probably will, but it’s best if I avoid the hassle.)

9:40am to 11:00am: Jitendra Malik lectures about computer vision and edge detectors. I concentrate as hard as I can while rapidly switching my attention between Jitendra, his slides, and my interpreters. Make mental notes of which concepts will be useful for my homework due the following week.

11:00am: Class is finished. Attempt to walk around in the huge crowd of entering/leaving students. Decide that since I don’t have anyone to eat lunch with, I’ll grab something from nearby Euclid street to take to my office.

11:15am to 11:45am: Eat lunch by myself in my office, wishing that there was someone else there. Browse Wikipedia-related pages for Computer Vision concepts from lecture today. Get tripped up by some of the math and vow that I will allocate time this weekend to re-review the concepts.

noon to 2:00pm: Try to get back to research regarding the BID Data Project. Write some more code and run some tests. Get some good but not great results, and wish that I could be better, knowing that John Canny would have been able to do the same work I do in a third of the time. Skim and re-read various research papers that might be useful for my work.

2:00pm to 3:00pm: Take a break from research to have a meeting with another Berkeley professor who I hope to work with. Discuss some research topics and what would be good but not impossible problems to focus on. Tell him that I will do this and that before our next meeting, and conclude on a good note.

3:15pm to 4:30pm: Arrive back in my office. Get my big-ass salad from the refrigerator and drizzle it with some Extra Virgin Olive Oil (I keep a bottle of it on my desk). My office-mate is here, so I strike up a quick chat. We talk for a while and then get back to work. My mood has improved, but I suddenly feel tired so end up napping by mistake for about fifteen minutes. Snap out of it later and try to get a research result done. End up falling short by only concluding that a certain approach will simply not work out.

4:30pm to 5:00pm: Decide to take a break from research frustration to make some progress on my Computer Vision homework. Get stuck on one of the easier physics-related questions and panic. Check the class Piazza website, and breathe a sigh of relief upon realizing that another classmate already asked the question (and got a detailed response from the professor). Click the “thanks” button on Piazza, update my LaTeX file for the homework, and read some more of the class notes.

5:00pm to 5:30pm: Take a break to check the news. Check Google Calendar just in case I didn’t forget to go somewhere today. Check email for the first time today. Most are from random mailing lists. In particular, there are 17 emails regarding current or forthcoming academic talks by visiting or current researchers, but they would have been a waste of time for me to attend anyway due to lack of related background information, and the short notice means it can be hard to get interpreting services. Some of those talks also provide lunches, but I hate going to lunches without having someone already with me, since it’s too hard to break into the social situation. Delete most of the email, respond to a few messages, and soon my inbox is quite clean. (The advantage of being at the bottom of the academic and social totem poles is that I don’t get much email, so I don’t suffer from the Email Event Horizon.)

5:45pm to 6:30pm: Try to break out of “email mood” to get some more progress done on homework. Rack my brain for a while and think about what these questions are really asking me to do. Check Piazza and Wikipedia again. Make some brief solution sketches for the remaining problems.

6:40pm to 7:00pm: Hit a good stopping point, so drive back home. (Still not in the greatest mood, but it’s better than it was before my 2:00pm meeting.) At this point most cars have disappeared from Hearst parking lot, which makes it easier for me to exit. Cringe as my car exits the poorly-paved roadway to the garage, but enjoy the rest of the ride back home as the roads aren’t as congested as I anticipated.

7:15pm: Think about whether I want to go to Berkeley’s Recreational Sports Facility to do some barbell lifting. It’s either going to be a “day A” session (5 sets of 5 for the squat, 5 sets of 5 for the bench) or a “day B” session (3 sets of 5 for the squat, 5 sets of 5 for the overhead press, and 1 set of 5 for the deadlift). I didn’t go yesterday, which means I have to go either now or tomorrow night. After a brief mental war, conclude that I’m too exhausted to do some lifting and mark down “RSF Session” on my calendar for tomorrow night.

7:30pm to 8:00pm: Cook and eat dinner, usually some salad (spring mix, spinach, arugula, carrots, peppers, etc.), more berries (strawberries or blueberries), a half-pound of meat (usually wild Alaskan salmon), and a protein shake. Browse random Internet sites while I eat in my room or at my apartment’s table.

8:30pm to bedtime: Attempt to get some more work done, but end up making no progress, so pretend to be productive by refreshing email every five minutes and furiously responding to messages. Vow that I will be more productive tomorrow, and set my alarm clock an hour before I really should be waking up.










Deaf-Friendly Tactic: Provide an Email Address

Jan 31, 2015

Update 1/31/2015: I realized just after writing this that video relay is possible with the same phone number … whoops, that shows how long it’s been since I’ve made a single phone call! But in any case, I think the ideas in this article are still valid, and not every deaf person knows sign language.

Original article: In my search for deaf-friendly tactics that are straightforward to implement, I initially observed that it’s so much easier for me to understand someone when he or she speaks clearly (not necessarily loudly). I also pointed out that in a group situation, two people (me and one other person) is optimal (not three, not four…). Two recent events led me to think of another super simple deaf-friendly tactic. In retrospect, I’m surprised it took me a few years to write about it.

I recently had to schedule an appointment with Toyota of Berkeley to get my car serviced. I also received a jury duty summons for late February, and I figured that it would be best if I requested a sign language interpreter to be with me for my summons. Unfortunately, for both of these cases, calling Toyota and the California courts, respectively, seemed to be the only way that I could achieve my goals.

In fact, my jury summons form said the following:

Persons with disabilities and those requiring hearing assistance may request accommodations by contacting the court at [phone number redacted].

There was nothing else. I checked the summons form multiple times. There was no email address, no TTY number, no video relay service number, nothing. Yes, I am not joking. Someone who is hearing impaired — and who will logically have difficulty communicating over the phone — has to obtain jury duty accommodations by … calling the court! I actually tried to call with my iPhone 6. After multiple attempts, I realized that there was a pre-recorded message which said something like “for doing X, press 1, for doing Y, press 2…”, so I had to press a number to talk to a human. Actually, I think it’s probably best that there was no human on the other end, because otherwise I probably would have frustrated him or her with my constant requests for clarification.

I will fully admit that the iPhone 6 is not perfect for hearing aid users because its Hearing Aid Compatible rating is M3, T4 rather than the optimal M4, T4 rating, but still, even after about five or six attempts at calling, I did not understand what numbers corresponded to what activities. Sure, I’m rusty since I make around two phone calls a year to people outside of my immediate family, but I don’t see experience being much of a factor here.

This motivates the following simple deaf-friendly tactic:

Provide an email address (perhaps in addition to a telephone number) that people can use to contact for support, scheduling services, and other activities.

I am aware that deaf people can easily use alternative services, such as TTY or video relay. Such services, however, are far inferior to email in many ways. Email nowadays is so prevalent in our lives and is incredibly easy to use. It’s rare that I don’t have some form of Internet access, so I can effectively check email whenever I want. The fact that I’m writing instead of talking means that I can revise my ideas more clearly and paste relevant web links. The process of composing an email can sometimes resolve my own problem: I’ve often been partway through writing one, realized I needed to add more information to show the person on the other end that I had done my research, and found that the extra research led me to the answer on my own.

Furthermore, the set of people who regularly use email is effectively a proper superset of those who use TTY and video relay services. In other words, the vast majority of TTY and video relay users also use email, but the converse is not true. In my case, I have not used TTY or video relay in years; email forms the foundation of almost all my communication nowadays. As long as it doesn’t become an obsession (as in checking it 50 times a day), I don’t see how it interferes much with my daily life, whereas a telephone call can drag on and on.

Conclusion: if you’re going to provide a phone number for contact, I would strongly urge you to also provide an email address.










Gallaudet University is Searching for a President

Jan 11, 2015

The news is out: Gallaudet University is searching for its eleventh president. Here’s the Presidential Search Advisory Committee web portal and here’s the specific job description, including desired candidate qualifications. I’ll be anxiously following the news. While I have never been on the campus before, I am obviously aware of its history as a college for the deaf (even though it was never on my college radar) and I know several current and former students.

Choosing the president of a college that caters to a specific group of people is a sensitive issue, because the president is often expected to share the same characteristic. For instance, students, faculty, and staff at an all-women’s college or a historically black college might be more favorable towards a female or a black president, respectively. Wellesley College has only had female presidents in its history, and Mount Holyoke College has had mostly female presidents.

Gallaudet is unique in that, as the world’s only university that caters to deaf and hard of hearing students across the board, the president is now expected to be deaf. The first seven presidents of Gallaudet were hearing, and it was not until the now famous 1988 Deaf President Now (DPN) saga that they had a deaf president.

It’s also not enough to just be deaf; the Gallaudet culture prides itself on American Sign Language (ASL), so the president is now expected to be fluent in that language (and immersed in deaf culture). I’m reminded of the 2006 fiasco when Gallaudet appointed Dr. Jane Fernandes as president. Students protested for a variety of reasons, but their argument can be succinctly stated as: “she wasn’t deaf enough.” The board of trustees eventually revoked her appointment. Strangely enough, I don’t remember personally knowing anything about it back in 2006. When I first learned about the incident a few years later, I thought the students mostly embarrassed themselves, but now I’ve become more understanding of their perspective. Incidentally, Dr. Fernandes still ended up with a strong career, as she’s now the president of Guilford College.

Thus, if the next president does not meet the de facto profile requirements, expect the students (and maybe faculty) to protest. The current job description asks that the candidate “has a deep understanding of bilingualism and biculturalism in the deaf community,” though it does not explicitly state that he or she be deaf or be fluent in ASL.

So, as I said, I’ll be anxiously following the news.










New Year’s Resolutions: 2015 Edition

Jan 8, 2015

It’s that time of the year when many people are creating New Year’s resolutions.

Wait, scratch that. We’re a week into 2015, so I think it’s more accurate for me to say: it’s that time of the year when many people have forgotten or given up on their New Year’s resolutions. After all, this guy from Forbes claims that only eight percent of people achieve their resolutions.

Why am I discussing this subject? Last semester, I was in a continuous “graduate student” state where I would read, read, take a few notes, attend classes, do homework, read more research papers, do odd hobbies on weekends, and repeat the cycle. I rarely got the chance to step back and look at the big picture, so perhaps some New Year’s resolutions would be good for me. And before you claim that few people stick with them, I also had New Year’s resolutions for 2014, and I kept my text document about it on my desktop. Thus, I was able to keep them in mind throughout the full year, even if I ended up falling short on many goals (I set the bar quite high).

For a variety of reasons, I had a disappointing first semester, so most of my resolutions are about making myself a better researcher. I think one obstacle for me is the pace in which I read research papers. I’ve always thought of myself as someone who relies less on lectures and more on outside reading in classes than most (Berkeley computer science graduate) students, so I was hoping that my comparative advantage would be in reading research papers. Unfortunately, to really understand even an 8-page conference paper that I need for research, I may end up spending days just to completely get the concepts and to fill in the technical details omitted from the paper due to page limits.

When reading research papers, it’s not uncommon for me to lose my focus, which means I spend considerable time backtracking. Perhaps this could be rectified with better reading habits? I’m going to try to follow the advice in this blog post about reading real books, rather than getting all my news from condensed newspaper or blog articles. (Ironically, I just broke my own rule, but I will cut back on reading blogs and arbitrary websites … and also, I came up with this idea about two weeks ago, so it’s nice to see that there’s someone who agrees with me.) Last week, I read two high-octane thrillers — Battle Royale and The Maze Runner — to get myself back into “reading mode,” and I am moving on to non-fiction, scholarly books. Maybe books will help me quit Minecraft for good (so far, it’s working: I’ve played zero seconds of Minecraft in 2015).

I’ve also recorded some concrete goals for weight lifting (specifically, barbell training), which is one of my primary non-academic hobbies. For the past four years, my motivation to attend the gym has been through the roof. I’ve never missed substantial gym time unless I was traveling. In retrospect, I think programs like Stronglifts and Starting Strength (which I loosely follow) are so popular because they generate motivation. Both use the same set of basic, compound lifts, and as you proceed through the programs, you add more weight whenever it is safe to do so. Obviously, the more weight you can lift, the stronger you are! I often juxtapose weight lifting with addictive role-playing games (RPGs), where my personal statistics in real-life barbell lifts correspond to a hypothetical “strength” attribute that I continually want to improve.

Here’s a video of me a few days ago doing the bench press, which is one of the four major lifts I do, the others being the squat, deadlift, and overhead press. I know there’s at least one reader of this blog who also benches, and we’re neck and neck on it, so maybe this will provide some motivation (yeah, there’s that word again…).

This is one set of five reps at 180 pounds; I did five sets that day. (The bar is 45 pounds, and each side holds one 45-pound plate, two 10-pound plates, and one 2.5-pound plate: 45 + 2 × (45 + 10 + 10 + 2.5) = 180.) I remember when I was a senior in high school and couldn’t do a single rep at 135 pounds, so seeing these new results shows how far I’ve come from my earlier days. I’m definitely hoping the same feeling will transition to my research and motivation in general.

Motivation. It’s an incredibly powerful concept, and a must for graduate students to possess with respect to research.