# Graduate School Visit #1: The University of Texas at Austin

I recently finished a three-day stay in Austin, TX, to attend UT Austin’s computer science visit days (“GradFest”) for admitted Ph.D. students. This was the first of what will be four school visits.

UT Austin’s GradFest committee assigns each visiting student to a current graduate student who acts as a host and provides housing and transportation in Austin. They must have been paying careful attention to my RSVP form, because the committee chair emailed me to say that a current UT Austin student had specifically requested me as his guest: he knew sign language, and so did one of his housemates. That was nice!

I arrived in Austin at around noon on the first day of GradFest and took the airport shuttle to my host’s house. I might have been one of the first prospective students to arrive, since we didn’t have much to do until the school-sponsored dinner began at 7. My host briefly showed me around campus, took me out to lunch, and dropped me off at a coffee shop, where I had a meeting set up with a deaf linguistics Ph.D. student at UT Austin. I didn’t have any coffee, of course (I’ve always hated the stuff), but we had a nice conversation about what life is like in Austin. My sign language must not have been too rusty, as she had no problems understanding me. She told me that accommodations at UT Austin are great, and that she has generally had little trouble obtaining what she needs. I also learned that there is a decent academic deaf community in Austin.

Following that, my host took me to his cubicle and provided me with a computer since he knew I was super-busy with work. (I didn’t actually get any done.) By the way, UT Austin’s computer science department is housed in a brand-new building called The Bill & Melinda Gates Computer Science Complex and Dell Computer Science Hall, pictured below.

For anyone who’s worried about collaboration among different subfields of computer science, you definitely won’t have that concern at UT Austin. Previously, the department’s research groups were scattered across several buildings; now, they’re all together in one, making interaction among different areas much easier. The floors are mostly the same, with lounges, labs, offices, and so on. All faculty members have their own offices, with the remaining offices shared or assigned to postdocs. Ph.D. students are given cubicles in the area corresponding to their subfield, so it’s very easy to talk to other graduate students. The cubicles are also located near the faculty offices, which I imagine is great for quick chats and updates with professors.

I stayed in the cubicle area until evening, and then met the other prospective students and their hosts. We went out to dinner at some blazingly loud restaurant. If my host hadn’t known sign language, I probably wouldn’t have said anything at all that night!

The second day of GradFest was the most important one. All the students got together in the computer science building to eat breakfast, do paperwork, and obtain schedules. I also met two sign language interpreters hired by UT Austin, who would follow me throughout most of the day.

Prospective students could attend three faculty panels of their choice and as many lab “open houses” as they wanted. I attended the Artificial Intelligence, Theory, and Data Mining and Machine Learning panels, and took part in the Mechanized Reasoning and Analysis lab’s open house.

Perhaps most importantly, I also had four individual meetings with faculty members, which allowed me to gauge their interest in new students, their advising style, what they look for in applicants, and so on. (Prospective students list their choices of faculty to meet on the RSVP form.) One professor told me that one reason a new building was needed is that there are plans to grow the computer science faculty from 44 to 60. That goal is certainly reflected in UT Austin’s recent hiring spree: four new faculty members are arriving this fall, and the department is still hiring.

While I was in those faculty meetings, I was well aware that professors might gloss over the truth when talking to prospective students. Fortunately, just before dinner, we had a “Bull Session” with current graduate students, which, as one of them described it, “is a place without any people in positions of authority.” The graduate students were brutally honest in answering the visitors’ many questions. For instance, in response to the question of when one shouldn’t attend UT Austin, one grad student said that if someone gets into one of the “top four” schools (Berkeley, Carnegie Mellon, MIT, and Stanford), that person should pick one of those over UT Austin, which is currently ranked #8. Surprisingly, there was no gossip about specific faculty members, e.g., “Professor X is really bad and should be avoided at all costs.”

After the very helpful Bull Session, we went out to dinner with the students to a pizza place. The food was great, but I did have to break the rules of my grain-limited diet (the same went for my time at the airports). Oh well.

The third day of GradFest was much more relaxed. I ate brunch and went on a campus tour. After the tour, I took the shuttle back to the airport and flew back to the northeast.

Overall, GradFest was an enjoyable experience and a welcome break from my typical schedule. My other three scheduled visits are in the middle of March, so stay tuned for additional posts about visit days.

# Williams College: The Final Chapter

It is hard to believe that I am already in my eighth and final semester as a Williams College student. I have learned so much over the past few years, including what I want to do after Williams and possibly even after graduate school.

Speaking of graduate school, I’ve heard back from most of the institutions to which I applied. So far, five have offered me admission, so I’ll definitely have some tough decision-making to do over the next two months; final choices for graduate school must be made by April 15. I’m going to travel to at least four of the “student visit days” at those schools to help me make my final decision, and I’ll probably post some details about the events on this blog. My first trip will be to The University of Texas at Austin, as their computer science department is hosting visit days next weekend. By the way, schools will generally pay up to $500 for airfare and provide free lodging, either in a hotel or with graduate students, so it’s a great deal.

In the meantime, I’m also doing some more research and taking some classes. Here are the lecture courses I’m taking:

1. Distributed Systems. This is a computer science course on the design and implementation of systems that involve multiple, connected computers, hence the name “distributed.” I will also learn about networking and operating systems, two areas I don’t know much about, so I am definitely going to learn a ton from this course. (Just now, I can finally discern the difference between a process and a thread.)
2. Tiling Theory. This is a mathematics course on the theory behind tilings, which are essentially patterns formed by simply connected pieces that fit together to fill the plane without gaps and without overlapping tiles. In my opinion, it has a lot in common with graph theory in that it depends heavily on visualization, proof by pictures, and clever doodling.

That’s it! Just two courses. Of course, I do have a thesis. I’m badly behind on it, so I’ll have to focus super-hard on it during the rest of February, March, and the beginning of April.
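On that process-versus-thread point, here is a minimal Python sketch (my own illustration, not anything from the course) of the practical difference: a thread shares its parent’s memory, while a process works on its own copy.

```python
import threading
import multiprocessing

counter = {"value": 0}

def increment():
    counter["value"] += 1

def main():
    counter["value"] = 0

    # A thread runs inside the same process, so it shares our memory:
    # its write to `counter` is visible here afterward.
    t = threading.Thread(target=increment)
    t.start()
    t.join()
    after_thread = counter["value"]    # 1

    # A process gets its own memory space (a copy), so the same write
    # happens only in the child and is NOT visible to the parent.
    p = multiprocessing.Process(target=increment)
    p.start()
    p.join()
    after_process = counter["value"]   # still 1

    return after_thread, after_process

if __name__ == "__main__":
    print(main())
```

The thread’s increment shows up in the parent, but the process’s does not; sharing data between processes requires explicit machinery such as pipes or queues.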
I’m also taking an independent study course in Operations Research, where I’ll be part of a team of roughly ten students reviewing and applying concepts from operations research as well as advanced linear algebra, linear programming, sabermetrics, and so on. This schedule seems easier than usual given the small number of lecture courses, but even after two weeks I can already tell that I’ll be overwhelmed as usual, especially when taking into account my heavy traveling this semester. (I expect to miss more classes this semester alone than I did during the past seven semesters combined!) Furthermore, I am going to repeat as a Teaching Assistant for the Algorithm Design & Analysis course. In the second half of the semester, I am also thinking about signing up for a computer science course on Coursera. Anyway, it’s time for me to stop blogging and get back to work.

# Derrick Coleman’s Super Bowl Commercial

Last night, I participated in a popular American tradition by watching the Super Bowl. You can read recaps in The New York Times, on ESPN, and elsewhere. The game itself ended up being a blowout, which made it boring to someone like me who doesn’t actively support either the Denver Broncos or the Seattle Seahawks. I also don’t watch halftime shows, so that didn’t add to my excitement.

The commercials were also a bit of a disappointment, at least compared to the ones in last year’s Super Bowl. But one of them did catch my eye. No, it was not a Tim Tebow commercial, even though he was arguably the more impressive Denver Broncos-affiliated quarterback last night. It was the Duracell batteries commercial featuring the deaf Seattle Seahawks player Derrick Coleman. In just a single minute, it chronicles his journey from a young boy to a current NFL player. It also seems to have inspired two deaf girls to write letters of support to Coleman. Imagine their surprise when Coleman met them and offered them tickets to the Super Bowl!
Now, I know a lot of deaf people. I actually went on Facebook (!) last night and saw deaf people supporting the Seattle Seahawks just because Coleman was on the team. So his story is inspiring, and it clearly has an effect on other deaf people. It will be interesting to see how his career progresses (he’s only 23, by the way) and whether other deaf players will join him in the NFL.

# Deaf People on an Island, Revisited

This discussion is a continuation of my last post, What if 300 Deaf People were Isolated on an Island? I will focus on the usage of American Sign Language versus English in the hypothetical scenario that a group of deaf people migrate to an island and are allowed to form their own small country.

## The Scenario

To be specific, here is what I have in mind.

1. In January 2014, 300 deaf people from America decide to migrate to an island that has enough resources and infrastructure to maintain a small population. The island’s inhabitants therefore do not need to rely on trade with other communities or countries, so it remains a secluded area throughout the lifetimes of those 300 people.
2. All of these deaf people know ASL, and some, but not all, can speak English reasonably well. They can all read English, but because most do not have excellent hearing, ASL is the dominant language for communication among the population. (Assume that only a few can afford quality hearing aids, not an unreasonable expectation nowadays.)
3. Years pass. People do not leave the island, nor do outsiders come in, as travel is heavily restricted. The deaf people marry among themselves, form generations of families, and eventually the island becomes quite populous, with millions of inhabitants.

The main question I want to consider pertains to the use of ASL versus English. In other words …

In the long run, which language will become the “official, spoken language” of this island? There are two candidates: ASL or English.
By “spoken language,” I’m referring to what people will use to communicate with each other. Yes, English will be the language used for writing, but I’m more interested in conversations. We could take the “easy way out” and say that both are official, but let’s assume that the United Nations or some futuristic, worldwide diplomatic body mandates exactly one registered spoken language for each country.

I suspect that, eventually, English will reign supreme in this regard. Though the island starts with a population consisting exclusively of deaf people, the next generation will not share that characteristic. In fact, the majority of children born to the 300 starting inhabitants will probably be hearing. Deafness is very uncommon among newborns, and even if both parents are deaf, their children are still likely to have normal hearing. This trend continues generation after generation, so in the long run, the island’s population will approach a proportion of deaf people similar to the one in today’s world.

So what happens? The island becomes a “hearing world,” where the official language is spoken English. There are sure to be some people who know ASL, of course, because there will still be deaf people around. But English becomes the conventional spoken language because hearing people will constitute the majority of the population, and they will be the ones taking up management positions, political offices, and so on.

But I still have a nagging suspicion that I’m missing something. I wonder …

Would there be any circumstance in which ASL could actually be the official spoken language in the long run?

There are obvious challenges. First, we’re talking about a language that the vast majority of the population won’t need to use.
Hearing people may even view it as an inconvenience when communicating with each other; why put the effort into moving your hands when you can expend less while talking and still achieve the same objectives? Second, is it possible to have an official spoken language that can’t really be used for written documents?

The island scenario does have one major aspect that doesn’t totally kill the idea of ASL being the spoken language: it starts with 300 deaf people. If ASL were to be the spoken language of this island or future country, then I suspect it would all rest on the influence of those 300 people. They will certainly teach their children ASL, whether those children are hearing or deaf. Thus, the second generation will use ASL. The question is whether the all-hearing families with parents from that second generation will stress the importance of ASL to their hearing children. Even if they do, I worry that the use of ASL would gradually weaken from generation to generation among hearing families. For such a practice to be passed down and not wither away, it would probably need to take on as much importance as a religion or a true cultural activity.

So there is an outside chance that ASL could be the spoken language. Still, I suspect we would need additional strong assumptions for that situation to occur. The one that most easily supports ASL being the spoken language would be a skyrocketing incidence of deafness among babies.

Anyway, that is one scenario. What are other similar ones that come to mind? I encourage you to think about the various possibilities and their resulting long-run equilibrium states. Try tweaking some of the assumptions and see what you get.

## Side Note: 100 Blog Entries on Seita’s Place

Somehow, I made it to 100 posts on Seita’s Place. (This is post #100.) So in honor of this “special event,” let’s look at some additional statistics about my blog.
According to my WordPress Dashboard, I now get roughly 25 views a day (from about 15 to 20 visitors), which is up from the 0 to 5 views/visitors that I was getting when the blog first began. Excellent! Seita’s Place’s best day was Monday, December 30, 2013, when it got 98 views from 19 visitors. All time, there are 9,052 views and 133 comments, and there are currently 34 followers.

My blog entry with the most comments is, by far, ASL Guidelines with 13 comments, though a lot of the comments on Seita’s Place are “pingbacks,” which occur when entries link to other entries and which can inflate the comment count. There are four entries with five comments, and an additional four with four comments.

How do the blog posts rate in terms of popularity? That’s pretty easy to gauge, as WordPress keeps track of views for each entry. The most popular post is by far Ten Things Python Programmers Should Know, with 404 views. The least popular is On the Hardness of Nintendo Games, with a measly two views.

I can also tell that there are three primary ways people find Seita’s Place: (1) they search my name, (2) they find the link in my Facebook profile, or (3) they search for Theory of Computation or Python. Finally, where are my viewers coming from? Since February 2012, the majority are (by far) from the United States. The next country on the list is India, followed by the United Kingdom.

# What if 300 Deaf People were Isolated on an Island?

John Lee Clark, a deaf writer and an active participant in the Deaf Academics mailing list, has a blog on his website. While Mr. Clark doesn’t appear to update it frequently, his blog entries are well-written. I found his Cochlear Implants: A Thought Experiment post particularly interesting.
Here is his “thought experiment”:

Let’s suppose three hundred deaf people, all wearing cochlear implants, are gathered and moved to an island. None of them knows ASL and all of them have excellent speech. There are no hearing people there. What will happen?

Mr. Clark’s main argument is that because there are no hearing people to provide feedback on the deaf population’s speech skills, the 300 people will experience erosion in their ability to talk. In response, they will develop a sign language.

There are some obvious logistical issues with this experiment. It would never happen in the first place, and even if it did, it is unclear how quickly speech erosion would occur, if at all. As one reads more of Mr. Clark’s entry, it becomes apparent that he views cochlear implants with disdain:

Another thing that it reveals is that the cochlear implant is not FOR deaf people. If it is for deaf people, they would be able to, or even want to, use the implants on their own and for their own reasons. But the cochlear implant is for, and promotes the interests of, hearing people. It was invented by a hearing man and the risky experiments and sometimes fatal operations were legalized by hearing people. The demand for it is driven by hearing parents. It financially benefits hearing teachers, hearing doctors, hearing speech therapists, and hearing businesses in the industry. It is only at the bottom of the industry that we find the token deaf person.

It is well known that cochlear implants are a controversial topic in the Deaf community, as the Wikipedia entry on cochlear implants summarizes. There was even an entire documentary about the issue: Sound of Fury. Mr. Clark also brings up some of the common arguments against cochlear implants in the rest of his post.

I think I should write more about cochlear implants on this blog. This entry is apparently the first to use the “cochlear implant” tag. Stay tuned for future posts….
# Better Hearing Through Bluetooth

Better Hearing Through Bluetooth is a recently published New York Times article that, unsurprisingly, I found interesting. The main idea is that people with slight hearing loss can use personal sound amplification products (PSAPs) as an alternative to hearing aids. PSAPs are wearable electronic devices designed to amplify sound for people with “normal” hearing. Interestingly enough, they are not meant to substitute for hearing aids for people with substantial hearing loss. The When Hearing Aids Won’t Do article makes that clear, using the example of a hunter who might wear PSAPs to hear better in the forest. Personally, I doubt the benefit there, since amplification doesn’t necessarily correspond to increased clarity and may introduce unwanted side effects such as distracting static, but maybe some hunters can correct me.

Unlike hearing aids, PSAPs are not regulated by the Food and Drug Administration. This means customers don’t need to consult a physician, audiologist, or hearing aid manufacturer, a major benefit if you want to avoid those intermediaries for time, personal, or other reasons. (A quick look at the comments on the New York Times article indicates that audiologists aren’t all that popular.) Consequently, PSAPs are substantially cheaper than hearing aids: a decent PSAP can be bought for about $300, while a hearing aid might cost around $3,000.

Another aspect of PSAPs is that they are user-programmable. Customers can download an app to their phone or computer and fiddle around with the device to their heart’s content. In contrast, a hearing-aid wearer typically needs an audiologist to do the programming. The user-programmable feature can be a good thing or a bad thing, and the benefit largely rests on two factors: (1) how much the user knows about the PSAP and is comfortable with technology, and (2) the quality of the actual program. It should be no surprise that, due to the lack of regulation, PSAPs vary considerably. Patients should be aware of the pitfalls and be circumspect in purchasing them.

A second possible risk with PSAPs is that people with serious hearing loss may make the unwise decision of buying them instead of hearing aids. Given hearing aids’ bad reputation for price, PSAP manufacturers have probably marketed their devices as low-cost hearing aid alternatives.

Perhaps PSAPs will soon become the norm for older people who are losing their hearing. I will keep up to date on news relating to PSAPs, though I will never wear them.

# Michael Lizarraga, a Deaf Basketball Player

This is fairly old news (about three years old), but I found it interesting to read through two ESPN articles about Michael Lizarraga here and here. Lizarraga is a deaf basketball player who, as a college student, made the dream of playing Division I basketball a reality by earning a spot on Cal State Northridge’s basketball team as a walk-on. He was the first deaf Division I basketball player in history.

I consider myself a basketball fan. I’ve actually written a few posts on basketball here and was considering starting an NBA-based blog for myself. (I decided not to pursue that idea since there are already too many excellent blogs that cover the NBA.) So as a deaf person myself, it shouldn’t come as a surprise that I’m interested in Lizarraga’s story.

It appears that Lizarraga and I were born and raised under similar circumstances. His family had no history of deafness, and his parents didn’t find out he was deaf until they brought him to a doctor as a toddler. Lizarraga’s parents, like mine, were willing to learn sign language, and they mainstreamed him in school and introduced him to sports.

When Lizarraga was in sixth grade, he started attending the California School for the Deaf, and stayed there until college. (For details on my educational history, see My Pre-College Education.) While encouraged to go to Gallaudet by friends and coaches, Lizarraga instead opted for Cal State Northridge, since he wanted to have the chance of playing Division I basketball (Gallaudet is Division III). Another plus for Cal State Northridge is that it houses the National Center on Deafness (NCOD), America’s first postsecondary program to offer full-time, paid interpreters for hearing-impaired students.

While he was at Cal State Northridge, the coaches reached out to him and encouraged him to try out. Somehow he not only made the team but ended up as a solid rotation player, which is quite rare for a walk-on.

Of course, there is the nontrivial matter of setting up accommodations. He was quite fortunate that an (apparently competent) sign language interpreter volunteered herself for the purpose, but I would be curious to know how involved Lizarraga or his parents were in this process. Also, it unfortunately does not appear that Cal State Northridge always provided accommodations. This section in the 2010 article caught my eye:

What started out as a way to goof off on the sidelines while they couldn’t play quickly turned into a second language for Galick, who even began dating a deaf girl he met through Lizarraga. Galick has become so good at sign language that he can fill in for Mathews [Lizarraga’s interpreter], who is unable to attend as many games and practices as she has in the past because of budget cuts.

Uh-oh … budget cuts leading to a lack of interpreting services? It reminds me a lot of the article I wrote about RIT’s accommodations. And this is happening at a school that has the NCOD! Hopefully this didn’t have any detrimental effects, but I wonder if Lizarraga knew that if he really wanted to and fought hard enough, he could probably obtain full interpreting services for all practices and games.

From what I can tell, Lizarraga is now playing professional basketball in Mexico and is seeing some playing time in the 2013-2014 season. There does not seem to be substantial media coverage on him, so I can’t really say much more. I hope things are going well for him.

# A Fake Sign Language Interpreter at Nelson Mandela’s Memorial

Last month, Nelson Mandela passed away. Mr. Mandela, who led the emancipation of South Africa from white minority rule, was one of the most beloved people in his country. Thus, there was no doubt that his memorial would be well-attended by not only people from South Africa, but also some of the world’s most prominent leaders such as President Barack Obama.

Unfortunately, according to this article, a fraudulent sign language interpreter was hired to “interpret” for the deaf. This person, a 34-year-old man named Thamsanqa Jantjie, apparently performed meaningless symbols and gestures on stage. And, as the image for this blog post indicates, he somehow had the privilege of being inches away from President Obama and other leaders who spoke at that podium. (Side notes: what’s up with how close he is to the podium? Whenever I’ve had a podium-interpreter situation, the interpreters have typically kept a healthy distance from the speaker. Also, for an event like this, you would usually want two interpreters…)

I took a look at the video from the linked New York Times article. While I don’t know South African Sign Language, the gestures Mr. Jantjie performed certainly didn’t look like a sign language: too much rhythmic repetition, a lack of facial or lip movement, too many calculated pauses, and so on. I can certainly believe the experts’ judgment that this was a fake.

Bruno Druchen, the national director of DeafSA (a Johannesburg advocacy organization for the deaf), had this to say about the debacle:

This ‘fake interpreter’ has made a mockery of South African sign language and has disgraced the South African sign language-interpreting profession. […] The deaf community is in outrage.

Mr. Jantjie also failed to even perform the correct sign for Mr. Mandela. That doesn’t bode well … I mean, if there was any one sign to know, wouldn’t it be the one for Mr. Mandela?

But wait, there’s more! Check out this article, where Mr. Jantjie admits that he was hallucinating during the talks (“angels falling from the sky” kind of stuff). We also learn that he’s receiving treatment for schizophrenia. In addition, the company that supplied Mr. Jantjie — at a bargain rate — disappeared. Somehow, up until now, they had been getting away with providing substandard sign language interpreting services.

There is also news that Mr. Jantjie has a criminal history. This article mentioned that he was part of a group that murdered two men in 2003, yet somehow never went to trial because he was deemed mentally unfit. He is also accused of a plethora of other offenses dating back to 1994.

Murdering two men? Mentally unfit? A schizophrenic person?

That doesn’t sound like the kind of person I’d like to see up there. Hiring someone like him to be an interpreter for the memorial of possibly the most important person in South African history? I’m glad that I can trust my own interpreters here at Williams College.

# Quickly Learn emacs and vim

Programming as part of a large project, such as building a new C++ compiler from scratch, is vastly different from working on a smaller task, like writing a script to answer a random Project Euler question. Large projects typically involve too many interacting files to work on a single program in isolation (for instance, it can be hard to keep track of which methods from other files one should call), so using an advanced integrated development environment (IDE) like Eclipse is probably the norm there. For smaller tasks, two of the text editors most commonly used by programmers are emacs and vim.

One of the biggest problems that people new to these editors face is having to memorize a bunch of obscure commands. Emacs commands often involve holding the control or escape/meta key while pressing some other key on the keyboard. With vim, users must constantly switch between “insert” mode (where one can type text) and “normal” mode (where one can move the cursor, perform fancy deletions, etc.), pressing the escape key to get back to normal mode.

Once one gets used to the commands, though, emacs and vim allow very fast text editing. If you watch an emacs or vim master type in their preferred editor, you will be amazed at how quickly they can perform fancy alignment of text, advanced find-and-replace involving regular expressions, and other tasks that would otherwise take excruciatingly long using “traditional methods.” (Unfortunately, these people are hard to find….)

So what’s the quickest way to get started with these editors, to the point where someone can write a small program? In my opinion, the best way is to go through their built-in tutorials. Open up your command line interface (e.g., the Terminal on Macs), or however you normally launch these editors.

1. For emacs, type in “emacs” and then perform control-h (hold the control key while pressing “h”) and press “t.” In emacs terminology, C-h t.
2. For vim, type in “vimtutor” from the command line.

This method of learning is excellent, since you get a copy of the tutorial and can go through steps and exercises that test out the commands while navigating or modifying the file. I think the vim tutorial is better because it’s more structured, but both of them do the job. They emphasize, for instance, that once a programmer gets used to the commands for moving the cursor, using them will be faster than resorting to the arrow keys, since one doesn’t have to move one’s hands off the home row:

1. From emacs: There are several ways you can do this. You can use the arrow keys, but it’s more efficient to keep your hands in the standard position and use the commands C-p, C-b, C-f, and C-n.
2. From vim: The cursor keys should also work. But using hjkl you will be able to move around much faster, once you get used to it. Really!
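For quick reference, here is a short summary of the stock commands both tutorials introduce (these are the default keybindings; both editors are heavily customizable, so your setup may differ):

```
emacs (C- means hold control):
  C-f / C-b     forward / backward one character
  C-n / C-p     next / previous line
  C-a / C-e     beginning / end of line
  C-k           kill (cut) to end of line
  C-g           abort the current command
  C-x C-s       save the file
  C-x C-c       exit emacs

vim (in normal mode; press Esc to get there, i to insert):
  h / j / k / l   left / down / up / right
  0 / $           beginning / end of line
  w / b           forward / backward one word
  x               delete the character under the cursor
  dd              delete (cut) the current line
  :w              save the file
  :q              quit vim
```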

I have to confess that I didn’t learn emacs by using their tutorial. I just went by a list of common commands given to me by my professor and picked things up by experience and intense Googling. As a result, I experienced a lot of pain early on in my computer science career that could have been avoided by just reading the tutorial. (The control-g command to quit something was a HUGE help to me once I read about it in the emacs tutorial.) And judging by the emacs questions that other Williams College computer science students have asked me, I can tell that not everyone reads the tutorial.

So read the tutorials before seriously using these text editors.

Of course, one should decide early in one’s programming career: emacs or vim? This is certainly a non-trivial choice. They both do basically the same thing, and with the variety of extensions and customizations available, I’m not sure there’s anything another text editor can do that they can’t. If I had to give a general rule, I’d choose whichever one feels best to you in the tutorials, or the one that’s most common among your colleagues, professors, or other peers.

And besides, it’s difficult to say which of these editors is better; it really depends on who you ask. I don’t have a good answer, so I’ll opt out and provide a lame one: the better editor is the one that you can use best.

Personally, I started my programming career using emacs because that was the preference of my Computer Organization professor, so I didn’t see the need to deviate. Within the past year, I substantially increased my emacs usage to the point where I was using it almost every day for most typing tasks, including LaTeX, and my pinky finger didn’t like the constant reliance on the Control key. So I’m giving vim a shot.

It’s still useful to know the emacs commands, because they often work in other places. For instance, the built-in WordPress editor I use to write blog entries here accepts a subset of emacs commands, which surprised me when I first found out. Instead of deleting a line with the mouse or Mac trackpad, I can kill it with C-k. And many IDEs support emacs or vim keybindings, so one can get the best of both worlds. I know Eclipse has this, though I was personally disappointed that certain emacs commands didn’t work.

To wrap up this post, I suggest that knowing either emacs or vim well is important for (computer science) graduate school. Programming is a huge part of research today, so the faster things get done, the better. Moreover, university professors tend to take on more of a “leader” role rather than someone who “gets things done” (I’m not trying to disparage professors here). In other words, they tell the graduate students what to do in research projects, and the students write the necessary programs.

# Programming Project Ideas for Summer 2014

As 2014 nears, I face the realization that I will have a free summer. If all goes well, it will be the summer after college and before graduate school. I do not think I will be doing an internship, so that gives me the freedom to explore some topic in further detail. Given my interests and the realities of my future career options, I thought it would be prudent to pursue a programming project of my choosing.

I have some ideas, which I’ll list below, but nothing’s set in stone. (Other ideas are listed here and here.) I’ll probably start on a few of them and narrow down to one. I’ll do my best to describe the project once it gets started, probably by posting it on Seita’s Place. And if readers have suggestions, feel free to email me (see About).

Making a Game

I might as well discuss this one first, since many programming projects fall into this category. I was thinking of some minor online text-based game, possibly in a role-playing genre. For now, I’ll abstain from getting too involved in graphics and other parts necessary for a game, since I feel like that will detract from the pure programming parts. I’m a little worried that there will be a lot of work in getting simple stuff like saving the game or logging in/out, but perhaps that’s important for me to know.
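To convince myself that the save/load machinery need not be scary, here’s a minimal sketch in Python of persisting a text-game’s state with the standard json module. (All names and fields here are hypothetical, purely for illustration.)

```python
import json

def new_game():
    # Hypothetical starting state for a small text-based RPG.
    return {"name": "hero", "hp": 20, "gold": 5, "location": "village"}

def save_game(state, path):
    # Write the state to disk as JSON so a later session can resume it.
    with open(path, "w") as f:
        json.dump(state, f)

def load_game(path):
    # Read a previously saved state back into a dictionary.
    with open(path) as f:
        return json.load(f)
```

Saving and then loading should round-trip the state exactly, which is really all a basic save feature needs; logging in and out would just be a layer on top of this.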

Making an App

I am interested in creating an app that assists with education, such as an app that describes course notes in a subject I’m comfortable with (e.g., Artificial Intelligence) or one that is like a “mini-textbook.” The downside is that most users will probably just use their laptops if they want to access notes.

Making Basic but Useful Software

What I mean by this is that I’ll attempt to make something I might actually use in the future. For instance, I could theoretically make my own text editor and use it for the rest of my career. (This is pure fantasy, though, because emacs and vim are perfectly fine for what I do, and there’s no way I could build something that complicated.) The difficult part with this idea is coming up with something that could be useful to me or others and hasn’t already been built in a better way.

Do Something Based on a Class

The advantage with this idea is that it’s less likely I’ll get lost, since substantial programming assignments as part of a class tend to have specific instructions and starter code. I don’t want to follow a project like this exactly, since that will stifle creativity, but at least things will be more structured. I feel like this is a lame objective to pursue, though….

Join an Open-Source Project

This could be interesting … the question is, which projects interest me and would be open to a programmer helping out for a summer?

# Voice Recognition on Google Chrome

About a month ago, Google Chrome released voice search and voice actions. The main idea is that you can literally just talk to the computer while browsing Google, and it will respond with (ideally) what you wanted. This feature isn’t limited to searching, though. It’s also possible to tell Google to start an email, or to find information about people in your contacts list.

To use it, open Google, click on the microphone, and wait until Google prompts you to say something. But if one has to click on a microphone anyway, voice recognition loses much of its appeal, because most of us can type almost as fast as we can talk. (It seems like this feature is built for mobile users, where typing is typically a lot slower.) Hence the new Google Voice Search Hotword extension, which lets one avoid typing entirely by saying “OK Google.” For details, refer to Google’s article about the extension. As one can see, it’s still in beta (at the time of this blog post), so this is all recent technology. In fact, Google suggested that they released this feature so that people cooking for Thanksgiving didn’t have to wash their hands in order to use Google.

The voice extension sounds nice at first, but there’s always the unfortunate caveat with voice recognition: it’s not good enough, for a variety of reasons, such as multiple words sounding the same, uncommon words, inconsistent or unclear speech, and so on. In fact, for users of Gmail, I sure hope that this feature won’t send dictated emails without user confirmation, since there’s too much that could go wrong there.

But maybe I’m just naturally pessimistic about the success of voice recognition, especially after trying to search my own name on Google and getting “Daniel C Town.” I know that no voice recognition software has any chance of recognizing my last name. I do wonder whether this Google feature will remember my browsing history after I say my name and, in the future, use that data to identify it correctly.
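As a toy illustration of why similar-sounding names trip up recognizers, here is the classic Soundex phonetic code in Python. (Soundex is a decades-old indexing algorithm, not anything Google’s recognizer actually uses; it merely shows how distinct spellings collapse to the same phonetic code.)

```python
def soundex(name):
    # Classic American Soundex: keep the first letter, then encode the
    # remaining consonants as digits, dropping vowels and collapsing
    # adjacent letters that share a digit.
    codes = {c: str(d) for d, group in enumerate(
        ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in group}
    name = name.lower()
    digits = []
    prev = codes.get(name[0])
    for c in name[1:]:
        d = codes.get(c)
        if d is not None and d != prev:
            digits.append(d)
        if c not in "hw":  # 'h' and 'w' do not break a run of equal codes
            prev = d
    return (name[0].upper() + "".join(digits) + "000")[:4]
```

Under this scheme, “Robert” and “Rupert” both encode to R163; in the same way, a recognizer with no knowledge of my last name falls back to whatever common phrase sounds closest.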

Still, voice recognition is certainly an important aspect of applied artificial intelligence and machine learning, so I look forward to seeing this subfield progress.

To recap:

1. Can be useful under limited circumstances (dirty hands, quick browsing, etc.)
2. Will probably be far more popular on mobile devices
3. Hindered by the natural limitations of voice recognition
4. Potentially a privacy concern
5. Definitely opens up additional areas for future work/research

# Grad School Applications, Stage 3: The Online Applications

It seems like every prospective graduate student is using the Thanksgiving break to catch up on applications. That’s definitely been my situation; I’ve delayed things far too long (which is quite unlike me), but hopefully I have made up for it these past few days by submitting several fellowship/scholarship applications and creating final drafts of my statement of purpose essays. With ten schools and a variety of fellowships/scholarships to apply to, I can’t afford to leave everything to the last week before the schools’ deadlines, especially when that also happens to correspond to my final exam week!

To budget my time, I first submitted all the fellowships and scholarships that had deadlines earlier than that of any of my ten graduate schools. Then, I went to work on creating draft after draft of one school’s statement of purpose essay. Fortunately, most universities have similar essay questions, so I can just modify a paragraph at the end that is school-specific.

Once I had done sufficient work on one essay, I put it aside and did all the “administrative” tasks by filling in the easy parts of the online applications: information about recommenders, one’s address and contact information, and so on.

Some thoughts as I was doing these:

1. I did them in bulk fashion (i.e., one right after another) and did everything except upload the statement of purpose essays. I felt like that was the most efficient way to do things. Now, when I head back to school, I only have to worry about the essays.
2. Most applications were part of a university-wide graduate school application form, so I frequently read information that was not relevant to computer scientists but would be relevant to other subject areas. This makes it a little harder on the applicant (since we have to fill in more information than is probably necessary) but it’s easier on the school (only one application website/form needed for the entire graduate school) so I can understand why schools like that.
3. Some places want me to paste my essay into a “text box,” while others want uploaded PDF documents. I vastly prefer the latter, since there is some LaTeX stuff that I’ve included in my statement of purpose to make it look better, but maybe schools don’t want faculty to be influenced by the aesthetics of the text.
4. Some schools weren’t specific about whether they wanted me to upload an unofficial transcript or a scanned official transcript. (One school even had contradictory information in two places of their application.) In fact, for two schools, I didn’t realize this until I had actually reached the point in the application where they asked me to upload scans. Fortunately, the registrar emailed me a PDF scan of my official transcript and that solved everything. The lesson is that it’s best to just get an official scan to not leave anything to chance.

# The Problem with Seminars

I like the concept of seminar courses. These are typically small classes that feature active discussion and debate each session. At Williams, these courses — where enrollment is limited to 19 students to comply with U.S. News & World Report standards for “small class sizes” — make up a substantial portion of the humanities curriculum. While there are certainly many wonderful things I can say about seminars, one of my personal gripes is that they pose additional burdens to deaf students.

Here’s the problem: if students are interested and motivated by the course material, they’ll be active participants. That means class discussion will be moving at a quick pace from one person to another as people raise their hands immediately after others finish talking.

But as a deaf person who cannot really understand what my classmates say until a sign language interpreter relays it, there’s an added delay before I get the same information. Inevitably, by the time I understand what one classmate has said, another immediately jumps into the discussion with something like: “building off of [his/her] previous point, I think that […]”.

The end result is that I’ve often felt lost in some of these discussions. Many times, I’ve wanted to say something, only to see someone else claim credit for that idea by being quicker at raising his or her hand. It’s not a problem that’s easily solved; there’s always going to be some delay with sign language, CART, and other accommodations. Since seminars tend to make class participation a large fraction of students’ grades, that factor can deter deaf students from taking these courses.

As I think about seminar courses, I’m reminded of a particularly painful high school AP US History class. The class was divided into groups of three, and we had to debate over a topic that I’ve long since forgotten. (Each group had to defend a unique perspective.) But the main thing that I remember was that the teacher required each student to make three substantial comments in the debate in order for him or her to receive full credit.

The debate ended up being chaotic, with students shouting out their comments all over the place, often interrupting each other without restraint. (My teacher actually had to stop the class once, so we could relax and start fresh.) Predictably, I was completely lost amid the commotion and didn’t see any way I could participate without sounding awkward. Eventually, towards the end of the debate, I finally made my sole comment of the day. And that was only because one of my group members (out of sympathy?) actually told me what to say! He mentioned his thoughts to me, raised his hand, and then let me talk once the focus was on our group. In other words, I was just echoing his idea.

Fortunately, my teacher recognized the challenges I faced and didn’t penalize me for failing to participate in that embarrassing debate.

So how should a deaf person approach seminars? I’m not interested in asking professors to lower their grading standards (I’d be offended otherwise), though it might be wise to mention to them the delay in reception due to ASL or other accommodations, just so they’re aware. Another thing one could ask is that the professor slow down the pace of discussion. That is, if one student finishes talking, ask the professor to wait a few extra seconds before picking the next person to talk.

With respect to how class discussion proceeds, my best advice is that one should aim to be the first to comment on a class topic. That means when the professor reviews something based on homework readings and then says: “Any thoughts on this?” to the class, that’s the best time for someone like me to participate.

This situation typically happens at the start of class, so there isn’t a need to make your contribution to the class debate relate to previous comments (a huge plus!). Furthermore, professors often articulate better than students, making it easier for me to rely more on my own hearing. Finally, while this might be entirely anecdotal evidence, I’ve observed that professors are often more willing to wait a longer time when they open up a discussion than when they’re in the middle of one.

# Allocating Time for the Undergraduate Thesis

As someone who is working on a computer science thesis this year, one of the things that’s really hit home lately is how the undergraduate thesis serves as a healthy middle ground between the undergraduate and beginning graduate student mentalities.

For one’s first few years as an undergraduate, it is expected that he or she focus primarily on courses. Research is an excellent “extracurricular” activity and should be taken seriously, but unless one does an extraordinary job — by that, I mean first-rate conference or journal publications — it is likely that students still need to perform well in courses in order to get accepted into a Ph.D. program.

Meanwhile, the beginning graduate student at a Ph.D. program suddenly needs to break away from the undergraduate mentality in order to succeed.

I’m at the point where my grades are still important, but my research is starting to become a bigger part of my studies. Consequently, I have to find sufficient time away from my “normal” classes to focus on research. It’s tempting to let thesis work slide in favor of another hour or two spent perfecting a problem set to get that “A” grade, but those hours add up. As a time-management technique, I suggest outlining one’s thesis work in a “problem set” format, on a schedule that mirrors what a typical science class would be like.

PS: Yes, I know I haven’t been blogging too much lately. I’m sorry.

# Grad School Applications, Stage 2: Preparing Information for Reference Writers

Back in July, I published a post proclaiming the start of my fall 2013 graduate school applications process.

Now that it’s the start of October, I can safely say that I’m at a new stage: the point where I need to provide information to all my reference writers about my applications. Don’t neglect this non-trivial step! Letters of recommendation are probably the third most important aspect of one’s application, after research experience and grades (in that order), and they become extremely useful in picking out the best of the best.

Here’s what I included in my “packet” of information to my recommenders:

1. A copy of my updated curriculum vitae. This should be something everyone does.

2. A document that clearly outlines all of the programs to which I’m applying, as well as any other fellowships and/or scholarships. For me, this is ten Ph.D. programs, four fellowships, and three outside scholarships, and my document ended up being six pages. This includes a LaTeX-generated table of contents and a separate page devoted to an Introduction. For each application, I also included a web link, just in case my reference writers wanted some extra information. Finally, for each school, I also indicated the labs and professors that caught my interest.

I think the last point is something that — sadly — often gets glossed over when sending information to recommenders. It’s not enough to say that one wants to study at a school; one should also have a general idea of the different research groups at an institution and which ones suit him or her best.

One thing that I had hoped to include in the packet was an updated statement of purpose. Unfortunately, I haven’t found the time to get a sensible essay ready, so that’s the next thing on my agenda to send to my recommenders.

It was a relief to finally send information to my three recommenders, so now I can focus on finishing my actual essays and applications. No, I’m not as far along as I hoped to be in the plan I posted in July, but I’m getting there. I still have a couple of weeks before the first deadlines arrive…