My Blog Posts, in Reverse Chronological Order
Sorry for the delay in posting this entry. Cornell’s visit days happened on March 9-11, but as usual, I did not have much time to write about it until now, during my long-overdue spring break.
Cornell did not give visiting admits the option to stay with graduate students. Instead, we were all assigned rooms in the Statler Hotel. Not only is this located right on campus, but it’s literally just a one-minute walk away from Cornell’s brand-new computer science building, Bill & Melinda Gates Hall. (Yes, that name should sound familiar.)
The first night proceeded as expected. The current and visiting students split into groups to go to dinner. Following that, students either played board games (this is what I did), went to a planetarium (I think that’s what it was…), or just went back to the Statler (this is what I probably should have done, based on my mountain of homework).
The second day, March 10, was filled with activities. The admits gathered in the third-floor lounge of Gates Hall to have breakfast. The department chair gave us a slide show presentation that highlighted some of the department’s strengths. Here is what I remember:
- They moved into Gates Hall three weeks ago. (So now, it’s been about six weeks.)
- The department had multiple new hires this year and planned to continue expanding the faculty in the near future. The chair also told us that Cornell had hired a prominent researcher away from industry (a major victory), but said the hiring was supposed to remain a secret for now. I’m not sure if I’m allowed to say who it is, since I don’t see any announcements and his/her website doesn’t yet indicate that he/she is going to Cornell.
- The chair listed some of the schools or companies where recent Ph.D. graduates were now working. A few Ph.D. grads got faculty positions at top-tier universities.
- The chair said the professors and students truly had a close relationship with each other, probably closer than that of other comparable schools. (And he showed us a few party-related pictures to confirm this … actually I think some of the graduate students put them in the slide show while the chair was away from his computer.)
I didn’t have much time to relax afterwards, due to faculty meetings. At Cornell, I had meetings with seven professors, more than at any other school I visited. That made it challenging to prepare well. I like being able to check professors’ websites beforehand so I can avoid asking redundant questions.
We also ate lunch in Gates Hall. Cornell had different lunches based on research area. I attended the AI lunch, which was a fairly standard buffet. About five AI-affiliated faculty members also gave brief talks about their research at that time. Following the AI lunch was a theory meeting, where the theory professors discussed their research.
I noticed that AI and theory were incredibly popular among the admits. Almost everyone visiting Cornell was interested in one of AI, machine learning, algorithms, or theory. I suspect that some of these AI/theory admits might have to switch to systems or programming languages research.
After the theory talks was the grad student panel. Visiting students asked the standard questions such as “what’s the worst thing about Ithaca?” and “what’s the living situation like?”. I’m not sure why people kept asking the former question, because the answer is always the same for northern schools: the cold weather.
I did ask one question: which professors are the most popular to hang out with socially, and which ones are the most well-known in terms of research? I probably should keep their answers to myself, but I think most outsiders could identify the highest-profile faculty with enough Internet searching.
I had a little more time to relax before dinner, so I briefly chatted with other admits and students. Then it was dinner, where just like the previous night, we went out to eat at different restaurants. Afterwards, there was a “party” in the Gates Hall lounge, featuring ice cream and a ton of beer. I met more students and professors, including one current faculty member who graduated from Williams College. I think he thought I was a student at the University of Washington at first, since I wore my Williams jacket with a large purple W, which could easily be mistaken for a Washington logo. During visit days, I met at least three students or admits who did their undergrad studies at Washington, so it’s definitely a place with solid representation at Cornell. Three admits also went to Harvey Mudd College. I believe there were no other current students or visiting admits from Williams except me, unfortunately.
Then I went back to the hotel room and turned in some late homework while briefly chatting with my hotel roommate.
The third day was fairly relaxed compared to the second. There was an excellent brunch at the Statler Hotel featuring tons of fruit, bacon, and french toast (of course, I didn’t eat the french toast). I still had a few more faculty meetings to attend, and after that, I departed the hilly environment of Ithaca/Cornell for Williamstown. Ithaca, by the way, seems remarkably similar to Williamstown.
Overall, it was another nice visit.
In the past few weeks, there were two notable events in the computer science community. The first was the announcement of the 2013 Turing Award winner: Leslie Lamport, for his contributions to distributed and concurrent systems. I am learning about his work in my distributed systems course this semester, so it is nice to see that he is getting recognized.
Here is the updated list of Turing Award laureates by university affiliation. MIT is now gaining ground on Stanford and Berkeley.
The second major piece of news relates to the new computer science Ph.D. rankings, which is nice because before that, the last update was in 2010. The top four schools — Carnegie Mellon, MIT, Stanford, and Berkeley — did not change in rank, but Cornell dropped from fifth to sixth, while Illinois retained its status as fifth. Washington moved up from seventh to sixth, which might be a reflection of its latest faculty hiring spree. Princeton stayed at eighth, while UT Austin dropped from eighth to ninth.
Cornell and UT Austin can’t be happy about their rank dropping, as it will adversely affect their yield for both their graduate and undergraduate enrollment. As I mentioned earlier, UT Austin is planning to substantially expand the size of their department, and last I heard, Cornell is doing the same (more on that later), so they are fighting to get their ranking up. Unfortunately, this means that in about five years, professorships will be tough to get.
Side note: after Googling my own name, I’m happy to see that Seita’s Place is the first hit. Technically, for most of the past two years, the first hit was actually the Hello World entry. Why was that post ranked higher than the blog homepage for so long?
A few weeks ago, The New York Times published an article about a “deaf” person who was exposed as a fraud. Mamoru Samuragochi, a popular Japanese composer whose deafness made him seem like a modern-day Beethoven, staged a career-long hoax in which someone else surreptitiously wrote his compositions. Furthermore, it seems like he faked his hearing disability.
Reading this article makes me consider two perhaps unfortunate scenarios.
- A person faking a hearing disability to make him or her stand out, win praise from others for overcoming obstacles, etc.
- A deaf person who has enough hearing and speaking ability from hearing aids or cochlear implants such that others mistakenly view him or her as hearing.
It would be incredibly naive to think that neither of these scenarios occur. Sadly, they do, and Samuragochi seems to be a prime example of Scenario #1. Fortunately, I don’t know of any others off the top of my head.
But what about Scenario #2? I think I fit into this one. With amplification from powerful hearing aids, I can easily communicate with someone so long as there is insignificant background noise. I’m sure that others have doubted my deafness in the past.
The problem is that, in the absence of medical records and audiograms, there isn’t a clear-cut algorithm for determining if a person qualifies as being deaf. A person wearing hearing aids could be wearing them for just a tiny, almost negligible benefit, or the hearing aids could mean the difference between hearing anything versus nothing at all. Speaking ability also varies from person to person. There is an abstract spectrum of “deafness,” and I think it’s challenging for people to determine where anyone else lies within it.
I recently finished up a three-day stay in Austin, TX, in order to attend UT Austin’s computer science visit days (“GradFest”) for admitted Ph.D. students. This was my first of what will be four school visits.
UT Austin’s GradFest committee assigns each visiting student to a current graduate student who acts as a host and provides housing and transportation in Austin. They must have been paying careful attention to my RSVP form, because the committee chair sent me an email saying that another student at UT Austin had specifically requested me: he knew sign language and lived with someone who also knew sign language. That was nice!
I arrived in Austin at around noon on the first day of GradFest and went to my host’s house via the airport’s shuttle service. I might have been one of the first prospective students to arrive, since we didn’t have much to do until the school-sponsored dinner began at 7. My host briefly showed me around campus, took me out to lunch, and dropped me off at a coffee shop, where I had a meeting set up with a deaf linguistics Ph.D. student at UT Austin. I didn’t have any coffee of course (always hated that stuff), but we had a nice conversation about what life was like in Austin. My sign language must not have been too rusty, as she didn’t have any problems understanding me. She told me that accommodations at UT Austin are great, and that she’s generally had little trouble obtaining what she needed. I also learned that there was a decent academic deaf community in Austin.
Following that, my host took me to his cubicle and provided me with a computer since he knew I was super-busy with work. (I didn’t actually get any done.) By the way, UT Austin’s computer science department is housed in a brand-new building called The Bill & Melinda Gates Computer Science Complex and Dell Computer Science Hall, pictured below.
For anyone who’s worried about collaboration among different subfields of computer science, you definitely won’t have that concern at UT Austin. Previously, the department’s research groups were scattered across several buildings. Now, they’re all together in one, making interaction among different areas much easier. Each of the floors is mostly the same, with lounges, labs, offices, and so on. Faculty members each have their own office, with the remaining offices shared or assigned to postdocs. Ph.D. students are given cubicles in the area corresponding to their subfield, so it’s very easy to talk to other graduate students. Cubicles are also located near the faculty offices, which I imagine is great for having quick chats and updates with professors.
I stayed in the cubicle area until evening, and then met with other prospective students and their hosts. We went out to dinner at some blazingly loud restaurant. If my host didn’t know sign language, I probably wouldn’t have said anything at all that night!
The second day of GradFest was the most important one. All the students got together in the computer science building to eat breakfast, do paperwork, and obtain schedules. I also met two sign language interpreters hired by UT Austin, who would follow me throughout most of the day.
Prospective students could attend up to three faculty panels of their choice and as many lab “open houses” as they wanted. I attended the Artificial Intelligence, Theory, and Data Mining and Machine Learning panels, along with the Mechanized Reasoning and Analysis lab’s open house.
Perhaps most importantly, I also had four individual meetings with faculty members, which allowed me to gauge their interest in taking on new students, their advising style, what they look for in applicants, and other aspects. (Prospective students list their choices of faculty to meet on their RSVP.) One professor told me that one of the reasons a new building was needed was that there are plans to increase the computer science faculty from 44 to 60. This is already reflected in UT Austin’s recent hiring spree: four new faculty are coming in this fall, and the department is still hiring.
While I was at those faculty meetings, I was well aware that professors might gloss over the truth when talking to prospective students. Fortunately, just before dinner, we had a “Bull Session” with current graduate students, which, as one student described it, “is a place without any people in positions of authority.” The graduate students were brutally honest when answering the many questions from the visitors. For instance, in response to the question of when one shouldn’t attend UT Austin, one grad student said that if someone gets into one of the “top four” schools (Berkeley, Carnegie Mellon, MIT, and Stanford), that person should pick it over UT Austin, which is currently ranked #8. Surprisingly, there was no talk/gossip about specific faculty members, e.g., “Professor X is really bad and should be avoided at all costs.”
After the very helpful Bull Session, we went out to dinner with the students to a pizza place. The food was great, but I did have to break the rules of my grain-limited diet (the same went for my time at the airports). Oh well.
The third day of GradFest was much more relaxed. I ate brunch and went on a campus tour. After the tour, I took the shuttle back to the airport and flew back to the northeast.
Overall, GradFest was an enjoyable experience and a welcome break from my typical schedule. My other three scheduled visits are in the middle of March, so stay tuned for additional posts about visit days.
It is hard to believe that I am already in my eighth and final semester as a Williams College student. I have learned so much over the past few years, including what I want to do after Williams and possibly even after graduate school.
Speaking of graduate school, I’ve heard back from most of the institutions to which I applied. So far, five have offered me admission. Thus, I’ll definitely have some tough decision-making to do over the next two months, and final choices for graduate school have to be made by April 15. I’m going to be traveling to at least four of the “student visit days” at those schools to assist me in making my final decision, and I’ll probably post some details about the events on this blog. My first trip will be to The University of Texas at Austin, as their computer science department is hosting visit days next weekend. By the way, schools will generally pay up to $500 for airfare and provide you with free lodging, either in a hotel or with graduate students, so it’s a great deal.
In the meantime, I’m also doing some more research and taking some classes. Here are the lecture courses I’m taking:
- Distributed Systems. This is a computer science course that teaches the design and implementation of systems that involve multiple, connected computers, hence the name “distributed.” I will also learn about networking and operating systems, two areas that I don’t know much about, so I am definitely going to learn a ton from this course. (Only now can I finally tell the difference between a process and a thread.)
- Tiling Theory. This is a mathematics course dealing with the theory behind tilings, which are essentially patterns formed by some simply connected pieces that we can fit together to fill up a plane without gaps and without tiles overlapping. In my opinion, it has a lot of similarities to graph theory in that it’s a course that heavily depends on visualization, proof by pictures, and clever doodling.
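As a side note on the process-versus-thread distinction: here is a minimal Python sketch (my own illustration, not from the course) of the key difference. Threads share their parent process’s memory, while a child process works on its own copy.

```python
import threading
import multiprocessing

# Threads share the parent process's memory: a list mutated by a
# worker thread is visible to the main thread afterwards.
shared = []

def append_thread():
    shared.append("from thread")

t = threading.Thread(target=append_thread)
t.start()
t.join()
print(shared)  # the thread's write is visible here

# A process gets its own copy of memory: the child's append happens
# only in that copy, so the parent's list is unchanged afterwards.
def append_process(lst):
    lst.append("from process")

if __name__ == "__main__":
    p = multiprocessing.Process(target=append_process, args=(shared,))
    p.start()
    p.join()
    print(shared)  # still contains only the thread's entry
```

In other words, threads are cheap and share state (which is why they need locks), while processes are isolated and must communicate explicitly, e.g., through pipes or queues.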
That’s it! Just two courses. Of course, I do have a thesis. I’m badly behind on it, so I’ll have to focus super-hard on it during the rest of February, March, and the beginning of April. I’m also taking an independent study course in Operations Research, where I’ll be part of a team of roughly ten students reviewing and applying concepts from operations research as well as advanced linear algebra, linear programming, sabermetrics, and so on.
This schedule seems easier than usual with the small number of lecture courses, but even after two weeks I can already tell that I’ll be overwhelmed as usual, especially when taking into account my heavy traveling this semester. (I expect to miss more classes this semester alone than I have during the past seven semesters combined!) Furthermore, I am going to be repeating as a Teaching Assistant for the Algorithm Design & Analysis course. In the second half of this semester, I am also thinking about signing up for a computer science course on Coursera.
Anyway, it’s time for me to stop blogging and get back to work.
The game itself ended up being a blowout. It was boring to someone like me who doesn’t actively support either the Denver Broncos or the Seattle Seahawks. I also don’t watch halftime shows, so that didn’t increase my excitement for the game.
The commercials were also a bit of a disappointment, at least compared to the ones in last year’s Super Bowl. But one of them did catch my eye. No, it was not a Tim Tebow commercial, even though he was arguably the more impressive Denver Broncos-affiliated quarterback last night.
It was the Duracell batteries commercial, featuring the deaf Seattle Seahawks player Derrick Coleman. In just a single minute, it chronicles his story from a young boy to a current NFL player. It also seems to have inspired two deaf girls to write letters of support to Coleman. Imagine their surprise when Coleman met them and offered tickets to the Super Bowl!
Now, I know a lot of deaf people. I actually went on Facebook (!) last night, and saw deaf people supporting the Seattle Seahawks just because Coleman was on the team. So his story is inspiring, and clearly has an effect on other deaf people. It will be interesting to see how his career progresses — he’s only 23, by the way — and if other deaf players will join him in the NFL.
This discussion will be a continuation of my last post, What if 300 Deaf People were Isolated on an Island? I will focus on the usage of American Sign Language versus English in the hypothetical scenario that a group of deaf people migrate to an island and are allowed to form their own small country.
To be specific, here is what I have in mind.
- In January 2014, 300 deaf people from America decide to migrate to an island that has enough resources and infrastructure to maintain a small population. Therefore, the island’s inhabitants do not need to rely on trade with other communities or countries, and it remains a secluded area throughout the lifetimes of those 300 people.
- All these deaf people know ASL and some, but not all, have the ability to speak English reasonably well. They can all read English, but because most do not have excellent hearing ability, ASL is the dominant language for communication among the population. (Assume that only a few can afford quality hearing aids, not an unreasonable expectation nowadays.)
- Years pass by. People do not leave the island, nor do outsiders come in, as travel is heavily restricted. The deaf people marry among themselves, form generations of families, and eventually the island becomes quite populous, with millions of inhabitants.
The main question I want to consider pertains to the use of ASL versus English. In other words …
In the long run, which language will become the “official, spoken language” of this island?
There are two candidates: ASL or English. By “spoken language,” I’m referring to what people will use to communicate with each other. Yes, English will be the language used for writing, but I’m more interested in conversations. We could take the “easy way” out and say that both are official, but let’s assume that the United Nations or some futuristic, worldwide diplomacy venue mandates exactly one spoken language registered for each country. I suspect that, eventually, English will reign supreme in this regard.
Though the island originally starts with a population that consists exclusively of deaf people, the next generation will not share that characteristic. In fact, the majority of the children born to parents among the 300 starting inhabitants will probably be hearing. Deafness is very uncommon among newborn babies, and even if both parents are deaf, their children are still likely to have normal hearing.
This trend continues generation after generation, so in the long run, the island’s population will approach a proportion of deaf people similar to the one that exists in today’s world.
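This convergence can be sketched with a toy recursion. The rates below are made-up placeholders (not real audiological statistics), and the model assumes random mating, but it shows how quickly the deaf fraction collapses toward the background rate.

```python
# Toy model: expected fraction of deaf islanders per generation.
# Both probabilities are illustrative assumptions, not real data.
P_BOTH_DEAF = 0.10    # chance a child is deaf when both parents are deaf
P_OTHERWISE = 0.001   # chance a child is deaf otherwise

def next_fraction(f):
    """One generation step, assuming random mating among the population."""
    both_deaf_couples = f * f  # probability both parents are deaf
    return both_deaf_couples * P_BOTH_DEAF + (1 - both_deaf_couples) * P_OTHERWISE

fractions = [1.0]  # generation 0: all 300 founders are deaf
for _ in range(10):
    fractions.append(next_fraction(fractions[-1]))

# Generation 1 is already 90% hearing under these rates, and the deaf
# fraction quickly settles near the background rate P_OTHERWISE.
print([round(f, 4) for f in fractions])
```

Tweaking `P_BOTH_DEAF` upward (e.g., if deafness on the island were strongly heritable) slows the collapse but does not prevent it unless the rate is drastic, which is the intuition behind the argument above.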
So what happens? The island becomes a “hearing world,” where the official language is spoken English. There are sure to be some people who know ASL, of course, because there will still be deaf people around. But English becomes the conventional, spoken language because hearing people will constitute the majority of the population, and they will be the ones taking up management positions, political offices, and so on.
But I still have a nagging suspicion that I’m missing something. I wonder …
Would there be any circumstance in which ASL could actually be the official spoken language in the long run?
There are obvious challenges. First, we’re talking about a language that the vast majority of the population won’t need to use. Hearing people may even view it as an inconvenience when communicating with each other; why put effort into moving your hands when you can expend less by talking and still achieve the same objectives? Second, is it possible to have an official spoken language that can’t really be used for written documents?
The island scenario does have one major aspect that doesn’t totally kill the idea of ASL being the spoken language: it starts with 300 deaf people. If ASL were to be the spoken language of this island or future country, then I suspect it would all rest on the influence of those 300 people. They will certainly teach their children ASL, regardless of whether those children are hearing or deaf. Thus, the second generation will use ASL.
The question is whether the all-hearing families with parents from that second generation will stress the importance of ASL to their hearing children. Even if they do, I worry that the use of ASL would gradually weaken from generation to generation among hearing families. In order for such a practice to be passed down and not wither away, it would probably need to take on as much importance as a religion or a core cultural activity.
So there is an outside chance that ASL could be the spoken language. Still, I suspect we would need additional strong assumptions for that situation to occur. The one that most easily supports ASL being the spoken language would be if the incidence of deafness among babies skyrockets.
Anyway, that is one scenario. What are other similar ones that come to mind? I encourage you to think about the various possibilities and their resulting long-run equilibrium states. Try tweaking some of the assumptions and see what you get.
Side Note: 100 Blog Entries on Seita’s Place
Somehow, I made it to 100 posts on Seita’s Place. (This is post #100.) So in honor of this “special event,” let’s look at some additional statistics about my blog.
According to my WordPress Dashboard, I now get roughly 25 views a day (from about 15 to 20 visitors), which is up from the 0 to 5 views/visitors that I was getting when the blog first began. Excellent! Seita’s Place’s best day was Monday, December 30, 2013, when it got 98 views from 19 visitors. All-time, the blog has 9,052 views and 133 comments. There are currently 34 followers.
My blog entry with the most comments is, by far, ASL Guidelines with 13 comments, though a lot of the comments on Seita’s Place consist of “pingbacks,” which occur when entries link to other entries and which can inflate the comment count. There are four entries that have five comments, and an additional four that have four comments.
How do the blog posts rate in terms of popularity? That’s pretty easy to gauge, as WordPress keeps track of views for each entry. Here’s an image showing the top of the chart:
And the bottom:
I can also tell that there are three primary ways that people manage to find Seita’s Place. (1) They search my name, (2) they find the link to it in my Facebook profile, and (3) they search about Theory of Computation or Python.
Finally, where are my viewers coming from? Since February 2012, it seems like the majority are (by far) from the United States. The next country on the list is India, followed by the United Kingdom.
John Lee Clark, a deaf writer and an active participant in the Deaf Academics mailing list, has a blog on his website. While Mr. Clark doesn’t appear to update it frequently, his blog entries are well-written. I particularly found his Cochlear Implants: A Thought Experiment blog post interesting.
Here is his “thought experiment”:
Let’s suppose three hundred deaf people, all wearing cochlear implants, are gathered and moved to an island. None of them knows ASL and all of them have excellent speech. There are no hearing people there. What will happen?
Mr. Clark’s main argument is that because there are no hearing people to provide feedback on the deaf population’s speech skills, the 300 people will experience erosion in their ability to talk. In response to that, they will develop a sign language.
There are some obvious logistical issues with this experiment. This would never happen in the first place, and even if it did, it is unclear how quickly speech erosion would occur, if it did at all.
As one reads more into Mr. Clark’s entry, it becomes apparent that he views cochlear implants with disdain:
Another thing that it reveals is that the cochlear implant is not FOR deaf people. If it is for deaf people, they would be able to, or even want to, use the implants on their own and for their own reasons. But the cochlear implant is for, and promotes the interests of, hearing people. It was invented by a hearing man and the risky experiments and sometimes fatal operations were legalized by hearing people. The demand for it is driven by hearing parents. It financially benefits hearing teachers, hearing doctors, hearing speech therapists, and hearing businesses in the industry. It is only at the bottom of the industry that we find the token deaf person.
It is known that cochlear implants are a controversial topic in the Deaf community, which is well-summarized by the Wikipedia entry on cochlear implants. There was even an entire documentary about this issue: Sound of Fury. Mr. Clark also brings up some of the common arguments against cochlear implants in the rest of his blog post.
I think I should write more about cochlear implants in my blog. This entry is apparently the first that uses the “cochlear implant” tag. Stay tuned for future posts….
Better Hearing Through Bluetooth is a recently published New York Times article that, unsurprisingly, I found interesting. The main idea is that people who have some slight hearing loss can use personal sound amplifier products (PSAPs) as an alternative to hearing aids. PSAPs are wearable electronic devices designed to amplify sound for people with “normal” hearing. Interestingly enough, they are not meant to substitute for hearing aids for people with substantial hearing loss. The When Hearing Aids Won’t Do article makes that statement clear and uses an example of a hunter who might wear PSAPs to hear better in the forest. Personally, I doubt the benefit of that, since amplification doesn’t necessarily correspond to increased clarity and may introduce unwanted side effects such as distracting static, but maybe some hunters can correct me.
Unlike hearing aids, PSAPs are not regulated by the Food and Drug Administration. This means customers don’t need to consult with a physician, audiologist, or hearing aid manufacturer, a major benefit if you want to avoid those intermediaries for time, personal, or other reasons. (A quick look at the comments in the New York Times indicates that audiologists aren’t very popular.) Consequently, PSAPs are substantially cheaper than hearing aids. A decent PSAP can be bought for about $300, while a hearing aid might cost around $3,000.
Another aspect of PSAPs is that they are user-programmable. Customers can download an app to their phone or computer and fiddle with the device to their heart’s content. In contrast, a hearing-aid wearer typically needs an audiologist to do the programming. The user-programmable feature can be a good thing or a bad thing, and the benefit largely rests on two factors: (1) how much the user knows about the PSAP and how comfortable he or she is with technology, and (2) the quality of the actual program. It should be no surprise that, due to the lack of regulation, PSAPs vary considerably in quality. Customers should be aware of the pitfalls and be circumspect when purchasing them.
A second possible risk with PSAPs is that people who have serious hearing loss may make the unwise decision of buying them instead of hearing aids. Given hearing aids’ reputation for high prices, PSAP manufacturers have probably marketed PSAPs as low-cost hearing aid alternatives.
Perhaps PSAPs will soon become the norm for older people who are losing their hearing. I will keep up-to-date on news relating to PSAPs, though I will never wear them.
This is fairly old news (about three years old), but I found it interesting to read through two ESPN articles about Michael Lizarraga here and here. Lizarraga is a deaf basketball player who, as a college student, made the dream of playing Division I basketball a reality by earning a spot on Cal State Northridge’s basketball team as a walk-on. He was the first deaf Division I basketball player in history.
I consider myself a basketball fan. I’ve actually written a few posts on basketball here and was considering starting an NBA-based blog for myself. (I decided not to pursue that idea since there are already too many excellent blogs that cover the NBA.) So as a deaf person myself, it shouldn’t come as a surprise that I’m interested in Lizarraga’s story.
It appears that Lizarraga and I were born and raised under similar circumstances. His family had no history of deafness, and his parents didn’t find out he was deaf until they brought him to a doctor as a toddler. Lizarraga’s parents, like mine, were willing to learn sign language, and they mainstreamed him in school and introduced him to sports.
When Lizarraga was in sixth grade, he started attending the California School for the Deaf, and stayed there until college. (For details on my educational history, see My Pre-College Education.) While encouraged to go to Gallaudet by friends and coaches, Lizarraga instead opted for Cal State Northridge, since he wanted to have the chance of playing Division I basketball (Gallaudet is Division III). Another plus for Cal State Northridge is that it houses the National Center on Deafness (NCOD), America’s first postsecondary program to offer full-time, paid interpreters for hearing-impaired students.
Once he was at Cal State Northridge, the coaches reached out to him and encouraged him to try out. Somehow he not only made the team but ended up as a solid rotation player, which is quite rare for a walk-on.
Of course, there is the nontrivial matter of setting up accommodations. He was quite fortunate that an (apparently competent) sign language interpreter volunteered herself for the purpose, but I would be curious to know how involved Lizarraga or his parents were in this process. Also, it unfortunately does not appear that Cal State Northridge always provided accommodations. This section in the 2010 article caught my eye:
What started out as a way to goof off on the sidelines while they couldn’t play quickly turned into a second language for Galick, who even began dating a deaf girl he met through Lizarraga. Galick has become so good at sign language that he can fill in for Mathews [Lizarraga’s interpreter], who is unable to attend as many games and practices as she has in the past because of budget cuts.
Uh-oh … budget cuts leading to a lack of interpreting services? It reminds me a lot of the article I wrote about RIT’s accommodations. And this is happening at a school that has the NCOD! Hopefully this didn’t have any detrimental effects, but I wonder if Lizarraga knew that if he really wanted to and fought hard enough, he could probably obtain full interpreting services for all practices and games.
From what I can tell, Lizarraga is now playing professional basketball in Mexico and is seeing some playing time in the 2013-2014 season. There does not seem to be substantial media coverage on him, so I can’t really say much more. I hope things are going well for him.
Last month, Nelson Mandela passed away. Mr. Mandela, who led the emancipation of South Africa from white minority rule, was one of the most beloved people in his country. Thus, there was no doubt that his memorial would be well-attended by not only people from South Africa, but also some of the world’s most prominent leaders such as President Barack Obama.
Unfortunately, according to this article, a fraudulent sign language interpreter was hired to “interpret” for the deaf. This person, a 34-year-old man named Thamsanqa Jantjie, apparently performed meaningless symbols and gestures on stage. And, as the image for this blog post indicates, he somehow had the privilege of being inches away from President Obama and other leaders who spoke at that podium. (Side notes: what’s up with how close he is to the podium? Whenever I’ve had a podium-interpreter situation, the interpreters have typically been a healthy distance away from the speaker. Also, for something like this, you would usually want two interpreters…)
I took a look at the video from the linked New York Times article. While I don’t know South African Sign Language, the gestures Mr. Jantjie performed certainly didn’t seem like a sign language: too many rhythmic motions, a lack of facial or lip movement, too many calculated pauses, and so on. I can certainly believe the experts’ judgment that this is a fake.
Bruno Druchen, the national director of DeafSA (a Johannesburg advocacy organization for the deaf), had this to say about the debacle:
This ‘fake interpreter’ has made a mockery of South African sign language and has disgraced the South African sign language-interpreting profession. […] The deaf community is in outrage.
Mr. Jantjie also failed to even perform the correct sign for Mr. Mandela. That doesn’t bode well … I mean, if there was any one sign to know, wouldn’t it be the one for Mr. Mandela?
But wait, there’s more! Check out this article, where Mr. Jantjie admits that he was hallucinating during the talks (“angels falling from the sky” kind of stuff). We also learn that he’s receiving treatment for schizophrenia. In addition, the company that supplied Mr. Jantjie — at a bargain rate — disappeared. Somehow, up until now, they had been getting away with providing substandard sign language interpreting services.
There is also news that Mr. Jantjie has a criminal history. This article mentioned that he was part of a group that murdered two men in 2003, yet somehow didn’t go to trial because he was deemed mentally unfit. He is also accused of a plethora of other offenses dating back to 1994.
Murdering two men? Mentally unfit? A schizophrenic person?
That doesn’t sound like the kind of person I’d like to see up there. Hiring someone like him to be an interpreter for the memorial of possibly the most important person in South African history? I’m glad that I can trust my own interpreters here at Williams College.
Programming as part of a large project, such as building a new C++ compiler from scratch, is vastly different from a smaller task, like writing a script to answer a random Project Euler question. Large projects typically involve too many interacting files to work on any one of them in isolation (for instance, it can be confusing to know which methods to call from another file), so using an advanced integrated development environment (IDE) like Eclipse is probably the norm. For smaller tasks, two of the text editors most commonly used by programmers are emacs and vim.
One of the biggest problems that people new to these editors face is having to memorize a bunch of obscure commands. Emacs commands often involve holding the control or escape/meta key while pressing some other letter on the keyboard. With vim, users have to switch between “insert” mode (i.e., one can type things in) and “normal” mode (i.e., one can move the cursor around, perform fancy deletions, etc.), pressing escape to return to normal mode.
Once one becomes used to the commands, though, emacs and vim allow very fast editing. If you watch an emacs or vim master type in their preferred text editor, you will be amazed at how quickly he or she can perform fancy alignment of text, advanced finding/replacing involving regular expressions, and other tasks that would otherwise have taken excruciatingly long using “traditional methods.” (Unfortunately, these people are hard to find….)
So what’s the quickest way to get started with these editors to the point where someone can write a small program? In my opinion, the best way is to go through their tutorials. Open up your command line interface (e.g., the Terminal on Macs) or however you normally launch these editors.
- For emacs, type in “emacs” and then perform control-h (hold the control key while pressing “h”) and press “t.” In emacs terminology, C-h t.
- For vim, type in “vimtutor” from the command line.
This method of learning is excellent since you get a copy of the tutorial and can go through steps and exercises to test out the commands while navigating or modifying the file. I think the vim tutorial is better because it’s more structured, but again, both of them do the job. They emphasize, for instance, that once a programmer gets used to the commands to move the cursor, using them will be faster than resorting to the arrow keys since one doesn’t have to move one’s hands off the keyboard:
- From emacs: There are several ways you can do this. You can use the arrow keys, but it’s more efficient to keep your hands in the standard position and use the commands C-p, C-b, C-f, and C-n.
- From vim: The cursor keys should also work. But using hjkl you will be able to move around much faster, once you get used to it. Really!
I have to confess that I didn’t learn emacs by using their tutorial. I just went by a list of common commands given to me by my professor and picked things up by experience and intense Googling. As a result, I experienced a lot of pain early on in my computer science career that could have been avoided by just reading the tutorial. (The control-g command to quit something was a HUGE help to me once I read about it in the emacs tutorial.) And judging by the emacs questions that other Williams College computer science students have asked me, I can tell that not everyone reads the tutorial.
So read the tutorials before seriously using these text editors.
Of course, one should decide early in his or her programming career: emacs or vim? This is certainly a non-trivial decision. They both do basically the same thing, and with the variety of extensions and customizations available, I’m not sure there’s anything another text editor can do that they can’t, to be honest. If I had to give a general rule, I’d choose the one that feels best to you in the tutorials, or the one that’s most common among your colleagues, professors, or other peers.
And besides, it’s difficult to say which one of these editors is better than the other. It really depends on who you ask. I don’t have a good answer to this so I’ll opt out and provide a lame one: the better editor is the one that you can use best.
Personally, I started my programming career using the emacs text editor because that was the preference of my Computer Organization professor, so I didn’t see the need to deviate. Within the past year, I substantially increased my emacs usage to the point where I was using it almost every day for most typing tasks, including LaTeX, and my pinky finger didn’t like the constant reliance on the control key. So I’m giving vim a shot.
It’s still useful to know the emacs commands, because they often work in other places. For instance, when I type in blog entries here, I can use a subset of emacs commands while typing in my entry in the built-in editor for WordPress, which was surprising to me when I first found out. Instead of deleting a line by using the mouse or Mac trackpad, I can use the ctrl-k emacs command to do that. And many IDEs will have support for emacs or vim commands, so one can get the best of both worlds. I know that Eclipse has this, but I was personally disappointed that certain emacs commands didn’t work.
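Part of the reason those emacs commands keep showing up elsewhere is GNU Readline, the line-editing library that bash (and many other REPLs) uses, which defaults to emacs-style keybindings. As a minimal sketch, here is how you can check or change the editing mode in bash; the `set -o` builtin is standard, and the comments list a few bindings I use constantly:

```shell
# bash's line editing comes from GNU Readline, which defaults to
# emacs-style keybindings, so several emacs commands work at the prompt:
#   C-a  move to the beginning of the line
#   C-e  move to the end of the line
#   C-k  kill (cut) from the cursor to the end of the line

# Show the current editing mode (lines "emacs" and "vi", each on or off):
set -o | grep -E '^(emacs|vi)[[:space:]]'

# Prefer vi-style bindings instead (press ESC, then use hjkl to move):
set -o vi
```

So even if you eventually settle on vim, the emacs bindings are worth knowing simply because so many command-line tools default to them.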
To wrap up this post: knowing either emacs or vim well is important for (computer science) graduate school, since programming is a huge part of research today, and the faster things get done, the better. Moreover, university professors tend to take on more of a “leader” role rather than someone who “gets things done” (I’m not trying to disparage professors here). In other words, they tell the graduate students what to do in research projects, and the students write the necessary programs.
As 2014 nears, I face the realization that I will have a free summer. If all goes well, it will be the summer after college and before graduate school. I do not think I will be doing an internship so that gives me the freedom to explore some topic in further detail. Given my interests and the realities of my future career options, I thought it would be prudent to pursue a programming project of my choosing.
I have some ideas which I’ll list below, but nothing’s set in stone. (Other ideas are listed here and here.) I’ll probably do a few of them to start and narrow down on one. I’ll do my best to describe the project once it gets started, probably by posting it on Seita’s Place. And if readers have suggestions, feel free to email me (see About).
Making a Game
I might as well discuss this one first, since many programming projects fall into this category. I was thinking of some minor online text-based game, possibly in a role-playing genre. For now, I’ll abstain from getting too involved in graphics and other parts necessary for a game, since I feel like that will detract from the pure programming parts. I’m a little worried that there will be a lot of work in getting simple stuff like saving the game or logging in/out, but perhaps that’s important for me to know.
Making an App
I am interested in creating an app that assists with education, such as an app that describes course notes in a subject I’m comfortable with (e.g., Artificial Intelligence) or one that is like a “mini-textbook.” The downside is that most users will probably just use their laptops if they want to access notes.
Making Basic but Useful Software
What I mean with this is that I’ll attempt to make something that I might actually use in the future. For instance, I could theoretically make my own text editor, and I would use it in the rest of my career. (This is pure fantasy, though, because emacs and vim are perfectly fine for what I do and there’s no way I could build something that complicated.) The difficult part with this idea is coming up with something that could be useful to me or others and hasn’t already been built in a better way.
Do Something Based Off of a Class
The advantage with this idea is that it’s less likely I’ll get lost, since substantial programming assignments as part of a class tend to have specific instructions and starter code. I don’t want to follow a project like this exactly, since that will stifle creativity, but at least things will be more structured. I feel like this is a lame objective to pursue, though….
Join an Open-Source Project
This could be interesting … the question is, which projects interest me and would be open to a programmer helping out for a summer?
About a month ago, Google Chrome released voice search and voice action. The main idea is that we can literally just talk to the computer while browsing Google, and it will (ideally) respond with what we wanted. This feature isn’t limited to just browsing Google, though. It’s also possible to tell Google to start an email, or to find information about people in your contacts list.
To use it, open Google, click on the microphone and wait until Google prompts you to say something. But if one has to click on a microphone, voice recognition loses much of its appeal, because we can type almost as fast as we can talk. (It seems like this feature is built for mobile users, where typing is typically a lot slower.) Google therefore released a Voice Search Hotword extension, which lets one avoid typing entirely by saying “OK Google.” For details, refer to their article about the extension. As one can see, it’s still in beta (at the time of this blog post), so this is all recent technology. In fact, Google suggested that they released this feature so that people cooking for Thanksgiving didn’t have to wash their hands in order to use Google.
The voice extension sounds nice at first, but there’s always the unfortunate caveat in voice recognition: it’s not good enough due to a variety of reasons, such as multiple words sounding the same, uncommon words, inconsistent or unclear speech, and so on. In fact, for users of Gmail, I sure hope that this voice recognition won’t immediately send emails created out of voice recognition without user confirmation, since there’s too much that could go wrong with that feature.
But maybe I’m just naturally pessimistic about the success of voice recognition, especially after trying to search my own name on Google and getting “Daniel C Town.” I know that no voice recognition software has any chance of recognizing my last name. What I wonder is whether this Google feature will remember my browsing history after I say my name and, in the future, use that data to correctly identify it via voice recognition.
Still, voice recognition is certainly an important aspect of applied artificial intelligence and machine learning, so I look forward to seeing this subfield progress. To summarize my thoughts on voice search:
- Can be useful under limited circumstances (dirty hands, quick browsing, etc.)
- Will probably be far more popular on mobile devices
- Hindered by the natural limitations of voice recognition
- Potentially a privacy concern
- Definitely opens up additional areas for future work/research
It seems like every prospective graduate student is using the Thanksgiving break to catch up on applications. That’s definitely been my situation; I’ve delayed things far too long (which is quite unlike me), but hopefully I have made up for it these past few days by submitting several fellowship/scholarship applications and creating final drafts of my statement of purpose essays. With ten schools and a variety of fellowships/scholarships to apply to, I can’t afford to leave everything to the last week before the schools’ deadlines, especially when that also happens to correspond to my final exam week!
To budget my time, I first submitted all the fellowships and scholarships that had deadlines earlier than that of any of my ten graduate schools. Then, I went to work on creating draft after draft of one school’s statement of purpose essay. Fortunately, most universities have similar essay questions, so I can just modify a paragraph at the end that is school-specific.
Once I had done sufficient work for one essay, I put that aside and then did all the “administrative” tasks by filling in the easy stuff of the online applications. This includes entering information about recommenders, one’s address and contact information, and so on.
Some thoughts as I was doing these:
- I did them in bulk fashion (i.e., one right after another) and did everything except upload the statement of purpose essays. I felt like that was the most efficient way to do things. Now, when I head back to school, I only have to worry about the essays.
- Most applications were part of a university-wide graduate school application form, so I frequently read information that was not relevant to computer scientists but would be relevant to other subject areas. This makes it a little harder on the applicant (since we have to fill in more information than is probably necessary) but it’s easier on the school (only one application website/form needed for the entire graduate school) so I can understand why schools like that.
- Some places want me to paste my essay into a “text box,” while others want uploaded PDF documents. I vastly prefer the latter, since there is some LaTeX stuff that I’ve included in my statement of purpose to make it look better, but maybe schools don’t want faculty to be influenced by the aesthetics of the text.
- Some schools weren’t specific about whether they wanted me to upload an unofficial transcript or a scanned official transcript. (One school even had contradictory information in two places of their application.) In fact, for two schools, I didn’t realize this until I had actually reached the point in the application where they asked me to upload scans. Fortunately, the registrar emailed me a PDF scan of my official transcript and that solved everything. The lesson is that it’s best to just get an official scan to not leave anything to chance.