My Blog Posts, in Reverse Chronological Order
subscribe via RSS
Many people have asked why I am drawn to computer science. I thought that instead of trying to come up with an answer extempore each time, I would put this in writing to provide a better explanation. I will place a special emphasis on how computer science can particularly benefit deaf people, as well as those who are hard of hearing. This post will probably be one of the few that I’ll regularly come back to revise in the future.
What is Computer Science?
Before delving deeper, we need to understand the definition of computer science. Before coming to college, I held the naive assumption that computer programming == computer science. But that’s far from the truth, even though many aspects of computer science involve programming. In the context of computer science, I view programming as a mechanism for expressing something that I have learned or have rigorously analyzed and derived. To modify a phrase my math professor once wrote, *most programs in computer science courses are mostly trivial*. This means if you understand what you’re supposed to be doing, the programming itself isn’t all that challenging. There might be some syntax issues or compiler errors that you’ll have to deal with, but for the most part, those should be straightforward to correct: analyze error messages and read documentation to find the fix, or ask on StackOverflow. The difficulty, as mentioned before, is understanding what you need to do conceptually before performing the application.
That being said, let’s take a look at what a computer science major entails. I focus chiefly on the computer science major and not related majors, such as computer engineering or electrical engineering. A typical undergraduate computer science curriculum includes some of the following classes:
- Programming 1 (Introduction to syntax, control structures, etc.)
- Programming 2 (Data Structures)
- Algorithms
- Artificial Intelligence
- Compiler Design
- Computer Architecture
- Concurrent, Parallel, or Distributed Systems
- Operating Systems
- Programming Languages (Not to be confused with introductory programming)
- Software Development and Engineering
- Theory of Computation
- Several math and statistics classes
- Advanced versions of any of the previous classes, as in a graduate-level class
I asseverate that, in general, students take the two programming classes first, followed by algorithms and architecture in some order, and then as many of the other classes as fit their interests or are required. (At my school, programming languages and theory are required.) Architecture and algorithms courses are both crucial for a complete computer science education. Architecture focuses on how computers function at the lowest level, discussing issues ranging from hardware to digital logic, while algorithms takes a more mathematical perspective, analyzing the efficiency of solutions to problems.
The programming classes tend to form the introductory courses of a computer science major because it is essential that students are acclimated to programming before the upper-level classes. Consequently, the primary objective in those beginning classes is to produce a working program. In upper-level courses, writing the program is no longer the main bottleneck; programs are simply the means to carry out, express, prove, or support an experiment or project.
So hopefully I gave a nice introduction for those who aren’t familiar with computer science. With that in mind, we can focus on its benefits, with a bent towards the needs of deaf students. We’ll first talk about benefits within a collegiate or university setting, and then for post-college life.
Group Work & Easier Communication
In my opinion, it is much harder for deaf students to socialize among hearing students. There are a variety of reasons for this, the most notable of which is the barrier between speech and hearing. I believe that a good major for a deaf person will incorporate a significant social aspect to it. And that’s one of the ways computer science can help.
Many computer science courses allow students to collaborate in groups on homework assignments or projects. I know of many classes, some of which I have taken, that required group work. This is already a boon, and it gets magnified upon realizing that computer science group work tends to be group programming, so most of the work is in text, which should not be a barrier for deaf people. A strong group on a lengthy computer science project will maintain extensive code documentation and a journal of the group’s progress, none of which pose any more difficulty for a deaf person to follow and comprehend than for a hearing person. And when no interpreters are around and a deaf person can’t otherwise communicate, the group can use their computers to quickly write down any necessary instructions or objectives.
A seminar is a small class of students who discuss a certain topic, with the expectation that everyone will actively participate. Seminars are predominant in the humanities, where class discussion and participation play vital roles in helping a student better understand the course material.
But they are also disadvantageous to deaf students, and it’s fairly easy to see why. With class discussion, students quickly take turns talking, and very often those who raise their hands first get the chance to participate. If a topic is particularly popular or heated, a deaf student may find it very difficult to participate, as he or she has to wait to understand what fellow students have said, and that understanding gets delayed by the natural lag of sign language interpreters and CART as compared to normal hearing. Also, one of the more embarrassing things that can happen in a seminar is making a compelling and passionate argument, only to find that another student had previously provided those insights to the class. Of course, if that happens, a deaf student can explain that the misunderstanding came from the urge to quickly participate in the seminar format. But why be in a class that carries that kind of risk?
It’s no coincidence that I enjoy lectures far more than seminars. But the good news for computer science majors? You (hopefully) won’t have seminars. My college has no seminars in computer science, and I’m sure many other schools are similar in that regard. It’s a different story if you’re one of the small fraction of students entering a Ph.D program and sign up to take a research seminar, but how many students do that?
Growing Support Groups
Recently, there’s been a pleasant surge in the number of groups, mailing lists, organizations, and other entities designed to help support deaf students in STEM fields. I can personally vouch for the Summer Academy as a strong example of this. Briefly, the program was founded in 2007 by Richard Ladner and allows about ten deaf and hard-of-hearing students to attend a nine-week, residential program at the University of Washington in Seattle where they take one computer science class, one special animation class, have talks, and participate in field trips. The program is free for students.
Not surprisingly, diamonds like these don’t come around very often. The Summer Academy concludes with a community premiere, where students present their work and outline their post-graduate plans in front of an audience of about 100. Just by observing the audience, I could tell that many were middle-aged deaf residents of Seattle who were pleasantly surprised with what the program had to offer. A common theme in their sign language conversations was: “This kind of program never existed when I was younger!”
This network of support is assuredly a product of how today’s world has become unquestionably more accessible year by year. Sign language interpreters began to regularly appear in schools with deaf students in the late 1900s, with more arriving after the Americans with Disabilities Act was passed in 1990. Then again, it still took until 2007 and a well-known professor before the program got funding.
Aside from the previously mentioned Summer Academy, another possible resource for deaf computer science students is DO-IT (also based at the University of Washington). I’m on their mailing list, and I regularly receive emails about technical internships where employers are targeting students with disabilities.
Ability to Easily Conduct, Perform, or Verify Tedious Tasks
This is one of the benefits that can apply to anyone, not just a deaf person, but I will include it here as it can potentially be extremely useful. As a computer science major, you are required to know how to program. Even if you never have to program outside of class, having the ability to do so can help you in a variety of circumstances. You might, for instance, have a probability class that assigns questions such as “how many ways can we form a group of blue and red marbles if…,” and a simple Python program can verify your answer. At work, you might casually wonder: if we can get about X new customers a day while keeping the same prices and retaining all previous customers, how much will we benefit? Again, just fire up a script in your terminal or text editor and work it out. Obviously, this mostly applies to casual thoughts, but I find that having that kind of intuition helps me better understand the scope of a topic.
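As a quick sketch of what such a verification might look like (the marble numbers here are made up purely for illustration), a few lines of Python can brute-force a combinatorics answer and compare it against the closed form:

```python
from itertools import combinations
from math import comb

# Suppose the question asks: how many ways can we choose 3 marbles from a
# bag of 5 blue and 4 red marbles? The closed-form answer is C(9, 3);
# brute-force enumeration over all index triples should agree with it.
marbles = ["blue"] * 5 + ["red"] * 4
brute_force = sum(1 for _ in combinations(range(len(marbles)), 3))
closed_form = comb(9, 3)
print(brute_force, closed_form)  # 84 84
```

The same pattern works for the customer-growth daydream: loop over days, accumulate, and print.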
It is true that this previous advice is more useful as a reason to why you should take one computer programming class, rather than major in computer science. But inevitably, the more computer science classes you take, the more programming becomes an inveterate activity, and therefore, these “casual” programs are easier to write and can be applied to a wider variety of circumstances.
Being Computer-Literate in a Technological World
From the printing press to the scientific method to today, data and technology have been expanding exponentially. Thus, it is crucial that people understand what is out there and how it works. As an example, deaf people should be aware of the latest advances in cochlear implant and hearing aid technology. Just recently, a hearing aid was upgraded to be completely waterproof. And by “just recently,” notice that the linked blog post is dated January 21, 2013.
I claim that part of the responsibility of being a computer science major is keeping up with news of the latest advances in the field. Furthermore, especially if you understand electronics, you may be able to better understand the detailed description of a device, how it works, and its usefulness compared to competing goods, all excellent qualities for the prescient buyer. And of course, you get to explain all of this to your friends!
Strong Accessibility at Work
One of the biggest concerns deaf students may have is job accessibility. Sure, we can bank on the Americans with Disabilities Act to help us in a pinch. But why not avoid this trouble in the first place, and aim for companies that clearly have no issues in hiring qualified deaf employees? The good news is that there are plenty of these in the computer science industry, such as Microsoft. A deaf employee there personally told me this: Microsoft is one of the most accessible companies out there. You can ask for an accommodation and you will get it. Also, as I said earlier, I am regularly informed of internship opportunities for deaf students in computing, so there are places reaching out. Perhaps computer science firms, a relatively new phenomenon in today’s world, are up to date on all the latest laws related to accessibility. For this, I commend them.
Hopefully this explanation elucidates most of the reason why I decided to major in computer science. It’s not the entire reason, but otherwise I would be going on and on about how theory is so scintillating, and that doesn’t quite help to spread the word about computer science. Part of my aim in this blog, for instance, is to explore connections between computer science and deafness. I hope this was a small step.
As any regular reader of this blog knows, I’m almost certainly heading to computer science graduate school directly after college. This got me thinking about an obvious question: how many deaf people have computer science Ph.D.s? I’ll limit the answer to those who earned them from American institutions, though if anyone has information about other countries, please let me know either via comments here or by email (see the “About” page).
A simple Google search and some outside information led to these people:
- Karen Alkoby (Ph.D., DePaul University, 2008)
- Raja Kushalnagar (Ph.D., University of Houston, 2010)
- Christian Vogler (Ph.D., University of Pennsylvania, 2003)
One thing that struck me was their Ph.D. dates: 2003, 2008, 2010. That’s awfully recent, and that’s a sign that there may be some other deaf students currently enrolled in Ph.D. programs. I don’t know of any.
Another thing that was interesting is that, while all are professors, none of them are actually computer science professors! Dr. Kushalnagar is in Information and Computing Studies at RIT, Dr. Vogler is in Communication Studies at Gallaudet, and Dr. Alkoby is in Gallaudet’s Business department. Of course, the lack of a Computer Science department at Gallaudet is likely a factor.
I’ve never met nor talked with Dr. Alkoby, but I met Dr. Kushalnagar a few years ago and recently had a video chat with Dr. Vogler, so I can say a bit more about them.
Dr. Kushalnagar and I met at the 2011 Summer Academy. He was raised in India and took a heavy math and science curriculum in high school. Due to his family’s strong educational values, he not only got a B.S. from Angelo State University, but he also has the uncommon combination of a computer science Ph.D. and a law degree. His research interests deal with deaf education, and he acts as a primary tutor to deaf computer science students.
Dr. Vogler and I both think that Dr. Vogler was the first deaf person to earn a computer science Ph.D. — at least in the United States — and we also believe there are three deaf computer science Ph.D.s. He and I appear to possess similar hearing loss levels and communication ability. Dr. Vogler used “ASL” accommodations while he was in conferences and taking classes at the University of Pennsylvania. (By the way, Williams College currently has three alumni at Penn’s computer and information sciences Ph.D. program, and I may be applying there.) His key suggestion was that, when faced with a technical term dilemma, “ASL” interpreters need to abolish the standard grammar of the language and focus more on a direct English translation. Yes, it will involve a lot of finger spelling and some confusion, but trying to comprehend such technicalities on top of ASL’s grammar is not convenient for effective communication.
Today, I read Philip Guo’s e-book The Ph.D. Grind. It’s completely free (just visit the link and download) and it’s fairly easy reading, so one should be able to finish in about an hour or two. After reading it, I was both enlightened and impressed. The book seemed to accomplish its goals: provide a clear — but not overly detailed — account of a computer science graduate student’s journey to obtain a Ph.D. (Philip Guo was a Ph.D. student at Stanford University.) One of the reasons why I like it is that the author included many examples of how he struggled during his first three years of his Ph.D. program, and more importantly, why those struggles occurred. By the end of the book, I was pondering to myself: can I avoid the pitfalls he encountered, and intelligently grind away at a publishable project? As you can tell, some amount of “grinding” is necessary; otherwise, you’ll never make any progress. But if you go entirely on the wrong track — that is, if you’re working hours and hours by yourself on a famous professor’s project without any direction — that’s not a good idea.
Possibly more than anything else, The Ph.D. Grind taught me the value of (at least initially) working with assistant professors and postdoctoral researchers. The reason is quite simple.
They are the ones under the most pressure to publish.
The assistant professor needs to publish for tenure; the postdoc needs to publish to get an assistant professorship. Now, that’s not to say a Ph.D. student should never pick a tenured professor to be an advisor … it’s just that the student might want to be part of a research group that’s composed of at least one non-tenured faculty.
Some of my quick thoughts after reading this book include:
- I absolutely, positively want to get a graduate fellowship.
- I should talk with all assistant professors at whatever university I’m at (for graduate school) and offer my services.
- I cannot, unless circumstances are extremely exceptional, work all day on a project.
- Aim for top-tier conference publications earlier; it makes the process of actually writing the thesis a formality.
To wrap things up, I recommend The Ph.D. Grind especially to undergraduates who are considering pursuing a Ph.D. in computer science. Again, it’s free and easy to read.
I’m in the process of applying to research programs for this summer, and I’ve finished eight out of twelve applications. Most are Research Experiences for Undergraduates (REUs) sponsored by the National Science Foundation, and the others are school-specific programs with special funds to support undergraduates. For succinctness, I’ll use “REUs” to refer to any research program designed for undergraduates, even if it’s not NSF-sponsored.
Because of all these applications, I’ve been doing a prodigious amount of reading and am seeing some common quotes. Here are some from two of the sites I’m applying to:
The MIT Summer Research Program is an institutional effort to help facilitate the involvement of talented students in research aspects of the fields of engineering and science, in particular those from disadvantaged backgrounds such as under-represented minorities, or first-generation college students. [From MIT]
Although student participants will be selected based on merit after a nationwide recruitment from a broad range of colleges and universities, a fifth objective of the project is to broaden the participation of underrepresented groups including minorities, women, and students with disabilities. [From UNC Greensboro]
While I support diversifying the workforce, I can’t help but wonder: how effective are these programs? This is a hard question to answer, not least because the question itself is equivocal. I’m going to base my answer on how many REU graduates end up as professors at *first-rate research institutions*. It’s not a perfect measure, since many professors likely did not participate in any REUs as undergraduates, but it’s one possible interpretation.
Why do I hold this perspective? If the National Science Foundation and other prominent institutions, such as MIT, are truly committed to fostering a diverse workforce, then shouldn’t that mean there is diversity at the top of the hierarchy (i.e. professors at top schools)?
Unfortunately, I don’t think REUs have had as significant an impact on diversity as desired … yet. Obviously the future may prove me wrong, but I’m not optimistic. I did a quick search on professorship patterns in the past few years. Check out this MIT article, for instance. In 2007, not that long ago, 25 MIT professors were promoted, and exactly one was a woman.
Hey, science isn’t alone; look at politics. Even though the 113th Congress has been among the most diverse ever, the great majority of its members – 67% – are white males, and in Obama’s cabinet, whose positions are arguably more prestigious than being members of Congress, white males dominate (at least 69% to date).
One way that I think REUs could ease (or perhaps confirm) my concerns would be to publish a list of their participants and, most importantly, their current occupation. The only REU that I know of that does this really well is at the University of Minnesota, Duluth. Looking here, their program director has produced a meticulous history of past participants and listed where they are now. I believe the NSF should encourage sites to have lists like these, so that there’s a greater sense of how successful these places are at supporting diversity. And we can’t just look at where students go to graduate school; we have to look at how they perform after graduate school.
Just as I did last summer, I browsed through the list of computer science REUs listed here to figure out possible research sites for this summer. While I was reading through Rochester Institute of Technology’s computer science REU, I noticed something unique about their award abstract:
[…] With specific recruiting efforts that target underrepresented groups such as women, minorities, and persons with disabilities, especially deaf and hard-of-hearing students, this REU program also aims to increase the size and diversity of the scientific workforce.
Most of these REUs embody the NSF’s commitment to increasing diversity among the next generation of scientists. In almost all cases, however, these award abstracts do so by encouraging women and minorities to apply. A handful mention disabilities, but only RIT’s site specifically mentions deaf and hard-of-hearing students. To put the numbers in perspective:
- Women compose roughly 50 percent of the population.
- African Americans, Latin Americans, and Native Americans compose roughly 30 percent of the population.
- People with disabilities in the late teens and young adult group compose roughly 5-8 percent of the same aged population.
I had to approximate a bit with the third figure, but it’s clear that people with disabilities form a minority even compared to the second group. And then there are an extraordinary number of subdivisions of disabled people, and within each kind a range of severity. It is probably best to conflate these into one general category, but I do wish that institutions trying to promote diversity would add disabled people as a category alongside women and minorities.
By the way, I didn’t apply to the linked REU at RIT. Their grant money apparently ran out in 2012 and the NSF didn’t renew it. Perhaps they will reconsider this year.
My new research project is underway. It involves the use of a web crawler that can reach the computer science home pages for colleges and universities across the country. That web crawler can then try and derive data from web pages belonging to individual computer scientists. Given that data, is it possible for us to determine a set of features that will indicate whether the web crawler has found the homepage of a female computer scientist? If so, then it might be possible to reach out to those people by adding them to mailing lists, such as the one belonging to the CRA-W. I hope to learn a lot during this project, and I’ll probably discuss it later on Seita’s Place.
Since this is a machine learning project, I’m using the Weka software to help me run experiments and determine what features in the data are most useful in indicating whether a homepage belongs to a female computer scientist. An example of a feature I think would be very indicative is the number of times the pronoun “she” appears. Another one might be how often a name appears in an index of common American female first names.
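To illustrate the kind of feature extraction this involves, here is a hedged sketch; the tiny name index and the sample text are placeholders of my own invention, and a real pipeline would export these counts into Weka’s ARFF input format rather than stop at a Python dictionary:

```python
import re

# Toy index of common American female first names. This is an assumption
# for illustration only; a real run would load a much larger index
# (e.g., one derived from census data).
FEMALE_FIRST_NAMES = {"mary", "jennifer", "linda", "karen", "susan"}

def extract_features(page_text):
    """Compute two candidate features from a homepage's raw text."""
    words = re.findall(r"[a-z']+", page_text.lower())
    she_count = words.count("she")  # occurrences of the pronoun "she"
    name_hits = sum(1 for w in words if w in FEMALE_FIRST_NAMES)
    return {"she_count": she_count, "female_name_hits": name_hits}

sample = "Susan is a professor. She studies machine learning. Her homepage says she teaches AI."
features = extract_features(sample)  # {'she_count': 2, 'female_name_hits': 1}
```

A classifier in Weka would then learn how much weight each such feature deserves, rather than us guessing the thresholds by hand.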
Because I’ve never used Weka before, I looked at this tutorial on YouTube. I wasn’t planning to watch the whole thing, since I just wanted to see what the user’s settings were, where he clicked, and other small things. But I was surprised to see that there were complete and correct subtitles for the entire 23-minute video! This wasn’t the Google captioning that you can just click on for most videos. This guy, Brandon Weinberg, actually inserted a full transcript of his spiel. Major kudos goes to him.
Don’t you wish that every YouTube video could be like this? That would be a nice Christmas present for me in 2030.
According to the National Science Foundation, the fields with the highest proportion of international students in American Ph.D programs are computer science, engineering, and mathematics. The majority of these students are from China, India, or South Korea. These statistics worry me, but not because of the increased competition for precious few spots in respected doctoral programs. From what I can tell, it’s generally easier for American students to get into Ph.D programs than it is for foreign students. I won’t get into the exact reasons, as that’s enough for an entirely different blog post.
But what I am worried about is how I will communicate with those foreign students. Many have accents that make it difficult for me to understand their speech. I encounter this problem frequently at Williams College, where about eight percent of the student body is international. Even the simplest conversation might require me to ask my conversation partner to repeat sentences multiple times before I understand them, which can leave both of us feeling awkward.
So how can a deaf person fix this problem? If you have enough hearing to easily converse with most Americans, one thing I strongly recommend is to actively continue talking with people whose accents are difficult to understand. If you see those people alone, start a conversation! The worst things you can do are to avoid them or, failing that, to gently nod along at whatever they say. Resist both of those urges! The logic is quite simple: the more you talk with someone, the more you get used to his or her style of speech. And eventually, though it might take a while, conversations will require fewer and fewer of the “Pardon?”s and “What?”s that are an all too common occurrence in my life. As a case in point, my ASL interpreters have told me that I can understand the voices of certain international students at Williams College better than they can. Not coincidentally, those are among the students I’ve talked with the most.
This is important to me because I’m going to have to manage this in graduate school. The strength of a Ph.D program depends in part on the quality of its students. In strong computer science programs, I’m sure many foreign students have enough background (i.e. at a Master’s level) to pass the qualification examination on day 1. Clearly, these students are valuable resources for me. Almost all current research is the product of collaboration of at least two people, so I’m going to have to communicate with my peers if I participate in a project with them. They are the ones who I can learn the most from, so let’s start talking.
I’ve got another finals period coming up. It concludes on December 17th, so I’ll update after that point. (And modify this post.)
See you soon.
All right, I’m back home and settled for the holidays. This past semester went relatively well; I only took classes in my two majors, and I achieved my goal of a 4.0 GPA and have set up a new research project for this winter. Unfortunately, I wasn’t able to do as much theoretical computer science review on this blog as I wanted, so I’ll continue that in the beginning of 2013. Related to that, here are some of the possible topics I’ll write about this winter and the spring:
- Deaf friendly tactics
- Computer science research topics
- More posts that weave deafness and computer science together, such as deaf computer scientists or new machine learning technology (hey, my current area of research!)
- How to ace the GRE
- Anything else I can think of …
And here we are with what I think is the most important “new” concept in this course: the Turing Machine. Roughly speaking, Turing Machines are automata that are equivalent in power to what we think of as normal computers today. Therefore, any problems that are unsolvable by Turing Machines are beyond the limit of current computation. Let’s briefly go over the automata we’ve seen thus far:
- Finite Automata (DFAs and NFAs)
- Pushdown Automata (PDAs and NPDAs)
- Turing Machines (TMs, NTMs)
The N’s denote the nondeterministic versions. In other words, they allow multiple branches of computation to proceed simultaneously, rather than having one strictly defined path for each input string as would be the case with deterministic automata.
Here, the automata are listed in order of increasing power. Finite automata recognize the class of regular languages. Pushdown automata recognize the class of context-free languages. All regular languages are context-free, but not all context-free languages are regular. This is why pushdown automata are considered more “powerful” than finite automata, with respect to computability. Their unbounded stack gives them extra memory that can be used to recognize certain non-regular languages.
But Turing Machines (abbreviated as TMs), which recognize the class of decidable and recognizable languages (to be explained in a future blog entry), take things a step further. Like finite automata and pushdown automata, TMs have states and use transition functions to determine the correct path through states to take upon reading in an input string. But they have an additional feature called an infinite tape that is depicted in the picture above. There is also a tape head that reads one symbol at a time from the tape. This is a fairly abstract concept, and it’s often surprising to first realize that this simple addition to an automaton allows it to recognize the class of all computable functions.
For instance, if we wanted to compute the value of a function, such as a function that returns twice its input, then assuming our alphabet consists of 0s and 1s, the infinite tape starts with the binary representation of the input, and once the computation is over, contains the binary representation of the output. Thus, the infinite tape can be the medium through which input/output occurs. Obviously, a TM must have some way of modifying the tape, and there are two ways of doing so.
- The function δ(q, a) = (r, b, L) means that, if the TM is at state q and the tape head reads an a, the TM will replace the a with a b on the tape, move to state r, and move the tape head left.
- The function δ(q, a) = (r, b, R) means that, if the TM is at state q and the tape head reads an a, the TM will replace the a with a b on the tape, move to state r, and move the tape head right.
That’s literally all we need to know about a TM’s transition function. That the tape head can move along the infinite tape and replace symbols on the tape from anything in the TM’s pre-defined “tape alphabet” is what allows a TM to go through a computation. There’s a small caveat: we obviously need a place for the tape head to start out on the tape! So when we say the TM’s tape is infinite, we really mean it’s infinite in the rightward direction. That is, there is a “bumper” that indicates the start of the tape, and the rest of the tape extends to the right. Therefore, if the TM’s tape head is on the first symbol of the tape, it can’t move left since the bumper prevents it from doing so. Thus, having a transition function that causes the tape head to move left while at the leftmost symbol of the tape will just leave the tape head where it is.
Another important thing to know about the tape is that, since it’s infinite, every square must hold some symbol. The input string takes up the first few squares of the tape, but beyond that, the tape is filled with what’s known as the blank symbol. Blank symbols are part of the tape alphabet, so it’s legal to write them on the tape, as well as replace them if necessary.
The infinite tape is not the only feature that makes TMs different from finite automata (for instance, TMs have only one accept and one reject state, and their effects take place immediately), but it’s by far the most important one. To understand a TM, it is necessary to have a “good feel” for how the tape works in your head. I can’t emphasize this enough. Don’t get bogged down in the technical details of the Turing Machine; use a high-level view, and make sure you get an intuitive understanding of how they work. I like to do this by imagining arbitrary computation paths and moving the tape head around the infinite tape.
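To make the tape mechanics concrete, here is a minimal sketch of a deterministic TM simulator. The machine at the bottom is my own toy example, not anything from the course; the sketch implements the rightward-infinite tape, the blank symbol, and the bumper rule described above (a left move at the leftmost square leaves the head in place):

```python
BLANK = "_"

def run_tm(delta, tape_str, start, accept, max_steps=10_000):
    """Simulate a deterministic single-tape Turing Machine.

    delta maps (state, symbol) -> (new_state, write_symbol, move),
    where move is 'L' or 'R'. The tape is infinite only to the right;
    a left move at the leftmost square leaves the head where it is.
    Returns (final_state, tape_contents_without_trailing_blanks).
    """
    tape = list(tape_str) or [BLANK]
    head, state = 0, start
    for _ in range(max_steps):
        if state == accept:
            break
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        if move == "R":
            head += 1
            if head == len(tape):
                tape.append(BLANK)  # extend the tape rightward with blanks
        elif move == "L" and head > 0:
            head -= 1  # the "bumper" blocks moving left past square 0
    return state, "".join(tape).rstrip(BLANK)

# Toy machine: overwrite every 0 with a 1, then accept at the first blank.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", BLANK): ("qacc", BLANK, "R"),
}
state, tape = run_tm(delta, "0101", "q0", "qacc")  # ('qacc', '1111')
```

Playing with machines like this one, watching the head sweep across the tape, is exactly the kind of high-level intuition I mean.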
Let’s do a quick example question that highlights the importance of understanding how a TM works over the nitty gritty details.
Example Question and Answer
Question: Say that a write-once Turing Machine is a single-tape TM that can alter each tape square at most once, including the input portion of the tape. Show that this variant TM model is equivalent to the ordinary TM model.
Answer: We first show how we can use a write-twice TM to simulate an ordinary TM, and then build a write-once TM out of the write-twice TM.
The write-twice TM simulates one step of the original machine by copying the entire tape over to a fresh portion of the tape to the right of the portion used to hold the input. The copying procedure marks each character as it gets copied, so this procedure alters each tape square twice, once to write the character for the first time, and again to mark that it has been copied, which happens on the following step when the tape is re-copied. When copying the cells at or adjacent to the marked position, the tape content is updated according to the rules of the original TM, which allows this copying procedure to simulate one step of an ordinary TM. (Minor technical detail: we also need to know the location of the original TM’s tape head on the corresponding copied symbol.)
To carry out the simulation with a write-once machine, operate exactly as before, except that each cell of the previous tape is now represented by two cells. The first of these contains the original machine’s tape symbol, and the second is for the mark used in the copying procedure. The input is not presented to the machine in the format with two cells per symbol, so the very first time the tape is copied, the copying marks are put directly over the input symbol.
Quite nice, isn’t it?
I’m someone who tends to do a lot of work solo, but once in a while, I find myself working in groups. This typically arises in academic settings, but the idea is broad enough to apply to many areas of life. There have been advantages and disadvantages to working in groups, but one thing is clear: the fewer people in the group, the greater my enlightenment, productivity, and satisfaction. I imagine that many deaf people will agree with this simple tactic that I call the Power of Two rule:
Divide people into groups of two.
If that’s not possible, then try three people, then four, and so on … but always be sure to use the minimum number of people possible in each group. Why do I consider this a deaf-friendly tactic? Because it’s much easier to communicate in a one-on-one setting than in a many-on-one setting.
Let’s compare the two situations. Suppose a professor of a college computer programming class assigns students to work together in groups of three on some major project. (Furthermore, suppose there’s one deaf student in the class.) Assuming all three students are roughly at the same skill level, I can see three situations arising.
- One student dominates the decision-making of the project, and acts as the de facto leader, while the other two essentially obey orders. The workload may or may not be equal, but what stands out here are the social dynamics between the leading student and the two others.
- Two students become closer to each other, while the third is more or less isolated, relegated to following the lead of the other two. Again, I’m intentionally ignoring the workload here — maybe the isolated student works more, less, or just as much as the other two.
- The three students are equally close to each other. In other words, if we were to assign a measure of how friendly two students were to each other, the score would be roughly equal for all three of the possible pairings of the three students.
From my experience, #3 rarely happens, unless the professor was lucky enough to assign to a group three students who knew each other equally well. And I believe #2 probably happens more often than #1. The more I think about it, the more I believe a deaf student mingling with hearing students is likely to be the unfortunate third wheel in the social group. From a hearing person’s perspective, he or she basically has the choice between interacting with someone who can talk and hear just as well versus someone with whom communication tends to be more difficult and may require third-party assistance. Given the convenience, why wouldn’t that person opt to talk with the hearing person more often? Obviously, I’m ignoring tons of extraneous factors, but I consider them irrelevant to my main argument.
But if there are only two people in a group … isn’t that much more helpful to a deaf person? Now, there’s really no way for either member of the group to avoid communicating with his or her partner. Moreover, I believe that in a group of two, members become more comfortable having personal discussions without a third person. So the benefit of two people is that, as they hash through ways to complete a project, the collaboration between the two is on a closer level than with three people. Both members also have a larger say in the decision-making process than they would in a group of three. Thus, it becomes easier for the members to know exactly what’s going on in their project.
So to anyone thinking about dividing up people, I suggest keeping the “Power of Two” rule in mind.
[This post is part of a series that I started here.]
After talking with someone who was toiling away at his physics National Science Foundation (NSF) Graduate Fellowship application, I checked the NSF website and saw that November 13 is the deadline for 2013 GRFP applications in computer science. As it’s the beginning of November, I’m aware that I probably have more important things to worry about than an application deadline that won’t affect me for at least another year. (Hey, I heard the Presidential election results will be out in a few days….)
But in the back of my mind, I know it’s almost certain that I will be applying for a computer science NSF Graduate Fellowship next fall. I’m not sure how I’ll handle that along with four courses (one of which will be a thesis), graduate school applications, and likely some teaching assistant duties, but I’ll manage.
One thing that caught my eye from the NSF website was this text:
The NSF welcomes applications from all qualified students and strongly encourages under-represented populations, including women, under-represented racial and ethnic minorities, and persons with disabilities, to apply for this fellowship.
So does that mean that a deaf person, like me, has a slight advantage in receiving a fellowship, as compared to similarly qualified students? It sounds like it, which can only be good news for me. I’m not sure how many deaf computer science Ph.D. students have received NSF Fellowships. (A quick search online gave me no results.) I’m obviously hoping to be one of the few to get this ultra-competitive fellowship. I am curious, though, as to how much of my application should focus on being deaf. Should it be the main topic of my application essays? Should I make it one of many points as to why I would be a strong candidate for a fellowship? My college application essays focused almost entirely on my being deaf; given my lackluster acceptance results, perhaps I shouldn’t talk about being deaf that much? But then again, the fellowship application has several required essays encompassing a variety of topics; I should probably avoid talking about deafness in, for instance, the previous research essay, since my previous research hasn’t had anything to do with being deaf.
Obviously, this is just pure speculation as I ponder about the NSF Graduate Fellowship program. The next few years for me have the potential to give me an enormous head-start on my future career. I’m crossing my fingers that everything will proceed as planned. But for now, I thought I’d post this up here to remind myself of an important date to keep in mind for the next year.
(Note: I’m aware that I’ve been a bit computer science heavy in the past month; I’m working on getting additional topics up here, but time is scarce, and I may end up revising these posts a bit in the winter.)
We’re moving on to the next unit of Theory of Computation, which deals with context-free languages. These are a class of languages that contains all the regular languages as well as some non-regular ones. (See CS Theory Part 2 for ways in which I prove certain languages are not regular.) What makes them different from regular languages is that a language is context-free if we can derive a context-free grammar that describes it. A grammar is simply a way of constructing strings using a set of terminals and variables. Consider the example below.
This grammar will generate the language containing all strings that have more 1s than 0s, assuming the entire alphabet is composed of just 0 and 1.
Clearly, using grammars to generate languages is more powerful than using regular expressions, because of the ability to generate non-regular languages. To derive any string from the language, we start with the variable $S$, which generates a required 1; this is necessary because in order for a string to have more 1s than 0s, we must have at least one 1, as the string “1” is clearly in the language. But $S$ generates another variable with that 1, which we call $T$. This can recursively expand to generate an infinite number of strings. For each derivation, we substitute the $T$ with a randomly chosen production. The derivation process ends when the string consists of all terminals (i.e. 0s and 1s), and the resulting string will be a member of the language. The language described above is non-regular because any DFA recognizing it would have to keep track of how many 0s and 1s there were, and there could be arbitrarily many of them.
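Since the grammar itself doesn’t render here, here is a sketch of the derivation process on the simplest grammar I know, $S \to 0S1 \mid \varepsilon$, which generates $\{0^n 1^n \mid n \geq 0\}$ (this stand-in grammar is mine, not the one from the post):

```python
import random

# Derive a random string from the toy grammar S -> 0S1 | epsilon.
# Each loop iteration replaces the variable S with one of its productions,
# exactly the substitution process described above; the derivation ends
# once the string is all terminals.
def derive(depth_limit=10):
    string = ["S"]
    while "S" in string:
        i = string.index("S")
        # Randomly pick a production, forcing epsilon once we hit the limit.
        if depth_limit > 0 and random.random() < 0.5:
            string[i:i+1] = ["0", "S", "1"]   # S -> 0S1
            depth_limit -= 1
        else:
            string[i:i+1] = []                # S -> epsilon
    return "".join(string)

for _ in range(5):
    s = derive()
    n = len(s) // 2
    assert s == "0" * n + "1" * n   # every derived string has the form 0^n 1^n
    print(repr(s))
```

Every string this produces is in $\{0^n1^n\}$, and every string of that language can be produced, which is what it means for the grammar to generate the language.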
I now want to introduce a new kind of automaton, the pushdown automaton. These are analogous to NFAs in that constructing a pushdown automaton (PDA) that recognizes a given language is the same as showing that the language is context-free. Thus, to show a language is context-free, either (1) construct a context-free grammar generating it, or (2) construct a PDA recognizing it.
The main difference between NFAs and PDAs is the stack component of the PDA, which is empty at the start of an input string but can have alphabet symbols (e.g. a 0 or a 1) or arbitrary symbols added as the input string is read. This is the heart of why context-free languages are a superset of the regular languages; an NFA is a PDA, but a PDA provides additional counting and memory capabilities, which allow languages such as the canonical non-regular language $\{0^n 1^n \mid n \geq 0\}$ to be context-free. Like NFAs, we can easily draw PDAs to recognize languages using states and transition arrows. This time, when making a transition arrow, we use labels of the form $a, b \to c$. This is saying “if we read $a$ as input, we may pop $b$ off the top of the stack and push $c$ on”. Note: if the symbol on the top of the stack doesn’t match the transition function, that computational path dies. Furthermore, $\varepsilon$ can be used anywhere in the transition function, as we’ll see in the always-necessary example later.
What language do you think this PDA recognizes? Note: the dollar sign is added on the stack before the first character is read; it’s used solely to indicate the bottom of the stack.
The above PDA accepts exactly the language $\{ww^{\mathcal{R}} \mid w \in \{0,1\}^*\}$, where $w^{\mathcal{R}}$ denotes $w$ reversed. It nondeterministically guesses where the middle of the input string will be, and tries to pop off stuff from the stack from that point on. Since we have nondeterminism here, we are guaranteed that if a string is in the language, then some computation branch will accept it. Pretty impressive! Notice that this language is non-regular, which we can justify with a simple pumping lemma proof.
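Since the PDA diagram doesn’t reproduce here, here’s a sketch in Python of the nondeterministic idea: try every possible midpoint, push the first half onto a stack, and pop it against the second half. (The function name and structure are mine, not from the course.)

```python
# Simulate the PDA's nondeterministic guess for {w w^R | w in {0,1}*}:
# for each candidate midpoint, push the first half onto a stack, then try
# to pop it off symbol-by-symbol against the second half.
def accepts_ww_reversed(s):
    for mid in range(0, len(s) + 1):   # the PDA "guesses" the middle
        stack = []
        for ch in s[:mid]:
            stack.append(ch)           # push phase
        ok = True
        for ch in s[mid:]:
            if not stack or stack.pop() != ch:
                ok = False             # mismatch kills this computation path
                break
        if ok and not stack:           # accept if the stack empties exactly
            return True
    return False

print(accepts_ww_reversed("0110"))   # True  (w = "01")
print(accepts_ww_reversed("011"))    # False (odd length)
```

Trying every midpoint is exactly what nondeterminism buys: if any single guess leads to acceptance, the string is in the language.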
Up next for me is arguably the most important concept of the course: the Turing Machine.
After a little bit of a delay, I’m back to writing a little more about theory of computation. (Be warned that this post will assume more prior knowledge about automata than the previous entry.) So far, we’ve been learning a lot about languages, which are sets of strings formed from some alphabet (e.g. 0 and 1 for binary numbers) that obey certain guidelines. Languages can be classified as either regular or non-regular. Regular languages are those that can be recognized by some DFA, and I gave some examples of that in the first Theory post. But not all languages are regular. And so far, I believe there are three main ways to prove a language is not regular.
- Closure properties of regular languages (unions, intersections, concatenations, “star” operation, etc.)
- The Pumping Lemma
- The Myhill-Nerode Theorem and Fooling Sets
The usefulness of these three depends on the language. For some, the closure properties make it easiest to prove non-regularity; for others, it might be the Myhill-Nerode theorem. In fact, there are some non-regular languages that nevertheless satisfy the conditions of the Pumping Lemma! Satisfying the lemma is a necessary condition for regularity, not a sufficient one, so if a language satisfies it, all bets are off as to whether it’s regular or not. But if a language “passes” the requirements of Myhill-Nerode, it must be regular. Clearly, different languages call for different strategies to prove regularity or non-regularity.
I thought I’d present an example of a non-regular language and see how all three methods could be applied. First, the canonical non-regular language (meaning, the one that usually introduces students to the world of linguistic non-regularity) is the following:

$$\{0^n 1^n \mid n \geq 0\}$$

It’s the language consisting of all strings with an equal number of zeroes and ones, with the ones following the zeroes. Intuitively, it’s fairly simple to see that it’s non-regular. If a DFA were to recognize this language, it would have to count the zeroes, then the ones. Since $n$ can be arbitrarily large, this would require infinitely many states, and a DFA is by definition a *finite* automaton: it has finitely many states. With this knowledge in hand, combined with the three common ways to prove non-regularity, let’s consider this similar language:
$$C = \{0^m 1^n \mid m, n \geq 0 \text{ and } m \neq n\}$$

It’s similar, but now we require that there be a *different* number of zeroes and ones, and again, that the ones all come after the zeroes.
So let’s consider the different ways we could prove that this language is not regular.
First, we look at closure properties of regular languages. A theorem states that the regular languages are exactly those described by regular expressions. We know that $0^*$ and $1^*$ are both regular expressions. Furthermore, the class of regular languages is closed under concatenation, so the regular expression $0^*1^*$ describes a regular language! Notice the key difference between this language and the previous two: the fact that we don’t have restrictions on the number of 0s and 1s makes it possible to construct a DFA for it! Such a DFA is fairly simple to construct: it only requires three states, one to start and represent the initial zeroes, one to represent the ones, and a death state for when, at any point, we see a 0 following a 1.
Why is knowing that $0^*1^*$ is regular important? The class of regular languages is closed under intersection, so consider the language

$$\overline{C} \cap 0^*1^*$$

where $\overline{C}$ represents the complement of $C = \{0^m 1^n \mid m \neq n\}$. Yet another theorem is that regular languages are closed under complement. That is, if $C$ is regular, then $\overline{C}$ is regular. So if we assume $C$ is regular, then the intersection $\overline{C} \cap 0^*1^*$ must be regular by intersection closure. But that intersection is exactly the canonical non-regular language $\{0^n 1^n \mid n \geq 0\}$! Hence, $C$ must be non-regular.
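The intersection-closure theorem used here has a constructive proof, the product construction. Here’s a sketch on two toy DFAs of my own choosing (even number of 0s, and ends in a 1), since we obviously can’t build a DFA for a non-regular language:

```python
from itertools import product

# Product construction: given DFAs recognizing A and B, build a DFA for
# the intersection A ∩ B whose states are pairs (state of A, state of B).
def intersect(d1, d2):
    states1, delta1, start1, accept1 = d1
    states2, delta2, start2, accept2 = d2
    delta = {}
    for (s1, s2), ch in product(product(states1, states2), "01"):
        delta[((s1, s2), ch)] = (delta1[(s1, ch)], delta2[(s2, ch)])
    accept = {(a, b) for a in accept1 for b in accept2}
    return delta, (start1, start2), accept

def run(dfa, s):
    delta, state, accept = dfa
    for ch in s:
        state = delta[(state, ch)]
    return state in accept

# DFA 1: even number of 0s.  DFA 2: string ends in a 1.
d1 = ({"e", "o"}, {("e","0"):"o", ("e","1"):"e", ("o","0"):"e", ("o","1"):"o"}, "e", {"e"})
d2 = ({"n", "y"}, {("n","0"):"n", ("n","1"):"y", ("y","0"):"n", ("y","1"):"y"}, "n", {"y"})
both = intersect(d1, d2)
print(run(both, "001"))   # True: two 0s (even) and ends in 1
print(run(both, "011"))   # False: one 0 (odd)
```

The product machine has $|Q_1| \times |Q_2|$ states, still finite, which is why intersection preserves regularity.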
The Pumping Lemma
That wasn’t too bad, but things get slightly more complex when we use the pumping lemma. We assume, by way of contradiction, that our language $\{0^m 1^n \mid m \neq n\}$ is regular. Then by the pumping lemma, there exists some pumping length $p$ such that any string in the language of length at least $p$ can be partitioned into components $xyz$, with $|xy| \leq p$ and $|y| \geq 1$. Consider the string $s = 0^p 1^{p+p!}$, which clearly satisfies the minimum length requirement, so it can be pumped. Since $|xy| \leq p$, we have $y = 0^k$ for some $k \geq 1$. Let us pump the $y$ component so that it appears $1 + p!/k$ times, and the resulting string must be in the language by the pumping lemma. But the pumped string turns out to be $0^{p+p!} 1^{p+p!}$, which has equal numbers of 0s and 1s, a contradiction. Any language that fails the pumping lemma is not regular, so our language is non-regular.
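As a sanity check on the arithmetic: the standard choice for this proof is $s = 0^p 1^{p+p!}$ with $y = 0^k$ pumped to $1 + p!/k$ copies, and the snippet below verifies numerically that the pumped string always ends up with equal counts of 0s and 1s:

```python
from math import factorial

# For s = 0^p 1^(p+p!), pumping y = 0^k to 1 + p!/k copies yields
# 0^(p+p!) 1^(p+p!): equal counts, hence NOT in {0^m 1^n : m != n}.
def pumped_counts(p, k):
    assert 1 <= k <= p and factorial(p) % k == 0
    copies = 1 + factorial(p) // k
    zeros = (p - k) + k * copies       # the k zeros in y are repeated
    ones = p + factorial(p)            # the ones are untouched by pumping
    return zeros, ones

for p in range(2, 7):
    for k in range(1, p + 1):          # p! is divisible by every k <= p
        zeros, ones = pumped_counts(p, k)
        assert zeros == ones
print("pumped string always has #0s == #1s")
```

The divisibility of $p!$ by every possible $k \leq p$ is exactly why that exponent is chosen: whatever $y$ turns out to be, we can pump to equality.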
The Myhill-Nerode Theorem
Knowing how to use the pumping lemma after reading the solution seems simple, but the hard part is actually coming up with the string to pump. We wrap up by using the often easier Myhill-Nerode method to prove that this language is not regular. Let’s use the fooling set $S = \{0^n \mid n \geq 0\}$. Any two distinct strings $0^i, 0^j \in S$ (with $i \neq j$) have a different number of zeroes. Now take the extension $z = 1^i$. The string $0^i z = 0^i 1^i$ has equal counts of 0s and 1s, so it is not in the language, but $0^j z = 0^j 1^i$ has unequal counts, so it is … these strings are distinguishable from each other! And since our fooling set is infinite, and any pair of distinct strings in it is distinguishable, any DFA recognizing this language would need infinitely many states, which is impossible. Hence, this language is not regular.
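A fooling-set argument like this one can be checked mechanically. The sketch below tests membership in $\{0^m 1^n \mid m \neq n\}$ directly and confirms that $z = 1^i$ distinguishes every pair $0^i, 0^j$:

```python
# Membership test for {0^m 1^n : m != n}: the string must be zeroes followed
# by ones, with unequal counts.
def in_lang(s):
    zeros = len(s) - len(s.lstrip("0"))   # count the leading zeroes
    ones = len(s) - zeros
    return s == "0" * zeros + "1" * ones and zeros != ones

# For every distinct pair 0^i, 0^j, the extension z = 1^i separates them:
# 0^i 1^i has equal counts (not in the language), 0^j 1^i has unequal counts.
for i in range(0, 8):
    for j in range(0, 8):
        if i != j:
            z = "1" * i
            assert not in_lang("0" * i + z)
            assert in_lang("0" * j + z)
print("every distinct pair 0^i, 0^j is distinguished by z = 1^i")
```

This is the whole Myhill-Nerode engine in miniature: one extension per pair suffices, and the infinitude of the fooling set does the rest.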
(Personally, I prefer the fooling set method for solving these problems … it’s often very simple, and the same fooling sets can be used to prove non-regularity for multiple languages.)
Up next in my studies are context-free grammars. Stay tuned!
(On a side note, this is my 50th blog entry.)
My blog writing has slowed in the past few weeks, as I’ve been REALLY busy at college. But I’m still alive and well, and I hope to write more about theory of computation and other topics in the upcoming weeks, particularly about the Myhill-Nerode theorem. At the moment, I’m reviewing the moment generating functions that I learned in August while studying MIT’s 18.440 course. I also have a significant amount of computer graphics laboratory work to do this weekend. At least it should be Mountain Day in two days, that one October Friday when the president intentionally cancels classes.
As a side topic, I have, strangely enough, actually gotten more views on this blog than during the summer, when I was updating every few days. I’ll see what the future holds, then. In the meantime, I have midterms to study for, a winter study research project proposal to write, and some GRE studying to do, which may or may not include some writing practice.
As I mentioned before, I am writing a series of blog posts on my Theory of Computation class. This particular post will be somewhat image-heavy due to my complete lack of experience with drawing state machine diagrams in LaTeX. Even the LaTeX I embed in these posts doesn’t look too great with this background, so I’ll have to do some more experiments. UPDATE May 16, 2015: I think the Jekyll + MathJax combination looks great now!
But anyway, in this class, I’m trying to understand three central areas: automata, computability, and complexity, and they are all linked by the following question:
What are the fundamental capabilities and limitations of computers?
Computability and complexity will come later in the course. Right now we’re focusing on automata.
To start off, let’s look at some basic computers, called finite automata. To put things formally, a finite automaton is a 5-tuple $(Q, \Sigma, \delta, q_0, F)$, where1
- $Q$ is a finite set of states
- $\Sigma$ is a finite set known as the alphabet
- $\delta : Q \times \Sigma \to Q$ is the transition function
- $q_0 \in Q$ is the start state
- $F \subseteq Q$ is the set of accept states
But this is very abstract. Let’s get more specific by talking about the alphabet, and I’ll then return to discuss the other four points. An alphabet is defined to be any nonempty, finite set of symbols. For instance, $\{0, 1\}$ is a valid alphabet. And so is the alphabet composed of the 26 letters of the English language.
Related to alphabets, we have strings and languages. A string is just a finite sequence of characters derived from our alphabet. All the English words I type in this blog entry are strings of the alphabet composed of the 26 letters of the English language. The sequence of symbols in 11001100 is a valid string over the alphabet $\{0, 1\}$. Any binary number is a string that can be formed from that alphabet. And a language is a set of these strings.
Here is the key relation between languages and finite automata.
A language is called a regular language if some finite automaton recognizes it.
By definition, a finite automaton recognizes a language if the set of strings the automaton accepts is exactly that language. That’s the key. And to indicate what I mean, let’s look at a state diagram. Many finite automata can be drawn using these diagrams, and it’s highly advantageous to do so given how intuitive they are. The following is a state diagram of a finite automaton that recognizes some language.
It’s a little blurry (future images will be better), but I hope you can still see the interesting symbols. First, there are four large circles, with one having a circle within it. Each of these four circles represents a state. Hence, we have $Q = \{q_1, q_2, q_3, q_4\}$ as our set of states (#1 on my list above). These are used to represent some situation that we encounter as we progress through a given string.
We have $q_1$ forming our *start state* (#4 on my list above). This means when we progress through a given string, this is where we start before we make any “moves.” And $q_3$ represents the lone accept state, indicated by the double circle outline. We can have multiple — or zero — accept states; this particular diagram only has one. If a string ends on one of these states, it’s “accepted” by the machine. Otherwise, it’s rejected.
When I say “progress through a given string,” I refer to the process of determining whether a machine will accept a certain string, and here is where we use the transition function $\delta$ (#3 on my list above), indicated by arrows in the diagram. Our alphabet in the above example is composed of just 0 and 1, so the machine only works with strings of that form. Let’s see what happens when we “input” a few strings into this machine.
- $\varepsilon$ — This is the empty string. We start at $q_1$ and stop there, since we have no characters left. The finish state is the same as the start state, which means the empty string is not accepted, since the machine did not finish in state $q_3$.
- “1” — Here, we again start at the start state. (From this point forth, always assume we start at the start state.) Since we read a “1”, we go to state $q_2$ as dictated by the transition function (i.e. the arrows). We stop here, because we’re out of characters. But unfortunately, “1” did not end in the accept state, so, like $\varepsilon$, it is not accepted by the machine.
- “0” — We go from the start state to $q_4$. Again, we stop, and just as in the previous two examples, the machine doesn’t accept the string.
- By now it should be clearer what gets accepted by the machine. Let’s go with “10” and see what happens. We start at $q_1$ as usual. We proceed to $q_2$ since we started with a 1. Our next character is a 0, so we go to $q_3$ and stop there. At last! We have a string that is accepted by the machine! So whatever the language is here, it better include “10” but exclude “1”, “0”, and the empty string.
- Let’s try “010”. We start by going to $q_4$ due to the leading zero. Then we have “10” left, so we follow the arrow for the “1”, which loops back to the same state. Then our final task is to go where the “0” arrow points, but again, that goes back to the same state! Thus, “010” is not accepted by this machine.
We can go on and on, but at some point, we have to come up with the rules for the language this machine accepts. Notice that state $q_4$ is a “death” or “trap” state, because as soon as a string enters that state, it must end there. All strings are finite, and no matter what symbol we get (which is only a 0 or a 1 here), the machine stays in the same state. In other words, it is impossible for a string to be accepted (i.e. finish at $q_3$) if it ever enters $q_4$. This means that if a string has a leading zero, it will never be accepted by the machine.
So now we know that all strings in the language this machine recognizes must start with a 1. But are there further restrictions? The answer is yes. If we have a string that consists of all 1s, then we will always end at $q_2$ due to the 1 that loops back to that state. This is not an accept state, so we need to consider having a 0 in the string. Notice that the accept state $q_3$ has a 0 that loops back to it. So once a string reaches $q_3$, as long as it ends in a 0, it will remain in that state and be accepted. It doesn’t matter if we have one, five, or a hundred zeroes. The only way a string can leave the accept state is by having a 1, which sends it back to $q_2$. But notice that $q_2$ is not a death state! It is possible to come back to the accept state from $q_2$.
Now we can formalize things. This state machine recognizes the following language $L$:

$$L = \{w \mid w \text{ begins with a 1 and ends with a 0}\}$$
And we know that this language is regular. (To show a language is regular, it suffices to draw the state diagram of a finite automaton that recognizes it.)
It’s also worth pointing out that the diagram I have above should represent the simplest possible state machine that accepts this language. There are infinitely many other diagrams that would also accept this language.
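For readers who want to experiment, here is that machine as a transition table in Python. The state names $q_1$ through $q_4$ are my reading of the diagram, which doesn’t reproduce well here:

```python
# The four-state DFA from the diagram, as a transition table.
# q1 = start, q3 = the lone accept state, q4 = the death/trap state.
DELTA = {
    ("q1", "1"): "q2", ("q1", "0"): "q4",
    ("q2", "1"): "q2", ("q2", "0"): "q3",
    ("q3", "0"): "q3", ("q3", "1"): "q2",
    ("q4", "0"): "q4", ("q4", "1"): "q4",
}

def accepts(s):
    state = "q1"
    for ch in s:                 # follow one arrow per input symbol
        state = DELTA[(state, ch)]
    return state == "q3"

# Matches the walkthrough: strings starting with a 1 and ending with a 0.
for s in ["", "1", "0", "10", "010", "1110", "1010"]:
    print(repr(s), accepts(s))
```

Running it reproduces every case traced above: “10”, “1110”, and “1010” are accepted, while the empty string, “1”, “0”, and “010” are not.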
Now that was a simple example. I want to bring up a more complicated question that uses these same concepts.
Let $L$ be the language consisting of all strings over $\{0, 1\}$ containing a 1 in the third position from the end.
Designing a finite automaton that accepts some language is arguably harder than the reverse process of determining what language a given machine accepts. My solution to the above question is below. The diagram only needs to keep track of the last three digits. There are four accept states, corresponding to the last three digits being 100, 101, 110, or 111, which are the four possibilities for the last three symbols of accepted strings. Naturally, the four other non-accept states correspond to the last three symbols being 000, 001, 010, or 011.
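The “track the last three digits” idea can be sketched in code; the zero-padding of the initial window below is my own convention for handling strings shorter than three symbols:

```python
# DFA for "1 in the third position from the end": each state remembers the
# last three symbols read (padded with 0s before the input starts), and a
# string is accepted when that window begins with a 1.
def accepts_third(s):
    window = "000"                 # start state: the all-zeroes window
    for ch in s:
        window = window[1:] + ch   # slide the 3-symbol window
    return window[0] == "1"

for s in ["100", "0101", "110", "10", "0011"]:
    # Brute-force check against the definition of the language.
    assert accepts_third(s) == (len(s) >= 3 and s[-3] == "1")
    print(repr(s), accepts_third(s))
```

The eight possible windows are exactly the eight states of the diagram, and the four windows starting with 1 are the four accept states.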
Up to now, I assumed that my finite automata were deterministic, so it was always possible to know what was happening. But soon I’ll be moving on to non-deterministic finite automata …
From my textbook, Introduction to the Theory of Computation, Third Edition, by Michael Sipser. ↩
A few days ago, I said that I was on the hunt for simple, yet effective deaf-friendly strategies that most people would be able to apply in life.
Here’s a basic one.
As much as you can, articulate as if you’re in an interview.
To be more specific, suppose you’re on the job market with a freshly minted Ph.D. In today’s situation, you may be competing with 300 other qualified candidates for one tenure-track faculty position at your dream school. If you manage to get an interview, you should feel proud, but it’s not a job guarantee. A competent interview means you remain in the applicant pool, and a terrible one … well, you get the idea.
So during that interview, what are the odds that you’ll mumble, slur, or fail to fully project your voice as your interviewer stands across from you? If you’re serious about getting the job, I think you’ll articulate very clearly.
But do most people extend this kind of voice in informal conversations?
I don’t believe that’s the case. Certainly for me, I pay much more attention to my speech when I’m giving a lecture or talking to a prominent person. But I need to work on extending that mentality to all conversations. I want to always make a good impression on my conversation partner by speaking as well as I can. If everyone else tried to do the same thing, we would all benefit.
That’s why getting into a habit like this is useful.
So please, as much as possible, do the following:
- Project your voice and articulate by moving your lips. This is similar to not mumbling.
- Focus on pronouncing the ends of your phrases to avoid your voice tailing off.
- Don’t talk in a rushed manner; relax and know that, for the most part, everything will be okay.
- Don’t talk so abnormally loudly that hearing people wonder if something’s wrong.
In my case, the first point I listed is the most helpful when I listen to others. But for those who have less “raw” hearing even with the help of assistive listening devices, the fourth may be more important. Still, the first one, in my opinion, should take priority.
Over the next year, I hope to embark on a long-term blog writing project. These posts will center around a key concept: how to be more deaf friendly. I don’t think I will restrict myself to one setting or entity — e.g., how a college or university can be deaf friendly. I want to search for strategies that apply broadly, whether they are used in an educational, social, working, or other environment.
I feel inspired to start this because too many times, ignorant people have made seemingly simple situations, such as one-on-one conversations, much more difficult for me than is necessary. With only slight modifications in accommodations, demeanor, and other strategies, those situations can be made much more pleasant, beneficial, and invigorating for all parties involved.
My plan is to go beyond what the Americans With Disabilities Act and other laws require. I want to focus particularly on either human behavior, which cannot entirely be regulated by law, or simple strategies that are largely unknown. Obviously, I’ll have to be reasonable with what I can expect, and I anticipate that some of my suggestions may be controversial. But I also plan to argue that the tactics that can improve life for the deaf and hard of hearing have benefits that extend to hearing people.
Possible topics I may discuss in the future will likely fall in one of the following categories:
- body language/demeanor
- captioning/subtitles (though I have touched on the topic here and here)
I am eager to see how this turns out, and how much I learn from this project.
My summer job ended on July 28, 2012, and I was back home with about three or four weeks to go until the start of my junior year at Williams. I wanted to put whatever time I had to good use, and one way I thought of doing so was to prepare for my probability course this fall. Of the four courses I’m taking in the fall 2012 semester, probability is the easiest to prepare for since my professor gave out lecture notes in the middle of the summer to students (like me) who wanted to prepare well in advance.
And even better, I was able to take advantage of the wonderful repository of information on MIT’s Open Courseware. There’s a mathematics course at MIT called Probability and Random Variables, or 18.440, with a full set of lecture notes and exams with solutions.
The syllabus for my probability class closely matched what was covered in 18.440, so I wanted to challenge myself and see if I could pass the two midterms and final (using the same constraints as actual MIT students) for that course before setting foot in my actual probability class. It’s a miniature version of Scott Young’s MIT Challenge. For obvious reasons, I don’t have the time nor desire to take a full curriculum of MIT classes.
But I thought investing in understanding 18.440 would pay dividends later.
I started my personal challenge on August 13 and took the first and second midterms on the 22nd and 28th, respectively. I scored a 90 on the first midterm, and a 100 on the second. At that point, there were just seven lectures left (out of thirty or so) before the final exam, but I could tell that most of the information beyond the second midterm would not be included in my real probability class. With other unfinished business this summer, I opted to skip the final. It turned out to be a wise decision; their practice final (they didn’t have an actual one) was almost an exact recitation of the lecture slides, with half of their answer key consisting of “look at the slides for lecture X.” I doubt it would be an effective measure of how well I retained the material. I still reviewed it, but I didn’t take it under timed conditions as I did with the two midterms.
Right now, I consider myself done with my probability review, and am looking forward to finally taking an actual probability class in about a week.
Retrospective – What Worked …
- Solving practice problems – The multitude of problems with answers was extremely effective in showing me applications of seemingly abstract concepts, as well as ways to approach particular problems. The questions on the two midterms were very similar to, and in my opinion much easier than, the practice ones, which is how I achieved high marks.
- Following a textbook – The lecture notes only scratched the surface of the topic, so I had to rely on an outside source. Fortunately, the topics in my professor’s textbook were very much in parallel with MIT’s slides, making the learning process much easier.
- Skipping the homework assignments – There were no solutions provided to the homework, and frankly, I was able to do well on the actual midterms just by looking at some old practice midterms.
… and What Didn’t
- Doing a few lectures per day – I was surprised by how ineffective this was. I retained far more information when going over lectures in bulk. If I didn’t understand a concept in one of the lectures, I would spend hours agonizing over it, when I would have grasped it far more clearly once I faced a practice question that dealt with the topic.
- Emphasis on proof-writing and derivation of formulas – Most of the proofs I understood came from my professor’s textbook. But 18.440’s exams were entirely computational, so the effort I spent understanding certain obscure proofs was largely wasted.
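To give a flavor of the computational style I’m describing, here’s a quick sketch (my own illustration, not taken from the 18.440 materials) of a classic problem of that kind: the birthday problem, solved exactly and then sanity-checked with a short simulation.

```python
import random

def birthday_exact(n):
    """Exact probability that at least two of n people share a birthday,
    assuming 365 equally likely days."""
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

def birthday_simulated(n, trials=20_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        days = [rng.randrange(365) for _ in range(n)]
        if len(set(days)) < n:  # a duplicate day means a shared birthday
            hits += 1
    return hits / trials

print(round(birthday_exact(23), 4))  # roughly 0.5073
print(round(birthday_simulated(23), 3))
```

The exam questions reward exactly this kind of direct computation; a two-line simulation is also a cheap way to catch arithmetic mistakes when no answer key is available.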
Much progress has been made with respect to the amount of film time that is captioned. The Federal Communications Commission has the following basic requirements:
Beginning in July 1993, the Federal Communications Commission (FCC) required all analog television receivers with screens 13 inches or larger sold or manufactured in the United States to contain built-in decoder circuitry to display closed captioning. As of July 1, 2002, the FCC also required that digital television (DTV) receivers include closed captioning display capability.
Of course, there are some exceptions to closed captioning requirements, most of which have to do with undue economic burdens. I’ll talk more about that later.
As someone who relies heavily on reading subtitles or captioning when watching film, I’m supportive of any policy that requires them as long as the captions don’t unreasonably hinder my view of the screen. While there’s a slight distinction between subtitles and captioning, I will only refer to “captioning” here for the purposes of brevity.
Today, I will focus on captioning as it pertains to airlines. According to this page, they are only required on safety videos. Consequently, many people have voiced their concerns that airlines are failing to fully accommodate the hearing impaired by not having fully accessible videos.
My most recent captioning-related issue (and the one that compelled me to write this entry) came from flying to and from Honolulu, Hawaii on United Airlines. On my flight from Chicago to Honolulu, there were many small television screens, each shared among eight to twelve passengers. Although the flight, and thus the total screening time, lasted about nine hours, the only full captioning I saw was on the introductory safety and emergency video. While I commend the airline for providing those captions, it left much to be desired when the four movies shown afterward were not captioned. Passengers had to use a set of earphones to listen to the audio, something that would not be feasible or safe for me with my hearing aids and profound hearing loss. I ended up ignoring the movies entirely and wrote some blog entries on paper.
The return trip, also on United, was slightly better. This time, one of the flights was from Honolulu to Washington D.C., and on that one, every passenger had his or her own television screen. Still, it was difficult to obtain captioning. In fact, it took a little luck and some experimenting to figure out a workaround that got captioning on some, but not all, of the 170 movies offered (in economy class). I left the plane in an acerbic mood, with a paper pad filled with what would become the foundation of this blog entry.
My Thoughts and Complaints
United Airlines is one of the largest airlines in the world, so I assume it at least can’t plead financial destitution as a rationale for not providing fully accessible captioning. My hope is that, starting with the largest airplanes, whenever a movie is shown, captions are either (1) mandatory, or (2) optional but always accessible. This policy should apply to all passenger classes, regardless of how much one has paid for the ticket.
Scenario (1) should apply on flights like the one I described from Chicago to Honolulu, where passengers share television screens. Viewers should not have to go through the trouble of manually turning captions on, and then worry that others will turn them off out of annoyance or for other reasons.
Scenario (2) should occur on flights like mine from Honolulu to Washington D.C., where each passenger gets his or her own television screen. For movies, captions should always be an option if their corresponding language is used in the audio soundtrack. A movie offering soundtracks in English, Japanese, and Spanish, for instance, should have English, Japanese, and Spanish captions as an option.
Obviously, I’m not saying all of these changes should happen at once. I understand that in this economy, airlines are operating on razor-thin profits. But progress has to start somewhere, and it has to move at a reasonable rate. I hope the largest airlines can implement these changes on their biggest airplanes first, with English captions added before captions in other languages. I’m not sure whether captions are fully accessible to passengers in first or business class, because I’ve never sat in those seats. But if they are, good. That’s a start, and it needs to be expanded to economy class, with the accessibility trend trickling down to as many small commercial planes as possible.
I’m a realist. Full accessibility is almost certainly never going to happen. But we can get as close to it as is reasonable.
As I mentioned earlier, I wrote part of this blog on my flight. Some of it ended up in a letter form that I sent to the Aviation Consumer Protection and Enforcement. I’ve decided to reproduce it below. Anyone else who feels compelled to take similar action, please do so. I appreciate relevant feedback and thoughts.
Dear Aviation Consumer Protection and Enforcement,
I am a twenty-year-old deaf college student who has recently completed a round trip from Albany, New York, to Honolulu, Hawaii via United Airlines. As someone who cannot easily understand audio tracks, I am concerned about the lack of captioning on most of United Airlines’ movies and film clips. My hope is to raise attention to this issue and see whether United Airlines can eventually add captioning to all clips shown on its television screens. My letter is not meant to single out and traduce United Airlines in particular; it addresses a situation common among many airlines, in the hope that at least one will recognize the appropriate course of action.
I will focus primarily on my 9-hour flight (number 144) from Honolulu to Washington D.C. (Dulles) that took off on August 12, 2012. While walking to my economy class seat, I was ecstatic to see that each passenger had his or her own personal television screen. I anticipated being able to watch many of the movies stored in the plane’s database.
I immediately tried to see if I could obtain captioning or subtitles for the movies, and was disappointed when I couldn’t figure out a way to do so. A flight attendant confirmed my concerns, telling me she did not know how to get captioning either.
I was not about to give up. Eventually, after a few minutes of tinkering and some luck, I figured out a way to get captioning, but only on a certain subset of the movies. Of the seven genres United Airlines offered (New Releases, Action/Thriller, Classics, Comedy, Drama, Family, and World Cinema), only one guaranteed captioning: World Cinema, accounting for 36 of the 170 total movies available to me. I suspect this is because those movies were filmed in non-English-speaking countries.
But this still means I am denied the ability to enjoy most of the movies offered. If it is not a huge burden to add captioning as an option to all movies, then this is a violation of Title IV of the Americans with Disabilities Act (codified in Title 47 of the United States Code), which deals with telecommunications access.
Perhaps the most unfortunate realization from my experience on that flight was that, in terms of accessibility for the deaf and hard of hearing, it represented a best-case scenario. I at least had the option to watch a smaller selection of movies with captioning, even if it did not include my top choices. I still have not seen The Hunger Games, which was offered, but not captioned, on that flight.
I call this a best-case scenario because on most flights, I do not have that luxury of choice. I typically have to share a screen with eight to twelve other passengers. And on the many flights I’ve been on in my life (my guess is a little over a hundred), I don’t think I have ever seen a single movie shown with captions. One such example was my United Airlines flight from Chicago to Honolulu, about two weeks before the flight 144 I discussed above. In that scenario, I am clearly denied the ability to enjoy the in-flight entertainment.
Thus, my experience as a flight passenger has often been frustrating. I hope to help push United Airlines in the right direction. I commend them for at least providing captions on the introductory safety and security video; I only ask that this service be fully extended to all featured movies, whether they are on shared screens or part of a package for individual passenger screens. In the case of my experience on flight 144, I would guess that almost all of the non-World Cinema movies offered multiple audio language tracks. If those can be provided, what justification can explain the lack of imported English captions?
I believe captioning needs to be mandatory on shared television screens during movies, and should always be an option when individuals have their own screens. At the moment, I am not asking the same for other audio services I cannot understand, such as passenger announcements, since those are excluded under Federal Communications Commission rules. Taken directly from the FCC’s page on captioning:
[Exceptions] include public service announcements that are shorter than 10 minutes and are not paid for with federal dollars […]
But I believe that movies should not be an exception to captioning laws.
It was disheartening for me to see the vast majority of passengers watch movies that I would not be able to enjoy. In the first few hours of the flight, I made fake trips to the restrooms so I could observe how many passengers enjoyed the comfort of their movies and earphones. (My hearing aids prevent me from using earphones.)
My guess would be around ninety percent of all passengers, many of whom no doubt take their hearing for granted. By providing captions to all movies and videos on board, United Airlines will be taking an appropriate and necessary step towards increasing accessibility towards the deaf and hard of hearing.
I recommend reading the actual text of the Americans with Disabilities Act. Airlines should be covered under the ADA. They weren’t always, but I think a recent ruling in 2008 or 2009 changed the situation. That’s something I’d like to investigate further. Another interesting website to observe is the Aviation Consumer Protection and Enforcement, as linked earlier. They have a record of all recent disability complaints filed.
(Photo by Airliner’s Gallery)
I don’t watch a whole lot of shows or movies. But I do enjoy watching Skip Bayless, Stephen A. Smith, and others over at ESPN’s First Take hilariously debate sports topics, ranging from how much blame LeBron James deserves after getting a triple-double in a 2011 Finals loss to whether it was appropriate for Tim Tebow to run topless (yes, they debated that!). Each episode is two hours long, so it’s rare that I can watch one in its entirety. Instead, I turn to the highlight videos uploaded on ESPN.com.
And as you can see, they have closed captioning! There’s a little toggle to the bottom right of the video screen. I’ve found the captions to be reliable. They’re not perfect, but they’re significantly more accurate than computer-generated captioning.
Unfortunately, not all of these highlights offer me that option. (The full two-hour episodes aired on ESPN are completely captioned.) When I filtered the 159 most recent videos on ESPN.com (it seems they archive older videos somewhere) to show only those offering closed captioning, the options narrowed to 64. A closer look indicated that the oldest video listed as having captions was uploaded in January 2012, while the oldest videos overall in ESPN First Take’s current archives date to 2010. In other words, the most recent videos were captioned at a higher rate than the older ones.
That’s optimistic news, but I wanted to know more. One gripe I have is that it’s sometimes difficult to understand seemingly arbitrary captioning policies. Does ESPN have specific criteria for which online video highlights get captions? I emailed ESPN about my concerns and actually got a response, which was surprising considering the volume of messages they receive. The response unfortunately missed my main point, but it did indicate that ESPN has room to improve with regard to accessibility:
Thank you for contacting us.
ESPN, Inc. does not release or sell copies of its programming or promos.
I assume it’s impossible to get transcripts from ESPN, although there may be third-party releases out on the web. This isn’t optimal, but ESPN has at least taken a step toward increasing accessibility for the deaf and hard of hearing by captioning some online video highlights. I commend them for doing so even though the short length of these clips may deter captioning; virtually every other sports highlight reel I’ve seen online has been bereft of captions.
I will continue to discuss accessibility issues on this blog. Some entries related to accessibility will focus on minor issues, such as this one. But others will be more extensive and possibly include mailed letters. Obviously, these topics will be a little more serious than the one discussed today, and I have an entry prepared for that tomorrow. Stay tuned.