My Blog Posts, in Reverse Chronological Order
How to be More Deaf-Friendly: The Search for Simple, yet Stunningly Effective Strategies
Over the next year, I hope to embark on a long-term blog writing project. These posts will center around a key concept: how to be more deaf-friendly. I don’t think I will restrict myself to one setting or entity — e.g., how a college or university can be deaf-friendly. I want to search for strategies that apply broadly, whether they are used in an educational, social, working, or other environment.
I feel inspired to start this because too many times ignorant people have made seemingly simple situations, such as one-on-one conversations, much more difficult for me than is necessary. With only slight modifications in accommodation, demeanor, and other strategies, those situations can be made much more pleasant, beneficial, and invigorating to all parties involved.
My plan is to go beyond what the Americans with Disabilities Act and other laws require. I want to focus particularly on either human behavior, which cannot entirely be regulated by law, or simple strategies that are largely unknown. Obviously, I’ll have to be reasonable with what I can expect, and I anticipate that some of my suggestions may be controversial. But I also plan to argue that the tactics that can improve life for the deaf and hard of hearing have benefits that extend to hearing people.
Possible topics I may discuss in the future will likely fall in one of the following categories:
- accommodations
- body language/demeanor
- captioning/subtitles (though I have touched on the topic here and here)
- environment/setting
- speaking/speech
I am eager to see how this turns out, and how much I learn from this project.
My Experience with the “Miniature MIT Challenge”
My summer job ended on July 28, 2012, and I was back home with about three or four weeks to go until the start of my junior year at Williams. I wanted to put whatever time I had to good use, and one way I thought of doing so was to prepare for my probability course this fall. Of the four courses I’m taking in the fall 2012 semester, probability is the easiest to prepare for since my professor gave out lecture notes in the middle of the summer to students (like me) who wanted to prepare well in advance.
And even better, I was able to take advantage of the wonderful repository of information on MIT’s OpenCourseWare. There’s a mathematics course at MIT called Probability and Random Variables, or 18.440, with a full set of lecture notes and exams with solutions.
The syllabus for my probability class closely matched what was covered in 18.440, so I wanted to challenge myself and see if I could pass the two midterms and the final (under the same constraints as actual MIT students) for that course before setting foot in my actual probability class. It’s a miniature version of Scott Young’s MIT Challenge. For obvious reasons, I have neither the time nor the desire to take a full curriculum of MIT classes.
But I thought investing in understanding 18.440 would pay dividends later.
I started my personal challenge on August 13 and took the first and second midterms on the 22nd and 28th, respectively. I scored a 90 on the first midterm and a 100 on the second. At that point, there were just seven lectures left (out of thirty or so) before the final exam, but I could tell that most of the material beyond the second midterm would not be included in my real probability class. With other unfinished business this summer, I opted to skip the final. It turned out to be a wise decision; the practice final (they didn’t have an actual one) was almost an exact recitation of the lecture slides, with half of the answer key consisting of “look at the slides for lecture X.” I doubt it would have been an effective measure of how well I retained the material. I still reviewed it, but I didn’t take it under timed conditions as I did with the two midterms.
Right now, I consider myself done with my probability review, and am looking forward to finally taking an actual probability class in about a week.
Retrospective – What Worked …
- Solving practice problems – The multitude of problems with answers was extremely effective in showing me applications of seemingly abstract concepts, as well as ways to approach a given problem. The questions on the two midterms were very similar to the practice ones and, in my opinion, much easier. Thus, I achieved high marks.
- Following a textbook – The lecture notes only scratched the surface of the topic, so I had to rely on an outside source. Fortunately, the topics in my professor’s textbook were very much in parallel with MIT’s slides, making the learning process much easier.
- Skipping the homework assignments – There were no solutions provided to the homework, and frankly, I was able to do well on the actual midterms just by looking at some old practice midterms.
… and What Didn’t
- Doing a few lectures per day – I was surprised by how ineffective this seemed. I retained far more information when going over lectures in bulk. If I didn’t understand a concept in one of the lectures, I would spend hours agonizing over it, when I would have understood it far more clearly once I faced a question that dealt with the topic.
- Emphasis on proof-writing and derivation of formulas – Most of the proofs I understood were from my professor’s textbook. But 18.440’s exams were entirely computational, so the effort I spent understanding certain obscure proofs was largely wasted.
United Airlines, Where are the Captions?
Much progress has been made with respect to the amount of film time that is captioned. The Federal Communications Commission has the following basic requirements:
Beginning in July 1993, the Federal Communications Commission (FCC) required all analog television receivers with screens 13 inches or larger sold or manufactured in the United States to contain built-in decoder circuitry to display closed captioning. As of July 1, 2002, the FCC also required that digital television (DTV) receivers include closed captioning display capability.
Of course, there are some exceptions to closed captioning requirements, most of which have to do with undue economic burdens. I’ll talk more about that later.
As someone who relies heavily on reading subtitles or captioning when watching film, I’m supportive of any policy that requires them as long as the captions don’t unreasonably hinder my view of the screen. While there’s a slight distinction between subtitles and captioning, I will only refer to “captioning” here for the purposes of brevity.
Today, I will focus on captioning as it pertains to airlines. According to this page, captions are only required on safety videos. Consequently, many people have voiced concerns that airlines are failing to fully accommodate the hearing impaired by not having fully accessible videos.
My most recent captioning-related issue (the one that compelled me to write this entry) came from flying to and from Honolulu, Hawaii on United Airlines. On my flight from Chicago to Honolulu, there were many small television screens, each shared among eight to twelve passengers. Although the flight and total screening time lasted about nine hours, the only time I saw full captioning was during the introductory video lecturing about safety and emergencies on the airline. While I commend the airline for providing those captions, it left much to be desired when the four movies shown afterward were not captioned. Passengers had to use a set of earphones to listen to the audio — something that would not be feasible or safe for me to use with my hearing aids and profound hearing loss. I ended up ignoring the movies entirely and wrote some blog entries on paper.
The return trip, also on United, was slightly better. This time, one of the flights was from Honolulu to Washington D.C., and on that one, every passenger received his or her own television screen. Still, it was difficult for me to obtain captioning. In fact, it took a little luck and some experimenting to figure out a workaround that got captioning on some, but not all, of the 170 movies offered (in economy class). I left the airline in an acerbic mood, with a paper pad filled with the writing that would become the foundation of this blog entry.
My Thoughts and Complaints
United Airlines is one of the largest airlines in the world, so I assume they at least can’t use financial destitution as a rationale for not providing fully accessible captioning. My hope is that, starting with the largest airplanes, whenever a movie is shown, captions are either (1) mandatory, or (2) optional but always accessible. This should be a policy that’s part of all passenger classes and does not depend on how much one has paid for the tickets.
Scenario (1) should apply on flights similar to the one I described from Chicago to Honolulu, where passengers share television screens. Viewers should not have to go through the trouble of manually turning captions on, only to worry that others will turn them off out of annoyance or for other reasons.
Scenario (2) should occur on flights like mine from Honolulu to Washington D.C., where each passenger gets his or her own television screen. For movies, captions should always be an option if their corresponding language is used in the audio soundtrack. A movie offering soundtracks in English, Japanese, and Spanish, for instance, should have English, Japanese, and Spanish captions as an option.
Obviously, I’m not saying these changes should happen all at once. I understand that in this economy, airlines are operating on razor-thin profits. But progress has to start somewhere, and it has to move at a reasonable rate. I hope the largest airlines can implement these changes on their biggest airplanes, step by step. Perhaps English captions should be imported first, followed by captions in other languages where they are not there already. I’m not sure if captions are fully accessible for passengers in first or business class seats, because I’ve never sat in those. But if they’re there, then good. That’s a start — it needs to be expanded to economy class. And the continuing accessibility trend needs to trickle down to as many small commercial planes as possible.
I’m a realist. Full accessibility is almost certainly never going to happen. But we can get as close to it as is reasonable.
As I mentioned earlier, I wrote part of this blog entry on my flight. Some of it ended up in letter form, which I sent to the Aviation Consumer Protection and Enforcement. I’ve decided to reproduce it below. Anyone else who feels compelled to take similar action, please do so. I appreciate relevant feedback and thoughts.
——
My Letter
Dear Aviation Consumer Protection and Enforcement,
I am a twenty-year-old deaf college student who has recently completed a round trip from Albany, New York, to Honolulu, Hawaii via United Airlines. As someone who cannot easily understand audio tracks, I am concerned about the lack of captioning on most of United Airlines’ movies and film clips. My hope is to draw attention to this issue and see if United Airlines can eventually add captioning to all clips shown on their television screens. My letter is not meant to single out and traduce United Airlines in particular; it is to address a situation common among many airlines, in the hope that at least one will recognize the appropriate course of action.
I will focus primarily on my 9-hour flight (number 144) from Honolulu to Washington D.C. (Dulles) that took off on August 12, 2012. While walking to my economy class seat, I was ecstatic to see that each passenger had his or her own personal television screen. I anticipated being able to watch many of the movies stored in the plane’s database.
I immediately tried to see if I could obtain captioning or subtitles for the movies, and was disappointed when I couldn’t figure out a way to do so. A flight attendant confirmed my concerns after telling me she did not know how to get captioning.
I was not about to give up. Eventually, after a few minutes of tinkering and some luck, I figured out that there was a way to get captioning, but only on a certain subset of the movies. Of the seven genres of movies United Airlines offered — New Releases, Action/Thriller, Classics, Comedy, Drama, Family, and World Cinema — only one provided a guarantee of captioning: the World Cinema genre, accounting for 36 of the 170 movies available for me to watch. I suspect this is because those movies were filmed in non-English-speaking countries.
But this still means I am denied the ability to enjoy most of the movies offered. If it is not a huge burden to add captioning as an option to all movies, then this is a violation of Title IV of the Americans with Disabilities Act (codified in Title 47 of the U.S. Code), which deals with telecommunications.
Perhaps the most unfortunate realization from my experience on that flight was that, in terms of accessibility for the deaf and hard of hearing, it represented a best-case scenario. I at least had the option to watch a smaller selection of movies with captioning, even if it did not include my top choices. I still have not seen The Hunger Games, which was offered, but not captioned, on that flight.
I call this a best-case scenario because on most flights, I do not have that luxury of choice. I typically have to share a screen with eight to twelve other passengers. And on the many flights I’ve been on in my life — my guess is a little over a hundred — I don’t think I have ever seen a shared-screen movie that was captioned. One example was my United Airlines flight from Chicago to Honolulu, about two weeks before the flight 144 I discussed above. In that scenario, I am clearly denied the ability to enjoy the in-flight entertainment.
Thus, my experience as a flight passenger has often been frustrating. I hope to help push United Airlines in the correct direction. I commend them for at least getting captions on the introductory safety and security video. I only ask that these services be fully extended to all featured movies, whether they are on shared screens or part of a package for individual passenger screens. In the case of my experience on flight 144, I would guess that almost all of the non-World Cinema movies offered multiple language audio tracks. If those are provided, then what justification can explain the lack of imported English captions?
I believe captioning needs to be mandatory on shared television screens during movies, and should always be an option when individuals have their own screens. At the moment, I am not going to ask the same for other audio services that I cannot understand, such as passenger announcements, since those are excluded under Federal Communications Commission rules. Taken right from their page that deals with captioning:
[Exceptions] include public service announcements that are shorter than 10 minutes and are not paid for with federal dollars […]
But I believe that movies should not be an exception to captioning laws.
It was disheartening for me to see the vast majority of passengers watch movies that I would not be able to enjoy. In the first few hours of the flight, I made fake trips to the restrooms so I could observe how many passengers enjoyed the comfort of their movies and earphones. (My hearing aids prevent me from using earphones.)
My guess would be around ninety percent of all passengers, many of whom no doubt take their hearing for granted. By providing captions to all movies and videos on board, United Airlines will be taking an appropriate and necessary step towards increasing accessibility towards the deaf and hard of hearing.
Sincerely,
Daniel Seita
——
Further Reading
I recommend reading the actual text of the Americans with Disabilities Act. Airlines should be covered under the ADA. They weren’t always, but I think a recent ruling in 2008 or 2009 changed the situation. That’s something I’d like to investigate further. Another interesting website to observe is the Aviation Consumer Protection and Enforcement, as linked earlier. They have a record of all recent disability complaints filed.
——
(Photo by Airliner’s Gallery)
On My New Theory of Computation Series
UPDATE May 13, 2015: I only managed to do half of what I wanted for this series, but at least I did something. As of now, I’m not going to go back to working on this because my current academic and research interests have shifted.
The fall 2012 semester is approaching. It’s not as fast as those winter waves in Waimea Bay, but close enough. (Yes, the above photo I took is of the same beach — click for a larger view.)
Here are all the courses I’m taking:
- Applied Abstract Algebra
- Computer Graphics
- Probability
- Theory of Computation
All are lectures, with Computer Graphics being the only course that includes a lab component. Applied Abstract Algebra and Probability satisfy my math major requirements, while Computer Graphics and Theory of Computation count toward computer science. For the first semester ever, I won’t have a single class that falls outside my two majors. So on the one hand, this means I’ll maximize my dedication to these classes, and will probably get high marks (famous last words).
But unfortunately, it means I won’t have as many options if I get a little burned out of computer science and math. I tend to spend long hours studying for exams and working on homework, so I’m going to try and do something that will hopefully alleviate some of the workload. This is purely an experiment, and one that I plan to continue if it brings solid results this semester.
The Plan
I’m going to make a series of blog posts on my Theory of Computation class (henceforth, CS 361). For reference, here’s the course description from the Williams Course Catalog that delineates the fun stuff coming up for me:
This course introduces a formal framework for investigating both the computability and complexity of problems. We study several models of computation including finite automata, regular languages, context-free grammars, and Turing machines. These models provide a mathematical basis for the study of computability theory–the examination of what problems can be solved and what problems cannot be solved–and the study of complexity theory–the examination of how efficiently problems can be solved. Topics include the halting problem and the P versus NP problem.
After every few classes, I hope to record on Seita’s Place what I learned, plus any relevant information going above and beyond the classroom discussion. By the time I take the midterm and final, I’ll have a nice repository of information online to help me do a quick review. I will strive to start these entries as soon as possible in draft form, and will add information to them a few hours after each CS 361 class.
There will be a consistent format for these posts. Each entry will be titled “CS Theory Part X: Y,” where X is some natural number and Y is a phrase relating to the material I’ve learned and will cover in the entry. I want this to be like a personal Wikipedia that makes heavy use of rigorous proofs and outside sources.
The Benefits
So why do I want to do this? The most important benefit is that it deepens my knowledge of theoretical computer science in a way that avoids long study hours and memorization sessions. Again, since I plan to update these entries soon after my classes end, I will minimize the amount of material I forget over time. Furthermore, by writing these entries in my own words, I force myself to understand the material well, a prerequisite for explaining a subject in depth. (There’s a whole host of information online that backs up that claim.) Since I don’t want to write a book on theory, I have to pick the right spots to focus on, which requires me to judge the importance of all the concepts hurled at me in class. Also, using the Internet over paper makes it easier to link concepts together in a web, as explained by Scott Young’s holistic learning method.
But this raises the question: why this class, and not one of the other three?
My long-term goal is to pursue a Ph.D in computer science. As part of the process, I’ll be taking the computer science GRE subject test and Ph.D qualifying exams. As you might expect from the course description, the material in CS 361 is more closely related to what’s going to be on the test than the material in the other three classes. According to the Educational Testing Service website, 40 percent of the material is theory!
III. THEORY AND MATHEMATICAL BACKGROUND — 40%
A. Algorithms and complexity
- Exact and asymptotic analysis of specific algorithms
- Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)
- Upper and lower bounds on the complexity of specific problems
- Computational complexity, including NP-completeness
B. Automata and language theory
- Models of computation (finite automata, Turing machines)
- Formal languages and grammars (regular and context-free)
- Decidability
C. Discrete structures
- Mathematical logic
- Elementary combinatorics and graph theory
- Discrete probability, recurrence relations and number theory
I suspect the amount of theory material on Ph.D qualifying exams is similar. These vary among institutions, so there’s no standard.
Computer graphics, while no doubt an interesting subject, isn’t as important in terms of the subject test material.
IV. OTHER TOPICS — 5%
Example areas include numerical analysis, artificial intelligence, computer graphics, cryptography, security and social issues.
It would also be more difficult for me to post graphics-related concepts online, as I’m certain that would involve an excessive number of figures and photos. I do have a limit on how many images I can upload here, and I’m not really keen on doing a whole lot of copying from my graphics class’s webpage; I prefer that the images here be created by me.
I also chose CS 361 over my two math classes. If I’m planning to pursue doctoral studies in computer science, it makes sense to focus on CS 361 more than the math classes. I was seriously considering doing some Probability review here, but the possibly vast number of diagrams and figures I’d have to include (as in graphics) is a deterrent.
Finally, another benefit of writing the series will be to increase my attention to Seita’s Place. I hope to boost my writing churn rate and my research into deafness and deaf culture. Even though it’s relatively minor, this blog has become more important to me over the past year, and I want to make sure it flourishes.
I’ll keep you posted.
Hearing Aids: How They Help, and How They Fall Short in Group Situations
Daniel, how much do your hearing aids help you hear?
I’m surprised I don’t get asked this question often by my classmates, colleagues, professors, and others. Perhaps it’s because my speaking ability causes hearing people to believe that I can hear as well as them. Or maybe they believe that hearing aids can cure hearing loss?
Unfortunately, that’s not what they do. They amplify sound, allowing me to be aware of its existence. If someone’s talking, then I know that the person is talking, regardless of whether I’m looking at him or her. The difficulty for me, and for other people who wear hearing aids, is understanding the fine details.
Here’s an analogy. Suppose you’re on your computer and are looking at a tiny image that’s 50×50 pixels in size. From what you can see, it’s interestingly complex and alluring. It’s bright and colorful. There are wondrous curves that weave together and form some figure you can’t make out. You want to better understand what the image conveys, so you do the logical thing and try to resize it by pasting it into Photoshop and then dragging its edges to fit your entire monitor. (I’m assuming you’re a little naive — no offense intended.)
But there’s a problem.
When you do that, the image doesn’t become clearer. Your computer has to invent new pixels that coincide with the original pixels. As a result, the “large” image is a badly distorted version of the smaller one, and you still can’t fully understand what the image means. But perhaps it does convey more information than the really small image did, so despite its flaws, you stick with this method of understanding tiny pictures.
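To make the analogy concrete, here is a minimal sketch using the Pillow imaging library, with a hypothetical 50×50 file called tiny.png:

```python
# A toy sketch of the analogy (tiny.png is a made-up 50x50 image).
# Resampling only interpolates new pixels from the originals, so the
# enlarged image comes out blurry or blocky, never genuinely sharper.
from PIL import Image

small = Image.open("tiny.png")
big = small.resize((1000, 1000), Image.BILINEAR)  # invented pixels, no new detail
big.save("tiny_enlarged.png")
```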
As I mentioned earlier, hearing aids let me perceive sounds that I would otherwise miss entirely with the naked ear. A lot of sounds. In fact, I cannot hear any sound at an intensity of 90 decibels or lower with my natural hearing; it’s the epitome of being “totally deaf.” For me to hear and understand a person talking in a normal tone without my hearing aids, my ears must be inches away from his or her mouth. Obviously, that’s not happening during most of my conversations, so I wear hearing aids almost all the time when I’m not sleeping.
A Typical Situation
I want to discuss a challenge that occurs often in my life: socializing in groups. Most of its difficulty is due to mechanical limitations of hearing aids, but there are other forces in play as well. A recurring example comes from my 2012 Bard College REU, where I would often amble with one or several other students for ten minutes at a time, the walking distance from our dorm to the heart of campus.
In the company of just one student, I can easily start and maintain a satisfying conversation. I still have to ask my companion to repeat every fifth or sixth sentence he says, but at least I get the general direction of what we’re talking about.
But things get exponentially worse with three, four, and more people. And this is where the hearing aid’s inability to make sounds distinct hurts me. What typically happens is that the other people get involved in a conversation I can’t follow. Multiple factors hinder my hearing in this kind of situation.
The first is what I explained before: the hearing aid’s difficulty in clarifying amplified sound. Incidentally, I won’t go into depth on the technical reasons, since there are a whole host of articles, such as this one, that talk about hair cells. A second factor is that in a group conversation, people often don’t look directly at me when talking, rendering my lip-reading skills useless. (Lip-reading can account for about 25 percent of my comprehension.) Moreover, my hearing aids are designed to best amplify sound when it’s coming directly at me from the front, which isn’t always the case. It’s even less common if all of us are moving (i.e., walking), which adds another roadblock to my comprehension.
A third source of hindrance is simultaneous talking and ambient noise. As you can imagine, when my companions keep interrupting each other or laugh as a group, it adds another level of complexity. For me to understand as well as a hearing person could in that situation, my hearing aids would have to somehow partition the various sources of sound into “sound coming from Person A,” “sound coming from Person B,” and so on. Things get worse when there happen to be, say, thirty young campers in a group next to us yelling, and the like, which is why I prefer quiet cafeterias and restaurants.
A fourth problem — yes, there’s more! — is the “Deaf Nod,” which doesn’t so much relate to hearing aids as it is a consequence of being deaf. This occurs when a deaf person who doesn’t understand what’s said in a conversation gives a weak nod to create the impression that he or she understands what’s going on, even though the opposite is true! I’m so guilty of the Deaf Nod that I feel ashamed. Part of this stems from frustration at the lack of understanding; another motivation is not wanting to seem like a hassle to the people I’m conversing with. The ultimate result is that I just play along with a conversation I’m unfamiliar with, which has a chain effect, since I then don’t fully understand what’s discussed next if it builds on previous dialogue. It’s more common for me to do the Deaf Nod when at least five people are involved, or if I’m not really familiar with my conversation partner.
Finally, my hearing aid batteries could die at any moment. Fortunately, I usually receive some form of notification, such as several awkward beeps together. But sometimes there’s no warning, and my batteries die suddenly. If it’s my right hearing aid that stops working, it’s not much of a problem, because I obtain most of my hearing from my left ear. But any extra bit that I can hear helps. And it won’t always be possible for me to have easy access to my batteries. I usually keep them in a small pouch in my backpack, so I have to dig in, fish out a fresh battery, and swap it for the dead one. Even though the entire process takes a few seconds, simply doing it detracts from my focus on the ongoing conversation, making it even harder to get back in the mix. And I’m not even going to discuss the case when my left hearing aid is the one that dies — I can barely understand anything if that happens!
Battery failure is the most common hearing aid technical difficulty, but there are others. Even though my hearing aids are generally reliable, I have experienced many cases where technical difficulties ruined current and potential conversations.
To recap, here’s a list of the barriers I experience:
- Difficulty in clarifying amplified sound
- Lack of eye-contact
- Simultaneous talking
- Frustration and the “Deaf Nod”
- Hearing aid technical difficulties
So while the benefits of hearing aids are enormous, the previously listed (non-exhaustive) challenges make it impossible for me to experience what life is really like for hearing people, especially during group situations. There are other cases where the hearing aid falls short of optimality, such as when I’m watching television, but I can write an entire rant of a blog entry about that later.
Now for the Good Part
I don’t want to give the impression that hearing aids don’t help me at all. I was merely highlighting a salient downside. But the reality is that without them, I would never be aware of many noises that exist in today’s world. I don’t think I would have done as well in school if I didn’t have hearing aids, and I certainly wouldn’t be able to do well at an academically rigorous school like Williams College (second in Forbes’s 2012 college rankings!), since I rely heavily on communication with my professors.
And I also wouldn’t be as eager to go to computer science graduate school.
Look at the home pages of the computer science faculty at your college or university, and check their non-dissertation publications. How many papers have only a single author? I checked as many Williams computer science faculty publications as I could, and I would guess that just two or three percent of them (textbooks and informal papers excluded) didn’t have two or more authors.
I’m sure that most of the communication involved was email, but from my own experience, I’m convinced that it’s so much easier to conduct group research face-to-face. And that’s the baseline of what I need from my hearing aids. I need to hear just enough to make working on a research project feasible, which means communication should not be a research roadblock.
Conclusion
I love hearing aids. I put them in my ears as soon as I wake up every morning. I take care of them by cleaning and drying them regularly. I store all pairs in soft containers and use my portable hearing aid dryer to ensure they don’t break down due to moisture. Even though it’s really tempting to do so, I never take them for granted, and I constantly remind myself of how dependent I am on them. (Writing this entry was one way to do that, for instance.)
Hearing aids have offered me the chance to experience the euphony of the world and to be capable of socializing with the vast majority of people I meet. But they perform poorly in many situations, falling short of normal hearing, and they are incapable of truly curing significant hearing loss.
Java to Python Transition
Java was the first programming language I felt comfortable enough with to write lengthy programs that could, for instance, be used to advance the goals of a research project. So for the first program I needed for my summer research at the Bard College REU, I used Java. I wrote code to create a random sentence generator. That was during my second week of the REU, and at that point I had written one significant Python script in my life, for a bonus question from my Algorithm Design & Analysis homework.
Let’s fast forward a bit. By the time the REU ended, I had written over 20 significant programs … in Python. So what happened?
At the start of the REU, I knew much more about Java than Python. But after some prodding by my advisors, and given that everyone else in my research project was using Python, I switched languages. I soon found — as they said I would — that Python was so much easier to write than Java. In particular, file input and output is stunningly simple yet incredibly useful, a must for all the scripts we wrote that involved manipulating files. My project, after all, was about text simplification, and all the relevant corpora were stored in files.
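To illustrate (a toy sketch with made-up file names, not code from the project), reading a corpus, filtering it, and writing it back out takes only a few lines in Python:

```python
# Read a corpus file, keep only the reasonably short sentences,
# and write them back out. File names here are hypothetical.
with open("corpus.txt") as f:
    sentences = [line.strip() for line in f]

short = [s for s in sentences if len(s.split()) <= 15]

with open("corpus_short.txt", "w") as f:
    f.write("\n".join(short) + "\n")
```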
I also found the official Python documentation easier to understand than Java’s, so looking things up was less of a challenge. Like the author of this (heavily biased) article about Python versus Java, I have to look up a lot of things for Java, whereas I need to do so far less often with Python.
My experience confirmed what I’ve often heard: learning a new language is easier once you’re proficient in another one. Well, with the possible exception of Malbolge.
Wrapping Up my Summer Research
I’ve written my final report to the National Science Foundation for my summer work at the 2012 Bard College Mathematics & Computation Research Experience for Undergraduates. My project focused on text simplification, which can be broadly defined as the process of making (English) text easier to read while maintaining as much underlying content as possible. I would consider it to be a subfield of machine learning, which is a branch of artificial intelligence.
The primary objective of my work was to improve on existing text simplification results. There is no standard way to measure the quality of simplification, so my research team — consisting of me, another undergraduate, and two Bard College professors — decided to use BLEU scores. The advantage of those scores is that, in the summer of 2011, another research group used them as a measure of their level of simplification. Our goal was to beat their BLEU scores.
To carry out the translation process, we used the open-source Moses software. To train the system to simplify text, we used aligned, parallel data from Wikipedia and Simple English Wikipedia. Moses is designed to be able to translate across different languages, but we considered “Simple English” as its own language. Thus, we viewed the project as encompassing an English-to-English translation problem, where the “foreign” language is derived from Wikipedia, and the “simple” language is derived from Simple English Wikipedia. Our hope was that Moses would be able to understand the steps involved in making English text simpler and act as an effective means of translation.
We soon discovered, though, that our parallel data was of low quality. We therefore used the LIBSVM and LIBLINEAR classification packages to improve the data by switching or deleting certain pairs of lines from the parallel corpus. For instance, if a short sentence in the Wikipedia data was aligned to an obviously more complex sentence from the Simple English Wikipedia data, it made sense to switch those two sentences so each was in the more appropriate data set. After all, the data was written by random people contributing to the two Wikipedias, so there were bound to be a few bad samples here and there.
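Here is a rough sketch of the idea, not our actual pipeline; it uses scikit-learn’s LinearSVC (which is itself backed by LIBLINEAR) and toy data:

```python
# A rough sketch, not the actual pipeline: label sentences as simple (1)
# or complex (0), train a linear SVM, then use it to flag misaligned pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_sentences = ["The cat sat on the mat.",
                   "Notwithstanding antecedent jurisprudence, the tribunal demurred."]
train_labels = [1, 0]  # toy data: 1 = simple, 0 = complex

vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(train_sentences), train_labels)

# If the "simple" side of an aligned pair is predicted complex while the
# "English" side is predicted simple, swap (or drop) the pair.
print(clf.predict(vec.transform(["The dog ran home."])))
```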
Our group successfully classified a random sample of sentences as belonging to the Wikipedia set or the Simple English Wikipedia set with a higher degree of accuracy than previous researchers did. My professors are still performing some experiments, so it remains to be seen if we can get higher BLEU scores.
Overall, I’m glad I had the chance to participate in the Bard REU, and I’m optimistic that we will produce a strong research paper. I thank the professors there for accepting me into their program, and the National Science Foundation, which has now sponsored my second straight summer program.
Don’t Get Hearing Aids with Touch Screens
A few months ago, I made a Facebook status update announcing how happy I was to have new hearing aids. Well, I’m still glad to have them, but I have one main gripe about their touch screen. The slightest drop of sweat that contacts the touch screen will cause the hearing aid to act unpredictably.
My hearing aid model can be found here. As you might guess, instead of the old-fashioned switch that you click to adjust the volume, there’s a touch screen. Sliding your fingertip up the back of the hearing aid increases the volume; sliding it down decreases the volume. And tapping the hearing aid changes the mode (e.g., to the T-coil, or telecoil).
Actually, that’s almost as annoying to me as my main gripe, which I mentioned earlier. Suppose I have an itch near the back of my ear. If I inadvertently tap the back of the hearing aid while scratching it, I’ll change the mode when I don’t want to. This forces me to make a few more taps to get back to the old mode. And switching modes temporarily blocks sound from entering my ear.
But to me, the bigger problem is that if I engage in any sort of physical activity for just a few minutes, the sweat that reaches the hearing aid will cause it to behave abnormally. When my hearing aids first acted weirdly by making a lot of beep-beep-beep sounds due to sweat, I thought they were breaking down. But then I realized that the sweat was causing the hearing aid mode to change! The sweat seemingly perturbs the touch screen, registering phantom touches. So I end up tapping the hearing aid a few more times to get it back to the right mode. But then it changes modes again! And again! The cycle continues.
So, for instance, when I go to the weight room, I make sure I have my backup hearing aids on, which do not have a touch screen. Alternatively, I’ll just take off the hearing aids. (This presents a multitude of additional risks, so I wouldn’t recommend it if you’re not experienced with lifting weights, or if the gym is especially crowded.)
I’ve learned my lesson. I’m still happy to wear these hearing aids, but when I get new pairs in a few years, I’ll be sure to avoid the ones with touch screens.
Project Euler 85
A few weeks ago, I solved a Project Euler question that particularly intrigued me. I saw problems similar to this one in middle school, but I never recall successfully solving them. For reference, the problem was Project Euler 85: counting the number of rectangles in a rectangular grid.
But when I attempted this particular problem? It took me fewer than thirty minutes to think of a solution and code it. So what changed?
I believe it’s been my new algorithmic approach to solving problems.
I fully admit that I’m a brute-force, “try-them-all” kind of person. Whenever I see a problem of the form “How many X satisfy property Y,” my first instinct is to go through all possible candidates of X and count how many satisfy Y. I first approached this problem the way I did years ago: counting rectangles by hand. The problems I saw back then were much easier, in that it was feasible, though time-consuming, to actually count all the possible rectangles. Of course, it’s so easy to miss or double-count a few rectangles here and there that I was bound to be off in my final answer.
So I realized I needed a formulaic, or algorithmic, approach. There had to be some formula that could take as input just the dimensions of the largest rectangle and compute the total number of rectangles contained within it.
There was. I started by counting rectangles by hand for small cases (e.g., a 2×3 grid), then tested my conclusions on other small rectangles. This was my algorithm:
def count_rectangles(n, m):
    # For each sub-rectangle size i x j, there are
    # (n - i + 1) * (m - j + 1) positions for it in an n x m grid.
    total = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            total += (n - i + 1) * (m - j + 1)
    return total
Basically, I consider all the possible dimensions of a smaller rectangle within a larger, n-by-m rectangle. (For instance, I could consider all 1×1 rectangles.) Then I use the (n-i+1)*(m-j+1) formula I derived, which counts all the positions of an i×j rectangle within the grid. It was very pleasing to solve this problem, and I now know I can handle questions about counting rectangles within rectangles with an algorithms-based approach.
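As an aside, the double loop even collapses into a closed form, since each inner sum is just a triangular number:

```python
# sum over i of (n - i + 1) is n(n+1)/2, and likewise for m, so the
# grid contains n(n+1)/2 * m(m+1)/2 rectangles in total.
def count_rectangles_fast(n, m):
    return n * (n + 1) * m * (m + 1) // 4
```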
For reference, here was my actual code. It’s written in Python. (I used Java to solve the first 50 or so problems, but since then, it’s been Python all the way.) The code gave me the correct answer in fewer than five seconds.
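For anyone curious, a minimal sketch of the surrounding search (not my original code) would use the counting function above and the problem’s stated target of two million rectangles:

```python
# A sketch, not the original code: find the grid whose rectangle count
# is nearest to two million, and report its area (the problem's answer).
target = 2000000
best_area, best_diff = 0, float("inf")
for n in range(1, 2001):
    for m in range(1, 2001):
        c = count_rectangles_fast(n, m)
        if abs(c - target) < best_diff:
            best_diff, best_area = abs(c - target), n * m
        if c > target:
            break  # the count only grows with m, so stop early
print(best_area)
```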
I have to thank Project Euler for not just giving me a medium through which to practice programming, but also for lending me a new perspective on problem solving approaches. I’m just two solutions away from level 3 at this time….
This post is part of a series of posts related with my solutions to selected Project Euler questions.
Accommodations at Conferences and Talks
As a prospective computer science graduate student, I know I will likely be attending — and talking — at conferences. And my worry is that accommodations will be either lacking or unsuitable for the task. Just recently, I read a few startling messages on an email list aimed at people with disabilities interested in science, technology, engineering, and mathematics. A deaf student was in a tough situation regarding accommodations. Here was the message that started it all, with the author, location, and relevant names protected:
I’ve registered for two […] conferences this summer. […] I sent emails to the coordinators asking for interpreter support, but they have not responded. Since it’s illegal under the ADA for organizations to refuse to provide reasonable disability accommodations, what would the best approach be here? I don’t want to come off too strongly and alienate them.
Ouch. The student can’t get the full experience of the conference without accommodations. But if he points out the ADA’s (Americans with Disabilities Act) requirements, will the coordinators view him as a nuisance and possibly block him from coming? The student’s position didn’t improve with his update, in which he told us how the coordinator responded:
“We do not have the capability to provide an interpreter, but it will absolutely be no problem to accommodate if you obtain one yourself.
Let me know how else I can help.”
At least the coordinator responded, but not in a fair way! I don’t expect everyone to be an expert on the ADA, but from what I can see, this message was poorly constructed and displays a lack of research about accommodations on the coordinator’s part. I would hope the writer could offer a better excuse¹ than this bland response. Fortunately, the deaf student received support from people on the email list. Here are some segments of the most scintillating response:
It is amazing, but not uncommon for anyone in any organization not to know the relevant rules or laws governing disabilities in general now. […] I do know for a fact, that there are several students in the country that have disabilities that earn their PHD’s with very little accommodations, because they don’t want to be seen as the troublemaker in their respective departments. They may win the grievance or lawsuit in the end, but don’t get the recommendations of their department heads when they start looking for faculty positions after that. This is an unwritten game that plays out each and every quarter and semester at a university in the country. I currently hear from people that want a solution that does not require them to file a grievance or lawsuit. Unfortunately, it is not limited to schools and businesses, it is also extremely prevalent in governmental agencies as well. I have seen it at the local and State levels, but it is still very common among the myriad of Federal agencies.
[…]
Fortunately, there were a lot of people in the disability rights movement that came along before me to help pave the way that has allowed me to be successful in life. I feel that it is my responsibility to continue to break down barriers that will allow even more people to benefit from the lives they want to lead. If that requires me to educate some people, then I gladly accept the role. If it requires me to kick the door down, then I can also achieve this as well.
This is well said. I admit that I feel like a burden when asking for accommodations, but I know that in the end I need them in order to perform well in my studies and work. My aim is to become the best computer scientist I can, and if others view me as a troublemaker, so be it. I will just have to take advantage of my accommodations and do the best work possible to show to others that I deserve to work wherever I please.
Another person wrote a short email that sums up all of our sentiments:
It shouldn’t be about how much the event costs. You have the right to get the same benefit from it as someone who is able to hear.
There has been no update from the original deaf student since his last message, which thanked the respondents for their support. I hope it has worked out for him.
-
¹ My idea of an “excuse” would be just what the coordinator said — a lack of capability to provide interpreters — but there needs to be justification and evidence that the conference did as much as it could to provide accommodations. If the conference were in the middle of nowhere with no interpreters available within a two-hour radius, then I could understand. But who would organize a conference in the middle of nowhere? ↩
Williams College Spoils Me
Midway through my Bard College summer REU, it is becoming clearer to me how spoiled I have been. At Williams College, all ASL interpreters who work for me are required to possess RID (Registry of Interpreters for the Deaf) certification. To earn that title, interpreters have to demonstrate a national standard of sign language fluency, knowledge, and skill, passing tests and other requirements detailed on the linked website. The situation was the same at the University of Washington’s Summer Academy. During “Academy Base,” the 9:00 to 10:30 AM time slot when all the students would gather in a room and listen to several lecturers, the disability coordinator there once said verbatim: “Have you noticed that all the interpreters here are really good?”
They were outstanding, and I wish the same quality of services existed at the Bard College REU. I am grateful that Bard has generously provided me with interpreting services for all talks, even those on topics so abstruse that I would never be able to understand them. The interpreters here, unfortunately, are not in the same class as those at Williams College or the University of Washington. They remind me of my interpreters from high school. That is the effect of being spoiled: you are gift-wrapped something outstanding, and you do not want to release it and accept a lesser version the next day. Even though the law requires that someone like me is entitled to interpreting services, institutions can provide accommodations of varying quality.
So for all RID-certified interpreters out there, thank you for taking the extra step to ensure that you are delivering high quality interpreting services. I can only hope that my own signing will be up to your standards one day.
ASL Guidelines, Revisited
Eight months ago (wow, has it really been that long?), I made the first of what I hoped would be a series of posts related to American Sign Language (ASL) guidelines. You can view that blog entry here. I hope to expand on Axiom IX:
Axiom IX: The simplest way to manage personal pronouns is to point.
With footnote:
Axiom IX Footnote: To sign the general word “he,” point your finger in the air.
For brevity and clarity, I will focus on the personal pronouns listed in the corresponding Wikipedia entry. Looking at that table, I realized that my axiom was slightly incorrect. Not all personal pronouns are indicated with the index finger. If one is signing a possessive pronoun, e.g., my, yours, his, and hers, it’s best to use the entire hand with the palm facing the correct entity. More specifically, the hand should be flat, as if signing the letter “b” but without the thumb curling towards the center of the palm.
Example: You are signing the equivalent English sentence of “That book is yours” to a friend. A correct ASL depiction would begin with pointing to the “book entity” — pointing to the actual book if it is visible to both of you, or pointing to any non-previously indexed location if it is a “virtual” book, followed by the sign for book. Then, the “your” sign would follow, with your flat palm facing towards your friend. Add emphasis by pushing your hand forward slightly.
The words his, her, and their have similar signs, except the hand will be pointing towards wherever he, she, or they are located (indexed), respectively. And clearly, “my” or “mine” will be the reverse of “your.” Your flat hand should be pointing towards your chest, possibly touching it.
So when is the finger (I mean … using a finger) appropriate? Right now, I think it’s the exclusive sign for he, she, and it. That’s fewer examples than I thought, so the axiom definitely needs to be reworded. And things get even more complex when including the reflexive personal pronouns: myself, yourself, herself, himself, itself. For those signs, you would use a “thumbs-up” on the dominant hand. Direction still needs to be respected; “myself,” for instance, is signed by tapping the “thumbs-up” hand slightly on your chest.
Given that there are multiple ways to express personal pronouns, and that all of them deal with respecting the orientation of the targeted entity, I think the axiom should be reworded as:
Axiom IX: To manage personal pronouns, indicate the targeted entity by pointing your hand in the appropriate location. In general, use one of the index finger, a flat palm, or a thumbs-up.
The related footnote would accentuate the distinctions between the dominant hand using its index finger, a flat palm, or a thumbs-up.
The Power of Intense Focus
Facebook Chief Operating Officer (COO) Sheryl Sandberg leaves work at 5:30 every day. That doesn’t surprise me at all. I’m also familiar with the stigma the article mentions towards people who leave work early, which the article’s author defines as before 8:00 PM. That stigma exists because it’s common sense to assume that, if two people work the same job and person A leaves at 6:00 PM while person B leaves at 10:00 PM, then person B is the harder worker and the better employee. And person B should be getting the promotions … the recommendations … the list goes on. But does it really have to be this way?
During the past few years, I keep reminding myself of “intense focus.” I consider my studying good when it is productive: a high rate of material retention and understanding per hour of studying. I hate, hate spending hours reading, thinking, or writing and feeling like I haven’t made progress on whatever task I’m doing. In almost all of those “wasteful” scenarios, a lack of focus is the issue. That is why I am unsurprised and pleased about Sandberg’s habit. My hypothesis is that, when faced with a time constraint, people will be increasingly pressed to be productive and efficient during their work.
I certainly share this experience, and I didn’t have to think hard for an example. In my Real Analysis class last semester, we had three exams. The first was a 4-hour take-home exam, and the other two were 24-hour take-home exams. Surprisingly, the lengths of the three exams were roughly similar: the first had seven questions, the other two had eight, and the problems were relatively even in terms of the time needed to solve them. Many students preferred the longer exams, since they gave more time to think about and revise answers with less fear of a time constraint.
But I argue that a shorter time constraint is beneficial because it forces me to stay alert. Knowing I had plenty of time on the latter two exams, I found myself uncontrollably browsing ESPN, my email, and other websites in between solved questions. But on the first exam, I “marathon-ed” the questions, refusing to spend my time on such trivial tasks. I took only deliberate breaks, ones I had put on my schedule before starting the exam to make sure I didn’t burn out. Even though I got As on all three exams, the feeling of fruitfulness I had while taking the first exam was vastly different from what I experienced during the other two. This is why I generally advocate time constraints on work. It’s okay if they are self-imposed. What matters is being efficient and not using the “I have all the time in the world” excuse when you’re taking unnecessary breaks.
I particularly wonder about work habits in academia. What happens to professors who tend to leave work early¹ compared to those who stay up past their students pulling all-nighters? I’d be interested in collecting data to see whether those who spend more time in their offices may actually be doing themselves a disservice. But the problem is that schedules can be wildly unpredictable. A professor could leave work at 5:00 PM one day, leave at 3:30 AM the next, and not show up to his or her office at all on the third day. And of course, people vary: some can sustain incredible focus for long periods, while others require more frequent breaks. Finally, certain deadlines may force people to work longer.
But what about me? What can I do to ensure that I take advantage of intense focus? As I mentioned before, I am working at Bard College this summer. This past week was my first on the job, and I was in the lab (during weekdays) from 9:00 AM to 6:00 PM. The 6:00 PM departure is an excellent time; it allows me to comfortably work out in the weight room before the 8:00 PM closing time, and I can also eat dinner in the 7:00 to 9:00 PM range, which is when I start getting hungry. And my weekends look like they will be free, allowing me to pursue other hobbies such as programming and running (and blogging, of course).
The lazy days between the end of my sophomore year and the start of my research internship are past me. It’s time to set laziness aside and … focus!
-
¹ By early, I arbitrarily mean before 6:00 PM. ↩
Project Euler 179
UPDATE May 13, 2015: Migrated the code syntax to match Jekyll’s syntax.
Project Euler is an interesting website that offers about 400 different mathematics and computer programming questions, ranging from easy (finding the sum of a set of big numbers) to impossible (navigating through Rudin-Shapiro sequences). Just recently, I solved the 179th question with the help of some Java code. While my program gave me the correct answer, the execution time on my MacBook Pro laptop was 80 seconds — and there’s an informal rule that code should be able to solve a Project Euler problem in fewer than 60 seconds. So I wanted to determine how I could optimize my code.
Here was the question: Find the number of integers 1 < n < 10^7, for which n and n + 1 have the same number of positive divisors. For example, 14 has the positive divisors 1, 2, 7, 14 while 15 has 1, 3, 5, 15.
This wasn’t too bad for me. I already had a method from problem 23 that could compute the sum of a number’s divisors, so I revised it to count the divisors rather than sum them. Then I just iterated through each number from 1 to 10 million. Here was the first version of my code:
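(Sketched here in Python; the actual first version was Java.)

```python
# A Python sketch of the brute-force idea described above.
def num_of_divisors(n):
    # Trial division up to sqrt(n); divisors pair up as (d, n // d).
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

# 1 < n < 10^7: compare each n against n + 1, about 10 million calls.
answer, prev = 0, num_of_divisors(2)
for n in range(2, 10000000):
    cur = num_of_divisors(n + 1)
    if prev == cur:
        answer += 1
    prev = cur
```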
I’m not going to say what the answer was, but as mentioned before, the execution time (endTime - startTime) was about 80 seconds. Looking at the code, the limiting factor is the 10 million calls I make to the method numOfDivisors(). So how can I improve this? In other words, how can I avoid making all those calls to my static method?
To start, I initialized an array of 10,000,001 elements, called divs, where divs[x] holds the number of divisors of x. Then I used two nested for loops to fill in each entry. The outer loop ran from int i = 1 to 10,000,000, and the inner loop ran from int j = 1 up to the largest value such that i*j <= 10,000,000. This ensures that every divisor of every number is counted! For instance, take the number 2, which has the divisors 1 and 2: the entry divs[2] is incremented twice, once for i=1, j=2 and once for i=2, j=1.
Here, I avoid all the testing of “is this number a factor of that one?”, as I did in my old code, because if I consider i*j = n, then I know that n has at least those two factors!
The updated code is as follows:
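(Again a Python sketch of the idea; my actual updated code was Java.)

```python
# Sieve over divisors: for every i, visit all multiples j of i, since i | j.
LIMIT = 10000000
divs = [0] * (LIMIT + 1)
for i in range(1, LIMIT + 1):
    for j in range(i, LIMIT + 1, i):
        divs[j] += 1

# 1 < n < 10^7: compare consecutive divisor counts.
answer = sum(1 for n in range(2, LIMIT) if divs[n] == divs[n + 1])
```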
It gave me the right answer. And the runtime was an amazingly quick 1.9 seconds — much, much better! I don’t claim full credit for this second code, as I read the discussion forum for that problem after I solved it the first time, but it’s still nice to know how to optimize a program.
Project Euler 1 in Several Languages
UPDATE May 13, 2015: Wow, look at the Jekyll code support!
Here’s some code to answer Project Euler 1 in a few languages.
By the way, you do not need code to solve this problem…
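Indeed, by inclusion-exclusion, the answer is just 3·T(333) + 5·T(199) - 15·T(66), where T(k) = k(k+1)/2 is the k-th triangular number; the multiples of 15 would otherwise be double-counted.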
(1) Java:
(2) Python:
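(A representative version; the original snippet was along these lines.)

```python
# Sum the multiples of 3 or 5 below 1000, the straightforward way.
total = 0
for x in range(1000):
    if x % 3 == 0 or x % 5 == 0:
        total += x
print(total)
```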
(Alternatively, the one-liner below is probably “better”.)
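(Presumably something in this spirit:)

```python
print(sum(x for x in range(1000) if x % 3 == 0 or x % 5 == 0))
```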
(3) C++:
(4) Scala:
Note to self for Scala: “to” includes the last value, “until” does not.