My Blog Posts, in Reverse Chronological Order


CS Theory Part 1 of 8: Finite Automata

Sep 17, 2012

As I mentioned before, I am writing a series of blog posts on my Theory of Computation class. This particular post will be somewhat image-heavy due to my complete lack of experience in using LaTeX to draw state machine diagrams. Even the LaTeX I embed in these posts doesn’t look too great with this background, so I’ll have to do some more experiments. UPDATE May 16, 2015: I think the Jekyll + MathJax combination looks great now!

But anyway, in this class, I’m trying to understand three central areas: automata, computability, and complexity, and they are all linked by the following question:

What are the fundamental capabilities and limitations of computers?

Computability and complexity will come later in the course. Right now we’re focusing on automata.

To start off, let’s look at some basic computers, called finite automata. To put things formally, a finite automaton is a 5-tuple \((Q, \Sigma, \delta, q_{1}, F)\), where1

  1. \(Q\) is a finite set of states
  2. \(\Sigma\) is a finite set known as the alphabet
  3. \(\delta: Q \times \Sigma \longrightarrow Q\) is the transition function
  4. \(q_{1} \in Q\) is the start state
  5. \(F \subseteq Q\) is the set of accept states

But this is very abstract. Let’s get more specific by talking about the alphabet, and I’ll then return to discuss the other four points. An alphabet is defined to be any nonempty, finite set of symbols. For instance, \(\Sigma = \{0,1\}\) is a valid alphabet. And so is the alphabet composed of the 26 letters of the English language.

Related to alphabets, we have strings and languages. A string is just a finite sequence of symbols drawn from our alphabet. All the English words I type in this blog entry are strings over the alphabet composed of the 26 letters of the English language. The sequence of symbols in 11001100 is a valid string over the alphabet \(\Sigma = \{0,1\}\); indeed, any binary number is a string over this alphabet. And a language is a set of these strings.

Here is the key relation between languages and finite automata.

A language is called a regular language if some finite automaton recognizes it.

By definition, a finite automaton recognizes a language if the set of strings the automaton accepts is exactly that language. That’s the key. And to indicate what I mean, let’s look at a state diagram. Many finite automata can be drawn using these diagrams, and it’s highly advantageous to do so given how intuitive they are. The following is a state diagram of a finite automaton that recognizes some language.

dfa1

It’s a little blurry (future images will be better), but I hope you can still see the interesting symbols. First, there are four large circles, with one having a circle within it. Each of these four circles represents a state. Hence, \(\{q_{1}, q_{2}, q_{3}, q_{4}\} = Q\), the set of states (#1 on my list above). These are used to represent some situation that we encounter as we progress through a given string.

We have \(q_{1}\) forming our start state (#4 on my list above). This means that when we progress through a given string, this is where we start before we make any “moves.” And \(q_{4}\) represents the lone accept state, indicated by the double circle outline. We can have multiple — or zero — accept states; this particular diagram only has one. If a string ends on one of these states, it’s “accepted” by the machine. Otherwise, it’s rejected.

When I say “progress through a given string,” I refer to the process of determining whether a machine will recognize a certain string, and here is where we use the transition function (#3 on my list above), indicated by arrows in the diagram. Our alphabet in the above example is composed of just 0 and 1, so the machine only works with strings of that form. Let’s see what happens when we “input” a few strings into this machine.

  1. \(\varepsilon\) — This is the empty string. We start at \(q_{1}\) and stop there, since we have no characters to read. The finish state is the same as the start state, and since \(q_{1}\) is not the accept state \(q_{4}\), the empty string is not accepted by the machine.
  2. “1” — Here, we again start at the start state. From this point forth, always assume we start at the start state. Since we have a “1,” we go to state \(q_{3}\) as dictated by the transition function (i.e. the arrows). We stop here, because we’re out of characters. But unfortunately, “1” did not end in the accept state so, like \(\varepsilon\), it is not accepted by the machine.
  3. “0” — We go from the start state to \(q_{2}\). Again, we stop, and just as in the previous two examples, the machine doesn’t accept the string.
  4. By now it should be clearer what the machine accepts. Let’s go with “10” and see what happens. We start at \(q_{1}\) as usual. We proceed to \(q_{3}\) since we started with a 1. Our next character is a 0, so we go to \(q_{4}\) and stop there. At last! We have a string that is accepted by the machine! So whatever the language is here, it had better include “10” but exclude “1”, “0”, and the empty string.
  5. Let’s try “010”. We start by going to \(q_{2}\) due to the leading zero. Then we have “10” left, so we follow the arrow for the “1” which loops back to the same state. Then our final task is to go where the “0” arrow points to, but again, that goes back to the same \(q_{2}\) state! Thus, “010” is not accepted by this machine.

We can go on and on, but at some point, we have to come up with the rules for this language the machine accepts. Notice that state \(q_{2}\) is a “death” or “trap” state, because as soon as a string enters that state, it must end there. All strings are finite, and no matter what, any symbol we get (which is only a 0 or a 1 here) will lead us back to the same state. In other words, it is impossible for a string to be accepted (i.e. finish at \(q_{4}\)) if it ever enters \(q_{2}\). This means that if a string has a leading zero, it will never be accepted by the language/machine.

So now we know that every string in the language this machine recognizes must start with a 1. But are there further restrictions? The answer is yes. If we have a string that consists of all 1s, then we will always end at \(q_{3}\) due to the 1 that loops back to that state. This is not an accept state, so we need to consider having a 0 in some string. Notice that the accept state has a 0 that loops back to it. So if a given string ever reaches \(q_{4}\), as long as it ends in a 0, it will remain in that state and be accepted. It doesn’t matter if we have one, five, or a hundred zeroes. The only way a string can leave the accept state is by having a 1, which sends the string back to \(q_{3}\). But notice that this is not a death state! It is possible to come back to the accept state from \(q_{3}\).

Now we can formalize things. This state machine recognizes the following language \(L\):

\[L = \{x \mid x \mbox{ is a binary number that begins with a 1 and ends with a 0.} \}\]

And we know that this language is regular. (To show a language is regular, it suffices to make a state diagram of a finite automaton.)
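
For readers who prefer code to diagrams, here is a minimal Python sketch that encodes the machine above and simulates it on the example strings. This is my own illustration, not something from Sipser’s textbook or the course:

```python
# A direct simulation of the machine in the diagram. The transition
# function delta maps (state, symbol) pairs to next states.
delta = {
    ("q1", "0"): "q2", ("q1", "1"): "q3",
    ("q2", "0"): "q2", ("q2", "1"): "q2",  # q2 is the "death" state
    ("q3", "0"): "q4", ("q3", "1"): "q3",
    ("q4", "0"): "q4", ("q4", "1"): "q3",
}
accept_states = {"q4"}

def accepts(s):
    state = "q1"                          # the start state
    for symbol in s:
        state = delta[(state, symbol)]
    return state in accept_states         # accept iff we finish in q4

# The five example strings walked through above:
for s in ["", "1", "0", "10", "010"]:
    print(repr(s), "->", accepts(s))      # only "10" prints True
```

Running this prints True only for “10”, matching the walkthrough above.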

It’s also worth pointing out that the diagram above should be the simplest possible state machine (i.e., the one with the fewest states) that accepts this language. There are infinitely many larger diagrams that would also accept it.

Now that was a simple example. I want to bring up a more complicated question that uses these same concepts.

Interesting Question

Let \(A\) be the language consisting of all strings over \(\{0,1\}\)  containing a 1 in the third position from the end.

Designing a finite automaton that accepts some language is arguably harder than the reverse process of determining what language a given machine accepts. My solution to the above question is below. The diagram only needs to keep track of the last three symbols. There are four accept states, corresponding to the last three symbols being 100, 101, 110, or 111, which are the four possibilities for the last three symbols of accepted strings. Naturally, the four non-accept states correspond to the last three symbols being 000, 001, 010, or 011.

dfa2
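
As a sanity check on this construction, here is a small Python sketch (again my own, not from the textbook) that tracks only the last three symbols seen, which is exactly the information the eight states encode:

```python
# Each "state" is the window of the last three symbols read, padded with
# leading 0s; a string is accepted iff its third symbol from the end is 1.
def accepts(s):
    state = "000"                    # before reading any input
    for symbol in s:
        state = state[1:] + symbol   # slide the three-symbol window
    return state[0] == "1"           # accept states: 100, 101, 110, 111

# Strings shorter than three symbols are correctly rejected by the padding.
assert accepts("100") and accepts("0111")
assert not accepts("011") and not accepts("10")
```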

Up to now, I assumed that my finite automata were deterministic, so it was always possible to know what was happening. But soon I’ll be moving on to non-deterministic finite automata …

  1. From my textbook, Introduction to the Theory of Computation, Third Edition, by Michael Sipser. 










Why you Should Always Speak as if you’re in an Interview

Sep 15, 2012

A few days ago, I said that I was on the hunt for simple, yet effective deaf-friendly strategies that most people would be able to apply in life.

Here’s a basic one.

As much as you can, articulate as if you’re in an interview.

To be more specific, suppose you’re on the job market with a freshly minted Ph.D. In today’s situation, you may be competing with 300 other qualified candidates for one tenure-track faculty position at your dream school. If you manage to get an interview, you should feel proud, but it’s not a job guarantee. A competent interview means you remain in the applicant pool, and a terrible one … well, you get the idea.

So during that interview, what are the odds that you’ll mumble, slur your words, or fail to fully project your voice as your interviewer stands across from you? If you’re serious about getting the job, I think you’ll articulate very clearly.

But do most people extend this kind of voice in informal conversations?

I don’t believe that’s the case. Certainly for me, I pay much more attention to my speech when I’m giving a lecture or talking to a prominent person. But I need to work on extending that mentality to all conversations. I want to always make a good impression on my conversation partner by speaking as well as I can. If everyone else tried to do the same thing, we would all benefit.

That’s why getting into a habit like this is useful.

So please, as much as possible, do the following:

  • Project your voice and articulate by moving your lips. This is similar to not mumbling.
  • Focus on pronouncing the ends of your phrases so the sound doesn’t trail off.
  • Don’t talk in a rushed manner; relax and know that, for the most part, everything will be okay.
  • Don’t talk so abnormally loudly that hearing people wonder if there’s something wrong.

In my case, the first point I listed is the most helpful for me when I hear. But for those who may lack more “raw” hearing even with the help of assistive listening devices, the fourth may be more effective. Still, the first one, in my opinion, should take priority.










How to be More Deaf-Friendly: The Search for Simple, yet Stunningly Effective Strategies

Sep 6, 2012

Over the next year, I hope to embark on a long-term blog writing project. These posts will center around a key concept: how to be more deaf friendly. I don’t think I will restrict myself to one setting or entity — e.g., how a college or university can be deaf friendly. I want to search for strategies that apply broadly, whether they are used in an educational, social, working, or other environment.

I feel inspired to start this because too many times ignorant people have made seemingly simple situations, such as one-on-one conversations, much more difficult for me than is necessary. With only slight modifications in accommodation, demeanor, and other strategies, those situations can be made much more pleasant, beneficial, and invigorating to all parties involved.

My plan is to go beyond what the Americans With Disabilities Act and other laws require. I want to focus particularly on either human behavior, which cannot entirely be regulated by law, or simple strategies that are largely unknown. Obviously, I’ll have to be reasonable with what I can expect, and I anticipate that some of my suggestions may be controversial. But I also plan to argue that the tactics that can improve life for the deaf and hard of hearing have benefits that extend to hearing people.

Possible topics I may discuss in the future will likely fall in one of the following categories:

  1. accommodations
  2. body language/demeanor
  3. captioning/subtitles (though I have touched on the topic here and here)
  4. environment/setting
  5. speaking/speech

I am eager to see how this turns out, and how much I learn from this project.










My Experience with the “Miniature MIT Challenge”

Sep 1, 2012

mit_challenge

My summer job ended on July 28, 2012, and I was back home with about three or four weeks to go until the start of my junior year at Williams. I wanted to put whatever time I had to good use, and one way I thought of doing so was to prepare for my probability course this fall. Of the four courses I’m taking in the fall 2012 semester, probability is the easiest to prepare for since my professor gave out lecture notes in the middle of the summer to students (like me) who wanted to prepare well in advance.

And even better, I was able to take advantage of the wonderful repository of information on MIT’s Open Courseware. There’s a mathematics course at MIT called Probability and Random Variables, or 18.440, with a full set of lecture notes and exams with solutions.

The syllabus for my probability class closely matched what was covered in 18.440, so I wanted to challenge myself and see if I could pass the two midterms and the final (under the same constraints as actual MIT students) for that course before setting foot in my actual probability class. It’s a miniature version of Scott Young’s MIT Challenge. For obvious reasons, I have neither the time nor the desire to take a full curriculum of MIT classes.

But I thought investing in understanding 18.440 would pay dividends later.

I started my personal challenge on August 13 and took the first and second midterms on the 22nd and 28th, respectively. I scored a 90 on the first midterm, and a 100 on the second. At that point, there were just seven lectures left (out of thirty or so) before the final exam, but I could tell that most of the information beyond the second midterm would not be included in my real probability class. With other unfinished business this summer, I opted to skip the final. It turned out to be a wise decision; the practice final (there was no actual one) was almost an exact recitation of the lecture slides, with half of the answer key consisting of “look at the slides for lecture X.” I doubt it would have been an effective measure of how well I retained the material. I still reviewed it, but I didn’t take it under timed conditions as I did with the two midterms.

Right now, I consider myself done with my probability review, and am looking forward to finally taking an actual probability class in about a week.

Retrospective – What Worked …

  1. Solving practice problems – The multitude of problems with answers was extremely effective in providing me with applications of seemingly abstract concepts, as well as ways to approach particular problems. The questions on the two midterms were very similar to, and in my opinion much easier than, the practice ones. Thus, I achieved high marks.
  2. Following a textbook – The lecture notes only scratched the surface of the topic, so I had to rely on an outside source. Fortunately, the topics in my professor’s textbook were very much in parallel with MIT’s slides, making the learning process much easier.
  3. Skipping the homework assignments – There were no solutions provided to the homework, and frankly, I was able to do well on the actual midterms just by looking at some old practice midterms.

… and What Didn’t

  1. Doing a few lectures per day – I was surprised by how ineffective this seemed. I retained far more information when going over lectures in bulk. If I didn’t understand a concept in one of the lectures, I would spend hours agonizing over it, yet the same concept would become far clearer once I faced a question that dealt with it.
  2. Emphasis on proof-writing and derivation of formulas – Most of the proofs I understood were from my professor’s textbook. But 18.440’s exams were entirely computational, so the effort I spent understanding certain obscure proofs was wasted.









United Airlines, Where are the Captions?

Aug 24, 2012

united

Much progress has been made with respect to the amount of film time that is captioned. The Federal Communications Commission has the following basic requirements:

Beginning in July 1993, the Federal Communications Commission (FCC) required all analog television receivers with screens 13 inches or larger sold or manufactured in the United States to contain built-in decoder circuitry to display closed captioning. As of July 1, 2002, the FCC also required that digital television (DTV) receivers include closed captioning display capability.

Of course, there are some exceptions to closed captioning requirements, most of which have to do with undue economic burdens. I’ll talk more about that later.

As someone who relies heavily on reading subtitles or captioning when watching film, I’m supportive of any policy that requires them as long as the captions don’t unreasonably hinder my view of the screen. While there’s a slight distinction between subtitles and captioning, I will only refer to “captioning” here for the purposes of brevity.

Today, I will focus on captioning as it pertains to airlines. According to this page, they are only required on safety videos. Consequently, many people have voiced their concerns that airlines are failing to fully accommodate the hearing impaired by not having fully accessible videos.

My most recent captioning-related issue (the one that compelled me to write this entry) came from flying to and from Honolulu, Hawaii on United Airlines. On my flight from Chicago to Honolulu, there were many small television screens, each shared among eight to twelve passengers. Although the flight and total screen time lasted about nine hours, the only time I saw full captioning was during the introductory video covering safety and emergency procedures. While I commend the airline for providing those captions, it left much to be desired when the four movies shown afterward were not captioned. Passengers had to use a set of earphones to listen to the audio — something that would not be feasible or safe for me to use with my hearing aids and profound hearing loss. I ended up ignoring the movies entirely and wrote some blog entries on paper.

The return trip, also on United, was slightly better. This time, one of the flights was from Honolulu to Washington D.C., and on that one, every passenger received his or her own television screen. Still, it was difficult for me to obtain captioning. In fact, it took a little luck and some experimenting to figure out a workaround that got captioning on some, but not all, of the 170 movies offered (in economy class). I left the airline in a sour mood, with a paper pad filled with notes that would become the foundation of this blog entry.

My Thoughts and Complaints

United Airlines is one of the largest airlines in the world, so I assume they at least can’t use financial destitution as a rationale for not providing fully accessible captioning. My hope is that, starting with the largest airplanes, whenever a movie is shown, captions are either (1) mandatory, or (2) optional but always accessible. This should be a policy that’s part of all passenger classes and does not depend on how much one has paid for the tickets.

Scenario (1) should happen on flights similar to the one I described from Chicago to Honolulu. Here, passengers share television screens. Viewers should not have to go through the trouble of manually turning captions on, and then worry that others will turn them off out of annoyance or for other reasons.

Scenario (2) should occur on flights like mine from Honolulu to Washington D.C., where each passenger gets his or her own television screen. For movies, captions should always be an option if their corresponding language is used in the audio soundtrack. A movie offering soundtracks in English, Japanese, and Spanish, for instance, should have English, Japanese, and Spanish captions as an option.

Obviously, I’m not saying these changes should happen all at once. I understand that in this economy, airlines are operating on razor-thin profits. But progress has to start somewhere, and it has to move at a reasonable rate. I hope that the largest airlines can implement these changes on their biggest airplanes first. Perhaps English captions should be added before anything else, and then captions in other languages where they are not there already. I’m not sure if captions are fully accessible for passengers in first or business class, because I’ve never sat in those seats. But if they’re there, then good. That’s a start — it needs to be expanded to economy class. And the continuing accessibility trend needs to trickle down to as many small commercial planes as possible.

I’m a realist. Full accessibility is almost certainly never going to happen. But we can get as close to it as is reasonable.

As I mentioned earlier, I wrote part of this blog on my flight. Some of it ended up in a letter form that I sent to the Aviation Consumer Protection and Enforcement. I’ve decided to reproduce it below. Anyone else who feels compelled to take similar action, please do so. I appreciate relevant feedback and thoughts.

——

My Letter

Dear Aviation Consumer Protection and Enforcement,

I am a twenty-year-old deaf college student who has recently completed a round trip from Albany, New York, to Honolulu, Hawaii via United Airlines. As someone who cannot easily understand audio tracks, I am concerned about the lack of captioning on most of United Airlines’ movies and film clips. My hope is to raise attention to this issue and see whether United Airlines can eventually add captioning to all clips shown on their television screens. My letter is not meant to single out and traduce United Airlines in particular; it addresses a situation common among many airlines, in the hope that at least one will recognize the appropriate course of action.

I will focus primarily on my 9-hour flight (number 144) from Honolulu to Washington D.C. (Dulles) that took off on August 12, 2012. While walking to my economy class seat, I was ecstatic to see that each passenger had his or her own personal television screen. I anticipated being able to watch many of the movies stored in the plane’s database.

I immediately tried to see if I could obtain captioning or subtitles for the movies, and was disappointed when I couldn’t figure out a way to do so. A flight attendant confirmed my concerns after telling me she did not know how to get captioning.

I was not about to give up. Eventually, after a few minutes of tinkering and some luck, I figured out that there was a way to get captioning, but only on a certain subset of the movies. Out of the seven genres of movies United Airlines offered, which were New Releases, Action/Thriller, Classics, Comedy, Drama, Family, and World Cinema, only one category provided a guarantee of captioning: the World Cinema genre, accounting for 36 of the 170 movies available for me to watch. I suspect this is because those movies were filmed in non-English-speaking countries.

But this still means I am denied the ability to enjoy most of the movies offered. If it is not a huge burden to add captioning as an option to all movies, then this is a violation of Title IV of the Americans with Disabilities Act (codified in Title 47 of the United States Code, which deals with telegraphs, telephones, and radiotelegraphs).

Perhaps the most unfortunate realization from my experience on that flight was that, in terms of accessibility for the deaf and hard of hearing, it represented a best-case scenario. I at least had the option to watch a smaller selection of movies with captioning, even if it did not include my top choices. I still have not seen The Hunger Games, which was offered, but not captioned, on that flight.

I call this a best-case scenario because on most flights, I do not have that luxury of choice. I typically have to share a screen with eight to twelve other passengers. And on the many flights I’ve been on in my life — my guess is a little over a hundred — I don’t think I have ever seen a captioned movie shown. One example was my United Airlines flight from Chicago to Honolulu, which occurred about two weeks before the flight 144 I previously discussed. In that scenario, I am clearly denied the ability to enjoy the in-flight entertainment.

Thus, my experience as a flight passenger has often been frustrating. I hope to help push United Airlines in the right direction. I commend them for at least getting captions on the introductory safety and security video. I only ask that these services be fully extended to all featured movies, whether they are on shared screens or part of a package for individual passenger screens. In the case of my experience on flight 144, I would guess that almost all of the non-World Cinema movies offered multiple audio language tracks. If those services are provided, then what justification can be used to explain the lack of English captions?

I believe captioning needs to be mandatory on shared television screens during movies, and should always be an option if individuals have their own screens. At the moment, I am not going to ask the same for other audio services that I cannot understand, such as passenger announcements, since those are excluded under Federal Communications Commission rules. Taken right from their page that deals with captioning:

[Exceptions] include public service announcements that are shorter than 10 minutes and are not paid for with federal dollars […]

But I believe that movies should not be an exception to captioning laws.

It was disheartening for me to see the vast majority of passengers watch movies that I would not be able to enjoy. In the first few hours of the flight, I made fake trips to the restrooms so I could observe how many passengers enjoyed the comfort of their movies and earphones. (My hearing aids prevent me from using earphones.)

My guess would be around ninety percent of all passengers, many of whom no doubt take their hearing for granted. By providing captions for all movies and videos on board, United Airlines will be taking an appropriate and necessary step towards increasing accessibility for the deaf and hard of hearing.

Sincerely,

Daniel Seita

——

Further Reading

I recommend reading the actual text of the Americans with Disabilities Act. Airlines should be covered under the ADA. They weren’t always, but I think a recent ruling in 2008 or 2009 changed the situation. That’s something I’d like to investigate further. Another interesting website to observe is the Aviation Consumer Protection and Enforcement, as linked earlier. They have a record of all recent disability complaints filed.

——

(Photo by Airliner’s Gallery)










Accessibility Analysis: An Enlightening Overview of ESPN’s First Take Highlights

Aug 23, 2012

first_take

I don’t watch a whole lot of shows or movies. But I do enjoy watching Skip Bayless, Stephen A. Smith, and others over at ESPN’s First Take hilariously debate over sports topics ranging from how much blame LeBron James deserves after getting a triple double in a 2011 finals loss to whether it was appropriate or not for Tim Tebow to run topless (yes, they debated about that!). The episodes are each two hours long, so it’s rare that I can watch them entirely. So I turn to the highlight videos uploaded on ESPN.com.

And as you can see, they have closed captioning! There’s a little toggle to the bottom right of the video screen. I’ve found the captions to be reliable. They’re not perfect, but they’re significantly more accurate than computer-generated captioning.

first_take_ccs

Unfortunately, not all of these highlights offer me that option. (The full, 2-hour-long episodes aired on ESPN are completely captioned.) When I filtered the 159 most recent videos on ESPN.com (it seems like they archive old videos somewhere) to see only those that offered closed captioning, the options narrowed down to 64. But a closer look indicated that the oldest video listed as having captions was uploaded in January 2012, while the oldest videos in ESPN First Take’s current archives date to 2010. I noticed, upon inspection, that the most recent videos were captioned at a higher rate than the older videos.

That’s optimistic news, but I wanted to know more. One gripe I have is that it’s sometimes difficult to understand seemingly arbitrary captioning policies. Does ESPN have specific criteria for which online video highlights get captions? I emailed ESPN about my concerns and actually got a response, which was surprising considering the volume of messages they receive. The response, unfortunately, missed my main point, but it did indicate that ESPN has room to improve with regard to accessibility:

Dear Daniel,

Thank you for contacting us.

ESPN, Inc. does not release or sell copies of its programming or promos.

Sincerely,

[…]

I assume it’s impossible to get transcripts from ESPN, although there may be third-party releases out on the web. This isn’t optimal, but ESPN at least took a step to increase accessibility for the deaf and hard of hearing by having some captions for online video highlights. I commend them for doing so even though the short length of these clips may deter captioning. Virtually every other sports highlight reel I’ve seen online has been bereft of captions.

I will continue to discuss accessibility issues on this blog. Some entries related to accessibility will focus on minor issues, such as this one. But others will be more extensive and possibly include mailed letters. Obviously, these topics will be a little more serious than the one discussed today, and I have an entry prepared for that tomorrow. Stay tuned.










On a Super Hostile Series, and How I Should Relate to It

Aug 22, 2012

vechs

I might have mentioned this before, but to reiterate, I’m a huge fan of the sandbox computer game Minecraft. (That game and Sid Meier’s Civilization IV: Beyond the Sword currently take up about 95% of my gaming time.)

Today, I want to focus on one aspect of Minecraft, which is map making. Using a variety of third party world editing utilities, such as World Edit, players can generate gigantic, epic-looking buildings, landscapes, and other structures that would otherwise be too burdensome or impossible to build by hand. And when they’re satisfied with the quality of their map, players can upload their creation online to allow others to play it.

And there’s one mapmaker who has turned into a superstar.

That person would be Vechs. His Super Hostile series has become the most popular custom-made map series of all time, with well over a million total downloads and countless videos of his maps on YouTube; it has spawned a new generation of Minecraft map makers who aspire to create maps like his. I’ve put up a screenshot of the forums from August 16, which indicates that his topic has over three times as many replies as the one with the Skyblock map. Click for a larger view.

Whenever he updates or uploads a new map, hundreds of players download it the same day. He has become so good that Minecraft fans couldn’t ignore him. In fact, even a few developers of the game tried out his maps. (They didn’t succeed.) Vechs’ map-making success comes from creating essentially a new genre of Minecraft maps, called Complete the Monument [CTM] maps. Vechs started his series by making a variety of adventure maps that players could play like an ordinary Minecraft map.

But his series didn’t start to take off until he added the salient feature of a Victory Monument. Players now have to gather 19 different blocks scattered around his maps to successfully “complete the monument” and beat the map. The catch is that those blocks tend to be heavily guarded by enemies and ingenious traps, hence the name “Super Hostile.” To put things in perspective, beating the most difficult maps in his series is orders of magnitude harder than playing through a normally generated Minecraft map on hard difficulty.

So why am I discussing him now? And why is this entry filed under computer science1?

I think I see a deep connection between what he’s done and what I hope to achieve in my computer science research. Substitute “computer science” for “Minecraft” and any field of computer science for “map making,” and my point gets clearer.

Vechs took a field of Minecraft — map making — and specialized in one thing: CTM maps. His experience has unequivocally made him the current world expert on creating those kinds of maps. There are other “fields” of Minecraft that people can become famous in. Examples include creating “Let’s Play” YouTube videos, popular modpacks, creative mode structure work … the list goes on. But stripping away most of his other options let Vechs maximize his focus on the one aspect that made him famous in the Minecraft world. His latest maps show significant improvement over the first few, indicating that even the best have room for improvement. But he needed time to practice his map-making skills before he got really good at it.

So what should I do?

Right now, simply based on my current research experience, my favorite field of computer science is machine learning. So over the next year, unless I suddenly experience an epiphany and fall in love with complexity theory, I should aim to learn as much machine learning as possible. Consequently, that will be the topic I hope to explore in my next REU project. And since I’m actually taking a course in machine learning in the spring 2013 semester, I know which of my four spring classes I should expend the most effort towards.

That is my mindset. I want to eventually get to the cusp of current knowledge in one field (not two, not three) of computer science — machine learning. Then, I hope to make an impact by making a leap in my field. As others have noticed, that’s how many new discoveries are made. Vechs created a “leap” in the map making community by inspiring a new genre of maps. Hopefully, I can help create generations of future research projects.

  1. Well, it was filed under computer science, under the old blog system (before May 15, 2015). 










Goals, Part 2

Aug 21, 2012

You’re in luck. There will be a quadruple-header of blog entries (four in four days). This is the opening post.

Almost a year ago, I posted a blog entry discussing some of my short-term goals. So this is the second post in a possible series of entries that delineate objectives I set for myself. In the previously linked entry, I mentioned that the goals were for the fall 2011 semester only, but since I didn’t make another set of goals for the spring, I’ll just extend those 18 to cover the spring 2012 semester and the following summer. In that respect, I completed goals 1, 3, 4, 5, 7, 8, 16, 17, and 18.

But now, I want to be more serious. What are the five most important objectives I hope to achieve from now until the end of the summer of 2013? I’ve already mentioned blogging more, so I won’t say it again. But I did come up with …

1. Maintain a better sleeping schedule

My sophomore year’s sleeping schedule was shambolic. One night, I’d go to sleep at midnight. Then it would be 4:00 AM the following night, then back to midnight, and the madness continued. Too often, I let my work dictate how much I slept. This even caused me to uncharacteristically miss several classes by oversleeping.  So not only do I want my schedule to be stable, I also want to shift it back a few hours so I can comfortably wake up at 6:00 or 7:00 AM consistently. Since my earliest class on any day of the week is at 9:55 AM, I should have ample time to study, work, or blog in the morning.

2. Study for the Graduate Record Examinations

This is fairly straightforward. I’ll take the general GRE towards the end of my junior year, and I’ll probably take the computer science GRE subject test in the fall 2013 semester. With the exception of midterms and finals periods, I want to dedicate at least 4 hours per week to this task. The aim is to get a high enough GRE score so that my application is not immediately thrown out the window for the top computer science graduate programs. I’ve actually been studying a little this past summer by reviewing some GRE and even SAT vocabulary. Does anyone have additional recommendations for me?

3. Maintain a 3-day workout week

Since the start of November 2011, I’ve consistently been weight training about three days a week, with rare exceptions (they’re justified, trust me on this). Exercises include the squat, hang clean, bench press, pull ups, push ups, core circuits, leg press, calf press, and deadlifts. My strength has notably improved, and I’ve kept my body weight at around 150 pounds while avoiding any of the “Freshman 15.” Why couldn’t I have maintained this level of dedication in high school?

4. Pick better situations to socialize

I’ve already discussed at length why I do not socialize well in certain situations. I need to avoid them without giving the impression that I’m too reclusive. Related to that, I want to find ways to create more one-on-one conversations in quiet environments. I don’t think I’ve done this as well as I could have in the past two years — but I have ideas to rectify that.

5. Engage in more computer science research

The final goal here relates both to my ongoing project of text simplification and what I hope to engage in the following summer. (I don’t think I can realistically take part in an entirely new research project during the academic year, with classes and my work-study teaching assistant duties.) My Bard College research team is still running some experiments, and I’ll be in touch with them to hopefully write up our results. I also want to focus on REU applications for the summer of 2013. I could work solo with a Williams faculty member over that summer, but collaborating on a project with a group of three or four researchers is much more compelling to me. I’ll be in touch with the Williams faculty nonetheless. The preliminary goal is to find a project in the area of artificial intelligence.

That’s all for now. It’s time for me to do them….










On My New Theory of Computation Series

Aug 14, 2012

waimea

UPDATE May 13, 2015: I only managed to do half of what I wanted for this series, but at least I did something. As of now, I’m not going to go back to working on this because my current academic and research interests have shifted.

The fall 2012 semester is approaching. It’s not as fast as those winter waves in Waimea Bay, but close enough. (Yes, the above photo I took is of the same beach — click for a larger view.)

Here are all the courses I’m taking:

  1. Applied Abstract Algebra
  2. Computer Graphics
  3. Probability
  4. Theory of Computation

All are lectures, with Computer Graphics being the only course that includes a lab component. Classes 1 and 3 satisfy my math major requirements, while 2 and 4 are for computer science. For the first semester ever, every class I’m taking falls within my two majors. So on the one hand, this means I’ll maximize my dedication to these classes, and will probably get high marks (famous last words).

But unfortunately, it means I won’t have as many options if I get a little burned out on computer science and math. I tend to spend long hours studying for exams and working on homework, so I’m going to try something that will hopefully alleviate some of the workload. This is purely an experiment, and one that I plan to continue if it brings solid results this semester.

The Plan

I’m going to make a series of blog posts on my Theory of Computation class (henceforth, CS 361). For reference, here’s the course description from the Williams Course Catalog that delineates the fun stuff coming up for me:

This course introduces a formal framework for investigating both the computability and complexity of problems. We study several models of computation including finite automata, regular languages, context-free grammars, and Turing machines. These models provide a mathematical basis for the study of computability theory–the examination of what problems can be solved and what problems cannot be solved–and the study of complexity theory–the examination of how efficiently problems can be solved. Topics include the halting problem and the P versus NP problem.

After every few classes, I hope to record on Seita’s Place what I learned and any relevant information going above and beyond the classroom discussion. By the time I take the midterm and final, I’ll have a nice repository of information online to help do quick review. I will strive to start these entries as soon as possible in draft form, and will add information to them a few hours after each CS 361 class.

There will be a consistent format for each of the posts. Each entry will be titled “CS Theory Part X: Y” where X is some natural number, and Y is a phrase that relates with the material I’ve learned and will cover in the blog entry. I want this to be like a personal Wikipedia that makes heavy use of rigorous proofs and outside sources.

The Benefits

So why do I want to do this? The most important benefit is that it deepens my knowledge of theoretical computer science in a way that avoids long study hours and memorization sessions. Again, since I plan to update these entries soon after my classes end, I will minimize the amount of material I forget over time. Furthermore, by writing these entries in my own words, I force myself to understand the material well, a prerequisite for explaining a subject in depth. (There’s a whole host of information online that backs up the previous claim.) Since I don’t want to write a book on theory, I have to pick the right spots to focus on, which requires me to judge effectively the importance of all the concepts hurled at me in the class. Also, using the Internet rather than paper for these posts makes it easier to link concepts together in a web, as explained by Scott Young’s holistic learning method.

But this raises the question: why this class, and not one of the other three?

My long-term goal is to pursue a Ph.D in computer science. As part of the process, I’ll be taking the computer science GRE and Ph.D qualifying exams. As you might expect from the course description, the material in CS 361 is more closely related to what’s going to be on the test than the material in the other three classes. Taking the breakdown from the Educational Testing Service website, we see that 40 percent of the material is theory!

III. THEORY AND MATHEMATICAL BACKGROUND — 40%

A. Algorithms and complexity

Exact and asymptotic analysis of specific algorithms
Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)
Upper and lower bounds on the complexity of specific problems
Computational complexity, including NP-completeness

B. Automata and language theory

Models of computation (finite automata, Turing machines)
Formal languages and grammars (regular and context-free)
Decidability

C. Discrete structures

Mathematical logic
Elementary combinatorics and graph theory
Discrete probability, recurrence relations and number theory

I suspect the amount of theory material on Ph.D qualifying exams is similar. These vary among institutions, so there’s no standard.

Computer graphics, while no doubt an interesting subject, isn’t as important in terms of the subject test material.

IV. OTHER TOPICS — 5%

Example areas include numerical analysis, artificial intelligence, computer graphics, cryptography, security and social issues.

It would also be more difficult for me to post graphics-related concepts online, as I’m certain that would involve an excessive number of figures and photos. I do have a limit on how many images I can upload here, and I’m not really keen on copying a whole lot from my graphics class’s webpage; I prefer the images here to be my own creations.

I also chose CS 361 over my two math classes. If I’m planning to pursue doctoral studies in computer science, it makes sense to focus on CS 361 more than the math classes. I was seriously considering doing some Probability review here, but the possibly vast number of diagrams and figures I’d have to include (as in graphics) is a deterrent.

Finally, another benefit of writing the series will be increased attention to Seita’s Place. I hope to boost my writing output and my research into deafness and deaf culture. Even though it’s relatively minor, this blog has become more important to me over the past year, and I want to make sure it flourishes.

I’ll keep you posted.










Update from Honolulu, Hawaii

Aug 10, 2012

As I mentioned earlier, I’m in Honolulu, Hawaii. I’ve been taking many pictures, which gives me the chance to finally add a picturesque post to my text-based blog. And since I just turned 20, I can continue to post on my birthday, just as I did last year.

One of the highlights of my trip was visiting Kauai, the fourth Hawaiian island I’ve set foot on. I took a picture of the same falls that was on the cover page of the tour guide. It’s gorgeous.

hawaii1

Here’s a view of Waimea Canyon, or the Grand Canyon of the Pacific. I’ve never been to the real Grand Canyon, but it felt like I was looking at a similar structure of nature. The view was so nice that I decided to add a panorama of this to replace the ugly default image of this WordPress Blog template.

hawaii2

Back in Oahu, I was at Waimea Bay. (What’s up with the repeated name?) This is one of the most beautiful beaches I’ve set foot on. I’m glad this is the summer — I’ve heard that real waves appear in the winter.

hawaii3

There were also some nice Japanese-style temples on the way back from Waimea.

hawaii4

Afterwards, I made my way to the famous Nuuanu Pali Lookout, whose strong winds are (sadly) the only thing that can make my hair stick up. Looking at this reminded me that I should have increased the resolution of my images on my phone.

hawaii5

Finally, I’ve also had my share of mischievous moments.

hawaii6

All right, that’s enough for now. It’s back to more traditional blog entries later. I have a lot I want to talk about in the next month.










Hearing Aids: How They Help, and How They Fall Short in Group Situations

Aug 6, 2012

Daniel, how much do your hearing aids help you hear?

hearingaids

I’m surprised I don’t get asked this question often by my classmates, colleagues, professors, and others. Perhaps it’s because my speaking ability causes hearing people to believe that I can hear as well as them. Or maybe they believe that hearing aids can cure hearing loss?

Unfortunately, that’s not what they do. They amplify sound, allowing me to be aware of its existence. If someone’s talking, then I know that the person is talking, regardless of whether I’m looking at him or her. The difficulty for me, and for other people who wear hearing aids, is understanding the fine details.

Here’s an analogy. Suppose you’re on your computer and are looking at a tiny image that’s 50×50 pixels in size. From what you can see, it’s interestingly complex and alluring. It’s bright and colorful. There are wondrous curves that weave together and form some figure you can’t make out. You want to better understand what the image conveys, so you do the logical thing and try to resize it by pasting it into Photoshop and then dragging its edges to fit your entire monitor. (I’m assuming you’re a little naive — no offense intended.)

But there’s a problem.

When you do that, the image doesn’t become clearer. Your computer has to invent new pixels that coincide with the original pixels. As a result, the “large” image is a badly distorted version of the smaller one, and you still can’t fully understand what the image means. But perhaps it does convey more information than the really small image did, so despite its flaws, you stick with this method of understanding tiny pictures.

As I mentioned earlier, hearing aids let me perceive sounds that I would otherwise be unable to detect with the naked ear. A lot of sounds. In fact, I can hear no sounds with an intensity of 90 decibels or lower with my natural hearing; it’s the epitome of being “totally deaf.” For me to hear and understand a person talking in a normal tone without my hearing aids, my ears must be inches away from his or her mouth. Obviously, that’s not happening during most of my conversations, so I wear hearing aids almost all the time when I’m not sleeping.

A Typical Situation

I want to discuss a challenge that occurs often in my life, which is socializing in groups. Most of its difficulty is due to mechanical limitations of hearing aids, but there are also other forces in play. One example comes from my 2012 Bard College REU, where I would often amble with one or several other students for ten minutes at a time, the walking time from our dorm to the heart of campus.

In the company of just one student, I can easily start and maintain a satisfying conversation. I still have to ask my companion to repeat every fifth or sixth sentence he says, but at least I get the general direction of what we’re talking about.

bard_science_center

But things get exponentially worse with three, four, and more people. And this is where the hearing aid’s inability to make sound distinct hurts me. What typically happens is that the other people get involved in a conversation that I can’t follow. There are multiple factors that hinder my hearing in this kind of situation.

The first is what I explained before, the hearing aid’s difficulty in clarifying amplified sound. (I won’t go into depth on the technical reasons, since there is a whole host of articles, such as this one, that talk about hair cells.) A second factor is that in a group conversation, people often don’t look directly at me when talking, thus rendering my lip-reading skills useless. (Lip-reading can account for about 25 percent of my comprehension.) Moreover, my hearing aids are designed to best amplify sound when it’s coming from directly in front of me, which isn’t always the case. It’s even less often the case if all of us are moving (i.e., walking), which adds another roadblock to my comprehension.

A third source of hindrance is simultaneous talking and ambient noise. As you can imagine, when my companions keep interrupting each other or laugh as a group, it adds another level of complexity. In order for me to understand as well as a hearing person could in that situation, my hearing aids would have to somehow partition the various sources of sound into “Sound coming from Person A,” “Sound coming from Person B,” and so on. Things get worse when there happen to be, say, thirty young campers yelling in a group next to us, which is why I prefer quiet cafeterias and restaurants.

A fourth problem — yes, there’s more! — is the “Deaf Nod,” which doesn’t relate so much to hearing aids as it is a consequence of being deaf. This occurs when a deaf person who doesn’t understand what’s being said in a conversation gives a weak nod to create the impression that he or she understands what’s going on, when the exact opposite is true! I’m so guilty of the Deaf Nod that I feel ashamed. Part of this stems from frustration at the lack of understanding, and another motivation is to avoid seeming like a hassle to the people I’m conversing with. The ultimate result is that I just play along with a conversation that I’m unfamiliar with, which has a chain effect, since I then don’t fully understand what’s discussed next if it builds on previous dialogue. It’s more common for me to do the Deaf Nod when there are at least five people involved, or if I’m not really familiar with my conversation partners.

Finally, my hearing aid batteries could die at any moment. Fortunately, I usually receive some form of notification, such as several awkward beeps. But sometimes there’s no warning, and my batteries die suddenly. If it’s my right hearing aid that stops working, then it’s not much of a problem, because I obtain most of my hearing from my left ear. But any extra bit that I can hear helps. And it won’t always be possible for me to have easy access to spare batteries. I usually put them in a small pouch in my backpack, so I have to dig in, take out the old battery, and insert a new one. Even though the entire process takes only a few seconds, doing it detracts from my focus on the ongoing conversation, making it even harder to get back in the mix. And I’m not even going to discuss the case when my left hearing aid is the one that dies — I can barely understand anything if that happens!

Battery failure is the most common of hearing aid technical difficulties, but there are others. Even though my hearing aids are generally reliable, I have experienced many cases of when technical difficulties have ruined current and potential conversations.

To recap, here’s a list of the barriers I experience:

  1. Difficulty in clarifying amplified sound
  2. Lack of eye-contact
  3. Simultaneous talking
  4. Frustration and the “Deaf Nod”
  5. Hearing aid technical difficulties

So while the benefits of hearing aids are enormous, the (non-exhaustive) challenges listed above make it impossible for me to experience what life is really like for hearing people, especially during group situations. There are other cases when the hearing aid falls short of optimality, such as when I’m watching television, but I can write an entire rant blog entry about that later.

Now for the Good Part

I don’t want to give the impression that hearing aids don’t help me at all. I was merely highlighting a salient downside. But the reality is that without them, I would never be aware of many noises that exist in today’s world. I don’t think I would have done as well in school if I didn’t have hearing aids, and I certainly wouldn’t be able to do well at an academically rigorous school like Williams College (second in Forbes’s 2012 college rankings!), since I rely heavily on communication with my professors.

And I also wouldn’t be as eager to go to computer science graduate school.

Look at the home pages of the computer science faculty at your college or university, and go see their non-dissertation publications. How many papers do not have a second author? I checked as many Williams computer science faculty publications as I could, and I would guess that just two or three percent of them (textbooks and informal papers excluded) didn’t have two or more authors.

I’m sure that most of the communication involved was email, but from my own experience, I’m convinced that it’s so much easier to conduct group research face-to-face. And that’s the baseline of what I need from my hearing aids. I need to hear just enough to make working on a research project feasible, which means communication should not be a research roadblock.

Conclusion

I love hearing aids. I put them in my ears as soon as I wake up every morning. I take care of my hearing aids by cleaning and drying them regularly. I store all pairs in soft containers and use my portable hearing aid dryer to ensure they don’t break down due to moisture. Even though it’s really tempting to do so, I never take them for granted, and constantly remind myself of how dependent I am on them. (Writing this entry was one way to do that, for instance.)

Hearing aids have offered me the chance to experience the euphony of the world and to be capable of socializing with the vast majority of people I meet. But they perform poorly in many situations, falling short of normal hearing, and they are incapable of truly curing significant hearing loss.










Java to Python Transition

Aug 5, 2012

Java was the first programming language I felt comfortable enough with to write lengthy programs, the kind that, for instance, could be used to advance the goals of a research project. So for my first program needed for my summer research at the Bard College REU, I used Java: I wrote code to create a random sentence generator. That was during my second week of the REU, and at that point I had written exactly one significant Python script in my life, for a bonus question from my Algorithm Design & Analysis homework.

Let’s fast forward a bit. By the time the REU ended, I had written over 20 significant programs … in Python. So what happened?

At the start of the REU, I knew much more about Java than Python. But after some prodding from my advisors, and given that everyone else on my research project was using Python, I switched languages. I soon found, as they said I would, that Python was much easier to write than Java. In particular, file input and output is stunningly simple yet incredibly useful, a must for all the scripts we wrote that involved manipulating files. My project, after all, was about text simplification, and all the relevant corpora were stored in files.
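
As an illustration of that simplicity, here is roughly the pattern most of our file-manipulating scripts followed; the file names and the particular cleanup step are hypothetical.

```python
# Read a corpus line by line and write a cleaned-up copy.
# In Java this takes a BufferedReader, a FileWriter, and try/catch
# boilerplate; in Python it fits in a few lines.
with open("corpus.txt") as src, open("cleaned.txt", "w") as dst:
    for line in src:
        if line.strip():               # skip blank lines
            dst.write(line.lower())    # e.g., normalize to lowercase
```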

I also found the official Python documentation easier to understand than Java’s, so looking things up was less of a challenge. Like the guy who wrote this (heavily biased) article about Python versus Java, I have to look up a lot of things for Java, whereas I need to do that far less often for Python.

My experience confirmed what I’ve heard: learning new languages is easier once one becomes proficient at another language. Well, with the possible exception of Malbolge.










Year 1 Retrospective

Jul 29, 2012

Update February 14, 2020: I made several changes to my earlier posts, so some (if not most) of the content in this blog post is not relevant anymore.

I started this blog on August 1, 2011, so I get to celebrate that I’ve been blogging for about a year. I now want to take a step back and see what I have achieved and what goals I should set for the future.

This is my 36th entry, so if you do the math, that’s 3 posts every month, which is a pace I hope to at least maintain in the next few years. There were definitely times when I didn’t feel interested in blogging, but as I explained in my [Blog Productivity][1] entry, I anticipate that to change. Of course, this assumes I maintain my normal blog entry size of about 500 words.

Of my 36 entries, 12, 9, and 15 (including this one) are filed in the categories Computer Science, Deafness, and Everything Else, respectively. I’ve found that it’s a lot easier to come up with topics to write about in the “Everything Else” category than in the other two. But I’m trying to carve out a niche for myself in the blogging world. Anyone can write a generic blog about any topic, but I want to devote more attention to computer science and deafness, two subjects that encapsulate much of my life.

I’ve also started the habit of writing multiple drafts before publishing posts. In the past, I would start blog entries from scratch and post them as soon as I was tired of writing. (And any future edits would require me to fix something that was already published.) But now, I have multiple drafts of posts saved in my WordPress Dashboard. I think my past urge to publish as soon as possible came from wanting to get as much content up on my blog as possible, even if it represented less than my full effort. But now that I’ve got a good amount of material, it’s been easier for me to sit back, sift through my drafts, make (possibly major) revisions, and publish only when I feel I’ve poured enough effort into an entry.

Finally, I’m trying to increase traffic to Seita’s Place. I’ve been finding and following other blogs that are closely related to the topics I post about here, in the hope that this leads to some reciprocal action. However, since I enjoy writing, I will keep blogging even if I don’t get many visitors.

All right, let’s switch gears. Now that my Bard College REU is over, the next chapter of summer 2012 starts — in Honolulu, Hawaii! I’ll be there for two weeks and I hope that I’ll have a great time with my family. And true to my new habits, I have a few interesting posts backed up (no less than five), which will likely be published upon my return to Albany, New York. Stay tuned.










New Page Added: Top Ten Entries

Jul 28, 2012

UPDATE May 13, 2015: With the move of this blog to Jekyll, some of these points might not be relevant anymore.

With the way the blog has been growing recently, I’ve decided to add a new page, called “Top Ten Entries,” that contains my ten favorite entries here along with rationales. You can find it at the very top of the blog, next to the existing “About” page. In a way, my new page serves as self-motivation: whenever I’m writing a yet-to-be-published entry, I’ll be wondering how I can tweak it so it can crack my top ten list.

Furthermore, the page can be used by readers who want to see a sampling of my work or who do not have the time to read all entries. I also suggest that people visiting this blog for the first time should take a look at that page. In the future, I may make a separate page, “Start Here,” just for that purpose, but I think the page I just set up now will be well-suited as a starting point. Any constructive feedback on the structure of “Seita’s Place,” of course, is more than welcome.










Wrapping Up my Summer Research

Jul 27, 2012

I’ve written my final report to the National Science Foundation for my summer work at the 2012 Bard College Mathematics & Computation Research Experience for Undergraduates. My project focused on text simplification, which can be broadly defined as the process of making (English) text easier to read while maintaining as much underlying content as possible. I would consider it to be a subfield of machine learning, which is a branch of artificial intelligence.

The primary objective of my work was to improve on existing text simplification results. There is no standard way to measure the quality of a simplification, so my research team, consisting of me, another undergraduate, and two Bard College professors, decided to use BLEU scores. The advantage of those scores is that, in the summer of 2011, another research group used them to measure their level of simplification. Our goal was to beat their BLEU scores.
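
For readers unfamiliar with BLEU: it scores a candidate sentence by its n-gram overlap with one or more reference sentences. As a rough illustration, here is a sketch using NLTK; NLTK and the example sentences are my own choices for this post, not necessarily what we used in the project.

```python
# pip install nltk  (NLTK is used here only for illustration)
from nltk.translate.bleu_score import sentence_bleu

# One gold-standard simplification and one hypothetical system output.
reference = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "sat", "on", "a", "mat"]

# Defaults to uniform weights over 1- through 4-grams.
print(sentence_bleu(reference, hypothesis))  # closer to 1.0 = closer match
```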

To carry out the translation process, we used the open-source Moses software. To train the system to simplify text, we used aligned, parallel data from Wikipedia and Simple English Wikipedia. Moses is designed to translate between different languages, but we treated “Simple English” as a language of its own. Thus, we viewed the project as an English-to-English translation problem, where the “foreign” language comes from Wikipedia and the “simple” language comes from Simple English Wikipedia. Our hope was that Moses would learn the steps involved in making English text simpler and act as an effective means of translation.

We soon discovered, though, that our parallel data was of low quality. We therefore used the LIBSVM and LIBLINEAR classification packages to improve the data by switching or deleting certain pairs of lines from the parallel corpus (a sketch of the idea appears below). For instance, if a short sentence in the Wikipedia data was aligned to an obviously more complex sentence in the Simple English Wikipedia data, it made sense to switch those two sentences so that each sat in the more appropriate data set. After all, the data was written by random people contributing to the two Wikipedias, so there were bound to be a few bad samples here and there.
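
Here is a minimal sketch of that cleaning idea. It uses scikit-learn’s LogisticRegression (which wraps LIBLINEAR) in place of our actual LIBSVM/LIBLINEAR setup, and the sentences, labels, and TF-IDF features are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: sentences labeled 1 (simple) or 0 (complex).
sentences = [
    "The cat sat on the mat.",
    "Notwithstanding prior stipulations, the feline remained sedentary.",
    "Dogs like to run.",
    "Canine locomotion exhibits considerable variance across breeds.",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences)

clf = LogisticRegression(solver="liblinear")  # LIBLINEAR under the hood
clf.fit(X, labels)

# If a sentence from the "simple" corpus scores as complex (or vice
# versa), its aligned pair is a candidate for switching or deleting.
test = vectorizer.transform(["The dog ran fast."])
print(clf.predict(test))  # 1 = predicted simple
```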

Our group successfully classified a random sample of sentences as belonging to the Wikipedia set or the Simple English Wikipedia set with a higher degree of accuracy than previous researchers did. My professors are still performing some experiments, so it remains to be seen if we can get higher BLEU scores.

Overall, I’m glad I had the chance to participate in the Bard REU, and I’m optimistic that we will produce a strong research paper. I thank the professors there for accepting me into their program, and the National Science Foundation, which has now sponsored my second straight summer program.