My Blog Posts, in Reverse Chronological Order
subscribe via RSS
On November 5, I attended part of the Fall 2014 Retreat for the Berkeley Vision and Learning Center (BVLC). The BVLC is a new group of faculty, students, and industry partners in EECS that focuses on research in vision (from computer vision to visualization) and machine learning. The retreat was held in the Faculty Club, a nice, country-style building enclosed by trees near the center of the UC Berkeley campus. While there were events going on all morning (and the day before, actually), I only attended the poster session from 5:00pm to 7:00pm and the dinner after that.
The poster session wasn’t as enormous as I thought it would be, but there were still quite a few people crowded in such a small area. I think there were around 15 to 20 posters from various research groups. I brought one about the BID Data project, whose principal investigator is John Canny. I’m hoping to become a part of that project within the next few weeks.
As far as the people who actually attended, there were a good number of faculty, postdocs, senior graduate students, and even industry people (from Microsoft, NVIDIA, Sony, etc.). Among the faculty, I saw Pieter Abbeel, Trevor Darrell, Alexei (Alyosha) Efros, Michael I. Jordan, and Jitendra Malik at various times throughout the evening. (Trevor is the head of the group, so he was guaranteed to be there.) I had two interpreters for the poster session, which was probably overkill, but they helped me understand what a few people were saying when I went to see two specific posters that interested me.
I didn’t have anyone there for dinner, though, which meant it was a struggle for me to communicate. Also, during dinner, we listened to guest speaker Andy Walshe of Red Bull Stratos. His talk was titled Leveraging Crossmodal Data from High Performance Athletes at Red Bull. Andy mostly talked about the limits of human performance, and as far as I can tell, his talk was not an advertisement for the actual drink known as Red Bull, which as everyone knows is dangerous to consume. Even so, I was often wondering why this kind of talk was being given, because I would have expected a “traditional” machine learning talk — but maybe I missed something at the start when Trevor Darrell was introducing Andy. (This is one of the things one should realize about me; dinners and talks are some of the most difficult situations for me to be in, while they may be quite easy to get involved in for other people.)
I could tell that the talk was not overly technical, which meant that there was a lot of discussion and questions once the talk was over. In particular, Michael Jordan and Alexei Efros asked consecutive questions that made everyone in the room (except me) roar with laughter. I’ll have to find someone who can explain what they said….
(Note: the image at the top — taken from the Faculty Club website — shows the location where we had dinner and where Andy gave his 30-minute multimedia presentation.)
Richard Ladner showed me a link to the September 2014 SIGACCESS newsletter, which contains a personal essay on why he made a career transition from being a computer science theorist to an accessibility researcher. (Frequent readers of my blog will know that I met Richard Ladner as part of the Summer Academy.) As usual, I’m a bit late with posting news here on this blog — this one is a few months old — but here it is and hopefully you enjoy his essay. Some highlights:
- Richard: “Although I am not disabled, disability is in my fabric as one of four children of deaf parents. Both my parents were highly educated and were teachers at the California School for the Deaf, then in Berkeley, California. They both used American Sign Language (ASL) and speech for communication, although not simultaneously.”
- Richard: “When I started at the University of Washington in 1971 I had no intention of doing anything in the area of technology for people with disabilities. I worked exclusively in theoretical science where I had some modest success. Nonetheless, somewhere in the back of my mind the transformative nature of the TTY helped me realize the power of technology to improve people’s lives.”
- Richard: “A light bulb went off in my head when I realized that innovation in technology benefits greatly when people with disabilities are involved in the research, not just as testers, but as an integral part of the design and development team.”
- Richard: “In 2002, with the arrival of Sangyun Hahn, a new graduate student from Korea who happens to be blind, I began my transition from theoretical computer scientist to accessibility researcher. By 2008 the transition was complete.”
- Richard: “One activity that I am particularly proud of is the Summer Academy for Advancing Deaf and Hard of Hearing in Computing that I developed with the help of Robert Roth who is deaf. […] Eighty-three students completed the program over its 7-year run from 2007-13. About half of these students became computer science or information technology majors.”
- Richard: “For students who want to become accessibility researchers I also have one piece of advice. Get involved at a personal level with people with disabilities. With this direct knowledge you are more likely to create a solution to an accessibility problem that will be adopted, not one that will sit on the shelf in some journal or conference proceedings.”
On a related note, Richard isn’t the only scientist who has made a late-stage research transition. I personally know several scientists/professors (though none as well as Richard) who have substantially changed their research agenda. One interesting trend is that people who do make transitions tend to move towards more applied research. It’s almost never the other way around, and I suspect that it’s due to a combination of two factors. First, theory-oriented research requires a lot of mathematical background to make progress, which can be a deterring factor. And second, I think many theorists wish their work could have more of a real world impact.
Lately, I’ve been disappointed at my lack of ability to effectively socialize in various mingling sessions. Examples of these include the Berkeley graduate student social events, the Williams College math and computer science “snack/social” gatherings, research-style poster sessions (see this Williams math post for some sample images), and basically any kind of party. Typically, if I attend these events, I end up saying hi to a few people, stand around awkwardly by myself for a while, and then leave long before the event concludes, feeling somewhat dejected. This has been an issue throughout my entire life.
I don’t normally have anyone with me (such as an ASL interpreter) to help me out with communication, so I know that I’m already disadvantaged to start with, but I would like to think that I can manage social events better. I’ve tried various tactics, such as coming early to events, or going with someone else. Even when I arrive early, though, when the event starts to gather some steam and more people arrive, they tend to immediately conglomerate into groups of two or more, and I am often left out of any conversation. Furthermore, there’s no easy way for me to convince a group of five laughing students to include me in their conversation, and to also ask them to move to a corner of the room to decrease background noise.
Also, in my past experience, when I’ve attended an event with at least one other person, I can briefly remain in a conversation with the group I went with, but we end up splitting at some point. This usually means they’ve found someone else to talk with, but I haven’t.
The worst case scenario, of course, is if I arrive alone and late to a loud social event. By that time, everyone’s found a group to stick with and I don’t know what else to do but watch a bunch of people chat about some mysterious topics … or leave.
So what should I do, then? I’m not someone who can just walk into a room and command everyone’s attention, and as past experience makes evident, I’m going to need to work to get involved in a non-trivial conversation.
Unfortunately, I can’t (and shouldn’t) avoid social events altogether, and the reason has to do with academic conferences. Especially in a field like computer science, where top-tier conference publications are what “count” for a Ph.D. student’s job application, it’s crucial for Ph.D. students to attend conferences and network with other people in the field. Even though I’ve already started graduate school, I have still (!) not attended a single academic conference, though I hope to do so in the future, and I worry about how I will handle the various social events they offer. I wrote a little bit on the topic of academic conferences before, but I’m more concerned with the social aspect here, not with the process of obtaining accommodations, which hopefully won’t be too bad given the resources that Berkeley has at its disposal.
I don’t have any answers to this right now, so I would appreciate any advice you might have. In the meantime, I’ll continue brainstorming different strategies to improve my social situation in events that involve mingling, because I’m attending a poster session in three days.
My office is on the fifth floor of Soda Hall and is part of a larger laboratory that consists of several open cubicles in the center, surrounded by shared offices (for graduate students and postdocs) and personal offices (for professors). Thus, I can peek into other offices to see what people are doing. And the graduate students I observe are almost always ensconced in their chairs.
I know that even Berkeley students take breaks now and then, but I still think that many of us end up sitting down for five to six hours daily. (That’s assuming graduate students work for only eight hours a day … definitely an underestimate!)
I don’t like sitting down all day. In fact, I think that’s dangerous, and lately, I’ve joined the crowd of people who alternate between sitting and standing while at work. My original plan when I arrived in Berkeley was to ask my temporary advisor to buy a computer station that has the capability to move up and down as needed. Fortunately, I haven’t had to do that, because I somehow lucked into an “office” that looks like this:
Heck, I don’t even know what those metal-like objects to the left are. Fortunately, they’re set at the perfect height for someone like me, and they’re really heavy, so they provide a firm foundation for my laptop while I stand and work. My current workflow defaults to standing; I sit down only when my feet start getting sore, then stand up again once I start feeling stiff. Seriously, it doesn’t get any easier than that. You don’t need a fancy treadmill desk, though that’s an option (one faculty member at Cornell has one in her office). All you need is a nice stack of sturdy objects to put on top of something. And especially if you only plan to use your laptop, I can’t believe anyone (e.g., a boss) would complain if you built a simple station yourself. For more tips, you can also check out this excellent Mark’s Daily Apple article about standing at work.
There are other ways of avoiding the curse of a sitting-only job. For instance, some people might benefit from long walks during work, a thought that came to me due to a New York Times article that appears to have turned some heads. Personally, I find walking overrated. Every time I go for a walk, I can’t focus on my work — my mind always switches to whatever random thought happens to be flowing around. So I prefer to just sit and stand as needed during a pure work day, and I hope that other students (and faculty!) consider doing that.
I’m sure that most long-time hearing aid users such as myself have gone through this scenario: you’re outside, wearing your hearing aids, and the weather (sunny, 75 degrees) is great. Perhaps you’re taking a walk around your neighborhood, or you and a friend are having lunch outside. But then all of a sudden, the weather takes a nasty turn and it’s pouring rain. Since you don’t have an umbrella or a rain jacket, you scramble to find shelter. While you are doing so, you also wonder if you should take off your hearing aids, as they are (sadly) not waterproof. You consider a few important questions. Is it raining hard enough? Can you reach shelter quickly? Is it safe to take off your hearing aids?
All this is due to one rather unfortunate feature of hearing aids: they are not (generally) waterproof. Even a waterproof label might be misleading because that means a hearing aid passed a specific test, not that you can throw it in your backyard pool and expect it to work when you pick it up a month later. I’m actually planning on writing a more extensive post on the issue of hearing aids and moisture, as I’ve only briefly mentioned that topic in this blog (e.g., in this article, where I talked about touch-screen hearing aids). But I can say from my own experience that I get disappointed every time I get what is advertised as “the latest water resistant hearing aid” only to see it break down midway through a game of Ultimate Frisbee. I don’t typically have problems with rain anymore, because I’m usually prepared with an umbrella — or I just stay indoors.
Anyway, I’m happy to report that hearing aid wearers in the San Francisco Bay Area need not worry about rain. I moved to Berkeley on August 13, so it’s been almost two months, and I only remember one day when it rained. That was a few weeks ago, and it was a light drizzle at that. I brought two umbrellas and a rain jacket when I moved in, and they’re just collecting dust in my room, waiting for the next rainy day to occur. As indicated by my screenshot of the current forecast, that may not come for a while. It’s not as if the weather is scorching hot either, which might induce unusual amounts of sweat (another threat to hearing aids). It’s usually around 60 to 85 degrees here.
There was a newspaper article a few weeks ago that touched on the topic of rain in the Bay Area, so from what I can tell, I should expect more rain once it’s winter, but probably not that much. (I’m also aware that California’s in a historic drought, so I do feel guilty for being happy about the lack of rain.) Needless to say, the weather here is vastly different from the weather in Williamstown, MA. I remember when it would rain for days in September, thus ruining the Ultimate Frisbee fields. So far, the weather in Berkeley has been terrific, which is probably one of many reasons why graduate students come here from all over the world.
As I said in a recent post, I’ve been using a mixture of captioning (also known as CART) and interpreting services for various Berkeley-related events. For my two classes, I decided to forgo interpreting services in favor of captioning. Part of this was out of a desire to try something new, but I think most of it was because when I was at Williams, I experienced enormous frustration with my inability to sufficiently understand and follow technical lectures with interpreting services. (I had to rely on hours of independent reading before or after the talks for the material to make sense.)
This isn’t a knock on the interpreters, or a criticism of Williams. I’ve said before and will gladly continue to say that I was very happy with the accommodations Williams was able to provide me, and how my interpreters have put up with me for four years as I consistently enrolled in the classes that they hated the most.
The problem is the technical term dilemma that continues to plague my experience in the classroom.
In the best-case scenario, using captioning services would let me focus primarily on the professor, and if I missed something, I could fall back on the captions to catch up on a few sentences. To be clear, the way CART usually works is that the captioner types on a laptop with the text small enough that I can quickly glance at the screen to see what was said 10 seconds ago. With interpreting services, one can’t go “back in time.”
The other advantage I was hoping to gain from CART pertained to preserving the spelling of technical terms. An interpreter can’t really sign the word Gaussian, but a captioner can at least type out that word correctly once the professor has said it often enough (or has written it on the board).
To top it all off, I was told during my first meeting with the Disabled Students’ Program (DSP) that CART would be able to capture content with 99 percent accuracy.
Unfortunately, theory hasn’t matched with reality and, if anything, my experience in Berkeley classes so far has been more frustrating than with my Williams classes.
I’m not trying to criticize Berkeley as a school; so far it has been excellent with regard to accommodations (none of the issues that have shown up at other schools). This article is more of a holistic frustration with the whole education system.
Let me be a little more specific about what has happened so far. This semester, I’m taking two graduate-level computer science classes, natural language processing (NLP) and statistical learning theory (SLT). The former is an Artificial Intelligence course that’s heavy on programming, and the latter is a math course with problem sets. At the time of this writing, I have sat through eleven lectures for each class.
Natural Language Processing
One interesting wrinkle is that I have remote captioning for my NLP class. This means for each lecture I bring a microphone hooked up to my laptop, and a captioner in a different area (perhaps at her own house) will connect to my computer through Skype or Google Hangout and type what’s being said. I see the captions via another program that lets me see the captioner’s computer screen on my laptop. It’s pretty cool, actually. (One student in the class thought it was a sophisticated automatic speech recognition system.)
Berkeley had to provide remote captioning because there were too many requests for CART during the class time slot. I was fine with it because, well, why not?
Unfortunately, I didn’t anticipate there being multiple factors that would result in a tiring and frustrating classroom experience.
First, my NLP class moves at a very fast pace. (Since it is a graduate-level computer science course, I expected it to move quickly, though perhaps not as fast as it has so far.) As a consequence, my captioner has had a hard time keeping up with the conversational pace. It’s common for her to type everything for about thirty seconds, take a five-second break, and then come back to captioning. I can’t blame her; it’s impossible to type nonstop for an eighty-minute lecture. But it does throw a wrench in my plan to understand everything from the transcript, because so much could be missing.
To be fair to the professor, we do have a lot to discuss, and the students here are skilled enough so that most can absorb plenty of knowledge even when it’s coming at a fast pace. So while I do feel like the lecture rate is a bit too high, I know it’s not something that can be addressed easily without causing some additional problems. I’ve already talked to the professor about possibly slowing down the lecture rate, and he was happy I brought it to his attention and would see what he could do without reducing the material we cover.
My other frustrations in the class stem from the remote connection. The microphone that Berkeley’s DSP gave me is powerful, but when other students ask questions, their voices are often too quiet for the captioner to pick up. As a result, the captioner is usually forced to write “(Inaudible)”, the standard way of marking unknown comments, so I lose the flow of conversation between the students and the professor. And knowing what the other students are saying was one of the major benefits of having interpreting services! In a classroom setting, the professor is much easier for me to hear than the other students, even if those students are physically closer to me. I haven’t been asking the professor to repeat what the students have said, which is my fault; I need to start doing that!
My other, and perhaps most significant, frustration with the remote captioning service pertains to the logistical and technical difficulties we have experienced. The first lecture was fine, but the second was not. I had an on-campus captioner substitute for the remote captioner, but the substitute didn’t get the right room assignment because the professor had to change rooms (due to over-enrollment), and I hadn’t updated it with DSP because, well, a remote captioner doesn’t need a room number.
After emailing the substitute about the new room, she was able to find it thirty minutes into lecture, and by that time I was lost since I spent more time worrying about the captioner rather than the lecture material. And even when she was there, it’s hard to catch up on the last fifty minutes when you’ve missed the first thirty.
The third lecture was much better, even if the captioner had trouble typing some of the technical terms; I sent her spellings of some of the terms to make things easier. For the fourth lecture, though, I had a substitute remote captioner who needed to use Google Hangout to connect to me (I had used Skype earlier, as the default). And we ran into a problem: even after connecting over Google Hangout, she couldn’t hear anything going on in the class.
We finally resolved the issue thirty minutes later: she installed Skype, and I switched from the provided microphone to my laptop’s internal microphone, and suddenly it worked. I have no idea why. But that class was a disaster. For the first thirty minutes, I was constantly on Google Chat with my remote captioner, fiddling with settings on my laptop to get her to hear what was going on in class (I bet the students sitting near me were wondering what I was doing). And again, when you miss the first thirty minutes of a lecture, it’s hard to catch up on the last fifty.
Fortunately, I don’t think I will have connection issues in the future. I had a meeting with the primary remote captioner and we spent an evening trying to resolve our technical difficulties. Berkeley’s DSP also provided me with a more powerful microphone.
The fifth, sixth, and seventh lectures were okay, but technical problems continued during lectures eight and nine. In both cases, the captioner ran into problems with her own computer, so I wasn’t able to get captioning shown on my laptop until 28 and 12 minutes after lecture started (respectively). And while the tenth and eleventh lectures were free of notable problems, I’ll still be carefully monitoring any future technical difficulties (as I have been doing so far) and will send Berkeley’s DSP a report on them at the end of the semester, when I will re-evaluate whether I want captioning services at all (and if so, whether they should be remote).
So I guess the point is, while remote services sound pretty cool, be wary of technical difficulties that could happen, along with heightened difficulty of knowing what other students are saying.
Now let’s talk about my other class.
Statistical Learning Theory
As I mentioned earlier, SLT is a standard mathematics and statistics course. The professor lectures by writing on a chalkboard (we have no slides) and assigns us biweekly problem sets. I sit next to my captioner in the front of the classroom.
It might be hard to believe, given my description of NLP earlier, but captioning for SLT has been perhaps even less effective, though this time it’s largely due to the material we cover in class.
Consider how captioners do their job. When captioners type, they type based on sound cues, and their special machines combine those cues together to form common English words. Captioners do not type word by word on a QWERTY keyboard like most of us do, because that would be too slow and introduce numerous typographical errors.
By now, you might see the problem: their machines are designed to recognize and auto-complete common English words. By typing in several sound cues, a captioner can quickly print phrases or long words on the screen that are automatically spelled correctly. With a technical class, however, these phrases or words suddenly aren’t that common, so the screen doesn’t auto-fill their text because advanced statistics terminology isn’t in its dictionary. The way to get around this is to pre-assign words to sound cues in the machine. For instance, my captioner has assigned the word Gaussian to the spell-checker so that it will print it out according to the appropriate sound cues, rather than print text like “GAU SAY EN” on screen. (Note to anyone who’s taking CS 281a: you’ll be playing around with Gaussians a lot.) But it’s still a problem in my class because new and old advanced terms are thrown around every lecture.
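The dictionary mechanism can be sketched in a few lines of code. To be clear, this is a toy illustration: the stroke strings below are invented for the example, and real steno software is far more sophisticated than a plain lookup table.

```python
# Toy sketch of steno-style dictionary lookup (illustrative only).
# Stroke strings are invented; real CART machines use chorded
# phonetic strokes and much richer translation rules.

steno_dict = {
    "THE": "the",
    "TKPWOU/SAOEU/APB": "Gaussian",  # a pre-assigned technical term
}

def translate(strokes):
    """Map each stroke chunk to a word via the dictionary; fall back
    to the raw phonetic strokes (e.g. 'GAU SAY EN') when unknown."""
    return " ".join(steno_dict.get(s, s.replace("/", " ")) for s in strokes)

print(translate(["THE", "TKPWOU/SAOEU/APB"]))  # pre-assigned term prints correctly
print(translate(["GAU/SAY/EN"]))               # unknown term falls back to raw strokes
```

The fallback branch is exactly the “GAU SAY EN” situation above: when a technical term isn’t in the dictionary, the raw sound cues appear on screen instead of the word.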
And to make matters a little worse, not everyone in the class has great articulation (according to my captioner).
Putting it All Together
There’s a common factor to both of my classes that might be a reason why I’m not getting the most out of the lectures: I’m not used to CART. So maybe there’s a bit of an adjustment period as I determine the optimal combination of looking at the professor and looking at the computer screen.
But I don’t think adjustment can explain all the difficulty I’m having in my classes. At the start of the semester, I sat through one of Maneesh Agrawala’s lectures on visualization, and my captioner had no problem at all (and I understood what was going on). In fact, I think that she did obtain around 99 percent accuracy in that lecture. Maneesh has a remarkable ability to speak at a reasonable pace and he throws out pauses in judicious locations. It shows that one’s experience with captioning can vary considerably depending on the speaker and other factors.
That doesn’t change the fact that, so far, I feel disappointed that I haven’t gotten more out of class lectures. I make up for this by spending a lot of my own time reviewing. Every few days, I will spend a full workday, 9:00am to 5:00pm, just reviewing lecture material. I don’t mind doing a lot of this work on my own, but I’m worried that if I have to keep doing it, it will take time away from my research. I don’t want to be consumed with classes, but I also have minimum GPA requirements, so I can’t slack off either. The better approach would be to do a lot of reading before class, which I’ve admittedly been slacking off on because I’ve prioritized research and homework, but if I’m not getting much out of my classes, I’ve got to change my strategy.
Overall, being in my classes has been an incredibly frustrating experience for me, as I’ve had to spend several full days reading my textbooks about concepts that I think most other students got right out of lecture. This has been a major factor in what was an unusually brutal September for me, though again, to be fair to Berkeley, last September was arguably less stressful for me than September of 2013.
Nonetheless, I do feel like I am learning a lot, and I do feel like things will improve as the semester progresses. But in the meantime, I know there’s only one thing that can make this easier: doing a ridiculous amount of self-study. Do the readings, find online tutorials, do whatever it takes to learn the stuff discussed in lecture, ideally before the lecture occurs. Doing a ton of reading before lectures has proven to be a rock-solid learning strategy for me.
Last month, the New York Times published an interesting article that connected with my experience working as a computer scientist. The idea is that there’s so much data out there — in case you’ve been living under a rock, it’s the age of Big Data — but it’s becoming increasingly harder for us to make sense of it so that we can actually use the data well. Here’s a relevant passage:
But if the value comes from combining different data sets, so does the headache. Data from sensors, documents, the web and conventional databases all come in different formats. Before a software algorithm can go looking for answers, the data must be cleaned up and converted into a unified form that the algorithm can understand.
So why does this article connect to me? Every major computer science project I’ve worked on has involved a nontrivial amount of data “wrangling” (for lack of a better word), such as the one I worked on at the Bard REU. I also had a brief internship last summer where my job was to implement Latent Dirichlet Allocation, and it took me a substantial amount of time to convert a variety of documents (plain text, .doc, .docx, .pdf, and others) into a format that the algorithm could easily use.
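As a rough sketch of what that wrangling looks like once documents have been reduced to plain text (extracting text from .doc or .pdf files requires extra tools and isn’t shown here, and the stopword list is a tiny sample), the cleanup step feeding an LDA-style pipeline might be:

```python
import re
from collections import Counter

# Tiny sample stopword list; a real pipeline would use a much larger one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def preprocess(doc):
    """Lowercase, keep only alphabetic tokens, and drop stopwords and
    very short tokens, producing the word lists LDA implementations expect."""
    tokens = re.findall(r"[a-z]+", doc.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

docs = [
    "The algorithm converged in 10 iterations.",
    "Data cleaning is the bulk of the work.",
]
# Bag-of-words counts, one Counter per document.
bags = [Counter(preprocess(d)) for d in docs]
print(bags[0])
```

This is the unglamorous part: the modeling code is often shorter than the code that gets heterogeneous documents into this unified form in the first place.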
Fortunately, many researchers are trying to help us out, such as professors Jeff Heer at the University of Washington and Joe Hellerstein at the University of California, Berkeley. I met Jeff when I was visiting the school a few months ago, and he gave me an update on the amazing work he and his group have done.
Meanwhile, as I finished reading the article, I was also thinking about how our computer science classes should prepare us for the inevitable amount of data wrangling we’ll be doing in our jobs. The standard machine learning computer science project, for instance, will tell us to implement an algorithm and run it on some data. That data, though, is often formatted and “pre-packaged,” which makes it easier for students but typically doesn’t provide the experience of having to deal with a haphazard collection of data.
So I would suggest that in a data-heavy computer science class, at least one of the projects should involve some data wrangling. These might be open-ended projects, where the student is given little to no starter code and must implement an algorithm while at the same time figuring out how to deal with the data.
On a related note, I should also add that students should appreciate it when their data comes nicely formatted. Someone had to assemble the data, after all. In addition, for many computer science projects, such as the Berkeley Pacman assignments, much of the complicated, external code has already been written and tested, making our jobs much easier. So to anyone who is complaining about how hard their latest programming project is, just remember, someone probably had to work twice as hard as you did to prepare the project and its data in the first place.
I’ve only been a Berkeley student for about three weeks, but I’m already appreciating how quick and easy it has been to get accommodations for various events. To do so, one just needs to go to the Disability Access Services website, fill out a two-page online form, and submit. I’ve filed about half a dozen requests already, an indication of how many meetings I’ll need to attend during my time in Berkeley. (Though I’m probably better off than the tenured professors here in that regard.)
The services one can request fall in two categories: communication and mobility. I’m only familiar with the communications aspect, which includes sign language interpreting and real-time captioning. Since this is the first time I’ve really been able to take advantage of captioning availability, I’m trying out a mix — some events with captioning, some with interpreting.
Not only is it easy to obtain these services, it’s also quite reliable. I’ve never had a request denied or forgotten. In fact, I even got a captioner for a new graduate student meeting despite giving only 36 hours of advance notice. (I had forgotten that it was happening … won’t do that again!) I’ve met a few of the people who work at the access services group, and they’re all really friendly. They are closely related to the Disabled Students’ Program at Berkeley, which is designed to help accommodate students for class-related purposes.
I think even people who aren’t affiliated with Berkeley in some way can request accommodations for events, though they might need to pay a small fee. Berkeley students can get them for free.
My life has been busy in the past few weeks as I’ve gotten adjusted to life in Berkeley. Part of this process has been going through orientation. I sat through a new EECS graduate student orientation and a general graduate student orientation.
For the most part, what we discussed during the orientations wasn’t too surprising. Here are a few highlights from the EECS-specific one.
- There were 1,615 applicants to the computer science doctoral program. Berkeley accepted 83, for an acceptance rate of 5.1%. The yield was 43, not including five extra students coming in from last year’s cycle. Interestingly enough, this information doesn’t seem to be available anywhere else, and I’ve heard acceptance rates quoted as high as 9% and as low as 2%, so it was nice to see these values come directly from the department chair. There were even more applicants for the electrical engineering program (at least 1,800). Coming from a school that has no engineering courses, I would have thought that computer science would be more popular than electrical engineering. Altogether, we have 98 entering EECS Ph.D. students.
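For anyone curious, the chair’s figures are easy to sanity-check. Here is a quick Python sketch using only the numbers quoted above:

```python
# Sanity-check the admissions figures quoted by the department chair.
applicants = 1615
accepted = 83

acceptance_rate = accepted / applicants * 100
print(f"CS acceptance rate: {acceptance_rate:.1f}%")  # → 5.1%

# Yield: 43 new CS students, plus 5 coming in from last year's cycle.
cs_entering = 43 + 5
print(f"Entering CS students: {cs_entering}")  # → 48
```

(The remaining students out of the 98 entering EECS Ph.D. students come from the electrical engineering side.)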
- The orientation made it clear that the department is passionate about supporting the well-being of its graduate students. The chair emphasized the need to be inclusive of people from all backgrounds. We also had a psychologist and a member of the Berkeley Disabled Students Program speak to us. Finally, there were representatives from the Computer Science Graduate Student Association (CSGSA), an organization run by the students to support each other in school (there’s also an EE version). I really did come out of this orientation feeling like Berkeley cares about its EECS graduate students.
- The end of the orientation was mostly about working and getting funding. There was too much information to absorb in one day, but fortunately the handouts we got contained the relevant information.
The general graduate student orientation, held the following day, was less useful than the department-specific one, and I could tell by the size of the crowd that most of the EECS students probably didn’t go. Some highlights:
- The most important one for me was learning about residency, residency, and residency. As a public school, Berkeley charges out-of-state students non-resident tuition, including graduate students. The EECS department pays for this during the first year, but from the second year onwards, we pay an extra $8,000 unless we’ve established California residency.
- I also attended workshops relating to student health services and “surviving and thriving” in Berkeley.
- And for any graduate student who expects to be hungry often, there was free breakfast and lunch.
In addition to orientation, I’ve had a few classes and research group meetings. I’ll talk about the research later — stay tuned.
I’m 22 years old and have been wearing hearing aids for most of my life. But for some reason, I’ve never read a hearing aid instructions manual. Now that I live in California, far away from my audiologist in New York, I’m going to need to be a bit more independent about managing my hearing aids. So I read the manual for my new Oticon Sensei hearing aids. Here are some of its important messages and the comments I have about them, which probably apply to many other types of hearing aids.
- “The Sensei BTE [Behind the Ear] 13 is a powerful hearing instrument. If you have been fitted with BTE 13, you should never allow others to wear your hearing instrument as incorrect usage could cause permanent damage to their hearing.” My comment: I already knew this, and I think it’s a point worth emphasizing again. Your hearing aids are for you and not for anyone else!
- “The hearing instrument hasn’t been tested for compliance with international standards concerning explosive atmospheres, so it is recommended not to use the hearing aids in areas where there is a danger of explosions.” My comment: again, this is straightforward, because generally anything with batteries can have a risk of explosion, but I think the better strategy is to not go near those places at all. (And if you’re a construction worker, I’d ask for a different work location.)
- “The otherwise non-allergenic materials used in hearing instruments may in rare cases cause a skin irritation or any other unusual condition.” My comment: I had the misfortune of experiencing skin irritation a few months ago. Some new earmolds I had were designed differently from what I was used to, causing skin in my inner ear to harden. I had to dig into an old reserve of earmolds and fit those to my hearing aids to comfortably wear them.
- “[When turning off hearing aids] Open the battery door fully to allow air to circulate whenever you are not using your hearing instrument, especially at night or for longer periods of time.” My comment: I sort of knew this, but now it’s concrete. From now on, I’ll keep the battery doors open when I put them in the dryer each night. Unfortunately, the manual didn’t specify whether the battery should stay in the compartment or not.
- “Hearing instruments are fitted to the uniqueness of each ear […] it is important to distinguish between the left hearing instrument and the right.” My comment: For someone like me, who relies more on one ear for hearing than the other, keeping track of what goes left and what goes right is crucial. I’ve gotten confused several times about this when I replaced earmolds for various hearing aids.
- “Although your hearing instrument has achieved an IP57 classification, it is referred to as being water resistant, not waterproof. […] Do not wear your hearing instrument while showering, swimming, snorkeling or diving.” My comment: as usual, one needs to be careful about the distinction between being *water resistant* and being *waterproof*. From my own experience, the Oticon Sensei does an excellent job resisting sweat, and I can only remember a handful of times when they stopped working normally during or after a gym session. (As I mentioned before, the same isn’t true for some types of hearing aids.)
I emphasize the importance of reading these manuals because if one is going to be using a hearing aid often, it’s important to know as much about them as possible, and I think this aspect gets glossed over in today’s busy lives. Similarly, don’t forget to learn more about your cars, houses, phones, laptops, and other expensive items — you might learn something useful.
Tomorrow, I will finish up a software engineering internship. I usually work at home, and lately I’ve been getting out of bed, wolfing down breakfast (berries, broccoli, and eggs), and conducting my morning coding session, all without putting on my hearing aids. Sometimes, I don’t touch them until the afternoon.
This raises the following question:
Is it better for someone like me to work without hearing aids?
Naturally, this would only apply during individual work sessions. If I’m working on a team project with a partner right by my side and we need constant communication, I’ll keep my hearing aids on. The one exception would be if that other person wants to speak using ASL, but that’s generally not a common occurrence.
I recall performing this “no hearing aid” tactic during my time working in the Williams College computer science lab. During peak hours, usually Sunday or Thursday evenings, the lab would get so packed that I couldn’t focus with all the screaming going on. (It sounds like screaming when 30 regular-volume conversations are happening in one small area.) If I wasn’t holding a TA session, then I would go to a corner of the back room of the lab, turn off my hearing aids, and work in peace.
The advantage of this is that I often reap the benefits of a short-term focus spike; it’s definitely nice to be able to mute all conversations under those circumstances. But should eschewing hearing aids be my default behavior when I work on something myself? Even if the only external noise is a fan?
Okay, I have to confess: part of the reason why I haven’t put on my hearing aids until so late during the past few days has been experimental interest. I want to see how effectively I work with and without hearing aids while having little to mild background noise. (It’s not a perfect experiment, because my surroundings are too quiet.) My impression is that turning off hearing aids can be useful under extremely noisy circumstances, but for most cases, I would not recommend it because there are too many downsides:
- I’m more vulnerable to danger. If the roof of my house were about to collapse due to hail, but I couldn’t feel it (I know this example is crazy…) then you can imagine what would happen.
- It creates some awkwardness if I need to turn on my hearing aids when someone wants to talk to me. My hearing aids — the Oticon Sensei — take roughly six seconds to start up from the moment I press the switch. So … I have to figure out how to stall for six seconds. And what if that person just wanted to say hi?
- Related to that previous point, when I turn off my hearing aids, it’s not at all obvious to anyone else in the same room that I actually do have them off. My hearing aids’ on and off states are hard to distinguish unless a person has a clear side view of me. Perhaps if I physically took them out of my ears, but that creates a whole host of other complications. In this situation, if someone needs my attention, he or she is going to have to work a bit harder to reach me, and everyone else in the room will probably be watching us.
- One thing I’ve also noticed in the past few days is that, when I turn off hearing aids, it blocks external noise but doesn’t silence my brain. It seems like if I don’t hear any natural sounds, sometimes my brain tries to “fill in” for me by repeating voices and sounds, which can be annoying. I think if I have my hearing aids on, some of the natural sounds can break that up (but not always).
Thus, while turning off hearing aids is useful when faced with prolonged noise exposure, it is not generally a long-term solution. With situations such as shared offices, which are a typical work environment for graduate students, I think the benefits decrease and the drawbacks (as stated earlier) become more striking. (At Berkeley, I’m pretty sure graduate students periodically interrupt each other to talk about research.) As a possible alternative, I could utilize noise-canceling headphones that cover my hearing aids (without causing any “ringing”) which would take care of some of the problems I mentioned. Interestingly enough, the last time I tried wearing noise-canceling headphones over my hearing aids, they didn’t cancel out any noise! So it seems to me that I just need to get used to working with background noise.
In about a week, I’ll be heading over to Berkeley to begin my graduate career.1 Consequently, I thought I’d take some time to reflect on what’s been going on this summer, particularly with regards to preparation for graduate school. Perhaps this will be useful to future generations of Berkeley EECS Ph.D. students.
Once students confirm that they are going to Berkeley, then they’ll be put on a mailing list (or more accurately, a “Google Group”) that includes all incoming EECS students, a few existing EECS students, and a few staff members. Important emails will be flying around by early May, so technically one’s preparation for Berkeley should start even before the summer begins.
Of the emails that are being sent, by far the most important ones to read are those pertaining to the quality of ice cream in the Berkeley area. The second most important emails to read are the ones about housing.
For people like me who don’t have any connections in the Bay Area, contacting other incoming students about housing opportunities is extremely important, unless you want to hedge your bets on living by yourself or with non-EECS students. Fortunately — at least during the summer of 2014 — there seemed to be enough people in my situation that finding a group to live with wasn’t too difficult. I did have to go through several failed attempts at forming a group, as well as one rejected housing application (that really hurt), but by the start of July, I had secured a place to live. One key tip is to keep in touch with the incoming students who are already around the Bay Area; they’ll be the ones conducting most of the house visits to make sure that the house you found on craigslist isn’t terrible. That reminds me: if you have no experience with craigslist, I suggest learning how to use it. And another tip about housing: I think it’s easier to get housing if you can find a nice place to rent and then advertise it to the group, rather than if you form a group first and then find a house.
Of course, there are other emails to read as well. Most of the non-housing emails fall into the category of incoming students asking current students questions. But worry about those after housing.
The Berkeley Graduate Division also sends out monthly emails. Those emails are short but have links to a bunch of detailed PDFs and websites. There’s too much information to absorb at once, but read as much as you can. You’ll also want to read a little more about the department’s Ph.D. requirements. Here’s a refresher.
At the start of July, you’ll also be assigned a temporary advisor. Send him or her a few emails (but not too many … see the Email Event Horizon for why). You can ask for advice on what courses to enroll in, but the class schedule is online and most students have a good idea of what to take anyway. You can sign up for classes starting in August, but be careful not to take more than two a semester.
Finally, if you were to ask me advice on what to do during the summer before graduate school, I would recommend either a research or software engineering internship to keep your skills sharp, but it’s OK to use this time to travel or pursue other interests. While you can pursue them at Berkeley, the 167-hour work week makes things a little time-intensive.
Just in case you were wondering, I do plan on maintaining this blog during my time in Berkeley. I haven’t run out of things to say. ↩
If you’re interested in taking a free online course, consider Coursera. It takes seconds to make an account and filter through the 700 or so classes currently in the database to find what interests you. Classes are generally affiliated with a university, and professors are often the ones lecturing in the videos online. In addition to video lectures, there are homework assignments and exams, which are submitted electronically, as well as user discussion forums where the students can discuss class concepts.
Coursera embodies the concept of the massive open online course (MOOC) which aims to have unlimited participation to allow (theoretically) anyone in the world to obtain an education for free. Founded in 2012 by Daphne Koller and Andrew Ng of Stanford University, Coursera now has over 7 million users and sports an impressive list of university partners. (Check out this paper for an interesting discussion about MOOCs.)
Coursera is similar to the well-known MIT OpenCourseWare, but it has several advantages. The biggest one is that courses on Coursera will eventually have all class material available to the students who sign up, whereas on MIT OpenCourseWare, you face the repeated problem of missing video lectures, missing exams and solutions, and other gaps, especially with the upper-level courses. Coursera’s website design is also vastly superior. On the other hand, Coursera classes require users to sign up within a certain date range, so if you go on Coursera right now, chances are high that some of the classes you want to take aren’t offered in the near future (and you might have to add them to your “watch list” for the next session).
In the meantime, I’ve been checking out Andrew Ng’s machine learning class, which was what really started Coursera. It’s designed to be a ten-week course, with the following syllabus:
- Week 1: Introduction, Linear Algebra Review, Linear Regression with One Variable
- Week 2: Linear Regression with Multiple Variables
- Week 3: Logistic Regression and Regularization
- Week 4: Neural Networks (Representation)
- Week 5: Neural Networks (Learning)
- Week 6: Applying Machine Learning Algorithms
- Week 7: Support Vector Machines
- Week 8: Clustering, Dimensionality Reduction
- Week 9: Anomaly Detection, Recommender Systems
- Week 10: Large-Scale Machine Learning
A third of the grade is based on multiple-choice quizzes, and the rest is determined by programming assignments, to be done in MATLAB or Octave, the latter of which is an excellent free version of the former. Octave is one of the simplest programming languages out there, so it shouldn’t be too difficult for one to get used to it.
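As a taste of what the early programming assignments involve, here is a minimal sketch of batch gradient descent for one-variable linear regression, the week 1–2 material. The real assignments use MATLAB/Octave; this Python version, with made-up data and step size, is just for illustration:

```python
# Minimal batch gradient descent for one-variable linear regression,
# fitting the hypothesis h(x) = theta0 + theta1 * x to data points.
def gradient_descent(xs, ys, alpha=0.01, iters=5000):
    theta0, theta1 = 0.0, 0.0
    m = len(xs)
    for _ in range(iters):
        # Prediction errors under the current parameters.
        errs = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Gradient of the mean-squared-error cost, one term per parameter.
        grad0 = sum(errs) / m
        grad1 = sum(e * x for e, x in zip(errs, xs)) / m
        # Simultaneous update of both parameters.
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Data generated from y = 2x + 1, so the fit should recover roughly
# theta0 = 1 and theta1 = 2.
theta0, theta1 = gradient_descent([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

The actual assignments wrap the same idea in vectorized Octave code and add cost-function plotting, but the core update rule is no more complicated than this.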
After going through the first few weeks of the course, here are some quick impressions:
- Advantages: The class doesn’t have many prerequisites (no calculus, no probability, etc.) and is accessible to a broad audience. Professor Ng’s video lectures are excellent. In fact, it’s nice to see that someone who can write complicated papers can also clearly explain the basics. There seems to be a lot of collaboration among the students. The class covers most of the concepts I’d expect in a machine learning class, but for some reason doesn’t seem to cover the naive Bayes and decision tree learning algorithms.
- Disadvantages: The simplicity of the class is also its major drawback — to someone like me who already knows machine learning, the class is too easy and I watch video lectures (for review purposes) at 1.5x or 1.75x the speed (a nice feature, by the way). Professor Ng often has to say “the discussion of this concept is beyond the scope of this course….” Consequently, a student at Stanford is better off taking Professor Ng’s “actual” machine learning course.
Again, if you’re interested in learning more about any subject, I encourage you to check out Coursera. There’s definitely a heavy focus on computer science — not surprising, given that the founders are computer science professors — but there are courses in subjects as diverse as health, law, engineering, and music.
Cholesterol, Saturated Fat, Grains, Meat, and Other Diet Controversies: Why Are There So Many People Challenging Conventional Wisdom?
As I mentioned in my recent post introducing Mark’s Daily Apple, I have become more interested in understanding diet, nutrition, and health. Sadly, this doesn’t come without challenges, and in this post, I’d like to discuss some of the current controversies that make it difficult for me to decide what to eat in order to maintain a healthy life.
First, let me provide some background. During elementary and middle school, I learned about the United States Department of Agriculture’s infamous food pyramid. Of course, like most Americans, I didn’t adhere to it exactly, but I at least kept it in mind, so it did impact the way I ate for most of my life.
After reading Fast Food Nation, I also avoided most forms of fast food starting in high school. On the surface, this diet approach seems to be excellent — just follow the food pyramid and avoid McDonald’s. Unfortunately, up until now, I had been unaware of the vast amount of misinformation, politics, and shoddy science of food that plague the country and are likely correlated with the shocking prevalence of obesity, heart disease, diabetes, and other chronic illnesses.
First, let’s go over a few hopefully non-controversial facts:
- In 2012, 34.9% of U.S. adults and 16.9% of U.S. youth were obese; in the early 1960s, obesity among U.S. adults was estimated at 13.4%. If we expand the pool of people to include those who are overweight (i.e., BMI of at least 25) then the percentage of overweight adults from 1962 to 2010 rose from less than 50% to more than 70%.
- Heart disease is now the leading cause of death for Americans, with the latest estimates pegging it as the cause of one out of every four deaths.
- In 2012, about 29.1 million Americans had diabetes, and almost two million new cases are diagnosed annually.
- The global pharmaceutical industry is expected to rake in $1.2 trillion by 2016, of which the U.S. has the largest share.
- The estimated medical costs of obesity in the U.S. are currently almost $150 billion.
I could go on and on, but I think the point is clear: the United States has a health crisis, and I’m pretty confident that we don’t need too many drugs to help us out. The human species, after all, cannot evolve so quickly over two generations to produce a much heavier population.
Fortunately, the rise in obesity and chronic illnesses has not gone unnoticed. The American Heart Association has established the following dietary recommendations to reduce the risk of heart disease:
- Choose lean meats and poultry without skin, and prepare them without added saturated or trans fat
- Eat fish at least twice a week
- Select low-fat or no-fat dairy products
- Cut back on food with partially hydrogenated vegetable oils (i.e., trans fat)
- Reduce consumption of saturated fat to lower cholesterol
- Avoid sugary beverages
- Prepare food without using too much salt to lower blood pressure
- Drink alcohol in moderation
- Pay attention to portion sizes
This is what I will refer to as conventional wisdom, and nowadays, the public perception is that diets high in fat result in weight gain, which subsequently leads to a whole host of other problems. Meanwhile, a healthy diet is low-fat, low-cholesterol, low-sodium, and plant-based (including grains). An example is the Ornish diet, which claims to reduce the incidence of heart disease by requiring that no more than 10 percent of calories come from fat. It was created by Dr. Dean Ornish, Clinical Professor of Medicine at the University of California, San Francisco, whose books and nutrition program have won him widespread acclaim. After all, not everyone can say that they serve as a health advisor to Bill Clinton.
Unfortunately, the past few decades have suggested a paradox. Indeed, spurred by the government war on fat, Americans have been consuming less fat in recent decades, as shown by that 1998 paper (by the way, please let me know if you find a more recent reference). In 1965, the estimated daily fat intake for American men and women was 139 grams and 83 grams, respectively; in 1995, those figures were 101 grams and 65 grams. Furthermore, there is a consistent decrease in the percent of daily calories from fat, from 45% in 1965 to 34% in 1995. The caveat here is that the total caloric consumption of Americans has also increased, which might mitigate the “positive” effect of lowering fat, but if so, shouldn’t the rise in chronic illnesses be blamed on whatever else we’re eating to get those calories?
In addition, the percentage of American adult smokers dropped from 42.4% in 1965 to 19.0% in 2011. So something must be counteracting this beneficial effect because all signs point to an increase in chronic illnesses. Also, while Americans are living longer, that doesn’t mean our final years are that great. Our extended lifespan is largely due to better medical treatment that wasn’t available in earlier eras, and not due to an improved diet.
Good Calories, Bad Calories
In Good Calories, Bad Calories, Gary Taubes argues that United States government and health organizations have given us dietary advice that contradict the science. Taubes starts by explaining some of the earliest observational studies of the health of native populations before and after dietary changes (i.e., “Westernization” of diet). He then moves his way towards the mid-1900s, which coincided with the prominence of Ancel Keys and a new era of dietary advice that encouraged consumption of carbohydrates (including “white” bread/rice/cereal/pasta) and demonized saturated fat as the cause of heart disease. According to Taubes, the science showed, and continues to do so today, that saturated fat has little correlation with heart disease while the link between refined carbohydrates, sugars, and chronic illnesses is much stronger.
Good Calories, Bad Calories is ultimately a brutal attack on conventional wisdom. (By the way, I would like to point out that there’s a fair amount of support for following a vegan diet to obtain optimal health, so these people are also technically challenging conventional wisdom, but for the purposes of this post, as you have probably determined, I am mainly going to discuss the low-carb paradigm.)
In addition to what I mentioned earlier about the AHA and the Dean Ornish diet, conventional wisdom also proclaims that people can obtain a healthy body weight by “eating less and exercising more.” But this is problematic, Taubes says, because exercising more tends to cause an increase in appetite. For instance, athletes are known to require more calories than the average sedentary person.
When considering the totality of conventional wisdom, here is one of my attempts to sum up Taubes’ advice in one sentence:
“To be healthy, be sure to eat the right kind of calories from unprocessed meats and vegetables and avoid refined carbohydrates and sugars (including whole grains and processed foods); good exercise, while beneficial, cannot make up for a terrible diet.”
What is my opinion on Good Calories, Bad Calories? I have mixed feelings. Taubes is right on diet in many respects, but I’m not sure if people should be switching to a meat-heavy diet, which is what Taubes appears to advocate (though he never explicitly says so). Before reading the book, I was already aware that saturated fat and cholesterol probably aren’t as bad as we (by that, I mean “conventional wisdom thinkers”) think they are. I had read Chris Kresser, Mark Sisson, and Zoe Harcombe, among others, give their take on cholesterol and similar topics, but I still tried to read the book with an open mind and a healthy level of skepticism, as none of the three people I just mentioned are true medical researchers. Good Calories, Bad Calories is definitely on the dense side in terms of writing style, but a lot of his analysis makes sense, and I have to say that the history of nutrition science and advice is interesting. My conclusion is that anyone with a serious interest in diet and health should take a look at this book. It’s dense and has sixty-six (!) pages of references, which is necessary to ensure that, as Taubes would later say, we “never take what I say on trust alone.” In fact, Taubes is not a nutritionist but a science journalist; he earned a Bachelor’s degree in physics from Harvard University.
Please don’t interpret the preceding paragraph as full-on support of Taubes. I knew after reading this book that I needed to see if there was any legitimate criticism that would make me second-guess his advice. And by far, by far, the best review I’ve found of Good Calories, Bad Calories is a series of blog posts by a nutrition guy named Seth. And … wow, if Taubes relentlessly criticized the government in his book, Seth takes that kind of criticism, multiplies it by ten, and levels it back at Taubes! It’s definitely a good read just to make sure that you don’t get too trapped in the whole “low-carb” ideology.
Just to be clear, Taubes doesn’t disagree with all of conventional wisdom. No one is out there advocating that Coke and Pepsi belong in a healthy diet, and that we should eschew non-starchy vegetables. The controversy is on the role that saturated fat, cholesterol, grains, and meat play in a healthy diet. Taubes never gives specific dietary advice, but his reader-friendly version (Why We Get Fat) does give a diet plan, which allows mainly unprocessed meats and non-starchy vegetables. He also has a cholesterol blog post, where he boasts about his cholesterol numbers while describing his diet as “eggs, sausage, cheese, cheeseburgers (no bun), steaks … high in fat, low in carbohydrates.”
Now, I know that the obvious reaction is to dismiss Taubes as an outsider to nutrition who doesn’t know what he’s talking about. But let’s start looking at some other sources that clearly have credibility.
Here’s one: the American Diabetes Association (ADA). They have to have some dietary advice, right? Good Calories, Bad Calories goes to great lengths to explain how refined carbohydrates and sugars can induce diabetes, so it will be interesting to see how Taubes’ ideas match up.
I went to their page on Grains and Starchy Vegetables, where I saw this:
There is no end in sight to the debate as to whether grains help you lose weight, or if they promote weight gain. Even more importantly, do they help or hinder blood glucose management?
One thing is for sure. If you are going to eat grain foods, pick the ones that are the most nutritious. Choose whole grains. Whole grains are rich in vitamins, minerals, phytochemicals and fiber.
Reading labels is essential for this food group to make sure you are making the best choices.
Every time you choose to eat a starchy food, make it count! Leave the processed white flour-based products, especially the ones with added sugar, on the shelves or use them only for special occasion treats.
With that, the ADA just made me worried. I have to be honest: if I had diabetes, how could I feel comfortable eating grains — even whole grains — if the ADA can’t take a definitive stance on this, and suggests the possibility of “promote weight gain” and “hinder blood glucose management” as side effects? They do gently suggest eating whole grains on other parts of their website … but why not here, on the page that actually discusses it?
Here’s a second source that should be credible: the Harvard School of Public Health Nutrition Guidelines. But unfortunately, I remain worried. Here’s what they have to say in their 2011 article:
Nearly two decades ago, the U.S. Department of Agriculture (USDA) created a powerful icon: the Food Guide Pyramid. This simple illustration conveyed in a flash what the USDA said were the elements of a healthy diet. The Pyramid was taught in schools, appeared in countless media articles and brochures, and was plastered on cereal boxes and food labels.
Tragically, the information embodied in this pyramid didn’t point the way to healthy eating. Why not? Its blueprint was based on shaky scientific evidence, and it barely changed over the years to reflect major advances in our understanding of the connection between diet and health.
Wait … “shaky scientific evidence?” That’s not what I want to hear! Could Taubes be onto something after all? They continue by criticizing how current guidelines don’t penalize white/refined grains enough (a point for Taubes), don’t penalize red meat enough (a point against Taubes), and recommend too much dairy (I don’t think Taubes talks much about dairy, so I’ll consider this a wash). The Harvard pyramid and its associated article are an interesting read, and they make me feel much better that I base my diet on vegetables and have never been a huge milk drinker despite how doctors and others told me to drink more milk when I was young (thank goodness I didn’t!).
So, from the Harvard guidelines, whole grains play a foundational role in a healthy diet, but refined grains are almost as bad as you can get in terms of food! Is this the right answer? Are whole grains really that much better? Well, I obviously don’t know the answer. I’ll have to be honest, I’m leaning towards supporting the Harvard pyramid, but again, there are many experts who would disagree; Dean Ornish, for instance, would oppose the allowance of egg yolks, which he classifies as among the worst foods to eat. And he advised Bill Clinton on his diet! But here’s something else that’s interesting … Bill Clinton also has a second dietician, Mark Hyman, who has encouraged Clinton to eat more fat! Thus, Clinton has two major medical minds giving him polarizing advice on the amount of fat (and eggs) to eat. As the article suggests, if Clinton can’t come to a consensus, what hope is there for the rest of us? The best-case scenario, of course, is if both diets are great. And they obviously are, when the baseline is a diet of McDonald’s and Pizza Hut. But how do they compare against each other? That’s the major question.
Going Against Conventional Wisdom
Given that some aspects of diet lack a consensus, I thought I’d survey what people are saying. My goal is to synthesize some of the well-known books that advocate at least one of the prevalent themes from Good Calories, Bad Calories, among them: “fats are fine; refined carbohydrates and sugars are far worse,” “low-carb living,” and “the government’s impact on nutritional science.” Most or all of these books substantially challenge conventional wisdom.
This in no way means I support these arguments — the point of listing all these books is that they raise doubt about conventional wisdom. While that does sound disconcerting, I ultimately think it’s best if we know all this information, because then we can do our own independent research and make informed decisions. And again, a healthy level of skepticism (but not too much) is needed as part of science, and this is what makes nutrition science so great — it’s generally accessible to outsiders, at least far more than computer science.
I tried to ignore books that were diet- or recipe-related (there are a lot of “Paleo Recipe” books out there) in favor of ones that take at least a somewhat scientific approach to nutrition by citing studies and forming logical arguments. The one exception might be the Atkins diet book, but that was the one that really started the whole low-carb movement, and I think Atkins (who was a cardiologist) had some science to back him up beyond his trials on himself and his co-workers.
The books are ordered by their original publication date, though many have been updated at least once. A number of them, such as Taubes’ books, are New York Times or national bestsellers. You’ll also notice that the majority of them are from the past decade. This is not surprising; Taubes explicitly mentions at the end of Good Calories, Bad Calories that the Internet was the main reason he was able to find all the sources he did.
So, here’s a list of books I found:
- Pure, White, and Deadly: How Sugar is Killing Us, and What We Can Do to Stop It, by John Yudkin. (1972, updated in 1986 and 2012)
- Dr. Atkins’ Diet Revolution, by Robert Atkins. (1972, updated 2009)
- Protein Power, by Michael and Mary Dan Eades. (1997)
- The Great Cholesterol Con, by Anthony Colpo. (2006)
- Good Calories, Bad Calories: Fats, Carbs, and the Controversial Science of Diet and Health, by Gary Taubes. (2007)
- The Great Cholesterol Con: The Truth About What Really Causes Heart Disease and How to Avoid It, by Malcolm Kendrick. (2008)
- The Primal Blueprint: Reprogram Your Genes for Effortless Weight Loss, Vibrant Health, and Boundless Energy, by Mark Sisson. (2009, updated 2013)
- The Paleo Solution: The Original Human Diet, by Robb Wolf. (2010)
- Wheat Belly: Lose the Wheat, Lose the Weight, and Find Your Path Back to Health, by William Davis. (2011, updated 2014)
- Why We Get Fat: And What to Do About It, by Gary Taubes. (2011)
- The Art and Science of Low Carbohydrate Living, by Stephen Phinney and Jeff Volek. (2011)
- The Art and Science of Low Carbohydrate Performance, by Stephen Phinney and Jeff Volek. (2012)
- The Great Cholesterol Myth: Why Lowering Your Cholesterol Won’t Prevent Heart Disease — and the Statin-Free Plan That Will, by Jonny Bowden and Stephen Sinatra. (2012)
- Fat Chance: Beating the Odds Against Sugar, Processed Food, Obesity, and Disease, by Robert Lustig. (2012)
- Grain Brain: The Surprising Truth about Wheat, Carbs, and Sugar — Your Brain’s Silent Killers, by David Perlmutter. (2013)
- Death by Food Pyramid: How Shoddy Science, Sketchy Politics, and Shady Special Interests Have Ruined Our Health, by Denise Minger. (2013)
- Eat the Yolks, by Liz Wolfe. (2014)
- The Big Fat Surprise: Why Butter, Meat, and Cheese Belong in a Healthy Diet, by Nina Teicholz. (2014)
- Keto Clarity: Your Definitive Guide to the Benefits of a Low-Carb, High-Fat Diet, by Eric Westman and Jimmy Moore. (coming soon!)
Wow. That’s a lot of books, and honestly, it didn’t take me long to find them. And as you can see from the authors, more and more people with medical degrees support the general idea of following a low-carb diet.
One major problem with these books, and indeed with the entire low-carb argument, is the “cherry-picking” involved: authors look at research studies and selectively choose the ones that fit their hypothesis while ignoring those that don’t. Then again, who’s to say the same isn’t happening among people who advocate eating lots of starch? Even books that take a science-based approach to recommending a high-starch or low-meat diet, such as The China Study, have their own critics. I haven’t read The China Study, but it’s on my agenda.
By the way, I should mention that the author of The China Study, T. Colin Campbell, is Professor Emeritus of Nutritional Biochemistry at Cornell University. He advocates a low-fat, vegan diet, which is not what the Harvard food pyramid implies. In fact, even Dean Ornish doesn’t advocate a vegan diet. Look at his spectrum of food choices again, and you’ll see egg whites and fat-free milk in his “Group 1: Most Healthful” foods. (And, uh … beer is also in the “Most Healthful” category. That’s interesting … I don’t know how that ended up there.)
If there is this much out there that seems to contradict what the American Heart Association and conventional wisdom dictate, then doesn’t this at least raise some doubt? (Don’t forget to also consider books that advocate a vegan diet.)
I guess my point is that if we are getting advice about nutrition from the government, it should be established beyond a reasonable doubt. Otherwise, I would rather see phrases like “we are still considering all evidence and are unsure about this….”
Dr. Yudkin and Sugar
Unfortunately, one of the biggest takeaways I got from Good Calories, Bad Calories and my own brief research is that, after fat became demonized, people and industry looked for alternatives, and they found one in sugar. In fact, Taubes says that prominent nutritionists and professors at elite universities were recommending sugar as a safe alternative to fat even as late as the 1980s. But Dr. John Yudkin disagreed, and published Pure, White, and Deadly: How Sugar Is Killing Us, and What We Can Do to Stop It. So perhaps sugar and some forms of fat (trans, saturated?) are both bad for us, but sugar is the stronger risk?
According to a 2014 article in The Telegraph, Ancel Keys and the sugar industry ruthlessly attacked Dr. Yudkin, undermining his credibility. But as the past few decades have shown, Dr. Yudkin may have been right all along, at least with respect to the dangers of using sugar as a substitute for fat. I’m not sure which of sugar or saturated fat is the greater risk factor for chronic illness — they probably both are risk factors — but if you told me to pick the greater one right now, I would say sugar. The World Health Organization — a credible source, I would like to add — has recommended lowering sugar intake, but is expecting a battle with the sugar industry, as highlighted by a 2014 Nature.com article. (I highlight these dates to show how recently this whole anti-sugar movement began.) In addition, Dr. Yudkin’s work was recently revived thanks to the efforts of Robert H. Lustig, a professor at the University of California, San Francisco medical school. You can see Dr. Lustig in his YouTube video, Sugar: The Bitter Truth. Dr. Yudkin’s book was re-published in 2012 due to growing demand, which gives an idea of how it has stood the test of time.
Summers are nice because they offer me a break from an intense academic environment. As a result, I’ve had the chance to explore other fields that interest me, and one of them concerns the human diet. Simply put, I’m trying to figure out what I should eat in order to maintain a healthy life.
The Original Food Pyramid
There’s a lot of information available that we can use for diet advice. For instance, the United States Department of Agriculture has its famous (or infamous, as I’ll get to shortly) 1992 food pyramid:
Let’s suppose we use this as a guide to optimal health, which seems reasonable given that it comes from a United States government agency. (We shouldn’t, actually, because that pyramid has since been scrapped in favor of new dietary guidelines, but it’s worth discussing for the historical perspective on food.)
Unfortunately, even without consulting outside sources, I can already see several problems:
- It makes no distinction between whole or minimally processed foods and heavily processed foods. (I’ll throw in whole grains in the “minimally processed foods” category.) The former group includes fruits, vegetables, and animal products obtained from their natural state. The latter group would include pizza, chemically-laden meats, and so on.
- It suggests consuming fats, oils, and sweets sparingly, but the dairy and protein groups already include substantial amounts of fat. And my understanding is that fat has long been essential for human health. Our early ancestors ate lots of plants, but they would also eat the complete carcass of animals, including fat-dense organs that we shun today.
- It suggests that the serving counts should not be exceeded, which might impose unnecessary restrictions. Consider my situation: I love eating huge salads, and I also have a habit of downing an entire bag of baby carrots as an afternoon snack. This means that I easily rack up 7-10 servings of vegetables daily (depending on how you define a serving), but according to this pyramid, I shouldn’t be eating so many vegetables.
I know that no pyramid can convey detailed information in such a small amount of space, but a few simple modifications could go a long way.
This brings me to the next part of this post.
Mark’s Daily Apple
My quest to learn more about health, diet, nutrition, and food led me to Mark’s Daily Apple. It’s an extensive blog written by Mark Sisson, a well-known advocate of eating the Paleo diet (though he calls it “Primal”) and of preventing chronic diseases of civilization (e.g., diabetes and heart disease) through lifestyle choices. I didn’t think much of it at first, but the more I thought about the food I ate, the more I kept coming back to his blog. It also didn’t hurt that he’s another Williams alum, which may have piqued my curiosity further.
Mark Sisson advocates his own food pyramid, which emphasizes meats (including fish, eggs, and fowl), vegetables, fats, fruits, and some carbohydrates. Notice the distinct lack of bread, rice, cereal, and pasta! It’s a long story about why he excludes them, but Sisson explains this in his blog and has some decent (but in my opinion, not overwhelming) evidence to back up his claims. For the most part, I favor his food pyramid over the 1992 USDA food pyramid, but I think Sisson’s pyramid should have kept vegetables as the “base” group to reiterate how they should compose the bulk of the diet in terms of volume.
If you want more information about his philosophy towards food and life, I’ll refer you to his Start Here page. When reading Mark’s Daily Apple, realize that Mark Sisson’s focus is not just on nutrition, but indeed, on a lifestyle. His advice encompasses sleep, play, exercise, and many other factors that affect our health. There’s so much out there that Mark Sisson posts new entries daily and still has no shortage of topics to talk about.
In a future blog post, I’ll delve more deeply into diet controversies. (Don’t worry — this digression doesn’t mean that I’m turning into a nutritionist…)