An interview with Mukund Thattai

Friday, October 17th, 2014

Can you describe some highlights from your current research for me?

We study the evolution of eukaryotic cells. There are two kinds of cells on the planet: prokaryotes - the morphologically simple bacteria and archaea - and the relatively more complex eukaryotes, for example plant and animal cells, amoebae, paramecia, and so on. Eukaryotes have enclosed nuclei and compartmentalized organelles, which prokaryotes lack.  Usually when you see such a big jump in complexity you expect to find intermediate forms, for example in the fossil record or in living organisms.  The mystery is that there are no intermediate eukaryotes.

One way to examine this leap is to study the huge diversity of eukaryotic cells alive today. This shows us that in the genomes of all eukaryotes, prior to 1.8 billion years ago, there were multiple gene family expansions driven by gene duplication.  Bill Martin at Dusseldorf has a nice argument that the genome multiplied in size in eukaryotes, not adaptively, but because energy constraints were removed with the incorporation of energy-producing mitochondria. One consequence of these gene family expansions was to create variants of proteins involved in decorating membranes, which are particularly important for eukaryotes.

So we have two parts to the general story.  One is at the informational-genetic level. You have mitochondria that relieved energy constraints in eukaryotes, leading to gene family expansions. Simultaneously we know that eukaryotes became morphologically complex, with new compartments created.  The goal of our group is to make a connection between these two levels of the larger picture.  The general idea is that somehow gene duplications drive the emergence of new compartments. They're certainly necessary for making new compartments, but are they sufficient?

A few years ago a student of mine, Rohini Ramadas, came up with a biophysical model that incorporated what cell biologists know about how a eukaryotic cell works, how membranes are formed, how vesicle transport occurs, and so on. What she found was that in this model, duplicating genes at the level of DNA actually created new compartments at the cellular level. That makes it more plausible that an increase in gene copy number could have driven an increase in organelle number, compartmentalization and so on.
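
To give a flavour of that logic, here is a toy sketch in Python - it is not Rohini's published model, and the identity-protein names, rates and fusion rule below are invented for illustration. Compartments are treated as bags of membrane identity proteins, vesicles bud off carrying one identity and fuse with the compartment most enriched in it, and one can then see what happens when an identity gene is duplicated and allowed to diverge:

```python
import random
from collections import Counter

# Toy sketch only - not the published biophysical model. Compartments are
# bags of membrane "identity" proteins; a vesicle buds off carrying one
# identity and fuses with the compartment most enriched in that identity.

def simulate(identity_genes, n_compartments=20, steps=5000, seed=3):
    rng = random.Random(seed)
    comps = [Counter({g: rng.randint(1, 5) for g in identity_genes})
             for _ in range(n_compartments)]
    for _ in range(steps):
        nonempty = [c for c in comps if sum(c.values()) > 0]
        src = rng.choice(nonempty)
        # bud a vesicle enriched in one identity protein
        ident = rng.choices(list(src), weights=list(src.values()))[0]
        src[ident] -= 1
        # the vesicle fuses with the compartment most enriched in that identity
        dest = max(comps, key=lambda c: c[ident])
        dest[ident] += 1
    # classify surviving compartments by their dominant identity protein
    return {max(c, key=c.get) for c in comps if sum(c.values()) > 0}

print(simulate(["identity_A"]))                    # one gene -> one compartment type
print(simulate(["identity_A", "identity_A_dup"]))  # duplicated, diverged gene ->
                                                   # typically two coexisting types
```

In this caricature, a single identity gene funnels everything into one compartment type, while two diverged copies typically sustain two distinct types side by side.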

So we decided then to look at what gene duplications actually did happen.  Recent work by my student Ramya Purkanti looked at a protein called dynamin, which forms rings around membrane necks and pinches them. One variant of dynamin pinches off endocytic vesicles, while another is necessary for dividing mitochondria.  We reconstructed the 1.8 billion year old ancestral protein by piecing together stretches of dynamin that are conserved across the genomes of plants and animals.  From the literature we found that this ancient protein remains intact as a living fossil in a few eukaryotes, where it both divides mitochondria and pinches off vesicles. That means that, ancestrally, it was a protein that did both these things, unlike in our bodies where it specializes. Now that would've been a nice story on its own, but we found that these same eukaryotes have the FtsZ protein, left over from prokaryotic days, which is also involved in division. In more ways than one, they've preserved this 1.8 billion year old division apparatus to the present day.  It's pure biology, but we're probing processes on a time scale that you usually find in astrophysics.
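
The reconstruction itself relied on phylogenetic methods; as a much cruder stand-in for the idea, here is a sketch that takes aligned ortholog sequences and keeps, at each position, the residue shared by most species. The sequences below are made-up placeholders, not real dynamin data:

```python
from collections import Counter

# Crude stand-in for ancestral sequence reconstruction (the real analysis
# uses phylogenetics, not a flat consensus). Placeholder sequences only.
aligned_orthologs = {
    "plant":  "MGNRGMEDLIPLVNRLQDAFSA",
    "animal": "MGNRGMEELIPLVNKLQDVFSA",
    "fungus": "MGNRGMEDLIPLVNRLQDAFSS",
}

def consensus(seqs):
    columns = zip(*seqs.values())  # walk the alignment column by column
    return "".join(Counter(col).most_common(1)[0][0] for col in columns)

print(consensus(aligned_orthologs))  # the majority residue at every position
```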

There are other versions of these projects.  Somya Mani is working on a model of vesicle fusion and creation, trying to understand, again, how you couple the informational level with the morphological, cellular level. Anjali Jaiman is doing a similar kind of project on a group of molecules called glycans - sugar molecules that form branched polymers, universally present as identifying markers on the surfaces of eukaryotic cells.  To encode a branched polymer you can't just store a template, in the way that amino acid chains are encoded in DNA.  Instead we have developed a model for making branched structures using algorithmic self-assembly, which means that rather than encoding the entity itself you encode a process that builds it, and then let that process run.  So far our model seems to work, and it tells us very interesting things about how a cell decorates its surface.
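
A minimal sketch of the "encode the process, not the template" idea - a toy, not the actual glycan model; the rule table below is invented, even though the sugar names are real - might look like this, with each rule playing the role of an enzyme that recognises the sugar at a growing tip and decides what to attach next:

```python
# Toy illustration of algorithmic self-assembly of a branched polymer.
# The rules are the "encoding"; running them rebuilds the same structure.
rules = {
    "GlcNAc": ["Man"],           # a GlcNAc tip gets extended with a mannose
    "Man":    ["GlcNAc", "Gal"], # a mannose tip branches into two new tips
    "Gal":    [],                # galactose terminates a branch
}

def grow(sugar, depth=0, max_depth=4):
    """Recursively apply the rules, returning a nested (branched) structure."""
    if depth == max_depth:
        return (sugar, [])
    children = [grow(s, depth + 1, max_depth) for s in rules.get(sugar, [])]
    return (sugar, children)

def show(node, indent=0):
    sugar, children = node
    print("  " * indent + sugar)
    for child in children:
        show(child, indent + 1)

# The same small rule set deterministically rebuilds the same branched
# polymer every time it runs - no stored template required.
show(grow("GlcNAc"))
```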


Over the course of your career, you have shifted from Physics into Biology.  How would you describe your own personal intellectual trajectory?

During my bachelor's degree in Physics at Cornell I was interested in dynamical systems. When I applied for a PhD, I thought I'd work on something like general relativity or quantum gravity. Once I got into the PhD program at MIT, however, I realized I was a little more interested in condensed matter physics, because condensed matter theory is technically very challenging but also covers a lot of very real-world systems - unlike, say, string theory, which is technically challenging but where you might find yourself working on, for instance, the evaporation rate of a fourteen-dimensional black hole. I started working on condensed matter theory, and then I accidentally discovered an interest in biology when I came to Bangalore for winter break and attended a lecture at NCBS.

I went back to MIT, took a course on biomolecular computations, and started getting into biology. Shortly after that I was lucky enough that someone joined the physics department who was using ideas from physics to study randomness in cells.  It turns out that a lot of the tools I had studied in statistical physics were quite useful for modelling stochastic biological processes.  We made models of cellular stochasticity and found that the experimental side was not hard - you just needed a wet lab capable of doing bacterial culture, and a little molecular biology. As things got more interesting I went completely into it, to the point where now I'd say I'm a complete biologist.
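
A minimal sketch of that kind of stochastic model - not the original calculation, and with arbitrary rate values - is a Gillespie simulation of mRNA being synthesised at a constant rate and degraded in proportion to its copy number:

```python
import random

# Toy Gillespie simulation of a birth-death process for mRNA copy number.
# Rates are arbitrary illustrative values, not fitted to any experiment.

def gillespie_mrna(k_synth=10.0, gamma=1.0, t_end=200.0, seed=1):
    rng = random.Random(seed)
    t, m, samples = 0.0, 0, []
    while t < t_end:
        rates = [k_synth, gamma * m]     # propensities: synthesis, degradation
        total = sum(rates)
        t += rng.expovariate(total)      # waiting time to the next event
        if rng.random() < rates[0] / total:
            m += 1                       # synthesis event
        else:
            m -= 1                       # degradation event
        samples.append(m)
    return samples

counts = gillespie_mrna()[1000:]         # discard the initial transient
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean mRNA ~ {mean:.1f}, Fano factor ~ {var / mean:.2f}")  # Poisson-like, near 1
```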

For the first several years at NCBS, I continued along the same lines as my PhD work. We were looking at gene networks, feedback loops, oscillators, those kinds of things. That field grew a little bit crowded, however, and the science became very incremental.  So I've carved out this little niche for myself in the last four years.

 

Can you describe some of the challenges of doing interdisciplinary work?  What advice would you give to students starting out who are interested in the interface between physics and biology?

People shouldn't start off planning to become interdisciplinary. The best thing is to find a subject which you like and really get deeply into it, after school and in college. For me, and for many people I know, the things that I know best I learned in college. Not after that. That subject you know deeply will define your best skills, the things that you can keep up to date with on your own. If you never get that deep, and you can't keep yourself up to date, you're sunk.

But once you have that training and find yourself coming across an interesting problem at the interface with another discipline, fair enough. Just keep in mind that once you get into something interdisciplinary, it can be more than twice as hard as either subject on its own. There will be a language barrier at first, and new ways of thinking which will take a lot of time to master.  For the first few years you might even be publishing papers and so on, but it takes time to learn how the other subject actually works on the inside.

But there is a big reward for all this. Somebody standing at the interface who has a deep skill set in one subject, and has committed to and understood the language and the phenomena of some other subject, will see things that people have not seen. In many cases these are low hanging fruit. Although it takes a long time, once you get to such an interface there's a lot of cool stuff. Don't go out and seek it, but if circumstances take you there, then it's great fun.

 

Can you tell me a bit about the Monsoon School which happened in July?

The Monsoon School is our way of introducing a whole generation of physics students, who might not have seen biology outside of school, to how fun and interesting biology is these days.  I think it's worked very well. We've been getting a huge number of very good applications, of which we can take forty per year.  I think this is going to make a huge impact on the size of the pool of quantitatively trained people who are aware of what biology is.

 

What challenges do you face as a mentor, guiding students who might not have been trained in all the fields within which you work?

My impression is that mentoring students is a life-long learning process. How you mentor a student depends completely on that particular student's personality, likes, dislikes, and so on.

Nevertheless, I can make some overarching statements.

When I first joined NCBS, I had one student who did experiments, and on the side I continued to do a little bit of theory, programming and calculations myself. A student in that mode will be able to finish their PhD, but the problem arises when they go on to become an independent researcher, for which they must know at least one subject deeply. What happens is that they end up doing good work while they're in a group where I or other students can provide the mathematical background, but when they leave, that kind of work cannot be sustained without some extraordinary effort on their part.

So I've decided more recently to only take on students who are able to engage with me on the subjects I know best. Such a student could have been trained in biology, just as long as they can also do the mathematics.  Three out of the four students I have right now have some sort of biology or biotechnology background, but they know programming and they know some mathematics.  And I encourage everybody, if they happen to come from mathematics or physics, to maintain their skills in that primary area.

What is true is that after a certain age - which means shortly after college or the first two years of graduate school - if you're not already a good experimentalist you're never going to become one, and if you're not already completely comfortable with math you're never going to become comfortable. There is an age cutoff. People should be aware of that.

 

Speaking of age and learning, why do you think it is that most mathematicians make their most significant contributions at a younger age? With some notable exceptions, of course.

The real question is, how come other scientists can make contributions when they're old?  Of course it's not true that mathematicians stop contributing to math after a certain age. They continue to do good work, although it is true that many huge breakthroughs occur when somebody is younger.  I think it's just the fact that the field of mathematics is extremely difficult, with problems that people have been working on for thousands of years. Our brains slow down after the age of twenty-five - I think that definitely has to do with it.

In physics, and in fact in biology especially, the longer you're steeped in it the better you get at doing the following thing: identifying a problem that has a good chance of being tractable, and interesting, and important. Knowing what is important, knowing whether some small observation might be important to some bigger thing, that only gets better with experience.  A really good scientist, of whatever kind, that's the one thing they're good at. Choosing the problem.

 

Can you tell me a little bit about the Simons Centre?

The people who are in the centre now, five of us - Madan, Sandeep, Shachi, myself and Madhu - have all been around at NCBS and interacting with one another and other faculty at NCBS since long before the Simons Centre was here. So you could ask what has changed in the last year, and I think being in one space has prompted more conversations and collaborations. I learn what the others are doing, not just well-developed projects but also the half baked ones. Our students run into each other, and I've been interacting with students from other groups.  It's been a lot of fun, and has already led to two or three collaborations which would not have existed before.

Now, being an associate with the Simons Foundation is also a very exciting thing, because it gives us a certain visibility and leverage. We can attract all kinds of people to come and visit from all over the world and spend a month, two months here. That's already been happening over the last year, and has triggered a lot of interesting collaborations.  Keep in mind that visitors can collaborate with anyone else on campus. In some sense the centre was conceived as a place where people from physics could access a real experimental biology institute, but through this sort of buffer that we were hoping to provide.

The other thing we're hoping to do at the Simons Centre is come up with our own unique way of doing biology, a new school of thought. The name of the Centre is interesting that way - the Centre for the Study of Living Machines.  The idea behind it is that we want to think explicitly, not just about the molecules that make up living systems, but about the functions these systems are meant to carry out. Function is a loaded word in biology - in our case, we are asking whether the function of proteins, cells and organisms is reflected in how they are put together, bringing in ideas from both engineering and evolutionary biology.

 

What do you see as the larger role of mathematical models and theories within biology?

There are two kinds of insight you can draw from quantitative data from a biological experiment.  One is predictive insight. By making a model, you can go out and say something about an experiment that hasn't been done yet. In my opinion, the right kinds of models to work with are small and abstract, and contain just enough ingredients to get to the essence of a problem. These are not meant to be huge, detailed simulations of a system. Boil a system down to its essence, and ask whether those essential pieces provide any insight.

Models can have predictive power, or they can also have explanatory power.  These kinds of models take a lot of disparate existing data and explain them with a simple framework. That's also worthwhile.  It's sort of shorthand. It says - "This is what we know so far, and I'm going to compress it to these few features. Therefore I can now look at other aspects of the system."

The danger in relying on explanatory models is that what you discover might be restricted to your particular system.  Biology needs extrapolation, because we cannot afford to catalogue everything completely.  What theory lets you do is extrapolate from a restricted set of data to a broader canvas. Theory acts as an amplifier. In physics this is very obvious: we do experiments on a table-top scale and we extrapolate to stars and we extrapolate to atoms.

 

What are some misconceptions that you often encounter about your work?  What questions do you dread?

People seem to believe that models and quantitation are necessary when you understand very little about a system, when in fact it's the opposite. Once you have accumulated a large enough amount of data, that's when you can really gain insight by doing theory or models. In the absence of large amounts of constraining data, there's no point getting into the game of abstraction.

As for questions that I dread, a standard one is "What are you trying to cure?"  For some reason, all biologists are assumed to be trying to cure something. If somebody asks me that, sometimes I just tell them something absurd, like I'm trying to cure evolution.  There's a huge misconception that biology is about human health, when it's not.

Other than that, there are no questions I dread.  It's good to have questions.

 

Given problems that might extend over several years, and experiments that often don't work out, what keeps you personally interested from day to day?

As a student I had the good fortune never to work on just a single problem at a time, and that remains true today.  Every day I work on something different, and there are many things going on at the same time.  All these projects will be in different stages of completion.  Many will be half baked, and perhaps two to four will be quite well advanced.  One might be at the stage of manuscript writing.  I've found that three or four is about the number of well-developed problems you can keep in your head at one time.

 

What is the best advice you've ever gotten?

Best piece of advice was not to quit graduate school.

 

Anjali Vaidya is a freelance writer.

 

 
