The Inner Workings of the Inner Ear
A Conversation with Robert Fettiplace, James Hudspeth, and Christine Petit
The 2018 Kavli Prize laureates discuss how they deciphered the inner workings of the inner ear, and the potential to develop new ways to treat deafness and hearing loss.
by Amber Dance
For most of us, hearing comes as naturally as breathing. But we can only detect sound waves because of finely tuned detectors called ‘hair cells,’ so named for their thin, hair-like projections, which stand in wait within the spiral-shaped cochlea in each of our ears. Hair cells convert sound, whether it’s a symphony or a baby’s cry, into electrical signals our brains can process.
Illuminating how this happens has been the life’s work of Robert Fettiplace, James Hudspeth, and Christine Petit, who were honored with the 2018 Kavli Prize in Neuroscience “for their pioneering work on the molecular and neural mechanisms of hearing.”
Biophysicists Hudspeth and Fettiplace studied the mechanics of sound-sensing, prodding the bundles of hairs atop each hair cell and recording the electrical signals the cells produced in response. Hudspeth discovered how those hairs open doors that let in charged particles and create the electrical signals our brain interprets as hearing. Fettiplace determined that the hair cells are arranged like the keys on a piano, each tuned to a frequency that increases progressively along the cochlea. Both have studied how the hairs vibrate even in silence, which allows them to amplify incoming sounds.
Petit brought the auditory system into the molecular era by dissecting the genes involved in hearing. Working with families affected by deafness, she has discovered more than 20 different genes that, when missing or mutated, cause hearing impairment. This enabled her to characterize the key cellular and molecular mechanisms underlying hearing and hearing loss.
These researchers have not only figured out much about how hearing works; they’re also beginning to investigate new ways to combat hearing loss, which is estimated to affect 466 million people worldwide, including 34 million children.
The Kavli Foundation spoke to the trio about their discoveries, their paths to the science of sound-sensing, and what’s next in hearing research and medicine.
The following is an edited transcript of the roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.
Let’s talk about your interest in hearing. Dr. Hudspeth, I’ve read that you’ve compared science to art, in that it’s something you pursue for the beauty of it, to uncover hidden knowledge. Where do each of you find the beauty in the science of hearing?
JAMES HUDSPETH: What I find remarkable, again and again, is the fact that natural selection has found ways of dealing with the various challenges that the world poses, particularly that hearing poses, and these solutions are often surprising but very elegant. Hearing operates right at the limits imposed by physics, so we can hear sound right down to the sloshing of molecules in our ears.
ROBERT FETTIPLACE: It’s really interesting, if you look even just within classes of vertebrate animals, how evolution experimented with different mechanisms to adapt hearing in different creatures. For example, turtles, lizards and alligators have quite dissimilar hair cell arrangements, even though they are all reptiles, and mammals differ from all of the other vertebrates. That’s really amazing.
CHRISTINE PETIT: Hearing is deeply connected to language and music, and thus to social interaction and the appreciation of music. I have a keen sensitivity to music and its importance, perhaps because I grew up in a family in which we all learned to play an instrument; I played the piano and flute. What I find beautiful is how each hair cell along the cochlea differs from the others in size, stiffness, and the sound frequency to which it responds.
HUDSPETH: And it’s the only sense we have that amplifies itself. I find all of this very beautiful, very surprising, and really an impetus for further research.
Dr. Fettiplace and Dr. Hudspeth, both of you started your research on hearing not in people or lab mice, or even mammals, but in bullfrogs and turtles. What led you to choose these animals?
HUDSPETH: The fundamental behavior of the ear seems to be conserved throughout all four-legged animals (amphibians and reptiles as well as birds and mammals), so the principles that one can find in any of those systems are, by and large, applicable to all.
But in some animals, mammals in particular, the hair cells are couched in the hardest bone in the body, the petrous bone, which has the firmness of ivory. To go drilling in there and collect those cells is really tough. We used the bullfrog in part because, in younger animals, the ear is largely made of cartilage instead of bone, so we were able to shave our way in.
In addition, the animals are not endangered, and they are particularly hardy. Their cells live for a long time in the lab.
FETTIPLACE: As with bullfrogs, you can keep turtle hair cell preparations alive for a long time.
I actually started with caimans, which are a type of South American crocodilian, when I was working in England. For a time, I imported these animals from California, but the supply was intermittent. So we worked on turtles instead, and eventually I gave up on the caiman.
Dr. Hudspeth, you managed to push those bullfrog hair bundles a teeny amount—you’ve likened it to the tip of the Eiffel Tower shifting by a thumb’s breadth—to determine that movement opens doors, or ion channels, and causes electrical signals in the hair cells. What was challenging about these experiments, and how did you finally succeed?
HUDSPETH: Using the frog with its cartilaginous ear, and finding the right salt solution to keep the cells happy in an experimental chamber, were important factors. Then the problem was to find a way of moving the hairs by as little as one-thousandth of a degree. We, Robert, and others spent a great deal of time developing piezoelectric stimulators and monitors, which use crystals to convert electrical signals into mechanical motion, so that we could consistently produce such small stimuli and measure the resulting tiny displacements.
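To give a sense of scale, here is a rough back-of-the-envelope version of the Eiffel Tower analogy. The numbers (a tower height of about 300 m, a thumb’s breadth of about 2 cm, and a hair bundle height of about 5 µm) are illustrative assumptions, not figures from the interview:

\[
\theta \approx \frac{2\ \text{cm}}{300\ \text{m}} \approx 7 \times 10^{-5}\ \text{rad} \quad (\text{a few thousandths of a degree}),
\]
\[
d_{\text{bundle}} \approx \theta \times 5\ \mu\text{m} \approx 0.3\ \text{nm},
\]

that is, a deflection at the tip of the hair bundle comparable to the diameter of a small molecule.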
Dr. Petit, what did your work with genetics then add to our understanding of hearing?
PETIT: Thanks to scientists like Jim and Robert, we had these fantastic advances in understanding the way the cochlea processes sound, but we had no information on the biological molecules involved, or on how they assemble to carry out specific functions. The cochlea contains very few sensory cells, too few for many of the classic biochemical or even genetic approaches used to identify these molecules. The only way to get access to the molecular mechanisms was to search for genes involved in hearing, and then to study their function. This is, in fact, the power of genetics: its ability to identify a gene, and thus a protein playing a key functional role, even in a context in which the gene is expressed in very few cells. Almost everything we know about hearing at the molecular scale is based on the genetic approach.
When we started, none of the genes responsible for deafness at birth had even been mapped to human chromosomes. We worked with physicians and geneticists in regions around the Mediterranean Sea, where intermarriage between close kin and isolation of communities, often founded by a small number of individuals, are common. This meant that we could be confident that a single causal mutation was responsible for deafness in a given family. We could then localize this gene in the human genome.
We have identified several genes this way, providing key pieces of the puzzle. We then developed mutant mice in which these genes were inactivated, to investigate the functions of the proteins they encode. We worked with many other scientists to analyze these mouse models, which revealed the molecular machineries that encode acoustic information in the cochlea’s sensory cells, converting it into electrical signals which are then conveyed to the brain.
How have each of your works complemented or benefited from the research of the other two?
PETIT: Without the knowledge gathered by Jim and Robert, the results we obtained when we began studying the molecular mechanisms for hearing would have constituted no more than a list of key proteins. Thanks to the biophysical and physiological knowledge they have gathered, we could go more deeply into the function of the proteins encoded by genes that cause deafness.
FETTIPLACE: Thanks in part to Christine’s research, we started using mutant mice to study the process of transduction, that is, how the movements of the hairs are turned into electrical signals. If we take away one of the genes that has been linked to human deafness, this allows us to study the role of that gene in detail.
HUDSPETH: I know Robert’s papers have influenced me, and I imagine the reverse is true, too. For example, the main focus of my lab for the past 20 years has been on the process by which hair cells amplify sounds. What really began that work was a paper by Robert and his colleague in 1985. They showed that hair bundles could oscillate, spontaneously, which turned out to be the way they produce amplification.
Let’s talk a bit about your backgrounds. Dr. Petit, you trained in medicine initially. How has that informed your studies?
PETIT: I never really practiced medicine, but I always have an eye to what can be transferred to hearing-impaired individuals. For me, it’s both a dream and a duty. I really hope that our research will change the lives of hearing-impaired individuals.
Dr. Hudspeth, what drew you to the science of hearing?
HUDSPETH: It was, first of all, an accident. When I was a graduate student, in 1968, the faculty in the neuroscience program wanted to avoid giving lectures. So they assigned three lectures on the least palatable subjects, one of which was hearing, to the first three graduate students. In the course of planning the lectures, I became very curious about hearing.
Dr. Hudspeth and Dr. Fettiplace, when you were getting started, I believe little was known about the sense of hearing. How did that influence your research interests?
FETTIPLACE: Back in the 1960s, there was a big focus on sensory receptors as a way to understand the brain, because you knew exactly what the input was—some sight or sound or touch or smell—and you could look for the response in the brain. When I was a postdoctoral fellow, people were working on the photoreceptors involved in vision. I thought, well, I don’t want to work on photoreceptors. It had all been done, you see. I wanted to find something different.
I spent a lot of time talking to Andrew Crawford, my collaborator at the University of Cambridge in England, and we settled on two options: we could work on the hair cells, or we could work on the olfactory cells involved in smell. We couldn’t figure out how to do experiments on the olfactory receptors, so, of course…
HUDSPETH: As Robert said, there was at that time an enormous wealth of people working on vision. It was very exciting, but everybody was doing it. In the case of the auditory system, there was very little work being done at the cellular level: the ear was a black box. I decided that would be a worthwhile thing to pursue.
I was surprised to read that, as you’ve mentioned, hair bundles vibrate a bit on their own to amplify sounds, and that this vibration can even produce a sound. For each of you, what has been the biggest surprise in your research on hearing?
HUDSPETH: Well, one surprise is related to what you just described. We’ve found that sound is amplified, as much as 1,000 times, by the hairs quivering. This active amplification gives us a broad range of hearing: There’s a million-fold range from the faintest sounds we can hear to the loudest ones we can tolerate. If you lose this active vibration, say due to overstimulation of the ear by loud sounds, you typically don’t become totally deaf, but you become hard of hearing. You lose about 99 percent, or more, of your sensitivity to sound.
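To translate those figures into the decibel units audiologists use, here is a rough sketch; it assumes the million-fold range and the 99 percent loss both refer to sound-pressure amplitude, the usual convention:

\[
20\log_{10}\!\left(10^{6}\right) = 120\ \text{dB},
\]

roughly the span from the threshold of hearing (0 dB SPL) to the loudest sounds most people can tolerate. Losing 99 percent of one’s sensitivity, a factor of 100 in amplitude, corresponds to

\[
20\log_{10}(100) = 40\ \text{dB}
\]

of threshold elevation, in the range audiologists classify as mild to moderate hearing loss rather than total deafness.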
As you mentioned, hair cells also produce sound. What that means is that approximately 70 percent of people with normal hearing can, in fact, emit sounds from one or both ears in a suitably quiet environment. There’s no reason to believe the emissions are useful; they reflect the fact that we have a system that can turn its sensitivity up and down. If you are in a super-quiet environment, the system turns itself up, up, up, until you can hear a pin drop. If it’s tuned up too much, it goes unstable, and sound begins to come out. If you put a sensitive microphone in the ear, you can detect one or several high-pitched tones. These sounds don’t have much impact on the people who have them; like other sounds in the environment, they are screened out by the brain.
FETTIPLACE: One surprise, for me, was the electrical tuning of individual hair cells, such that some respond to low tones, some to high, and so on for everything in between. Ion channels in the hair cells have different properties at different positions, and that regulates the sound frequency the cells “hear.”
HUDSPETH: This was a surprise to me as well, and explains why hearing aids are so often unsatisfactory. If the ear is damaged, you can still hear, but the tuning Robert described is deficient. Even if the sound is made louder, say by a hearing aid, one’s ability to discriminate frequencies remains impaired.
FETTIPLACE: I was also surprised by the work of Christine, and others, who have found mutations in single genes can produce deafness. Before that work, genetic deafness was regarded as a single, general condition, but she showed that there are many kinds, each caused by a mutation in a different individual gene. I think that’s extraordinary.
PETIT: There are always surprises when you genetically dissect a system. Initially, you have no way to anticipate which cells are affected by a gene defect, or how. For example, we have studied genes involved in Usher syndrome, which causes deafness and blindness. We expected to encounter difficulties putting together the pieces of the puzzle. The big surprise was that, making full use of patients’ symptoms, we rapidly identified the first big protein complex involved in hair cell development and function.
Let’s move into what you expect for the future. What do you see as the major unanswered questions in the field, and how are you going after them?
FETTIPLACE: My immediate interest is in the structure of the mechanoelectrical transducer channel complex, which is a group of proteins that detect the movement of hairs. It’s a complex which requires a lot of different proteins to function, and I think an essential question is what those proteins do.
I’m also interested to learn more about a special protein called prestin that’s needed for hair cells to amplify sound. It can change its shape, very rapidly, to produce vibration. How can it do that? I think that’s a really interesting question.
HUDSPETH: I think one of the key objectives is to learn how to regenerate hair cells. About 10 percent of the population in this country, and in other industrialized countries, has hearing problems. That’s just getting worse due to an aging population and more hearing-damaging noise from industry and the military. Thirty million people in the United States have significant problems; two million of them are totally deaf. Almost all of those people suffer from a lack of hair cells. Stem-cell transplants may be one way of getting a therapy, if one could find ways of ‘seducing’ stem cells to turn into hair cells.
The other possibility I’m interested in involves the “supporting cells” found between hair cells in the ear. These can also turn into hair cells, and my group has had some success coaxing the supporting cells to replicate and form new hair cells. We’re screening small molecules to find ones that would stimulate supporting cells to replicate. From 80,000 compounds we tested, we found two candidates that work in the inner ear, so now we’re trying to see if either of those is worth pursuing as a potential drug.
PETIT: We have long been actively involved in the search for general therapies for hearing impairment, taking advantage of the key protein network we have identified. We’re particularly interested in finding a way to protect cells against hearing loss induced by noise, which is by far the most frequent cause of hearing impairment. In addition, we are developing gene therapy approaches, which have been made possible by our work elucidating the various pathogenic processes underlying the hereditary forms of deafness, telling us which cells to target, when and how. We have already obtained proof of concept in several mouse models of human deafness that such approaches can not only prevent hearing impairment, but also restore hearing in profoundly deaf mutant mice. With this result in hand, we hope to be able to perform clinical trials for some inherited forms of deafness in the near future, bringing the dream of restoring hearing closer to reality.
How many people would be eligible to have their deafness treated with gene therapy?
PETIT: In theory, we could help every individual with deafness resulting from a single gene defect. This is the case for one newborn in 700, who has severe to profound deafness. But it is also the case for forms of progressive deafness with a later onset. The prevalence of these forms is not as certain. Theoretically, gene therapy could also be used in the regeneration of neurons after sound exposure or acoustic trauma. But for the time being, we don’t know enough about age-related deafness, which results from a combination of genetic and environmental factors, to assess the potential contribution of gene therapy to its treatment.
More than 300,000 people around the world have cochlear implants to help them hear. Why is it important to look beyond implants, to these stem-cell and gene-therapy treatments we’ve just discussed?
PETIT: When children are fitted with cochlear implants, they can learn to speak, even on a telephone. However, in noisy environments, their hearing is often compromised. The hope is that gene therapy will improve hearing restoration.
Dr. Hudspeth, I read that you’re now studying hair cells in zebrafish, like the ones sold in pet stores. How does studying that model help us understand the hearing of people?
HUDSPETH: The great advantage of the zebrafish is that the hair cells in the fish’s so-called ‘lateral line’ are on the surface of the animal, where they detect water flow. In contrast to the inner ear of mammals, these hair cells are readily accessible. You can see the hair cells in a living fish larva under the microscope. You can treat the fish in various ways, such as with drugs, or remove the hair cells with lasers, then study what happens. You can label the cells with fluorescent tags and watch the choreography with which the hair cells divide, find their place, produce their hair bundles and connect with nerves.
The other reason that zebrafish are a valuable model for studying hearing is that there are dozens of mutants available, many in the same genes that Christine studies in humans and mice. One can readily make mutations in other genes using a gene-editing technique called CRISPR/Cas9.
And finally, how does the study of hearing connect to bigger questions or themes in neuroscience?
FETTIPLACE: One example I’m curious about relates to how the cochlear map is set up during development. The hair cells are aligned like the keys of a piano, with each one detecting a different frequency from the next. We’ve figured out that this is based on the number of membrane proteins, called ion channels, in each hair cell. How does this get set up in the embryo? There must be some chemical that is present at a high concentration at one end of the embryonic cochlea and at a low concentration at the other, a gradient that sets up this range of membrane proteins and hair cell frequencies.
HUDSPETH: This reflects a common feature in neuroscience, in which there are sensory maps in the areas of the brain that process sensory information, such as light, touch or sound. What this means is that for every point along the cochlea, there will be a corresponding point in the next part of the brain that gets information about the same frequency, and this pattern recurs all the way to the auditory cortex, where the brain interprets the signals. The same thing happens in vision, where the retina and other elements of the sensory pathway contain a two-dimensional map of the world. As Robert said, it’s easy to conceive of some sort of chemical diffusing from one end of the cochlea to the other and setting up the gradient of frequency response.