Mitochondrial dysfunction and autism

Beautiful pic of mitochondria

Mitochondria are the powerhouses of the cell, as biology teachers will tell you. These organelles are also likely former bacteria that once lived as independent cells, capable of dividing on their own to make new mitochondria. Indeed, they still divide by a kind of binary fission as our cells divide, ensuring that a double dose is available for partitioning into the two new cells that result from cell division.

To achieve these feats, mitochondria have their own DNA, their own proteins, and their own protein-making machinery. That means they also have the potential to undergo genetic mutations that affect the sequence of the proteins their genes encode. Because most mitochondrial proteins are mission-critical and must function exactly right, the persistence of such mutations is relatively rare. But they do happen, and they cause disease. One question that has arisen in the study of the causes of autism is whether such changes might underlie at least a portion of the cases of this developmental difference.

The high-profile Hannah Poling case

Certainly lending a high profile to this question was the case of Hannah Poling, whose mitochondrial disorder appeared to be linked to her autism symptoms and may have interacted with a bolus of vaccine doses she received, followed by a high fever. Fevers can tax our cellular powerhouses, and if mitochondrial function is already compromised, the high temperatures and extra burden may result in chronic negative outcomes.

Poling’s case brought to the forefront the question of whether people with autism might have mitochondrial dysfunction at greater rates. A recent study in the Journal of the American Medical Association (which steadfastly keeps its articles behind a paywall) sought to address that question by measuring markers of mitochondrial dysfunction in children with autism and comparing these endpoints with outcomes in children without autism.

Study specifics: “Full-syndrome autism”

The autistic group in the study had what the researchers called “full-syndrome autism,” which I take to mean intense symptoms of autism. They used the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS) to confirm this diagnosis and to ensure as uniform a population among their autistic group as possible. Ultimately, the study included 10 children in this group, recruited consecutively in the clinic based on their fulfillment of the selection criteria. The study was essentially a case-control design, meaning that the control group consisted of 10 non-autistic children, selected to match the demographic characteristics of the autistic group as closely as possible.

The authors report that while only one child among the 10 who were autistic fulfilled the definitive criteria for a mitochondrial respiratory chain disorder, the children with autism were more likely to have indicators of mitochondrial dysfunction.

A problem with pyruvate dehydrogenase (break out your Krebs notes, folks)

Specifically, six out of ten showed lowered activity for one parameter, eight out of ten showed higher levels than controls for a second metabolic endpoint, and two of ten showed higher levels than controls for a third. Overall, the results indicated low activity of a mitochondria-specific enzyme, pyruvate dehydrogenase, which is involved in one of the first steps of carbohydrate metabolism that takes place in the mitochondria. Reduced activity of an enzyme anywhere in this process will change the levels of that enzyme’s own products and of products further down the pathway, throwing off mitochondrial function. Further, half of the autistic group exhibited elevated mitochondrial DNA replication, an indicator of cellular stress, more frequently than controls, and also had more deletions in their mitochondrial DNA than controls. Statistical analysis suggested that all of these differences were significant.

What does it mean for autism?

Do these findings mean that all or most people with autism have mitochondrial dysfunction? No. The study results do not support that conclusion. Further, the authors themselves list six limitations of the study. These include the possibility that some findings of statistical significance could be in error because of the small sample size or confounders within the sample. They also note that some endpoints changed in both directions within the autistic group: some autistic children had much higher values than controls, while others had lower values, muddying the meaning of the statistics. The authors note that a study like this one does not allow anyone to draw conclusions about a cause-and-effect association between autism and mitochondria, and they urge caution about generalizing the findings to a larger population.

If there is an association, further questions follow. Does mitochondrial dysfunction underlie autism, producing autistic-like symptoms, as some argued in the Hannah Poling case? Or do autistic manifestations such as anxiety or high stress, or some other autism-related factor, influence the mitochondria?

Chickens, eggs, MRI, mitochondria, autism

As interesting as both of these recent autism-related studies are, we still have the “which came first” question to deal with. Did autism cause the brain or mitochondrial differences, or did the brain or mitochondrial differences trigger the autism? Right now, these chicken-and-egg questions may not matter as much as the findings do for helping to identify autism more specifically and for addressing some of its negative aspects. Regardless of your stance on neurodiversity, vaccines, acceptance, cure, or the in-betweens where most of us fall, it would be difficult to argue that a mitochondrial dysfunction shouldn’t be identified and ameliorated, or that an awareness of brain structure differences won’t lead to useful information about what drives autism behaviors.

——————————————–

Note: More lay-accessible versions of this post and the previous post are available at BlogHer.


Roll over eggs…it’s time for (unrolled) tobacco leaves

Tobacco leaf infected with Tobacco Mosaic Virus. Courtesy of Clemson University - USDA Cooperative Extension Slide Series

Timeline, 2008: If you’ve ever been asked about egg allergy before receiving a flu vaccine, you’ve had a little encounter with the facts of vaccine making. The flu viruses used to produce the vaccine are painstakingly grown in chicken eggs, because eggs make perfect little incubators for the bugs.

So…many…eggs

There are problems—in addition to the allergy issue—that arise with this approach. First of all, growing viruses for a million vaccine doses usually means using a million fertilized, 11-day-old eggs. For the entire population of the United States, 300 million eggs would be required. Second, the process requires months of preparation, meaning a slow turnaround time for vaccines against a fast-moving, fast-changing disease. Last, if there is anything wrong with the eggs themselves, such as contamination, the whole process is a waste and crucial vaccines are lost.

The day may come when we can forget about eggs and turn to leaves. Plants can contract viral disease just like animals do. In fact, an oft-used virus in some research fields is the tobacco mosaic virus, which, as its name implies, infects tobacco plants. It gives a patchy look to the leaves of infected plants, and researchers use this feature to determine whether the virus has taken hold.

Bitter little avatars of evil used for good?

Tobacco plants themselves, bitter little avatars of evil for their role in the health-related effects of smoking, serve a useful purpose in genetic research and have now enhanced their approval ratings for their potential in vaccine production. Plants have caught the eye of vaccine researchers for quite a while because they’re cheaper and easier to work with than animal incubators. Using plants for quick-turnaround vaccine production has been a goal, but a few problems have hindered progress.

To use a plant to make a protein to make a vaccine, researchers must first get the gene for the protein into the plant. Previous techniques involved tedious and time-consuming processes for inserting the gene into the plant genome. Then, clock ticking, there was the wait for the plant to grow and make the protein. Add in the Byzantine process of obtaining federal approval to use a genetically modified plant, and you’ve got the opposite of “rapid” on your hands.

One solution to this problem would simply be to get the gene into the plant cell cytoplasm for immediate use. It’s possible but involves meticulously injecting a solution with the gene sequence into each leaf. Once the gene solution is in, the plant will transcribe it—copy it into mRNA—in the cell cytoplasm and then build the desired protein based on the mRNA code. But there has been no way to take hand injection to the large-scale production of proteins, including for vaccines.

Age-old vacuum suction = high-tech, high-throughput

To solve this problem, researchers turned to one of our oldest technologies: vacuum suction. They grew tobacco plants to maturity and then clipped off the leaves, which they submerged in a solution. The solution was spiked with a nasty bug, Agrobacterium tumefaciens, a pathogen responsible for the growth of galls, or tumors, on plants. Anyone working in agriculture fears this bacterium, a known destroyer of grapes, pitted fruit trees, and nut trees. But it does have one useful feature for this kind of work: It can insert bits of its DNA into plant cells. The researchers tricked A. tumefaciens into inserting another bit of DNA instead, the code for the protein they wanted to make.

To get the solution close to the cells, the investigators had to get past air bubbles, and that’s where the vacuum came in. They placed the submerged leaves into a vacuum chamber and flipped a switch, and the activated chamber sucked all the air out of the leaves. When the vacuum was turned off, the solution flowed into the now-empty chambers of the leaf, allowing the A. tumefaciens-spiked solution to bathe the plant cells. After 4 days and a few basic protein-extraction steps, the research team had its protein batch. According to the team lead, “any protein” could be made using this process, opening up almost unlimited possibilities and applications for this approach.

Vaccines…or combating bioterrorism?

The technology has come far enough that a US company has taken steps toward manufacturing vaccines using tobacco leaves. And it appears that the applications go beyond vaccines: as one news story has noted, the tobacco plants might also be used to produce antidotes to common agents of bioterrorism.

Your mother *is* always with you

Mother and child, microchimeras

When you’re in utero, you’re protected from the outside world, connected to it only via the placenta, which is supposed to keep you and your mother separated. Separation is generally a good thing because you are foreign to your mother, and she is foreign to you. In spite of the generally good defenses, however, a little bit of you and a little bit of her cross the barrier. Scientists have recently found that when that happens, you often end up toting a bit of mom around for decades, maybe for life.

The presence of cells from someone else in another individual is called microchimerism. A chimera in mythology was a beast consisting of the parts of many animals, including lion, goat, and snake. In genetics, a chimera carries the genes of some other individual along with its own, perhaps even the genes of another species. In microchimerism, we carry a few cells from someone else around with us. Most women who have been pregnant have not only their own cells but some cells from their offspring, as well. I’m probably carrying around cells from each of my children.

Risks and benefits of sharing

Microchimerism can be useful but also carries risks. Researchers have identified maternal cells in the hearts of infants who died from infantile lupus and determined that the babies had died from heart block, partially from these maternal cells that had differentiated into excess heart muscle. On the other hand, in children with type 1 diabetes, maternal cells found in the pancreatic islets appear to be responding to damage and working to fix it.

The same good/bad outcomes exist for mothers who carry cells from their children. There has long been an association between past pregnancy and a reduced risk of breast cancer, but why has been unclear. Researchers studying microchimerism in women who had been pregnant found that those without breast cancer had fetal microchimerism at three times the rate of women with the cancer.

Microchimerism and autoimmunity

Autoimmune diseases develop when the body attacks itself, and several researchers have turned to microchimerism as one mechanism for this process. One fact that led them to investigate fetal microchimerism is the heavy female bias in autoimmune illness, suggesting a female-specific event, like pregnancy. On the one hand, pregnancy appears to reduce the effects of rheumatoid arthritis, an autoimmune disorder affecting the joints and connective tissues. On the other hand, women who have been pregnant are more likely to develop scleroderma (“hard skin”), an autoimmune disorder of the skin and organs that involves excess collagen deposition. There is also a suspected association between microchimerism and pre-eclampsia, a condition in pregnancy that can lead to dangerously high blood pressure and other complications that threaten the lives of mother and baby.

Human leukocyte antigen (HLA)

The autoimmune response may be based on a similarity between mother and child in HLA, immune-related proteins encoded on chromosome 6. This similarity may play a role in the immune imbalances that lead to autoimmune diseases: possibly because the HLAs of mother and child are so similar, the body tips out of balance with an effective HLA excess. If they were more different, the mother’s immune system might simply attack and destroy fetal HLAs; with the strong similarity, fetal HLAs may instead be like an unexpected guest that behaves like one of the family.

Understanding the links between microchimerism and disease is the first step toward exploiting that knowledge for therapies or preventive approaches. Researchers have already used this information to predict the development of a stem cell transplant complication called graft-versus-host disease (GVH). In stem cell transplants, grafts from female donors with previous pregnancies are more often associated with GVH because those donors are microchimeric. Researchers have exploited this fact to try to predict early rejection in kidney and pancreas transplants.

(Photo courtesy of Wikimedia Commons and photographer Ferdinand Reus).

How the genetic code became degenerate

Our genetic code consists of 64 different combinations of four RNA nucleotides—adenine, guanine, cytosine, and uracil. These four molecules can be arranged in groups of three in 64 different ways: 4 × 4 × 4 = 64 possible combinations.
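That combinatorial claim is easy to check by brute force. Here is a quick illustrative sketch in Python (my own, not from any study discussed here) that enumerates every possible codon:

```python
from itertools import product

# The four RNA nucleotides
bases = "ACGU"

# Every ordered triplet of the four bases is a possible codon
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(len(codons))  # 4 * 4 * 4 = 64 possible codons
```

Order matters here (AUG is not the same codon as GUA), which is why the count is 4 × 4 × 4 rather than a smaller "combinations" number.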

Shorthand for the language of proteins

This code is cellular shorthand for the language of proteins. A group of three nucleotides—called a codon—is a code word for an amino acid. A protein is, at its simplest level, a string of amino acids, its building blocks. So a string of codons provides the language that the cell can “read” to build a protein. When the code is copied from the DNA, the process is called transcription, and the resulting string of nucleotides is messenger RNA. In eukaryotes, this messenger takes the code from the nucleus to the cytoplasm, where it is decoded in a process called translation. During translation, the code is “read,” and amino acids are assembled in the sequence the code indicates.
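At the sequence level, the transcription step described above amounts to a simple substitution: the mRNA carries the same sequence as the DNA coding strand, with uracil in place of thymine. A minimal Python sketch (illustrative only, assuming we start from the coding strand rather than the template strand):

```python
# Transcription sketch: the mRNA matches the DNA coding strand,
# with uracil (U) substituted for thymine (T).
def transcribe(coding_strand: str) -> str:
    return coding_strand.upper().replace("T", "U")

print(transcribe("ATGTTACTT"))  # AUGUUACUU
```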

The puzzling degeneracy of genetics

So given that there are 64 possible triplet combinations for these codons, you might think that there are 64 amino acids, one per codon. But that’s not the case. Instead, our code is “degenerate”: in some cases, more than one triplet of nucleotides provides a code word for an amino acid. These redundant codons are all synonyms for the same protein building block. For example, six different codons indicate the amino acid leucine: UUA, UUG, CUA, CUG, CUC, and CUU. When any one of these codons turns up in the message, the cellular protein-building machinery inserts a leucine into the growing amino acid chain.
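The degeneracy is easy to see in code. This toy Python sketch (mine, using only a small fragment of the standard codon table) translates a short message and shows all six leucine codons collapsing to the same amino acid:

```python
# A fragment of the standard genetic code: all six synonymous
# codons for leucine (Leu), plus methionine's single codon for contrast.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu",
    "CUA": "Leu", "CUG": "Leu", "CUC": "Leu", "CUU": "Leu",
    "AUG": "Met",
}

def translate(mrna: str) -> list[str]:
    # Read the message one codon (three bases) at a time
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

print(translate("AUGUUACUU"))  # ['Met', 'Leu', 'Leu']
```

Note that UUA and CUU look nothing alike as triplets, yet both insert leucine: that is the redundancy the text describes.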

This degeneracy of the genetic code has puzzled biologists since the code was cracked. Why would Nature produce redundancies like this? One suggestion is that Nature did not use a triplet code originally, but a doublet code. Francis Crick, of double-helix fame, posited that a two-letter code probably preceded the three-letter code. But he did not devise a theory to explain how Nature made the universal shift from two to three letters.

A two-letter code?

There are some intriguing bits of evidence for a two-letter code. One of the players in translation is transfer RNA (tRNA), a special sequence of nucleotides that carries triplet codes complementary to those in the messenger RNA. In addition to this complementary triplet, called an anticodon, each tRNA also carries a single amino acid that matches the codon it complements. Thus, when a codon for leucine—UUA for example—is “read” during translation, a tRNA with the anticodon AAU will donate the leucine it carries to the growing amino acid chain.

Aminoacyl tRNA synthetases are enzymes that link an amino acid to the tRNA bearing the appropriate anticodon. Each type of tRNA has its specific synthetase, and some of these synthetases use only the first two nucleotide bases of the anticodon to decide which amino acid to attach. If you look back at the code words for leucine, you’ll see that four of the six begin with “CU.” The only difference among these four is the third position in the codon—A, U, G, or C. Thus, for these codons, the synthetases can rely on the doublet alone and still be correct.
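This "doublet is enough" idea can be illustrated with the fourfold-degenerate codon families of the standard code. A small Python sketch (illustrative; a partial table covering only the eight genuinely fourfold-degenerate families) shows that for these families the first two bases alone determine the amino acid:

```python
# Fourfold-degenerate "boxes" of the standard genetic code: for these
# families, the first two codon bases fix the amino acid and the third
# base is free to vary. (Partial table, for illustration only.)
FOURFOLD = {
    "CU": "Leu", "GU": "Val", "UC": "Ser", "CC": "Pro",
    "AC": "Thr", "GC": "Ala", "CG": "Arg", "GG": "Gly",
}

def amino_acid_from_doublet(codon: str) -> str:
    # A reader that sees only the first two bases still gets it right
    return FOURFOLD[codon[:2]]

# All four CU* codons collapse to leucine regardless of the third base
assert {amino_acid_from_doublet("CU" + b) for b in "ACGU"} == {"Leu"}
```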

Math and doublets

Scientists at Harvard believe that they have solved the evolutionary mystery of how the triplet form arose from the doublet. They suggest that the doublet code was actually read in groups of three doublets, but with only the first two “prefix” or last two “suffix” pairs actually being read. Using mathematical modeling, these researchers have shown that all but two amino acids can be coded for using two, four, or six doublet codons.

Too hot in the early Earth kitchen for some

The two exceptions are glutamine and asparagine, which at high temperatures break down into the amino acids glutamic acid and aspartic acid. The inability of glutamine and asparagine to retain their structure in hot environments suggests that in the early days of life on Earth, when doublet codes were in use, the primordial soup must have been too hot for stable synthesis of heat-intolerant, triplet-coded amino acids like glutamine and asparagine.
