Did humans and their fire kill off Australia’s megafauna?

Genyornis. Courtesy of Michael Ströck & Wikimedia Commons.

Timeline, 2005: For those of us who do not live in Australia (and live instead in, say, boring old Texas), the animals that live on that continent can seem like some of the most exotic species in the world. The kangaroo, the wombat, the Tasmanian devil, and, most of all, the platypus are high on the list of the unusual and bizarre in the animal kingdom.

But modern-day Australia has nothing on the Australia of 50,000 years ago when humans first arrived from Java. They encountered huge kangaroos, marsupial lions, 25-foot lizards, and tortoises the size of a subcompact car. Yet, within 5000 years, many of these animals had disappeared permanently. And since the dawn of the study of paleontology, researchers have wondered why.

Of course, it’s our fault

Of course, humans feature as the culprits in most scenarios. Just as the first people in the Americas are usually blamed at least in part for the disappearance of the American megafauna, like mammoths or giant sloths, the first people in Australia have also been suspected of hunting these animals to extinction or exposing them to diseases that decimated the populations.

As it turns out, humans may be to blame, but not through direct destruction or disease transmission. Instead, it may be the mastery of fire, the turning point in our cultural history, that led to the extinction of many species weighing more than 100 pounds on the Australian continent.

Fire!

Australia’s first people probably set huge fires to signal to one another, flush animals for hunting, clear paths through what was once a mosaic of trees, shrubs, and grasses, or to encourage the growth of specific plants. The byproduct of all of this burning was catastrophic to the larger species on the continent.

The fires, according to one study, wiped out the drought-adapted plants that covered the continent’s interior, leaving behind a desert of scrub brush. The change in plant cover may have resulted in a decrease in water vapor exchange between the earth and the atmosphere with the ultimate effect of ending the yearly monsoon rains that would quench the region. Without the rains, only the hardiest, desert-ready plants survived.

You are what you eat…or ate

How could researchers possibly have elucidated these events of 45,000 years ago? By looking at fossilized bird eggs and wombat teeth. Using isotopic techniques, they assessed the types of carbon present in bird eggs and teeth dating from 150,000 to 45,000 years ago. These animals genuinely were what they ate, in some ways, with isotopic markers of their diet accumulating in these tissues. Because plants metabolize different forms of carbon in different ways, the researchers could link the carbon isotopes they found in the egg and teeth fossils to the diet of these animals.
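The logic behind that isotope readout: hot-climate grasses mostly use the C4 photosynthetic pathway and discriminate less against the heavy carbon-13 isotope than C3 trees and shrubs do, so a tissue's carbon-13 signature points back to the plants an animal ate. A minimal sketch of the inference, with rough, illustrative cutoff values rather than the study's own calibration:

```python
def infer_diet(delta13c_permil):
    """Rough diet inference from a tissue's delta-13C value (per mil).

    C4 grasses leave animal tissues relatively enriched in carbon-13;
    C3 trees and shrubs leave them depleted. The cutoffs below are
    approximate, illustrative values, not the study's calibration.
    """
    if delta13c_permil > -12.0:
        return "mostly C4 grasses"
    elif delta13c_permil < -20.0:
        return "mostly C3 trees/shrubs"
    return "mixed C3/C4 diet"

# An eggshell enriched in carbon-13 suggests a grassy, pre-fire landscape...
print(infer_diet(-8.0))    # mostly C4 grasses
# ...while a depleted value suggests a diet of scrubby trees and shrubs.
print(infer_diet(-24.0))   # mostly C3 trees/shrubs
```

The same comparison across dated fossil layers is what lets a dietary shift show up as a shift in isotope values over time.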

They found that the diet of a now-extinct species of bird, the Genyornis, consisted of the nutritious grasses of the pre-human Australian landscape. Emu eggs from before 50,000 years ago pointed to a similar diet, but eggs from 45,000 years ago indicated a shift in emu diet from nutritious grasses to the desert trees and shrubs of the current Australian interior. The vegetarian wombats also appear to have made a similar change in their diets around the same time.

Or, maybe not

And the species that are still here today, like the emu and the wombat, are the species that were general enough in their dietary needs to make the shift. The Genyornis went the way of the mammoth, possibly because its needs were too specialized for it to shift easily to a different diet. Its eggshells showed no change in diet over the time period.

The researchers analyzed 1500 fossilized eggshell specimens from Genyornis and emu to solve this mystery and to pinpoint human burning practices as the culprit in the disappearance of these megafauna over a few brief millennia. Today's Aboriginal Australians still use burning in traditional practices, but by now, the ecosystems have had thousands of years to adapt to burns. Thus, we don't expect to see further dramatic disappearances of Australian fauna as a result of these practices. Indeed, some later researchers have taken issue with the idea that fire drove these changes in the first place, with some blaming hunting again, and as with many things paleontological, the precise facts of the situation remain…lost in the smoky haze of deep history.

Roll over eggs…it’s time for (unrolled) tobacco leaves

Tobacco leaf infected with Tobacco Mosaic Virus. Courtesy of Clemson University - USDA Cooperative Extension Slide Series

Timeline, 2008: If you’ve ever been asked about allergy to egg products before receiving a flu vaccine, you have had a little encounter with the facts of vaccine making. The flu viruses used to produce the vaccine are painstakingly grown in chicken eggs, which make perfect little incubators for the bugs.

So…many…eggs

There are problems—in addition to the allergy issue—that arise with this approach. First of all, growing viruses for a million vaccine doses usually means using a million fertilized, 11-day-old eggs. For the entire population of the United States, 300 million eggs would be required. Second, the process requires months of preparation, meaning a slow turnaround time for vaccines against a fast-moving, fast-changing disease. Last, if there is anything wrong with the eggs themselves, such as contamination, the whole process is a waste and crucial vaccines are lost.

The day may come when we can forget about eggs and turn to leaves. Plants can contract viral disease just like animals do. In fact, an oft-used virus in some research fields is the tobacco mosaic virus, which, as its name implies, infects tobacco plants. It gives a patchy look to the leaves of infected plants, and researchers use this feature to determine whether the virus has taken hold.

Bitter little avatars of evil used for good?

Tobacco plants themselves, bitter little avatars of evil for their role in the health-related effects of smoking, serve a useful purpose in genetic research and have now enhanced their approval ratings for their potential in vaccine production. Plants have caught the eye of vaccine researchers for quite a while because they’re cheaper and easier to work with than animal incubators. Using plants for quick-turnaround vaccine production has been a goal, but a few problems have hindered progress.

To use a plant to make a protein to make a vaccine, researchers must first get the gene for the protein into the plant. Previous techniques involved tedious and time-consuming processes for inserting the gene into the plant genome. Then, clock ticking, there was the wait for the plant to grow and make the protein. Add in the Byzantine process of obtaining federal approval to use a genetically modified plant, and you’ve got the opposite of “rapid” on your hands.

One solution to this problem would simply be to get the gene into the plant cell cytoplasm for immediate use. It’s possible but involves meticulously injecting a solution containing the gene sequence into each leaf. Once the gene solution is in, the plant will transcribe it—copy it into mRNA—in the cell cytoplasm and then build the desired protein based on the mRNA code. But there has been no way to scale hand injection up to large-scale protein production, including for vaccines.

Age-old vacuum suction = high-tech high-throughput

To solve this problem, researchers turned to one of our oldest technologies: vacuum suction. They grew tobacco plants to maturity and then clipped off the leaves, which they submerged in a solution. The solution was spiked with a nasty bug, Agrobacterium tumefaciens, a pathogen responsible for the growth of galls, or tumors, on plants. Anyone working in agriculture fears this bacterium, a known destroyer of grapes, pitted fruit trees, and nut trees. But it does have one useful feature for this kind of work: It can insert bits of its DNA into plant cells. The researchers tricked A. tumefaciens into inserting another bit of DNA instead, the code for the protein they wanted to make.

To get the solution close to the cells, the investigators had to get past air bubbles, and that’s where the vacuum came in. They placed the submerged leaves into a vacuum chamber and flipped a switch, and the activated chamber sucked all the air out of the leaves. When the vacuum was turned off, the solution flowed into the now-empty chambers of the leaf, allowing the A. tumefaciens-spiked solution to bathe the plant cells. After 4 days and a few basic protein-extraction steps, the research team had its protein batch. According to the team lead, “any protein” could be made using this process, opening up almost unlimited possibilities and applications for this approach.

Vaccines…or combating bioterrorism?

The technology has come far enough that a US company has taken steps toward manufacturing vaccines using tobacco leaves. And it appears that the applications go beyond vaccines, as one news story has noted…the tobacco plants might also be used to produce antidotes to common agents of bioterrorism.

Complex amphibian responses to past climate change

Eastern tiger salamander: Ambystoma tigrinum, courtesy of Wikimedia Commons

We were like gophers, but now we’re like voles

Timeline, 2005: There is a cave in Yellowstone packed with fossils from the late Holocene, from about 3000 years ago. From this trove of stony bone, we can glean how different taxa respond to climate change at the morphological and genetic levels, and we can make predictions about how today’s world will respond to such changes.

The cave, which is in a sensitive area of the park off limits to visitors, houses the fossilized bones of rodents, wolves, amphibians, bears, coyotes, beavers, and elk, among others. This fossil cornucopia has yielded so much in the way of stony evidence that sorting it all is in itself a mammoth task. But two climatic stories have emerged from the samples it has yielded.

A global warming story…from the Middle Ages

The first story is about salamanders and climate change. No, it’s not a 21st-century story about global warming, but a Middle Ages story about a hotter planet. From about 1150 to 650 years ago, the earth underwent a brief warming period known as the Medieval Warm Period. During this time, the sea surface temperature was about a degree warmer and, overall, the planet was much drier. This climatic anomaly was followed by what many climatologists call the Little Ice Age, a period that ended around 1900.

During the warm and dry period, animals in what would become Yellowstone National Park responded in ways that left clues about how animals may respond today to our warming planet. Amphibians make particularly sensitive sentinels of environmental change, alerting us to the presence of pollutants or other alterations that affect them before larger manifestations are detectable. And they even provide us evidence in their fossils.

Hot times, smaller paedomorphic salamanders

A group from Stanford excavated the fossils of Ambystoma tigrinum (the tiger salamander) from 15 layers at the Yellowstone site and divided them into five time periods based on their estimated age. They then divided the fossils again based on whether they represented the tiger salamander in its larval, paedomorphic, early adult, or later adult stages. The tiger salamander exhibits paedomorphism, in which the animal achieves reproductive capacity or adulthood while still retaining juvenile characteristics. In the case of the tiger salamander, this translates into remaining in the water, rather than becoming a terrestrial adult, and into retaining characteristics like frilly gills. The molecular determinant of whether or not an amphibian undergoes complete metamorphosis from juvenile to adult is thyroid hormone; when levels of this internal signal are low, the animal will remain juvenile.

The researchers found that during the medieval warming period, the paedomorphic salamanders became smaller than they were during cooler times. This outcome would be expected because thyroid hormone tracks temperature: when water is cooler, hormone levels stay lower and the animal continues growing as a juvenile, whereas warmer water cuts that juvenile growth short.

Hot times, larger adult salamanders

On the other hand, the terrestrial adult salamanders were much larger during the warm period than during cooler periods. Again, this outcome would be expected because the heat on land would encourage faster metabolism, which would result in faster growth. The researchers found no difference in salamander abundance between the cool and warm periods, but they express concern that drying in Yellowstone today as a result of global warming might reduce the number of aquatic paedomorphs, affecting aquatic food webs.

From amphibians to gopher teeth

The same group also studied DNA from fossilized teeth of gophers and voles discovered in the cave. They found that during the dry period, gophers, who were stuck underground and isolated, experienced genetic bottlenecking, a reduction in diversity that persists today. However, the mobile, above-ground voles sought mates far and wide during the dry, warm period and actually experienced an increase in diversity. The lead researcher in the group compares early groups of isolated humans to the gophers, saying that they would have experienced a loss of diversity. But today’s population, with our ability to travel the globe with ease, is probably undergoing an increase in diversity since we’re able to mate with people a hemisphere away.
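The bottleneck the gophers went through has a textbook population-genetics signature: under genetic drift, expected heterozygosity (a standard measure of diversity) decays each generation by a factor of (1 − 1/(2N)), where N is the effective population size. A small sketch with hypothetical numbers (not values from the study) shows why isolated gophers lose diversity so much faster than wide-ranging voles:

```python
def heterozygosity_after(h0, n_effective, generations):
    """Expected heterozygosity after drift in a closed population:
    H_t = H_0 * (1 - 1/(2N))**t  (Wright-Fisher model)."""
    return h0 * (1.0 - 1.0 / (2.0 * n_effective)) ** generations

# Isolated, underground gophers: a tiny effective population bleeds diversity.
print(round(heterozygosity_after(h0=0.5, n_effective=25, generations=100), 3))    # 0.066
# Voles mixing across a large range: diversity barely budges.
print(round(heterozygosity_after(h0=0.5, n_effective=5000, generations=100), 3))  # 0.495
```

The voles' mate-seeking across populations effectively raises N (and adds migrants), which is why their diversity could even increase while the gophers' collapsed.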

The cooperative, semi-suicidal blob lurking beneath the cow pasture

Aggregating Dictyostelium discoideum; photo from a series available at Wikimedia Commons

Amoebae gone to pasture

Timeline, 2009: Lurking beneath the soil and cow patties of a southeast Texas cow pasture, the cellular slime mold cloned itself and cloned itself again until it was about 40 feet across, billions of cells dense. The thing is, this enormous creature was really neither a slime nor a mold, but a group of social microbes, amoebas that were all exactly identical to one another.

It’s a cell, it’s a colony, it’s…a multicellular organism!

No one is quite sure why some microbes are social. But the cellular slime mold, Dictyostelium discoideum, makes a great model for figuring out why. This organism, a eukaryote, can start out life as a single cell, one that hangs out waiting for a hapless bacterium to wander by so it can have dinner. But if bacteria are in short supply or the soil dries up, slime molds can do some marvelous things. They can start socializing with each other, forming colonies—apparently quite large colonies. But they also can shift from being single-celled organisms or single-celled colonial organisms to becoming a multicellular organism. In the really bad times, these cells can signal to one another to differentiate into two different tissues, officially becoming multicellular.

In addition, these brainless blobs also exhibit altruism, a kind known as suicidal altruism in which some cells in a colony commit cell suicide so that cells elsewhere in the colony can thrive. And as the discovery in the Texas cow pasture shows, the organisms can exist in enormous clonal colonies, a quivering gelatinous mass just under the soil, waiting for conditions to improve.

An affinity for dung

How did the researchers even find this slime mold? They had to get themselves into the “mind” of the slime mold and figure out where this species might be most prone to going clonal, a response to an uninviting environment. Given that slime molds are fond of deep, humid forests, the researchers opted to search for the clonal variety in a dry-ish, open pasture at the edge of such forests. Why would they even expect D. discoideum to be there? Because the amoebae also happen to have an affinity for dung, as well as soil.

After careful sampling from 18 local fields, which involved inserting straws into cow patties and surrounding soil, the research team tested their samples for the presence of amoebae. Once the dishes showed evidence of the slime mold, the investigators then sampled each group and sequenced the genome. That’s when they realized that 40 feet of one cow pasture was one solid mass of amoebae clones.

The winner clones it all?

These findings raise intriguing questions for microbial ecologists. One problem they’re tackling is what determines which clone gets to reproduce so wildly at the edges of acceptable amoeba territory. Is it just one clone that arrives and gets to work, or do several show up, have a competition, and the winner clones all? They tested this idea in the lab and found that none of the clones they tested seemed to have any particular competitive advantage, so the selection process for which amoeba gets to clone at the edges remains a mystery.

Another question that intrigues the ecologists is why these single-celled organisms cooperate with one another in the first place. Such clonal existence is not unknown in nature—aspen trees do it, and so do sea anemones. And researchers have identified as many as 100 genes that seem to be associated with cooperative behavior in the slime mold. One explanation for cooperating even to the extreme of individual suicide is the clonal nature of the colony: the more identical the genes, the more an individual gains, genetically speaking, by sacrificing for other members of the species. The sacrificed cells ensure that the same genes survive, just in a different individual.
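This is the classic kin-selection argument, usually summarized as Hamilton's rule: a costly, self-sacrificing trait can spread when r × b > c, where r is the genetic relatedness between actor and beneficiary, b is the benefit to the beneficiary, and c is the cost to the actor. In a clonal colony, r = 1, which is why even suicide can pay off. A toy illustration (the benefit and cost figures are arbitrary, chosen only to show the comparison):

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: self-sacrifice can be selected for when
    relatedness * benefit (to recipients) exceeds cost (to the actor)."""
    return relatedness * benefit > cost

# Clone-mates are genetically identical (r = 1), so a modest benefit
# to the survivors can outweigh even the ultimate cost to the altruist.
print(altruism_favored(relatedness=1.0, benefit=1.5, cost=1.0))  # True
# Among unrelated amoebae (r near 0), the same sacrifice never pays.
print(altruism_favored(relatedness=0.0, benefit=1.5, cost=1.0))  # False
```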

Machiavellian amoebae: cheaters among the altruists

Not all amoebae are altruistic, and some apparently can be quite Machiavellian. Some individual amoebae carry cooperation genes that have undergone mutations, leading them to be the cheaters among the altruists and to take advantage of the altruistic suicide to further their own survival. What remains unclear is why cooperation continues to be the advantageous route, while the mutant cheats keep getting winnowed out.

By Emily Willingham


Microbes redirect our best-laid plans

Greeks at War (pottery from the British Museum; photo courtesy of Wikimedia Commons)

The madness of King George

Timeline, 2006: It’s not that unusual for disease to alter the course of history – or history as humans intended it to be. Some scholars believe, for example, that the intransigence of King George III of England arose from his affliction with porphyria, a heritable metabolic disorder that can manifest as a mental problem, with symptoms that include irrational intractability. But it’s rarer for a disease to shift the balance of power so entirely that one nation gains the upper hand over another. Yet that appears to be what happened to the city-state of Athens just before the Peloponnesian Wars of around 430 B.C. The upshot of the wave of disease and the wars was that Sparta conquered Athens in 404 B.C.

Spartans had a little microbial assistance

Sparta may have owed its big win to a small bacterium, Salmonella enterica subsp. enterica serovar Typhi, the microbe responsible for typhoid fever. A plague swept across Athens from 430 to 426 B.C., having traveled from Ethiopia to Egypt and Libya before alighting in the Greek city and destroying up to a third of its population. It also brought to a close what became known as the Golden Age of Pericles, a time when Athens produced some of its most amazing and widely recognized art and thinkers, including the Parthenon, Aeschylus, Sophocles, and Socrates. Pericles was a statesman who oversaw the rebuilding of Athens following the Greek win in the Persian Wars, and he guided the city-state to a more democratic form of rule and away from the dictatorships of the previous regimes. In the process, the city flourished in art and architecture.

And then along came the plague. The Greek historian Thucydides, who chronicled the Peloponnesian Wars, left behind such a detailed account of the plague, its symptoms, and what happened to its victims, that intrigued medical detectives have ever since debated about what might have caused it. Thucydides, himself a plague survivor, vividly described the sudden fever, redness of the eyes, hemorrhaging, painful chest and cough, stomach distress, and diarrhea that ultimately led to death in so many cases. He also mentioned pustules and ulcers of the skin and the loss of toes and fingers in survivors. This litany of symptoms produced many candidate causes, including bubonic plague, anthrax, smallpox, measles, and typhoid fever.

Construction crews yet again uncover something interesting

In the mid-1980s, a construction crew was busy digging a hole for a subway station in the Kerameikos area of Athens when they uncovered a mass burial site. Unlike other Greek burial sites, this one bore marks of hasty and haphazard burials, and the few artifacts that accompanied the bones dated it to the time of the plague that destroyed Athens. Researchers were able to harvest some teeth from the site and analyze the tooth pulp, which retains a history of the infections a person has suffered. They examined DNA sequences from the pulp for matches with suggested microbial agents of the plague and finally found a match with the typhoid bacterium.

Typhoid Mary: Intent on cooking, ended up killing

One discrepancy between the disease pattern of typhoid fever and that described by Thucydides is the rapidity of onset the Greek historian detailed. Today, typhoid fever, which still infects millions of people worldwide, takes longer to develop in an infected individual, and sometimes never develops at all. People who harbor the bacterium but don’t become ill themselves are “carriers.”

Perhaps the most famous carrier was Typhoid Mary, Mary Mallon, a cook in New York City at the beginning of the 20th century. It is believed that she infected hundreds of people, with about 50 known cases and a handful of deaths being directly associated with her. Typhoid Mary was told not to work as a cook any longer or she would be quarantined, but she simply disappeared for a while and then turned up under a different name, still working as a cook. After another outbreak was traced to her, she was kept in quarantine for 23 years until she died.

It’s a blob. It’s an animal. It’s Trichoplax

The irresistibly attractive Trichoplax. Oliver Voigt, courtesy of Wikimedia Commons

Timeline, 2008: One of the greatest quests in animal biology is the search for the “ur-animal,” the proto-creature that lies at the base of the animal family tree. For many years, the sponge held the low spot as most primitive animal, a relatively simple cousin of ours consisting of a few tissues and a tube that filters water for nutrients.

An animal, simple and alone

Research now suggests, however, that there’s an even older cousin at the base of the tree, an animal-like organism with three cell layers and four cell types that moves by undulating and exists alone in its own taxon. This organism, with the species name Trichoplax adhaerens, is the sole living representative of the Placozoa, or “flat animals.” It has been a bit of a mystery creature ever since the amoeba-like things were first noted a century ago in a German aquarium. Today, new techniques in systematics, the study of how living things are related, have helped pinpoint its place on the tree of life. These techniques have, however, left us with a few complex questions about this “simple” animal.

The animal kingdom family tree gets complex fast, with an early division in the trunk. Whatever lies at its base—and we’re still not sure what the organism was—that ancestor yielded two basic animal lineages. One led to sponges (Porifera) and branched to Cnidarians—corals, hydras, and organisms like jellies that have primitive nervous systems. The other lineage is the Bilateria, which includes everything from flatworms to bugs to beluga whales to us. Obviously, this lineage also has developed a nervous system, in many cases one that is quite complex.

Why a sponge will never be nervous

Earlier thinking was that the sponges, in lacking a nervous system, represented some earliest ancestor from which the two lineages sprang. But recent molecular analysis of the sponge and placozoan genomes has left a few systematists scratching their heads. The big headline was that the sea sponge was no longer the most primitive or oldest taxon. That honor may now belong to the placozoan, although some analyses place it as having evolved after Porifera.

An ur-animal? Or just another ur-cousin?

It may go a long way back, but not so far that scientists can say T. adhaerens is the “ur-animal,” or “mother of all animals.” In fact, it shares a few features with Bilateria, such as special protein junctions for holding tissues together, but computer analysis places it squarely in the Cnidaria branch of the animal family tree. Thus, this “weird, wee beastie,” while possibly older than the sponge, still does not represent the proto-ancestral animal condition. Rather than being the “ur-animal” from which all other animals sprang, it’s more like an “ur-cousin” of Bilateria.

Nervous system genes but no nervous system?

T. adhaerens also represents another conundrum for evolutionary biologists and systematists. It and the sponge both carry coding in their genomes for neural proteins, yet neither has a nervous system. The Cnidarian nervous system itself may have evolved in parallel with that of the Bilateria, an evolutionary phenomenon known as “convergent evolution.” When evolutionary processes occur in parallel, unrelated species may share adaptations—such as a nervous system—because of similar selection pressures. The results are what we call “analogous structures,” which have similar functions and may even seem quite similar. But they are shared because of similar evolutionary pressures, not because of a common ancestry.

Lost traits make evolutionary biologists tear out their hair

The primitive placozoan does not have a nervous system, although organisms that arose later in the Cnidarian lineage, such as jellies, do. Yet those neural genes in its genome leave an open question and a continuing debate: Is the placozoan an example of another common evolutionary phenomenon in which a trait arises, but then is lost? Some scientists have suggested that there may have been an even older version of a nervous system, predating the Cnidarian/Bilateria split. This trait then vanished, leaving behind only these traces in the primitive placozoan and sponge genomes. With this scenario, the two nervous systems would have a shared ancestry: instead of being analogous traits resulting from convergent evolution, they would represent homologous traits, shared because of a common neural ancestry.

Your mother *is* always with you

Mother and child, microchimeras

When you’re in utero, you’re protected from the outside world, connected to it only via the placenta, which is supposed to keep you and your mother separated. Separation is generally a good thing because you are foreign to your mother, and she is foreign to you. In spite of the generally good defenses, however, a little bit of you and a little bit of her cross the barrier. Scientists have recently found that when that happens, you often end up toting a bit of mom around for decades, maybe for life.

The presence of cells from someone else in another individual is called microchimerism. A chimera in mythology was a beast consisting of the parts of many animals, including lion, goat, and snake. In genetics, a chimera carries the genes of some other individual along with its own, perhaps even the genes of another species. In microchimerism, we carry a few cells from someone else around with us. Most women who have been pregnant have not only their own cells but some cells from their offspring, as well. I’m probably carrying around cells from each of my children.

Risks and benefits of sharing

Microchimerism can be useful but also carries risks. Researchers have identified maternal cells in the hearts of infants who died from infantile lupus and determined that the babies had died from heart block, partially from these maternal cells that had differentiated into excess heart muscle. On the other hand, in children with type 1 diabetes, maternal cells found in the pancreatic islets appear to be responding to damage and working to fix it.

The same good/bad outcomes exist for mothers who carry cells from their children. There has long been an association between past pregnancy and a reduced risk of breast cancer, but the reason has been unclear. Researchers studying microchimerism in women who had been pregnant found that those without breast cancer had fetal microchimerism at a rate three times that of women with the cancer.

Microchimerism and autoimmunity

Autoimmune diseases develop when the body attacks itself, and several researchers have turned to microchimerism as one mechanism for this process. One fact that led them to investigate fetal microchimerism is the heavily female bias in autoimmune illness, suggesting a female-based event, like pregnancy. On the one hand, pregnancy appears to reduce the effects of rheumatoid arthritis, an autoimmune disorder affecting the joints and connective tissues. On the other hand, women who have been pregnant are more likely to develop an autoimmune disorder of the skin and organs called scleroderma (“hard skin”) that involves excess collagen deposition. There is also a suspected association between microchimerism and pre-eclampsia, a condition in pregnancy that can lead to dangerously high blood pressure and other complications that threaten the lives of mother and baby.

Human leukocyte antigen (HLA)

The autoimmune response may be based on a similarity between mother and child in HLA, immune-related proteins encoded on chromosome 6. This similarity may play a role in the immune imbalances that lead to autoimmune diseases: possibly because the HLAs of the mother and child are so similar, the immune system tips out of balance, with an excess of near-identical HLAs. If they were more different, the mother’s immune system might simply attack and destroy fetal HLAs, but with the strong similarity, fetal HLAs may be like an unexpected guest that behaves like one of the family.

Understanding the links between microchimerism and disease is the initial step in exploiting that knowledge for therapies or preventative approaches. Researchers have already used this information to predict the development of a complication in stem cell transplant called “graft-versus-host disease” (GVH). In stem cell transplants, female donors with previous pregnancies are more associated with development of GVH because they are microchimeric. Researchers have exploited this fact to try to predict whether or not there will be an early rejection of a transplant in kidney and pancreas organ transplants.

(Photo courtesy of Wikimedia Commons and photographer Ferdinand Reus).
