The Rising Cost of Poor Diet: ADHD Symptoms in Children

A handful of Halloween candy later, you are skipping around the room: your hands are fidgety, your speech is jittery, and it’s hard to contain your burst of energy. You are probably quite familiar with the sugar rush that affects you this one day a year… or is it two days a year? Three? Almost every day?

Time and again, nutritional studies and long-term research have shown that children’s poor dietary patterns influence neural development and behavior, especially hyperactivity and attention, and can contribute to diagnoses like ADHD.

A study from 2009 followed a group of children over several years and found that children whose “junk food” consumption increased significantly at ages 4-5 had an elevated risk of hyperactivity at age 7. Another study, from 2004, demonstrated a prolonged association between artificial food coloring and hyperactivity. While the cause of ADHD is still undetermined, dietary patterns appear to play a role. Not only are high-fat, high-sugar diets implicated in problems in the developing nervous system, but they also tend to be low in valuable vitamins and micronutrients.

A more recent study, published online in March 2018 in the European Journal of Clinical Nutrition, showed a positive correlation between a “processed” food pattern, a “snack” food pattern, and ADHD symptoms in children aged 3-6. On the other hand, there was a negative correlation between a “vegetarian” food pattern and ADHD symptoms. This study used data collected from over 14,000 preschoolers in Ma’anshan City, China, and was the first large-scale study in mainland China to investigate connections between diet and ADHD in children. The prevalence of ADHD symptoms in the group studied was 8.6%.

Researchers asked caregivers and parents to answer questionnaires about the recent food consumption and food choices of their children, and used the Conners Abbreviated Symptom Questionnaire to assess the children for ADHD symptoms. Then, the researchers derived five dietary patterns from the common answers given by the caregivers and parents: (1) “processed,” for fast food, fried foods, preserved fruit, and other high-fat food items, (2) “snack,” for high-sugar foods like sweets, biscuits, cakes, and chocolates, (3) “protein,” for red meat, poultry, eggs, fish, fruit, and rice, (4) “beverage,” for flavored milk, soda, and yogurt, and (5) “vegetarian,” for grains, beans, and fruit or vegetable juice.
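To make the analysis concrete, here is a minimal sketch of how derived pattern scores can be related to symptom status with a logistic regression, which yields odds ratios like those reported in studies of this kind. This is not the authors’ pipeline; every number and variable name in it is invented for illustration.

```python
# Minimal sketch of relating dietary-pattern scores to ADHD symptoms.
# This is NOT the authors' pipeline; data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-child scores on two derived dietary patterns.
processed_score = rng.normal(size=n)
vegetarian_score = rng.normal(size=n)

# Simulate symptom status: higher "processed" score raises risk,
# higher "vegetarian" score lowers it (illustrative effect sizes only).
logit = -2.4 + 0.5 * processed_score - 0.3 * vegetarian_score
has_symptoms = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([processed_score, vegetarian_score])
model = LogisticRegression().fit(X, has_symptoms)

# Exponentiated coefficients approximate odds ratios per 1-SD change.
print("odds ratios:", np.exp(model.coef_))
```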

Children who had a “processed” or “snack” dietary pattern had a significantly higher likelihood of demonstrating ADHD symptoms, hyperactivity, and attention problems. There was no correlation between ADHD symptoms and the “protein” or “beverage” categories, but the “vegetarian” dietary pattern appeared to protect against ADHD symptoms.

This study cannot show causation, or that a poor diet causes ADHD. Still, the evidence strongly suggests that diet is an influencing factor in children’s development and behavior. High-fat, high-sugar foods tend to be cheaper, more accessible, and more conveniently packaged in bags and wrappers for on-the-go, and they don’t need to be refrigerated. They taste good. But the cost may be a rising prevalence of ADHD and similar disorders. The relationship between food and hyperactivity is further evident in studies showing that elimination diets and fish oil supplements can reduce ADHD symptoms. Fatty and sugary foods should be eaten in moderation; Halloween should not happen every day.

Source:

Yan, S., Cao, H., Gu, C., Ni, L., Tao, H., Shao, T., Xu, Y., & Tao, F. (2018). Dietary patterns are associated with attention-deficit/hyperactivity disorder (ADHD) symptoms among preschoolers in mainland China. European Journal of Clinical Nutrition [Published online March 13, 2018]. doi:10.1038/s41430-018-0131-0.

The Comeback of the Chesapeake Bay

Once known for its beauty and abundance of seafood, the Chesapeake is now known for its poor health and struggling ecosystems. Excessive pollution from the surrounding watershed has caused the collapse of fisheries and the creation of dead zones. The neglected health of its waters has come with a hefty price tag, costing the economy and those who depended on the Bay as a way of life. Recognizing the urgent need to save the Chesapeake, government and scientific agencies have come together to take on this huge task.

A new study explains how 30 years of environmental policy governing the Bay has led to the successful recovery of its aquatic ecosystems. Researchers observed an increase in submerged aquatic vegetation (SAV) due to declining nutrient pollution, an example of how recovery can be achieved through management of nutrients and human stressors. Since 1984 the amount of nitrogen in the water has decreased by 23%, and in turn there has been a 316% increase in SAV. To understand how nutrient pollution affects SAV, the researchers conducted two analyses. The first examined how local watershed nutrient loads affect 120 subestuaries, and the second linked environmental conditions to SAV populations. Both analyses demonstrate that increased nutrient pollution from nonpoint and point sources reduces the amount of SAV: excess nitrogen causes either increased algal cover or the accumulation of sulfides, while excess phosphorus causes phytoplankton blooms and, therefore, decreased sunlight penetration.
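To picture what the first analysis is testing, here is a minimal sketch of a regression of SAV cover against nitrogen load across subestuaries. The data are simulated and the variable names are hypothetical; this is not the study’s model, only the general shape of the question.

```python
# Minimal sketch: does SAV cover decline as nitrogen load rises?
# All numbers are invented for illustration, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n_subestuaries = 120

# Hypothetical nitrogen load (kg/ha/yr) and SAV cover (% of habitat).
nitrogen = rng.uniform(5, 50, n_subestuaries)
sav_cover = np.clip(60 - 1.0 * nitrogen + rng.normal(0, 8, n_subestuaries), 0, None)

# Ordinary least-squares slope: expected change in SAV per unit nitrogen.
slope, intercept = np.polyfit(nitrogen, sav_cover, 1)
print(f"slope = {slope:.2f} (% SAV cover per kg N/ha/yr)")
```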

Using aerial surveys, biogeochemical monitoring data, historical information, and watershed models, the researchers concluded that the Chesapeake is indeed improving, largely thanks to the conservation and restoration efforts put in place. The recovery of SAV is especially important because these grasses provide habitat for crabs and fish and are a clear indicator of healthy water quality. Slowly but surely, the Chesapeake is recovering. Results from the Chesapeake Bay Foundation’s “State of the Bay” assessment show that 2016 was a record year, with the highest score of Bay health in 18 years. Both the CBF and the researchers agree that though this is promising news, there is still much more to be done, and efforts should continue.

If you live within one of the six states in the Bay’s watershed, consider how your actions can benefit or harm the amazing ecosystem that is the Chesapeake Bay. Visit http://www.cbf.org/ for more information.


Article Sources: Lefcheck, J.S., et al. 2018. Long-term nutrient reductions lead to the unprecedented recovery of a temperate coastal region. PNAS.
http://www.cbf.org/about-the-bay/state-of-the-bay-report/2016/

Tumors, No Longer a Guessing Game

What makes one tumor different from another? The word ‘cancer’ can be used to describe hundreds of maladies, from a misshapen mole, to malignant blood cells, to massive tumors in the heart, lungs, or anywhere else. But what makes a heart tumor different from a lung tumor? And why are there four different types of lung cancer? The answer lies in the DNA of each cancer cell.

Currently, doctors diagnose different types of cancer based primarily on where they are, what they look like in scans, and what molecules they are made of when samples are taken in biopsies. A new study introduces a novel method: looking at the methylation of cancer DNA, a process that silences or activates specific functions in a cell and can characterize different disorders. By identifying the specific DNA methylation patterns in each tumor, doctors can attribute it to a specific diagnosis; this requires only removing a small piece of the tumor and analyzing it in the lab.

This study looked specifically at tumors of the central nervous system (CNS) and performed the methylation-based diagnostic test on 1,104 patients, each diagnosed with one of 64 distinct cancers resulting in CNS tumors. In 76% of cases, the test matched the existing diagnosis. In 12% of cases, the original diagnosis was revised as a result of this test. In the remaining instances, the test could not match the tumor to a known methylation class.
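Conceptually, the matching step works like a classifier that either assigns a methylation profile to its closest known class or abstains when nothing is close enough, which is how a case can end up “unmatched.” The sketch below uses a simple nearest-centroid rule with a distance threshold; the actual study used a far more sophisticated machine-learning classifier, and all data here are invented.

```python
# Toy sketch of class-or-abstain matching on methylation profiles.
# The real study used a trained machine-learning classifier; this
# nearest-centroid rule with a threshold only illustrates the idea.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 100  # hypothetical number of methylation sites

# Invented reference centroids: mean methylation per known tumor class.
centroids = {"class_A": rng.random(n_sites), "class_B": rng.random(n_sites)}

def classify(profile, centroids, max_distance=4.0):
    """Return the nearest class, or None if nothing is close enough."""
    best_class, best_dist = None, np.inf
    for name, centroid in centroids.items():
        dist = np.linalg.norm(profile - centroid)
        if dist < best_dist:
            best_class, best_dist = name, dist
    return best_class if best_dist <= max_distance else None

tumor_profile = rng.random(n_sites)
print(classify(tumor_profile, centroids))  # a class name, or None
```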

These results are extremely encouraging, and point to the end of subjective cancer diagnoses, where doctors must act as detectives, gathering clues to point to the most likely diagnosis. This new test is more like a fingerprint: every cancer has a different one, and once cancers can be told apart, doctors need only consult a database stocked with thousands of cancer prints.

And this database is already live and free the world over; the data for every cancer case tested in this manner will be made available and shared. Doctors will be able to readily compare the DNA in the cancers they encounter with other cases, which will likely lead to the discovery of new, unique cancers and dramatically improve the accuracy of diagnoses overall.


Source:

Capper D, Jones DTW, Sill M, Hovestadt V, Schrimpf D, et al. (2018) DNA methylation-based classification of central nervous system tumours. Nature. doi:10.1038/nature26000

Immunotherapy a Potential Treatment for Breast Cancer

Breast Cancer Awareness

Immunotherapy is the treatment of a disease by harnessing the immune system. It was believed in the past that breast cancer did not trigger an immune response, the reaction of the immune system’s cells against a foreign substance that is not recognized as part of the body. In early 2018, researchers at the University of Washington in Seattle published a study assessing the benefits and potential future of immunotherapy as a form of treatment for breast cancer.

In the last decade, after the detection of an immune response in breast cancer patients, numerous studies have considered immunotherapy a possible treatment for this type of cancer. Research on women 6 months prior to a breast cancer diagnosis showed high levels of T-cells (a type of white blood cell) against tumor-associated proteins in women who would go on to develop breast cancer. Immune checkpoint inhibitor therapy uses drugs to block certain proteins that prevent immune cells from killing cancer cells. These agents allow T-cells to recognize cancer cells and, by dividing and expanding, limit tumor growth.

Research studies have shown that as cancer progresses, the expression of antigens (substances that trigger an immune response) diminishes, reducing the immune response. This implies that to develop a proper immunotherapeutic mechanism, researchers must focus on strategies that reverse this effect and boost the immune response that promotes tumor destruction. One of the first breast cancer-associated antigens identified was the MUC-1 (human mucin-1) protein. T-cells targeting this protein were present in low numbers in patients with the disease, so boosting those numbers would be a potential therapeutic mechanism.

It should be noted that despite recent discoveries, breast cancer is still a poor producer of immune response. Tumor-infiltrating lymphocytes (TILs), another type of white blood cell, infiltrate tumor tissue and make direct physical contact with tumor cells, resulting in their physical destruction. However, their occurrence and concentration vary based on the type of breast cancer.

Studies are currently ongoing to identify the factors that predict which patients are most likely to benefit from immune checkpoint inhibitor therapies. This is aided by the fact that, despite fluctuating amounts, TILs are always present to some extent. So, one strategy for successful treatment is to increase the number of TILs beforehand. Immunization has been found to be a possible way of accomplishing this: research has shown that vaccine-induced T-cells can travel to the tumor and induce an increase in TILs. Currently, several vaccines are under development to target multiple breast cancer antigens at the same time to make treatment more effective.

In addition to immunization, standard therapies like radiation and chemotherapy can also increase the number of TILs. As we better understand the immune-stimulating effects of traditional treatment methods, we can use them more effectively. Thus, the immune environment of tumors can be used to combine standard and novel therapeutic strategies into more effective treatments. Research on such methodologies is still in progress, and there is much to learn before we can use the knowledge of the immune response in breast cancer to improve treatment approaches.

Reference: Disis, M.L., Stanton, S.E. 2018. Immunotherapy in Breast Cancer: An Introduction. The Breast 37:196-199.


Soil Cannot Mitigate Climate Change

Crop Field: Using crops to transfer carbon dioxide into the soil has been found to be an unrealistic option.

It was once considered a groundbreaking idea that climate change could be mitigated by burying carbon in the ground. However, in late February of 2018, scientists at Rothamsted Research published findings in the journal Global Change Biology showing that soil data stretching back to the mid-19th century demonstrate that carbon emissions cannot be stored in the ground at the proposed rates. The researchers concluded this by analyzing the rate of change in carbon levels in soil.

The original idea of using crops to collect carbon from the atmosphere and bury it in the soil was proposed in 2015 at an international conference. The aim of this proposal was to increase carbon sequestration (the removal of carbon dioxide from the atmosphere and its storage in solid or liquid form) by “4 parts per 1000” per year. The researchers at Rothamsted insisted that this rate was unrealistic over such large areas of the planet, noting that levels of soil carbon are not unrestricted: as the levels increase, they move towards equilibrium and eventually stop growing.
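To see what “4 parts per 1000” means in concrete terms, here is a back-of-the-envelope calculation. The starting stock of 60 t C/ha is a hypothetical figure for illustration, not a number from the paper, and the compounding deliberately ignores the equilibrium effect the researchers emphasize.

```python
# Back-of-the-envelope: what "4 per 1000" per year means for a field.
# The starting stock of 60 t C/ha is a hypothetical illustration.
stock = 60.0          # tonnes of soil organic carbon per hectare
rate = 4 / 1000       # the proposed annual increase: 0.4% per year

for year in range(1, 21):
    stock *= 1 + rate  # compound growth, ignoring any move to equilibrium

print(f"after 20 years: {stock:.1f} t C/ha")  # ~65.0 t C/ha
```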

Data from 16 experiments on three different soil types were examined, giving 110 treatment comparisons. The researchers observed the “4 per 1000” rate of growth in soil carbon levels in some cases, but only when such extreme measures were taken that they would be impractical in a real-life setting.

Not only did these experiments demonstrate the impracticality of the “4 per 1000” initiative, they also showed that high rates of soil carbon increase can be achieved by removing land from agriculture. However, such an extreme decrease in agriculture over vast expanses of land would be incredibly damaging to global food security. To mitigate this problem, researchers have suggested returning crop residue to the soil as an effective way to increase soil carbon sequestration; this has been observed as a practical method used in smallholder agriculture settings in some countries.

The researchers also suggested that long-term crop rotation with the occasional introduction of pasture could lead to significant soil carbon increases. While the environmental benefits are clear, this method is economically impractical for most farmers. For an effective change in agricultural methods to be plausible, new policies or guidelines would have to be implemented.

Overall, the “4 per 1000” initiative is unrealistic as a major contribution to climate change mitigation. The scientists at Rothamsted Research suggest that the more logical reason for promoting practices that increase soil carbon levels is to ensure sustainable food security and wider ecosystem services.

Paul Poulton, Johnny Johnston, Andy Macdonald, Rodger White, David Powlson. Major limitations to achieving “4 per 1000” increases in soil organic carbon stock in temperate regions: Evidence from long-term experiments at Rothamsted Research, United Kingdom. Global Change Biology, 2018; DOI: 10.1111/gcb.14066

The Race to Mars: The Ins and Outs of How it Will Happen via the Deep Space Gateway

Picture of the Red Planet

For many, the fantasy of going to Mars will soon become a reality. In 2017, Michelle Rucker and John Connolly of the Mars Study Capability Team at NASA gave a presentation on the specifics of just how humans will get to Mars. A key aspect of this mission will be the Deep Space Gateway (more information on the DSG can be found in my blog post from last week!). As a quick refresher, the Deep Space Gateway is a space station around the Moon that will allow people to inhabit cislunar space (the space between the Earth and the Moon) and will aid in transporting astronauts to Mars.

The team began their presentation by discussing how the Mars mission fits into the vision for coordinated human and robotic exploration of our solar system. This vision is called the Global Exploration Roadmap, and one of its main goals is to have astronauts explore and live on Mars and in space. The pair of researchers highlighted the Deep Space Gateway as a way to “provide a convenient assembly, checkout, and refurbishment location to enable Mars missions”. After explaining the role of the DSG, the team went into the specifics of the phases of the project and the parts of the Mars mission.

After the setup and construction of the DSG and its components (Phase 1), Phase 2 begins with a 180-day Deep Space Transport (DST) checkout and a one-year shakedown cruise. During this cruise, the Deep Space Gateway remains in orbit and is supplied with astronauts and cargo by the Orion capsule. At the same time, the DST takes a path that encircles the Moon to simulate the deep space trip to Mars. This piece is critical, as the DST will be the vessel that actually takes astronauts to Mars.

Phase 3 consists of our first mission to Mars via the DST! Phase 3 is only a “flyby”: the DST enters Mars orbit without any interaction between humans and the Martian surface. This will prove our technical ability to travel all the way to Mars from Earth. The most interesting things occur in Phase 4, when the first humans will land on Mars. The first three missions of Phase 4 will revisit the same landing site in order to create a field station on the Martian surface. The Deep Space Gateway will also play an important role, as it will provide an easy access point for making any necessary repairs to the Mars mission.

OPINION: This project has been going for a LONG time, but unfortunately there is still a long way to go, and we won’t see anyone standing on Mars until the 2030s. It’s incredible to think that all of this progress has been made, yet we are still so far away. There’s a lot of work that still needs to be done, but I’ll be happy and very interested to see where NASA goes with this.

LINK: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180000122.pdf

Rucker, M., Connolly, J. 2017. Deep Space Gateway – Enabling Missions to Mars. NASA Technical Reports Server: JSC-E-DAA-TN49931.

Technology Exposed

Image by Zoopah

How much energy do our technologies consume? Researchers from McMaster University answer this question in their study on the trends in global emissions and lifespans of Information and Communication Technology (ICT) devices and services. They based their study on smartphones, tablets, displays, notebooks, desktops, and data centers. Based on their results, ICT infrastructures like data centers and communication networks are the largest contributors to energy consumption and CO2 emissions.

Data centers emit 1,314 to 3,743 kg CO2-e/year (carbon dioxide equivalent) while in use, equivalent to 33% of the global greenhouse gas emission (GHGE) footprint of ICT devices in 2010. The average life span of a data center is ten years, and the servers attached to the centers last three to five years. Since data centers support the internet and telecommunication systems, they are in constant use, resulting in high energy consumption. In comparison, communication networks, which encompass telecom operator networks, office networks, and customer premises access, contributed 28% of the global footprint in 2010. Based on data from 2007-2012, the energy consumption of data centers is projected to increase by 12% by 2020.

Following data centers and communication networks in the ranking of ICT greenhouse gas footprints, smartphones will contribute 11% of the footprint in 2020, compared to 4% in 2010. Smartphones, specifically Apple iPhones in the study, have an average lifespan of 1.8 to 2 years. The researchers’ model of absolute GHGE footprints predicted a 730% increase in smartphone emissions from 2010 to 2020; in 2020, smartphones will release 125 MT CO2-e into the environment. The increase in emissions is due to the short life span of these devices, which means more phones need to be produced. Planned obsolescence is intentional in technological design, contributing to a profitable business model for phone manufacturers and the telecom industry.
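A 730% increase over a decade is easier to grasp as an annual growth rate. The quick conversion below is our own arithmetic, not a calculation from the paper:

```python
# What annual growth rate produces a 730% increase over 10 years?
# (An increase of 730% means the final value is 8.3x the start.)
growth_multiple = 1 + 730 / 100           # 8.3x over the decade
annual_rate = growth_multiple ** (1 / 10) - 1
print(f"~{annual_rate:.1%} per year")     # roughly 23.6% per year
```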

In contrast to data centers, communication networks, and smartphones, the footprints of displays, notebooks, and desktops will decrease by 2020 due to the transition to heavy phone usage. Below, Figure 1 displays the change in GHGE by ICT category.

Figure 1: Data from Belkhir and Elmeligi on the relative contribution of each Information and Communication Technology category to global energy consumption in 2010 and 2020

Why do these numbers matter? Under the Paris Agreement, 196 nations agreed to limit global warming to below 2°C. If the production of ICT devices and services continues as is, we will fall short of this commitment. In 2007, ICT accounted for 1-1.6% of global greenhouse gas emissions; this share could exceed 14% by 2040 if we continue our current practices. More importantly, these emissions would undermine the global initiative to keep the global temperature in check.

Image from WIRED

So, what now? The researchers suggest that we instill sustainable practices in the production and operation of data centers and communication networks through the use of renewable energies. It will also be important to raise awareness of the global energy consumption of technology. This research provides insight into the environmental impacts of our technology; to meet our global climate commitments, it will be crucial to adopt new methods.

Source: Lotfi Belkhir, Ahmed Elmeligi. Assessing ICT global emissions footprint: Trends to 2040 & recommendations. Journal of Cleaner Production, 2018; 177: 448. DOI: 10.1016/j.jclepro.2017.12.239

Coral Reefs Getting Slammed: This Time Plastics Are to Blame

Coral bleaching isn’t the only major issue affecting our coral reefs. Plastic waste is once again damaging marine life, this time on coral reefs. Scientists around the world, including those at Cornell University and James Cook University in Queensland, Australia, found that corals entangled in plastic are more likely to be infected with pathogens and diseases than ones that aren’t. These scientists surveyed 159 coral reefs and visually examined over 120,000 individual corals throughout the Asia-Pacific region. Of the corals surveyed, a third were wrapped in at least one piece of plastic greater than 50 mm, amounting to 2.0-10.9 plastic items per 100 m². Corals in contact with plastic waste saw their likelihood of contracting a disease increase by more than a factor of 20, to 89.1 ± 3.2%, a significant jump from the normal rate in corals of 4.4 ± 0.2%. Given the widespread distribution of plastic debris on coral reefs and continued pollution rates, it was estimated that if everything continues as it is, over 15.7 billion plastic items will be wrapped up in coral reefs by 2025, meaning huge increases in mortality rates, since three-quarters of the plastic-associated diseases eventually lead to coral death.
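The “factor of 20” follows directly from the two reported disease rates, as a quick check (our arithmetic, not the study’s) shows:

```python
# Quick check of the reported jump in disease likelihood.
baseline = 4.4       # % of corals diseased with no plastic contact
with_plastic = 89.1  # % diseased when in contact with plastic

print(f"relative increase: {with_plastic / baseline:.1f}x")  # ~20.3x
```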

Plastic bag entangled in coral

With over 275 million people worldwide relying on coral reefs as food sources, this plastic waste is a major problem. The hard part is that plastics aren’t uniformly distributed. More waste is found closer to poorer regions of the world, which tend to recycle less and pollute more. These developing countries also tend to rely more heavily on the ocean, particularly coral reefs, as their main food source, so this phenomenon will affect them to a higher degree. For example, the study found that Indonesia had the highest amount of plastic debris in its surrounding waters, and thus also the highest percentage of coral reefs in contact with plastics, compared to everywhere else studied.

The overall impact? Not good. It has been stressed over and over again and will continue to be: waste management is a must. Decreasing the amount of debris that enters the ocean is imperative for marine life as well as our own. It is vital that we reduce the amount of plastic on our coral reefs and thus the diseases associated with it.

Source

Lamb, J.B., et al. 2018. Plastic waste associated with disease on coral reefs. Science 359(6374): 460-462.

Asthmatics Breathe a Sigh… of Exasperation


Glucocorticoids are a class of steroids that asthma patients the world over take by way of an inhaler. They are one of several drug classes that can reliably save a person’s life in the event of an asthmatic exacerbation. But as many know all too well, severe asthmatic episodes can still persist despite these drugs. Conventional wisdom among parents and doctors has been to increase the dosage of administered glucocorticoids in patients with more serious symptoms.

But a new study of asthmatic children across the United States by the National Heart, Lung, and Blood Institute finds no significant difference in the severity of asthma symptoms or frequency of episodes between standard and quintupled doses of glucocorticoids. Further, the study suggests high doses may stunt growth in children, revealing the possibility of more serious problems in childhood use. The article was published in The New England Journal of Medicine.

The year-long study enrolled over 250 asthma sufferers aged 5 to 11 and divided them into two groups: continued low-dose use and quintupled use. The experiment was double-blinded, carried out such that neither the patients, their parents, nor the doctors knew which treatment was being received.

The results suggest no significant difference in the occurrence or intensity of asthmatic exacerbations regardless of how much glucocorticoid is administered. The study also collected data on the children’s height and weight during the period and found that the patients taking the quintuple dose displayed slightly shorter stature when controlling for various differences. This may be an indicator of serious developmental problems caused by continuous heavy use of the drug, and should be a concern for anyone taking or prescribing it.
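“No significant difference” means a statistical test on the two groups’ outcomes could not rule out chance. The sketch below shows what such a comparison looks like on a 2×2 table of exacerbation counts; the counts are invented for illustration and are not the trial’s data.

```python
# Sketch: comparing exacerbation counts between two dose groups.
# Counts below are invented for illustration, not the trial's data.
from scipy.stats import fisher_exact

# rows: [had an exacerbation, did not], columns: [low dose, high dose]
table = [[48, 43],
         [79, 84]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# A large p-value means the difference is consistent with chance.
```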

This study highlights a need for more effective asthma medications and a better understanding of how existing medications work. Glucocorticoids are self-administered in the form of an inhaler, often on an as-needed basis. Add to this the fact that children are the most heavily afflicted by asthma, and it becomes extremely easy for a patient to take too much of their prescription. Granted, a quintuple dose is a far cry from the average, but when a double or triple dose does not lessen the symptoms, one can tend to take too much. In an age of ultra-targeted and molecular approaches to medicine, more precise asthma treatments should be available, given the low ceiling of effectiveness of glucocorticoids, let alone their possible ill effects.


Jackson DJ, Bacharier LB, Mauger DT, Boehmer S, Beigelman A, et al. (2018) Quintupling Inhaled Glucocorticoids to Prevent Childhood Asthma Exacerbations. New England Journal of Medicine. DOI: 10.1056/NEJMoa1710988

How to Feed 7.6 Billion People

Can our current farming systems keep up with a growing population while also protecting the land we eat from? It’s a tough question, but a study published earlier this year suggests it is possible. The research focuses on farms in the Northern Plains of the United States, specifically those under a conventional corn production system versus those under a regenerative agriculture system. Farms in the study using regenerative practices never tilled their fields, did not use insecticides, grazed their livestock on the cropland, and grew a mix of cover crop species. The conventional farms in the study practiced tillage, used insecticides, and left the soil bare after harvest.

Researchers collected soil cores from each farm to determine the amount of organic matter within. This, along with pest abundance, yield, and profit, was assessed; yield in this case was measured as gross revenue. The study found that regenerative systems had 29% lower grain production but 78% higher profits, nearly twice those of conventional agriculture. In addition, there were ten times as many pests on fields treated with insecticides as on those that were not.

All of this is because regenerative agriculture allows nature to do its job. Spraying insecticides on a field is not only harmful to the environment but often ineffective: insects can adapt to new chemicals and will persist even more when their natural predators are eliminated. Biodiversity within cropland can reduce the number of pests and their persistence. Regenerative agriculture raises organic matter in the soil, which in turn allows for increased soil infiltration, diverse soil life, less fertilization, and lower input costs. Also, systems that integrate livestock and cropland can see higher profits from the livestock, which feed on the cover crops, reducing fodder inputs and allowing more of the corn harvest to feed humans. Conventional farming sees smaller profits because of the high seed, fertilizer, and insecticide investments.
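How can a farm gross less yet profit more? The answer lies on the cost side, as a toy calculation makes clear. All dollar figures below are invented; only the 29% yield reduction comes from the study.

```python
# Toy example: lower yield can still mean higher profit if costs fall.
# All dollar figures are invented; only the 29% reduction is from the study.
conv_revenue, conv_costs = 1000, 900       # $/ha: high yield, high inputs
regen_revenue = conv_revenue * (1 - 0.29)  # 29% lower grain production
regen_costs = 520                          # no insecticide, less fertilizer

conv_profit = conv_revenue - conv_costs    # $100/ha
regen_profit = regen_revenue - regen_costs # $190/ha

# Prints ~1.9x, roughly matching the reported near-doubling of profit.
print(f"regenerative profit is {regen_profit / conv_profit:.1f}x conventional")
```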

Regenerative agriculture has become a sustainable alternative to traditional farming because it provides ecosystem services while producing higher profits than the more input-intensive conventional system. Like many recent studies, this one favors the unconventional farming method, showing increased profitability and farm health for those using regenerative agriculture. The abundance of new research in agriculture shows that we can feed the world if we simply change how we grow our food. There needs to be a shift in farming values that prioritizes the land, resources, and the quality of food over high yield numbers.

Source: LaCanne, C.E., and Lundgren, J.G. 2018. Regenerative agriculture: merging farming and natural resource conservation profitably. PeerJ 6:e4428.

Photo source: Flickr

What Would You Call a Reliable Tuberculosis Test?

TB testing machine

Latent tuberculosis infection (LTI) is when an individual’s immune system shows a response to the bacteria that cause tuberculosis, but the person has no clinical signs and is not infectious to others. If untreated, LTI can progress into active tuberculosis. The tuberculin skin test (TST) and a blood test, the interferon gamma release assay (IGRA), are the methods of detecting LTI. However, the results of these tests show vast differences, especially in patients with compromised immunity who have received the BCG vaccine against TB.

The TST is conducted by putting a small amount of TB protein under the top skin layer of the inner forearm. Individuals with LTI show a firm red bump ≥ 5 mm when inspected 48-72 hours later. Researchers at Ankara University School of Medicine in Turkey conducted a study from 2013 to 2017 to check whether higher cut-off values for the TST (the diameter of the red bump) would increase the specificity of the test and the agreement between the two tests. The study was conducted on three groups: all participants, solid organ transplantation (SOT) candidates, and patients scheduled for anti-TNFα treatment (for people with immunosuppressive conditions like rheumatoid arthritis). All the subjects were BCG vaccinated; patients with a history of active TB were excluded.

To test whether a change in the cut-off value for the TST would give better results, both the TST and the IGRA were conducted for all three groups at three different cut-off values. The diagnostic agreement was very poor at 5 mm and 15 mm but increased slightly at 10 mm in the anti-TNFα group. Overall, the results showed that although false-positive results decreased with higher cut-off values, false-negative results increased.
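The trade-off the researchers observed is generic to any threshold-based test: raising the cut-off eliminates false positives but creates false negatives. The sketch below simulates bump diameters for infected and uninfected (but BCG-vaccinated) individuals to show the effect; the distributions are purely illustrative, not the study’s data.

```python
# Sketch of the cut-off trade-off for a threshold-based test.
# Bump diameters are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical induration diameters (mm): infected vs uninfected
# (BCG vaccination pushes uninfected reactions upward).
infected = rng.normal(14, 4, 500)
uninfected = rng.normal(8, 4, 500)

for cutoff in (5, 10, 15):
    sensitivity = (infected >= cutoff).mean()   # true positives caught
    specificity = (uninfected < cutoff).mean()  # false positives avoided
    print(f"cutoff {cutoff:>2} mm: sens={sensitivity:.2f}, spec={specificity:.2f}")
```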

The TST is known to give false-positive results in BCG-vaccinated individuals. Studies have shown that the type and timing of vaccination affect the TST response. In Turkey, BCG vaccines are given once during infancy and once at school age. Vaccination in infancy is believed to stop affecting the TST after 10-15 years; repeated vaccinations after infancy, however, are known to have a longer effect. Most of the disagreements between the two tests were positive on the skin test (TST) and negative on the blood test (IGRA), and the researchers were of the opinion that vaccinations after infancy might have been the cause of this discrepancy. It was concluded that higher cut-off values for the TST were not very effective in decreasing the variation between the two tests.

Analysis of the direct costs and probable consequences of the tests has shown a greater percentage of TST-screened patients receiving preventive treatment, due to the high possibility of false-positive results. This also means wasted resources on unnecessary treatment. Moreover, there is the potential risk that numerous unnecessary treatments could foster resistance to TB drugs. The rates are lower for the IGRA alone and for two-step screening strategies that include both the TST and the IGRA. The American Thoracic Society (ATS) has stated that there is a lack of sufficient data to recommend any of these three methods, but specific guidelines for immunocompromised patients recommend IGRAs.

More conclusive results are needed to assess the reliability of the tests for LTI, especially in light of the inconvenience that TSTs cause for BCG-vaccinated individuals.

Reference: Erol, S., Ciftci, F. A., Ciledag, A., Kaya, A., Kumbasar, O. O. 2018. Do higher cut-off values for tuberculin skin test increase the specificity and diagnostic agreement with interferon gamma release assays in immunocompromised Bacillus Calmette-Guerin vaccinated patients? Advances in Medical Sciences 63(2): 237-241
