
Extremely misleading & NEGLIGENT news today by MIT’s Technology Review, the journal Nature, and Ambry Genetics regarding 23andme and direct testing

Discussion in 'Laboratory and genetic testing, medical imaging' started by BeautifulDay, Mar 29, 2018.

  1. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
    Please forgive grammar issues and any spots where I don’t complete a thought or I repeat myself. I’m tired, but I need to get this off my mind.

    Today, I was reading my google generated news stream on genetics when I ran across an article on MIT’s Technology Review website. It was so outright misleading, that my mind has been ruminating on it ever since. Here is my vent.

In all honesty, I have a love-hate relationship with 23andme. Theirs was the first DNA test I did before I got deeply into Whole Exome Sequencing (WES) and Whole Genome Sequencing (WGS). 23andme relies on genotyping, which is not nearly as good or broad as WES or WGS. 23andme clearly points out to consumers that there is an error ratio, and it tells customers to retest anything that comes up positive. Someone who is ill and on a budget, and who understands that 23andme is not perfect and that any results should be retested, is going in with their eyes wide open.

I’ve had a few positive results come up on 23andme (before I did WGS). I posted somewhere about my experience: when there are a large number of spots tested (think 100,000+) and an error ratio of maybe 2%, several spots will come up falsely as variants when they shouldn’t (false positives). Some of those false positives will land on locations where variants are rare. Therefore, for people like me who do computer runs filtering for pathogenic mutations with low population frequencies, 23andme’s error ratio will appear much higher than it really is, because severely deleterious pathogenic mutations are rare in the population (otherwise the human population would be wiped out). This is why everyone should re-test any deleterious, low-population-frequency mutations that they are worried about.
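To illustrate the effect with a quick sketch (all figures here are invented round numbers, not 23andme's actual specifications): with a small per-call error rate, a large share of the calls that land in the "rare variant" bucket are actually miscalled common alleles.

```python
# Invented round numbers for illustration only -- not 23andme's actual specs.
sites = 100_000        # genotyped positions ("think 100,000+")
error_rate = 0.02      # assumed per-call error ratio
rare_freq = 0.01       # the rare allele is carried by <1% of people at these sites

# For one person, at sites like these:
true_rare_calls = sites * rare_freq                      # ~1,000 genuinely rare variants
false_rare_calls = sites * (1 - rare_freq) * error_rate  # ~1,980 miscalled common alleles

share_false = false_rare_calls / (true_rare_calls + false_rare_calls)
print(f"{share_false:.0%} of 'rare variant' calls are false positives")  # 66%
```

So even though only 2% of all calls are wrong, roughly two thirds of the calls that survive a rare-variant filter are errors in this toy setup.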

However, look carefully at the link address of today’s article by MIT’s Technology Review: https://www.technologyreview.com/th...sults-from-consumer-dna-tests-might-be-bogus/
Yes, you read that right. “Up to 40% of results from consumer DNA tests might be bogus.”

There is so much there that is wrong. First, the original research article that appeared in the journal Nature was written and authored by Ambry Genetics, a competitor to 23andme in the sense that they are a clinical, medical-grade lab rather than direct-to-consumer. They also make a lot of money redoing genetic tests done by 23andme and other direct-to-consumer companies. Since they have their hand in the pot (they make money off the very issue this article raises), Nature and MIT should have been very careful in reviewing the information for misleading or wrong claims.

    I agree in part with their “Conclusion” that “Our results demonstrate the importance of confirming DTC (Direct to Consumer) raw data variants in a clinical laboratory that is well versed in both complex variant detection and classification.”

Not all raw data variants are pathogenic or deleterious, therefore not all variants need to be confirmed with a clinical lab - that's a spot where they should have chosen their words more carefully. 23andme themselves tell people to retest their results in clinical labs. Where the authors move from poorly chosen words to outright negligence (and where I believe 23andme has potential legal recourse) is with this line.

    “Results: Our analyses indicated that 40% of variants in a variety of genes reported in DTC raw data were false positives.”

Of the three deleterious results from 23andme that I initially had retested with medical labs, one was confirmed to be a false positive. That’s a 33% error rate. However, when running my 23andme results against my WGS done at a clinical lab, the error rate is much, much lower when looking at all variants rather than just filtering for rare deleterious pathogenic mutations. As I mentioned above, the error rate looks much higher when pulling out variants that are deleterious and rare (low in the population).

In addition, the two variants that 23andme called correctly had rs#s (SNP identifiers). The third one was hidden behind a 23andme internal number (i#). For internal numbers with positions, 23andme tells consumers that those especially cannot be relied upon.

There are many things that I would like to see 23andme change. However, they are at a good price point for budget-conscious individuals, and they clearly tell everyone that they have an error rate.

    So then what did MIT pick up when reading this biased research article and then report? The title on the MIT site is “Up to 40 percent of DNA results from consumer genetic tests might be bogus.” It is accompanied by a picture of 23andMe’s test kit.


No, it is not up to 40% of DNA results from consumer genetic tests that might be bogus; it’s much lower than that. It’s when one sorts for rare pathogenic variants that the errors are picked up in higher numbers. That doesn’t mean that if someone is looking at all their results from 23andme on the BRCA1 gene (which can carry many breast-cancer-causing mutations), 40% of those results are potentially wrong.

    23andme tested 24 positions for me on the BRCA1 gene. No issues came up. The negligent article would make someone not familiar with the issue think that 40% of my results are wrong. So then that would mean that there is a good chance that some of the breast cancer causing mutations that 23andme said I don’t have, I actually may have. Nope. WGS found that I have 124 variants on BRCA1, but not one of them is pathogenic or deleterious or even potentially troublesome (ex. missense with low population frequency). For the 24 positions tested by 23andme, the WGS test came up with the same results. Therefore 0% error ratio for me on BRCA1 for 23andme.

    Comparing my WGS with 23andme, the error rate is very low overall. It only looks high when looking at rare deleterious pathogenic mutations.

    Do you know what’s going to happen next? The news industry is going to pick up on this very false headline and run with it.

For all I know, I could be in that Ambry research of 49 patients. One of the 3 positive 23andme results that my doctor initially had retested by a lab was done by Ambry Genetics, and they found that variant was false. So yes, I could be in that sample. Yet while 33% of my initial results that were sent out to be confirmed by outside labs were false, that does not mean 23andme has a 33% error ratio. It all comes down to filtering the variants down to the unusual and troublesome ones.

I hope 23andme fights off this attack. The DTC tests need the media to educate people about the pros and cons of the tests and what to expect, but to blatantly say there is a 40% error ratio is completely false and is just being used to scare people. Ambry Genetics should be ashamed of this kind of self-promotion and of spreading false information.

    They could have presented the subject in a truthful manner which would still back up the position to retest all concerning mutations -- but they chose not to. And then the journal Nature and MIT’s Technology Review decided to give this error wings. Shame on them for not doing their research.
    Last edited: Mar 29, 2018
    Inara, ukxmrv, Angel27 and 8 others like this.
  2. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
    Allele likes this.
  3. Hip

    Hip Senior Member (Voting Rights)

    So you found by comparing your 23andme results to your whole genome sequencing result that there was only around a 2% error rate in 23andme; but in the Nature study, when they looked at important clinically actionable genes (such as the BRCA cancer risk genes) the error rate was 40%.

    Why do you think the error rate is so much higher in these particular clinically actionable genes?
    Last edited: Mar 29, 2018
    Inara, ukxmrv and BeautifulDay like this.
  4. sea

    sea Senior Member (Voting Rights)

    NSW, Australia
    It has been clear for many years that general consumers do not understand the way 23andme has presented its information. So many times I have read others saying “23andme is wrong because it told me I am at risk for xxxx and I don’t have it, or I’m not at risk for xxx and I do have it.” This latest info will muddy the waters even more.
I’m not concerned with the small error rate of false positives; it’s easy enough to get them checked out. I do wonder, though, about false negatives that might lead people to dismiss something important.
    BeautifulDay likes this.
  5. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
    Most people looking for answers in their genes only get as far as looking for dominant pathogenic variants that cause serious health issues that match their symptoms. That’s what I did first time around looking at my 23andme data.

    It comes back to the way we filter through our data for disease causing variants the first time around.
    23andme’s v4 chip included 13,537 SNPs for disease or trait positions.
    Here is what most of us then do on our first run through our raw data trying to figure out our illness:
    - we throw out the mutations that are only for traits (e.g. red hair),
    - we throw out the mutations that merely increase the chance of getting a disease (e.g. a 10% increased chance),
    - we throw out the mutations that don’t fit our symptoms (e.g. a variant for goiter when that is not your issue),
    - we throw out the mutations that are not severely pathogenic or deleterious,
    - we throw out the common variants and search instead for those that are rare (<1% of the population), and
    - we throw out the mutations with recessive inheritance (they require 2 bad copies when we only have one).
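For what it's worth, that first-pass filtering can be sketched in a few lines of code. Every record and field name below is invented for illustration; a real run would work from annotated raw data in the download file.

```python
# Toy annotated variants; all records and field names here are made up.
variants = [
    {"id": "rs0000001", "kind": "trait",   "severe": True, "fits_symptoms": True,
     "pop_freq": 0.001, "inheritance": "dominant",  "bad_copies": 1},
    {"id": "rs0000002", "kind": "disease", "severe": True, "fits_symptoms": True,
     "pop_freq": 0.005, "inheritance": "dominant",  "bad_copies": 1},
    {"id": "rs0000003", "kind": "disease", "severe": True, "fits_symptoms": True,
     "pop_freq": 0.30,  "inheritance": "dominant",  "bad_copies": 1},
    {"id": "rs0000004", "kind": "disease", "severe": True, "fits_symptoms": True,
     "pop_freq": 0.002, "inheritance": "recessive", "bad_copies": 1},
]

def first_pass(v):
    """Apply the throw-out rules from the list above."""
    return (v["kind"] == "disease"      # drop trait-only positions (e.g. red hair)
            and v["severe"]             # drop mild risk modifiers
            and v["fits_symptoms"]      # drop variants unrelated to our symptoms
            and v["pop_freq"] < 0.01    # keep only rare variants (<1% of population)
            and not (v["inheritance"] == "recessive" and v["bad_copies"] < 2))

keepers = [v["id"] for v in variants if first_pass(v)]
print(keepers)  # ['rs0000002']
```

Only the rare, dominant, symptom-matching disease variant survives; the trait hit, the common variant, and the single-copy recessive variant are all filtered out.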

Let’s take 23andme’s 13,537 SNPs for disease or trait positions. If 23andme has a 2% error ratio, then there should be about 270 variants that were miscalled. Of those 270 miscalls, the filtering we did above should bring that number down to about 1 or 2 severe miscalls per person who tests with 23andme. Then an individual might have 2 or 3 more mutations meeting the above criteria that 23andme called correctly and that a geneticist might also be interested in confirming.
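As a sanity check on that arithmetic (the 13,537 figure is from the post; the 2% error ratio is the same assumption as above):

```python
disease_trait_snps = 13_537   # 23andme v4 disease/trait positions, per the post
assumed_error_rate = 0.02     # hypothetical across-the-board error ratio

expected_miscalls = disease_trait_snps * assumed_error_rate
print(round(expected_miscalls))  # 271, i.e. "about 270" miscalled positions
```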

In my initial round (pre-WGS and pre-really diving into genetics), I had 3 variants among the 600,000 positions called by 23andme that my docs wanted re-tested by a medical-grade lab. Of those 3 tested, 2 were correct. For me, 23andme did just what I needed it to do -- which was, on a tight budget, to look through some of my DNA and provide me the raw data that I could then analyze. Once I had the ones I was interested in, I brought them to my doctors to have them retested and confirmed or eliminated.

    However, for Ambry Genetics to call this a 40% error ratio is completely false. For the news media to run with the 40% error ratio is also extremely misleading and negligent.
  6. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
    I agree that the error rate on 23andme could provide a false negative on a disease causing variant. The only way to overcome this is through WES or WGS at a high quality lab with the right depth to really catch those issues.

    Even Ambry Genetics has an error rate. In my test documentation from Ambry Genetics, Ambry itself does not claim to be perfect. Errors can occur anywhere in the process.

    The biggest issue is that the scientists have yet to discover all the mutations that are deleterious or combinations that are deleterious. It's one of the things we have to realize being on the leading edge of looking into our own DNA.

I am concerned that many companies don't want people to have access to their own DNA and believe we are not smart enough to understand it. There have been many past fights trying to make it illegal for us to have this data. If a Senator believed that we shouldn't be able to self-test our DNA, then he'd probably glom onto the false 40% error ratio as a reason for making self-testing illegal.
    Inara and sea like this.
  7. Hip

    Hip Senior Member (Voting Rights)

    @BeautifulDay, I am still not clear on why 23andme can have a 40% error rate on some SNPs, but a 2% rate on most others.
  8. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
There are some SNPs that 23andme miscalls for many people (some might have a 10% miscall rate, others 90%). That's not the 40% being reported by Ambry Genetics.

Across all locations tested on their chip (600,000 on chip v4), I believe 23andme's self-reported error ratio is 2% (but I need to look that up). Also, I'd like to see an audit of 23andme to verify that error ratio.

Ambry Genetics came up with 40% by using just the variants they were asked to confirm from 23andme and other self-tests, and they found that 40% of those were wrong. That's not significantly different from the 33% I found personally.

    The issue is they are reporting in their conclusion that this 40% error ratio applies across the board to all 23andme results.

    They reported that: "Results: Our analyses indicated that 40% of variants in a variety of genes reported in DTC raw data were false positives.”

    Then you have news articles reporting false info that "Up to 40 percent of DNA results from consumer genetic tests might be bogus.”

If 40% of variants in a gene were regularly wrong, we would have heard an uproar.

The reason the low across-the-board error ratio becomes a higher ratio on Ambry's end is that they are only being brought the rare deleterious mutations that stand out. No doctor is going to retest common, non-deleterious mutations.

Most variants are very common (for example, some variants occur in 40% of the population). Variants that common aren't going to be fatally deleterious; otherwise, there'd be no more humans.

    So when 23andme makes an error on a common variant, it's not going to be retested at Ambry. Those are no big deal.

But when we filter for very rare mutations, we are going to pick up a higher percentage of errors: when a location is the regular allele in 99% of people, miscalls toward the rare variant will show up in disproportionately high numbers.
    Amw66 likes this.
  9. Hip

    Hip Senior Member (Voting Rights)

    So you are saying that there is a higher error rate on rare mutations, compared to common mutations? Would you know the reason for that?
  10. BeautifulDay

    BeautifulDay Established Member (Voting Rights)

    United States
    It comes down to probabilities and statistics. Boy I disliked those courses. :banghead: Let me give you an example problem. Statistics still hurts my brain.

Most really deleterious, pathogenic, health-related variants are going to occur in less than 1% of the population. At those same SNPs, most people (>99% of the population) will have the normal allele.

Out of the 3 billion base pairs in humans, 99.9% are exactly the same from one person to another. The differences are contained in that 0.1%.

I'm going to use a very simple example. Real life is much more complex than this, but I believe it helps.

    In this example, say a chip tests 100,000 SNPs, and at every one of those SNPs we are given that the normal allele appears in >99% of the population and the variant (mutation) appears in <1% of the population.

    This is the group we search when looking for rare mutations, because that is where more of the severely deleterious health-related mutations will be found. A normal allele that is miscalled will show up as a rare mutation; a rare allele that is miscalled will show up as the common allele.

    For simplicity's sake, there are no SNPs on this chip where the variant occurs in, say, 98% or 60% of the population; every SNP is one where the common allele occurs in >99% of the population and the variant in <1%.

    In 100,000 test subjects, there should be:
    10,000,000,000 total SNPs tested (100,000 test subjects x 100,000 SNPs tested for each subject)
    Of those 10 billion SNPs, using the known 99% to 1% of the population statistics for these SNPs, 99% of them are going to be for the common allele and 1% will be the rare allele.

    Under this testing method,
    9,900,000,000 tested alleles will be common and
    100,000,000 alleles will be rare

    The most deleterious health mutations are much more likely to be found in the 100,000,000 rare alleles on these SNPs rather than the 9.9 billion common alleles. Severely deleterious health variants occurring in high populations would result in the human race being wiped out. In nature, these deadly (or severely deleterious) variants don't last long (low procreation when ill or dead so low ability to pass on) and therefore occur in a small percent of the population.

    Let's presume a 2% error ratio for calling all SNPs on this chip.

9,900,000,000 x .02 = 198,000,000 common alleles are going to be miscalled [called incorrectly as the rare allele (<1% of the population)]. See, they flip from very common (>99% of the population) to very rare (<1% of the population).

    100,000,000 x .02 = 2,000,000 rare alleles are going to be miscalled [called incorrectly as the very common allele (>99% of the population)]. These flip from very rare (<1% of the population) to very common (>99% of the population).

    The 9,900,000,000 common alleles will have 198,000,000 errors, resulting in 9,702,000,000 correctly called common alleles (9.9b - .198b).

    The 100,000,000 rare alleles will have 2,000,000 errors, resulting in 98,000,000 correctly called rare alleles (100m - 2m).

    The 9,702,000,000 common alleles will now have the 2,000,000 rare-to-common errors added to them. So now there are 9,704,000,000 common alleles coming up in the test results.

    The 98,000,000 rare alleles will now have the 198,000,000 common-to-rare errors added to them. So now there are 296,000,000 rare alleles coming up in the test results.

    If you test the population of 9,704,000,000 common alleles that were found, you'd find 2,000,000 errors or a .0206% error rate. That is much below the 2% error rate.

    If you test the population of 296,000,000 rare alleles that were found, you'd find 198,000,000 errors or a 66.89% error rate.

Therefore, statistically, most of the errors we see are common alleles that were turned into rare alleles by the test.

    Therefore one could have a 2% error ratio across the board and turn it into a much higher error ratio when just looking at rare mutations.
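The worked example above can be double-checked with a short script; this just re-runs the same hypothetical numbers (100,000 subjects, 100,000 SNPs, a flat 2% error rate) in integer arithmetic:

```python
subjects = 100_000
snps_per_subject = 100_000
total_calls = subjects * snps_per_subject            # 10 billion calls

# Population split at these SNPs: common allele >99%, rare allele <1%.
true_rare = total_calls // 100                       # 100,000,000 rare alleles
true_common = total_calls - true_rare                # 9,900,000,000 common alleles

# Assume a flat 2% chance that any single call flips.
common_to_rare = true_common * 2 // 100              # 198,000,000 miscalls
rare_to_common = true_rare * 2 // 100                # 2,000,000 miscalls

called_rare = (true_rare - rare_to_common) + common_to_rare      # 296,000,000
called_common = (true_common - common_to_rare) + rare_to_common  # 9,704,000,000

print(round(common_to_rare / called_rare * 100, 2))    # 66.89 (% of rare calls in error)
print(round(rare_to_common / called_common * 100, 4))  # 0.0206 (% of common calls in error)
```

The two printed percentages match the 66.89% and 0.0206% figures above: a uniform 2% error rate concentrates almost all visible errors in the rare-allele bucket.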

Since Ambry Genetics is mostly being asked by doctors to confirm rare deleterious mutations that popped up in 23andme (and other DTC tests), the error rate that Ambry sees is of course going to appear much higher than the across-the-board rate actually is.

    The problem is Ambry Genetics has spun the error ratio they see as something that it is not. And now the media is now taking it and running with it -- as represented by this health headline on Healthcare Analytics News:
    "Study Finds 40% False Positives in Direct-to-Consumer Genetic Tests".

This extremely misleading headline would lead anyone to believe 23andme and other companies have a 40% error ratio across the board.

    The headline above is on Healthcare Analytics News. Analytics my booty. Do they have a statistician on staff?

There is going to be hell to pay on our end as this false story spreads through the media. I'm all for re-testing any findings; in fact, I advocate for it. However, these headlines throw 23andme and DTC testing under the bus, risking that it gets made illegal or dismissed by doctors, rather than treated as the important tool that it is.
    Last edited: Mar 30, 2018
    Inara, merylg and Hip like this.
  11. Hip

    Hip Senior Member (Voting Rights)

    Thanks for your detailed explanation @BeautifulDay. I am still not fully understanding it, but brain fog and maths don't go well together.
    merylg and BeautifulDay like this.
