Wednesday, February 26, 2014 - 11:56
In our series on why $1000 genomes cost $2000, I raised the issue that the $1000 genome is a value based on simplistic calculations that do not account for the costs of confirming the results. Next, I discussed how errors are a natural consequence of the many processing steps required to sequence DNA and why results need to be verified. In this and follow-on posts, I will discuss the four ways (oversampling, technical replicates, biological replicates, and cross-platform replicates) that results can be verified, as recommended by Robasky et al.

The game Telephone teaches us how a message changes as it is quietly passed from one person to another. In Telephone, mistakes in messages come from poor hearing, memory failures, and misunderstanding. By analogy, mistakes in DNA sequence results come from poor signals, artifacts introduced through many steps, and data processing. In Telephone terms, the errors we can identify and correct, through the four verification methods, can be expressed as:
- Oversampling: “please speak louder”
- Technical replicates: “please repeat that”
- Biological replicates: “please repeat that again”
- Cross-platform replicates: “get a second opinion”
[Figure: A visual representation of randomly distributed reads aligned to a reference sequence. Each rectangle is one read. A variant base is shown in blue. The bases highlighted in red are most likely errors. For additional technical discussion see: http://gatkforums.broadinstitute.org/gatk/discussion/2541/screenshot-info-snp-visible-in-igv-but-is-not-called-by-unifiedgenotyper. The full-size image can be viewed at http://postimg.org/image/pxdxoyl5d/full]

The basics of oversampling are simple, but how does it help verify results and reduce error? If each base call within a read has some chance of being an error, and errors occur randomly, then additional measurements ensure that most of the data carry the correct base. To illustrate, if we collect 10 reads covering a position and the random error rate is 10%, then, on average, nine of the ten base calls at that position will be correct.

If 10x coverage, or possibly even less, seems sufficient, why is 30x a common standard? The answer is that DNA sequencing errors do not occur uniformly over the length of a read. The ends of a read tend to have more errors (lower quality) and the middles of reads have fewer errors (higher quality). The fewest errors will occur if the genome is oversampled by the middle portions of reads, which requires that each read be obtained from a different place within the genome. Using the above example with 150-base reads, 10x oversampling would position each read every 15 bases if the reads were spaced evenly; after the first 10 reads we would have a uniform 10x depth across the entire genome.

That is, if we could evenly sample the genome every 15 bases. Once again, statistics and the laws of physics make sure this won't happen. In random sampling, some positions are read many times while others are read rarely or not at all. In 1988, Eric Lander and Michael Waterman developed models for mapping DNA based on random sampling. Briefly, if a 500 million base pair genome is sequenced to 10x coverage, roughly 20,000 bases will be missed entirely.
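The two calculations above can be sketched with the standard textbook models. This is a back-of-envelope illustration, not code from the post: the function names are my own, the binomial sum models a majority-consensus call at one position, and the Poisson term G·e^(-c) is the Lander-Waterman estimate of bases receiving zero coverage.

```python
import math

def majority_correct_prob(depth, error_rate):
    """Probability that more than half of `depth` independent reads
    carry the correct base at a position, assuming errors occur
    randomly and independently (binomial model)."""
    p_correct = 1.0 - error_rate
    return sum(
        math.comb(depth, k) * p_correct**k * error_rate**(depth - k)
        for k in range(depth // 2 + 1, depth + 1)
    )

def expected_missed_bases(genome_size, coverage):
    """Lander-Waterman / Poisson estimate of bases with zero
    coverage: G * e^(-c)."""
    return genome_size * math.exp(-coverage)

# 10 reads, 10% random error: a majority vote is almost always right
print(majority_correct_prob(10, 0.10))      # ~0.998

# 500 Mbp genome at 10x: ~22,700 bases missed, the same order of
# magnitude as the ~20,000 figure cited above
print(round(expected_missed_bases(500e6, 10)))
```

Note that under this idealized model, 30x coverage of a 3 Gbp genome would leave essentially no position uncovered; the reason real projects still need 30x or more is the non-random component: quality falloff at read ends, fragmentation and PCR biases, and uneven sampling.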
For a 3 Gbp genome, even more bases will be missed. And, when regions of lower coverage are factored in (to achieve uniform oversampling), a greater total depth of coverage is needed. Of course, these numbers are based on mathematical models. When biases related to DNA fragmentation, PCR, or cloning are considered, coverage needs to be increased further. Thus 30x coverage is an accepted standard, though not one that everyone agrees on, for the reasons stated and other nuances related to DNA structure.

In summary, oversampling is good for reducing errors that occur in a random fashion. However, systematic errors that result from local base compositions or instrument-created artifacts will persist even when data are oversampled. Thus, other types of verification are needed.

References and further reading

Robasky, K., Lewis, N. E., & Church, G. M. (2014). The role of replicates for error mitigation in next-generation sequencing. Nature Reviews Genetics. DOI: 10.1038/nrg3655

Genome Sequencing Theory: https://en.wikipedia.org/wiki/DNA_sequencing_theory - Provides an overview of random sequencing, the Lander-Waterman model, and other references.