With short target sequences, PCR starts maximising positives at around 30 cycles. This has been published and well known for almost 20 years. A gold standard would require a full, unique, sufficiently long sequence, which is not usually possible to produce, let alone to scale. Without a gold standard the test has 4 possible logical results, and only subjective statistical methods can effectively deal with such scenarios under uncertainty. It is impossible for the populace to understand that the result is not binary, and not possible to deploy it at the individual level. FDA guidelines suggest overcoming this by limiting the run to 12 cycles; for short sequences the fluorescence is usually quasi-mute at that threshold. The FDA relaxes that requirement to 18 cycles plus physician confirmation of symptoms. That's what Pfizer et al. should have been doing in the polemic trials. But then again...
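To put rough numbers on those cycle counts, here is a minimal sketch of the textbook doubling model. The single starting template and perfect per-cycle efficiency are assumptions of the sketch, not of any particular assay or kit; it only shows the scale difference between 12, 18 and 30 cycles.

```python
# Idealised PCR amplification: each cycle at most doubles the number of target
# copies. Back-of-the-envelope sketch only; the single starting template and
# the perfect efficiency are illustrative assumptions, not a description of
# any specific assay or reagent kit.

def copies_after(cycles: int, initial_copies: int = 1, efficiency: float = 1.0) -> float:
    """Copy count after `cycles` rounds with (1 + efficiency)-fold growth per cycle."""
    return initial_copies * (1.0 + efficiency) ** cycles

if __name__ == "__main__":
    for ct in (12, 18, 30):
        print(f"{ct:2d} cycles -> ~{copies_after(ct):.2e} copies from one template")
    # prints roughly 4.1e+03, 2.6e+05 and 1.1e+09 respectively
```

Under that idealisation a run stopped at 12 cycles has turned one template into only a few thousand copies, versus about a billion at 30, which is at least consistent with the fluorescence being quasi-mute at the lower threshold.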
Given that information, why was the guideline set at 30+? The probability of a false positive seems tied to anything over 12 cycles for the sequence match used for production. Or am I not understanding? I assume there is, somewhere, a relationship connecting sequence length, cycle count and false-positive probability. Is it related to the specific reagent kit? Is it published?
Don't know, but I suspect the obvious reasons. I call it the pandemic threshold. The FDA guidelines for trials are obviously based on some statistical analysis and results. My take on this is simply mathematical: after a given number of cycles, the increased probability of a false positive or negative leads to super-exponential diffusion of the sample variance, so the 2nd moment is no longer bounded and the process cannot be described as ergodic. Basically you can no longer compute a measure of uncertainty for the system. This assumes cycles as the measure of time and fluorescence as the outcome.

Now, the specific outcome of course depends on the specific diffusion process. I briefly described a common diffusion outcome in a 2d Fokker-Planck sense. However, if you really assume all 4 logical outcomes, the first result is that you can never really solve the Fokker-Planck equation for such a process. Likely... Anyway, the analysis equates to quantum physics: the solution would be a wave, and the more cycles you add, the probability of the wave comprising both outcomes is always equal to one. Or, as Mullis put it, the more cycles you add, the more it looks like everything could be contained inside the initial small sample. Dealing with this is possible using HMM Monte Carlo sampling to reduce to a 2d system again. All of this is advanced math and computing. The 12-cycle threshold suggests that such an analysis exists. The 18-cycle double-confirmation method bears the name of its author and, as far as I know, is widely accepted. Anything beyond is highly speculative.

Think of your book as an initial letter soup. Start shuffling letters around and eventually the desired sequence starts popping up, and the shorter it is, the more likely it is to pop up before the next shuffle. The ignorance rests on the belief that the sample size does not grow. It does, combinatorics-wise. The alphabet is bounded, so all of this can most likely be computed. You should think of the probability that a combination exists given all possible shuffles: sample each one randomly, and at a given number of shuffles the probability that said sequence exists starts growing exponentially.

I have several posters on the topic of PCR. See this one with the highlights of two papers from 15 years ago and check the PCR standards used. They are based on patents introduced more than 20 years ago by the same fellows. To my knowledge these test procedures were supposedly recalibrated and used in this pandemic. https://gab.com/MichelNeyXVII/posts/107924436659793818
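The letter-soup point can be made concrete with a toy Monte Carlo sketch. Nothing here refers to any real assay; the 4-letter alphabet, motif lengths and pool lengths are arbitrary assumptions chosen only to illustrate how the chance of a spurious match behaves.

```python
# Toy Monte Carlo version of the "letter soup" analogy: how often does a short
# random motif appear by chance in a long random string over a 4-letter
# alphabet? Purely illustrative of the combinatorial point; the motif and pool
# lengths are arbitrary assumptions, not parameters of any real test.
import random

ALPHABET = "ACGT"

def hit_probability(motif_len: int, pool_len: int, trials: int = 500, seed: int = 0) -> float:
    """Estimate P(a random motif of length motif_len occurs in a random string of length pool_len)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        motif = "".join(rng.choices(ALPHABET, k=motif_len))
        pool = "".join(rng.choices(ALPHABET, k=pool_len))
        if motif in pool:  # spurious hit: motif found somewhere in the random pool
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for motif_len in (4, 6, 8, 10):
        for pool_len in (2_000, 20_000):
            p = hit_probability(motif_len, pool_len)
            print(f"motif {motif_len:2d} letters, soup {pool_len:6d} letters: P(spurious hit) ~ {p:.3f}")
```

Short motifs in a big enough soup are found essentially every time, while longer, more specific targets keep the chance of a spurious hit low, which is one way to read the point about the effective sample growing combinatorics-wise.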
"Combinatorics wise" - Intersting way to think and likely reasonable. BTW your do good graphics. The Wagner dude is a real trip.
Thanks. Glad you got it. Still, even if the mathematics of these tests were sound for a given standard, we would still have to deal with measurement errors. I personally gave up following infection numbers when I understood that all the incentives were set to feed the fraud no matter what. The real data is in hospitalisations and diagnoses. Excess deaths, and specifically their peaks, are very telling. The latter is widely available; the former depends on the country. However, without widespread autopsy results, and with massive numbers of jabbed added to health systems plagued with nonsensical protocols, I believe it will be impossible to unwind this story completely now. Plus, it seems the charade will continue, to cover up the maximum of adverse health outcomes with the long-covid mantra, which will require more waves, etc...