No. It has been demonstrated on numerous occasions that IQ tests are ineffective at predicting long-term outcomes on any measure.
It’s also been demonstrated in numerous studies that IQ tests are not remotely objective: cultural and other variations produce massive gaps in the measured IQ of individuals from different social groups. Indeed, you can engineer IQ tests “without bias” that demonstrate group A is smarter than group B, for almost any groups A and B.
That’s before we get into the borderline statistics, weak methodology, and vanishingly small sample size: 600 infants, so somewhere in the realm of a 300/300 split, meaning they’re claiming a p=0.05 result for a 1.3-point drop in IQ. It would only take a small number of infants with learning difficulties to produce that, even if the testing methodology were sound (which it isn’t).
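To make that concrete, here’s a back-of-the-envelope sketch (the group size, population mean, and the 70-point score for a child with a learning difficulty are my assumptions, not numbers from the study):

```python
# How many low-scoring kids would it take to drag a 300-child
# group mean down by 1.3 IQ points?
# Assumptions: population mean 100, and a child with a learning
# difficulty scoring around 70 (two SDs below the mean).
group_size = 300
mean_iq = 100
low_score = 70

# Each such child pulls the group mean down by (100 - 70) / 300 = 0.1
shift_per_child = (mean_iq - low_score) / group_size

kids_needed = round(1.3 / shift_per_child)
print(kids_needed)  # → 13
```

So roughly a dozen kids out of 300 is all it takes, which is well within plausible variation given the prevalence figures in the addendum.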
Addendum:
Per [1], the prevalence of learning disabilities is 7.8% among 3–17 year olds, with 3.8% showing severe learning impairment. So among our 600 infants we’d expect around 47 kids (0.078 × 600) with learning difficulties of some kind. It should be easy to see that at these relatively tiny sample sizes (generously 150 boys in each of the F vs non-F groups) a minor imbalance in where those kids land would be more than sufficient to skew things.
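A quick Monte-Carlo sketch of that imbalance argument (prevalence from [1]; the 300/300 random split and the trial count are my assumptions):

```python
import random
import statistics

random.seed(0)
P_LD = 0.078   # prevalence of learning disabilities per [1]
N = 600        # total infants, split 300/300 at random

# Simulate random group assignment many times and count how unevenly
# the kids with learning difficulties land in the two groups.
imbalances = []
for _ in range(10_000):
    kids = [random.random() < P_LD for _ in range(N)]
    imbalances.append(abs(sum(kids[:300]) - sum(kids[300:])))

print(statistics.mean(imbalances))  # typically around 5 kids
```

A typical random split already leaves one group with several more affected kids than the other, and an unlucky (roughly two-sigma) split puts a dozen or more extra in one group, enough on its own to move that group’s mean by over a point.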
Further sample considerations: controlling for all those variables further shrinks the group sizes and so necessarily increases the noise (honestly not sure how they get to p=0.05).
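On the noise point: assuming the conventional IQ standard deviation of 15, the standard error of the difference between two 150-child group means is already larger than the reported effect (the 150-per-group size is my guess from the figures above, not the study’s actual cell counts):

```python
import math

sd_iq = 15          # IQ scores are standardized to SD 15
n_per_group = 150   # assumed subgroup size after controls

se_mean = sd_iq / math.sqrt(n_per_group)  # ≈ 1.22
se_diff = math.sqrt(2) * se_mean          # SE of difference in means

print(se_diff)  # ≈ 1.73, bigger than the claimed 1.3-point drop
```

A 1.3-point difference is under one standard error of the difference at these sizes, which is why the p=0.05 claim looks shaky once you start slicing the sample into subgroups.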
[1] https://www.ncbi.nlm.nih.gov/books/NBK332880/