"Yes, I saw that he claims to be a member. I'm not sure if I buy it tbh. I don't know what the author is bitter about. It just reads like it's written by someone who was rejected by a mensan. The language is emotionally loaded and dismissive; it reeks of butthurt. Just my opinion btw" comment from MENSA subreddit. Maybe even stronger evidence than what you presented!
I'm not too suprised by this finding. In Mensa France for instance, they even have a page where you see a test sample composed of abstract item so that you can practice it, so to speak, and feel more confident before applying to Mensa membership. Just the fact they made it publicly available speaks so much about what they think of IQ validity.
But if the mean if 117 and the distribution is naturally uneven with extremes in place does that not suggest that a much greater portion is dragging the value down?
A strong source of bias would be the age of Mensans and the Flynn effect. They tend to be old, and it did not take as much intelligence to get a score of 130.8 IQ decades ago as opposed to today. 50 years would explain 15 IQ points.
Good idea, but I would separate it from the Flynn effect and situate it, instead, in the domain of aging effects. But the sample's age wasn't that different from the adults in the norming sample, and correcting for it didn't change the result.
Wouldn't this render standardized tests (like the SAT) largely useless for the same reason (as a proxy for cognitive ability)? You do the same training in preparation for it. You're most likely able to get a much more varied sample of questions too. You can significantly improve vocabulary tests in them too just by mass-repeating old tests (because repetition happens).
No, as whether preparation biases standardized achievement/aptitude tests has been looked at and not supported. Perhaps they'd be biased for comparisons between the general population and Mensa members, but I can't confirm that.
Then how come preparation biases are significant when talking about the aptitude tests that are being taken by the Mensans? Do Mensans take tests which are *more* studiable than the SAT? Or is it just that the SAT's test questions are more general and so it's harder to be specifically good at SAT questions compared to, say, Raven's Matrices?
As far as retesting goes, I don't really understand why retesting improves these scores, but the huge amount of practice tests people take for the SAT, PISA, ACT, whatever, aren't affected by this?
The thing being compared here is typical preparation versus highly atypical preparation. Typical preparation is not so extreme that it makes tests worthless like they seem to be for Mensans.
I don't know. I've met a lot of people who spent massive amounts of time and money studying for the SAT. It's not atypical for folks to spend 20 hours a week studying for the SAT. Are Mensa hopefuls spending that much time cranking out matrices?
Interesting. Naively, this goes with what i would have thought. But I do wonder how this coheres with the oft-touted studies showing both that admissions tests (like the SAT or GRE; I don't know if there are similar studies for the MCAT, LSAT, or GMAT) are both good tests of IQ and are largely practice-resistant. I wonder if you could shed some light on this.
1. Some things are more able to be practiced for than others. For example, matrices tests are easily improved with practice while vocabulary tests are relatively impenetrable to it. But both tests' scores can be greatly improved with retesting.
2. Testing for measurement invariance doesn't actually tell us why measurements were biased. I think practicing and studying for tests intensely is the main reason results are invalid for Mensans, but that's just speculation. All we actually know is that their scores are incomparable with those from the general population and to what degree.
Very interesting. Could it also be that, if studying is common enough, some of this comes out in the wash? Since there are no stakes behind many of the tests Mensa accepts, you might get the Mensa-types preparing intensely for the test while the rest of the testing population doesn't really care. By contrast, everybody I know (admittedly a lot of selection in that in and of itself) studied for the admissions tests they were required to take.
Can someone explain me in a simple way , the logic behind the method that Cremieux uses to see the real IQ of mensa members without the practice effects they benefits ?
"Yes, I saw that he claims to be a member. I'm not sure if I buy it tbh. I don't know what the author is bitter about. It just reads like it's written by someone who was rejected by a mensan. The language is emotionally loaded and dismissive; it reeks of butthurt. Just my opinion btw" comment from MENSA subreddit. Maybe even stronger evidence than what you presented!
I'm not too surprised by this finding. Mensa France, for instance, even has a page with a sample test composed of abstract items so that you can practice, so to speak, and feel more confident before applying for Mensa membership. The mere fact that they made it publicly available says a lot about what they think of IQ test validity.
A group of people with a mean IQ of 117 is still unusually smart.
But if the mean is 117 and the distribution is naturally uneven, with extremes in place, does that not suggest that a much greater portion is dragging the value down?
"I don't want to belong to any club that would accept me as one of its members." 😆
The math PhDs are busy solving real problems instead of practicing tests to get into MENSA.
Often Mensa is to intelligence as a bachelor's degree is to subject mastery.
Recent research indicates that reading Cremieux’s writings may boost one’s IQ. Further research, and its appropriate funding, is warranted.
A strong source of bias would be the age of Mensans combined with the Flynn effect. They tend to be old, and it did not take as much intelligence to score an IQ of 130.8 decades ago as it does today. Fifty years would explain 15 IQ points.
Good idea, but I would separate it from the Flynn effect and situate it, instead, in the domain of aging effects. But the sample's age wasn't that different from the adults in the norming sample, and correcting for it didn't change the result.
Regarding the Flynn effect and age, check out https://www.sciencedirect.com/science/article/pii/S0160289614001482
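To make the arithmetic behind the "15 points over 50 years" claim explicit, here is a minimal sketch. The rate of roughly 0.3 IQ points per year (about 3 per decade) is the commonly cited Flynn effect figure and is an assumption here, not a value taken from the article:

```python
# Back-of-the-envelope check of the Flynn effect claim above.
# The rate is an assumed value (~0.3 IQ points/year is commonly cited).
flynn_rate_per_year = 0.3
years = 50

# Norm inflation: how much easier it is to hit a fixed score against old norms
inflation = flynn_rate_per_year * years
print(inflation)  # 15.0
```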
Wouldn't this render standardized tests (like the SAT) largely useless for the same reason (as a proxy for cognitive ability)? You do the same training in preparation for it. You're most likely able to get a much more varied sample of questions too. You can significantly improve vocabulary tests in them too just by mass-repeating old tests (because repetition happens).
No: whether preparation biases standardized achievement/aptitude tests has been looked at, and the claim is not supported. Perhaps they'd be biased for comparisons between the general population and Mensa members, but I can't confirm that.
Then how come preparation biases are significant when talking about the aptitude tests that are being taken by the Mensans? Do Mensans take tests which are *more* studiable than the SAT? Or is it just that the SAT's test questions are more general and so it's harder to be specifically good at SAT questions compared to, say, Raven's Matrices?
As far as retesting goes, I don't really understand why retesting improves these scores, but the huge number of practice tests people take for the SAT, PISA, ACT, and so on don't have the same effect?
The thing being compared here is typical preparation versus highly atypical preparation. Typical preparation is not so extreme that it makes tests worthless like they seem to be for Mensans.
I don't know. I've met a lot of people who spent massive amounts of time and money studying for the SAT. It's not atypical for folks to spend 20 hours a week studying for the SAT. Are Mensa hopefuls spending that much time cranking out matrices?
Interesting. Naively, this goes with what I would have thought. But I do wonder how this coheres with the oft-touted studies showing that admissions tests (like the SAT or GRE; I don't know if there are similar studies for the MCAT, LSAT, or GMAT) are both good tests of IQ and largely practice-resistant. I wonder if you could shed some light on this.
Two things:
1. Some tests are more amenable to practice than others. For example, matrices tests are easily improved with practice, while vocabulary tests are relatively impervious to it. But both tests' scores can be greatly improved with retesting.
2. Testing for measurement invariance doesn't actually tell us why measurements were biased. I think intense practicing and studying for tests is the main reason results are invalid for Mensans, but that's just speculation. All we actually know is that their scores are not comparable with those from the general population, and to what degree they differ.
Very interesting. Could it also be that, if studying is common enough, some of this comes out in the wash? Since there are no stakes behind many of the tests Mensa accepts, you might get the Mensa-types preparing intensely for the test while the rest of the testing population doesn't really care. By contrast, everybody I know (admittedly a lot of selection in that in and of itself) studied for the admissions tests they were required to take.
Anything's possible!
Can someone explain to me, in a simple way, the logic behind the method that Cremieux uses to estimate the real IQ of Mensa members without the practice effects they benefit from?
The method is to compute dMACS for all bias. This captures practice-induced bias along with any other source of bias.
Here's a simple and brief explanation with citations you can peruse: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01507/full#:~:text=Estimating%20Effect%20Sizes%20in%20Item%20Bias%20in%20CFA%3A%20dMACS
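For anyone who wants to see the mechanics, the dMACS effect size can be sketched numerically. The item parameters below (loadings, intercepts, pooled SD) are made-up illustrative values, not numbers from the article; the structure follows Nye and Drasgow's definition: integrate the squared gap between the two groups' expected item scores over the focal group's latent trait distribution, then scale by the pooled SD.

```python
import numpy as np

# Hypothetical single-item parameters from each group's factor model
# (illustrative values only, not estimates from the article).
lam_ref, nu_ref = 0.8, 2.0   # reference group: loading, intercept
lam_foc, nu_foc = 0.8, 2.6   # focal group: same loading, inflated intercept
sd_pooled = 1.5              # pooled SD of the observed item score (assumed)

# Latent trait grid, weighted by the focal group's assumed N(0, 1) density
eta = np.linspace(-4, 4, 2001)
density = np.exp(-eta**2 / 2) / np.sqrt(2 * np.pi)

# Expected item score under each group's measurement model
expected_ref = nu_ref + lam_ref * eta
expected_foc = nu_foc + lam_foc * eta

# dMACS: root of the density-weighted mean squared gap, scaled by pooled SD
gap_sq = (expected_foc - expected_ref) ** 2
dmacs = np.sqrt(np.trapz(gap_sq * density, eta)) / sd_pooled
print(round(dmacs, 3))  # 0.4
```

With equal loadings, the bias here is pure intercept inflation (a uniform head start of 0.6 points on the item), so dMACS reduces to 0.6 / 1.5 = 0.4; unequal loadings would make the gap vary across the trait range and the integral would no longer be a constant.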
Thanks a lot!
who hurt you?
yikes.
None I've experienced, but I haven't really tried.