Is the time it takes to make the charts/graphics included in the hour, or is the hour just for typing? And do you have notes you're drawing from or are you just starting with a blank page and one hour on the clock and that's it? Because if you are, I don't know how you handle that kind of pressure, and it's very impressive. If you want to lower the pressure on yourself you might consider skydiving or bungee jumping or hunting wild hogs with a spear or something else along those lines.
The one hour did include producing the first three, the seventh, and the eighth graphic. The rest already existed, either in papers or on my blog or Twitter.
I did pause the timer at one point because an important message came up, but besides that, it was just a 57-minute grind from memory!
Several times in its history of mandatory schooling, the UK has raised the school leaving age, creating cohorts in a natural experiment. Do you know of any studies that have analysed these whole-population cohorts in the UK?
Not for IQ, but for other outcomes, there have been analyses: https://www.aeaweb.org/articles?id=10.1257/aer.103.6.2087
Gregory Clark, in a European Historical Economics Society working paper, reviewed (mostly) these kinds of natural experiments, among other causal estimates of the effect of education on income. A naive meta-analysis would find that a year of education has an 8.5% return, but a funnel plot shows clear evidence of publication bias. That differs from analyses of Mincerian regressions, which showed less publication bias (Montenegro and Patrinos, 2014), but is similar to a previous meta-analysis of causal estimates (Ashenfelter, Harmon, and Oosterbeek, 1999).
There is other evidence of significant publication bias as well: estimates from low- and middle-income countries have larger standard errors along with effect sizes close to twice those in high-income countries, and this is not really explained by older papers being lower quality; newer papers show similar signs of publication bias. Clark apparently found no evidence of p-hacking, but his estimates correcting for publication bias (not a formal test, just looking at the estimates with the lowest standard errors and inferring the normal distribution of true results from the relative frequency of observed results) suggest true effect sizes of about 3% and 0.2%, respectively. A rough sketch of that general idea is below.
https://www.ehes.org/wp/EHES_249.pdf
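To make the "look at the low-standard-error estimates" correction concrete, here is a minimal, hypothetical sketch in Python. It is not Clark's actual procedure or code, and the numbers (true effect, noise levels, publication rule) are invented for illustration only; it just simulates a literature where imprecise studies only get published when they look significant, then compares the naive mean of published estimates with precision-weighted and most-precise summaries.

```python
# Illustrative sketch (not Clark's actual method): simulate a literature where
# the true return to schooling is small, but noisy studies are mostly published
# only when their estimate is large and "significant". Comparing the naive mean
# with the most precise estimates mimics the funnel-plot-style diagnostic
# described above. All parameters here are assumptions, not figures from the paper.
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.03          # assumed true return per year of schooling (3%)
n_studies = 500

# Each study has its own sampling noise; smaller studies are noisier.
std_errors = rng.uniform(0.005, 0.06, n_studies)
estimates = rng.normal(TRUE_EFFECT, std_errors)

# Crude publication filter: insignificant results mostly stay in the file drawer.
t_stats = estimates / std_errors
published = (t_stats > 1.96) | (rng.random(n_studies) < 0.3)

pub_est, pub_se = estimates[published], std_errors[published]

# Naive meta-analytic mean of what got published.
naive_mean = pub_est.mean()

# Inverse-variance (precision) weighting leans on the low-SE studies,
# which are least distorted by the publication filter.
weights = 1.0 / pub_se**2
precision_weighted = np.average(pub_est, weights=weights)

# Mean of the most precise decile, roughly in the spirit of
# "look at the estimates with the lowest standard errors".
cutoff = np.quantile(pub_se, 0.1)
most_precise = pub_est[pub_se <= cutoff].mean()

print(f"true effect:              {TRUE_EFFECT:.3f}")
print(f"naive mean of published:  {naive_mean:.3f}")
print(f"precision-weighted mean:  {precision_weighted:.3f}")
print(f"mean of most precise 10%: {most_precise:.3f}")
```

In a simulation like this, the naive mean of published estimates overshoots the true 3%, while the most precise studies sit close to it, which is the qualitative pattern the funnel-plot argument relies on.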