An article in PNAS highlighting the poor reproducibility of many published findings links to a Johns Hopkins ‘MOOC’ on ‘Data Science’. The course includes modules on R programming, experimental design and analysis. It all looks nicely laid out. Initially I thought you had to pay to work through the course, but this is not the case: you only pay if you want certification. The course runs on the Coursera platform. I know there are other courses covering some of the same topics, and it strikes me that teaching statistics without small-group interaction is a hard task. But surely we must see this approach spread to undergraduate level in subjects like medicine, where the best students will be able to demonstrate that they can acquire skills without the limitations of course structures designed for those who are less capable. More importantly, it is not hard to imagine that this approach will prove superior to our current attempts to cover topics in which medical schools appear unable to invest in teaching staff or high-level materials. I am quite optimistic about the latter, but less so about such courses eradicating dodgy science.
In spite of the extra instruction that the on-campus students had, Figure 6 shows no evidence of positive, weekly relative improvement of our on-campus students compared with our online students.
A key paper comparing online with on-campus teaching.
Gregory Hays shares some nuanced thoughts on MOOCs in a review of a book and MOOC by Gregory Nagy.
Will the MOOC revolutionize education in a few short years, as the Virginia conspirators persuaded themselves? Will it remain a marginal though useful supplement to conventional college, like the Open University or the correspondence courses of the 1920s? Will it be merely a playground for retirees and intellectual hobbyists, the digital successor to “great lectures on tape”? Or will it prove an evolutionary cul-de-sac, like the fifth-grade filmstrip of the 1970s?
None of these questions really seems answerable as yet. In its current version, in fact, Nagy’s MOOC feels a lot like a conventional large lecture class: there’s a textbook, a professor who does most of the talking (sometimes alone, sometimes in obviously staged “dialogues”), a virtual discussion section, and tests in multiple choice and short-answer formats. As one browses the website one is struck by the ordinariness of the whole thing—even the classroom dynamics. Some participants are being lectured about courtesy on the bulletin boards, modern Greek students are insisting that only they can really understand Homer, and still others are—well, perhaps “disconnected” is the right word.
None of this should surprise. It’s typical for new technologies initially to mimic an existing one; Gutenberg’s forty-two-line Bible is not easy to distinguish from a manuscript copy. It takes time to figure out what a new medium can do besides the same thing bigger, faster, or cheaper, and for its particular strengths and weaknesses to emerge. Fifty years after Gutenberg, printing had shown itself vastly superior for Bibles and legal texts, a cheap substitute for deluxe books of hours, and no replacement at all for wills, inventories, and personal letters.
His final sentence:
It is, after all, a medium, not a message. And as the typographer Alvin Doyle Moore observed, “if you’re really good, you can do it anywhere—even on the ground with a stick.”
The situation was a familiar one. Some time back, I was gossiping to a medical student, and he began to talk about some research he had done, supervised by another member of staff. I asked what he had found out: what did his data show? What followed I have seen, if not hundreds of times, then at least on several score occasions. A look of trouble and consternation, a shrug of embarrassment, and the predictable word-salad of ‘significance’, t values, p values, statistics and ‘dunno’. Such is the norm. There are exceptions, but even amongst postgraduates who have undertaken research the picture is not wildly different. Rarely, without directed questioning, can I get the student to tell me about averages, or proportions, using simple arithmetic. A reasonable starting point, surely. ‘What does it look like if you draw it?’ is met with a puzzled look. And yet, if I ask the same student how they would manage psoriasis, or why skin cancers are more common in some people than others, I get, to varying degrees, a reasoned response. I asked the student how much tuition in statistics they had received. A few lectures was the response, followed by a silence, and then, “They told us to buy a book”. More silence. So this is what you pay >30K a year for? The student just smiled in agreement. This was a good student.
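To make concrete the kind of first pass I have in mind, here is a minimal sketch using an invented toy dataset (the numbers, column names and groups are mine, purely for illustration): group averages, a proportion, and a quick picture, before any talk of t values or p values.

```python
# A toy first look at some invented trial-style data: averages,
# proportions, and a plot before any formal testing.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: symptom scores and clearance for two groups of patients.
df = pd.DataFrame({
    "group": ["treated"] * 5 + ["control"] * 5,
    "score": [4, 6, 5, 7, 5, 8, 9, 7, 8, 10],
    "cleared": [True, True, False, True, True, False, False, True, False, False],
})

# Simple arithmetic first: group means and the proportion who cleared.
print(df.groupby("group")["score"].mean())
print(df.groupby("group")["cleared"].mean())  # proportion cleared per group

# 'What does it look like if you draw it?'
df.boxplot(column="score", by="group")
plt.show()
```

Nothing here needs more than arithmetic and a picture; the formal machinery can come later.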
Statistics is difficult. Much of statistics is counter-intuitive and, like certain other domains of expertise, learning the correct basics often results in a temporary, or in some cases permanent, drop in objective performance.** That is, you can make people’s ability to interpret numerical data worse by trying to teach them statistics. On the other hand, statistics is beautiful, hard, and full of wonderful insights that debunk the often sloppy thinking that passes for everyday ‘common sense’. I am a big fan, but have always found the subject anything but easy. As with a lot of formal disciplines, though, the pleasure comes from the struggle to achieve mastery. I also think the subject important and, for the medical ecosystem at least, it is critical that there is high-level expertise within the community. Yet in my experience many of the very best clinicians are (relatively) statistically illiterate. The converse is also seen.
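As an aside, the sort of counter-intuitive result I have in mind is the familiar screening-test arithmetic; the figures below are made up for illustration, not taken from any particular test.

```python
# A standard illustration of how intuition misleads: a test with 99%
# sensitivity and 95% specificity for a condition affecting 1 in 1,000.
prevalence = 0.001
sensitivity = 0.99   # P(positive test | disease)
specificity = 0.95   # P(negative test | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")  # ~0.019
```

A ‘very accurate’ test, yet roughly 98% of positives are false: simple arithmetic, but not what everyday common sense expects.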