Ashton concludes that trying “to create another Arm is as much folly as trying to create the next Google”. His recommendation is for the British government to focus instead on training and skills and providing a stable tax and regulatory regime.
But at a time when the US, EU and Chinese governments are pouring billions of dollars into subsidising their chip industries, this policy recipe seems thin gruel. Serendipity cannot substitute for strategy. And, as one industry executive is quoted as saying: “Without semiconductors, you’re nowheresville.”
Re: “Serendipity cannot substitute for strategy” — I am not so sure. It can, for a while anyway.
I am very fond of the Alan Kay line that the best way to predict the future is to invent it. Indeed, my default position with Alan Kay is to tend to believe that he is always right. But Audrey Watters disagrees (via Stephen Downes):
Their imaginings and predictions were (are) never disinterested, and I don’t agree at all with the famous saying by computer scientist Alan Kay that “the best way to predict the future is to build it.” I’ve argued elsewhere that the best way to predict the future is to issue a press release. The best way to predict the future of education is to get Thomas Friedman to write an op-ed in The New York Times about your idea and then a bunch of college administrators will likely believe that it’s inevitable.
A query with the catchy expression “global pandemic” or “global pandemic preparedness” in scientific databases, restricted to a 2009–19 range, will return more than 1400 results in JAMA (Journal of the American Medical Association), 30 in-depth papers in ArXiv (Cornell University), and a stunning 17,000 results in Google Scholar, which aggregates multiple repositories. As for the general public, it had the choice between no less than 98 TED Talks on the matter.
We had no excuses.
Just before the H1N1 episode in 2009, France had accumulated an inventory of 1 billion high protection masks (N95 equivalent). It was the consequence of the SARS epidemic. In the same way, the government had stored 20 million doses of vaccine. Later, the Health Ministry responsible for this precaution was blasted for this “excessive” stockpile — which was eventually destroyed as it decayed.
Frédéric Filloux makes (and has for a while been making) an argument about journalism and journalism schools that I have not seen advanced by anybody else. The changing economics of the press mean that the modern Fourth Estate lacks expertise across many domains of modern life. He suggests that journalism schools need to regroup, change how they work, and take advantage of the fact that most expertise will reside with those who did not go to journalism school in their 20s. Rather, the press will need to rely on those with professional skills gained in particular domains. He writes:
The shortage of experts is also rooted in a priority shift that plagues major news organizations. All of them became obsessed with not being left in the dust by digital-native organizations riding the wave of social networks. As a consequence, newsroom managers, supported by bean-counters, found it clever to hire bunches of expendable digital “content” serfs who were mandated to keep up with the social frenzy. It was seen as a better investment than keeping a former doctor turned medical correspondent, even if he or she was loaded with decades of expertise, able to lean on a reliable network of practitioners, surgeons, epidemiologists, public health officials, etc. A pure cost vs. benefit choice, and ultimately a bad one.
I do not think there will be any shortage of candidates who possess medical degrees and medical experience.
Stanley Cohen has died. A special place for those of us hooked on the ectoderm. Some nice comments about him in the Lancet from Geoff Watts.
A May, 1962, issue of the Journal of Biological Chemistry included a deceptively arcane study on the isolation of a protein that could accelerate incisor eruption and eyelid opening in newborn mice. The author, Stanley Cohen, later to become Professor Emeritus of Biochemistry at Vanderbilt University School of Medicine (VUSM) in Nashville, TN, USA, had named his protein “tooth-lid factor”. Cohen’s subsequent studies would not only lead him to rename the protein epidermal growth factor (EGF), but also mark him out as one of the founders of a new area of biology and eventually win him a Nobel Prize.
[says Lawrence Marnett], “When he came here he began studying some growth factors in animal cell extracts. One was of mouse submaxillary gland…It had peptides in it, and when he injected them into newborn mice their teeth broke through earlier than normal, and their eyelids opened sooner.” Cohen’s subsequent studies revealed that his extract worked by stimulating the growth of epidermal cells. Having consequently renamed the material EGF, he devoted the rest of his career to studying it. “He went on to identify the EGF receptor and define target cells that would respond to EGF”, recalls Graham Carpenter, Emeritus Professor of Biochemistry at VUSM, who joined Cohen’s lab in 1973 and worked with him on EGF as a postdoctoral fellow. The EGF receptor proved to be a useful target for drugs, and Cohen’s discoveries opened the door to research on diseases ranging from dementia to cancer. “He understood EGF’s biological importance”, says Carpenter. “But we did not have any idea that this would extend to cancer biology in a major way.”
And as for that most successful of all biology labs, the style of exploration is familiar.
[Graham Carpenter] “In contrast to today, his research group was very small, seldom more than four people—himself, two technicians, and a postdoc…He was central to whatever was going on in the lab.” [Lawrence] Marnett also recalls that determination: “He was one of those guys that was just driven by his desire to understand how things work…It was a classic example of making an observation and then drilling down to try to understand it, not knowing what you’re going to find.” And at that time there was plenty to be found. Cohen, as Marnett puts it, was basically “mining gold”.
Frank Davidoff had a telling phrase about clinical expertise. He likened it to “Dark Matter”: Dark Matter makes up most of the universe, yet we know very little about it. In the clinical arena I have spent a lot of time reading and thinking about ‘expertise’, without developing any grand unifying themes of my own worth sharing. But we live in a world where ‘expertise’ in many domains is under assault, and I have no wise thoughts to pull together what is happening. I do, however, like (as ever) some nice phrases from Paul Graham. I can’t see any roadmap here, just perspectives and shadows.
When experts are wrong, it’s often because they’re experts on an earlier version of the world.
Instead of trying to point yourself in the right direction, admit you have no idea what the right direction is, and try instead to be super sensitive to the winds of change.
I used to use the phrase — with apologies to Freud — ‘eppendorf envy’ to describe the bias in much medical innovation whereby a useful advance pretended it owed its magic to ‘basic’ science. Doctors wore white coats in order to sprinkle the laboratory magic on as a veneer. But I like this cognate term also: innovation theatre.
To be fair to the banks, they weren’t the first institutions to recognise the PR value of what Rich Turrin has dubbed innovation theatre. Many institutions before them had cottoned on to the fact that it was a way to score easy points with the public and investors. Think of high impact campaigns featuring “the science bit” for L’Oréal’s Elvive shampoo or Tefal appliance ads: “We have the technology because we have the brains”.
The financial sector has seen enough innovation theatre | Financial Times. The original reference is here.
This is why I have doubts about mechanical theories such as disruptive innovation. Too often, they’re presented as a type of physical law: You drop a glass of wine, it always falls to the ground with an acceleration of 32.17405 ft/s². This truth is indisputable…but it ignores the drunken clumsiness of the oaf who knocked the glass over, and discounts the quick reflexes and imaginative solutions you only get when there’s a human nearby.
Jean-Louis Gassée. A nice summary of why human agency matters, and also why companies fail.
Direct URL for this post.
Not often do I spot typos in the New York Review of Books, but here is one that matters. The article dealt with the price of prescription drugs, and there are of course plenty of villains to go around: crony capitalists; advertising spending larger than research spending (because it works!); and sloppy thinking with regard to IPR and patents. The article on paper read:
In late October, however, just before the congressional elections, Azar declared to reporters that high prices constituted “the greatest possible barrier to patent access.” Democratic strategists gave prescription drug prices high priority in congressional campaigns. Yet leaders in both parties understood that curbing prices would be no easy task. The pharmaceutical industry, which has long deployed one of the most powerful lobbies in Washington, was increasing its representation in the capital.
Yes, it should have read “patient”, not “patent”, although no doubt pharma might not have agreed.
Direct URL for this post.
The quotes below are from an article in the FT (a while back). They echo one of my rules, a rule that is more the exception that proves the rule. Just as “no good lab has space” (because the bench space will always be taken up, since many will want to work there), so when the grand new building arrives, the quality of the work will already be past its peak (because how else would you have justified your future except by looking back). It is all about edge people, and just as social change usually starts at the edge, so do good ideas.
The principle of benign neglect may well operate on a larger scale. Consider Building 20, one of the most celebrated structures at Massachusetts Institute of Technology. The product of wartime urgency, it was designed one afternoon in the spring of 1943, then hurriedly assembled out of plywood, breeze-blocks and asbestos. Fire regulations were waived in exchange for a promise that it would be pulled down within six months of the war’s end; in fact the building endured, dusty and uncomfortable, until 1998.
During that time, it played host not only to the radar researchers of Rad Lab (nine of whom won Nobel Prizes) but one of the first atomic clocks, one of the first particle accelerators, and one of the first anechoic chambers — possibly the one in which composer John Cage conceived 4’33. Noam Chomsky revolutionised linguistics there. Harold Edgerton took his high-speed photographs of bullets hitting apples. The Bose Corporation emerged from Building 20; so did computing powerhouse DEC; so did the hacker movement, via the Tech Model Railroad Club.
Building 20 was a success because it was cheap, ugly and confusing. Researchers and departments with status would be placed in sparkling new buildings or grand old ones — places where people would protest if you nailed something to a door. In Building 20, all the grimy start-ups were thrown in to jostle each other, and they didn’t think twice about nailing something to a door — or, for that matter, about taking out a couple of floors, as Jerrold Zacharias did when installing the atomic clock.
Somewhat reminiscent of Stewart Brand’s ‘How Buildings Learn’.