Now, tech giants are developing ever more powerful AI systems that don’t merely monitor you; they actually interact with you—and with others on your behalf. If searching on Google in the 2010s was like being watched on a security camera, then using AI in the late 2020s will be like having a butler. You will willingly include it in every conversation you have, everything you write, every item you shop for, every want, every fear, everything. It will never forget. And, despite your reliance on it, it will be surreptitiously working to further the interests of one of these for-profit corporations.
There’s a reason Google, Microsoft, Facebook, and other large tech companies are leading the AI revolution: Building a competitive large language model (LLM) like the one powering ChatGPT is incredibly expensive. It requires upward of $100 million in computational costs for a single model training run, in addition to access to large amounts of data. It also requires technical expertise, which, while increasingly open and available, remains heavily concentrated in a small handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups are self-defeating as Big Tech profits from the cloud computing services and AI models powering those start-ups—and often ends up acquiring the start-ups themselves.
Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in …
I worry the UK government will sell all of the rights to NHS image libraries and raw patient data, rather than realise that the raw material is the gold dust. The models and tech will become a utility. Unless you steal it, annotation of datasets is still the biggest expense (the NHS considers it ‘exhaust’). Keep the two apart.
Although many of the digital gurus started out as idealists, to Lanier there was an inevitability that the internet would screw us over. We wanted stuff for free (information, friendships, music), but capitalism doesn’t work like that. So we became the product – our data sold to third parties to sell us more things we don’t need. “I wrote something that described how what we now call bots will be turned into these agents of manipulation. I wrote that in the early 90s when the internet had barely been turned on.” He squeals with horror and giggles. “Oh my God, that’s 30 years ago!”
It’s easy to see how digital minimalism’s first tenet, “clutter is costly,” applies: the average patient’s EHR has 56% as many words as Shakespeare’s longest play, Hamlet. Moreover, half these words are simply duplicated from previous documentation.
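The Hamlet comparison implies concrete numbers. Taking Hamlet at roughly 30,000 words (an approximation on my part, not a figure from the paper), the arithmetic works out as:

```python
# Implied size of the average patient's EHR from the Hamlet comparison.
hamlet_words = 30_000              # Hamlet is roughly 30,000 words (approximate)
ehr_words = 0.56 * hamlet_words    # "56% as many words as Hamlet"
novel_words = ehr_words / 2        # half is duplicated from earlier notes
print(round(ehr_words), round(novel_words))  # 16800 8400
```

So the average record runs to around 17,000 words, of which only about 8,400 say anything new.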
Progress isn’t inevitable. Not even likely in some domains over working lifetimes.
The first thing to understand is that “Silicon Valley” is actually a reality-distortion field inhabited by people who inhale their own fumes and believe they’re living through Renaissance 2.0, with Palo Alto as the new Florence. The prevailing religion is founder worship, and its elders live on Sand Hill Road in Menlo Park and are called venture capitalists. These elders decide who is to be elevated to the privileged caste of “founders”.
Error? Era? Hope so.
From a review of Caleb Scharf’s “The Ascent of Information”.
Every cat GIF shared on social media, credit card swiped, video watched on a streaming platform, and website visited adds more data to the mind-bending 2.5 quintillion bytes of information that humans produce every single day. All of that information has a cost: Data centers alone consume about 47 billion watts, equivalent to the resting metabolism of more than a tenth of all the humans on the planet.
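The metabolic comparison is easy to sanity-check. The resting-power figure below is my assumption (a population-wide average of about 60 W, children included), not a number from the review; with it, 47 billion watts does indeed come out at roughly a tenth of humanity:

```python
# Data-centre power draw vs. human resting metabolism.
# The resting wattage and population are illustrative assumptions.
datacentre_watts = 47e9   # ~47 billion watts, per the review
resting_watts = 60        # assumed population-wide average resting power
population = 7.8e9        # approximate world population
people_equivalent = datacentre_watts / resting_watts
fraction = people_equivalent / population
print(f"{people_equivalent:.2e} people, {fraction:.0%} of humanity")
# ~7.8e+08 people, about 10% of humanity
```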
Scharf begins by invoking William Shakespeare, whose legacy permeates the public consciousness more than four centuries after his death, to show just how powerful the dataome can be. On the basis of the average physical weight of one of his plays, “it is possible that altogether the simple act of human arms raising and lowering copies of Shakespeare’s writings has expended over 4 trillion joules of energy,” he writes. These calculations do not even account for the energy expended as the neurons in our brains fire to make sense of the Bard’s language.
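Scharf’s estimate is a back-of-envelope calculation of mechanical work, and the same kind of sketch can be reproduced. The masses, lift heights, copy counts and handling counts below are illustrative assumptions of mine, not Scharf’s figures; with these numbers the total lands in the same trillions-of-joules range:

```python
# Back-of-envelope: mechanical work of raising and lowering copies of
# Shakespeare's plays. All input numbers are illustrative assumptions.
g = 9.81          # gravitational acceleration, m/s^2
mass_kg = 0.5     # assumed mass of one printed play
lift_m = 0.5      # assumed height each copy is raised per handling
copies = 4e9      # assumed copies in circulation over four centuries
handlings = 500   # assumed number of times each copy is lifted
work_joules = copies * handlings * mass_kg * g * lift_m
print(f"{work_joules:.1e} J")  # ~5e+12 J, i.e. trillions of joules
```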
There was an article in the FT last week, commenting on an article in JAMA here. The topic is the use of AI (or, to be fair, other machine learning techniques) to help diagnose skin disease. Google will allow people to upload their own images and will, in turn, provide “guidance” as to what they think it is.
I think the topic important, and I wrote a little editorial on this subject here a few years ago with the strikingly unoriginal title of Software is eating the clinic. For about 8-10 years I worked in this field, and although we managed to get ‘science funding’ from the Wellcome Trust (and a little from elsewhere), and published extensively, we were unable to take it further via commercialisation. As is often the case, when you fail to get funded, you may not know why. My impression was that people did not imagine that there was a viable business model in software in this sort of area (we were looking for funds around 2012-2015). Yes, it seemed crazy to me then, too (and yes, I know, Google have not proven there is a business model). Some of the answers from NHS and Scottish funding bodies were along the lines of: come back when you prove it works, and then we will fund the research.
A few days back somebody interested in digital health asked me what I thought about the recent work. Below is a lightly edited version of my email response.
If only we had been funded… Only joking.
My experience is limited, but everything I know suggests that much IT in healthcare diminishes medical care. It may serve certain administrative functions (who is attending what clinic and when, etc.), and, of course, there are certain particular use cases — such as repeat prescription control in primary care — but as a tool to support the active process of managing patients and improving medical decision making, healthcare has no Photoshop.
In the US it is said that an ER physician will click their mouse over 4000 times per shift, with frustration with IT being a major cause of physician burnout. Published data show that the ratio of patient-facing time to admin time has halved since the introduction of electronic medical records (i.e. things are getting less efficient). We suffer slower and worse care: research shows that once you put a computer in the room, eye contact between patient and physician drops by 20-30%. And that is to ignore the crazy extremes: like the hospital that created PDFs of the old legacy paper notes, but then — wait for it — ordered them online not as a time-sequential series but randomly, expecting the doc to search each one. A new meaning for the term RAM.
There are many proximate reasons for this mess. There is little competition in the industry and a high degree of lock-in because of a failure to use open standards. Then there is the old AT&T problem of not allowing users to adapt and extend the software (AT&T famously refused to allow users to add answering machines to their handsets). But the ultimate causes are that reducing admin and support staff salaries is viewed as more important than allowing patients meaningful time with their doctor; and that those purchasing IT have no sympathy or insight into how doctors work.
As far as UI is concerned — I think this is what personal/interactive computing is about, and so I always start with how the synergies between the human and the system would go best. And this includes inventing/designing a programming language or any other kind of facility. i.e. the first word in “Personal Computing” is “Person”. Then I work my way back through everything that is needed, until I get to the power supply. Trying to tack on a UI to “something functional” pretty much doesn’t work well — it shares this with another prime mistake so many computer people make: trying to tack on security after the fact …[emphasis added]
I will say that I lost every large issue on which I had a firm opinion.
Whenever I have looked at the CVs of many young doctors or medical students I have often felt saddened at what I take to be the hurdles that many of them have had to jump through to get into medical school. I don’t mean the exams — although there is lots of empty signalling there too — but the enforced attempts to demonstrate that you are a caring person, or one committed to the NHS/charity sector. I had none of that; nor do I believe it counts for much when you actually become a doctor. I think it enforces a certain conformity and limits the social breadth of intake to medical school.
However, I did work outside school before going to university, in a variety of jobs from the age of 14 upwards: a greengrocer’s shop on Saturdays, a chip shop (4-11pm on Sundays), a pub (living in for a while), a few weeks on a pig-farm (awful) and my favourite, working at a couple of petrol stations (7am-10pm). These jobs were a great introduction to the black economy and how wonderfully inventive humanity — criminal humanity — can be. Naturally, I was not tempted. Those in the know would even tell you about other types of fraud in different industries, and even that people actually got awarded PhDs by studying and documenting the sociology of these structures (‘Is that why you are going to uni?’, I was once asked).
On the theme of that newest of crime genres — cybercrime — there is a wonderful podcast reminding you that if much capitalism is criminal, there is criminal and there is criminal. But many of the iconic structures of modern capitalism — specialisation, outsourcing and the importance of the boundaries between firm and non-firm — are there. Well worth a listen.
I think there is a danger in exaggerating the role of caring and compassion in medicine. I am not saying you do not need them, but rather that I think they are less important than the technical (or professional) skills that are essential for modern medical practice. I want to be treated by people who know how to assess a situation and who can judge with cold reason the results of administering or withholding an intervention. If doctors were once labelled priests with stethoscopes, I want less of the priest bit. Where I think there are faults is in the idea that you can contribute most to humanity by ‘just caring’. The Economist a while back reported on an initiative from the Centre for Effective Altruism in Oxford. The project, labelled the 80,000 Hours initiative, advises people on which careers they should choose in order to maximise their impact on the world. Impact should be judged not on how much a particular profession does, but on how much a person can do as an individual. Here is a quote relating to medicine:
Medicine is another obvious profession for do-gooders. It is not one, however, on which 80,000 Hours is very keen. Rich countries have plenty of doctors, and even the best clinicians can see only one patient at a time. So the impact that a single doctor will have is minimal. Gregory Lewis, a public-health researcher, estimates that adding an additional doctor to America’s labour supply would yield health benefits equivalent to only around four lives saved.
The typical medical student, however, should expect to save closer to no lives at all. Entrance to medical school is competitive. So a student who is accepted would not increase a given country’s total stock of doctors. Instead, she would merely be taking the place of someone who is slightly less qualified. Doctors, though, do make good money, especially in America. A plastic surgeon who donates half of her earnings to charity will probably have a much bigger social impact on the margin than an emergency-room doctor who donates none.
Yes, the ‘slightly less qualified’ makes me nervous.