tech

The medical student as ChatGPT

by reestheskin on 26/05/2023


I am amused that people are slow to realise that large language models (ChatGPT etc.) do not understand what they are saying, or that they make things up — that is, they hallucinate. Performance on “surface layer” testing does not equate to competence. Anybody who has taught medical students knows that humans are quite capable of exhibiting the same behaviour. It was one of the virtues of the old-fashioned viva: you could demonstrate the large gulf between understanding (sense) on the one hand, and rote — and fluent rote at that — simulation (garbage) on the other.

The medical educationalists, obsessed as they are with statistical reliability, never realised that the viva’s main function was for the benefit of teachers rather than learners. It is called feedback.


The danger isn’t that AI destroys us. It’s that it drives us insane

by reestheskin on 26/04/2023



Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’ | Jaron Lanier | The Guardian

Although many of the digital gurus started out as idealists, to Lanier there was an inevitability that the internet would screw us over. We wanted stuff for free (information, friendships, music), but capitalism doesn’t work like that. So we became the product – our data sold to third parties to sell us more things we don’t need. “I wrote something that described how what we now call bots will be turned into these agents of manipulation. I wrote that in the early 90s when the internet had barely been turned on.” He squeals with horror and giggles. “Oh my God, that’s 30 years ago!”

Medicine is awaiting its own Photoshop

by reestheskin on 05/10/2020


My experience is limited, but everything I know suggests that much IT in healthcare diminishes medical care. It may serve certain administrative functions (who is attending what clinic and when, etc.), and, of course, there are particular use cases — such as repeat prescription control in primary care — but as a tool to support the active process of managing patients and improving medical decision making, healthcare has no Photoshop.

In the US it is said that an ER physician will click their mouse over 4000 times per shift, with frustration with IT being a major cause of physician burnout. Published data show that the ratio of patient-facing time to admin time has halved since the introduction of electronic medical records (i.e. things are getting less efficient). We suffer slower and worse care: research shows that once you put a computer in the room, eye contact between patient and physician drops by 20-30%. And this is to ignore the crazy extremes: like the hospital that created PDFs of the old legacy paper notes, but then — wait for it — ordered them online not as a time-sequential series but randomly, expecting the doc to search each one. A new meaning for the term RAM.

There are many proximate reasons for this mess. There is little competition in the industry and a high degree of lock-in because of a failure to use open standards. Then there is the old AT&T problem of not allowing users to adapt and extend the software (AT&T famously refused to allow users to add answering machines to their handsets). But the ultimate causes are that reducing admin and support staff salaries is viewed as more important than allowing patients meaningful time with their doctor; and that those purchasing IT have no sympathy or insight into how doctors work.

The context is wildly different — it is an exchange on the OLPC project and how to use computers in schools, but here are two quotes from Alan Kay that made me smile.

As far as UI is concerned — I think this is what personal/interactive computing is about, and so I always start with how the synergies between the human and the system would go best. And this includes inventing/designing a programming language or any other kind of facility. i.e. the first word in “Personal Computing” is “Person”. Then I work my way back through everything that is needed, until I get to the power supply. Trying to tack on a UI to “something functional” pretty much doesn’t work well — it shares this with another prime mistake so many computer people make: trying to tack on security after the fact …[emphasis added]

I will say that I lost every large issue on which I had a firm opinion.

We are not of this world

This is from Larry Page of Google (quoted in “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power” by Shoshana Zuboff)

CEO Page surprised a convocation of developers in 2013 by responding to questions from the audience, commenting on the “negativity” that hampered the firm’s freedom to “build really great things” and create “interoperable” technologies with other companies: “Old institutions like the law and so on aren’t keeping up with the rate of change that we’ve caused through technology. . . . The laws when we went public were 50 years old. A law can’t be right if it’s 50 years old, like it’s before the internet.” When asked his thoughts on how to limit “negativity” and increase “positivity,” Page reflected, “Maybe we should set aside a small part of the world . . . as technologists we should have some safe places where we can try out some new things and figure out what is the effect on society, what’s the effect on people, without having to deploy kind of into the normal world.”

As for his comments on safe spaces, I agree. There are plenty of empty planets left.

All that glitters in silicon

San Francisco conducted its biennial point-in-time homelessness survey. The numbers are up sharply. Two observations: first, most people are from SF, not (contrary to myth) from elsewhere; and second, there are more people sleeping on the street in San Francisco (population: 870k) than in the whole of the UK (population: 66m). Link

Benedict’s Newsletter: No. 296

Smombies everywhere

My youngest daughter lived in South Korea for a while and I visited on a couple of occasions. It was a lot of fun in all sorts of ways. The following rings (!) true:

The government initially tried to fight the “smombie” (a portmanteau of “smartphone” and “zombie”) epidemic by distributing hundreds of stickers around cities imploring people to “be safe” and look up. This seems to have had little effect even though, in Seoul at least, it recently replaced the stickers with sturdier plastic boards.

Instead of appealing to people’s good sense, the authorities have therefore resorted to trying to save them from being run over. Early last year, they began to trial floor-level traffic lights in smombie hotspots in central Seoul. Since then, the experiment has been extended around and beyond the capital. For the moment, the government is retaining old-fashioned eye-level pedestrian lights as well. But in future, the way to look at a South Korean crossroads may be down.

A dangerous creature is haunting South Korean crossroads – Smombie apocalypse


Software is eating..

Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study.

You can dice the results in various ways, but software is indeed eating the world — and the clinic. The (slow) transition to this new world will be interesting and eventful. A good spectator sport for some of us. (Interesting to note that this study in Lancet Oncology received no specific funding. Hmmm).


The digital skin web

On some Swedish trains, passengers carry their e-tickets in their hands—literally. About 3,000 Swedes have opted to insert grain-of-rice-sized microchips beneath the skin between their thumbs and index fingers. The chips, which cost around $150, can hold personal details, credit-card numbers and medical records. They rely on Radio Frequency ID (RFID), a technology already used in payment cards, tickets and passports.

Why Swedes are inserting microchips into their bodies – Bjorn Cyborg

One of these is going to end up being sectioned at some time… waiting for the first case report. Not often I can get two puns into a three-word title.


On ratio scales and the spirits of invention

It is said that much of the foundational work of 20th-century physics was done in coffee houses (or, in Richard Feynman’s case, strip bars), but things were once done differently in the UK:

With neither institutional nor government masters to answer to, the British cyberneticians were free to concentrate on what interested them. In 1949, in an attempt to develop a broader intellectual base, many of them formed an informal dining society called the Ratio Club. Pickering documents that the money spent on alcohol at the first meeting dwarfed that spent on food by nearly six to one — another indication of the cultural differences between the UK and US cyberneticians.

The work of the British pioneers was forgotten until the late 1980s when it was rediscovered by a new generation of researchers… A company that I cofounded has now sold more than five million domestic floor-cleaning robots, whose workings were inspired by Walter’s tortoises. It is a good example of how unsupported research, carried out by unconventional characters in spite of their institutions, can have a huge impact.

From a 2010 review in Nature by Rodney Brooks of MIT of “The Cybernetic Brain: Sketches of Another Future”. (For more on Donald Michie and “in spite of their institutions”, see here.)


How the Nobel are fallen

As Jeff Hammerbacher, Facebook’s first research scientist, remarked: “the best minds of my generation are thinking about how to make people click ads… And it sucks.”

Quoted in Stand Out of Our Light, James Williams


Surgeons?

“A lot of patients are still having open surgery when they should be getting minimal access surgery,” said Mr Slack, a surgeon at Addenbrooke’s Hospital in Cambridge. “Robotics will help surgeons who don’t have the hand-eye co-ordination or dexterity to do minimal access surgery.”

Trial of new generation of surgical robots claims success | Financial Times


Turn-it-around

by reestheskin on 02/04/2019


A couple of articles from the two different domains of my professional life made me riff on some old memes. The first was an article in (I think) the Times Higher about the plagiarism-detection software Turnitin. I do not have any firsthand experience with Turnitin (‘turn-it-in’), as most of our exams use either clinical assessments or MCQs. My understanding is that submitted summative work is uploaded to Turnitin and the text compared with the corpus of text already collected. If strong similarities are present, the work might be fraudulent. A numerical score is provided, but some interpretation is necessary, because in many domains there will be a lot of ‘stock phrases’ that are part of domain expertise rather than evidence of cheating. How was the ‘corpus’ of text collected? Well, of course, from earlier student texts that had been uploaded.
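
For the curious, here is a minimal sketch of the general idea. It is emphatically not Turnitin’s actual (proprietary) algorithm; the vectoriser settings and the toy corpus below are assumptions made purely for illustration.

```python
# Illustrative sketch only — not Turnitin's algorithm. It compares a new
# submission against a corpus of earlier submissions and reports the highest
# similarity score, which (as argued above) still needs human interpretation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "An earlier student essay on the management of psoriasis ...",
    "Another previously submitted piece of summative work ...",
]
submission = "The new piece of work to be checked ..."

# Character n-grams are fairly robust to light rewording; word n-grams would
# instead emphasise shared 'stock phrases'.
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(4, 6))
matrix = vectoriser.fit_transform(corpus + [submission])

scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
print(f"Highest similarity to the existing corpus: {scores.max():.2f}")
```

The point of the toy example is the economics rather than the maths: every submission added to the corpus makes the next comparison a little more discriminating, which is exactly the network effect discussed below.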

Universities need to pay for this service, because in the age of massification, lecturers do not recognise the writing style of the students they teach. (BTW, as Graham Gibbs has pointed out, the move from formal supervised exams to course work has been a key driver of grade inflation in UK universities).

I do not know who owns the rights to the texts students submit, nor whether they are able to assert any property rights. There may be other companies out there apart from Turnitin, but you can easily see that the more data they collect, the more powerful their software becomes. If the substrate is free, then the costs relate to how powerful their algorithms are. It is easy to imagine how this becomes a monopoly. However, if universities collectively kept copies of all the submitted texts, it would be easier for a challenger to enter the field. But network effects will still operate.

The other example comes from medicine rather than education. The FT ran a story about the use of ‘machine learning’ to diagnose disease from retinal scans. Many groups are working on this, but this report was about Moorfields in London. I think I read that, as the work was being commercialised, the hospital would have access to the commercial software free of charge. There are several issues here.

Although I have no expert knowledge in this particular domain, I know a little about skin cancer diagnosis using automated methods. First, the clinical material — and the annotation of that material — is absolutely rate-limiting. Second, once the system is commercialised, the more subsequent images that can be uploaded, the better you would imagine the system will become. This of course requires further image annotation, but if we are interested in improving diagnosis, we should keep enlarging the database as long as the costs of annotation are acceptable. As in the Turnitin example, the danger is that the monopoly provider becomes ever more powerful. Again, if image use remains non-exclusive, the barriers to entry are lower.

Deep problems

by reestheskin on 23/01/2019


News Feature: What are the limits of deep learning? | PNAS

 In addition to its vulnerability to spoofing, for example, there is its gross inefficiency. “For a child to learn to recognize a cow,” says Hinton, “it’s not like their mother needs to say ‘cow’ 10,000 times”—a number that’s often required for deep-learning systems. Humans generally learn new concepts from just one or two examples.

There is a nice review of deep learning in PNAS. The spoofing referred to is an ‘adversarial patch’ — a patch comprising an image of something else. In the example here, a mini-image of a toaster confuses the AI such that a very large banana is seen as a toaster (the paper is here on arXiv — an image is worth more than a thousand of my words).
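
For readers who want to see roughly what such an attack involves, here is a hedged toy sketch of the adversarial-patch idea from the arXiv paper, not the authors’ implementation: a small patch is optimised by gradient ascent so that a classifier reports a chosen target class whenever the patch appears in an image. The model choice, patch size, fixed placement and use of random stand-in images are all assumptions made for brevity.

```python
# Toy sketch of the adversarial-patch idea — not the published method, which
# trains over many natural photos with random patch locations, rotations and
# scales.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():          # only the patch is optimised
    p.requires_grad_(False)

TARGET = 859                          # 'toaster' in the usual ImageNet labelling
patch = torch.rand(3, 50, 50, requires_grad=True)   # the learnable sticker
optimiser = torch.optim.Adam([patch], lr=0.01)

def paste(images, patch, y=80, x=80):
    """Return copies of the images with the patch pasted at a fixed spot."""
    out = images.clone()
    out[:, :, y:y + patch.shape[1], x:x + patch.shape[2]] = patch
    return out

for step in range(100):
    images = torch.rand(8, 3, 224, 224)     # stand-in for a batch of real photos
    logits = model(paste(images, patch))
    loss = -logits[:, TARGET].mean()         # push every prediction towards 'toaster'
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    patch.data.clamp_(0, 1)                  # keep the patch a printable image
```

Whether this crude version converges is beside the point; the shape of the loop is what conveys the worry in the PNAS piece — the network’s ‘knowledge’ is brittle enough that a small optimised sticker can override everything else in the scene.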

Hinton, one of the giants of this field, is of course referring to Plato’s problem: how can we know so much given so little (input)? From the dermatology perspective, humans may still be smarter than the current machines in the real world, but pace Hinton our training sets need not be so large. They do, however, need to be a lot larger than n=2. The great achievement of the 19th-century clinician masters was to gather disparate appearances under one ‘concept’. Remember the mantra: there is no one-to-one correspondence between diagnosis and appearance. The second problem with humans is that they need continued (and structured) practice: the natural state of clinical skills is to get worse in the absence of continued reinforcement. Entropy rules.

Will things change? Yes, but radiology will fall first, then ‘lesions’ (tumours), and then rashes — the latter I suspect after entropy has had its way with me.

Annual Review of the ‘business’ that is ed-tech by Audrey Watters.

Ed-tech is a confidence game. That’s why it’s so full of marketers and grifters and thugs. (The same goes for “tech” at large.)

Audrey Watters

“criticism and optimism are the same thing. When you criticize things, it’s because you think they can be improved. It’s the complacent person or the fanatic who’s the true pessimist, because they feel they already have the answer. It’s the people who think that things are open-ended, that things can still be changed through thought, through creativity—those are the true optimists. So I worry, sure, but it’s optimistic worry.” Jaron Lanier, We Need to Have an Honest Talk About Our Data

Models of our mind and communities

by reestheskin on 18/12/2018


Google’s AI Guru Wants Computers to Think More Like Brains | WIRED

This is from an interview with Geoffrey Hinton who — to paraphrase Peter Medawar’s comments about Jim Watson — has something to be clever about. The article is worth reading in full, but here are a few snippets.

Now if you send in a paper that has a radically new idea, there’s no chance in hell it will get accepted, because it’s going to get some junior reviewer who doesn’t understand it. Or it’s going to get a senior reviewer who’s trying to review too many papers and doesn’t understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that’s really bad…

What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That’s I think the main downside of the fact that we’ve got this inversion now, where you’ve got a few senior guys and a gazillion young guys.

I would make a few comments:

  1. First, the history of neural nets is long: even people like me had heard about them in the late 1980s. The history of ideas is often like that.
  2. The academy is being sidetracked into thinking it should innovate or develop ideas that, whilst important, are not revolutionary. Failure should be the norm, rather than the continued treadmill of grant income and papers.
  3. Scale and genuine discovery — or the proper functioning of peer groups — seldom go together.
  4. Whilst most of the really good ideas are still out there, it is possible to create structures that stop people looking for them.
  5. Hinton makes a very important point in the article with broad relevance. He argues that you cannot judge (or restrict the use of) AI on the basis of whether or not it can justify its behaviour in terms of rules or logic — you have to judge it on its ability to work, in general. This is the same standard we apply to humans, or at least we did, until we thought it wise or expedient to create the fiction that much of human decision making is capable of conscious scrutiny. This applies to medicine, to the extent that clinical reasoning is often a fiction that masters like to tell novices about. Just-so stories, to torment the young with. And elsewhere in the academy, in the outlandish claims made for changing human behaviour by signing up for online (“human remains”) courses (TIJABP).

All has been said before, I know, but no apology will be forthcoming.

The importance of obsession

by reestheskin on 16/12/2018


How a Welsh schoolgirl rewrote the rules of publishing | Financial Times by Gillian Tett

In 2011, Beth Reeks, a 15-year-old Welsh schoolgirl studying for her GCSE exams, decided to write a teenage romantic novel. So she started tapping on her laptop with the kind of obsessive creative focus – and initial secrecy – that has been familiar to writers throughout history. “My parents assumed I was on Facebook or something when I was on my laptop – or I’d call up a document or internet page so it looked like I was doing homework,” she explained at a recent writers’ convention. “I wrote a lot in secret… and at night. I was obsessed.”

But Reeks took a different route: after penning eight chapters of her boy-meets-girl novel, The Kissing Booth, she posted three of them on Wattpad, an online story-sharing platform …. As comments poured in, Reeks turned to social media for more ideas. “I started a Tumblr blog and a Twitter account for my writing. I used them to promote the book…[and] respond to anyone who said they liked the story,” she explained in a recent blog post. 

… while Reeks was at university studying physics, her work was turned into an ebook, then a paperback (she was offered a three-book deal by the mighty Random House) and, this year, Netflix released it as a film, which has become essential viewing for many teenage girls.

Norman’s Law of eLearning Tool Convergence

by reestheskin on 22/10/2018


Maybe more of a theory than a law, but still:

Any eLearning tool, no matter how openly designed, will eventually become indistinguishable from a Learning Management System once a threshold of supported use-cases has been reached.

They start out small and open. Then, as more people adopt them and the tool is extended to meet the additional requirements of the growing community of users, eventually things like access management and digital rights start getting integrated. Boil the frog. Boom. LMS.

Norman’s Law of eLearning Tool Convergence – D’Arcy Norman dot net

Publishers and universities.

by reestheskin on 07/10/2018


It is easy to make facile comparisons between universities, publishing, and the internet. But it is useful to explore the differences and similarities, even down to the mundane production of ‘content’.

This is from Frederic Filloux from the ever wonderful Monday Note:

Dear Publishers, if you want my subscription dollars (or euros), here is what I expect…

The biggest mistake of news publishers is their belief that the presumed uniqueness of their content is sufficient to warrant a lifetime of customer loyalty.

The cost of news production is a justification for the price of the service; in-depth, value-added journalism is hugely expensive. I’m currently reading Bad Blood, John Carreyrou’s book about the Theranos scandal (also see Jean-Louis Gassée’s column about it last week). This investigation cost the Wall Street Journal well over a million dollars. Another example is The New York Times, which spends about $200m a year on its newsroom. The cost structure of news operations is the main reason why tech giants will never invest in this business: the economics of producing quality journalism are incompatible with the quantitative approach used in tech, which relies on Key Performance Indicators or Objectives and Key Results.

In France, marketers from the French paid-TV network Canal+ prided themselves on their subscription management: “Even death isn’t sufficient to cancel a subscription,” as one of them told me once.

Carrot weather gets it right again

by reestheskin on 29/09/2018


Facebook accounts hacked? I thought that was the feature, not the bug.


Carrot weather — the weather app with attitude.


Nature cannot be fooled — only investors

by reestheskin on 24/09/2018


Two quotes from Bad Blood: Secrets and Lies in a Silicon Valley Startup, by John Carreyrou. Only without much silicon.

“Henry, you’re not a team player,” she said in an icy tone. “I think you should leave right now.” There was no mistaking what had just happened. Elizabeth wasn’t merely asking him to get out of her office. She was telling him to leave the company—immediately. Mosley had just been fired.

He also maintained that Holmes was a once-in-a-generation genius, comparing her to Newton, Einstein, Mozart, and Leonardo da Vinci.

The reality distortion field lived on. Medicine is indeed tricky.

It is all about incentives

by reestheskin on 21/09/2018


This is a scary story. But the lesson is (yet again) our inability to understand what makes humans tick.

The Untold Story of NotPetya, the Most Devastating Cyberattack in History | WIRED

How Maersk was taken down by Russian malware, and how it recovered. The passage that got the attention is the bit about flying a domain controller backup in from Ghana (the only one that survived). The one that matters is that they were still running Windows 2000 on some servers and hadn’t carried out a proposed security revamp because it wasn’t in the IT managers’ KPIs and so wouldn’t help their bonuses. Link  

Via Ben Evans

Radiologists and platforms

by reestheskin on 20/08/2018


It is not only taxi drivers that are being “uberised” but radiologists, lawyers, contractors and accountants. All these services can now be accessed at cut rates via platforms.

FT

The NHS became such a platform, for good and bad. That is the real lesson here. The tech is an amplifier, but the fundamentals were always about power.

MOOCs revisited

by reestheskin on 09/07/2018


One selling point of MOOCs (massive online open courses) has been that students can access courses from the world’s most famous universities. The assumption—especially in the marketing messages from major providers like Coursera and edX—is that the winners of traditional higher education will also end up the winners in the world of online courses.

But that isn’t always happening.

In fact, three of the 10 most popular courses on Coursera aren’t produced by a college or university at all, but by a company. That company—called Deeplearning.ai—is a unique provider of higher education. It is essentially built on the reputation of its founder, Andrew Ng, who teaches all five of the courses it offers so far. Link

The MOOC story is like so much of tech — or drug discovery for that matter. Finding a use for a drug invented for another reason often offers the biggest payback. This story has barely begun.

AI winter, revisited

by reestheskin on 25/06/2018


Hype is not fading, it is cracking.

I like the turn of phrase. It is from a post on the coming AI winter. Invest wisely.

AI winter – Addendum – Piekniewski’s blog

Pave paradise, and put up a parking lot

by reestheskin on 19/06/2018


A Magic Shield That Lets You Be An Assh*le? – NewCo Shift

The Internet of the 1990s was about choosing your own adventure. The Internet of right now over the last 10 years is about somebody else choosing your adventure for you.

link

“They took all the trees, put ’em in a tree museum, and they charged the people a dollar and a half just to see ’em…”


Images aren’t everything — well, sometimes, maybe they..

by reestheskin on 12/06/2018


“It’s quite obvious that we should stop training radiologists,” said Geoffrey Hinton, an AI luminary, in 2016. In November Andrew Ng, another superstar researcher, when discussing AI’s ability to diagnose pneumonia from chest X-rays, wondered whether “radiologists should be worried about their jobs”. Given how widely applicable machine learning seems to be, such pronouncements are bound to alarm white-collar workers, from engineers to lawyers.

Economist

The Economist’s view is (rightly) more nuanced than Hinton’s statement on this topic might suggest, but this is real. For my own branch of clinical medicine, too. The interesting thing for those concerned with medical education is whether we will see the equivalent of the Osborne effect (and I don’t mean that Osborne effect).

Power, order and scale

by reestheskin on 31/05/2018


This is some text I recognise, but I had forgotten its source: Bruce Schneier.

Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized—all outliers—are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.

This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.

This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.

Bruce Schneier

‘The most important thing humanity has ever built.’

by reestheskin on 28/05/2018


Well, this was the modest description of a ‘new’ way to test blood. Except it wasn’t. The reality distortion field in hyperspace. If you don’t know the Theranos story — or doubt the importance of real journalism — have a look.

Link

The journalist who broke the story, John Carreyrou, has a book coming out soon. Jean-Louis Gassée, a shrewd observer of Silicon Valley, has a nice piece about it. Note the turtle neck.