oh i see that someone who got his phd at the media lab decided to use ChatGPT as a therapy stand-in for thousands of patients without their clear consent, was surprised when the patients hated it, and is now surprised that people are pissed about it.

wild.

oh i also see that he declared the study exempt from IRB and ethics review because it was for a business with no intent to publish in an academic journal, and because he thinks filing for IRB approval is hard.

_wild_.

@oddletters he posted a correction saying that people knew about and consented to the use of GPT-3? Although I don't know whether the people who received therapy consented to their data being used to train a network. Why do you say "no clear consent"?

@pl his correction indicated that the therapists gave consent, but that patients were not informed ChatGPT was being used until later, and once they knew, they were unhappy about it.

@oddletters @pl
I found the story interesting but didn't think about whether/how this is regulated by an IRB.

But instead of throwing regulations at the problem as a vehicle for criticism, I'm curious about two things:

1) What was the risk of harm in this study? I found the risk to be relatively minimal.

2) How does one give informed consent to interacting with GPT if the study hinges on the quality of the relationship with GPT? Doesn't that indirectly pollute the study?

@oddletters @pl @rajvsmachine The unwitting participants were disappointed/disillusioned when they found out they’d been talking to computers. That’s harmful. Perhaps not *significantly* harmful (though I’m skeptical!) but that seems like a question best answered by objective, independent experts on a review board.

@asherlangton @oddletters @pl

I don't agree that disappointment and disillusionment are harmful; that's like saying unhappiness is harmful. They're a natural part of the lived human experience and must be regularly experienced in manageable doses.

But I agree that there should be oversight on what constitutes a manageable dose.

How do you propose AI tools be ethically studied in clinical work?

@rajvsmachine @asherlangton @oddletters gtfo please, I don't care about you or your views. If you want to get therapy from a robot without knowing, that's fine, but you clearly have no understanding of the work an ethics committee does.

@pl @asherlangton @oddletters

It looks like you just want to insult people whose views may not overlap with yours. I'd suggest finding a private forum and leaving public discussion boards to those who can share opinions and viewpoints like mature adults.

@rajvsmachine @asherlangton @oddletters you keep posting under my question without excluding me from the conversation. I never asked for your views.

@pl @asherlangton @oddletters imagine being upset that viewpoints not explicitly asked for are offered on public threads.

@rajvsmachine You clearly don't understand this medium yet

@pl you are correct. But does anybody?

@rajvsmachine it's called thread hijacking. Next time, only @-mention the people you want to talk with, and ask yourself whether your opinion is wanted by a non-mutual.

@pl got it. I usually just press the reply button. Seems a bit convoluted to figure out who to include when I can't even scroll up to see who all the people involved are.

@rajvsmachine Blocking or muting works better than public scolding.

@pl @asherlangton @oddletters

Sorry, I need to defend myself here: the study concluded the opposite, that getting therapy from a robot is *not* good!

@rajvsmachine they suffered an adverse effect. That’s harm, by definition. I’m not a psychiatrist or a psychologist, so the extent of the harm is not something I can speak to with any authority or experience. But I think it’s at least plausible that some participants might view this as a betrayal and it could affect their willingness to seek help in the future. Maybe you’re right that the harm isn’t significant, but I don’t think it’s so trivial as to be handwaved away by people who don’t even work in the field.

@asherlangton eh, I think it's perfectly fine to ask questions based on logic when only a subset of those who work in the field are presenting only a subset of opinions about a complicated topic.

Handwaving away harm can be as much of a fallacy as assuming maximum harm, until you realize that "no impact" is actually the classical null hypothesis in any clinical study. Putting harm on an infinite gradient is fine, but then you can't use the same word for every point on that gradient.

@rajvsmachine I’m not sure how it’s relevant that “no impact” is the null hypothesis. It’s unethical to subject people to potentially harmful experiments simply because the level of harm has yet to be established in a controlled study, which is why these reviews are based on the informed opinions of subject matter experts.

@rajvsmachine @asherlangton @oddletters @pl
“How do you propose AI tools be ethically studied in clinical work?”

Ethics review boards have existed for a long time and work across a wide range of scenarios. AI is not so different or special that these conventions are no longer applicable.