A judge caused a stir in Colombia by announcing that he used the artificial intelligence chatbot ChatGPT to rule on a case concerning an autistic child, according to corroborating sources on Thursday.
“This opens up immense prospects. Today it could be ChatGPT, but in three months it could be any other alternative that facilitates the drafting of legal texts on which a judge can rely,” Judge Juan Manuel Padilla said on local radio. “However, the goal is not to replace the judges,” he stressed.
In a Jan. 30 ruling, he decided on a mother’s request that her autistic son be exempted from paying for medical appointments, treatment and transportation to hospitals, as the family lacked the funds needed to pay for them.
Mr Padilla ruled in favour of the child and stated in his judgment that he had consulted the chatbot ChatGPT in reaching his decision.
“Is an autistic minor exempted from paying moderation fees for his therapies?” the judge asked, according to the transcript in his decision. The app replied: “Yes, that’s correct. Under Colombian law, minors diagnosed with autism are exempt from paying moderation fees for their therapies.”
“Judges are not fools. Asking questions of the application does not mean we cease to be judges, thinking beings,” Mr Padilla commented.
According to him, ChatGPT now does, “in an organized, simple and structured way,” what was previously provided by “a secretary,” which “could improve response times in the judicial sector.”
These statements sparked a lively debate.
Professor Juan David Gutiérrez of Rosario University noted, in particular, that he received different answers when he put the same questions to the chatbot. “As with other AIs in other fields, under the pretext of supposed efficiency, fundamental rights are put at risk,” he warned.
The artificial intelligence chatbot ChatGPT has been causing a sensation worldwide since November. Created by the Californian company OpenAI, it works on the basis of algorithms and huge databases.
It produces text from simple prompts and can be used by lawyers, engineers and journalists, among others, with attendant risks of manipulation or misinformation.
“I suspect many of my colleagues will join in and start ethically crafting their judgments with the help of artificial intelligence,” Padilla said.