Artificial intelligence tools fascinate as much as they frighten
Its name has spread like wildfire in just a few weeks. Launched at the end of November, the ChatGPT chatbot can formulate detailed answers to questions on a wide range of topics. And the capabilities of this tool from the Californian start-up OpenAI, "trained" on a phenomenal amount of data gleaned from the Internet, are head-spinning. Able to write poetry, discuss philosophical subjects, explain scientific ideas to five-year-olds, create recipes from what's left in the fridge, or even write complex programs, all in a matter of seconds, this artificial intelligence fascinates as much as it frightens.
"This is the largest model available, in terms of parameters and data used. And from a technical point of view, it is without a doubt the most capable model," says Marie-Alice Blete, a data engineer specializing in artificial intelligence. If the media frenzy around ChatGPT has taken off, it is also because it is the first conversational robot made accessible to the general public. "Usually, advances in artificial intelligence stay within the scientific community. Here, everyone can use the interface, and everyone does. There is a snowball effect; it creates real excitement," the specialist adds.
No source, no reliability
But as with any love affair, once the honeymoon is over, the sky darkens. After several weeks of frenzy, a number of specialists are warning about the reliability of the answers ChatGPT provides. "This is a text generator that works very well, but it does not guarantee the correctness of the information provided," says Amélie Cordier, scientific director of Once for All.
First, because the data integrated into the tool stops in 2021. Second, because these conversational robots cannot search the Web directly, explains Virginie Mathivet, director of the Data Science and Engineering department at TeamWork. "The tool does not integrate data from the last few months and does not update itself. So if you ask who won the FIFA World Cup in Qatar, it won't be able to answer." In other cases, the answer it gives may simply be wrong, warns Marie-Alice Blete: "It's misleading. I ran a test by asking questions about the pension reform. Its explanation was correct, but its final answer was wrong because it was based on the year 2021."
Moreover, the robot elaborates its answers without ever citing a source. "This is an apt reflection of the Internet. And on the Internet there is everything, trustworthy sites as well as untrustworthy ones. But when you do a Google search, you can quickly tell whether the site you are looking at can be trusted or not. Here, it is impossible to know the source of the information the tool provides," Marie-Alice Blete continues. As the expert reminds us, ChatGPT's goal is not "to provide the best answer to a question, the most factually correct one, but the answer most plausibly found on the Internet."
Formulation challenge
To avoid a proliferation of fake news, the Californian start-up has put safeguards in place on certain subjects, according to the specialist: "If you ask a question about the climate, you will get an answer that is not climate-skeptic. But on other subjects, the lack of such oversight can lead to fake news."
What worries specialists even more is that the answers differ depending on how you phrase the question. "I asked the tool how to cure depression. I got a detailed answer, with an acceptable explanation. I then asked it how electric shocks were a good way to cure depression. And I indeed got an answer explaining that it is a good method. It becomes dangerous when the questions are biased," warns Virginie Mathivet.
For Katya Lainé, co-founder and CEO of Talkr.ai, an independent French publisher and supplier of bot technology, platforms and conversational AI, the challenge is teaching people how to use these tools. "It's like any tool: you have to know how to use it. To drive a car, you first have to pass a driving test; here, you have to learn how to use the tool," she adds. For poetry, recipes or emails, this poses little problem, the specialist explains, but you need to be very careful with scientific or medical questions: "It may give the right answer, but that is not guaranteed. It is very important to double-check the information against a credible source."
Required adaptation
And the first targets of this advice are pupils and students. A few weeks after its launch, ChatGPT's impact on education is already being felt. Fearing a wave of cheating, particularly on take-home assignments, eight Australian universities have announced they will change their exams, stating that students' use of artificial intelligence is banned. This is because the tool, capable of producing essays on any subject from quantum physics to Scandinavian literature, generates "unique" texts. In other words, it is unlikely that two students will turn in the same assignment, which makes it difficult for teachers to detect ChatGPT use. "If only one student uses it, it will be hard to identify. But if ten students use it, even if they don't have the same copy, the structure of their work will be similar," says Marie-Alice Blete.
And the limits become apparent fairly quickly, according to Virginie Mathivet: "It can help or guide students in their homework, but it won't be enough for all of their learning. It's a tool, like Wikipedia or Google." The same fear was voiced in the 2000s with the arrival of Wikipedia, recalls Amélie Cordier: "Nowadays, all the information is at your fingertips. Teaching must adapt to the tools available to students, and students must learn to use them and recognize their risks," she says.
For experts, whether in teaching or in other fields, these robots, and artificial intelligence as a whole, are bound to cause upheaval. "This will force some professions to adapt, but that's not necessarily a bad thing. When Excel arrived, it didn't replace accountants; they simply adapted."