Like the horse Hans once did, artificial intelligence sometimes learns differently than it appears to.
Photo: Commons.wikimedia.org/Karl Krall
How creative is artificial intelligence? If you ask ChatGPT, currently one of the most widely used artificial intelligences, the answer is quite clear. Machines are “less flexible and creative than humans, but thanks to the rapid processing of large amounts of data they can be very efficient at solving specific tasks,” says the chatbot.
You shouldn’t believe everything it says. The cognitive researcher Surabhi Nath, who worked at the Max Planck Institute for Biological Cybernetics – an institute that started out in brain research and now also investigates the cognitive processes of machine intelligence – was involved in a comparative study of the creativity of 220 human and 440 machine intelligences. The parameters examined were flexibility and persistence.
“The output was quite similar,” Nath tells “nd.DieWoche”. When it comes to persistence – the task was to find as many alternative names as possible for a brick and a paper clip in just two minutes – there were hardly any differences. With flexibility – in ten minutes, as wide a variety of ideas as possible for using these two objects was to be found; the spectrum ranged from throwing to building – the researchers found: “There were people who were more flexible than others, and artificial intelligences that in turn acted more flexibly than others.” In terms of creativity, people and machines are therefore similar.
More data makes AI more flexible
Interestingly, however, only some of the 440 artificial intelligences examined by the research group from the Max Planck Institutes in Tübingen and Leipzig, the University of Tübingen and the University of Amsterdam were able to carry out the test at all. Those that could included ChatGPT, Google’s chatbot Gemini, the Claude AI – developed by former OpenAI staff as an alternative to ChatGPT – and the Llama AI from the Facebook parent company Meta.
The AIs that operated comparatively flexibly in the brick task, i.e. made semantic jumps that can be interpreted as a sign of creativity, produced poorer results on the paper-clip task. People, on the other hand, who solved the first of the two tasks creatively approached the second just as creatively. Nath attributes this to the following reasons: »We had chosen the brick because most studies so far have used it. We took the paper clip as a comparison because studies on it are not as widespread. Apparently the artificial intelligences simply had more data available on bricks and were therefore more flexible.«
This confirms the common assumption: artificial intelligences are only good at specific tasks if they have been trained for precisely those tasks with a large amount of data. If less data is available, the results are correspondingly poorer.
To draw a comparison with humans: the current language models are partially gifted, in the areas for which they are well trained. However, one should by no means take them for universal geniuses that have the right answer to everything. And even if they have been trained with plenty of data on the right topic, there are considerable restrictions.
One of them is: they simply don’t learn any more. If, for example, you ask OpenAI’s freely available version of GPT-4 about its own knowledge horizon, the answer is: »My knowledge is based on the data that was available to me up to my last training in October 2023. At the moment I am no longer learning, as my training has been completed. This means that I cannot take in new information or develop the way a person would.«
Feedback is not taken into account
Only an update would add further knowledge. And not even the content created in dialogues with users is later available to the chatbot: »As an AI model based on a fixed knowledge base, I cannot learn directly from individual feedback in the ongoing conversation or permanently adapt my knowledge. This means that I can take into account the information you give me during our conversation, but these changes remain temporary and do not influence my future behavior or my knowledge base.«
The learning processes before that point – in GPT-4’s case up to October 2023 – also differ greatly from human learning. Whereas for humans feedback from outside, a positive or negative comment on one’s actions, is a relevant control variable, the currently available chatbots learn only from the data and their own pattern-recognition methods. GPT-4 describes the self-supervised learning it went through like this: »I learned from large quantities of text data in which I myself generated ›labels‹ or annotations in order to recognize patterns and relationships.«
Transferred to human terms, this would be comparable to watching a DIY video on YouTube – without any feedback function – on how to cover a roof, and then doing it yourself. That can work for people with craft training, such as carpenters or bricklayers. In the buildings of most other people, however, one would prefer to spend the night outdoors.
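What this self-supervised »labelling« means in practice can be hinted at with a very small sketch – a toy illustration under simplifying assumptions, not GPT-4’s actual training procedure: the »labels« are simply the next words in the raw text itself, so no human annotation and no outside feedback are needed.

```python
# Toy sketch of self-supervised learning on text (not GPT-4's actual training
# code): the "labels" are simply the next word in the text itself, so no human
# annotation is needed.
from collections import defaultdict, Counter

text = "the brick can be used as a paperweight or as a doorstop".split()

# Build (context, next word) training pairs directly from the raw text.
pairs = [(tuple(text[:i]), text[i]) for i in range(1, len(text))]

# A trivial "model": count how often each word follows the previous word.
follow = defaultdict(Counter)
for context, nxt in pairs:
    follow[context[-1]][nxt] += 1

# Prediction = the most frequent continuation seen during "training".
def predict(word):
    return follow[word].most_common(1)[0][0] if follow[word] else None

print(predict("as"))  # -> "a", learned purely from the text's own structure
```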
The limits of unsupervised and supervised learning were recently demonstrated by another study from the Max Planck Institute for Biological Cybernetics. Systematic mistakes made by an AI trained in this way are now being researched as the “Clever Hans effect”. The name goes back to a Berlin horse at the beginning of the last century that was supposedly able to do arithmetic. It indicated the solution to simple maths problems by tapping its hoof. A scientific commission, however, found that “Clever Hans” was above all an extremely good observer. From the body language and facial expressions of the questioners it could tell, while tapping, when the correct solution was approaching, and then set the – often correct – final tap with its hoof.
“The AI should be a partner in finding a solution.”
Surabhi Nath, cognitive researcher
It is similar with artificial intelligence. In 2018, a research group from France and Canada found out why an image-recognition AI misclassified some huskies as wolves. It had been trained with pictures of wolves in the snow – and wherever a husky appeared in the snow, the AI detected a wolf.
That case is still merely a curiosity. However, a research group at TU Berlin recently found that AIs used in medicine, when analyzing X-ray images, evaluated not only the images themselves but also text displayed alongside them – which led to incorrect diagnoses.
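How such »Clever Hans« shortcuts arise can be sketched in a few lines – a constructed toy example, not the cited husky or X-ray studies: when a spurious feature (here a made-up »snow in the background« signal) correlates with the label during training, a simple classifier leans on it and fails as soon as that correlation breaks.

```python
# Toy demonstration of the "Clever Hans" effect (constructed example, not the
# studies cited above): a classifier latches onto a spurious background feature.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)  # label: 1 = "wolf", 0 = "husky"

# Feature 0: a genuinely informative but noisy "animal shape" feature.
shape = y + rng.normal(0, 1.5, n)
# Feature 1: "snow in background" - in the training data it almost always
# co-occurs with wolves, although it says nothing about the animal itself.
snow = y * 0.95 + (1 - y) * 0.05 + rng.normal(0, 0.1, n)
X_train = np.column_stack([shape, snow])

# Simple logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

print("learned weights (shape, snow):", w.round(2))  # the snow weight dominates

# Test set where the shortcut breaks: huskies in snow, wolves on grass.
y_test = rng.integers(0, 2, n)
shape_t = y_test + rng.normal(0, 1.5, n)
snow_t = (1 - y_test) * 0.95 + y_test * 0.05 + rng.normal(0, 0.1, n)
X_test = np.column_stack([shape_t, snow_t])
pred = (1 / (1 + np.exp(-(X_test @ w + b))) > 0.5).astype(int)
print("accuracy once the shortcut fails:", np.mean(pred == y_test))
```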
Advantages of reinforcement learning
The method of reinforcement learning comes closer to the usual human learning process. Above all, it involves feedback on intermediate results, praise and constructive criticism. The Canadian psychologist and computer scientist Richard Sutton has long advocated training artificial intelligences through reinforcement learning.
According to Sutton, the advantages would be: less data is needed for training, and the effectiveness is potentially higher. He describes reinforcement learning as the third pillar of training artificial intelligence – alongside unsupervised learning, which was typical of early data-gathering AIs, and the currently dominant self-supervised learning.
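How learning from feedback alone can work can be hinted at with a very small reinforcement-learning sketch – a generic multi-armed-bandit toy with assumed reward probabilities, not Sutton’s or the Tübingen group’s actual setup: the program never sees labelled data, only praise or criticism for what it just did, and gradually prefers the actions that are rewarded.

```python
# Minimal reinforcement-learning sketch (a generic toy, not any specific system
# from the article): an agent learns from reward feedback alone.
import random

random.seed(1)

actions = ["answer_A", "answer_B", "answer_C"]
# Hidden "teacher" (assumed for this toy): answer_B gets praise most often.
reward_prob = {"answer_A": 0.2, "answer_B": 0.8, "answer_C": 0.4}

values = {a: 0.0 for a in actions}   # estimated value of each action
counts = {a: 0 for a in actions}

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)

    # Feedback from the environment: praise (1) or criticism (0).
    reward = 1 if random.random() < reward_prob[action] else 0

    # Incremental update of the value estimate from feedback alone.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print({a: round(v, 2) for a, v in values.items()})
# After enough feedback the agent prefers "answer_B" - no labelled data needed.
```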
Max Planck doctoral researcher Nath is also convinced by this method. »We are doing exactly that right now! We are trying to develop a model to find out what good rewards could be,« she says. She is first researching people and hopes to be able to transfer these findings later to artificial language models. »A machine-learning engineer would have to say exactly how this works. But my intuition tells me that reinforcement learning will become increasingly important.«
Risks of AI in the education sector
A great danger now being discussed is that when using AI not only does one’s own cognitive performance decline, but unused brain areas may even weaken. Brain and machine researcher Nath also sees this danger. For the use of AI in the education sector, she therefore suggests: »You could train the AI not to give the final answer, but an intermediate step. It should be a partner in finding a solution.« So here too, feedback – the dialogue – is a crucial component of learning and of developing creativity.
Incidentally, according to Nath, humans will keep one advantage for a while yet: the creative leaps that arise because people get bored and look for new solutions. The next stage for artificial intelligence may be the development of a concept of boredom as a trigger for creativity.