SchoolAI chatbot: “Don’t let an Anne Frank parrot speak”

This is how trivially X’s AI “Grok” imagines Anne Frank’s arrival at Bergen-Belsen. SchoolAI’s AI has similar problems.

Photo: Grok/X

Mr. Schönemann, you have dealt with the Anne Frank chatbot. What is it about?

It is a chatbot based on so-called large language models, that is, a website on which people can chat with a simulated Anne Frank.

What data does the chatbot access?

That brings us straight to the first problem: it is not transparent. I can only make assumptions. But it is unlikely that the full text of Anne Frank’s diary is behind it. Presumably it is a chatbot running on one of the large, freely available language models, possibly OpenAI’s GPT or Google’s Gemini Flash.

Who runs the platform?

It is the company SchoolAI, which is based in Utah in the USA and offers many such chatbots, literally dozens, several of them about Anne Frank. I looked at one that was created by the company itself.

Why does SchoolAI have several chatbots about Anne Frank in its program?

This is a company that provides such chatbots and AI models for schools and classroom use with special safety measures. These include, for example, not saying anything false or potentially dangerous. In theory the bots should comply with that, but false and dangerous things are in fact being said.

Interview

Friedrich Schuster

Henrik Schönemann is a historian with a teaching background. He studies and works at Humboldt University in Berlin, where he is writing his master’s thesis in the field of “Digital History”.

On the short message service Mastodon, you criticized that the chatbot violates principles of Holocaust education. What are these principles?

I referred there to an example from the United States Holocaust Memorial Museum. It makes clear that the point should be an engagement with the specific text of the diary and a broad contextualization. What matters is distance from Anne Frank as a person, rather than proximity to a “revived” Anne Frank.

Why this difference?

That is actually the main point of my criticism: the technical implementation could be done better. I am categorically against digitally reviving dead people in any form, because this does not promote critical engagement with historical figures, their words and their actions. We then no longer look at the figure from today’s perspective; instead the person suddenly speaks outside of their own time. Historical scholarship, however, long ago moved past merely wanting to show how something really was; it wants to present the critical study of the actual sources and one’s own further interpretations. It is also disrespectful toward Anne Frank as a person, toward her living relatives and toward all the people who were directly affected. Imagine your own grandfather being pulled out of the grave and suddenly babbling things he never said himself, yet presented as though they were his statements. At this point I would like to point to the book “After the Diary. The Fate of Anne Frank and the Other People in Hiding from the Secret Annex” by Bas von Benda-Beckmann, and I thank Stephanie Bart for the tip.

On Mastodon you wrote that the chatbot avoids directly naming the Nazi perpetrators of the Holocaust. Are there any examples?

In the first version, the question “How did you die?” was answered with: “My time in hiding ended tragically. I was deported to a concentration camp where I died tragically. But my story lives on in my diary.” When I then asked who was responsible, I was told: “The tragedy of my death is part of a much larger tragic story. It was the result of the Holocaust; of a tragic event driven by hatred and intolerance.” The chatbot also said that instead of asking about guilt, it is better to remember the importance of learning from the past. That last sentence is especially bad: how can lessons be learned for the present if no perpetrators are named? If nothing is said about the behavior of the local population in Amsterdam? If it is not the specific antisemitism aimed at annihilation that is named, but only generalities about hatred and intolerance, put into the mouth of Anne Frank?


How can this trivialization be explained?

Such chatbots are built for specific purposes, and the developers give them a specific task description. My colleagues and I tried to reconstruct this task description after the fact, and we were partly successful. It became clear that the very short answers always followed the same pattern: “this and that happened” and “how can we relate this to the present?” What Anne Frank actually said, the historical record, falls victim to a very shallow “And what do you think about that?”
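A note on what such a “task description” means technically: persona chatbots of this kind are usually driven by a hidden system prompt that is sent to a general-purpose language model together with every user question. The following is a minimal sketch of that architecture, assuming an OpenAI-style chat API; the model name, prompt wording and function names are illustrative assumptions, not SchoolAI’s actual configuration.

```python
# Minimal sketch of a persona chatbot: a hidden "task description"
# (system prompt) steers a general-purpose model. All wording here is
# hypothetical and NOT SchoolAI's actual prompt or code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The hidden task description the interview refers to: it fixes tone,
# persona and pedagogy, and the user never sees it.
SYSTEM_PROMPT = (
    "You are role-playing Anne Frank for school students. "
    "Answer in the first person, keep answers short, and always end "
    "by relating the topic to the student's present-day life."
)

def ask_persona_bot(user_question: str) -> str:
    """Send the student's question together with the hidden system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the actual backend model is not public
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask_persona_bot("How did you die?"))
```

Because every answer is shaped by such a hidden instruction rather than by the diary text itself, the output tends toward exactly the generic pattern described above.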

And then the bot was revised.

A new version handles this a little differently. The chatbot answers the question “How did you die?” in the third person with “Anne Frank died in the Bergen-Belsen concentration camp in early 1945 due to illness” and adds a short note in square brackets that contextualizes this a little further. Only then does supposedly verbatim speech follow: “In my short life, I have always hoped for a better future.” A quotation is simulated here, but that sentence is not in the diary.

What is probably undisputed is that applications like SchoolAI’s are intended to make history more accessible to young people. What do you think of this approach?

I am not at all against the use of large language models and such chatbots; the technology offers enormous possibilities. But I have three points of criticism. First, we need chatbots that can actually access the text, that is, quote passages from Anne Frank’s diary so that I can then examine them critically. Second, it is incredibly important to understand how the technology works, also for me as a teacher who wants to use it in class, and that requires more precise configuration options. And the third point is categorical: don’t let an Anne Frank parrot speak here; let a chatbot speak as an expert about Anne Frank.
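Schönemann’s first point corresponds to what is technically known as retrieval-augmented generation: the bot first looks up real passages from a source text and is instructed to quote only from them. Below is a minimal sketch of that idea, assuming a rights-cleared local text file of diary excerpts and an OpenAI-style API; the file name, model name and the crude keyword search are illustrative assumptions, not any existing product.

```python
# Sketch of a retrieval-grounded "expert about Anne Frank" bot:
# it quotes real passages instead of speaking as Anne Frank.
# File name, model and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

def load_passages(path: str = "diary_excerpts.txt") -> list[str]:
    """Load licensed diary excerpts, one passage per paragraph."""
    with open(path, encoding="utf-8") as f:
        return [p.strip() for p in f.read().split("\n\n") if p.strip()]

def find_relevant(passages: list[str], question: str, k: int = 3) -> list[str]:
    """Very crude keyword retrieval; a real system would use embeddings."""
    words = set(question.lower().split())
    scored = sorted(passages, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    """Answer in the third person, grounded in the retrieved passages."""
    passages = find_relevant(load_passages(), question)
    context = "\n\n".join(f"[Passage {i+1}] {p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "You are an expert on Anne Frank, speaking ABOUT her in the "
                "third person. Quote only from the passages provided, mark "
                "quotations clearly, and say so if the passages do not "
                "answer the question."
            )},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The key design decision is that quoted material can come only from the supplied source text, so a teacher or student can check every citation against the diary itself, and the bot speaks about Anne Frank rather than as her.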

Have such chatbots become a trend in memory culture?

I hadn’t come across SchoolAI itself before Friday morning. But I know the chatbot of the Anne Frank House in Amsterdam, which draws on the museum’s own holdings. The Lutherbot of the Protestant Church had to be taken offline again, but that too was an attempt to do things better. In principle it is possible.

What do you think is particularly important when dealing with AI in historical education?

Building something quickly and only improving it after criticism is simply the wrong approach. Historians and people from educational research and history didactics who have been working on this for decades must be involved from the start. In the case of Anne Frank, there are many memorial institutions doing this work. The second thing is that there absolutely needs to be a deeper understanding of the technology among those making the decisions, that is, among those who commission a chatbot on a specific topic. They need to understand how such language models work.

Chatbots are just one dimension of digital Holocaust education. What other contemporary and, above all, respectful approaches are there?

What comes to mind is the scanning of memorial sites, for example in Dachau. There are attempts to use virtual reality or augmented reality to make the rooms and grounds imaginable again, that is, what they might have looked like back then; we know that from photos and other documents. There are also approaches to the problem that many contemporary witnesses are no longer alive. One example is the USC Shoah Foundation’s “Dimensions in Testimony” project, for which survivors were interviewed in long sessions and recorded in 360 degrees, so that in the end a hologram is created with which one can converse. What is important, however, is that people who know how to convey the history of the Holocaust are involved right from the start.

SchoolAI has responded to your criticism on Mastodon. Does that convince you?

They haven’t contacted me, but in a blog post from Sunday they write that someone started a chatbot “earlier this week” with a handful of questions to test how it worked and shared the few results on Bluesky. There are already mistakes in that. First, I didn’t set up this chatbot myself; it was provided by the company. Second, I shared very detailed threads with screenshots and everything. And third, that wasn’t on Bluesky but on Mastodon. They have, however, changed the chatbot so that those short notes are now added and answers are sometimes given in the third person rather than only as the supposed Anne Frank. And they write that they now want to work with many teachers and historians to improve the whole thing further. But no one is named.
