More than 80 years after the Holocaust, historical truth is under threat as never before. Ignorance, lack of knowledge, and outright denial intersect with an algorithmically amplified post-truth culture that, in combination with AI-generated fake history, undermines trust in scientific research and historiography.

When the Nazi Party of Adolf Hitler came to power in Germany on January 30, 1933, the plan to identify, isolate, and ultimately remove the Jewish population of Germany had already been made. With the seizure of power, the Nazi movement gained the political and administrative options necessary to implement this plan, driven by antisemitic ideology. What was missing was a way to quasi-automatically register Jews within the sphere of the Third Reich—in other words, a method for the mass collection of data on this and other excluded groups and the comprehensive analysis of that statistical information, a process today described as datafication.

In the absence of a suitable machine—essentially, a computer—the Nazi administration used punch cards developed by the American company IBM to collect this data. With the help of these punch cards, originally developed in the late nineteenth century for the US Census Bureau, detailed personal data could be recorded automatically and rapidly analyzed using corresponding machines. On the basis of such information, the Nazis were able to identify and “manage” entire population groups—an administrative logic that, in the context of the regime’s antisemitic measures, included the deportation and mass murder of those identified as Jewish.

Nearly every concentration camp had a corresponding department with tabulating machines or card organizers. The code for Jewish prisoners was “6.” For Edwin Black, historian of this alliance between IBM and the Nazi regime, this marks a point of origin of our information age: “The Information Age, meaning the era of the individualization of statistics, or the identifying and quantifying of a specific person within an anonymous count,” Black wrote in 2021 in an article for the Begin-Sadat Center for Strategic Studies at Bar-Ilan University, “was born not in Silicon Valley, but in Berlin in 1933.”

We should take this specific interplay of information technology and mass murder as an occasion to reflect, in the context of Holocaust remembrance, on the possibilities and ethical limits of processing and statistically analyzing large quantities of data. Yet the question of the extreme consequences of datafication—especially in an age of vast accumulations of digital data and the associated capacity to generate synthetic information, texts, and images—concerns not primarily the technologies themselves, but above all the social contexts in which they are deployed.

Auschwitz, Poland (credit: CHEN SCHIMMEL)

The enabling condition for the use of IBM machines in the realization of the Holocaust lay first and foremost in the delusion at the regime’s core: the antisemitic foundation of Nazi ideology, which demanded the exclusion and eventual murder of the Jewish population. It lay, too, in the willingness of a US corporation to continue supplying punch cards, the material necessary to apply this technology in such a context.

Computation and dehumanization

The Jewish émigré and computer scientist Joseph Weizenbaum, later known as a critic of the social uses of computer technology—especially in the military—once devised a thought experiment to illustrate this problem. In what he himself described as a disturbing vision, he imagined a concentration camp almost entirely controlled and administered with the help of computers. When one prisoner asks whether computers could not also be used for reasonable and humane purposes, another prisoner simply replies: “Yes, but not in a concentration camp.”

Weizenbaum, who developed one of the first chatbot simulations at the Massachusetts Institute of Technology (MIT), was alarmed by the extent to which human qualities were attributed to his machine—and, conversely, by the tendency to view human beings themselves as imperfect and faulty “systems.” He used this story to demonstrate that computation takes place within concrete social and historical contexts. Whether something is reasonable or not depends on these circumstances. This is all the more true in light of today’s complex large language models and their capacity to generate texts, images, programs, and data. To the same extent that the computer is anthropomorphized—critical AI research, drawing on Weizenbaum’s chatbot experiment, now speaks of the “ELIZA effect”—the human being is treated as a machine. This dehumanizing effect is not coincidentally reminiscent of anti-Jewish Nazi propaganda.

As Weizenbaum—who fled the Nazis to the United States as a teenager in 1936—pointed out early on, human beings attribute to machines a power that becomes real precisely because we believe in it and act accordingly. Today, algorithms—present almost everywhere, from social media platforms and online marketplaces to drone programs—appear almost like quasi-organic entities. Yet this perception obscures what is actually at stake in these seemingly purely technological developments: human-made forms of social organization and communication.

When the aleph is missing

When Chaim Leib Pekeris completed the first Israeli computer, WEIZAC (Weizmann Automatic Computer), at the Weizmann Institute of Science in Rehovot in 1955, the scholar of Jewish mysticism Gershom Scholem, then working at the Hebrew University of Jerusalem, suggested that the new machine be named “Golem Aleph.” Pekeris had emigrated to Israel in 1948; his parents and youngest sister had been murdered in the Holocaust. Although Albert Einstein, for one, considered the idea of a computer in Eretz Israel of little practical value, the decision to build a computer at the Weizmann Institute had already been made in 1947—shortly after the Holocaust and a year before the founding of the state. The project began in 1952, and three years later WEIZAC performed its first calculation. In June of that same year, Scholem gave a lecture at the Weizmann Institute in which, with a touch of irony, he referred to Pekeris’s creation as the “Golem of Rehovot.”

He thereby invoked the legend of an artificial being created by human intelligence and controlled by its creator, the legendary Rabbi Loew, whom it serves but whose control it can escape at any moment, unleashing its destructive potential.

In this lecture, Scholem also referred to one of the earliest versions of the Golem story. In this version, the prophet Jeremiah and his son Sira recombine the letters of the alphabet, arranging them in such a way that an artificial human being is formed. On its forehead are written the words “God the Lord is Truth” (Emeth). Yet the creature removes the letter aleph from the word for truth, leaving only the word “dead” (Meth).

Today, Scholem’s reflections can be productively related to the rapidly advancing developments in generative AI, which are based on large language models and the statistical analysis of patterns—ultimately, on the automated synthesis of linguistically encoded training data. Synthetic texts are thus generated on the basis of probability. Where better to study the creative and generative power of language and language models than in Jewish tradition and thought? Yet the story of Jeremiah and Sira also reminds us of the central principle on which these interpretative practices are based: truth, emeth.

The minimal difference made by the letter aleph in Scholem’s story—the tension between truth as principle and self-obligation on the one hand, and death and destruction on the other—is reflected in a particularly striking way in the challenges that algorithms and generative AI pose to our democratic societies. Creative power turns into destructive force where the aleph is missing: where truth becomes negotiable or is deployed strategically, where lies are reframed as the desired “truth.”

The Holocaust and the question of truth

The question of truth—or more precisely, the principle of seeking and preserving truth—is fundamentally bound up with the history of the Holocaust. Policies of exclusion and extermination were based to a significant extent on the reinterpretation of lies as “information.” Nowhere was the destructive power of linguistic manipulation more evident than in the dehumanizing Nazi rhetoric and in its concealing euphemisms and metaphors, such as those that characterize the infamous protocol of the Wannsee Conference on the administrative questions of the “Final Solution” in 1942.

Conversely, securing truth through the collection of information was a central component of documenting Nazi crimes—even while they were still being committed. This is attested by the Oneg Shabbat archive in Warsaw, as well as by the meticulous efforts of the Jewish émigré Robert Kempner and others who, in gathering evidence for one of the subsequent Nuremberg trials, discovered in March 1947 the minutes of the Wannsee Conference authored by Adolf Eichmann. Today, as part of a unique initiative by Yad Vashem, large language models and AI-supported text analysis are helping to identify the names of previously unknown victims of the Holocaust within extensive archival holdings.

At the same time, Holocaust remembrance has repeatedly been confronted by attacks on historical truth—often employing the very instruments of “enlightenment” and pseudo-scientific analysis. A prominent example is that of the self-styled engineer and expert Fred Leuchter, who in a 1988 report prepared for a court case claimed that no people had been murdered in the gas chambers of Auschwitz-Birkenau and Majdanek. Today, even forensic techniques of critical image analysis and data visualization—methods repeatedly used to identify AI-generated images and documents or to make hidden structures and patterns visible—are themselves being used to cast doubt on truth and as tools in disinformation campaigns.

This makes clear that the attack on truth as such—the willingness to manipulate and falsify—is always also an attack on the memory of the Holocaust. That memory depends fundamentally on the credibility of sources and the trust we place in historical testimony and accounts. This is not blind belief, but a form of trust that develops through active engagement with information and the search for truth. Yet this trust is shaken wherever truth becomes a matter of negotiation or is abandoned altogether—where distrust does not lead to critical inquiry but to total negation. Every falsified image, every manipulated piece of information, every misleading post, every technology deployed in a context where the principle of striving for truth and trust is not held as inviolable, is also an attack on the credibility of historical knowledge—and thus on the future of Holocaust remembrance.

The writer is Associate Professor for Visual Culture, Media and German Studies in the Department of Communication & Journalism and at the DAAD Center for German Studies of the Hebrew University of Jerusalem.