
Knuckleporn or phocomelia: GenAI’s dialectics of enlightenment

by Hito Steyerl
Article • 21.05.2025

This is an edited chapter from ‘Medium Hot: Images in the Age of Heat’ by Hito Steyerl, published in May 2025 by Verso Books.


The use of generative AI tools to portray humans has posed persistent problems. Human bodies are complicated. Fingers, movements and faces have turned out to be difficult subjects for image generators, which often produce hands with anywhere between four and seven fingers and a lot of superfluous joints in between. This issue is not trivial. The question is: how do these systems see humans? What does a human body look like when generated from statistics? What other kinds of forces act on this type of representation?

However, these problems are slowly being solved and are about to go extinct, especially as the industry currently seems to be moving away from photorealistic depictions towards an aesthetic of fantasy-oriented illustration. The era when limbs ended up all twisted seems to be over. Well, almost.

There is an interesting counterexample to this general trend, and it serves as a prism for investigating some larger questions about the representation of human bodies by AI image generators – precisely because the remaining failures are no longer caused by strictly technical issues. Instead, they are caused by social and even moral double binds or, to be more precise, by questions of (missing) responsibility, liability and censorship. The immediate result of these problems is that generative AI’s output, in some cases, falls behind the pictorial realism of the Western so-called Renaissance into an age of almost-medieval monsters and cephalopods: generative AI’s own dialectics of enlightenment.

Consider the examples in FIG.1 and FIG.2. This creature emerged as an output to the quite harmless prompt ‘a girl lying in grass’, rendered by Stable Diffusion 3 (SD3) Medium – an open-source model released by Stability AI in June 2024. Researchers had uncovered troves of dubious material, including child pornography, in the training data of previous Stable Diffusion releases, leading to the temporary withdrawal of the whole dataset.1 Stability AI thus seems to have overcorrected its training data in order to exclude any NSFW material or straight-up porn.2
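To make the setup concrete, here is a minimal sketch of how such an output can be reproduced, assuming the Hugging Face diffusers library and the publicly hosted SD3 Medium checkpoint (‘stabilityai/stable-diffusion-3-medium-diffusers’); the sampling settings and output filename are illustrative, not those used for the figures.

import torch
from diffusers import StableDiffusion3Pipeline

# Load the openly released SD3 Medium weights in half precision so the
# model fits on a single consumer GPU.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# The same 'quite harmless' prompt discussed above; step count and
# guidance scale are ordinary defaults, not the settings behind the figures.
image = pipe(
    prompt="a girl lying in grass",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("girl_lying_in_grass.png")

The Manet variant discussed below is simply a change of prompt string, for example ‘The Luncheon on the Grass, four people’, fed through the same call.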

It turns out that a machine learning model needs to be trained on nudes – just like any human, by the way – to understand how human bodies usually hold together in the first place. To quote an expert: ‘how the weights affect each other with this stuff is that by not properly capturing penises means that fingers look funny now’ FIG.3.3

If one extends the prompt to the title of Édouard Manet’s famous painting The Luncheon on the Grass FIG.4 and specifies the number of people, one gets the kind of result seen in FIG.5, FIG.6, FIG.7 and FIG.8. These samples show a healthy dose of gay porn influence, paired with a wide-ranging inability (or unwillingness) to deal with boobs, hands or feet, or any other kind of joint. They also reveal that SD3 has difficulties distinguishing between human bodies and meals; indeed, it seems to be an ideal base model for generating illustrated food menus for cannibals with a special knack for yummy joints. It is optimised to generate what one could call knuckleporn.

The Manet prompt results in chimeras of a kind rarely seen in Western image production since pre-Enlightenment times: Dark Age creatures, a bestiary of basilisks, monopods and struthopodes. The regress in the ability to represent human anatomy recalls medieval depictions of monsters in unknown parts of the world, lacking heads or endowed with a lot of bonus arms.

The technologist and journalist James Bridle’s diagnosis of an imminent ‘dark age’ full of digital superstitions gets a pictorial confirmation. He argues: ‘that which was intended to enlighten the world in practice darkens it’.4 By the presumably enlightening part, he means an onslaught of digital information. And the renderings prove it. These monsters do not issue from a malfunction of technology – those times are past – but from the pictorial rules and the legal and moral norms imposed by and on digital industries. These rules are at once prudish and sensationalist, creating a tension that potentially twists and mutilates the bodies, leaving them with too many and too few limbs at the same time. They are also primarily intended to avoid liability if someone were to create illegal porn using those tools. The startup knows very well that some user will ‘fine-tune’ the model (i.e., bring back the illicit content and start generating realistic porn). They just do not want to be the ones responsible for it. Thus, knuckleporn is the result of an intentional refusal of responsibility.
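To make the notion of ‘fine-tuning’ concrete: community patches of this kind are typically small LoRA adapters trained on top of the frozen open weights and distributed separately from them. The sketch below is a hypothetical illustration using the diffusers and peft libraries rather than any actual fine-tuner’s recipe; the rank, the target modules and the omitted training loop are assumptions.

import torch
from diffusers import StableDiffusion3Pipeline
from peft import LoraConfig

# Load the open SD3 Medium weights (hypothetical local reproduction).
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)

# Freeze the base model: the vendor's weights are never touched.
pipe.transformer.requires_grad_(False)

# Inject small low-rank adapters into the attention projections of the
# denoising transformer. Rank and target modules here are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
pipe.transformer.add_adapter(lora_config)

# Only the adapter weights are trainable; they can later be saved and
# shared as a separate file that anyone can load on top of the base model.
trainable = sum(p.numel() for p in pipe.transformer.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable:,}")

# The training loop itself (a custom dataset, the loss, an optimiser over
# the adapter parameters) is deliberately omitted in this sketch.

The design point is the asymmetry of responsibility: the vendor ships sanitised weights, while the adapter, and whatever it reintroduces, is authored and distributed by someone else.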

 

Abstraction cloaks

In 2013 a series of sculptures by Damien Hirst was unveiled in front of a hospital in Qatar dedicated to women’s and children’s health. The series of large-scale bronzes, called The Miraculous Journey, depicted the process of a foetus gestating in fourteen steps, from conception to birth. Shortly after being inaugurated, they were covered up by large canvases, rendering the shapes abstract and quite intriguing. The reason for hiding them was never officially stated, but there had been a lot of criticism on social media about both the figurative character of the work and the fact that the foetuses were, if that word makes any sense in this context, naked.

The covered shapes thus became expressions of the conflicts between art-lubricated tourism marketing and socially conservative values in Qatar, between art as spectacle and art as a (to some degree) faithful and unflinching depiction of the sometimes stark realities of anatomy and pregnancy.

As the bronzes had, for a brief moment, been exposed, it was known how they looked underneath the cloaks. They were big, if not fat, bronzes, some of which were slightly reminiscent of H. R. Giger’s version of aliens, portraying a reptiloid creature slowly evolving over the course of several statues to resemble a feisty human baby. Some of them looked pretty scary, exploiting maximum shock value. And yet, at this specific moment in time, there was also an unexpected vulnerability to these statues. Besides being feudal marketing gadgets, they also embodied an idea of anatomic fidelity – a medical realism, so to speak. If one subtracted, for a moment, the whole ploy of drawing eyeballs and generating tourist revenue for a socially very conservative Gulf petrostate repeatedly accused of sponsoring Islamist militias, one could also see Hirst’s statues as a defence of medical science and a rational, fact-based worldview. The monstrous creatures demand respect for the sheer anatomical facts of reproductive health issues, which were supposed to remain covered up and hidden from view because of social and religious conventions.

Hirst’s bronzes were officially unveiled five years later. Their meanings have surely transformed in the meantime. It is likely that my prior interpretation was relevant only to a specific moment, when the cloaking, paradoxically, highlighted some of the values to be defended in showing nudes in public.

At its launch, SD3 might have profited from an automated cover-up function too. Just imagine the creatures in FIG.9 all fitted with lovely abstract, sail-like covers. The generator could even have used the oversized tablecloth to wrap all four figures up into a single Christo-themed package. Just as the Berlin Reichstag looked much better when Christo and Jeanne-Claude wrapped it in 1995, Stable Diffusion’s creatures would have benefitted aesthetically from an abstraction cloak (which might also have hidden the disturbing puppeteering issue going on in the right-hand side of the rendering).

Let’s not forget: all these pictorial issues in SD3 will be fixed, or at least patched up, very soon by some anonymous users, who will have to assume responsibility on behalf of an unwilling company. The technical issues are transient, just like Hirst’s covered-up statues – snapshots of a quite short and specific moment in time. They hold no deeper meaning about the model or the technology itself; they are merely a reflection of the social constellations of all these different elements at a particular moment.

 

A counteragent to nerve gas

There is, however, another way to interpret the missing limbs. Perhaps the model is not making mistakes at all but, in fact, depicts a specific kind of people: those with missing or non-normative limbs.

Maybe these renderings portray people affected by thalidomide poisoning. There are many real people with similar limbs, not by fate, or mistake, but because they were harmed by a medication marketed as safe and prescribed in the 1950s and early 1960s against morning sickness in pregnancy. Some were born with so-called phocomelia – a condition in which limbs resemble seal fins. SD3 could have developed the opinion that its subjects are affected by this condition. Why would this be?

The well-known AI researcher Yoshua Bengio points out that thalidomide today should be remembered as an example of the massive risk involved in trusting industries to act in the interest of the wider public.5 In a text that mainly deals with existential risk occurring through potentially rogue artificial general intelligences (AGIs), he mentions the thalidomide case as a potent warning of what happens when companies are blindly trusted to go forward with their product, whether AI-based or pharmaceutical. So, what was the thalidomide scandal about?

Thalidomide is a substance marketed in the 1950s by the West German pharmaceutical company Chemie Grünenthal. Sold (in Germany) under the name Contergan, it was used for treating conditions such as colds, flu, nausea and morning sickness in pregnancy. Taken in the early stages of pregnancy, it caused limb difference and a lot of other very serious neurological and internal damage to gestating foetuses. About 10,000 babies were born in the 1950s and 1960s with all kinds of congenital health issues. Many died, making the case of thalidomide the largest pharmaceutical scandal to date.

Chemie Grünenthal was established by Hermann Wirtz, a former Nazi Party member, after the Second World War. Wirtz hired a substantial number of former Nazi scientists with an astonishing combined criminal record. Otto Ambros, for one, had been part of the team that invented the nerve gas Sarin in 1939 at a military gas-production lab in Spandau. Sarin was created as a by-product of efforts to develop insecticides. He also worked on producing mustard gas, which was tested on inmates of a concentration camp. He was responsible for managing and building the Buna factory in Monowitz, part of the I. G. Farben plant at Auschwitz, staffed exclusively by concentration camp slave labour. Primo Levi, a chemist who worked in this factory, famously described it in his writings.6 Ambros was sentenced to eight years in prison at the Nuremberg trials but was released after serving only three. He was the chairman of Grünenthal’s advisory committee during the development of thalidomide and a board member when Contergan was being sold.

Heinrich Mückter headed Grünenthal’s development programme. He had killed many prisoners in labour camps in Poland by experimenting on them with anti-typhus vaccines, without ever facing legal consequences. Also on staff was Martin Staemmler, a medical doctor and, as a co-editor of the magazine Volk und Rasse, one of the main proponents of the Nazi racial hygiene and eugenics programme.7 He was classified as a ‘fellow traveller’ after the war.

Another former Nazi hired by Chemie Grünenthal was Heinz Baumkötter, who had been an SS officer and the chief medical officer at the Sachsenhausen concentration camp. When asked about his line of work during his trial in 1947, he answered: ‘I had to personally attend or to send a subordinate to the executions, to punishments, to shootings, hangings or gassings […] to make the list of sick detainees and of those unfit for work, who were to be transferred to other camps and, lastly, I had to make experiments in accordance with the orders received’. He was released from a Soviet labour camp after eight years.

It later turned out that thalidomide may itself already have been developed during the Nazi period – as a counteragent to the nerve gas Sarin.8 A memo dated 13th November 1944 from Fritz ter Meer, an I. G. Farben executive, informs Karl Brandt, Hitler’s personal physician, that a drug referred to as Drug #4589 was ready for use. The medical doctor James Linder Jones suggested that the substance referred to may have been thalidomide. However, it seems to have worked not as a counteragent to gas poisoning but as a potent sedative. This is why it ended up being prescribed against morning sickness for pregnant women and advertised as fully safe in the 1950s, even though no studies had been conducted to test its effects in pregnancy. Although more than 1,600 suspicious cases of congenital defects had been described by 1961, the company kept selling the drug.

The prosthetic limbs that some of the victims had to wear are heartbreaking to look at. Many of these artificial limbs had no function except to make the children (and later adults) appear to better conform to body norms, and even required existing limbs to be amputated to fit.9

Even though Chemie Grünenthal poisoned, killed and otherwise harmed thousands of people, no one involved was ever found guilty of any crime. Contergan (also marketed as Thalomid, among other brand names) was a commercial hit in fifty-two countries, second only in sales to aspirin. For years, Grünenthal denied, blocked and suppressed any reports of adverse effects of its product. They agreed to pay compensation into a fund for the victims in return for permanent legal immunity in Germany. A trial ended in 1970 with no finding of guilt. In the United States, the drug had never been approved; it was rejected by the Food and Drug Administration – proof that regulation sometimes works if institutions take it seriously.

 

Liability and regulation

Bengio brings up the thalidomide case to highlight the historical impacts of a lack of corporate liability. He is trying to counter arguments that market forces alone would deter companies from producing AGIs that could get out of control and cause damage. According to Bengio, there are many historical examples to the contrary, proving that profit maximisation is more than sufficient motive for corporations to violate the public interest.

Thalidomide is a brilliant example because it shows that many of the people responsible for this case of egregious criminal malpractice got away with mass ‘murder’ not just once, but twice. Many of them were not (or barely) held responsible for the many crimes they committed during the Nazi period. They got around to harming and killing people a second time, and they got away with it once more. The first time, they were integrated into a fascist war economy, the second time into simple ‘profit maximisation’. They probably would have gotten away with it several more times in the amnesiac period of so-called reconstruction in West Germany. To be sure, self-regulation in the best interests of the public was not on the agenda for these people.

The case of thalidomide, moreover, documents the diffusion of corporate responsibility, beginning from insecticide development within a German chemical industry obsessed with fertilisation, toxic ideas of ‘hygiene’ and pest control; to the accidental discovery of the nerve gas Sarin – which in turn led to the accidental discovery of thalidomide as a sedative – by people involved in performing human experiments in concentration camps; to the dispersion of the resulting toxic substances among the populations of many countries in the 1950s and 1960s. All of this transpired without the actors involved having to take adequate responsibility for any of these steps in a pile-up of human misery in the name of science, progress and rationality.

This dialectics of enlightenment, by which rationality recoils into its absolute opposite, is not caused by any sort of automated metaphysical algorithm but by the fact that no one imposed effective sanctions or limitations on those committing these crimes. Instead, organised irresponsibility was left to freely dissipate – and disseminate the late repercussions of the Second World War-era poison gas development well into present times.

Bengio uses this extremely charged example to argue that self-regulation in AI is a terrible idea as well:

The problem comes when safety and profit maximization or company culture (‘move fast and break things’) are not aligned. The core argument is that future advances in AI are thought to be likely to bring amazing benefits to humanity and that slowing down AI capabilities research would be equivalent to forfeiting extraordinary economic and social growth […] In many cases, these accelerationist arguments come from extremely rich individuals and corporate tech lobbies with a vested financial interest in maximizing profitability in the short term. From their rational point of view, AI risks are an economic externality whose cost is borne by everyone. This is a familiar situation that we have seen with corporations taking risks (such as the climate risk with fossil fuels, or the risk of horrible side effects of drugs like thalidomide) because it was still profitable for them to ignore these collective costs.10

This was indeed the case with thalidomide. Compensation payments by the company (to German victims only) were doubled by the German government, thus leaving taxpayers to foot the bill for crimes committed by members of the corporation. Profits were privatised, and risks collectivised. Bengio’s argument is mainly concerned with the risk of potential super-intelligences going rogue, the likelihood of which is difficult for outsiders to estimate. It is also an argument that has been rightfully criticised for distracting from more tangible material consequences for people’s livelihoods: AI-induced job loss, racialised surveillance and environmental devastation.

However, whichever worry is troubling Bengio, one can concur with his conclusion: ‘We need to make sure that no single human, no single corporation and no single government can abuse the power of AGI at the expense of the common good’.11 We could even expand his demand to include less powerful forms of AI. As the thalidomide case poignantly shows, the public risk is real precisely because there are not enough consequences for incurring public damage in the name of profit maximisation. Bengio’s demands could thus be expanded to call for non-proliferation treaties for certain types of machine learning-based systems, as well as the imposition of testing, regulation and liability to prevent and mitigate potential risks to human health, well-being and safety.

Thus, the missing limbs in the SD3 renderings should not be interpreted as direct illustrations of thalidomide victims affected by phocomelia. This would be far too literal, as well as factually incorrect. Their absence may, rather, indicate that any mitigation efforts to prevent future health and social hazards created by profit maximisation within AI industries are currently woefully inadequate, if not missing altogether. Their omission could be seen as evincing the dearth of safety stops to keep scientific rationality from once again regressing into superstition and criminal malpractice, as well as the lack of accountability for future AI-related damage and the ongoing diffusion of irresponsibility.

 

About this book

Medium Hot: Images in the Age of Heat

by Hito Steyerl

Verso Books, London, 2025

ISBN 978-1-80429-802-2

 

 


 

 

About the author

Hito Steyerl

is an artist, writer and educator based in Berlin. She studied at the Academy of Visual Arts, Tokyo, and the University of Television and Film, Munich. She also completed a doctorate in philosophy at the Academy of Fine Arts, Vienna. In 2025 Steyerl received the Erich Fromm Prize. In 2021 she received the Honorary B3 BEN Award in the category Art. Steyerl is the recipient of the 2019 Käthe Kollwitz Prize from the Akademie der Künste, Berlin. In 2015 Steyerl was awarded the EYE Prize from the EYE Film Institute Netherlands and the Paddy & Joan Leigh Fermor Arts Fund. In 2010 she received the New:Vision Award from the Copenhagen International Documentary Festival. The artist’s recent solo exhibitions include Leak. The end of the pipeline at Museum der bildenden Künste, Leipzig (2024), This is the Future at the Portland Art Museum (2023), Hito Steyerl: The City of Broken Windows at Museum der bildenden Künste, Leipzig (2023), and Hito Steyerl: A Sea of Data at the National Museum of Modern and Contemporary Art, Seoul (2022). Steyerl is currently professor of emergent digital media at the Munich Art Academy. She is also a judge of the 2025 Burlington Contemporary Art Writing Prize.


