Can ChatGPT be a lovable monster?

Gizem Karaali

A close-up image of a small, realistically styled cactus character, emphasizing its individual charm. [Image created by ChatGPT / DALL·E on April 17, 2024.]

French philosopher Bruno Latour starts his article "Love Your Monsters: Why We Must Care for Our Technologies As We Do Our Children" with Mary Shelley and her most famous creation, the monster erroneously known as Frankenstein. Some of us, having read Shelley's Frankenstein; or, The Modern Prometheus, might smirk knowingly when we hear that. Of course Frankenstein is not the monster's name; it is the name of its creator!

Then again, when you dig into Shelley's work, you might get other ideas. Who is the real monster? The creature that the scientist named Frankenstein created is innocent of any crime at the beginning. Latour spells it out clearly: "Dr. Frankenstein's crime [is] not that he invented a creature through some combination of hubris and high technology, but rather that he abandoned the creature to itself." And that is why the creature becomes a monster; abandoned, unloved, uncared for, he turns to despair, misery, and crime.

Latour's article is short (only eight pages), but it is packed with interesting ideas. What intrigues me most, though, is his perspective on our responsibilities toward our technologies. He writes:

[S]cience, technology, and demography make clear that we can never separate ourselves from the nonhuman world — that we, our technologies, and nature can no more be disentangled than we can remember the distinction between Dr. Frankenstein and his monster.

Those who dislike generative AI or don't much care for the hype around it might want us to quit thinking about and working with it. But now that we have created it, can we really abandon it? 

Latour writes:

We blame the monster, not the creator, and ascribe our sins against Nature to our technologies. But our iniquity is not that we created our technologies, but that we have failed to love and care for them. It is as if we decided that we were unable to follow through with the education of our children.

Writing in 2011, Latour is talking about the environmental effects of our technological advances, but he could just as well be talking about ChatGPT and its relatives in a possible future where we dismiss, denigrate, or even try to kill off our creations. Our biggest failure, if I am allowed to channel Latour, would be that we have failed to love and care for a nascent non-human entity.

Latour also mentions the principle of precaution. Detractors of the principle, he writes, claim that the principle leads to inaction and despair, and that "[t]he only way to innovate [...] is to bounce forward, blissfully ignorant of the consequences or at least unconcerned by what lies outside your range of action." Latour on the other hand argues that "unexpected consequences are attached to their initiators and have to be followed through all the way." 

Latour asks, "Has God fled in horror after what humans made of His Creation?" He elaborates:

[W]here have we ever seen a master freed from any dependence on his dependents? The Christian God, at least, is not a master who is freed from dependents, but who, on the contrary, gets folded into, involved with, implicated with, and incarnated into His Creation. God is so attached and dependent upon His Creation that he is continually forced (convinced? willing?) to save it. Once again, the sin is not to wish to have dominion over Nature, but to believe that this dominion means emancipation and not attachment.

I would love to ponder how these ideas might productively apply to the context of generative AI, but this post is already long enough! So what do you think? Can we (or should we) think of ChatGPT and its relatives as monsters that we have created (in our image, no less!) and that we must care for? And if so, what would such caring look like?