If you’re writing right now, it is hard to know what to write about. It all happened so fast and painstakingly slow. The only thing I thought to write about is something I have wanted to talk about for a long time, which is how porn-ridden Instagram Reels is. Turns out, in the twelve hours users migrated from TikTok to Reels, others wanted to talk about it as well. In so many words, some less crass than others, TikTok users have been suggesting that Reels is “goon-pilling” young men by inundating their feeds with suggestive/adult content (or as far as moderation allows you to go). For selfish reasons, I wish it were just young men being targeted. My Reels feed is mostly pornographic.

After sharing with a friend a Reel of a young woman “making lasagna,” in which she dunks her breast into a bowl of sauce and then a pan of lasagna (strangely, it was also a “paid partnership”), I thought it was objectively hilarious! Unfortunately, almost all the videos I was suggested afterward were explicit at best, and borderline pedophilic at worst. Brayden and I often joke about our ill-comings being the result of a deep moral rot that we cannot escape and that is impossible to stop from oozing out of us. Charged an oversized-bag fee? Probably the result of an indescribable evil attached to your soul. When I showed him my feed, he made a similar joke, and I became instantly embarrassed and defensive. I felt a cold interrogatory light shine on my unconscious, as though I had been caught by something, for something. He made the same joke while reading this. The suggestion that these images appeared because I desired them repulsed me. It wasn’t until recently that I realized it is likely within the interests of Meta and other platforms to force culpability for content, especially upsetting content, onto the user.
The past few days on TikTok, since being confronted with the potential switch to Reels, users have expressed almost unanimous distaste for Instagram’s short-form video feature. The most common complaint is that they are not seeing what they are interested in, and that they are absolutely horrified by what they do see. So far, I have only seen a few comments taking the righteous position of “well MY feed is not like that,” so the following analysis is not in response to the general sentiment of current users, but rather to what may grow to become it.

My thinking is largely inspired by Olivia Laing’s essay “Unwell,” where she criticizes the lasting effects of the work of Louise Hay, a self-help author who asserts that even the most severe bodily ailments can be resolved by treating underlying psychological issues. Laing explains that the insidious result of Hay’s “anti-science” teachings is the assertion that there is a “right way for a body to be, and that illness or disability is the consequence of failure.” The notion of “perfect” systems, such as a body, whose only faults lie in how humans imperfectly interact with them, has emerged as the dominant narrative of many AI or, more generally, technology maximalists. The conflation of how “intelligent” technologies and the human brain operate will, I believe, emerge as the new Haysian ideology, suggesting that issues with technology are simply user error or, in the case of disturbing video content, user perversion. Meta has more or less said this directly, on the heels of eliminating its fact-checking division, stating that the human bias in content moderation does more to censor users than protect them. Conservative fodder aside, this move by Meta positions human beings as a barrier to technology’s ability to show you what you desire. The language around algorithms, even just from an advertising perspective, is that they know what you want before you do. Through this language, we bestow technology with an inaccurate degree of agency and sentience. This is the discussion of the second chapter of Mike Pepi’s new book Against Platforms: Surviving Digital Utopia. Entitled “Computers Can’t Think,” the chapter works diligently to comb through how the brain and artificial intelligence came to be equated, and the capitalistic motivation to do so.
Pepi begins the chapter by addressing the linguistic shift, emerging in the late ’90s, toward describing the brain as “processing information.” This shift marks what Pepi refers to as a “connectionist” movement, which rallies around the notion that computers can replicate human cognitive abilities and, in turn, possess intelligence. Of course, this also posits that artificial intelligence can perform humanness better, faster, and more rationally. Pepi weaves in punchy, indisputable truths about the capabilities of computers (e.g., AI cannot have a thought that cannot be put into words) that, when read, manifest a seething AI enthusiast on your shoulder: “it can’t now, but it will in the future.” The stakes of this thinking are not just intellectual dishonesty or a layman’s misunderstanding of complicated technology. Pepi makes the invaluable connection between unfounded trust in “intelligent” technology and the motivation for developers to sidestep regulation by positioning their product above ethics or responsibility. In turn, the role of the user must be to analyze the impact of AI rather than push for the technology to be improved. Which brings me back to Reels. Blame over content does not dissipate when someone like Zuckerberg gets on Instagram Live and bows out of the duty of moderating it. I fear that the blame will be shifted to users who improperly “train” their algorithm or simply cannot confront what they desire (this will be coined “digital jouissance” when Meta hires the right Artforum hack).

There is a lot of half-baked psychoanalysis that could be done when it comes to social media and the user’s relationship to their digital self. A very common means for adult performers to circumvent community guidelines is to use a fake baby to mimic breastfeeding. The performer will lazily latch the fake baby to her nipple as she whispers bittersweet nothings into the microphone. The analyst’s couch sort of materializes underneath you. While you could maybe write a term paper on how this is a manifestation of the libidinal desire for the Mother, it is much more simply a bastardization of a community-guideline modification, in this case one meant to protect the content of breastfeeding mothers. The psychoanalytic underpinning I am interested in, or concerned about, is this: if social algorithms become increasingly moderated by AI, and if AI is culturally understood to be the optimal model of human cognition, will it be falsely imbued with an unconscious, just as it was with a conscious? The consequence, of course, is that harmful algorithms, ones that suddenly expose you to the abject or the violent, or that work in favor of an ideological project (other than porn, I get a LOT of tradwives), will be written off as the emergence of repressed desires. If, during sleep, that which is censored is allowed to manifest through dreams, a real sicko could argue that uncensored AI can even optimize the unconscious by allowing you to practice dream-work passively and while awake. This should probably be a tertiary concern, but I think it speaks to how vulnerable to manipulation we become when we believe that technology, or a platform, can uncover a more truthful, uninhibited version of ourselves; or, as Pepi makes clear, when we believe that these models are anything more than just math. We are already fighting a morale battle to convince people that they can produce something more meaningful than AI by virtue of having the capacity to be meaningful.
And technomaximalism is worsening this crisis of confidence. It isn’t gonna save you. It’s not your mommy. Is that what you want? You want your mommy?