From Soul to Semi-Aura: Rethinking Creativity in the Age of Generative AI

***

When Botticelli painted The Birth of Venus in the mid-1480s, he articulated an aesthetic ideal that would shape Western visual culture for centuries to come. Venus’s classical pose and luminous presence became canonical emblems of beauty, endlessly reproduced and cited. Today, her image circulates not only inside museums but within the vast training datasets of generative artificial intelligence (AI), where it is digitized, fragmented, and recombined to produce new works that echo its formal structures. 

The aura that once marked Botticelli’s painting as singular has been dispersed and reconfigured, persisting less in the original canvas than in the algorithmic memory of cultural forms. For René Descartes, the soul was the immaterial seat of reason and creativity, the essence that distinguished human thought from mechanical replication. This metaphor, long foundational to Western understandings of art, positioned creativity as an expression of interiority, the unique spark of the human subject. In an era of generative AI, which produces images, music, and designs through statistical recombination of vast datasets, this metaphor appears increasingly unstable. Is Descartes’ conception of the soul now an anachronism, displaced by an algorithmic model of creativity that reframes invention as remix? As Salas Espasa and Camacho (2025) argue, AI-generated art carries a “semi-aura”: it gestures toward the originality of human creation while remaining tethered to computational reproduction. In this light, the Cartesian “soul” may seem obsolete, replaced by a vision of creativity as the statistical remixing of prior forms.

The celebratory framing of AI as democratizing art, rendering creativity accessible to all, and promising efficiency, novelty, and sustainability, has become widespread in both industry rhetoric and public discourse. Yet such optimism obscures the structural realities underpinning these technologies. Generative systems are neither neutral nor universally emancipatory. They encode and reproduce the cultural hierarchies embedded in their training data. Equally, the claim that AI liberates creative labor is tenuous. Erickson (2024) demonstrates that, in practice, AI in the creative industries has reduced autonomy for workers, transforming and often devaluing their roles. Rather than broadening access, it concentrates power in the corporations that own datasets, infrastructure, and distribution channels. What is presented as “democratization” is, in effect, capitalist rationalization, intensifying disposability, scalability, and profit extraction.

Artificial intelligence in the arts has been heralded as a democratizing force, capable of producing novelty at scale and expanding the scope of who can create. Yet closer examination suggests that generative systems tend not toward invention, but toward the reproduction of dominant aesthetic hierarchies. Outputs are often calibrated to produce what is most “marketable” and “safe”: standardized formal conventions and stylistic codes already validated by consumer taste. Instead of opening new creative horizons, these systems frequently reinforce existing regimes of visibility, while pushing experimental or resistant aesthetics to the margins.

This tendency is not accidental, but a product of how datasets are compiled and operationalized. Ramya Srinivasan and Kanji Uchino (2020) argue that socio-cultural biases in generative pipelines are best understood through the long histories of art historical canons. Their work demonstrates that models trained on widely available art datasets inevitably absorb the preferences embedded in those collections, which are often drawn disproportionately from European and North American traditions, thereby privileging certain artistic lineages while sidelining others. In this sense, the apparent neutrality of AI is illusory: the system’s capacity to “learn” is already structured by asymmetries of cultural preservation and dissemination.

Francesca Bignotti (2025) makes a similar point in her study of the marginalization of African art within digital knowledge systems. She shows how architectures of cultural representation (archives, databases, and digital catalogs) consistently reproduce the invisibility of traditions outside the Western canon. When these knowledge infrastructures become the training ground for generative AI, exclusion is not only repeated but amplified. The omission of non-Western, experimental, or politically resistant aesthetics means that what AI produces as “new” is often only a recombination of the already dominant. Thus, the apparent breadth of AI’s imagination is in practice bounded by the narrowness of its sources.

AI images evoke the appearance of originality while lacking the conditions that would ground it. This does not simply mean that AI art is derivative; rather, it means that its authority and value derive from its proximity to cultural legitimacy already established elsewhere, whether that be in the canon, the marketplace, or a recognizable style. What is significant here is the shift in spectatorship: audiences encounter AI images as if they bore the mark of creative intention, even though the “intent” belongs not to a human subject but to an algorithm trained on vast cultural corpora. In this sense, the semi-aura destabilizes traditional markers of authenticity, while simultaneously reinforcing capitalist logics of consumption, where recognition and legibility become the primary means of determining value.

The rhetoric of AI as a democratizing force is deeply alluring, as corporate narratives suggest that by lowering barriers to entry, these tools enable everyone to become a designer, photographer, or artist. Yet this framing obscures the fact that what is being democratized is not access to creativity itself, but access to pre-packaged recombination. As Kristofer Erickson (2024) shows in his study of creative firms adopting AI, the effect is less an expansion of participation than a reconfiguration of labor in which human contributions are rendered increasingly invisible. Tasks once requiring specialized expertise, such as ideation sketching, pattern-making, or image editing, are displaced by generative systems, while the workers who historically performed them are sidelined or stripped of authorship.

The outcome is paradoxical: AI seems to open the gates of creativity to the many, while in practice, it concentrates creative power in the hands of the few who control the models, datasets, and infrastructures. Walkowiak and Potts (2024) map this as a structural transformation of the cultural and creative industries, identifying entire zones of job risk in occupations ranging from design assistants to freelance photographers. However, rather than abolishing labor, AI introduces new forms of precarity: human workers remain implicated in the process, whether that be curating data, refining prompts, or retouching outputs, but their agency is subsumed under the apparent autonomy of the machine.

This dynamic is central to the concept of the “semi-aura.” As Salas Espasa and Camacho (2025) argue, AI-generated works borrow legitimacy from the cultural authority of the originals they recombine. Yet they also borrow legitimacy from the invisibilized labor that sustains them. The semi-aura thus depends not only on the corpus of past artworks, but on the erasure of present human contribution. This has two main implications. First, the semi-aura facilitates a new regime of appropriation in which corporations monetize the appearance of creativity while externalizing the costs to workers who become interchangeable and uncredited. Second, this shift reshapes the politics of authorship itself. If aura once marked the uniqueness of an artist’s hand and vision, and semi-aura marks the recombinatory authority of the machine, then labor’s role is displaced from center stage to backstage. As the Ethical Implications of AI in Creative Industries report (2025) notes, this erasure of labor not only devalues creative work but also forecloses collective bargaining power, because what is no longer visible cannot easily be organized or defended. Seen in this light, the narrative of democratization is better understood as a labor mirage.

The discourse surrounding AI in art is often couched in the language of neutrality. Industry narratives present these systems as mere tools, technical instruments detached from ideology, available for artists to employ as they see fit. Such framing implies that questions of creativity, authorship, or ethics reside entirely in how individuals choose to use the technology. Yet, as Kate Crawford (2021) has argued in Atlas of AI, there is no such thing as neutral technology. Every stage of AI development, from infrastructure ownership to platform governance, encodes value judgments about what is included, what is excluded, and what forms of expression are privileged. Neutrality claims are particularly powerful because they deflect attention from the political economy in which AI is embedded. Corporate actors that own large datasets and computational infrastructures wield disproportionate influence over how creativity is mediated.

As Couldry and Mejias (2019) show in The Costs of Connection, data infrastructures are not passive reflections of culture, but active mechanisms of appropriation. When a handful of technology firms and fashion conglomerates control the pipelines of generative AI, they do more than provide tools; they reshape the conditions of cultural production itself. This concentration of power allows companies to determine whose styles are emulated, which aesthetics are optimized for circulation, and how creative labor is valued. One consequence is the acceleration of disposability. By enabling the mass production of “new” designs at near-instantaneous speed, AI compresses the life cycle of artistic and cultural objects. In the fashion industry, this manifests in the intensification of micro-trends: garments designed, produced, and discarded in a matter of weeks. In visual culture more broadly, it results in the saturation of feeds with algorithmically generated images whose novelty is less a product of creativity than of sheer volume.

If the deployment of generative AI in art is unavoidable, the question is not whether we should use it, but under what conditions and to whose benefit. Left unregulated, these systems will continue to amplify dominant imperatives of speed, disposability, and concentration of power. Yet alternative pathways exist. Dataset reform is essential: training corpora should be built with consent, compensation, and genuine cultural diversity. Transparent authorship practices, such as logging prompts, processes, and human inputs, could return visibility to the creative labor erased by claims of automation. Equally important are regulatory frameworks that update intellectual property law to protect artists, recognize communal traditions, and prevent algorithmic appropriation from becoming normalized theft. Beyond questions of ownership, a just approach to AI must also protect labor and ecological systems. This means safeguarding creative workers against deskilling through collective bargaining and labor protections, while recognizing that “prompt engineering” or data curation are forms of labor in their own right. It also means confronting AI’s environmental costs by measuring energy use and waste, as well as resisting the push toward hyper-accelerated trend cycles. Finally, cultural institutions have a crucial role to play: by committing to aesthetic pluralism, they can ensure that non-commercial, experimental, and historically marginalized forms are not overshadowed by algorithmically generated sameness. The task, then, is not to demand more AI, but rather to develop better AI systems that are accountable to justice, inclusivity, and sustainability, rather than profit alone.

***

This article was edited by Elise Grin and Jordan Donegan.
