A viral X post asked followers to explain, in as much detail as possible, what made an "AI-generated Monet" inferior to a real Monet painting. The image was a real Monet. The crowd picked it apart anyway, listing the supposed tells, technical flaws, and missing soul of the work.

The exchange took off because it illustrates a documented finding in aesthetics research: identical artworks score lower the moment they are labeled as AI-generated, regardless of who actually made them. Henry Shevlin shared the original prank, noting that the responses track with research showing people systematically downgrade their aesthetic assessments of art once told it's AI-generated.

Key findings from the research literature:

  • The same image is rated less favorably when attributed to AI than to a human

  • Bias extends to basic visual perception, including color and brightness ratings

  • The effect activates on disclosure; in blind viewing, audiences struggle to distinguish AI-generated art from human-made art

The Stunt: The joke exposed how source attribution shapes aesthetic judgment in real time.

The original post asked followers to evaluate what it called "an AI image in the style of a Monet painting." Replies cataloged compositional flaws, the "artificial feel" of the work, and the absence of a true artist's touch. The image was an actual Monet. The thread functioned as a live demonstration of the labeling effect that researchers have been quantifying for years, including an early study in which the same artwork was rated less creative and less aesthetically valuable when participants were told it was made by AI.

The Meta-Analysis: A 2025 Tilburg University study measured how deep the bias runs across the published literature.

Alwin de Rooij published a meta-analysis in Psychology of Aesthetics, Creativity, and the Arts examining bias against AI in visual art across three aesthetic systems:

  • Sensory-motor system (six studies, 16 effect sizes): small but significant negative effect

  • Emotion-valuation system (27 studies, 94 effect sizes): small but significant negative effect

  • Knowledge-meaning system (26 studies, 49 effect sizes): moderate negative effect

The knowledge-meaning system covers judgments of intent, skill, and profundity, the parts of aesthetic experience most closely tied to the perceived presence of a human mind behind the work. The researchers concluded that knowing AI was used diminishes aesthetic experience independently of an artwork's objective qualities.

Perception Itself Shifts: The bias is deep enough to change how viewers report basic visual features.

Source attribution alters reported perception, not only evaluation. According to PsyPost's coverage of perception research in this area, identical images receive lower color and brightness ratings when labeled as AI-generated. The change is measurable at the level of basic visual features, not just abstract judgments about quality or creativity.

Eye-tracking research has surfaced an implicit bias toward AI art that operates even when viewers cannot correctly identify the source of a work. The bias appears to influence where viewers look and how they process an image before any conscious judgment forms.

Who Carries the Bias: Age, art style, and context all moderate the effect.

The Tilburg meta-analysis flagged several moderators. Older viewers showed a stronger negative reaction to AI attribution than younger viewers, which the authors describe as evidence of a generational shift in attitudes toward AI in art creation. The bias was weaker for abstract work than for representational pieces. Image source and the situation in which viewers encountered the art also affected the size of the effect. A broader Frontiers in Psychology review on human perception of art in the age of AI reaches similar conclusions about how context and disclosure shape aesthetic response.

Implications for AI-Assisted Production: For creatives using generative tools, disclosure carries weight that pixels do not.

Studios, post houses, and independent filmmakers are folding AI into VFX, color work, animation, and storyboarding. Research on AI art and viewer empathy suggests audiences also report weaker emotional connection to work they believe is AI-generated. Since the same studies find that blind viewers cannot reliably separate AI from human work, the bias is operating on the label, not the image. How credits, marketing copy, and process documentation describe AI's role will shape reception independently of what the finished frame actually looks like.

The Origin Effect: Audience reception of AI-assisted work depends as much on framing as on craft.

The research consistently shows that bias activates on attribution rather than on visual evidence. As AI becomes embedded in mainstream production workflows, the framing around its role will affect how the finished work lands with viewers. The Tilburg authors note the bias is softening among younger audiences. It has not disappeared. A real Monet can still read as inferior the moment "AI" is attached to it.
