AI-generated artwork is the same as a gallery of rock faces. It is pareidolia, an illusion of art, and if culture falls for that illusion we will lose something irreplaceable. We will lose art as an act of communication, and with it, the special place of consciousness in the production of the beautiful.
…Just as it matters whether a painting is an original Da Vinci or a forgery, even if side by side you couldn’t tell them apart, so too with two paintings, one made by a human and the other by an AI. Even if no one could tell them apart, one lacks all intentionality. It is a forgery, not of a specific work of art, but of the meaning behind art.
Like a programming language interpreter, GPT-3 translates the designer’s intent from a language they’re already familiar with (English) to one they need to learn (Figma’s information architecture, as manifested in its UI). This can be easier for a new or busy designer, much like Python is easier and faster to work with than assembly language.
But that’s not “designing” — at least not any more than compiling Python code is “programming.” In both cases, all the system does is translate human intent into a lower level of abstraction. Sure, the process saves time — but the key is getting the intent part right. I’ll be convinced the system is “designing” when it can produce a meaningful output to a directive like “change the product page’s layout to increase conversions.”
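The distinction drawn above can be sketched in code. This is a toy illustration, not anything GPT-3 or Figma actually does: the function names, the rule table, and the operation strings are all invented for the example. It shows a translator that handles narrow, fully specified directives, and fails on exactly the kind of intent-laden directive the text mentions.

```python
# Hypothetical sketch -- UIOp strings, the rule table, and translate()
# are invented for illustration; they are not a real Figma or GPT-3 API.

def translate(directive: str) -> list[str]:
    """Translate a narrow, well-specified directive into low-level UI
    operations: the 'compiling' step, not the 'designing' step."""
    rules = {
        "make the button blue": ["select(button)", "set_fill(#0000FF)"],
        "center the heading": ["select(heading)", "align(horizontal_center)"],
    }
    ops = rules.get(directive.lower().strip())
    if ops is None:
        # A directive like "change the layout to increase conversions"
        # carries intent the translator cannot resolve -- resolving it
        # is still the human designer's job.
        raise ValueError(f"cannot translate intent: {directive!r}")
    return ops
```

The interesting work happens before this function is ever called: deciding *which* well-specified directive serves the actual goal.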
In my opinion, what makes a designer competent is precisely their ability to credibly justify their conclusions. If you can’t do this as a designer—no matter how successful your results are—then neither I nor anybody else can tell whether you’re just picking things at random.
What I am proposing, then, is no less than to make a designer’s entire line of reasoning a matter of permanent record. On the surface is the familiar set of prescriptions, components, examples, and tutorials, as you would expect from any such artifact. Attached to every element, though, is a little button. You click it, and it tells you why. The proximate explanation will probably not be very satisfying, so you click through to the next, and the next, until you reach the end of the chain, at which point you are either satisfied with the explanation, or you aren’t.
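The chain of justifications described above can be modeled as a simple data structure. This is a speculative sketch, not a real tool: the `Rationale` class, its field names, and the example claims are all invented to illustrate the idea of clicking "why" until the reasoning bottoms out.

```python
# Hypothetical sketch of a design-rationale chain; all names and example
# claims are invented for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Rationale:
    """One link in a design element's chain of justification."""
    claim: str
    because: Optional["Rationale"] = None  # the next, deeper "why"


def explain(r: Rationale) -> list[str]:
    """Walk the chain of reasons -- each step is one click of the
    'why' button -- until there is no deeper justification."""
    chain = []
    node: Optional[Rationale] = r
    while node is not None:
        chain.append(node.claim)
        node = node.because
    return chain


# Example chain: each click surfaces the next, more fundamental reason.
button_color = Rationale(
    "The primary button is blue",
    Rationale(
        "Blue performed best in our (hypothetical) click-through test",
        Rationale("We are optimizing this page for conversions"),
    ),
)
```

Whether the last claim in such a chain satisfies you is exactly the judgment the text leaves to the reader.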