Site Sessions 5: AI and Creativity

Photo by Sheffield Tech Parks

Last Thursday we had “AI & Creativity” – the latest event I’ve put together in my capacity as Digital Art Curator for the Site Gallery, here in Sheffield. This time it was a co-production with Sheffield Technology Parks and The Workstation, who hosted us as one of their regular “Platform” after work events. This resulted in a really great mix of audiences (basically, we brought our art crowd along to their business event!) and a packed house.

I got the idea for this one when I was at the Cambridge Festival of Ideas last month. I was there to talk about some AI-themed artwork I had been developing on the Collusion R&D programme, and one of the other speakers was a chap from Cambridge Consultants, introducing us to an AI that makes ‘great art’ out of human doodles – Vincent. I thought the project was conceptually and technically very interesting, and the questions that came out of it deserved more time to explore. So I decided to bring the questions to Sheffield. (Sadly, we couldn’t get Vincent: his calendar is very full.)

These are the core areas I wanted to delve into:

How should the art world respond to Artificial Intelligence? Does it make sense to describe the output of a computer algorithm as ‘creative’?

Does digital present a threat to analogue creativity, or have we always used technologies to realise our ideas? And what exactly is creativity, anyway?

To help us, I lined up three speakers: Linda Candy, an expert on creativity and art in technology; Joanne Armitage, algo-musician and creative technologist; and Duncan Gough, inventor and Tech Lead at the V&A.

Here are some of the points to emerge from each speaker (based on my notes, so forgive any non sequiturs):

Me

I did an eight-minute intro as usual, showing the video of Vincent linked above, and suggesting that art can take responsibility for changing the story we hear in the media about AI, which is generally rather one-note (jobs will be lost, humanity will be destroyed by killer robots). I wondered whether technology’s increasingly impressive impersonation of us is one of the factors contributing to this sense of unease, and showed a few images, including Kryten painting a portrait of Rimmer in Red Dwarf, and a gif of Atlas doing a backflip. Why does tech have to look like us, and look at us? And how is this making us feel about its potential to think?

Linda

  • Humans are not datasets
  • We generally judge creativity by looking at the final product rather than the process (which is often far more revealing).
  • AI existed in the 80s, in an environment of digital optimism that contrasts with today’s focus on disruption.
  • Linda talked about The Imitation Game and the Turing test. For Turing, “The question of thinking machines is too meaningless to deserve discussion”, but it feels more meaningful nowadays.
  • John McCarthy in 1949 wrote: “How can I make a machine that can exhibit originality?” Machines can be a source of original ideas.
  • We outsource clerical tools; why not creative ones?
  • Linda showed several examples of art works and had us guess which were by machines and which were by humans (the vote was generally a 50/50 split). All but one were machine-made, yet all were by Harold Cohen, most created using his program AARON.
  • Harold Cohen sought a kind of immortality through art, insisting that his machine continue to create his works after his death (he died in 2016). Cohen reported that the art machine helped greatly with his process, enabling him to change and develop as an artist.
  • We ought to ask what AIs do for art – Google Deep Dream etc.
  • Linda ended with a video of a dance production called Dot and the Kangaroo which makes use of AI to enhance human creativity.

Jo

  • Jo is interested in creating new experiences with sound. She works with algorithmically generated sound in live coding.
  • She works with people with ‘algorithmic vulnerability’: those who fall through the cracks of education, NEETs, etc. Perhaps technology isn’t as inclusive as it thinks it is. There’s a tension between a tech culture that considers itself ‘open’ and the very high level of abstraction in coding, which actually requires a high level of literacy to grasp. “Does code open things up, or have the opposite effect?”
  • She has also worked with a number of projects involving gesture, and is collaborating with a dancer.
  • In this abstracted world of digital, how does the body fit in? Jo talked about ‘vibrotactile apparent motion’ and a wearable device she’s developed which enables people to ‘feel’ the pitch of a sound as it seems to travel around the body of the wearer.
  • She’s interested in putting the body back into the middle of the tech. AI projects typically remove the body; Jo wants to re-situate it at the centre.
  • Jo talked about Dr Rebecca Fiebrink, who runs a course on machine learning for musicians and artists.
  • She also mentioned Sarah Kenchington, creator of wonderful musical instruments.
  • Jo pointed out that automated creativity is nothing new: although the player piano is a relatively recent invention, self-playing musical instruments date back to antiquity.

Duncan

  • Duncan believes AI will be vulnerable to the ‘see it, know it’ test. His feeling is that if there were one in the room, we’d know about it.
  • However, ‘real AI’ or ‘strong AI’ is still years away.
  • We’ve had voice and face recognition, but these are still notoriously faulty and problematic.
  • You only have to look at the Deep Dream ‘puppy/slug’ creations to realise how far we have to go. On the other hand, if this is the first step to sentience, we ought to be concerned to rein it in – if these are our AIs, maybe they need therapy!
  • The serious point Duncan’s making is that AIs need human help: to improve, interpret, and meet us halfway in a human world. AI needs humans, and really needs artists.
  • Duncan is interested in AI beyond usefulness: AIs can be nice to have, not just useful. He talked about two of his latest ventures, Ara the Raspberry Pi songbird, and VRniture: embedding ambient companion beings in items of furniture around the house.

The discussion

As always, we had about half an hour afterwards to discuss things with the audience. Nice and chatty they were, too! A huge amount came up in a short time, but here are a few of the points raised that I found really interesting:

  • If humans need to suffer to create great art, does it follow that the best AI art will come from an AI into which we’ve programmed traumatic memories?
  • Someone else quoted Hayao Miyazaki’s reaction to a video of a grotesque AI body simulation: to make something like this, which ignores the deep reality of pain, is “an insult to life itself”.
  • Jo reminded us that ‘data is not us’ and that to speak of ‘digital versions of us’ is nonsense. That’s often forgotten.
  • A question about the pacing of human and digital ‘life’ came up. Why do we expect or want AIs to replicate and expand so quickly? A human lifespan takes decades; is there some value in a ‘slow AI’ movement?
  • Are we ‘destined’ to create, or to make machines to create?
