AI just beat a human test for creativity. What does that even mean?

Large language models are getting better at mimicking human creativity. That doesn’t mean they’re actually being creative, though.

AI is getting better at passing tests designed to measure human creativity. In a study published in Scientific Reports on September 14, AI chatbots achieved higher average scores than humans on the Alternate Uses Task, a test commonly used to assess this ability.

This study will add fuel to an ongoing debate among AI researchers about what it even means for a computer to pass tests devised for humans. The findings do not necessarily indicate that AIs are developing an ability to do something uniquely human. It could just be that AIs are good at passing creativity tests, not that they are actually creative in the way we understand the term. However, research like this might give us a better understanding of how humans and machines approach creative tasks.

Researchers started by asking three AI chatbots—OpenAI’s ChatGPT and GPT-4 as well as Copy.Ai, which is built on GPT-3—to come up with as many uses for a rope, a box, a pencil, and a candle as possible within just 30 seconds.

Their prompts instructed the large language models to come up with original and creative uses for each of the items, explaining that the quality of the ideas was more important than the quantity. Each chatbot was tested 11 times for each of the four objects. The researchers also gave 256 human participants the same instructions.
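To make the setup concrete, here is a minimal sketch of how such a trial loop might look, assuming the official OpenAI Python client. The prompt wording is a paraphrase of the instructions described above, not the study's verbatim prompt, and `run_trials` is a hypothetical helper written for illustration.

```python
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

OBJECTS = ["rope", "box", "pencil", "candle"]
TRIALS_PER_OBJECT = 11  # the study ran each chatbot 11 times per object

# A paraphrase of the study's instructions, not the verbatim prompt.
PROMPT = (
    "Come up with original and creative uses for a {obj}. "
    "The quality of your ideas matters more than the quantity."
)

def run_trials(model: str = "gpt-4") -> dict[str, list[str]]:
    """Collect one response per trial for each object."""
    responses: dict[str, list[str]] = {obj: [] for obj in OBJECTS}
    for obj in OBJECTS:
        for _ in range(TRIALS_PER_OBJECT):
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": PROMPT.format(obj=obj)}],
            )
            responses[obj].append(reply.choices[0].message.content)
    return responses
```

Each object's responses could then be pooled and scored alongside the human answers, which is how a comparison of average scores becomes possible.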
