AI reveals how humans process abstract thought

Cameron Buckner, assistant professor of philosophy at the University of Houston, considers whether studying AI systems may help us understand how people process abstract thought.

Increasingly sophisticated AI systems are judged mainly on their output and their ability to out-compete humans, rather than on how their complex neural networks actually interact and learn. Buckner hopes that by deconstructing the machine, we will gain more insight into how humans learn.

Deep Convolutional Neural Networks (DCNNs) are multi-layered artificial neural networks whose nodes mimic the way neurons process and pass information in the brain, and they can demonstrate how abstract knowledge is acquired. They provide models that are useful in neuroscience and psychology. Buckner notes that these networks have become so successful at complicated perception and discrimination tasks that they sometimes leave scientists questioning how they work.
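For readers unfamiliar with the architecture, the sketch below is a minimal, hypothetical example written in PyTorch, not a model from Buckner's work. It shows the basic idea: stacked layers of simple nodes, each passing transformed signals to the next, so that later layers respond to progressively more abstract patterns in an image before a final layer makes a decision.

```python
import torch
import torch.nn as nn

class TinyDCNN(nn.Module):
    """A minimal deep convolutional network: stacked layers of
    artificial 'neurons' that pass transformed signals forward."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Each convolutional layer picks up progressively more
            # abstract patterns in the input image.
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 14x14 -> 7x7
        )
        # Final layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # extract increasingly abstract features
        x = x.flatten(start_dim=1)  # flatten for the decision layer
        return self.classifier(x)

if __name__ == "__main__":
    model = TinyDCNN()
    batch = torch.randn(4, 1, 28, 28)  # four fake 28x28 grayscale images
    print(model(batch).shape)          # torch.Size([4, 10])
```

The layer sizes and image dimensions here are arbitrary choices for illustration; the point is only the layered structure, in which each stage transforms its input and hands the result to the next.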

Because they can acquire the kind of subtle, abstract, intuitive knowledge of the world that hitherto came as standard only to human beings, DCNNs are achieving things that until now were impossible to program into computers.
