Even the smartest AI models don’t match human visual processing: How deep-network models take potentially dangerous ‘shortcuts’ in solving complex recognition tasks

Deep convolutional neural networks (DCNNs) don’t see objects the way humans do, through configural shape perception, and that gap could be dangerous in real-world AI applications. The study used novel visual stimuli called ‘Frankensteins’ to explore how the human brain and DCNNs process holistic, configural object properties.