Unpublished
arXiv.org, 2021
APA
Katyal, K., Parent, J., & Alicea, B. (2021). Connectionism, Complexity, and Living Systems: a comparison of Artificial and Biological Neural Networks. arXiv.org.
Chicago/Turabian
Katyal, K., Jesse Parent, and Bradly Alicea. “Connectionism, Complexity, and Living Systems: a Comparison of Artificial and Biological Neural Networks.” arXiv.org, 2021.
MLA
Katyal, K., et al. “Connectionism, Complexity, and Living Systems: a Comparison of Artificial and Biological Neural Networks.” arXiv.org, 2021.
BibTeX
@unpublished{k2021a,
  title  = {Connectionism, Complexity, and Living Systems: a comparison of Artificial and Biological Neural Networks},
  author = {Katyal, K. and Parent, Jesse and Alicea, Bradly},
  year   = {2021},
  note   = {arXiv.org}
}
While Artificial Neural Networks (ANNs) have yielded impressive results in the realm of simulated intelligent behavior, it is important to remember that they are but sparse approximations of Biological Neural Networks (BNNs). We go beyond a comparison of ANNs and BNNs to introduce principles from BNNs that might guide the further development of ANNs as embodied neural models. These principles include representational complexity, complex network structure/energetics, and robust function. We then consider how these principles might be implemented in the future development of ANNs. In conclusion, we consider the utility of this comparison, particularly in terms of building more robust and dynamic ANNs. This even includes constructing a morphology and sensory apparatus to create an embodied ANN, which, when complemented with the organizational and functional advantages of BNNs, unlocks the adaptive potential of lifelike networks.

Introduction

How can Artificial Neural Networks (ANNs) emulate the “lifelike” nature of Biological Neural Networks (BNNs)? In recent years, flavors of ANN such as Deep Neural Networks (DNNs), Generative Adversarial Networks (GANs), and Convolutional Neural Networks (CNNs) have been trained to produce generative and ephemeral outputs that we consider to be lifelike. For example, GANs have enabled procedural generation (Risi and Togelius, 2020), which allows for the creation of art and other creative content. The one-shot language learning model GPT-3 (Brown et al., 2020) is based on a transformer neural network and exhibits impressive performance. Based on these advances, one might think that a sparse representation of the brain is sufficient to approximate intelligent behavior.

Yet there are limits to the realism of the outputs of such models. In cases where human users interact with procedurally-generated virtual characters, the human response resembles the uncanny valley effect (Tinwell et al., 2013). Similarly, GPT-3 can exhibit strange and often dangerously incorrect assumptions about the world in which it is situated (Marcus and Davis, 2020). This suggests that future improvements should address fundamental shortcomings of the ANN model, with potential solutions including various forms of biological inspiration.

Our goal here is to deconstruct ANNs in terms of their parallels (or lack thereof) with BNNs. In doing so, we wish to better understand properties that might make ANNs more like living systems. These properties may not necessarily improve the performance of ANNs, but they might afford