Harnad, Stevan (1982). « Symbol Grounding is an Empirical Problem: Neural Nets are Just a Candidate Component ». Cognition and Brain Theory, 5, pp. 29-47.
Associated file(s) for this document:
HTML
Download (32kB)
Abstract
"Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems.
Type: Scientific journal article
Keywords or subjects: categorization, computation, learning, language, symbol grounding, evolution, artificial intelligence, cognition, neural networks, categorical perception, Searle, Turing, cognitive sciences
Institutional unit: Faculté des sciences humaines > Département de psychologie; Instituts > Institut des sciences cognitives (ISC)
Deposited by: Stevan Harnad
Deposited on: 24 Sept. 2007
Last modified: 20 Apr. 2009, 14:27
URL: http://archipel.uqam.ca/id/eprint/134