Use this url to cite publication: https://hdl.handle.net/007/36298
Yerkes-Dodson law in agents' training
Type of publication
Article in conference proceedings in Web of Science database (P1a1)
Title
Yerkes-Dodson law in agents' training
Publisher (trusted)
Springer
Date Issued
2003
Extent
p. 54-58
Is part of
Progress in artificial intelligence. - (Lecture notes in artificial intelligence, ISSN 0302-9743). Berlin : Springer Verlag, 2003, Vol. 2902. ISBN 3540205896.
Field of Science
Abstract
The well-known Yerkes-Dodson Law (YDL) states that medium-intensity stimulation produces the fastest learning. Experimenters have mostly explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, where the difference between desired output values is associated with stimulation strength. The nonlinear character of the curves "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and the difference in desired outputs.
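The mechanism described in the abstract can be illustrated with a minimal sketch (not the authors' code): a single tanh neuron trained by gradient descent, with the magnitude of the desired outputs standing in for stimulation strength. The learning rate, tolerance, and target values below are illustrative assumptions; the exact shape of the iterations-versus-stimulation curve depends on these choices.

```python
import numpy as np

def iterations_to_learn(t, lr=0.5, tol=0.05, max_iter=100_000, seed=0):
    """Train one tanh neuron so inputs +1/-1 map to targets +t/-t.

    Returns the number of gradient-descent steps taken until the
    mean absolute output error drops below `tol`.
    """
    rng = np.random.default_rng(seed)
    X = np.array([1.0, -1.0])
    y_target = np.array([t, -t])   # target magnitude t = "stimulation strength"
    w = rng.normal(scale=0.01)     # small random initial weight
    for i in range(1, max_iter + 1):
        y = np.tanh(w * X)
        err = y_target - y
        if np.mean(np.abs(err)) < tol:
            return i
        # gradient step on 0.5 * sum(err**2); tanh'(z) = 1 - tanh(z)**2
        w += lr * np.sum(err * (1 - y ** 2) * X)
    return max_iter

# Sweep the target magnitude and observe how training time varies:
for t in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"target magnitude {t:4.2f} -> {iterations_to_learn(t)} iterations")
```

Because the tanh activation is smoothly bounded, targets near the saturation level (e.g. 0.99) require many more iterations than moderate ones, so the iteration count is a markedly nonlinear function of the target magnitude, in the spirit of the curves the paper discusses.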
Type of document
type::text::conference output::conference proceedings::conference paper
ISBN (of the container)
3540205896
WOS
000187551600005
eLABa
15067264
Coverage Spatial
Germany (DE)
Language
English (en)