
Frank Rosenblatt

From Wikipedia, the free encyclopedia
Frank Rosenblatt
Born: July 11, 1928
Died: July 11, 1971 (aged 43)
Known for: Perceptron

Academic background
Alma mater: Cornell University
Thesis: The k-Coefficient: Design and Trial Application of a New Technique for Multivariate Analysis (1956)
Influences: Walter Pitts, Warren Sturgis McCulloch, Donald O. Hebb, Friedrich Hayek, Karl Lashley

Frank Rosenblatt (July 11, 1928 – July 11, 1971) was an American psychologist notable in the field of artificial intelligence. He is sometimes called the father of deep learning[1] for his pioneering work on artificial neural networks.

Life and career

Rosenblatt was born into a Jewish family in New Rochelle, New York as the son of Dr. Frank and Katherine Rosenblatt.[2]

After graduating from The Bronx High School of Science in 1946, he attended Cornell University, where he obtained his A.B. in 1950 and his Ph.D. in 1956.[3]

For his PhD thesis, he built a custom-made computer, the Electronic Profile Analyzing Computer (EPAC), to perform multidimensional analysis for psychometrics. He used it between 1951 and 1953 to analyze the psychometric data collected for the thesis: a paid, 600-item survey of more than 200 Cornell undergraduates. The total computational cost was 2.5 million arithmetic operations, which also necessitated the use of an IBM CPC.[4] It was reported that the machine could perform in 2 seconds what would otherwise have taken 15 minutes of data processing.[5]: 32

He then went to Cornell Aeronautical Laboratory in Buffalo, New York, where he was successively a research psychologist, senior psychologist, and head of the cognitive systems section. This is also where he conducted the early work on perceptrons, which culminated in the development and hardware construction of the Mark I Perceptron in 1960.[2] This was essentially the first computer that could learn new skills by trial and error, using a type of neural network that simulates human thought processes.

Rosenblatt's research interests were exceptionally broad. In 1959 he went to Cornell's Ithaca campus as director of the Cognitive Systems Research Program and also as a lecturer in the Psychology Department. In 1966 he joined the Section of Neurobiology and Behavior within the newly formed Division of Biological Sciences, as associate professor.[2] Also in 1966, he became fascinated with the transfer of learned behavior from trained to naive rats by the injection of brain extracts, a subject on which he would publish extensively in later years.[3]

In 1970 he became field representative for the Graduate Field of Neurobiology and Behavior, and in 1971 he shared the acting chairmanship of the Section of Neurobiology and Behavior. Frank Rosenblatt died in July 1971, on his 43rd birthday, in a boating accident in Chesapeake Bay.[3] He was eulogized on the floor of the House of Representatives, with tributes including one from former Senator Eugene McCarthy.[4]

Academic interests

Perceptron

Rosenblatt was best known for the Perceptron, an electronic device which was constructed in accordance with biological principles and showed an ability to learn. Rosenblatt's perceptrons were initially simulated on an IBM 704 computer at Cornell Aeronautical Laboratory in 1957.[6] When a triangle was held before the perceptron's eye, it would pick up the image and convey it along a random succession of lines to the response units, where the image was registered.[7]

He developed and extended this approach in numerous papers and a book called Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962.[8] He received international recognition for the Perceptron. The New York Times billed it as a revolution, with the headline "New Navy Device Learns By Doing",[9] and The New Yorker similarly admired the technological advancement.[7]

An elementary Rosenblatt perceptron. A-units are linear threshold elements with fixed input weights; the R-unit is also a linear threshold element, but one that learns according to Rosenblatt's learning rule. Redrawn in[10] from Rosenblatt's original book.[11]

Rosenblatt proved four main theorems. The first states that an elementary perceptron can solve any classification problem if there are no discrepancies in the training set (and if there are sufficiently many independent A-elements). The fourth states that the learning algorithm converges whenever such an elementary perceptron can solve the problem.
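The architecture and learning rule described above can be sketched in a few lines of Python. This is a toy illustration with parameters of my own choosing, not Rosenblatt's actual configuration: fixed random A-units feed one R-unit trained by error correction, on a task constructed so that a solution exists (no discrepancies, as the first theorem requires), so the rule reaches an error-free pass as the convergence theorem guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# A-units: linear threshold elements with FIXED random weights
# (they are never trained). Sizes here are arbitrary toy choices.
n_inputs, n_a = 4, 16
A = rng.choice([-1.0, 1.0], size=(n_a, n_inputs))

def a_layer(x):
    # Fixed random projections passed through a hard threshold.
    return (A @ x > 0).astype(float)

# Build a discrepancy-free task: label every binary input pattern with
# a hidden "teacher" R-unit, keeping only patterns it classifies with a
# clear margin, so the training set is solvable by construction.
X = np.array([[b >> i & 1 for i in range(n_inputs)]
              for b in range(2 ** n_inputs)], dtype=float)
Phi = np.array([a_layer(x) for x in X])
teacher = rng.normal(size=n_a)
margins = Phi @ teacher
keep = np.abs(margins) > 1.0
Phi, y = Phi[keep], np.sign(margins[keep])

# R-unit: the only learned weights, updated by error correction.
w = np.zeros(n_a)
for epoch in range(1000):
    errors = 0
    for phi, target in zip(Phi, y):
        pred = 1.0 if w @ phi > 0 else -1.0
        if pred != target:
            w += target * phi   # Rosenblatt's error-correction update
            errors += 1
    if errors == 0:             # an error-free pass: training has converged
        break

print("epochs used:", epoch + 1, "final-epoch errors:", errors)
```

Because a separating weight vector exists by construction, the perceptron convergence theorem bounds the number of corrections, and training always terminates with zero errors.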

Research on comparable devices was also being done at other institutions, such as SRI, and many researchers had high expectations for what such devices could achieve. The initial excitement subsided somewhat after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections, or A-units with relatively small-diameter receptive fields. They proved that under these constraints an elementary perceptron cannot solve certain problems, such as deciding the connectivity of an input image or the parity of its pixels. Thus Rosenblatt proved the universality of unrestricted elementary perceptrons, whereas Minsky and Papert demonstrated that the abilities of restricted perceptrons are limited. The two results do not contradict each other, but the Minsky and Papert book was widely (and wrongly) cited as proof of strong limitations of perceptrons. (For a detailed elementary discussion of Rosenblatt's first theorem and its relation to Minsky and Papert's work, see a recent note.[10])
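The parity example can be made concrete with a toy demonstration (not Minsky and Papert's formal argument): a single linear threshold unit reading two raw pixels directly, with no A-layer, cannot represent their parity (XOR), so error-correction training never achieves an error-free pass.

```python
import numpy as np

# Four input patterns, each with a constant bias input appended,
# labeled by parity (XOR) of the two pixels: odd parity -> +1.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])

w = np.zeros(3)
best = 0                         # best number of correct points in any epoch
for _ in range(1000):
    correct = 0
    for x, t in zip(X, y):
        pred = 1.0 if w @ x > 0 else -1.0
        if pred != t:
            w += t * x           # same error-correction rule as before
        else:
            correct += 1
    best = max(best, correct)

# No weight vector separates XOR, so every epoch makes at least one
# error and the weights cycle instead of converging.
print("best points correct in any epoch:", best, "of 4")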

After research on neural networks returned to the mainstream in the 1980s, researchers began to study Rosenblatt's work again. Some researchers interpret this new wave of neural network research as contradicting the hypotheses presented in the book Perceptrons and as confirming Rosenblatt's expectations.

The Mark I Perceptron, which is generally recognized as a forerunner to artificial intelligence, currently resides in the Smithsonian Institution in Washington D.C.[3] The Mark I was able to learn, recognize letters, and solve quite complex problems.

Principles of Neurodynamics (1962)

The neuron model employed is a direct descendant of that originally proposed by McCulloch and Pitts. The basic philosophical approach has been heavily influenced by the theories of Hebb and Hayek and the experimental findings of Lashley. The probabilistic approach is shared with theorists such as Ashby, Uttley, Minsky, MacKay, and von Neumann.

— Frank Rosenblatt, Principles Of Neurodynamics, page 5

Rosenblatt's book Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962, summarized his work on perceptrons at the time.[11] The book had previously been issued as an unclassified report, No. 1196-G-8, dated March 15, 1961, through the Defense Technical Information Center.[12]

The book is divided into four parts. The first gives an historical review of alternative approaches to brain modeling, the physiological and psychological considerations, and the basic definitions and concepts of the perceptron approach. The second covers three-layer series-coupled perceptrons: the mathematical underpinnings, performance results in psychological experiments, and a variety of perceptron variations. The third covers multi-layer and cross-coupled perceptrons, and the fourth back-coupled perceptrons and problems for future study.

Rosenblatt used the book to teach an interdisciplinary course entitled "Theory of Brain Mechanisms" that drew students from Cornell's Engineering and Liberal Arts colleges.

Rat brain experiments

Around the late 1960s, inspired by James V. McConnell's experiments with memory transfer in planarians, Rosenblatt began experiments within the Cornell Department of Entomology on the transfer of learned behavior via rat brain extracts. Rats were taught discrimination tasks in a Y-maze and in a two-lever Skinner box. Their brains were then extracted, and the extracts and their antibodies were injected into untrained rats, which were subsequently tested on the same discrimination tasks to determine whether behavior had transferred from the trained to the untrained animals.[13] Rosenblatt spent his last several years on this problem and showed convincingly that the initial reports of large effects were wrong and that any memory transfer was at most very small.[3]

Other interests

Astronomy

Rosenblatt also had a serious research interest in astronomy and proposed a new technique to detect the presence of stellar satellites.[14] He built an observatory on a hilltop behind his house in Brooktondale about 6 miles east of Ithaca. When construction on the observatory was completed, Rosenblatt began an intensive study on SETI (Search for Extraterrestrial Intelligence).[3] He also studied photometry and developed a technique for "detecting low-level laser signals against a relatively intense background of non-coherent light".[13]

Politics

Rosenblatt was very active in liberal politics. He worked in the Eugene McCarthy primary campaigns for president in New Hampshire and California in 1968 and in a series of Vietnam protest activities in Washington.[15]

IEEE Frank Rosenblatt Award

The Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity, annually presents the IEEE Frank Rosenblatt Award in his honor.

References

  1. ^ Tappert, Charles C. (2019). "Who is the Father of Deep Learning?". 2019 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE. pp. 343–348. doi:10.1109/CSCI49370.2019.00067. ISBN 978-1-7281-5584-5. S2CID 216043128. Retrieved 31 May 2021.
  2. ^ a b c Carey, Hugh L. (1971). "Tribute to Dr. Frank Rosenblatt" (PDF). Congressional Record: Proceedings and Debates of the 92d Congress, First Session. US Government Printing Office. pp. 1–7. Archived from the original (PDF) on 26 February 2014. Retrieved 24 Dec 2021.
  3. ^ a b c d e f Emlen, Stephen T.; Howland, Howard C.; O'Brien, Richard D. "Frank Rosenblatt, July 11, 1928 — July 11, 1971" (PDF). Cornell University. Retrieved 24 Dec 2021.
  4. ^ a b Penn, Jonathan (2021-01-11). Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century (Thesis). University of Cambridge. doi:10.17863/cam.63087.
  5. ^ "Editor Miscellany", American Scientist 42, no. 1 (January 1954): 32.
  6. ^ "Hyping Artificial Intelligence, Yet Again". newyorker.com. 31 December 2013.
  7. ^ a b Mason, Harding; Stewart, D.; Brendan, Gill (28 November 1958). "Rival". The New Yorker.
  8. ^ Issued earlier as an unclassified military report, No. 1196-G-8, dated March 15, 1961.
  9. ^ "New Navy Device Learns By Doing". The New York Times. 8 July 1958.
  10. ^ a b Kirdin A, Sidorov S, Zolotykh N (2022). "Rosenblatt's First Theorem and Frugality of Deep Learning". Entropy. 24 (11): 1635. doi:10.3390/e24111635. PMC 9689667. PMID 36359726.
  11. ^ a b Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms; Spartan Books: Washington, DC, USA, 1962.
  12. ^ Defense Technical Information Center (1961-03-15). DTIC AD0256582: PRINCIPLES OF NEURODYNAMICS. PERCEPTRONS AND THE THEORY OF BRAIN MECHANISMS.
  13. ^ a b Rosenblatt, Frank. Cognitive Systems Research Program, Technical Report 72. Cornell University, Ithaca, NY, 1971.
  14. ^ "Frank Rosenblatt - July 11, 1928-July 11, 1971" (PDF). dspace.library.cornell.edu.
  15. ^ "Frank Rosenblatt - July 11, 1928-July 11, 1971" (PDF). dspace.library.cornell.edu.
  • Mason, Harding; Stewart, D.; Gill, Brendan (November 28, 1958). "Rival". The New Yorker. An interview with Frank Rosenblatt and Marshall C. Yovits of the Office of Naval Research.