


Topic

PARALLEL NETWORKS THAT LEARN TO PRONOUNCE ENGLISH TEXT

Problem: The main problem addressed in "Parallel Networks That Learn to Pronounce English Text" is that English text contains many letters and many different types of sounds, so training a system to pronounce the input data correctly is difficult.

What did the authors do? Terrence Sejnowski and Charles Rosenberg took English letters as input to an artificial neural network and verified that the output of the model gave the English pronunciation of the words.

Related Works? Drs. Alfonso Caramazza, Stephen Hanson, and others contributed work that helped with the concepts of language and learning, and Dr. Stephen Hanson and Andrew Olson made real contributions to the statistical analysis of the hidden units. A paper related to this research is discussed below: "A Learning Algorithm for Boltzmann Machines" is related work to the parallel network that learns to pronounce English text. It describes a type of parallel constraint-satisfaction network, called a Boltzmann Machine, that is capable of learning the underlying constraints that characterize a domain simply by being shown examples from the domain. When shown any particular example, the network can then interpret it by finding values of the variables in its internal model that would generate the example. In this sense, finding values of the variables corresponds to pronouncing the English text.

Key Role of the paper? The key role of the paper is pronouncing English letters in their different possible ways. Consider one possibility: the letter "o" at the start of a word is pronounced one way, while the "o" in a word such as "stone" can be pronounced differently. For the machine to do this, it has to be trained on a set of inputs, namely the English letters in context.

Data Sets? The training data used in the paper consists of regular English text (letters), and the test data consists of words given as input to the system in an understandable (known) language.

Input data: The input layer has seven groups of units, and there is one group of units in each of the other two layers.

Output data: The desired output is the sound associated with the center, or fourth, letter of this seven-letter window. The other six letters provide partial context for this decision.

Approach? Back-propagation and the Boltzmann machine approach are applied in this paper to transform the words or letters into speech. Sometimes these are implemented as a knowledge representation in the theory or the network.
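As a rough illustration of the seven-letter window above, here is a minimal sketch of the input encoding in Python. The 29-symbol alphabet (26 letters plus space and punctuation) and the resulting 203 input units are assumptions for illustration; the summary only states that there are seven groups of input units.

```python
# Minimal sketch of a NETtalk-style input window (sizes are assumptions).
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."   # 29 symbols, assumed
WINDOW = 7                                    # seven-letter window, as described

def encode_window(text, center):
    """One-hot encode the 7-letter window centered on position `center`."""
    vec = np.zeros(WINDOW * len(ALPHABET))
    for i in range(WINDOW):
        pos = center - WINDOW // 2 + i
        ch = text[pos] if 0 <= pos < len(text) else " "
        vec[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

# The training target for this input would be the sound of the center letter.
x = encode_window("a cat sat", center=3)      # window around the 'a' in "cat"
print(x.shape)                                 # (203,) with the assumed alphabet
```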

For comparison, storing the more than 20,012 words in the dictionary, including stress information, required nearly 2,000,000 bits on the device (roughly 100 bits per word). The letter-to-sound analysis reports the average activation level of each hidden unit for each letter-to-sound correspondence over the training data.

Experimental Results? Finally, the inputs are given and the outputs are displayed as pronunciations of the words. The main finding is the complete separation of consonants and vowels: clustering produces two groups with different patterns. Within them, vowels are grouped mostly by the letter, while consonants are grouped more by the sound. For example, if the word "wow" is typed, the neural network returns a pronunciation for it according to the training given by the teacher.

Conclusions and Ideas? NETtalk is a model that reproduces, at a greatly reduced scale, many aspects of learning. The network is trained on letters and their sounds and passes through several stages with different levels of performance. The network also recovers from damage more easily when some of its initial structure remains. NETtalk can be used as a research tool for different aspects of network coding, scaling, and training in this domain. The spacing effect seen in NETtalk may generalize to more complex memory systems that use distributed representations to store information.
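The hidden-unit analysis mentioned above can be pictured as follows. This is a hedged sketch, not the authors' code: the array sizes, the number of letter-to-sound correspondences, and the use of average-linkage clustering are assumptions made for illustration.

```python
# Sketch: average each hidden unit's activation per letter-to-sound
# correspondence, then cluster the averages (sizes are placeholders).
import numpy as np
from scipy.cluster.hierarchy import linkage

n_correspondences, n_examples, n_hidden = 79, 200, 80   # assumed sizes
# activations[i, j, k]: hidden unit k on the j-th example of correspondence i
activations = np.random.rand(n_correspondences, n_examples, n_hidden)

mean_activity = activations.mean(axis=1)         # one 80-vector per correspondence
tree = linkage(mean_activity, method="average")  # hierarchical clustering
# Plotting this tree is what reveals the vowel/consonant split described above.
print(tree.shape)                                 # (78, 4)
```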

Topic
SEXNET: A NEURAL NETWORK IDENTIFIES SEX FROM HUMAN FACES

Problem? The problem is identifying sex from faces. Humans play a key role in making this determination visually, but machines may not match them at this task. Here a system has to be trained so that the differences between machines and humans can be identified.

What did the authors do? B. A. Golomb, D. T. Lawrence, and T. J. Sejnowski considered a neural network trained to discriminate the sex of human faces and tested it with a set of 90 exemplars. The images are reduced to 30x30 pixels and compressed using a 900x40x900 fully connected back-propagation network.

Related Works? Dr. A. O'Toole contributed the images used in this paper, and Shona Chattarji helped with the graphics. The research was supported by the Drown foundation. "Anatomic basis and neurobehavioral mechanisms" is related work to the neural network that identifies sex from human faces: the neural mechanisms of face processing in man are (1) designed to deal with the configuration of upright faces and (2) located predominantly in the right cerebral hemisphere. Face processing in monkeys and man appears to use qualitatively similar mechanisms, but the extent and/or direction of cerebral asymmetry in these mechanisms may not be similar. In that sense, the similarity of face-processing mechanisms in man and monkey parallels the comparison of sex identification between humans and machines.

Key Idea of the paper? The main idea of the paper is the question: how is sex recognized from faces? To answer it, a neural network is given a set of images that are compressed and tested. An experiment is performed on photographs of young adults with their plain faces visible.

Data set? The training data used in the paper consists of 90 young adult face images with no facial hair, no jewelry, and no makeup, photographed against white cloth; the test data consists of face images given as input to the system.

Input Data: 90 images at different pixel resolutions, adjusted according to brightness.

Output Data: First the face in each image is rotated, and in the same way the eyes are translated. The face and eyes are then scaled and cropped to the region around the eyes and mouth. The image is reduced to 30x30 pixels with 12 pixels between the eyes and 8 pixels from eyes to mouth. Finally, the 256 gray-level images are adjusted for brightness to obtain a clean image. The preprocessing is sketched below.
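The preprocessing just described can be approximated as follows. This is a minimal sketch under stated assumptions, not the authors' pipeline: the function name, the eye coordinates, the omission of the rotation step, and the zero-mean brightness normalization are all illustrative choices.

```python
# Sketch of the face preprocessing: scale so the eyes are ~12 pixels apart,
# crop a 30x30 patch around the eyes and mouth, normalize brightness.
import numpy as np
from scipy.ndimage import zoom

def preprocess_face(img, left_eye, right_eye):
    """img: 2-D array of gray levels (0-255); eye positions as (row, col)."""
    eye_dist = np.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    scale = 12.0 / eye_dist                     # eyes end up ~12 pixels apart
    img = zoom(img, scale)
    mid_r = int(round((left_eye[0] + right_eye[0]) / 2 * scale))
    mid_c = int(round((left_eye[1] + right_eye[1]) / 2 * scale))
    patch = img[mid_r - 10:mid_r + 20, mid_c - 15:mid_c + 15]   # 30x30 crop
    return (patch - patch.mean()) / (patch.std() + 1e-8)        # brightness

face = np.random.rand(128, 128) * 255           # placeholder image
x = preprocess_face(face, left_eye=(50, 45), right_eye=(50, 83))
print(x.shape)                                   # (30, 30)
```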

Approach? The human faces are processed in two stages, image compression and sex discrimination, each a fully connected three-layer network with two biases. The image is compressed with the scheme of Cottrell and Fleming, a method used to build a face-identity network. The network maps the 30x30 images, i.e. 900 input units, through 40 hidden units back to an output equal to the desired (original) input, so the method is twofold. The following describes the compression network and the sex network used for identifying the human faces.
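As a rough illustration of the compression stage, here is a minimal 900-40-900 back-propagation sketch in plain numpy, assuming sigmoid units, no biases, and pixel values scaled to [0, 1]; the learning rate and initialization are also assumptions.

```python
# Sketch of the 900-40-900 compression (identity-mapping) network.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (900, 40))   # flattened 30x30 image -> 40 hidden units
W2 = rng.normal(0, 0.01, (40, 900))   # 40 hidden units -> reconstructed image

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compress(x):                       # 40-unit code for one face
    return sigmoid(x @ W1)

def train_step(x, lr=0.1):
    """One back-propagation step toward reproducing the input image."""
    global W1, W2
    h = compress(x)
    y = sigmoid(h @ W2)
    delta_out = (y - x) * y * (1 - y)               # output-layer error signal
    delta_hid = (delta_out @ W2.T) * h * (1 - h)    # hidden-layer error signal
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

train_step(np.random.rand(900))        # placeholder image in [0, 1]
```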

The SexNet portion took the 40 hidden units of the compression network as its inputs and had one output. The network was trained to produce a 1 for men and a 0 for women: if the output is greater than 0.5 the system counts the face as male, and if the value is less than 0.5 it counts the face as female.
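Here is a minimal sketch of this decision stage, assuming one small hidden layer (the 10-unit size mentioned in the results below) and untrained placeholder weights; it only illustrates the 0.5 threshold on a single sigmoid output.

```python
# Sketch of the SexNet decision stage: 40-unit code -> hidden layer -> 1 output.
import numpy as np

rng = np.random.default_rng(1)
V1 = rng.normal(0, 0.1, (40, 10))      # 40 compressed inputs -> 10 hidden units
V2 = rng.normal(0, 0.1, (10, 1))       # 10 hidden units -> single output unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_sex(code):
    """code: 40-unit output of the compression network."""
    out = sigmoid(sigmoid(code @ V1) @ V2)[0]
    return "male" if out > 0.5 else "female"   # 1 => male, 0 => female

print(predict_sex(rng.random(40)))     # placeholder code, untrained weights
```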
Results? Studies of 5 humans on the 90 faces revealed errors of 8, 10, 12, 8, and 14, corresponding to 8.9, 11.1, 13.3, 8.9, and 15.5%, with an average error of 11.6%. With 10 hidden units, the network gave errors on the test faces of 15, 0, 20, 0, 20, 10, 0, and 10%, for an average of 8.1%. In one trial, SexNet correctly assigned all ten test faces; the one apparent disagreement was a clearly female face whose sex value had been mistranscribed.

Conclusions and ideas? Finally, we find that SexNet's performance was similar to that of the humans, and not only in percent error: it not only classified correctly the same faces that humans classified correctly, but also had difficulty on the faces that were difficult for humans. SexNet appears to share the human strategy of developing a special category when confronted with a difficult training face and feeding the facial information into it.

Topic

Efficient training of artificial neural networks for autonomous navigation

Problem: The problem in efficient training of artificial neural networks for autonomous navigation is training neural networks in real time to perform difficult perception tasks.

What did the authors do? In this paper, the authors propose a training "on-the-fly" scheme in which the network is taught to imitate the human driver under actual driving conditions. Until then, artificial neural networks had never been successfully trained using sensor data on a real-time perception task. The authors tested ALVINN on a variety of road conditions, such as single-lane roads, highways, and multi-lane roads.

Related Works? Crisman J.D. and Thorpe C.E. contributed work on road following and navigation that uses human activity derived from sensor data to help people navigate, in particular to retrace a trail previously taken by that person or another person. In that sense, inferring human activity from sensors parallels learning the actions taken by the human driver in different critical situations.

Key Idea of the paper? The key idea of the paper is that ALVINN is a system in which the driver drives the vehicle; if any mistakes happen, the error is automatically and easily captured using the sensors and the video, so the system can navigate very accurately to its destination.

Data set? The training data used in this paper feeds a single-hidden-layer back-propagation network.

Input data: The input layer is a 30x32-unit retina onto which a video camera image is projected.

Output data: The output layer consists of 30 units giving a linear representation of the direction the vehicle should travel to stay on the road; the middle output unit represents the travel-straight-ahead condition.

Approach? System flow: from camera input to steering output, as sketched below.
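To make the input/output flow concrete, here is a minimal sketch of an ALVINN-style forward pass: a 30x32 retina, one hidden layer, and 30 steering output units. The hidden-layer size, the weights, and the peak-decoding rule are assumptions for illustration, not values given in this summary.

```python
# Sketch of an ALVINN-style forward pass (sizes beyond 30x32 and 30 are assumed).
import numpy as np

rng = np.random.default_rng(2)
W_in = rng.normal(0, 0.05, (30 * 32, 5))    # retina -> 5 hidden units (assumed)
W_out = rng.normal(0, 0.05, (5, 30))        # hidden -> 30 steering units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def steering(image):
    """image: 30x32 camera frame; returns a steering value in [-1, 1]."""
    out = sigmoid(sigmoid(image.ravel() @ W_in) @ W_out)
    # The most active output unit marks the direction: the middle unit means
    # straight ahead, the two ends mean sharp left and sharp right turns.
    return (np.argmax(out) - 14.5) / 14.5

print(steering(np.random.rand(30, 32)))      # untrained, so an arbitrary value
```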

Implementation: The ALVINN network is given road images as input and the steering direction as output. A back-propagation algorithm is used to relate each image to the correct steering direction. A weight-change momentum factor and a constant learning rate are used to obtain quick results for the output representation and the exemplar scheme. The steering direction is the one that keeps the vehicle centered at the correct position on the road. The original image is shifted by different amounts relative to the center of the road, and the correct steering direction performed by the driver for the original image is altered accordingly for each of the shifted images. A random-replacement process then takes place: one forward and one backward pass of the back-propagation algorithm is performed on the 200 exemplars, and the process is repeated. This is performed twice on the sensor-based autonomous system.

Results: ALVINN learns different feature detectors to find the position of the vehicle in different driving situations; these detectors are used to select the correct steering direction. Finally, symbolic knowledge sources capable of planning a route and maintaining the vehicle's position on a map can be integrated into the system.

Conclusions and Ideas: ALVINN is able to drive the vehicle in many different situations with autonomous navigation, and the result is that ALVINN was able to drive in all conditions, unlike earlier autonomous navigation systems. Future work includes developing connectionist and non-connectionist techniques for combining networks trained for individual driving situations into a single system capable of handling them all, and integrating symbolic sources for planning a route and maintaining the vehicle's position on a map into the system.
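The shifted-exemplar and random-replacement scheme described above might look roughly like the following. The shift amounts, the steering correction per pixel, the bump-shaped output target, and the buffer handling are all assumptions for illustration, not the authors' code.

```python
# Sketch of on-the-fly exemplar generation with a 200-slot replacement buffer.
import numpy as np

rng = np.random.default_rng(3)
buffer_images = np.zeros((200, 30, 32))
buffer_targets = np.zeros((200, 30))

def steering_target(direction, width=30):
    """Bump over the 30 output units, centered on `direction` in [-1, 1]."""
    centers = np.linspace(-1, 1, width)
    return np.exp(-((centers - direction) ** 2) / 0.05)

def add_exemplars(image, driver_direction):
    """Shift the camera image sideways and correct the steering for each shift."""
    for shift in (-4, -2, 0, 2, 4):              # pixel shifts (assumed values)
        shifted = np.roll(image, shift, axis=1)
        corrected = np.clip(driver_direction + 0.1 * shift, -1, 1)
        slot = rng.integers(200)                 # random replacement in the buffer
        buffer_images[slot] = shifted
        buffer_targets[slot] = steering_target(corrected)

add_exemplars(np.random.rand(30, 32), driver_direction=0.0)
# Each training cycle then runs one forward and one backward back-propagation
# pass (with momentum) over the 200 buffered exemplars, and the cycle repeats.
```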
