Research Interests:
Propositional and first-order logic
The thesis studies the possibility of simulating chaotic attractors with neural networks, both at the theoretical level and at the simulation level, as well as the chaotic properties that are an inherent characteristic of a neural network, since the network itself is a nonlinear structure.
This survey paper presents a collection of the most important algorithms for the well-known Traveling Salesman Problem (TSP) using Self-Organizing Maps (SOM). Each of the presented models is characterized by its own features and advantages. The models are compared to each other to identify their differences and similarities, and are classified into two basic categories, namely the enriched and the hybrid models. For each model we present information regarding its performance, the required number of iterations, and the number of neurons needed to solve the TSP. Based on the experimental results, the best model is identified for different situations. The paper is a good starting point for anyone who is interested in solving the TSP with SOM and wants to learn more about this renowned problem.
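As an illustration of the elastic-ring idea common to the surveyed models, the following is a minimal SOM sketch for the TSP. The update rule, the decay schedules, and the three-neurons-per-city heuristic are generic illustrative choices, not the parameters of any particular surveyed model.

```python
import numpy as np

def som_tsp(cities, n_neurons=None, iters=2000, lr=0.8, seed=0):
    """Minimal elastic-ring SOM for the TSP: neurons on a ring are
    pulled toward cities; reading off the ring order gives a tour."""
    rng = np.random.default_rng(seed)
    n = len(cities)
    m = n_neurons or 3 * n                     # common heuristic: ~3 neurons per city
    ring = cities[rng.integers(0, n, m)] + rng.normal(0, 0.01, (m, 2))
    radius = m / 4
    for _ in range(iters):
        city = cities[rng.integers(0, n)]
        winner = np.argmin(np.linalg.norm(ring - city, axis=1))
        # circular neighborhood distance along the ring
        idx = np.arange(m)
        d = np.minimum(np.abs(idx - winner), m - np.abs(idx - winner))
        h = np.exp(-(d ** 2) / (2 * max(radius, 1e-6) ** 2))
        ring += lr * h[:, None] * (city - ring)
        lr *= 0.9997                           # decay the learning rate
        radius *= 0.9997                       # shrink the neighborhood
    # tour: order cities by the ring index of their winning neuron
    order = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])
    return order

cities = np.random.default_rng(1).random((10, 2))
tour = som_tsp(cities)
print(tour)  # a permutation of the city indices 0..9
```

The enriched and hybrid models in the survey differ mainly in how this basic competitive update is augmented (extra neuron insertion/deletion rules, hybridization with local search, and so on).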
The objective of this tutorial is to present the fundamental theory of Karp, Miller and Winograd, whose seminal paper laid the foundations for the systematic description of the organization of computations in systems of uniform recurrence equations by means of graph structures, through the definition of computability conditions and techniques for the construction of one-dimensional and multi-dimensional scheduling functions. Besides describing this theory, the paper presents improvements and revisions made by other authors and, furthermore, points out the differences regarding the conditions of causality and dependency between the general case of systems of recurrence equations and the special case of multiple nested loops.
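The core causality condition for a one-dimensional schedule can be illustrated with a small sketch: for uniform dependence vectors d, a linear schedule t(p) = π·p is valid when π·d ≥ 1 for every d. The dependence vectors below are a hypothetical example (a 2-deep loop nest), not taken from the tutorial.

```python
import numpy as np
from itertools import product

# Hypothetical uniform dependence vectors, e.g. from a 2-deep nested loop
# computing A[i][j] = f(A[i-1][j], A[i][j-1]).
deps = np.array([[1, 0], [0, 1]])

def is_valid_schedule(pi, deps):
    """A linear scheduling function t(p) = pi . p is causal iff
    pi . d >= 1 for every uniform dependence vector d."""
    return all(pi @ d >= 1 for d in deps)

# search small integer vectors for a valid one-dimensional schedule
candidates = [np.array(p) for p in product(range(4), repeat=2)]
valid = [p for p in candidates if is_valid_schedule(p, deps)]
print(valid[0])  # -> [1 1]: the familiar wavefront schedule t(i, j) = i + j
```

Computability in the Karp–Miller–Winograd sense corresponds to the absence of zero-weight cycles in the dependence graph; when no valid one-dimensional π exists, multi-dimensional scheduling functions are needed.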
Optimization is a concept, a process, and a method that all people use on a daily basis to solve their problems. The source of many optimization methods for many scientists has been nature itself and the mechanisms that exist in it. Neural networks, inspired by the neurons of the human brain, have gained a great deal of recognition in recent years and provide solutions to everyday problems. Evolutionary algorithms are known for their efficiency and speed in problems where the optimal solution must be found among a huge number of possible solutions, and also for their simplicity, because their implementation does not require complex mathematics. The combination of these two techniques is called neuroevolution. The purpose of this research is to combine and improve existing neuroevolution architectures to solve time series problems. We propose a new, improved strategy for such a system and compare its performance with an existing system on five different datasets. Based on the final results and a combination of statistical tests, we conclude that our system performs much better than the existing system on all five datasets.
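A minimal sketch of the neuroevolution idea, evolving the weights of a small fixed-topology network for one-step-ahead prediction, might look as follows. The sine series, the 1-4-1 network, and the (μ+λ) strategy are illustrative assumptions, not the proposed system or its five datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy time series: one-step-ahead prediction of a sine wave
# (an illustrative stand-in for real datasets)
t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t)
X, y = series[:-1], series[1:]

def mlp(w, x):
    """Tiny 1-4-1 network; w packs all weights and biases (13 values)."""
    W1, b1 = w[:4], w[4:8]
    W2, b2 = w[8:12], w[12]
    h = np.tanh(np.outer(x, W1) + b1)          # (n, 4) hidden activations
    return h @ W2 + b2

def fitness(w):
    return -np.mean((mlp(w, X) - y) ** 2)      # negative MSE: higher is better

# simple (mu + lambda) evolution strategy over the weight vector
pop = rng.normal(0, 1, (30, 13))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]    # keep the 10 best (elitism)
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 13))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best MSE:", -fitness(best))
```

Real neuroevolution systems typically evolve topology as well as weights and use more sophisticated selection and mutation operators; the sketch only shows the weight-evolution core.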
This paper proposes a neural network architecture for solving systems of non-linear equations. A back propagation algorithm is applied to solve the problem, using an adaptive learning rate procedure based on the minimization of the mean squared error function defined by the system, as well as the network activation function, which can be linear or non-linear. The results obtained are compared with some of the standard global optimization techniques used for solving systems of non-linear equations. The method was tested on some well-known and difficult applications (such as the Gauss-Legendre 2-point formula for numerical integration, a chemical equilibrium application, a kinematics application, a neuropsychology application, a combustion application, and an interval arithmetic benchmark) in order to evaluate the performance of the new approach. Empirical results reveal that the proposed method is characterized by fast convergence and is able to deal with high-dimensional systems of equations.
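The underlying idea, minimizing the mean squared residual of the system with an adaptive learning rate, can be sketched without the network machinery. The toy two-equation system, the finite-difference gradient, and the halve-on-failure/grow-on-success rule below are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

def residuals(v):
    """Toy system: x^2 + y^2 = 4 and x*y = 1 (illustrative example)."""
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def error(v):
    return np.mean(residuals(v) ** 2)          # mean squared residual

def grad(v, h=1e-6):
    """Central-difference gradient of the error function."""
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = h
        g[i] = (error(v + e) - error(v - e)) / (2 * h)
    return g

v, lr = np.array([1.0, 0.5]), 0.1
for _ in range(5000):
    step = v - lr * grad(v)
    if error(step) < error(v):
        v, lr = step, lr * 1.05                # accept and grow the rate
    else:
        lr *= 0.5                              # reject and shrink the rate
print(v, error(v))                             # error is driven toward zero
```

Driving the mean squared residual to zero yields a root of the system; the neural formulation replaces this plain descent with backpropagation through the network that encodes the equations.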
This paper presents an MLP-type neural network with some fixed connections and a backpropagation-type training algorithm that identifies the full set of solutions of a complete system of nonlinear algebraic equations with n equations and n unknowns. The proposed structure is based on a backpropagation-type algorithm with bias units in the output neuron layer. Its novelty with respect to similar structures is the use of the hyperbolic tangent output function combined with an adaptive learning rate for the neurons of the second hidden layer, a feature that adds a high degree of flexibility and parameter tuning during the network training stage. The paper presents the theoretical aspects of this approach as well as a set of experimental results that justify the necessity of such an architecture and evaluate its performance.
The objective of this research is the presentation of a neural network capable of solving complete nonlinear algebraic systems of n equations with n unknowns. The proposed neural solver uses the classical back propagation algorithm with the identity function as the output function, and supports the feature of the adaptive learning rate for the neurons of the second hidden layer. The paper presents the fundamental theory associated with this approach as well as a set of experimental results that evaluate the performance and accuracy of the proposed method against other methods found in the literature.
The objective of this research is the presentation of a feed-forward neural network capable of estimating the 2-cycle fixed points of the Hénon map by solving their defining nonlinear algebraic system. The network uses the back propagation algorithm and solves the aforementioned system for a set of values of the parameters α and β of the Hénon map. Besides the estimation of the fixed points, the paper includes the study of the network convergence and its speed for many different initial conditions. Copyright © 2013 John Wiley & Sons, Ltd.
A technical report that describes the basic theory of Lyapunov exponents, which allow a time series of data to be characterized as chaotic, and presents in detail Wolf's algorithm for the numerical computation of the maximum Lyapunov exponent of an experimentally recorded time series.
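The essence of Wolf's procedure, following a fiducial trajectory, accumulating the logarithmic growth of a small separation, and renormalizing it at each step, can be sketched on a map whose exponent is known. The logistic map at r = 4 (exact maximum exponent ln 2) stands in here for an experimentally recorded series; the step counts and separation size are illustrative choices.

```python
import numpy as np

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def max_lyapunov(x0=0.2, d0=1e-8, steps=20000, transient=1000):
    """Wolf-style estimate of the largest Lyapunov exponent: track a
    fiducial orbit and a perturbed one, sum the log of the separation
    growth, and renormalize the perturbation to size d0 each step."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = logistic(x)
    y = x + d0
    total = 0.0
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        d = abs(y - x)
        if d == 0.0:                    # guard the degenerate coincidence
            d = d0
        total += np.log(d / d0)
        s = 1.0 if y >= x else -1.0
        y = x + s * d0                  # renormalize, keeping the direction
    return total / steps

lam = max_lyapunov()
print(lam)  # for the logistic map at r = 4 the exact value is ln 2 ~ 0.693
```

For a measured time series the same idea is applied in a reconstructed phase space, with the perturbed orbit replaced by the nearest neighboring data point, which is the part Wolf's full algorithm adds on top of this core loop.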


An excerpt from my book on Mathematical Logic.
Research Interests:
The logic of Leibniz
The goal of this study is to present the syllogistic of Aristotle (who is rightly described as the father of Logic), with a very brief reference (due to limited space) to the efforts made to develop this science before and after Aristotle. This study is the first in a series of two; the second, to be published in the near future, deals with the transition from philosophical to mathematical logic.