
    Anastasios Maronidis

    The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and the subsequent analysis of semantic information from digital content, which is imperative ...
    Recently, SSVEP detection from EEG signals has attracted the interest of the research community, leading to a number of well-tailored methods, such as Canonical Correlation Analysis (CCA) and a number of its variants. Despite their effectiveness, their strong dependence on the correct calculation of correlations means these methods may prove inadequate when faced with a deficiency in the number of channels used, the number of available trials or the duration of the acquired signals. In this paper, we propose the use of Subclass Marginal Fisher Analysis (SMFA) to overcome such problems. SMFA can effectively learn discriminative features from signals of poor quality, an advantage expected to offer the robustness needed to handle such deficiencies. In this context, we pinpoint the qualitative advantages of SMFA, and through a series of experiments we demonstrate its superiority over the state of the art in detecting SSVEPs from EEG signals acqui...
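    As a point of reference for the baseline the paper compares against, the sketch below shows standard CCA-based SSVEP detection: a trial is scored against sinusoidal references at each candidate stimulation frequency, and the frequency with the highest canonical correlation wins. The sampling rate, candidate frequencies and the toy signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_score(eeg, freq, fs, n_harmonics=2):
    """Canonical correlation between one EEG trial and sinusoidal
    references at a candidate stimulation frequency.

    eeg : array of shape (n_samples, n_channels).
    """
    t = np.arange(eeg.shape[0]) / fs
    # Sine/cosine references at the fundamental and its harmonics.
    refs = np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)]
    )
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Classify a trial as the stimulus frequency with the highest correlation.
fs, candidate_freqs = 250, [8.0, 10.0, 12.0, 15.0]  # hypothetical setup
trial = np.random.randn(2 * fs, 8)                  # 2 s, 8 channels (toy data)
detected = max(candidate_freqs, key=lambda f: cca_ssvep_score(trial, f, fs))
```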
    The current deliverable summarises the work conducted within task T4.5 of WP4, presenting our proposed approaches for contextualised content interpretation, aimed at gaining insightful contextualised ...
    The current deliverable summarises the work conducted within task T4.4 of WP4, presenting our proposed models for semantically representing digital content and its respective context – the latter r ...
    The notion of signal sparsity has been gaining increasing interest in the information theory and signal processing communities. As a consequence, a plethora of sparsity metrics has been presented in the literature. The appropriateness of these metrics is typically evaluated against a set of objective criteria that have been proposed for assessing the credibility of any sparsity metric. In this paper, we propose a Generalised Differential Sparsity (GDS) framework for generating novel sparsity metrics built on the concept that sparsity is encoded in the differences among the signal coefficients. We rigorously prove that every metric generated using GDS satisfies all the aforementioned criteria, and we provide a computationally efficient formula that makes GDS suitable for high-dimensional signals. The great advantage of GDS is its flexibility to offer sparsity metrics that can be well tailored to requirements stemming from the nature of the data and the proble...
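    The excerpt does not give the GDS formula itself, but the Gini index is a well-known sparsity metric that is likewise built on differences among coefficients (it equals the normalised mean absolute difference) and satisfies the standard credibility criteria, so it serves as a minimal illustration of the difference-based idea:

```python
import numpy as np

def gini_sparsity(x):
    """Gini index of a signal: 0 when all coefficients are equal,
    approaching 1 as the energy concentrates in a single coefficient."""
    c = np.sort(np.abs(np.asarray(x, dtype=float)))  # ascending |coefficients|
    n = c.size
    if c.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum(c / c.sum() * (n - k + 0.5) / n)

print(gini_sparsity([0, 0, 0, 1]))  # 0.75 here; tends to 1 as dimension grows
print(gini_sparsity([1, 1, 1, 1]))  # 0.0: maximally dense
```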
    The remarkable clinical heterogeneity of CLL has prompted several initiatives towards the development of prognostic models aiming to stratify patients into subgroups with distinct outcomes. However, despite progress, the resultant prognostic models, mostly based on Cox regression analysis, have not been adopted in everyday clinical practice, mainly due to their failure to provide sufficiently accurate predictions on a per-patient basis. Here, we approached the issue of prognostication amongst Binet stage A CLL cases following a novel approach, in particular using AdaBoost, an ensemble learning algorithm based on decision trees. AdaBoost jointly considers all available parameters, providing a specific prediction for each patient, unlike Cox regression models, which are based on identifying parameters with independent prognostic significance. In addition, AdaBoost models are completely automated, requiring minimal time for training and prediction generation. This is in contrast to Cox models, which a...
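    A minimal sketch of the modelling setup described above, using scikit-learn's AdaBoost with its default decision-tree base learners. The feature matrix and labels are synthetic stand-ins, since the actual clinico-biological parameters are not listed in this excerpt:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per Binet stage A patient, columns
# holding the jointly considered clinico-biological parameters (toy data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)  # 1 = progressive, 0 = indolent (toy labels)

# The default base learner is a depth-1 decision tree (a stump).
model = AdaBoostClassifier(n_estimators=200)

# Cross-validated per-patient accuracy; meaningless on random toy data,
# shown only for the shape of the workflow.
print(cross_val_score(model, X, y, cv=5).mean())

# Unlike a Cox model, the fitted ensemble yields a direct prediction (and
# probability) for each individual patient:
model.fit(X, y)
print(model.predict_proba(X[:1]))
```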
    Handling big data poses a huge challenge to the computer science community. Some of the most appealing research domains, such as machine learning, computational biology and social networks, are now overwhelmed with large-scale databases that require computationally demanding manipulation. Several techniques have been proposed for dealing with big data processing challenges, including computationally efficient implementations such as parallel and distributed architectures, but most approaches benefit from a dimensionality reduction and smart sampling step. In this context, through a series of groundbreaking works, Compressed Sensing (CS) has emerged as a powerful mathematical framework providing a suite of conditions and methods that allow for almost lossless and efficient data compression. The most surprising outcome of CS is the proof that random projections qualify as a close-to-optimal choice for transforming high-dimensional data into a low-dimensional space in a way that allows for their almost perfect reconstruction. This compression power, along with its simplicity of use, renders CS an appealing method for optimal dimensionality reduction of big data. Although CS is renowned for its capability of providing succinct representations of the data, in this chapter we investigate its potential as a dimensionality reduction technique in the domain of image annotation. More specifically, our aim is to first present the challenges stemming from the nature of big data problems, explain the basic principles, advantages and disadvantages of CS, and identify potential ways of exploiting this theory in the domain of large-scale image annotation. Towards this end, a novel Hierarchical Compressed Sensing (HCS) method is proposed. The new method dramatically decreases the computational complexity while displaying robustness equal to that of the typical CS method. In addition, the connection between the sparsity level of the original dataset and the effectiveness of HCS is established through a series of artificial experiments. Finally, the proposed method is compared with the state-of-the-art dimensionality reduction technique of Principal Component Analysis. The performance results are encouraging, indicating the promising potential of the new method in large-scale image annotation.
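    The HCS method itself is not specified in this excerpt, but the core CS ingredient it builds on, compression by a Gaussian random projection, can be sketched in a few lines. The dimensions below are hypothetical:

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Compress the n-dimensional rows of X down to k dimensions with a
    Gaussian random matrix, the near-optimal choice highlighted by CS."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Random measurement matrix; scaling keeps expected norms unchanged.
    Phi = rng.normal(scale=1.0 / np.sqrt(k), size=(n, k))
    return X @ Phi

# Toy image-annotation features: 10k samples, 4096-dim descriptors -> 256 dims.
X = np.random.randn(10_000, 4096)
X_low = random_projection(X, k=256)

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss),
# which is what makes the projected features usable by annotation models.
```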
    The use of image processing techniques in cultural heritage applications has been gaining increasing interest in the research community. In this paper, an integrated framework that can be used for virtual restoration of the facial region of damaged Byzantine icons is presented. A key aspect of the proposed methodology is the integration of practices adopted by expert icon restorers into a machine-based expert system that incorporates modules for damage detection, shape restoration and texture restoration. Damage detection follows a residual-based approach, while the shape restoration method utilizes a 3D shape model generated by incorporating a set of geometrical rules defined by expert Byzantine-style iconographers. Texture restoration is based on the recursive Principal Component Analysis (PCA) technique, so that combinations of colors learned from a training set are applied to the damaged icon regions. All modules developed as part of this framework are incorporated into a user-friendly application that can be used by amateur or professional Byzantine icon restorers and conservators. The potential of the developed tool has been validated through a quantitative experimental process and a user-based evaluation.
    Byzantine art abounds with icons that portray sacred faces. However, a large number of icons of historical value are either partially or totally damaged and thus in need of conservation. The detection and assessment of damage in cultural heritage artifacts form an integral part of the conservation process. In this paper, a method that can be used for assessing the damage on faces appearing in Byzantine icons is presented. The main approach involves the estimation of the residuals obtained after the coding and reconstruction of face image regions using trained Principal Component Analysis (PCA) texture models. The extracted residuals can be used as the basis for obtaining information about the amount of damage and the positions of the damaged regions. Due to the specific nature of Byzantine icons, several variations of the basic approach are tested through a quantitative experimental evaluation, so that the methods best suited to the specific application domain are identified. As part of the experimental evaluation, holistic as well as patch-decomposition techniques have been utilized in order to capture the global and local information of the images, respectively. According to the results, it is possible to detect and localize with reasonable accuracy the damaged areas of faces appearing in Byzantine icons.
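    A minimal sketch of the residual-based idea: fit a PCA texture model on undamaged face regions, reconstruct a query patch, and flag poorly reconstructed pixels as damaged. The patch size, component count and threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit a PCA texture model on vectorised face regions from undamaged icons
# (rows = images, columns = pixels); toy stand-in for real training patches.
train = np.random.rand(300, 64 * 64)
pca = PCA(n_components=40).fit(train)

def damage_map(patch, threshold=0.1):
    """Residual between a 64x64 patch and its PCA reconstruction; pixels
    the model cannot explain well are flagged as damaged."""
    recon = pca.inverse_transform(pca.transform(patch.reshape(1, -1)))
    residual = np.abs(patch.ravel() - recon.ravel())
    return (residual > threshold).reshape(64, 64)
```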
    In this paper, the robustness of appearance-based subspace learning techniques for facial expression recognition under geometrical transformations is explored. A plethora of facial expression recognition algorithms is presented and tested using three well-known facial expression databases. Although it is common knowledge that appearance-based methods are sensitive to image registration errors, no systematic experiment has been reported in the literature, and the problem is considered solved a priori. However, when it comes to automatic real-world applications, inaccuracies are expected and systematic preprocessing is needed. After a series of experiments, we observed a strong correlation between performance and the position of the bounding box. The mere investigation of the bounding box's optimal characteristics is insufficient, due to the inherent constraints a real-world application imposes, and an alternative approach is needed. Based on systematic experiments, enrichment of the database with translated, scaled and rotated images is proposed to confront the low robustness of subspace techniques for facial expression recognition.
    In this paper, the problem of frontal view recognition on still images is addressed using subspace learning methods. The aim is to acquire the frontal images of a person in order to achieve better results in subsequent face or facial expression recognition. For this purpose, we utilize a relatively new subspace learning technique, Clustering-based Discriminant Analysis (CDA), against two
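    CDA's exact formulation is not given in this excerpt; the sketch below only approximates its underlying idea, clustering each class into subclasses and then running discriminant analysis on the subclass labels, using k-means and LDA as stand-ins:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_cda_like(X, y, n_subclasses=2, seed=0):
    """Split each class into subclasses via k-means, then fit LDA on the
    subclass labels so the projection separates clusters rather than
    whole classes. A rough approximation of the CDA idea, not the
    authors' formulation."""
    sub_labels = np.empty(len(y), dtype=int)
    sub_to_class, next_id = {}, 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=n_subclasses, n_init=10,
                    random_state=seed).fit(X[idx])
        for s in range(n_subclasses):
            sub_to_class[next_id + s] = c
        sub_labels[idx] = km.labels_ + next_id
        next_id += n_subclasses
    return LinearDiscriminantAnalysis().fit(X, sub_labels), sub_to_class

# Predict the subclass, then map it back to frontal / non-frontal:
# lda, mapping = fit_cda_like(X_train, y_train)
# y_pred = np.vectorize(mapping.get)(lda.predict(X_test))
```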
    An integrated tool that can be used for damage detection, shape restoration and texture restoration of faces appearing in Byzantine icons is presented. The damage detection process involves the estimation of residuals obtained after the coding and reconstruction of face image regions using trained Principal Component Analysis (PCA) texture models. Shape restoration is accomplished using a model-based approach that employs a 3D shape model generated by taking into account a set of geometrical rules adopted by Byzantine-style iconographers. Texture restoration is performed using a customized version of the recursive PCA technique. For this purpose, dedicated PCA texture models representing different categories of faces appearing in icons are used. All methods developed as part of the project are incorporated into a user-friendly application which can be utilized by both amateurs and professionals. Indicative visual and quantitative results show the potential of the developed application.
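    The paper's recursive PCA variant is customized, so the following is only a generic sketch of PCA-based texture restoration in that spirit: damaged pixels start from the model mean and are repeatedly replaced by the model's reconstruction while intact pixels stay fixed:

```python
import numpy as np
from sklearn.decomposition import PCA

def restore_texture(patch, damaged, pca, n_iter=20):
    """Iterative PCA inpainting.

    patch   : flattened face region; values at damaged pixels are arbitrary.
    damaged : boolean mask of the same length, True where damaged.
    pca     : PCA texture model fitted on clean icon faces.
    """
    x = patch.astype(float).copy()
    x[damaged] = pca.mean_[damaged]  # neutral initial guess
    for _ in range(n_iter):
        recon = pca.inverse_transform(pca.transform(x.reshape(1, -1))).ravel()
        x[damaged] = recon[damaged]  # only damaged pixels are updated
    return x
```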
    In this paper, the robustness of appearance-based subspace learning techniques under geometrical transformations of the images is explored. A number of such techniques are presented and tested using four facial expression databases. A strong correlation between recognition accuracy and image registration error has been observed. Although it is common knowledge that appearance-based methods are sensitive to image registration errors, no systematic experiment has been reported in the literature. As a result of these experiments, enrichment of the training set with translated, scaled and rotated images is proposed to confront the low robustness of these techniques in facial expression recognition. Moreover, person-dependent training is shown to be much more accurate for facial expression recognition than generic learning.
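    A minimal sketch of the proposed training set enrichment, generating translated, rotated and scaled variants of each face image; the offsets, angles and scales below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def enrich(image, shifts=(-2, 2), angles=(-5, 5), scales=(0.95, 1.05)):
    """Return the original grayscale face image plus translated, rotated
    and scaled variants, to be added to the training set."""
    variants = [image]
    for d in shifts:  # translations along each axis, in pixels
        variants.append(shift(image, (d, 0), mode='nearest'))
        variants.append(shift(image, (0, d), mode='nearest'))
    for a in angles:  # in-plane rotations, in degrees
        variants.append(rotate(image, a, reshape=False, mode='nearest'))
    for s in scales:  # isotropic rescaling, cropped/padded to original size
        z = zoom(image, s)
        out = np.zeros_like(image)
        h, w = min(image.shape[0], z.shape[0]), min(image.shape[1], z.shape[1])
        out[:h, :w] = z[:h, :w]
        variants.append(out)
    return variants
```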