The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and the subsequent analysis of semantic information from digital content, which is imperati ...
Recently, SSVEP detection from EEG signals has attracted the interest of the research community, leading to a number of well-tailored methods, such as Canonical Correlation Analysis (CCA) and a number of its variants. Despite their effectiveness, these methods depend strongly on the correct calculation of correlations and may therefore prove inadequate when the number of channels, the number of available trials, or the duration of the acquired signals is deficient. In this paper, we propose the use of Subclass Marginal Fisher Analysis (SMFA) to overcome such problems. SMFA can effectively learn discriminative features from poor-quality signals, an advantage that is expected to offer the robustness needed to handle such deficiencies. In this context, we pinpoint the qualitative advantages of SMFA, and through a series of experiments we demonstrate its superiority over the state-of-the-art in detecting SSVEPs from EEG signals acqui...
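For context, the CCA baseline mentioned above scores a candidate stimulus frequency by the largest canonical correlation between the multi-channel EEG segment and sinusoidal reference signals at that frequency and its harmonics. A minimal sketch follows (toy signal; the sampling rate, duration and harmonic count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_ssvep_score(eeg, freq, fs, n_harmonics=2):
    """Score one candidate stimulus frequency against an EEG segment."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])
    return max_canonical_corr(eeg, refs)

# Toy example: 2-channel "EEG" carrying a 10 Hz SSVEP component in noise.
rng = np.random.default_rng(0)
fs, dur = 256, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + p) for p in (0.0, 0.7)])
eeg += 0.8 * rng.normal(size=eeg.shape)
scores = {f: cca_ssvep_score(eeg, f, fs) for f in (8.0, 10.0, 12.0)}
```

The detected frequency is the candidate with the highest score; the paper's point is precisely that this correlation-based score degrades when channels, trials or signal duration are scarce.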
The current deliverable summarises the work conducted within task T4.5 of WP4, presenting our proposed approaches for contextualised content interpretation, aimed at gaining insightful contextualis ...
The current deliverable summarises the work conducted within task T4.4 of WP4, presenting our proposed models for semantically representing digital content and its respective context – the latter r ...
The notion of signal sparsity has been gaining increasing interest in the information theory and signal processing communities. As a consequence, a plethora of sparsity metrics has been presented in the literature. The appropriateness of these metrics is typically evaluated against a set of objective criteria that have been proposed for assessing the credibility of any sparsity metric. In this paper, we propose a Generalised Differential Sparsity (GDS) framework for generating novel sparsity metrics whose functionality is based on the concept that sparsity is encoded in the differences among the signal coefficients. We rigorously prove that every metric generated using GDS satisfies all the aforementioned criteria, and we provide a computationally efficient formula that makes GDS suitable for high-dimensional signals. The great advantage of GDS is its flexibility to offer sparsity metrics that can be well-tailored to specific requirements stemming from the nature of the data and the proble...
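The GDS generating formula itself is cut off in this excerpt, so the sketch below does not implement GDS. Instead it illustrates the underlying idea with the Gini index, a classical sparsity metric that satisfies the standard objective criteria and can be expressed entirely through differences among the sorted, normalised signal coefficients:

```python
def gini_sparsity(x):
    """Gini index of |x|: 0 for a perfectly uniform vector, approaching 1
    as all the energy concentrates in a single coefficient."""
    a = sorted(abs(v) for v in x)
    n = len(a)
    total = sum(a)
    return 1.0 - 2.0 * sum(v / total * (n - k - 0.5) / n
                           for k, v in enumerate(a))

# A 1-sparse vector scores far higher than a flat one:
# gini_sparsity([0, 0, 0, 1]) = 0.75, gini_sparsity([1, 1, 1, 1]) = 0.0
```

Metrics of this family are scale-invariant (doubling every coefficient leaves the score unchanged), which is one of the objective criteria the abstract alludes to.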
The remarkable clinical heterogeneity of CLL has prompted several initiatives towards the development of prognostic models aiming to stratify patients into subgroups with distinct outcomes. However, despite progress, the resultant prognostic models, mostly based on Cox regression analysis, have not been adopted in everyday clinical practice, mainly due to their failure to provide sufficiently accurate predictions on a per-patient basis. Here, we approached the issue of prognostication amongst Binet stage A CLL cases following a novel approach, in particular using AdaBoost, an ensemble learning algorithm based on decision trees. AdaBoost jointly considers all available parameters, providing a specific prediction for each patient, unlike Cox regression models, which are based on identifying parameters with independent prognostic significance. In addition, AdaBoost models are completely automated, with minimal time for training and prediction generation. This is in contrast to Cox models which a...
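The study's clinical feature set and model configuration are not reproduced in this excerpt; the sketch below only illustrates the AdaBoost mechanism described above, in which decision stumps are re-weighted round by round so that later stumps focus on previously misclassified cases. The data here are hypothetical toy values, not patient data:

```python
import math

def train_adaboost(X, y, n_rounds=10):
    """AdaBoost with one-feature decision stumps; labels y are in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                       # uniform initial sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None                         # (weighted error, feature, threshold, polarity)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[f] <= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        preds = [pol if x[f] <= t else -pol for x in X]
        # Re-weight: misclassified samples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((alpha, f, t, pol))
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * (pol if x[f] <= t else -pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy two-feature dataset with a clean split on the first feature.
X = [[1.0, 0.2], [0.9, 0.4], [3.0, 0.1], [3.2, 0.3]]
y = [-1, -1, 1, 1]
model = train_adaboost(X, y, n_rounds=5)
```

Because every stump sees all features jointly through the shared sample weights, the ensemble yields a per-sample prediction rather than a list of independently significant parameters, which is the contrast with Cox regression drawn in the abstract.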
Handling big data poses a huge challenge in the computer science community. Some of the most appealing research domains, such as machine learning, computational biology and social networks, are now overwhelmed with large-scale databases that require computationally demanding manipulation. Several techniques have been proposed for dealing with big data processing challenges, including computationally efficient implementations such as parallel and distributed architectures, but most approaches benefit from a dimensionality reduction and smart sampling step applied to the data. In this context, through a series of groundbreaking works, Compressed Sensing (CS) has emerged as a powerful mathematical framework providing a suite of conditions and methods that allow for almost lossless and efficient data compression. The most surprising outcome of CS is the proof that random projections qualify as a close-to-optimal choice for transforming high-dimensional data into a low-dimensional space in a way that allows for their almost perfect reconstruction. This compression power, along with its simplicity of use, renders CS an appealing method for optimal dimensionality reduction of big data. Although CS is renowned for its capability of providing succinct representations of the data, in this chapter we investigate its potential as a dimensionality reduction technique in the domain of image annotation. More specifically, our aim is to initially present the challenges stemming from the nature of big data problems, explain the basic principles, advantages and disadvantages of CS, and identify potential ways of exploiting this theory in the domain of large-scale image annotation. Towards this end, a novel Hierarchical Compressed Sensing (HCS) method is proposed. The new method dramatically decreases the computational complexity while displaying robustness equal to that of the typical CS method.
Moreover, the connection between the sparsity level of the original dataset and the effectiveness of HCS is established through a series of artificial experiments. Finally, the proposed method is compared with the state-of-the-art dimensionality reduction technique of Principal Component Analysis. The performance results are encouraging, indicating the promising potential of the new method in large-scale image annotation.
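The HCS construction itself is not detailed in this excerpt. As a self-contained illustration of the CS claim above, the sketch below compresses a 3-sparse signal with a random Gaussian projection and recovers it almost perfectly with Orthogonal Matching Pursuit, one standard CS reconstruction algorithm. The dimensions and sparsity level are illustrative assumptions; numpy is required:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3             # ambient dim, measurements, sparsity level

# A k-sparse ground-truth signal and a random Gaussian projection of it.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                        # compressed, low-dimensional measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy sparse recovery from y = A x."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual...
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        # ...then re-fit all selected columns jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
```

The 100-dimensional signal is reconstructed from only 50 random measurements, which is the "almost perfect reconstruction from random projections" property that motivates using CS for dimensionality reduction.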
In this paper, the problem of frontal view recognition in still images is addressed using subspace learning methods. The aim is to acquire the frontal images of a person in order to achieve better results in subsequent face or facial expression recognition. For this purpose, we utilise a relatively new subspace learning technique, Clustering-based Discriminant Analysis (CDA), against two