Publication in the conference proceedings of EUSIPCO, Trieste, Italy, 1996
Abstract This paper presents the core idea of a research line dedicated to tools and utilities for friendlier information access. To make future multimedia systems smarter, new mechanisms for high-level data and user understanding need to be embedded in multimedia communication systems.
This paper deals with personalization of navigation in educational content, introduced in the competence-based instructional design system InterMediActor. The system constructs an individualized navigation graph for each student and thus suggests the learning objectives the student is most prepared to attain. The navigation tools rely on the graph of dependencies between competences and on the student model. We use fuzzy ...
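As a minimal illustration of the prerequisite-graph idea above, the following sketch computes which competences a student is "most prepared" to attempt next, given a dependency graph and the set already acquired. This is a crisp stand-in; the paper's system uses a fuzzy student model, and the function and graph here are illustrative assumptions, not InterMediActor's actual data structures.

```python
def ready_competences(prereqs, acquired):
    """prereqs: dict mapping competence -> list of prerequisite competences.
    Returns the competences not yet acquired whose prerequisites are all met."""
    acquired = set(acquired)
    return sorted(c for c, deps in prereqs.items()
                  if c not in acquired and all(d in acquired for d in deps))

# Toy dependency graph: D requires B and C, which both require A.
graph = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
nxt = ready_competences(graph, {"A"})   # student has acquired A
```

After acquiring A, the student is ready for B and C but not yet D.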
High-availability communication networks with very low failure rates are often designed by using physical diversity, i.e., the traffic between a given pair of nodes is routed over several physically disjoint paths. The selection of the pair of routes that maximizes the connectivity of a node is not an easy problem, because such connectivity cannot be expressed as an additive function of the availability of links and ...
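The non-additivity can be seen with a minimal series-parallel sketch (an assumed textbook model, not the paper's design method): link availabilities multiply along a path, and two disjoint paths combine so the pair is down only when both paths fail simultaneously.

```python
def path_availability(link_avails):
    """Availability of a single path: product of its link availabilities (series)."""
    a = 1.0
    for x in link_avails:
        a *= x
    return a

def two_path_availability(path1, path2):
    """Availability of a node pair served by two physically disjoint paths:
    the pair is unavailable only if both paths are down at the same time."""
    a1 = path_availability(path1)
    a2 = path_availability(path2)
    return 1.0 - (1.0 - a1) * (1.0 - a2)

# Two 3-link disjoint paths with 99.9%-available links: the combined
# availability is far better than either path alone, and clearly not a
# sum of per-link availabilities.
a = two_path_availability([0.999] * 3, [0.999] * 3)
```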
Under the project "Nuevos Algoritmos para la Gestión Eficiente de Contenidos Multimedia en Redes de Comunicaciones Móviles" (NAGEC), we propose a new mechanism for image search and retrieval based on relevance feedback. The proposed architecture consists of a neural network and a thesaurus. The neural network extracts two parameters from the images: texture and color. The thesaurus captures the semantic relations between the terms describing the images of the ...
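For readers unfamiliar with relevance feedback, the classical Rocchio update gives the flavor of the loop: the query vector moves toward the features of images the user marks relevant and away from non-relevant ones. This is a standard analogue only; the system described above uses a neural network plus a thesaurus, which is not reproduced here.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Classical Rocchio relevance-feedback update on feature vectors."""
    dim = len(query)
    r = [sum(v[i] for v in relevant) / len(relevant) for i in range(dim)]
    n = [sum(v[i] for v in nonrelevant) / len(nonrelevant) for i in range(dim)]
    return [alpha * query[i] + beta * r[i] - gamma * n[i] for i in range(dim)]

# 2-D feature space (e.g. texture, color): the query drifts toward the
# relevant example and away from the non-relevant one.
q = rocchio([0.5, 0.5], [[1.0, 0.0]], [[0.0, 1.0]])
```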
This paper describes a failure alert system and a methodology for content reuse in a new instructional design system called InterMediActor (IMA). IMA provides an environment for instructional content design, production and reuse, and for student evaluation based on content specification through a hierarchical structure of competences. The student assessment process and the information extraction process for content reuse are explained.
Computer-based training and distance education are facing dramatic changes with the advent of standardization efforts, some of them concentrating on maximal reuse. This is of paramount importance for the sustainable, affordable production of educational materials. Reuse in itself should not be a goal, though, since many methodological aspects might be lost. In this paper we propose two content production approaches for the InterMediActor platform under a competence-based methodology: either a bottom-up approach where ...
Abstract. The fundamental assumption that training and operational data come from the same probability distribution, which is the basis of most learning algorithms, is often not satisfied in practice. Several algorithms have been proposed to cope with classification problems where the class priors may change after training, but they can show a poor performance when the class conditional data densities also change. In this paper, we propose a re-estimation algorithm that makes use of unlabeled operational data to adapt ...
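The abstract does not spell out the re-estimation algorithm, but the classical EM scheme for adapting class priors from unlabeled operational data (under the assumption that only the priors drift, not the class-conditional densities) gives the idea. The function below is a sketch of that standard scheme, not the paper's own algorithm.

```python
import numpy as np

def reestimate_priors(post_train, train_priors, n_iter=50):
    """EM re-estimation of class priors from unlabeled operational data.

    post_train: (N, K) posteriors produced on operational samples by a
    classifier trained under train_priors. Dividing by the training priors
    leaves something proportional to the likelihoods p(x|c), which are
    assumed unchanged; EM then alternates corrected posteriors (E-step)
    and prior updates (M-step)."""
    priors = np.asarray(train_priors, dtype=float)
    like = np.asarray(post_train, dtype=float) / priors  # ∝ p(x|c) per row
    for _ in range(n_iter):
        w = like * priors
        w /= w.sum(axis=1, keepdims=True)   # E-step: corrected posteriors
        priors = w.mean(axis=0)             # M-step: new prior estimate
    return priors
```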
Abstract In this paper we show that the low detection capabilities of conventional equalizers (linear and decision feedback equalizers) and the excessive complexity of those based on neural networks can be avoided by means of mixed schemes. Linear equalizers aided by centroid selection (as in Radial Basis Function networks) improve performance over the standard linear FIR equalization approach, and a modified DFE based on a bi-layer perceptron avoids error propagation, outperforming conventional schemes.
Abstract This paper explores the mechanisms to efficiently combine annotations of different quality for multiclass classification datasets, as we argue that it is easier to obtain large collections of weak labels as opposed to true labels. Since labels come from different sources, their annotations may have different degrees of reliability (e.g., noisy labels, supersets of labels, complementary labels or annotations performed by domain experts), and we must make sure that the addition of potentially inaccurate labels does not degrade the performance achieved when using only true labels. For this reason, we consider each group of annotations as being weakly supervised and pose the problem as finding the optimal combination of such collections. We propose an efficient algorithm based on expectation-maximization and show its performance in both synthetic and real-world classification tasks in a variety of weak label scenarios.
Abstract Most supervised learning algorithms are based on the assumption that the training data set reflects the underlying statistical model of the real data. However, this stationarity assumption is not always satisfied in practice: quite frequently, class prior probabilities are not in accordance with the class proportions in the training data set. The minimax approach is based on selecting the classifier that minimizes the error probability under worst-case conditions. We propose a two-step learning algorithm to train a neural network in order to ...
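The minimax criterion can be illustrated on a toy problem: for two unit-variance Gaussian classes, pick the decision threshold whose worst per-class error is smallest, so the classifier is robust to any shift in the priors. This is a hand-computable stand-in for the paper's neural-network training, with an illustrative grid search rather than the authors' two-step algorithm.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def minimax_threshold(mu0, mu1, grid):
    """Threshold minimizing the worst of the two per-class error rates
    for classes N(mu0, 1) and N(mu1, 1), mu0 < mu1."""
    best_t, best_worst = None, float("inf")
    for t in grid:
        err0 = 1.0 - phi(t - mu0)   # class-0 samples falling above t
        err1 = phi(t - mu1)         # class-1 samples falling below t
        worst = max(err0, err1)
        if worst < best_worst:
            best_t, best_worst = t, worst
    return best_t, best_worst

grid = [i / 100.0 for i in range(0, 201)]
t, w = minimax_threshold(0.0, 2.0, grid)
```

By symmetry the minimax threshold sits at the midpoint, where the two per-class errors are equal; any other threshold makes one class's error strictly worse.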
Abstract In this paper we propose a new method for training classifiers for multi-class problems when classes are not (necessarily) mutually exclusive and may be related by means of a probabilistic tree structure. Our method is based on the definition of a Bayesian model relating network parameters, feature vectors and categories. Learning is stated as a maximum likelihood estimation problem of the classifier parameters. The proposed algorithm is tested on an image retrieval scenario.
The minimization of the empirical risk based on an arbitrary Bregman divergence is known to provide posterior class probability estimates in classification problems, but the accuracy of the estimate for a given value of the true posterior depends on the specific choice of the divergence. Ad hoc Bregman divergences can be designed to get a higher estimation accuracy for the posterior probability values that are most critical for a particular cost-sensitive classification scenario. Moreover, some sequences of Bregman loss functions can be constructed in such a way that their minimization guarantees, asymptotically, a minimum number of errors in nonseparable cases and maximum-margin classifiers in separable problems. In this paper, we analyze general conditions on the Bregman generator that guarantee this property, and we generalize the result to cost-sensitive classification.
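The first claim can be checked numerically with the best-known Bregman instance, the log loss (cross-entropy): minimizing it over a constant prediction recovers the empirical posterior probability. This is a toy demonstration of the posterior-estimation property, not the paper's generator construction; the learning rate and iteration count are arbitrary choices.

```python
# Four labels with empirical P(y=1) = 0.75. Gradient descent on the mean
# cross-entropy -y*ln(q) - (1-y)*ln(1-q); its gradient per sample is
# (q - y) / (q * (1 - q)), which vanishes exactly at q = mean(y).
labels = [1, 1, 1, 0]
q = 0.5           # initial constant prediction
lr = 0.1
for _ in range(2000):
    grad = sum((q - y) / (q * (1 - q)) for y in labels) / len(labels)
    q -= lr * grad
```

The squared error, another Bregman divergence, has the same minimizer here; the two differ in how estimation accuracy is distributed across posterior values, which is exactly the degree of freedom the paper exploits.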
Abstract An optimal least-mean-square (LMS) algorithm is presented to be used with a lookup-table plus transversal filter adaptive echo canceller that has independent steps for the non-overlapped and overlapped parts of the transversal structure. Its advantage over previous schemes and algorithms used with this family of structures is presented. A suboptimal switched-step version is discussed. Some simulation examples illustrate the performance of these schemes.
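The plain LMS core underlying such cancellers is compact enough to sketch: a short transversal (FIR) filter adapts its taps to identify an unknown echo path by stochastic gradient descent on the cancellation error. The lookup-table structure and the independent step sizes of the paper are not reproduced; the echo path, step size, and signal model below are illustrative assumptions.

```python
import random

random.seed(0)
true_path = [0.5, -0.3, 0.1]          # unknown echo impulse response
w = [0.0, 0.0, 0.0]                   # adaptive transversal filter taps
mu = 0.05                             # LMS step size
buf = [0.0, 0.0, 0.0]                 # input delay line

for _ in range(5000):
    x = random.choice([-1.0, 1.0])    # far-end binary symbols
    buf = [x] + buf[:-1]              # shift the delay line
    d = sum(h * u for h, u in zip(true_path, buf))  # echo (desired signal)
    y = sum(h * u for h, u in zip(w, buf))          # canceller output
    e = d - y                                       # residual echo
    w = [h + mu * e * u for h, u in zip(w, buf)]    # LMS tap update
```

With persistently exciting input and no noise, the taps converge to the true echo path, driving the residual to zero.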
Abstract The authors demonstrate the advantage of using a Pao network in symbol-by-symbol data equalization. It is a simple and fast scheme that can be used even for nonlinear channels. One open problem is the selection of the order of the inputs. A cascade-correlation scheme offers the possibility of accomplishing this task in a particularly efficient manner. It remains to extend the results obtained to practical modulation constellations and transmission channel models, especially nonlinear ones. ...
Abstract For a transmission system power operator such as Red Eléctrica de España (REE), operating a high-power grid in real-time conditions, the reliability of its communication circuits is a main concern. REE has developed high-availability communication networks with very low failure rates, the main design criterion being physical diversity, i.e., the traffic between a given pair of nodes is routed over several physically disjoint paths. In this setting, we propose an availability-based cost design method in order to design very ...
The application of the Bayesian formulation to the joint data and channel estimation in digital communication is not feasible in practice because the computational complexity and memory requirements of the estimation process grow exponentially with time. However, the evolution with time of the channel conditional density model suggests the application of pruning, selection, crossover and other concepts from evolutionary computation and neural networks, which drastically reduce the complexity of the Bayesian equalizer without ...
Abstract The problem of identifying terrains in Landsat-TM images on the basis of non-uniformly distributed labeled data is discussed in this paper. Our approach is based on the use of neural network classifiers that learn to predict posterior class probabilities. Principal Component Analysis (PCA) is used to extract features from spectral and contextual information. The proposed scheme obtains lower error rates than other model-based approaches.
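The PCA feature-extraction step can be sketched in a few lines via the SVD of the centered data matrix; the toy data below stands in for the spectral/contextual band values of the paper, which are not available here.

```python
import numpy as np

def pca_features(X, k):
    """Project the rows of X onto the top-k principal components.
    Centering plus SVD: the right singular vectors of the centered
    matrix are the principal directions."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Two perfectly correlated "bands": a single principal component
# captures all the variance of the 2-D data.
X = np.array([[2.0, 0.1], [4.0, 0.2], [6.0, 0.3], [8.0, 0.4]])
Z = pca_features(X, 1)
```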
Recently, several authors have explored the application of Neural Networks to compensate the channel effects in digital communication systems, with the goal of reducing the limitations of the conventional schemes: the suboptimal performance of the Linear Equalizer (LE) and the Decision Feedback Equalizer (DFE), or the complexity and the model dependence of Viterbi-based detectors.
Abstract The decentralized detection of events is a primary task in many applications of wireless sensor networks. Since energy consumption is the main constraint in networks of battery-powered sensors, as it limits their lifetime, taking explicitly into account the energy costs in the design of any decentralized detection algorithm becomes a major issue. Based on state-of-the-art censoring techniques and a selective communications framework we develop an energy-aware decentralized detection scheme that, in a greedy fashion, ...
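The censoring idea referenced above has a very simple core: a node computes a local statistic (e.g. a log-likelihood ratio) and transmits only when it falls outside a "no-send" interval, so the energy of uninformative messages is saved. The interval bounds below are illustrative assumptions, not values from the paper.

```python
def censor(llr, lower=-1.0, upper=1.0):
    """Return the message to transmit, or None to stay silent (censor).
    Values inside [lower, upper] carry little evidence either way and
    are not worth their transmission energy."""
    if lower <= llr <= upper:
        return None          # uninformative: censored, no energy spent
    return llr               # informative: transmit to the fusion center

sent = [censor(v) for v in [0.2, -3.1, 0.9, 2.4, -0.5]]
```

Only the two strong observations are transmitted; the fusion center treats silence as "LLR was in the no-send interval", which itself carries some information.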
Digital coding of information tries to compress information to improve the use rates of existing transmission and storage devices without degrading the quality after the decoding process. Vector Quantization (VQ) has been broadly used in such systems —speech and image coding for example— to increase the compression under some distortion constraints. Although some algorithms have been proposed for VQ design,
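The classical baseline among those VQ design algorithms is the Lloyd (k-means/LBG-style) iteration, sketched here in one dimension: alternate nearest-codeword assignment and centroid update until the codebook stops moving. The data and initial codebook are illustrative.

```python
def train_vq(samples, codebook, n_iter=20):
    """One-dimensional Lloyd iteration for vector quantizer design:
    assign each sample to its nearest codeword, then move each
    codeword to the centroid of its cell."""
    cb = list(codebook)
    for _ in range(n_iter):
        cells = [[] for _ in cb]
        for s in samples:
            j = min(range(len(cb)), key=lambda i: (s - cb[i]) ** 2)
            cells[j].append(s)
        # Keep a codeword in place if its cell is empty.
        cb = [sum(c) / len(c) if c else cb[i] for i, c in enumerate(cells)]
    return cb

# Two natural clusters around 0.1 and 1.0.
data = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
cb = train_vq(data, [0.0, 1.0])
```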
Abstract In this paper, starting from showing that a recurrent version of a radial basis function (RBF) network can compute optimal symbol-by-symbol decisions for equalizing digital channels in digital communication systems, we present structures for non-Gaussian channel equalization and delayed decisions. To reduce the complexity of the structure, which grows exponentially with the memory of the channel (like that of Viterbi detectors), we propose some simplification options, preserving parallelism and near-optimal performance. ...
Abstract Selective communication (censoring) strategies allow nodes in a sensor network to discard low-importance messages in order to save energy that can be used for transmitting more important messages later. In this paper we apply simple selective policies based on Markov Decision Processes to a distributed target tracking scenario based on particle filters and the Information-Driven Sensor Querying (IDSQ) scheme. The resulting algorithm is combined with other energy-efficient schemes, such as data aggregation or data fusion, extending the action space of these techniques. Our simulation work shows that the network lifetime can be substantially increased while keeping a low tracking error. Moreover, by selecting the sampling rate properly, the lifetime is prolonged without increasing the tracking error.
Abstract In this paper we propose a new method for training classifiers for multi-class problems when classes are not (necessarily) mutually exclusive and may be related by means of a probabilistic tree structure. It is based on the definition of a Bayesian model relating network parameters, feature vectors and categories. Learning is stated as a maximum likelihood estimation problem of the classifier parameters. The proposed algorithm is especially suited to situations where each training sample is labeled with ...
Several rate control (RC) schemes include the basic unit (BU) layer, where the quantization parameter (QP) value can be modified within a picture to make a fine adjustment to the target bits. The BU is a group of macroblocks (MBs) that share the same QP value, and its size is set prior to the encoding process. This paper describes an RC algorithm capable of detecting the instants in the sequence encoding process where a small BU size works efficiently, using a larger one in the remaining cases to enhance quality. ...
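The QP-per-BU adjustment can be illustrated with a toy proportional controller: raise the QP when a BU overshoots its bit budget, lower it on undershoot. The gain, clip range, and rounding are illustrative assumptions; they are not the paper's rate-distortion model.

```python
def update_qp(qp, bits_used, bits_target, gain=6.0, qp_min=0, qp_max=51):
    """Proportional QP update for the next basic unit: the relative bit
    error scales the QP step, clipped to the valid H.264-style range."""
    err = (bits_used - bits_target) / bits_target   # relative overshoot
    qp = qp + gain * err                            # coarser QP if over budget
    return max(qp_min, min(qp_max, round(qp)))

qp = update_qp(30, 1200, 1000)   # 20% overshoot: QP rises
```

A 20% overshoot nudges the QP up by one step; an equal undershoot nudges it down, steering each BU toward its share of the picture's bit budget.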