Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
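The Thurstone Case V analysis described above can be sketched as follows. This is a generic implementation of the standard technique (choice proportions converted to unit-normal deviates and averaged), not the authors' code; the function name and the clipping threshold are assumptions.

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval quality scale from a paired-comparison win matrix.

    wins[i, j] counts how often stimulus i was preferred over stimulus j.
    Returns one zero-mean scale value per stimulus (Thurstone Case V).
    """
    wins = np.asarray(wins, dtype=float)
    totals = wins + wins.T                      # comparisons per pair
    p = wins / np.where(totals > 0, totals, 1)  # choice proportions
    np.fill_diagonal(p, 0.5)                    # self-comparisons are ties
    p = np.clip(p, 0.01, 0.99)                  # keep z-scores finite
    z = np.vectorize(NormalDist().inv_cdf)(p)   # unit-normal deviates
    return z.mean(axis=1)                       # row means form the scale
```

Applied to a win matrix from the paired-comparison experiment, larger scale values indicate higher judged quality, on an interval (not ratio) scale.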
This research addresses post-high-definition images, especially with regard to the current generalization of their distribution, consumption, and viewing possibilities. Its objective is to understand the transformation of the image into a structure of contemporary scientific visualization; accordingly, the entire argument is oriented toward expanding this reflection, examining in depth the evolutionary process produced by ultra-high-definition images, notably in the entertainment sector and in science. During the study, an investigation was proposed to identify and evaluate the extent to which post-high-definition images can be, and are, used in science today. All effort concentrated on demonstrating their importance as a driver of the discovery process and, above all, their contribution to unveiling the planet Mars.
Literature has long been used as a source for reading materials in English as a first language (L1). In recent years, there has been a growing interest in utilizing literature in second language (L2) classrooms. The present article assumes that using literature in L2 reading can have the same effect as in L1. Integrating literature into L2 learning can create a learning environment that will provide comprehensible input and a low affective filter. Literary texts may be used in both extensive and intensive reading. Use of different literary genres is discussed with a special focus on the benefits of using stories.
This review essay of two edited volumes sketches how STS scholars have analyzed scientific representation and visualization in recent work. Several key foci have emerged, among them attending closely to materiality, engaging the digital through embodied action, turning to ontology, as well as benefitting from artistic practice and critique. In diverse ways these choices are informed by a discontentment with the Cartesian split of mind and body as well as the picture theory of language. Yet, naturalism endures as a template, an expectation, and sometimes a specter with and against which much representational work in science is done. What STS scholars have learnt about representation in laboratory and expert settings still awaits being employed more comprehensively for making sense of practices beyond the lab, especially in contested political, social and ecological environments. In setting out to do so, they ought to reflect on the kinds of logic that their practices of representing representation enact.
In this paper, we thoroughly study a trilinear interpolation scheme previously proposed for the Body-Centered Cubic (BCC) lattice. We think that, up to now, this technique has not received the attention that it deserves. By a frequency-domain analysis we show that it can isotropically suppress those aliasing spectra that contribute most to the postaliasing effect. Furthermore, we present an efficient GPU implementation, which requires only six trilinear texture fetches per sample. Overall, we demonstrate that the trilinear interpolation on the BCC lattice is competitive to the linear box-spline interpolation in terms of both efficiency and image quality. As a generalization to higher-order reconstruction, we introduce DC-splines, which are constructed by convolving a Discrete filter with a Continuous filter, and are easy to adapt to the Face-Centered Cubic (FCC) lattice as well.
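As a point of reference for the interpolation discussed above, a minimal Cartesian-grid trilinear interpolation can be sketched as below. The paper's BCC scheme, which combines six such trilinear fetches on the GPU, is not reproduced here; the function name is an assumption.

```python
import numpy as np

def trilinear(volume, p):
    """Trilinear interpolation of a scalar volume at continuous point p = (x, y, z).

    Cartesian-grid reference only: the paper's BCC-lattice scheme evaluates
    the reconstruction via six trilinear fetches, which this sketch omits.
    Valid for points strictly inside the grid (not on the far boundary).
    """
    x, y, z = p
    i, j, k = int(x), int(y), int(z)      # lower corner of the enclosing cell
    fx, fy, fz = x - i, y - j, z - k      # fractional offsets within the cell
    c = volume[i:i + 2, j:j + 2, k:k + 2]  # the 8 surrounding samples
    c = c[0] * (1 - fx) + c[1] * fx       # lerp along x -> 2x2 samples
    c = c[0] * (1 - fy) + c[1] * fy       # lerp along y -> 2 samples
    return c[0] * (1 - fz) + c[1] * fz    # lerp along z -> scalar
```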
To increase communication and collaboration opportunities, members of a community must be aware of the social networks that exist within that community. This paper describes a social network monitoring system – the KIWI system – that enables users to register their interactions and visualize their social networks. The system was implemented in a distributed research community and the results have shown that KIWI facilitates collecting information about social interactions. Furthermore, the visualization of the social networks, given as feedback, appeared to have a positive impact on the group, augmenting their social network awareness.
In this paper, we introduce a new approach to learning dissimilarity for interactive search in content-based image retrieval. In the literature, dissimilarity is often learned via the feature space by feature selection, feature weighting or a parameterized function of the ...
Computer visualizations are all around us. In this paper we describe a design process in which we explore the development of a new visualization to aid managerial decision making. The ultimate goal of our design effort is to develop a visualization that allows for presenting most of the critical financial ratios used to describe a firm's activity on a single computer display and dynamically. In doing so, we hope to enable managers to develop holistic and intuitive appreciations of such matters as how a business changes ...
The creation of 3D models is generally considered by newcomers to be a difficult activity requiring a number of skills and considerable practice. This paper describes work in the INHERIT project which aims to address these issues by providing a 3D modelling tool set which is easy to use, requiring few skills and little practice. This is achieved by the development of software tools which are customised to build particular types of model. The key aspect of these tools is the treatment of the underlying data of the 3D model as a tree structure of nodes which consist of parameterised representations of the components of the object being modelled. The tools then automatically generate the graphics primitives that enable visualisation of, and interaction with, the object. This paper describes the implementation of the first tool created following this principle, which enables school children to model church structures.
This paper introduces an effective approach for detecting abandoned luggage in surveillance videos. We combine short- and long-term background models to extract foreground objects, where each pixel in an input image is labeled with a 2-bit code. We then introduce a framework to identify static foregrounds based on the temporal transition of code patterns, and determine whether the candidate regions contain abandoned objects by analyzing the back-traced trajectories of the luggage owners. The experimental results obtained on video images from the 2006 Performance Evaluation of Tracking and Surveillance (PETS) and 2007 Advanced Video and Signal-based Surveillance (AVSS) databases demonstrate that the proposed approach is effective for detecting abandoned luggage, and that it outperforms previous methods.
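The 2-bit per-pixel labeling can be sketched as follows, assuming a simple absolute-difference background subtraction against each model. The threshold, function names, and bit assignment (bit 1 for the long-term model, bit 0 for the short-term model) are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def two_bit_codes(frame, bg_short, bg_long, thresh=25):
    """Label each pixel with a 2-bit code from two background models.

    Bit 1: foreground w.r.t. the long-term background model.
    Bit 0: foreground w.r.t. the short-term background model.
    """
    fg_short = np.abs(frame.astype(int) - bg_short.astype(int)) > thresh
    fg_long = np.abs(frame.astype(int) - bg_long.astype(int)) > thresh
    return (fg_long.astype(np.uint8) << 1) | fg_short.astype(np.uint8)

def static_foreground(codes):
    """Candidate static objects: still foreground in the long-term model,
    but already absorbed into the faster-adapting short-term model."""
    return codes == 0b10
```

Regions flagged by `static_foreground` would then feed the temporal code-transition analysis and owner back-tracking described in the abstract.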
Semantic data browsing is an important task for open and governmental data in the interest of public oversight. There are many projects and solutions for semantic data browsing and navigation, but despite this, the availability of such data in Slovakia is poor. This is unfortunate, because projects such as the National Action Plan of Open Government and the site data.gov.sk have been operating for several years. In this work we point out key aspects of semantic data and describe the Slovak market of semantic data in detail. We design and propose our solution for semantic data browsing, and evaluate its implementation in our AGECRT NET tool.
Visualisations can contribute greatly to the importance and authority of new ideas, concepts, and knowledge claims. Among the many visualisations, few become well-known and influential in environmental governance. Whilst these have been objects of specific research, this study asks what constitutes and underpins their influence. To this end, the paper codifies influential visualisations and defines criteria for studying their visual characteristics. The criteria are applied to two case studies, the "traffic light" and the "planetary boundaries" diagrams. To increase the validity of the findings, the study also introduces two "failure cases" as a plausibility check.
❖ In this work we give an overview of some of the most common free visualization techniques for extracting information from large data volumes. Our goal is to unveil their potential so that non-experts can get an idea of how these tools could empower their work.
❖ More specifically, we focus on Google Analytics, Pentaho, and R.
❖ Our presentation is intended for a broad audience with no specific technological knowledge.

Pentaho
❖ Pentaho is an open source business intelligence suite focused on data integration, analytical processing, and reporting.
❖ Pentaho has a free and open source Community edition under the GPLv2 license. It also has a paid version whose price varies according to company demands.
❖ The server requirements to run Pentaho are at least 8 GB of memory, 20 GB of free disk space after installation, and a 64-bit dual-core processor.
❖ The first image shows gathered sales information from major phone retailers. http://www.webdetails.pt/pentaho/api/repos/:publ...
Synthesis of asynchronous circuits from Signal Transition Graphs (STGs) involves resolving state coding conflicts. The refinement process is generally done automatically using heuristics and often produces sub-optimal solutions, which have to be corrected manually. This paper presents a framework for an interactive refinement process aimed at helping the designer. It is based on the visualization of conflict cores, i.e., sets of transitions causing coding conflicts, which are represented at the level of finite and complete prefixes of STG unfoldings.
This paper looks at the conditions of the emergence of "race" as a new scientific category during the eighteenth century, arguing that two modes of discourse and visualization played a significant role: that on society, civility, and civilization -- as found principally in the travel literature -- and that on nature, as found in natural history writings, especially in botanical classifications. The European colonizing enterprise had resulted in an extensive flow of new objects at every level. Visual representations of these new objects circulated in the European cultural world and were transferred and transformed within travelogue and natural history writings. The nature, boundaries, and potentialities of humankind were discussed in this exchange within the conceptual grid of classifications and their visual representations. Over the course of the century the discourse on society, civility, and civilization collapsed into the discourse on nature. Humans became classified a...
Advancement in information technology has made a major impact on medical science, where researchers come up with new ideas for improving the classification rate of various diseases. Breast cancer is one such disease, killing a large number of people around the world. Diagnosing the disease at its earliest stage has a huge impact on its treatment. The authors propose a Binary Bat Algorithm (BBA) based Feedforward Neural Network (FNN) hybrid model, where the advantages of BBA and the efficiency of FNN are exploited for the classification of three benchmark breast cancer datasets into malignant and benign cases. Here BBA is used to generate a V-shaped hyperbolic tangent function for training the network, and a fitness function is used for error minimization. FNNBBA based classification produces 92.61% accuracy for training data and 89.95% for testing data.
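A common form of the V-shaped hyperbolic tangent transfer function from the binary bat literature can be sketched as below. Treating |tanh(v)| as the bit-flip probability is an assumption about the paper's exact formulation, and the function names are illustrative.

```python
import math
import random

def v_shaped(velocity):
    """V-shaped transfer: |tanh(v)| in [0, 1); larger |velocity| -> likely flip."""
    return abs(math.tanh(velocity))

def update_bit(bit, velocity, rng=random.random):
    """Flip the binary position with probability given by the transfer value.

    Unlike S-shaped (sigmoid) transfers, which set the bit directly, the
    V-shaped transfer toggles the current bit, so a near-zero velocity
    tends to preserve the current solution.
    """
    return 1 - bit if rng() < v_shaped(velocity) else bit
```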