... Myers, Christopher Navarro, Muhammed Şahin, Billie Spencer & Nathan Tolbert, pages 100-108. ... For buildings, MAEviz offers two decision support models. The first model is equivalent cost analysis (also called a cost-benefit analysis) [Park, 2004]. ...
Statistics and machine learning are data-oriented tasks in which domain models are induced from data. The bulk of research in these fields concentrates on inducing models from data archived in computer databases; however, for many problem domains, human expertise forms an essential part of the corpus of knowledge needed to construct models of the domain. The discipline of knowledge engineering has focused on representing the knowledge of experts in a form that can be encoded into computational models of a ...
The Semantic Document Library (SDL) was driven by use cases from the environmental observatory communities and is designed to provide conventional document repository features of uploading, downloading, editing, and versioning of documents, as well as value-adding features of tagging, querying, sharing, annotating, ranking, provenance, social networking, and geospatial mapping services. It allows users to organize a catalogue of watershed observation data, model output, workflows, and publications and documents related to the same watershed study through its tagging capability. Users can tag all relevant materials with the same watershed name and later find all of them easily using that tag. The underpinning semantic content repository can store materials from other cyberenvironments, such as workflow or simulation tools, and SDL provides an effective interface to query and organize materials from these various sources. Advanced features of the SDL let users visualize the provenance of materials, such as the source data and how output data was derived. Other novel features include visualizing all geo-referenced materials on a geospatial map. SDL, as a component of a cyberenvironment portal (the NCSA Cybercollaboratory), has the goal of efficiently managing information and the relationships between published artifacts (validated models, vetted data, workflows, annotations, best practices, reviews, and papers) produced from raw research artifacts (data, notes, plans, etc.) through agents (people, sensors, etc.). The tremendous scientific potential of these artifacts is realized through mechanisms of sharing, reuse, and collaboration - empowering scientists to spread their knowledge and protocols and to benefit from the knowledge of others.
SDL implements Web 2.0 technologies and design patterns along with a semantic content management approach that enables the use of multiple ontologies and the dynamic evolution (e.g. folksonomies) of terminology. Scientific documents involving many interconnected entities (artifacts or agents) are represented as RDF triples, using the semantic content repository middleware Tupelo, in one or more data/metadata RDF stores. Queries over the RDF enable discovery of relations among data, processes, and people, surfacing valuable insights and making recommendations to users, such as which tools are typically used to answer certain kinds of questions or with certain types of dataset. This concept brings together coherent information about entities from four perspectives: the social context (Who - human relations and interactions), the causal context (Why - provenance and history), the geospatial context (Where - location or spatially referenced information), and the conceptual context (What - domain-specific relations, ontologies, etc.).
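The triple-and-query model described above can be sketched in a few lines. The sketch below is a minimal illustration in plain Python, standing in for the Tupelo middleware and an RDF store; every identifier in it (the sdl: predicates, the artifact names, the watershed tag "Clear Creek") is hypothetical and chosen only to mirror the Who/Why/What contexts the abstract describes.

```python
# Minimal stand-in for an RDF store: a set of (subject, predicate, object)
# triples. All names below are hypothetical examples, not SDL's actual schema.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# What (conceptual context): tag the report and the dataset with the same
# watershed name, so both can be retrieved together later.
add("doc:flow-report-2007", "sdl:taggedWith", "Clear Creek")
add("data:gauge-2007", "sdl:taggedWith", "Clear Creek")
# Why (causal context): provenance linking the output back to its source.
add("doc:flow-report-2007", "sdl:derivedFrom", "data:gauge-2007")
# Who (social context): authorship.
add("doc:flow-report-2007", "sdl:createdBy", "person:jmyers")

def query(predicate=None, obj=None):
    """Return all subjects whose triples match the given pattern."""
    return sorted(s for (s, p, o) in triples
                  if (predicate is None or p == predicate)
                  and (obj is None or o == obj))

# Find every artifact tagged with the watershed name.
print(query(predicate="sdl:taggedWith", obj="Clear Creek"))
# → ['data:gauge-2007', 'doc:flow-report-2007']

# Follow provenance: what was the report derived from?
print([o for (s, p, o) in triples
       if s == "doc:flow-report-2007" and p == "sdl:derivedFrom"])
# → ['data:gauge-2007']
```

In a real deployment the same pattern-matching would be expressed as SPARQL queries against the RDF store, but the principle - relations discovered by matching triple patterns rather than by fixed database schema - is the same.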
Towards A Spatiotemporal Event-Oriented Ontology. Yong Liu, Robert E. McGrath, Shaowen Wang, Mary Pietrowicz, Joe Futrelle, James D. Myers. ...
Cyberinfrastructure (CI) is a broad term for the combination of high-performance computers, large-scale data storage, high-speed networks, high-throughput instruments and sensor networks, and associated software that, as a ubiquitous, persistent infrastructure, is having, and will continue to have, a direct impact on scientific and engineering productivity. The potential for CI to allow individuals to perform more detailed analyses and/or study more complex systems is becoming well understood. Less well understood is the potential of CI to enable dynamic group- and community-scale coordination of resources to more effectively address multidisciplinary scientific questions and to strengthen the connections between basic research and applications. We have coined the term "Cyberenvironments" to represent the software infrastructure and interfaces needed to realize this vision of CI as a systemic catalyst for transformative change in research practice and end-to-end productivity...
Proceedings of the 6th international workshop on Adaptive and reflective middleware held at the ACM/IFIP/USENIX International Middleware Conference - ARM '07, 2007
Researchers need to be able to discover data in order to use it, and hence discussions of scalable data systems often focus on universal data catalogs as an early, and sometimes preeminent, goal. However, such a characterization of the requirements for data services can be problematic in that it ignores many other high-value services that researchers need. Further, by placing too much emphasis on the requirements for discovery over those for scalable data publication and use, designers and developers risk significantly underestimating the costs and challenges involved in providing useful, scalable, and sustainable services. Most obviously, if discovery is an early emphasis, the system's value depends on the amount of data and metadata submitted, as well as on the system's usability relative to mature disciplinary repositories. This results in critical-mass barriers as well as a catch-22 situation in which the costs of generating metadata must be borne long before any value is derived. Overemp...
J. Phys. Chem. 1994, 98, 12535-12544. Investigation of Acetyl Chloride Photodissociation by Photofragment Imaging. Subhash Deshmukh, James D. Myers, Sotiris S. Xantheas, and Wayne P. Hess. ...
Collaboration tools and problem-solving environments (PSEs) have historically evolved independently, despite the fact that most of their developers would agree that transitions between team and individual work are facile and frequent in the real world. The explanation of this paradox has roots both in technology limitations and in our incomplete understanding of how team and individual work are related and how to encode those relations in software. Progress is being made in both of these areas, and, as a result, there is growing interest in discussion between the collaborative systems and PSE communities, and a growing number of systems provide both collaboration and PSE functionality. We envision this Collaborative Problem-Solving Environments (CPSEs) minitrack as a forum for bringing these communities together, and we hope that the selected papers will provide the basis for a lively discussion of the issues involved in designing, developing, and deploying CPSEs in science and...
Papers by James Myers