Journal of Management Information Systems, Sep 1, 1987
Abstract: Information is valuable if it derives from reliable data. However, measurements for data reliability have not been widely established in the area of information systems (IS). This paper attempts to draw some concepts of reliability from the field of quality control and to apply them to IS. The paper develops three measurements for data reliability: internal reliability, which reflects the "commonly accepted" characteristics of various data items; relative reliability, which indicates the compliance of data with user requirements; and absolute reliability, which determines the degree to which data items resemble reality. The relationships between the three measurements are discussed, and the results of a field study are displayed and analyzed. The results provide some insightful information on the "shape" of the database that was inspected, as well as on the degree of rationality of some user requirements. General conclusions and avenues for future research are suggested.
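The three measurements can be read as pass rates over a record set. The following is a minimal illustrative sketch, not the paper's formal definitions: the field, the acceptance rules, and the sample values are all invented for the example.

```python
def reliability(records, internal_rule, user_rule, ground_truth):
    """Each measure is the fraction of records passing its check."""
    n = len(records)
    internal = sum(internal_rule(r) for r in records) / n          # "commonly accepted" check
    relative = sum(user_rule(r) for r in records) / n              # user-requirement check
    absolute = sum(r == t for r, t in zip(records, ground_truth)) / n  # match against reality
    return internal, relative, absolute

# Hypothetical example: recorded ages versus their true values.
ages  = [25, 41, 200, 37]
truth = [25, 40, 30, 37]
scores = reliability(
    ages,
    internal_rule=lambda a: 0 <= a <= 120,   # commonly accepted range (assumed)
    user_rule=lambda a: 18 <= a <= 65,       # one user's requirement (assumed)
    ground_truth=truth,
)
```

Here the same record set scores differently on each measure, which is the paper's point: a value can be internally plausible yet wrong, and a user requirement can reject values that are in fact correct.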
Journal of the Operational Research Society, Aug 1, 2001
Abstract: This paper discusses analysing the performance of an application executed on a distributed system. An analogy is drawn between a distributed system and a production process, particularly for an application running on several computers. Consequently, theories for managing production processes, specifically the Theory of Constraints (TOC), are employed to help analyse and manage distributed systems. Using TOC combined with the cost/utilization model, which was initially developed to evaluate the utilization of a single processor and is extended here to handle a distributed system, it is demonstrated how the performance of a distributed system can be examined. The methodology presented here is based on a simple graphic display intended to allow managers of information systems to locate constrained resources, to optimize the distribution of the computer application, and to examine and pinpoint improper imbalances and fluctuations in the system workload. The model develops into a management decision support tool that may be applied in areas such as buffer policy, assessment of protective capacity, investment in computer resources, and identification of areas for improvement.
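The TOC idea of locating the constrained resource can be sketched very simply: the most utilized node is the constraint candidate, and relating each node's cost to its utilization flags costly under-used capacity. This is an illustrative sketch only; the node names, numbers, and the simple ratio are assumptions, not the paper's model.

```python
def cost_utilization(resources):
    """Return the most utilized resource (the TOC-style constraint
    candidate) and each resource's cost per unit of utilization."""
    constraint = max(resources, key=lambda r: resources[r]["util"])
    ratios = {r: v["cost"] / v["util"] for r, v in resources.items()}
    return constraint, ratios

# Invented workload figures for three computers running one application.
nodes = {
    "db-server":  {"util": 0.92, "cost": 40.0},
    "app-node-1": {"util": 0.55, "cost": 22.0},
    "app-node-2": {"util": 0.61, "cost": 18.0},
}
bottleneck, cost_per_util = cost_utilization(nodes)
```

In a TOC reading, improving throughput means elevating the constraint (here the heavily loaded database server) rather than adding capacity to the already under-utilized application nodes.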
The last three chapters dealt with the daily management of network resources, particularly the service units. Daily management involves operational policies such as dispatching, repositioning, and routing.
We began this book by presenting an overall, top-down view of policy making in service networks. This led us to an analysis of network performance under steady-state conditions, as portrayed by the hypercube model [4] presented in Chapter 1. In subsequent chapters, we decomposed the overall view into a sequence of decisions and models, each relating to a different aspect of service network management. In this chapter, the last in the book, we return to a more comprehensive analysis.
Journal of Management Information Systems, Sep 1, 1998
Abstract: The Israeli Air Force (IAF) has developed a simulation system to train its top commanders in how to use defensive resources in the face of an aerial attack by enemy combat aircraft. During the simulation session, the commander in charge allocates airborne and standby resources and dispatches or diverts aircraft to intercept intruders. Seventy-four simulation sessions were conducted to examine the effects of time pressure and completeness of information on the performance of twenty-nine top IAF commanders. The variables examined were: (1) display of complete versus incomplete information, (2) time-constrained decision making versus unlimited decision time, and (3) the difference in performance between top strategic commanders and mid-level field commanders. Our results show that complete information usually improved performance. However, field commanders (as opposed to top strategic commanders) did not improve their performance when presented with complete information under time pressure. Time pressure usually, but not always, impaired performance. Top commanders tended to make fewer changes to previous decisions than did field commanders.
Digital Presentation and Preservation of Cultural and Scientific Heritage. Conference Proceedings. Vol. 9, Sofia, Bulgaria: Institute of Mathematics and Informatics – BAS, 2019. ISSN: 1314-4006, eISSN: 2535-0366, 2019
The new academic discipline of Data Sciences (DS) has developed in recent years mainly because of the need to make decisions based on huge amounts of data, i.e., Big Data. In parallel, there has been enormous progress in technologies that make it possible to identify patterns, to filter big data, and to attach relevant meanings to information, thanks to machine learning and sophisticated inference techniques. The profession of Data Scientist (or Data Analyst) has come into high demand in recent years. It is required in the business sector, where data is the "oxygen" for business survival; it is needed in the governmental sector in order to improve services to citizens; and it is imperative in the scientific world, where large data repositories collected in varied disciplines have to be integrated, mined, and analyzed in order to enable interdisciplinary research. The purpose of this paper is to demonstrate how the scientific discipline of Data Sciences fits into academic programs intended to prepare data analysts for the business, public, government, and academic sectors. The article first delineates the Data Cycle, which portrays the transformation of data and their derivatives along the route from generation to decision making. The cycle includes the following stages: problem definition; identifying pertinent data sources; data collection and storage (including cleansing and backup); data integration; data mining; processing and analysis; visualization; learning and decision-making; and feedback for future cycles. Within this cycle there may be sub-cycles, in which a number of stages are repeated and reiterated. It should be noted that the data cycle is generic: it might have slight variations under various circumstances, but there is not much difference between the scientific cycle and the other cycles.
Each stage within the cycle requires different tools, namely the hardware and software technologies that support it. This article classifies these tools. The final part of the article suggests a typology for academic DS programs, outlines an academic program that can be offered to those wishing to practice the Data Analyst profession, and sketches an introductory course that should be mandatory for all students campus-wide.
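The Data Cycle described above can be sketched as an ordered pipeline of stages, each transforming the data it receives. The stage names follow the abstract; the pipeline function and the demo stage are placeholders invented for illustration.

```python
# Stage names as listed in the abstract's Data Cycle.
STAGES = [
    "problem definition",
    "identifying pertinent data sources",
    "data collection and storage",
    "data integration",
    "data mining",
    "processing and analysis",
    "visualization",
    "learning and decision-making",
    "feedback for future cycles",
]

def run_cycle(data, stage_fns):
    """Apply each stage's function in order; unimplemented stages
    pass the data through unchanged."""
    for name in STAGES:
        data = stage_fns.get(name, lambda d: d)(data)
    return data

# Trivial demo: every stage is a pass-through except "data mining",
# which here just tags the payload.
fns = {"data mining": lambda d: d + ["pattern found"]}
result = run_cycle(["raw records"], fns)
```

Sub-cycles, in this framing, amount to re-running a slice of the stage list before continuing, with the feedback stage deciding whether another full pass is needed.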
Papers by Niv Ahituv