NCM 110a Elearning Notes Prelims
A. Informatics Introduction
"In attempting to arrive at the truth, I have applied everywhere for information, but in scarcely an instance have I been
able to obtain hospital records fit for any purposes of comparison. If they could be obtained, they would enable us to
decide many other questions besides the ones alluded to. They would show subscribers how their money was being spent,
what amount of good was really being done with it, or whether the money was not doing mischief rather than good."
(Nightingale, 1859)
What is informatics? Isn't it just about computers? Taking care of patients is nursing's primary concern, not thinking about
computers! These are not unusual thoughts for nurses to have. Transitions are always difficult, and a transition to using
more technology in managing information is no exception. This use of information technology in healthcare is known as
informatics, and its focus is information management, not computers. Whether nursing uses informatics effectively or not
will determine the quality of future patient care as well as the future of nursing.
Information is an integral part of nursing. When you are caring for patients, what besides the knowledge that nursing
education and experience has provided do you depend on to provide care? You need to know the patient's history, medical
conditions, medications, laboratory results, and more. Could you walk into a unit and care for a patient without this
information? How this information is organized and presented to you affects the care that you can provide as well as the
time you spend finding it.
The old way is to record and keep the information for a patient's current admission in a paper chart. Today, with several
specialties, consults, medications, laboratory reports, and procedures, the paper chart is inadequate. A well-designed
information system, developed with you and for you, can facilitate finding and using information that you need for patient
care. Informatics skills enable you to participate in and benefit from this process. Informatics does not perform miracles; it
requires an investment by you, the clinician, to assist those who design information systems so the systems are helpful and
do not impede your workflow.
If healthcare is to improve, it is imperative that there be a workforce that can innovate and implement information
technology (AHIMA & AMIA, 2006). There are two roles in informatics: the informatics specialist and the clinician who
must use health information technology. This means that in essence every nurse has a role in informatics. Information, the
subject of informatics, is the structure on which healthcare is built. Except for purely technical procedures (of which there
are few if any), a healthcare professional's work revolves around information. Is the laboratory report available? When is
Mrs. X scheduled for surgery? What are the contraindications for the prescribed drug? What is Mr. Y's history? What
orders did the physician leave for Ms. Z? Where is the latest x-ray report?
An important part of healthcare information is nursing documentation. When information systems are designed for
nursing, this documentation can also be used to expand our knowledge of what constitutes quality healthcare. Have you
ever wondered if the patient for whom you provided care had an outcome similar to others with the same condition? From
nursing documentation, are you easily able to see the relationship between nursing diagnoses, interventions, and outcomes
for your patients? Without knowledge of this chain of events, you have only your intuition and old knowledge to use when
making decisions about the best interventions in patient care. Because observations tend to be self-selective, this is often
not the best information on which to base patient care. Informatics can furnish the information needed to see these
relationships and to provide care based on actual patient data.
If Florence Nightingale were with us today, she would be a champion of the push toward more use of healthcare
information technology. Information in a paper chart essentially disappears into a black hole after a patient is discharged.
Because we can't easily access it, we can't learn from it and use it in patient care.
This realization is international. Many countries, especially those with a national health service, have long realized the
need to be able to use information buried in charts. In the United States, the strategic plan for wider implementation of
Health Information Technology formulated four goals, all of which will affect nurses and nursing. They are as follows:
1. Inform clinical practice with the use of EHRs (Electronic Health Records).
2. Interconnect clinicians so that they can exchange health information using advanced and secure electronic
communication. The strategies for achieving this goal are:
a. Encourage regional collaborations that reflect the needs and goals of the region
b. Develop a national health information network
c. Coordinate federal health information systems
3. Personalize care with consumer-based health records and better information for consumers.
4. Improve public health through advanced biosurveillance methods and streamlining the collection of data for quality
measurement and research. This requires the collection of detailed clinical information and will be accomplished by:
a. Unify public health surveillance architectures by making the systems able to exchange information
b. Streamline quality and health status monitoring to provide the ability to look at quality at the point of care and in real time
c. Provide tools to accelerate research and dissemination of evidence into clinically useful products, applications, and knowledge
To fulfill these goals, information, which is the structure on which healthcare is built, can no longer be managed with
paper. If we are to provide evidence-based care, the mountains of data that are hidden in medical records must be made to
reveal their secrets. Bakken (2001) told us that there are five components needed to provide evidence-based care:
Information is a capital good with a value the same as labor and materials (National Advisory Council on Nurse Education
and Practice, 1997). The financial health of organizations depends on effective and efficient information management.
Today, healthcare organizations are waking up to the fact that how information is handled and processed has a great effect
on both the outcomes for those who purchase services and the economics of healthcare. Manual recording and filing of
information are inadequate to manage today's healthcare information. We have made some attempts to use technology to
manage information, but these efforts often fall short as a result of our inexperience in grasping the schemes of where
information originates, how it is used clinically and administratively, and how it can be used to improve practice.
The complexity of today's healthcare milieu, added to the explosion of knowledge, makes it impossible for any clinician
to remember everything needed to provide high-quality patient care. Additionally, healthcare consumers today want their
healthcare providers to integrate all known relevant scientific knowledge in providing their care. We have passed the time
when the unaided human mind can perform this feat: Modern information management tools are needed as well as a
commitment by healthcare professionals to change practices when more knowledge becomes available.
B. Definitions
Informatics is about managing information. The tendency to relate it to computers comes from the fact that the ability to
manage large amounts of information was born with the computer and progressed as computers became more powerful
and commonplace. It is, however, human ingenuity that is the crux of informatics. The term "informatics" originated from
the Russian term "informatika" (Sackett & Erdley 2002). A Russian publication, Oznovy Informatiki (Foundations of
Informatics), published in 1968 is credited with the origins of the general discipline of informatics (Middle East Technical
University, n.d.). At that time it was described within the context of computers. "Medical informatics" was the first term
used to identify informatics in healthcare. It was defined as the information technologies that are concerned with patient
care and the medical decision making process. Another definition stated that medical informatics is complex data
processing by the computer to create new information. As with many healthcare enterprises, there was debate about
whether "medical" referred only to informatics focusing on physician concerns, or if it refers to all healthcare disciplines.
Increasingly, it is seen that other disciplines have a body of knowledge separate from medicine, but part of healthcare, and
the term healthcare informatics is becoming more commonly used. In essence, informatics is the management of
information, using cognitive skills and the computer.
HEALTHCARE INFORMATICS
Healthcare informatics focuses on managing information in healthcare. It is an umbrella term that describes the capture,
retrieval, storage, presentation, sharing, and use of biomedical information, data, and knowledge for providing care,
problem solving, and decision making (Shortliffe & Blois, 2001). The purpose is to improve the use of healthcare data,
information, and knowledge in supporting patient care, research, and education (Delaney, 2001). The focus is on the
subject, information, rather than the tool, the computer. The distinction is not always obvious as a result of the need to
master computer skills to enable one to manage this information. The computer is used in acquiring, organizing,
manipulating, and presenting the information. It will not produce anything of value without human direction in how,
when, and where the data is acquired, how it is treated, interpreted, manipulated, and presented. Informatics provides that
human direction.
NURSING INFORMATICS
Healthcare has many disciplines; thus, it is not surprising that healthcare informatics has many specialties, of which nursing
is one. Nursing informatics is also a subspecialty of nursing which the American Nurses Association (ANA) recognized in
1992, with the first informatics certification examination being given in the fall of 1995 (Newbold, 1996). Nursing
informatics has as its focus managing information pertaining to nursing. Specialists in this area look at how nursing
information is acquired, manipulated, stored, presented, and used. Informatics nurses work with practicing nurses to
identify the needs of nurses for information and support, and with system developers in the development of systems that
work to complement the practice needs of nurses. Nursing informatics specialists bring to system development and
implementation a viewpoint that supports the needs of the clinical end user. The objective is an information system that is
not only user friendly for data input, but presents the clinical nurse with needed information in a manner that is timely and
useful. This is not to say that nursing informatics stands alone; it is an integral part of the interdisciplinary field of
healthcare informatics, hence related to and responsible to all the healthcare disciplines.
The term "nursing informatics," was probably first used and defined by Scholes and Barber in 1980 in their address that
year to the MED-INFO conference in Tokyo. There is still no definitive agreement on exactly what the term nursing
informatics means. As Simpson once said (Simpson, 1998), defining nursing informatics is difficult because it is a moving
target. The original definition said that nursing informatics was the use of computer technology in all nursing endeavors:
nursing services, education, and research (Scholes & Barber, 1980). Another early definition that followed the broad
definition of Scholes and Barber was written by Hannah, Ball & Edwards (1994). They defined nursing informatics as any
use of information technologies in carrying out nursing functions. Like the Scholes and Barber definition, these
definitions focused on the technology and could be interpreted to mean any use of the computer from word processing to
the creation of artificial intelligence for nurses as long as the computer use involved the practice of professional nursing.
The shift from a technology orientation in definitions to one that is more information oriented started in the mid 1980s
with Schwirian (Staggers & Thompson, 2002). She created a model to be used as a framework for nursing informatics
investigators (Schwirian, 1986). The model consisted of four elements arranged in a pyramid with a triangular base. The
top of the pyramid was the desired goal of the nursing informatics activity and the base was composed of three elements:
1) users (nurses and students), 2) raw material or nursing information, and 3) the technology, which is computer hardware
and software. They all interact in nursing informatics activity to achieve a goal. The model was intended as a stimulus for
research.
The first widely circulated definition that moved away from technology to concepts was from Graves and Corcoran
(Staggers & Thompson, 2002). They defined nursing informatics as "a combination of computer science, information
science and nursing science designed to assist in the management and processing of nursing data, information and
knowledge to support the practice of nursing and the delivery of nursing care" (Graves & Corcoran, 1989)(p. 227). This
definition secured the position of nursing informatics within the practice of nursing and placed the emphasis on data,
information, and knowledge (Staggers & Thompson, 2002). Many consider it the seminal definition of nursing
informatics.
Turley (Turley, 1996), after analyzing previous definitions, added another discipline, cognitive science, to the base for
nursing informatics. Cognitive science emphasizes the human factor in informatics. Its main focus is the nature of
knowledge, its components, development, and use. Goossen (1996), thinking along the same lines, used the Graves and
Corcoran definition as a basis and expanded the meaning of nursing informatics to include the thinking that is done by
nurses to make knowledge-based decisions and inferences for patient care. Using this interpretation, he felt that nursing
informatics should focus on analyzing and modeling the cognitive processing for all areas of nursing practice. Goossen
also stated that nursing informatics should look at the effects of computerized systems on nursing care delivery.
The first ANA definition in 1992 added the role of the informatics nursing specialist to the Graves and Corcoran
definition. The 2001 ANA definition stated that nursing informatics combines nursing, information and computer sciences
for the purpose of managing and communicating data, information, and knowledge to support nurses and healthcare
providers in decision making (American Nurses Association, 2001). Information structures, processes, and technology are
used to provide this support. In the latest ANA Scope and Standards this definition was reiterated, albeit in slightly
different wording (American Nurses Association, 2008) and with the addition of wisdom to the data, information, and
knowledge conceptual framework. This most recent definition emphasized again that the goal of nursing informatics is to
optimize information management and communication to improve the health of individuals, families, populations, and
communities.
Staggers and Thompson (2002), who believe that the evolution of definitions will continue, pointed out that in all of the
current definitions, the role of the patient is underemphasized. Some early definitions included the patient, but as a
passive recipient of care. With the advent of the Internet, more and more patients are taking an active role in their
healthcare. This factor changes not only the dynamics of healthcare, but permits a definition of nursing informatics that
recognizes that patients as well as healthcare professionals are consumers of healthcare information and that patients may
be participating in keeping their medical records current. Staggers and Thompson also pointed out that the role of the
nurse as an integrator of information has been overlooked and should be considered in future definitions.
Despite these definitions, the focus of much of today's practice informatics is still on capturing data at the point of care
and presenting it in a manner that facilitates the care of an individual patient. Although this is a vital first step, when
designing patient care information systems, thought needs to be given to secondary data analysis, or analysis of data for
purposes other than those for which it was originally collected. Using aggregated data, that is, the same piece(s) of data for many patients (for example, outcomes of a given intervention), you can make decisions based on actual patient care data.
Understanding how informatics can serve you as an individual nurse, as well as the profession, puts you in a position to
work with informatics specialists to make retrievable the data needed to improve patient care.
In 1850, it was possible for all the medical knowledge known to the Western world to be put into two large volumes, making it possible for one person to read and assimilate all this information. The situation today is dramatically different.
The number of journals available in healthcare and the research that fills them have increased many times over. Even in
the early 1990s, if physicians read two journal articles a day, by the end of a year they would be 800 years behind in their
reading (McDonald, 1994). A healthcare clinician may be expected to know something about 10,000 different diseases
and syndromes, 3,000 medications, 1,100 laboratory tests and the information in the more than 400,000 articles added to
the biomedical information each year (Davenport & Glaser, 2002). Additionally, current knowledge is constantly
changing: clinicians can expect much of their knowledge to be obsolete in five years or less.
In healthcare, the increase in knowledge has led to the development of many specialties such as respiratory therapy,
neonatology, and gerontology, and subspecialties within each of these. As these specialties have proliferated and spawned
the development of many miraculous treatments, healthcare has too often become fractionalized, resulting in difficulty in
gaining an overview of the entire patient. The pressure of accomplishing the tasks necessary for a patient's physical
recovery usually leaves little time for perusing a patient's record and putting together the bits and pieces so carefully
charted by each discipline. Even if time is available, there is simply so much data, in so many places, that it is difficult to
merge the data with the knowledge that a healthcare provider has learned, as well as with new knowledge needed to
provide the best patient care. We are drowning in data but lack the time and skills to transform it to useful information or
knowledge.
The development of the computer as a tool to manage information can be seen in its history. The first information
management task "computerized" was numeric manipulation. Although not technically a computer by today's terminology,
the first successful computerization tool was the abacus, which was developed about 3000 BC. Although real speed in these tasks was possible once one developed skill, the operator of the abacus still had to mentally manipulate data. All
the abacus did was store the results step by step. Slide rules came next in 1632, but like the abacus required a great deal of
skill on the part of the operator. The first machine to add and subtract by itself was Blaise Pascal's "arithmetic machine,"
built in 1642. The first "computer" to be a commercial success was Jacquard's weaving machine built in 1804. Its
efficiency so frightened workers at the mill where it was built that they rioted, broke apart the machine, and sold the parts.
Despite this setback, the machine proved a success because it introduced a cost-effective way of producing goods.
The difference and analytical engines, early computers designed by Charles Babbage in the mid 19th century, although
never built, laid the foundation for modern computers (Analytical Engine, 2007). The first time that an automatic
calculating machine was successfully used was in the 1890 census. Herman Hollerith (whose company later became part of IBM) used the Jacquard loom concept of punch cards to create a machine that enabled the 1890 census takers to compile the results in one year instead of the 10 (Herman Hollerith-Punch Cards) required for the 1880 census. The first computer by today's
perception was the Electronic Numerical Integrator and Computer (ENIAC) built by people at the Moore School of
Engineering at the University of Pennsylvania in partnership with the U.S. Government. When completed in 1946, it
consisted of 18,000 vacuum tubes, 70,000 resistors, and 5 million soldered joints. It consumed enough energy to dim the
lights in an entire section of Philadelphia (Moye, 1996). The progress in hardware since then is phenomenal; today's
"Palmtop" computers have more processing power than ENIAC did.
The use of computers in healthcare originated in the late 1950s and early 1960s as a way to manage financial information.
This was followed in the late 1960s by the development of a few computerized patient care applications (Saba & Erdley,
2006). Some of these hospital information systems included patient diagnoses and other patient information as well as
care plans based on physician and nursing orders. Because of the lack of processing power then available, these systems
were unable to deliver what was needed and never became widely used.
EARLY HEALTHCARE INFORMATICS SYSTEMS
One of the interesting early uses of the computer in patient care was the Problem-Oriented Medical Information System
(PROMIS) begun by Dr. Lawrence Weed at the University Medical Center in Burlington, VT (McNeill, 1979) in 1968.
The importance of this system is that it was the first attempt at providing a total, integrated system that covered all aspects
of healthcare, including patient treatment. It was patient oriented and used as its framework the problem-oriented medical
record (POMR). The unit featured an interactive touch screen and was known for fast responsiveness (Problem-Oriented
Medical Information System, n.d.). At its height, it consisted of over 60,000 frames of knowledge.
PROMIS was designed to overcome four problems that are still with us today: lack of care coordination, reliance on
memory, lack of recorded logic of delivered care, and lack of an effective feedback loop (PROMIS: The Problem-Oriented
Medical Information System, 1980). The system provided a wide array of information to all healthcare providers. All
disciplines recorded their observations and plans, and related them to a specific problem. This broke down barriers
between disciplines, making it possible to see the relationship between conditions, treatments, costs, and outcomes.
Unfortunately, this system did not have wide acceptance. To embrace it meant a change in the structure of healthcare,
something that did not begin to happen until the 1990s, when managed care in all its variations reinvigorated a push
toward more patient-centered information systems, a push that is continuing as you read this.
Another early system, which became functional in 1967 and is still functioning, is the Health Evaluation through Logical Processing (HELP) system developed by the Informatics Department at the University of Utah School of Medicine. It was first implemented in a heart catheterization laboratory and a post-open-heart intensive care unit. It is now hospital wide and
operational in many hospitals in the Intermountain Healthcare system (Gardner, Pryor, & Warner, 1999). This is not only a
hospital information system, but integrates a sophisticated clinical decision support system that provides information to
clinical areas. It was the first hospital information system that collected data for clinical decision making and integrated it
with a medical knowledge base. It is well accepted by clinicians and has demonstrated that a clinical support system is
feasible and that it reduces healthcare costs without sacrificing quality.
As the science of informatics has progressed, there have been changes in information systems. Originally computerized
clinical information systems were process oriented. That is, they were implemented to computerize a specific process, for
example, billing, order entry, or laboratory reports. This led to the creation of different software systems for different
departments, which unfortunately could not share data, creating a need for clinicians to enter data more than once. An
attempt to share data by integrating data from disparate systems is a difficult and sometimes impossible task. Even when
possible, the results are often disappointing and can leave negative impressions of computerization in users' minds. These
barriers are being slowly overcome with the introduction of data standards, both in terminology and in protocols for
passing data from one system to another.
Newer systems, however, are organized by data and are designed to use the same piece of data many times, thus requiring
that the entry be made only once. The primary design is based on how data is gathered, stored, and used in an entire
institution rather than on a specific process such as pharmacy or laboratory. For example, when a medication order is
placed, the system can have access to all the information about a patient including his diagnosis, age, weight, allergies,
and eventually genomics, as well as the medications he is currently taking. The order and patient information can also be
matched against knowledge such as what drugs are incompatible with the prescribed drug, the dosage of the drug, and the
appropriateness of the drug for this patient. If there are difficulties, the system can deliver warnings at the time the
medication is ordered instead of requiring clinician intervention either in the pharmacy or at the time of administration.
Another feature in a data-driven system is the ability to make the same information available to the dietician planning the
patient's diet and the nurse providing patient care and doing discharge planning, thus enabling a more complete picture of
a patient than one that would be available when separate systems handle dietetics and nursing.
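To make the "enter once, use many times" idea concrete, the following is a minimal sketch in Python. The patient record, the field names, and the tiny interaction table are all hypothetical, invented only to illustrate how a single shared record could support an order check and other role-specific views at the same time; real clinical systems are of course far more elaborate.

```python
# Minimal sketch of a data-driven record: captured once, reused for several purposes.
# All names, fields, and the interaction list are hypothetical, for illustration only.

patient = {
    "name": "Jane Doe",
    "age": 72,
    "weight_kg": 58,
    "allergies": ["penicillin"],
    "current_meds": ["warfarin"],
}

# Toy knowledge base of drug-drug interactions (illustrative, not clinical guidance).
interactions = {("warfarin", "aspirin"): "increased bleeding risk"}

def check_order(patient, drug):
    """Return the warnings the system would raise when this drug is ordered."""
    warnings = []
    if drug in patient["allergies"]:
        warnings.append(f"Allergy alert: patient is allergic to {drug}.")
    for current in patient["current_meds"]:
        problem = interactions.get((current, drug)) or interactions.get((drug, current))
        if problem:
            warnings.append(f"Interaction with {current}: {problem}.")
    return warnings

# The order check and a dietician's view both draw on the same single entry.
print(check_order(patient, "aspirin"))  # ['Interaction with warfarin: increased bleeding risk.']
dietician_view = {key: patient[key] for key in ("age", "weight_kg", "allergies")}
print(dietician_view)
```

The point of the sketch is not the particular checks but the design choice: because the data is stored once in a shared structure, every new use (ordering, diet planning, discharge planning) is another view of the same record rather than another round of data entry.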
Evidence-based practice will result not only from research and practice guidelines, but also from unidentifiable (data
minus any patient identification) aggregated data from actual patients. It will also be possible to see how patients with a given genomic makeup react to a drug, thus helping the clinician in prescribing drugs. This same aggregated data will help
clinicians make decisions by providing information about treatments that are most effective for given conditions,
replacing the current system which is too often based on "what we have always done" rather than empirical information.
These systems will use computers that are powerful enough to process data so that information is created "on the fly," or
immediately when requested. Systems that incorporate these features will require a new way of thinking. Instead of
having all one's knowledge in memory, one must be comfortable both with needing to access information and with
changing one's practice to accommodate the new knowledge.
Computerization will affect healthcare professionals in other ways. Some jobs will change focus. As nurses we may find
that our job as a patient care coordinator has shifted from transcribing and checking orders to accessing this information
on the computer. To preserve our ability to provide full care for our patients, and as an information integrator for other
disciplines, we will need to make our information needs known to those who design the systems. To accomplish this we
all need to be aware of the value of both our data and our experience and to be able to identify the data we need to
perform our job, as well as to appreciate the value of the data that we and others add to the healthcare system.
D. Benefits of Informatics
The information systems described previously will bring many benefits to healthcare. These benefits can be seen in the
ability to create and use aggregated data, prevent errors, ease working conditions, and provide better healthcare records.
One of the primary benefits of informatics is that data that was previously buried in inaccessible records becomes usable.
Informatics is not just about collecting data, but about making it useful. When data is captured electronically in a
structured manner, it can be retrieved and used in many different ways, both to easily assimilate information about one
patient and as aggregated data. Aggregated data is the same piece or pieces of data for many patients. Table 1-1 shows
some aggregated data for postsurgical infections sorted by physician and then by the organism. Because infections for
some patients are caused by two different pathogens in Table 1-1, you see two entries for some patients; however, this is
all produced from only one entry of the data. With just a few clicks of a mouse, this same data could be organized by unit
to show the number of infections on each unit. This is possible because data that is structured as in Table 1-1 and
standardized can be presented in many different views.
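As an illustration of how structured, standardized entries can be regrouped into different views, the short Python sketch below uses made-up infection records in the spirit of Table 1-1 (the patients, physicians, organisms, and units are invented). The same records are counted first by physician and then by unit, with no re-entry of data.

```python
# Minimal sketch: one set of structured entries, two different aggregated views.
# The records below are fabricated for illustration and mirror the idea of Table 1-1.
from collections import Counter

infections = [
    {"patient": "A", "physician": "Dr. Smith", "organism": "S. aureus",     "unit": "3 West"},
    {"patient": "B", "physician": "Dr. Smith", "organism": "E. coli",       "unit": "3 West"},
    {"patient": "B", "physician": "Dr. Smith", "organism": "S. aureus",     "unit": "3 West"},  # second pathogen, same patient
    {"patient": "C", "physician": "Dr. Jones", "organism": "P. aeruginosa", "unit": "4 East"},
]

# View 1: infections per physician.  View 2: infections per unit.
by_physician = Counter(record["physician"] for record in infections)
by_unit = Counter(record["unit"] for record in infections)

print(by_physician)  # Counter({'Dr. Smith': 3, 'Dr. Jones': 1})
print(by_unit)       # Counter({'3 West': 3, '4 East': 1})
```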
When aggregated data is examined, patterns can be seen that might otherwise take several weeks or months to become
evident, or might never become evident. When patterns, such as the prevalence of infections for Dr. Smith (Table 1-1), emerge, investigations into what these patients have in common can begin. Caution, however, should be observed. The
aggregated data in Table 1-1 are insufficient for drawing conclusions; the data only serves as an indication of a problem
and clues to where to start investigating. Aggregated data is a type of information or even knowledge, but wisdom says
that it is incomplete. If this data were shared outside of an agency, or with those who don't need to have personal
information about a patient, it would be de-identified, that is, there would be no patient names and probably no physician
names. Deciding who can see what data is one of the current issues in informatics.
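The de-identification step mentioned above can also be sketched in a few lines of Python. This is only an illustration with hypothetical field names; real de-identification follows formal rules and removes far more identifiers than the two shown here.

```python
# Minimal sketch of de-identification before data is shared outside the agency.
# Field names are hypothetical; real rules remove many more identifiers than these two.

def de_identify(record, identifiers=("patient", "physician")):
    """Return a copy of the record with the named identifier fields removed."""
    return {key: value for key, value in record.items() if key not in identifiers}

record = {"patient": "A", "physician": "Dr. Smith", "organism": "S. aureus", "unit": "3 West"}
print(de_identify(record))  # {'organism': 'S. aureus', 'unit': '3 West'}
```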
Informatics through information systems can improve communication between all healthcare providers, which will
improve patient care as well as reduce stress. Additional benefits for healthcare include making the storage and retrieval of
healthcare records much easier, quicker retrieval of test results, printouts of needed information organized to meet the
needs of the user, and fewer lost charges as a result of easier methods of recording charges. The computerization of
administrative tasks such as staffing and scheduling also saves time and money.
Each healthcare discipline will benefit from its investment in informatics. In nursing, informatics will not only enhance
practice, but also allow nursing science to develop (Fitzpatrick, 1988). Informatics will improve documentation and, when
properly implemented, can reduce the time spent in documentation. Many nurses believe that they spend over 50% of their time doing paperwork (Womack et al., 2004). Entering vital signs both in nursing notes and on a flow sheet
wastes time and invites errors. In a well-designed clinical documentation system, this data will be entered once, retrieved,
and presented in many different forms to meet the needs of the user.
Paper documentation methods create other problems such as inconsistency and irregularity in charting as well as the lack
of data for evaluation and research mentioned above. An electronic clinical information system can remind users of the
need to provide data in areas apt to be forgotten and provide a list of terms that can be clicked to enter data. The ability to
use patient data for both quality control and research is vastly improved when documentation is complete and electronic.
Despite Florence Nightingale's emphasis on data, for much of nursing's history, nursing data has not been valued. It is
either buried in paper patient records that make retrieving it economically infeasible or, worse, discarded when a patient is
discharged, hence unavailable for building nursing science. With the advent of electronic clinical documentation, nursing
data can be made a part of the EHR and become available to researchers for building evidence-based nursing knowledge.
The recent Maryland report on the use of technology to address the nursing shortage demonstrated that informatics can be
used to improve staff morale and patient care (Womack et al., 2004). For example, paper request forms can be eliminated,
work announcements can be more easily communicated, the time for in-services can be reduced, and empty shifts can be
filled using Internet software.
In understanding the role and value that informatics adds to nursing, it is necessary to recognize that the profession is not
confined to tasks, but that it is cognitive. Providing the data to support this is a joint function of nursing informatics and
clinicians. Identifying this data and determining how to facilitate its collection is an informatics skill that all nurses need.
The need to manage complex amounts of data in patient care demands that nurses, regardless of specialty area, have
informatics skills (Gaumer, Koeniger-Donohue, Friel, & Sudbay, 2007; Nelson, 2007; Wilhoit, Mustain, & King, 2006).
Informatics skills require basic computer skills as one component (Staggers, Gassert, & Curran, 2002). A recent survey of
hospital administrators in three states in the southeastern United States revealed that one of the competencies that they
wanted from nurses dealt with the use of the computer (Uttley-Smith, 2004). This supports an earlier study by Gravely,
Lust, & Fullerton (1999) that found that 83% of hospital recruiters indicated the importance of computer skills. Another
skill needed for proficiency in informatics is information literacy. Both these skills have also been identified by the ANA
and National League for Nursing (NLN) as necessary for evidence-based practice.
COMPUTER FLUENCY
The term "computer literacy" is used broadly to mean the ability to perform various tasks with a computer. Given the
rapid changes in technology and in nursing, perhaps a better perspective on computer use can be gained by thinking in
terms of computer fluency rather than literacy. The term "fluency" implies that an individual has a lifelong commitment to
acquiring new skills for the purpose of being more effective in work and personal life (Committee on Information
Technology Literacy, 1999). This necessitates a goal of gaining sufficient foundational skills and knowledge to enable one
to independently acquire new skills. Thus, computer literacy is a temporary state, whereas computer fluency involves
being able to increase one's ability to effectively use a computer when needed.
A perusal of Listserv archives in informatics reveals periodic requests for instruments to measure the computer
competency of staff. Unfortunately there is little agreement on specific competencies needed, let alone an instrument to
measure this, but there is a consensus that it involves a positive attitude toward computers, knowledge and understanding
of computer technology, computer hardware and software skills, and the ability to visualize the overall benefits to nursing
from this technology (Hobbs, 2002). Simpson (1998) pointed out the need for nurses to master computers to avoid
extinction. A computer is a mind tool that frees us from the mental drudgery of data processing, just as the bulldozer frees
us from the drudgery of digging and moving dirt. Like the bulldozer, however, the computer must be used intelligently or
damage can result.
Given the forces moving healthcare toward more use of informatics, it is important for nurses to learn the skills associated
with using a computer for managing information. Additionally, knowing how to use graphical interfaces and application
programs such as word processing, spreadsheets, databases, and presentation programs is as important an element in a
professional career as mastering technology skills (McCannon & O'Neal, 2003). Just as anatomy and physiology provide a
background for learning about disease processes and treatments, computer fluency skills are necessary to appreciate more
complex informatics concepts (McNeil & Odom, 2000) and for learning clinical applications (Nagelkerk, Ritolo, &
Vandort, 1998).
Ronald and Skiba (1987) were the first to look at computer competencies required for nurses. In the late 1990s and early
part of this century this issue was revisited, but the focus became the use of computer skills as part of informatics skills
(McCannon & O'Neal, 2003; McNeil et al., 2003; Pew Health Professions Commissions, 1998; Staggers, Gassert, &
Curran, 2001; Staggers et al., 2002; Uttley-Smith, 2004). One of the more thorough studies is by Staggers, Gassert, and
Curran (2001). They defined four levels of informatics competencies for practicing nurses. The first two pertain to all
nurses, the last two to informatics nurses.
• The beginning nurse should possess basic information management and computer technology skills. Accomplishments
should include the ability to access data, use a computer for communication, use basic desktop software, and use decision
support systems.
• Experienced nurses should be highly skilled in using information management and computer technology to support their
major area of practice. Additional skills for the experienced nurse include being able to make judgments on the basis of
trends and patterns within data elements and to collaborate with informatics nurses to suggest improvements in
nursing systems.
• The informatics nurse specialist should be able to meet the information needs of practicing nurses by integrating and
applying information, computer, and nursing sciences.
• The informatics innovator will conduct informatics research and generate informatics theory.
INFORMATION LITERACY
Information literacy, or the ability to know when one needs information and how to locate, evaluate, and effectively use it
(National Forum on Information Literacy, 2004), is an informatics skill. Although it involves computer skills, like
informatics, it requires critical thinking and problem solving. Information literacy is part of the foundation for evidence-
based practice and provides nurses with the ability to be intelligent information consumers in today's electronic
environment (Jacobs, Rosenfeld, & Haber, 2003).
The level of computer fluency needed by nurses to be both information literate and informatics capable in their practice is
what is expected of any educated nurse. In this course, the lessons addressing basic computer skills will emphasize
concepts that promote the ability to learn new applications. These lessons provide information underlying the use of informatics in professional life, both on and off a clinical unit, and support the ability to adapt to changes in technology. In future lessons
these principles will be built upon to allow the reader to start to develop beginning informatics skills, including the ability
to find and evaluate information from electronic sources. Additional lessons will allow the reader to develop skills
necessary to work with nursing informatics specialists in providing effective information systems and the use of nursing
data.
Summary
Healthcare is in transition and nursing is being affected by these changes. Part of these changes involves informatics.
Whether the change will be positive or negative for patient care and nursing depends on nurses. For the change to be
positive, nurses need to develop skills in information management, known in healthcare as informatics. To gain these
skills, a background in both computer and information literacy skills is necessary.
As knowledge continues to expand exponentially, data and information can no longer be managed solely by the human
mind. The use of tools to aid the human mind has become mandatory. Although healthcare has been behind most
industries in using technology to manage its data, there are many forces, both at the governmental and private levels that
are working to change this. With these pressures, healthcare informatics is rapidly expanding. There are many
subspecialties in informatics, of which nursing is one. Embracing informatics will allow nurses to assess and evaluate
practice just as a stethoscope allows the evaluation and assessment of a patient.
The use of computers in healthcare started in the 1960s, mostly in financial areas, but with the advance in computing
power and the demand for clinical data, computers are being used more and more in clinical areas. With this growth has
come a change in focus for information systems from providing solutions for just one process, to an enterprise-wide
patient-centered system that focuses on data. This new focus provides the functionality that allows one piece of data to be
used in multiple ways. To understand and work with clinical systems, as well as to fulfill other professional
responsibilities, nurses need to be computer fluent, information literate, and informatics knowledgeable.
1. Introduction
We have entered the information age. Home computers, laptops, handheld computers, and iPods are pervasive. Banks
and stock markets move and track billions of dollars around the world every day through information systems. Factories
and stores buy, build, sell, and account for the products in our lives through information systems. In schools, computers
are being used as teaching tools and as instructional resources for students in such varied disciplines as astronomy,
Chinese, and chemistry. The airline industry uses information systems to book seats, calculate loads, order meals,
determine flight plans, determine fuel requirements, and even fly the planes and control air traffic.
The information age has not left the health industry untouched. Moving beyond standard data processing for administrative functions common to all organizations, such as human resources, payroll, and finance, information systems now play an important role in patient care by interpreting electrocardiograms, scheduling, entering orders, reporting results, and preventing drug interactions (by cross-referencing drug compatibility and warning appropriate staff). We are
beginning to see the advent of lifetime electronic health records in many countries. In addition, information systems are
now being more widely used in support of population health and public health activities related to health protection (e.g.,
immunization), health promotion (e.g., well baby clinics), disease prevention (e.g., smoking cessation or needle exchange
programs), and health monitoring or surveillance (e.g., restaurant inspection or air quality monitoring).
Nurses have always had a major communication role at the interface between the patient/client and the health system. This
role is now labeled information management, and nurses are increasingly using information systems to assist them to
fulfill this role in clinical practice, administration, research, and education. Before attempting to talk about the role of
nursing in informatics, let us first establish definitions of "nursing" and "nursing informatics."
What Is Nursing?
Nursing is emerging as a professional practice discipline. Based on the work of theorists, nursing practitioners see its
goals as the promotion of adaptation in health and illness and the facilitation of achievement of the highest possible
individual state of health (Rogers, 1970; Roy, 1976). These early theoretical models have provided the impetus for the
development of current approaches to the classification of phenomena of concern to nursing care.
The practitioner of nursing has many roles and responsibilities. Among these roles are those of an interface between the
client and the healthcare system and that of client advocate in the healthcare system. Nursing functions can be considered
under three major categories:
• Managerial, which includes establishing nursing care plans, keeping charts, transcribing orders and requisitions, and
scheduling patient appointments for diagnostic procedures or therapy;
• Delegated tasks, which include physical treatments and administration of medications under the direction of a physician;
• Autonomous nursing functions, which include interpersonal communication skills, application of the psychological
principles of client care, and providing physical care to patients.
It is the third category of nursing activities that is the core of nursing practice. In this category of autonomous activity
nurses use their knowledge, skills, judgment, and experience to exercise independent decision making related to the
phenomena for which nurses provide care and the nursing interventions that affect those phenomena and influence patient
care outcomes.
Before we explore the nature of hospital and nursing information systems, we need to review the definitions of health,
medical, and nursing informatics. Francois Gremy of France is widely credited with coining the term informatique
medical, which was translated into English as medical informatics. Early on, the term medical informatics was used to
describe "those collected informational technologies which concern themselves with the patient care, medical decision
making process" (Greenburg, 1975). Another early definition, in the first issue of the Journal of Medical Informatics,
proposed that medical informatics was "the complex processing of data by a computer to produce new kinds of
information" (Anderson, 1976). As our understanding of this discipline developed, Greenes and Shortliffe (1990)
redefined medical informatics as "the field that concerns itself with the cognitive, information processing and
communication tasks of medical practice, education, and research, including the information science and the technology
to support these tasks. An intrinsically interdisciplinary field... [with] an applied focus... [addressing] a number of
fundamental research problems as well as planning and policy issues." More recently, Shortliffe et al. (2001) defined medical informatics as "the scientific field that deals with biomedical information, data, and knowledge: their storage, retrieval, and optimal use for problem-solving and decision-making."
One question consistently arose: "Does the word medical refer only to physicians, or does it refer to all healthcare
professions?" The premise was that medical referred to all healthcare professions and that a parallel definition of medical
informatics might be "those collected informational technologies that concern themselves with the patient care decision-
making process performed by healthcare practitioners." Thus, because nurses are healthcare practitioners who are
involved in the patient care and the decision-making process that uses information captured by and extracted from the
information technologies, there clearly was a place for nursing in medical informatics. Increasingly, as research was
conducted and medical informatics evolved, nurses realized there was a discrete body of knowledge related to nursing and
the use of informatics. During the early 1990s, other health professions began to explore the use of informatics in their
disciplines. Mandil (1989) coined the phrase "health informatics," which he defined as the use of information technology
(including both hardware and software) in combination with information management concepts and methods to support
the delivery of healthcare. Thus, health informatics has become the umbrella term encompassing medical, nursing, dental,
and pharmacy informatics among others. Health informatics focuses attention on the recipient of care rather than on the
discipline of the caregiver.
The nurse's early role in medical informatics was that of a consumer. The literature clearly shows the contributions of
medical informatics to the practice of nursing and patient care. Early developments in medical informatics and their
advantages to nursing have been thoroughly documented (Hannah, 1976). These initial developments were fragmentary
and generally restricted to automating existing functions or activities such as automated charting of nurses' notes,
automated nursing care plans, automated patient monitoring, automated personnel time assignment, and the gathering of
epidemiological and administrative statistics. Subsequently, an integrated approach to medical informatics resulted in the
development and marketing of sophisticated hospital information systems that included nursing applications or modules.
As models of health services delivery have shifted toward integrated care delivery across the entire spectrum of health
services, integrated information systems have developed. These enterprise systems provided an integrated clinical record
within a complex integrated healthcare organization. Such systems support evidence-based nursing practice, facilitate
nurses' participation in the healthcare team, and document nurses' contribution to patient care outcomes. They have failed,
however, to meet the challenge of providing a nationwide comprehensive, lifelong, electronic health record that integrates
the information generated by all of a person's contacts with the healthcare system.
Early activities that fell under the umbrella of nursing informatics included, for example:
• Use of artificial intelligence or decision-making systems to support the use of the nursing process
• Use of a computer-based scheduling package to allocate staff in a hospital or healthcare organization
• Use of computers for patient education
• Use of computer-assisted learning in nursing education
• Nursing use of a hospital information system
• Research related to what information nurses use when making patient care decisions and how those decisions are made
As the field of nursing informatics has evolved, the definition of nursing informatics has been elaborated and refined.
Graves and Corcoran (1989) suggested that nursing informatics is "a combination of computer science, information
science, and nursing science designed to assist in the management and processing of nursing data, information, and
knowledge to support the practice of nursing and the delivery of nursing care." An Expert Panel of the American Nurses
Association (2001) promoted nursing informatics as a specialty that integrates nursing science, computer science, and
information science to manage and communicate data, information, and knowledge in nursing practice. Nursing
informatics facilitates the integration of data, information and knowledge to support patients, nurses and other providers in
their decision-making in all roles and settings. This support is accomplished through the use of information structures and
information technology.
In an extensive review and analysis of the evolution of definitions of nursing informatics, Staggers and Thompson (2002,
p. 259) concluded that after three decades as a specialty there was still a proliferation of definitions for nursing
informatics. Staggers and Thompson (2002) modified the ANA definition and proposed a revised definition of nursing
informatics as a specialty that integrates nursing science, computer science, and information science to manage and
communicate data, information and knowledge to support patients, nurses and other providers in their decision making in
all roles and settings. This support is accomplished through the use of information structures, information processes and
information technology.
Furthermore, Staggers and Thompson (2002, p. 259-260) built on the ANA work to propose that the goal of nursing
informatics is to improve the health of populations, communities, families and individuals by optimizing information
management and communication. This includes the use of information and technology in the direct provision of care, in
establishing effective administrative systems, in managing and delivering educational experiences, in supporting lifelong
learning, and in supporting nursing research.
As we mentioned earlier, nursing informatics has moved beyond merely the use of computers and is increasingly referring
to the impact of information and information management on the discipline of nursing. Staggers and Thompson (2002)
affirmed our long-held position that nurses are "information integrators at the patient level." Nurses form the largest group
of healthcare professionals in any setting that has a health information system. Therefore, when providing patient care,
nurses make use of information management more often than any other group of healthcare professionals.
The nursing profession is recognizing the potential of informatics to improve nursing practice and the quality of patient
care. New roles are evolving for nurses. The American Nurses Association (2001) recognized nursing informatics as a
nursing specialty in 2001. Hospitals and other healthcare organizations are now hiring informatics nurse specialists and
informatics nurse consultants to help in the design and implementation of information systems. Nurse educators are using
information systems to manage the educational environment. Computer-based information systems are used to instruct,
evaluate, and identify problem areas of specific students; gather data on how each student learns; process data for research
purposes; and carry out continued education. Nurse researchers, who have been using computerized software for data
manipulation for years, are turning their attention to the problems of identifying variables for data sets essential to the
diagnosing of nursing problems, choosing nursing actions, and evaluating patient care. There is no doubt that we have
reached the information age in nursing. We must now prepare for the full impact of informatics on nursing.
5. Future Implications
Technology has historically relieved people of backbreaking drudgery and dreary monotony, providing them with more
free time to pursue personal relations and creative activities. Nurses, too, when relieved of routine and time-consuming
clerical or managerial paper handling chores, can devote more time to the unique problems and needs of individual
patients or clients. Increasingly, across the world, the managerial and clerical paper-handling tasks of nursing are being
performed by information systems. In addition, robotics (e.g., lifting and turning patients, delivering medications or
meals, and recording temperature, pulse, and other physiological measurements) might assist with the physical care
category of nursing tasks. Similarly, decision support systems may actively assist with nursing judgments.
Relieved of routine and less complex chores, the professional nurse, having enhanced information management skills and working in an environment enhanced by information systems, will be expected to carry out higher level, more complex
activities that cannot be programmed. Nurses are being held responsible and accountable for the systematic planning of
holistic and humanistic nursing care for patients and their families. Nurses are also increasingly responsible for the
continual review and examination of nursing practice (using innovative, continuous quality improvement approaches), as
well as applying basic research to finding creative solutions for patient care problems and the development of new models
for the delivery of nursing care. Increasingly, nurses will provide more primary care through community-based programs
providing health promotion and early recognition and prevention of illness. Nurses' role as patient educator is being
extended by means of multimedia programs and the Internet. At the same time nurses must assume greater responsibility
for assisting the public to become discriminating users of information as they select, sort, interpret, evaluate, and use the
vast volumes of facts available across the Internet.
Nurses still must assess, plan, carry out, and evaluate patient care, but advances in the use of information management,
information processes, and technology will continue to create a more scientific, complex approach to the nursing care
process. Nurses will have to be better equipped by their education and preparation to have a more inquiring and
investigative approach to patient care. Evidence-based nursing practice is becoming the standard. As information systems
assume more routine clerical functions, nurses will have more time for direct patient care. Accordingly, nursing must be
part of future developments in nursing informatics, with strong input regarding the decisions that shape them.
The implication is that nursing must continually reassess its status and reward systems. Presently, a nurse gains status and
financial reward by moving away from the bedside into supervisory and managerial roles. If more of these coordinating
functions are taken over by the computer, nursing must reappraise its value system and reward quality of care at the
bedside with prestige and money. Some movement in this direction is already beginning; for example, the movement
toward employment of clinical nurse specialists prepared at the master's degree level to work at the bedside. However,
currently this movement seems to be too little and too slow.
6. Summary
The role of the nurse will intensify and diversify with the widespread integration of computer technology and information
science into healthcare agencies and institutions. Redefinition, refinement, and modification of the practice of nursing will
intensify the nurse's role in the delivery of patient care. At the same time, nurses will have greater diversity by virtue of
employment opportunities in the nursing informatics field.
Nursing's contributions can and will influence the evolution of healthcare informatics. Nursing will also be influenced by
informatics, resulting in a better understanding of our knowledge and a closer link of that knowledge to nursing practice
(Turley, 1997). As a profession, nursing must anticipate the expansion and development of nursing informatics.
Leadership and direction must be provided to ensure that nursing informatics expands and improves the quality of
healthcare received by patients within the collaborative interdisciplinary venue of health informatics.
A. Introduction
Since the beginning of time, people have invented tools to help them. Tracing the evolution of computers gives us a
clearer historical vantage point from which to view our rapidly changing world. This approach also identifies informatics
as a tool that can advance the goal of high quality nursing care. From a historical perspective, however, it is difficult to
identify the true origin of computers. For instance, we could go back in time to the devices introduced by Muslim
scientists and to the mathematicians of the fifteenth century. An example is Al-Kashi, who designed his plate of
conjunctions to calculate the exact hour at which two planets would have the same longitude (De and Price, 1959;
Goldstine, 1972). A more familiar example is the first rudimentary calculating tool, the Chinese abacus. This is still a
rapid and efficient method of handling addition and subtraction.
1. The 1950s
The transistor, invented by Shockley in 1947 (WGBH, 2004), was used to develop a second generation of computers and
ultimately led to the silicon chip. Second-generation computers were smaller, produced less heat, were
more reliable, and were much easier to operate and maintain. Second-generation computers moved into the business and
industrial world, where they were used for data-processing functions such as payroll and accounting. The rapidly
expanding healthcare industry began using computers to track patient charges, calculate payrolls, control inventory, and
analyze medical statistics. During the 1950s, Blumberg (1958) foresaw the possibilities of automating selected nursing
activities and records. Little action was taken then because the existing computer programs were inflexible, computer
manufacturers had a general lack of interest in the healthcare market, and hospital administrators and nursing management
had a general lack of interest and knowledge about such equipment.
The 1960s
During the 1960s, universities were bursting at the seams as members of the post-World War II baby boom entered
college. The philosophy of "education for all" left educators searching for a way to provide more individualized and self-
paced instruction. The computer seemed to hold great promise. At the University of Illinois, Dr. Donald Bitzer was
working on a display screen that would increase the graphics resolution available on the PLATO (programmed logic for
automated teaching operations) computer system he developed.
During 1965 and 1966, the "third generation" of computers was introduced. These third-generation computers were
identifiable by their modular components, increased speed, ability to service multiple users simultaneously, inexpensive
bulk storage devices that allowed more data to be immediately accessible, and rapid development of systems.
The 1970s
The development of the silicon chip paved the way for the development of minicomputers and personal computers. The
silicon chip allowed large amounts of data to be stored in an extremely small space. This development allowed the total
size of computers to be significantly reduced. The first mass-market personal computer, the MITS Altair 8800, was sold in
1975 as a hobbyist kit; the Apple I, another personal computer kit marketed to hobbyists by Stephen Wozniak and Steve
Jobs, followed in 1976. Soon the vision of connecting computers to share information began to spread, and many
computer networks were being developed all over the world, but they could not communicate with one another because
they used different protocols, or standards, for transmitting data. In 1974, Vint Cerf (sometimes known as the "father of
the Internet"), along with Bob Kahn, wrote a new protocol, TCP (transmission control protocol), which became the
accepted international standard. The implementation of TCP allowed the various networks to connect and become the
Internet as we know it.
2. The 1980s
Personal computer technology augmented and replaced the large, cumbersome hardware of the 1970s. In 1981 IBM
debuted the PC with an Intel 8088 microprocessor, 16K of RAM, and a 5.25-inch floppy disk drive, with two choices of operating
system. The Apple Mac was launched in 1984, and in 1985 Intel introduced the 386 processor. Research and development
in computer technology was aimed at "open systems." This additional technological advance served the nursing profession
because it systematized and simplified the process of data entry, storage, and retrieval.
The 1990s
During the 1990s, information technologies, including personal computers and workstations, combined with
telecommunications technologies such as local and wide area networks to create client/server architectures. Client/server
architecture integrated and capitalized on the strengths of the hardware, software, and telecommunications capacities,
allowing users to navigate through data across many systems. These linkages were vital to breaking the barriers between
different systems. The open flow of information among systems contributed to many developments in nursing informatics.
In 1990 Tim Berners-Lee created a new way to interact with the Internet: the World Wide Web. His system made it much
easier to share and find data on the Internet. Others augmented the World Wide Web by creating new software and
technologies to make it more functional. For example, Marc Andreessen led the team that created Netscape Navigator
(Griffin, 2000).
The dawning of the twenty-first century saw pervasive digital proliferation. Digital cameras, music players, and videos
became widely available. Handheld devices of all sorts and for highly diverse purposes multiplied and became widely
available and affordable. Handheld wireless communication devices such as the BlackBerry are bringing about the
convergence of digital devices and telecommunications.
When focusing on the use of computers in healthcare, computers traditionally gained entry through the accounting area,
where most hospital computer systems still have their roots. Patient care requires continuous and instantaneous response
in contrast to the fiscal methodology where timing is less critical. To achieve successful utilization of computers in
healthcare, both needs must be addressed.
3. The 1950s
During the late 1950s a few pioneering hospitals installed computers and began to develop their application software.
Some hospitals had help from computer manufacturers, especially IBM. Then, in 1958-1959, John Diebold and Associates
undertook an in-depth feasibility study of hospital computing at Baylor University Medical Center. The final report
identified two major hospital wide needs for computerization: (1) a set of business and financial applications, and (2) a set
of hospital-medical applications that would require on-line terminals at nursing stations and departments throughout the
hospital. Such a system could be used for the following purposes:
• As a communications and message-switching device to route physicians' orders and test results to their proper
destinations
• As a data-gathering device to capture charges and patient medical information
• As a scheduler to prepare such items as nursing station medication schedules
• As a database manager with report preparation and inquiry capabilities
These functions are often collectively called hospital information systems (HISs), medical information systems (MISs),
and sometimes hospital-medical information systems (HMISs). The first term (or its acronym HIS) is used in this book.
4. The 1960s
Although a few hardware manufacturers offered some business and financial application packages for in-hospital
processing during the early 1960s, it was not until the mid-1960s that other vendors began to see the potential of the
hospital data-processing market. The hardware vendors during the 1960s (e.g., IBM, Burroughs, Honeywell, UNIVAC,
NCR, CDC) were committed primarily to selling large general-purpose computers to support clinical, administrative,
communication, and financial systems of the hospitals.
The 200- to 400-bed hospitals that installed computers during the late 1960s for accounting applications found their
environments growing more complex, which resulted in a constant battle just to maintain and update existing systems to
keep pace with regulating agencies. Many hospitals of this size turned to shared computer services.
In 1966, Honeywell announced the availability of a business and financial package for a shared hospital data-processing
center. IBM followed quickly the next year with SHAS (shared hospital accounting system). The availability of this
software was an important factor in establishing not-for-profit and for-profit shared centers during the next 5 years. There
are still many shared-service companies (e.g., SMS, McAuto) specializing in hospital data processing. These companies
continue to provide useful services, particularly to smaller hospitals. The companies prospered not only because of their
computer and systems products and services but because smaller, single hospitals were unable to justify, employ, and
retain the varied technical and management skills required for this complex and constantly changing environment.
The first hospital computer systems for other than accounting services were developed during the late 1960s. The
technology that attempted to address clinical applications was unsuccessful. Terminal devices such as keyboard overlays,
early cathode-ray tubes, and a variety of keyboard and card systems were expensive, unreliable, and unwieldy. Also,
hardware and software were scarce, expensive, and inflexible. Database management systems, which are at the heart of
good information software today, had not yet appeared. During this period, some hospitals installed computers in offices
to do specific jobs (Ball and Jacobs, 1980). The most successful of these early clinical systems were installed in the
clinical pathology laboratory (Ball, 1973). Most of the hospitals that embarked on these dedicated clinical programs were
large teaching institutions with access to federal funding or foundation research money. Limited attempts were made to
integrate the accounting computer with these stand-alone systems.
During the mid-1960s, Lockheed Missiles and Space Company and National Data Communications (then known as Reach)
began the development of HISS that would require little or no modification by individual hospitals. This was the
forerunner to the product that is now marketed by Eclipsys.
About 1965, the American Hospital Association (AHA) began to conduct four or five conferences per year to acquaint
hospital executives with the potential the computer holds for improving hospital administration. The AHA also devoted
two issues of its journal solely to data processing. These AHA activities served to crystallize a market (i.e., hospitals ready
for data processing) and to encourage existing and new firms to enter the marketplace.
5. The 1970s
During the early 1970s, with inflation problems and with cost reimbursement becoming stricter, some large hospitals that
had installed their own computers with marginal success changed to the shared service. By this time the shared companies
had improved earlier accounting software and could carry out tighter audit controls. Most importantly, however, the
companies developed personnel who understood hospital operations and could communicate and translate the use of
computer systems into results in their client hospitals. This added a dimension of service that is seldom offered or
understood by the major hardware vendors. As a result, the business opportunities for the service companies increased.
Over time, these companies have expanded their scope of services from fiscal applications and administrative services to
clinical and communication applications.
Major hardware changes took place as the minicomputer entered the scene. This was quickly followed by the introduction
of the personal computer during the late 1970s. Technologic developments have resulted in a steady trend toward
microminiaturization of computers. Personal computers are now more powerful than the original ENIAC. These personal
computers have invaded homes, schools, offices, nursing stations, and administrative offices to a degree never dreamed
possible. The linking of these personal computers, using local area networking technology, provided a better alternative to
many of the processes formerly carried out by one large general-purpose computer.
Many major mainframe hardware companies moved rapidly into the personal computer field as well. Simultaneously, the
service companies began to develop on-site networking systems to handle data, communications and specialized
nonfinancial applications. They began to expand their scope of data retention to support clinical applications that required
a historical patient database.
The 1980s
During the 1980s, specific personal computer-based systems were developed. These systems did not replace but, instead,
complemented a variety of alternatives in various healthcare environments. Thus, awareness of computer concepts by
healthcare professionals became even more essential.
The 1990s
During the 1990s, the advent of powerful, affordable, portable personal computers made information management tools
accessible to support highly mobile, remote activities, especially in community health. At the same time, the power of
networks and database technology has made possible linkages of health data in widely separated locations. There also was
a growing emphasis on information management across health enterprises. Accompanying this was recognition of the
importance of patient/client-centered, integrated data in contrast to department-focused data. Such linkages and the shift in
focus created the possibility of a longitudinal, lifelong health record encompassing healthcare encounters by individuals
with all sectors of the healthcare system.
Nursing Education
The seminal work in the use of computers in nursing education was conducted by Maryann Drost Bitzer. During the early
1960s, Bitzer wrote a program that was used to teach obstetric nursing. Her program was a simulation exercise. It was the
first simulation in nursing and one of the first in the healthcare field. Bitzer's (1963) master's thesis showed that students
learned and retained the same amount of material using the computer simulation in one-third the time it would take using
the classic lecture method. This thesis (Bitzer, 1963) has become a classic model for subsequent work by her and many
others, including two of the authors (K.J.H. and M.J.E.) of this volume. Bitzer's early findings have been consistently
confirmed. She was later project director on two Department of Health, Education, and Welfare (HEW)-funded research
projects. These projects undertook evaluative studies that documented the efficacy of teaching nursing content using a
computer. Until 1976 Bitzer was associated with the Computer-Based Education Research Laboratory at the University of
Illinois in Urbana, where she continued to develop computer-assisted instruction lessons to teach nursing.
During the 1970s many individual nursing faculties, schools, and units developed and evaluated computer-assisted
instruction (CAI) lessons to meet specific institutional student needs. Most of the software created was used solely by the
developing institution.
The use of computers to teach nursing content has been a focal point of informatics activity in nursing education.
However, the need to prepare nurses to use informatics in nursing practice is just as important. This aspect was pioneered
in 1975 by Judith Ronald of the School of Nursing, State University of New York at Buffalo. Ronald developed the course
that served as a model and inspiration for courses developed later. Ronald's enthusiasm and her willingness to share her
course materials and experiences have greatly facilitated the implementation of other such courses throughout North
America.
In Scotland, Christine Henney of the University of Dundee undertook similar activities aimed at promoting computer
literacy among nurses.
Nursing Administration
The use of computers to provide management information to nurse managers in hospitals has been promoted on both sides
of the Atlantic. Marilyn Plomann of the Hospital Research and Educational Trust (an affiliate of the American Hospital
Association) in Chicago was actively involved for many years in the design, development, and demonstration of a
planning, budgeting, and control system (PBCS) for use by hospital managers. In Glasgow, Scotland, Catherine
Cunningham was actively involved in the development of nurse-manpower planning projects on microcomputers.
Similarly, Elly Pluyter-Wenting (from 1976 to 1983 in Leiden, Holland), Christine Henney (from 1974 to 1983 in
Dundee, Scotland), Phyllis Giovanetti (from 1978 to the time of writing in Edmonton, Canada), and Elizabeth Butler
(from 1973 to 1983 in London, England) have been instrumental in developing and implementing nurse scheduling and
staffing systems for hospitals in their areas.
In the public health area of nursing practice, Virginia Saba (a nurse consultant to the Division of Nursing, Bureau of
Health Manpower, Health Resources Administration, Public Health Service, Department of Health and Human Services)
was instrumental in promoting the use of management information systems for public health nursing services. The
objective of all these projects has been to use computers to provide management information to help in decision-making
by nurse administrators.
7. Nursing Care
Much research on the development of computer applications for use in patient care was conducted during the 1960s.
Projects were designed to provide justification for the initial costs of automation and to show improved patient care.
Hospital administrators became aware of the possibilities of automating healthcare activities other than business office
procedures. Equipment became more refined and sophisticated. Healthcare professionals began to develop patient care
applications, and the manufacturers recognized the sales potential in the healthcare market.
Nurse pioneers who have contributed to the use of computers in patient care activities have been active on both sides of
the Atlantic. In the United Kingdom, Maureen Scholes, chief nursing officer at The London Hospital (Whitechapel),
began her involvement with computers and nursing in 1967 as the nurse member of the steering team that directed and
monitored The London Hospital Real-Time Computer Project. This project resulted in a hospital communication system
that provided patient administration services, laboratory services, and radiography services using 105 visual display units
in all hospital wards and departments.
Elizabeth Butler was associated with the Kings College Hospital from 1970 to 1973. As the nursing officer on a medical
unit, Butler was involved in developing and implementing the computerized nursing care plan system for the Professional
Medical Unit and for the nursing care plan system for all wards and specialties in the 500-bed general area of the hospital.
In Dundee, Scotland, Christine Henney worked with James Crooks (from 1974) on the design and implementation of a
real-time nursing system at Ninewells Hospital.
In the United States, Carol Ostrowski and Donna Gane McNeill were both associated with the development of the
Problem Oriented Medical Information System (PROMIS) at Medical Center Hospital of Vermont under the direction of
Lawrence Weed. From 1969 to 1979, Donna Gane McNeill was a nurse clinician on the PROMIS project. As such,
McNeill managed the first computerized nursing unit, developed content for PROMIS, and developed functions and tasks
for the computer. She also conducted a comparison between computerized and noncomputerized units. From June 1976 to
December 1977, Carol Ostrowski served as director of audit for the PROMIS system. She was responsible for
implementing the components that supported concurrent audit of medical and nursing care and the environment that
guided and evaluated patient care.
In the United States, Margo Cook also began her association with computers in nursing in 1970 when she was employed
at El Camino Hospital, Mountain View, California. Cook participated as the nursing representative on the team that
developed and implemented the Medical Information System (still marketed by Eclipsys). As nursing implementation
coordinator, Cook was responsible for identifying and addressing the needs of all nursing units at El Camino. Often she
functioned as interpreter between the computer analysts and nurses. Eventually she assumed senior level responsibility for
the MIS maintenance and development. In 1983 Cook left El Camino to become senior consultant of Hospital
Productivity Management Services.
In 1976 Dickey Johnson became computer coordinator at Latter Day Saints Hospital in Salt Lake City, Utah. Johnson's
responsibilities involved coordination between the computer department and other hospital users in planning,
development, implementation, and maintenance of all programs either used by or affecting nursing personnel. In 1983,
Johnson was the nursing representative on the hospital's Computer Committee, which was actively involved in planning,
designing, and implementing a hospital-wide computer system. Johnson was responsible for projects that included order
entries, nursing care plans, nurse acuity, and nurse staffing.
In Canada, from 1978 to 1983, Joy Brown and Marjorie Wright, systems coordinators at York Central Hospital in
Richmond Hill, Ontario, were actively involved in designing, coding, and implementing the computerized patient care
system at their hospital. They were also responsible for training many nurse users on the system. Beginning in 1982 at
Calgary General Hospital in Calgary, Alberta, Wendy Harper, assistant director, Nursing Systems, was responsible for all
aspects of the nursing applications on the hospital information system being installed in that hospital.
Nurses have recognized the potential for improving nursing practice and the quality of patient care through nursing
informatics. These applications facilitate charting, care planning, patient monitoring, interdepartmental scheduling, and
communication with the hospital's other computers. New roles for nurses have emerged. Nurses have formed computer
and nursing informatics interest groups to provide a forum through which information about computers and information
systems is communicated worldwide.
Kathryn Hannah, of the University of Calgary, was the first nurse elected to the Board of Directors of the Canadian
Organization for the Advancement of Computers in Health (COACH). In that capacity, with the assistance of David Shires
of Dalhousie University (and at that time program chairman for the International Medical Informatics Association),
Hannah was instrumental in establishing the first separate nursing section at an International Medical Informatics
Association (IMIA) meeting (Medinfo '80, Tokyo). Previously, nursing presentations at this international conference had
been integrated within other sections. In 1982, based on the success of this Tokyo workshop, which Hannah also chaired,
a contingent of British nurses led by Maureen Scholes mounted an International Open Forum and Working Conference on
"The Impact of Computers on Nursing." The international symposium on the impact of computers on nursing was
convened in London, England, in the fall of 1982, followed immediately by an IMIA-sponsored working conference. One
outcome of the working conference was a book that documented the developments related to nursing uses of computers
from their beginning until 1982. The second outcome was a consensus that nurses needed a structure within an
international organization to promote future regular international exchanges of ideas related to the use of computers in
nursing and healthcare. Consequently, in the spring of 1983, a proposal to establish a permanent nursing working group
(Group 8) was approved by the General Assembly of IMIA. In August 1983, the inaugural meeting of the IMIA Working
Group on Nursing Informatics (Group 8) was held in Amsterdam.
In 1992, the working group recommended a change of bylaws and began its transformation to a nursing informatics
society within the IMIA. This society continues the organization of symposia every 3 years for exchange of ideas about
nursing informatics, dissemination of new ideas about nursing informatics through its publications, provision of
leadership in the development of nursing informatics internationally, and promotion of awareness and education of nurses
about nursing informatics.
In the United States in 1981, Virginia Saba was instrumental in establishing a nursing presence at the Symposium on
Computer Applications in Medical Care (SCAMC). This annual symposium, although not a professional organization,
provided opportunities for nurses in the United States to share their experiences. In 1982 the American Association for
Medical Systems and Informatics (AAMSI) established a Nursing Professional Specialty Group. This group, which was
chaired by Carol Ostrowski, provided the benefits of a national professional organization as a focal point for discussion,
exchange of ideas, and leadership for nurses involved in the use of computers. Subsequently, AAMSI merged with
SCAMC to become the American Medical Informatics Association (AMIA). This organization continues to have a highly
active nursing professional specialty group.
9. Summary
Despite their wide usage, computers are historically young and did not come into prominence until 1944 when the IBM-
Harvard project called Mark I was completed. This was followed closely by the development, in 1946 at the University of
Pennsylvania, of the ENIAC I, the first electronic computer with no moving parts. Subsequent refinement of computer
technology, development of the silicon chip in 1976, development of the Internet, and the World Wide Web have made
personal computers as common in our homes as television sets. Handheld wireless digital telecommunications devices are
now pervasive.
During the 1950s computers entered the healthcare professions. They were primarily used for the purposes of tabulating
patient charges, calculating payrolls, controlling inventory, and analyzing medical statistics. A few farsighted individuals
saw the possibilities of automating selected nursing activities and records. However, little action was taken because of the
inflexibility and slowness of the equipment, the general disinterest of the manufacturers in the healthcare market, and the
lack of knowledge concerning such equipment among hospital management, hospital administrators, and nursing
management.
By the 1960s, hospital administrators had been exposed to the possibility of automating healthcare activities; in addition
to existing business office automation, equipment had become more refined and sophisticated, and the manufacturers had
recognized the sales potential in the healthcare market. The major focus during the 1960s was on the research aspect of
computer applications for patient care; the business applications for auditing functions in the healthcare industry were
becoming well established. Projects were designed to provide justification for the initial costs of automation and to
display the variety of areas in which computers could be used to facilitate and improve patient care. Nurses began to
recognize the potential of computers for improving nursing practice and the quality of patient care, especially in the areas
of charting, care plans, patient monitoring, interdepartmental scheduling and communication, and personnel time
assignment. These individual computer applications or modules, which were developed to support selected nursing
activities, were later integrated in modular fashion into various hospital information systems. Today these hospital
information systems are widely promoted and marketed by computer vendors.
Simultaneously, advances in the uses of computers in educational environments were initiated during the 1960s. The
major focus during this decade was on showing the efficacy of computers as teaching methods. During the 1970s, many
projects were designed to compare student learning via computer with learning via traditional teaching methods. The mid-
1970s also saw the development of the personal microcomputer and, during the latter years of that decade, its widespread
dissemination throughout society. During the 1980s nursing educators were scrambling to develop software for use with
this technology. In fact, the hardware technology has advanced beyond nursing educators' capacity to use it all.
Major contributions by nurses to developments leading to the use of computers in nursing were also discussed. Our
apologies to those nurses whose activities were unknown to us. If readers know of other nurses whose contributions merit
inclusion in future editions, the authors would be pleased to receive such information.
The future demands that computer technology be integrated into the clinical practice environment, education, and research
domains of the nursing profession. The ultimate goal is always the best possible care for the patient.
A. Introduction
1. Introduction I
Introducing the use of a stethoscope precedes learning how to listen to heart and lung sounds. To be effective in using the
stethoscope, a clinician needs to know when to use the bell and when to use the diaphragm. In the same way, it is
imperative to have some understanding of how and when to use the tool of informatics, the computer.
A complete computer system is the integration of human input and information resources using hardware and software. In
computer terms, hardware refers to objects such as disks, disk drives, monitors, keyboards, speakers, printers, mice,
boards, chips, and the computer itself. Software includes programs that give instructions to the computer that make the
machine useful. Information resources are data that the computer manipulates. Human input refers to the entire spectrum
of human involvement, including deciding what is to be input and how it is to be processed as well as evaluating output
and deciding how it should be used. In this chapter, we look at the computer, and in the next, at software.
COMPUTER MISCONCEPTIONS
When computers were new, there were many fears and misconceptions about using them. Some of these were that computers
can think, that computers require a mathematical genius to use, and that computers make mistakes (Perry, 1982). Today there are
other misconceptions, perhaps born of familiarity, which can be dangerous to users. It is important to understand that
computers cannot think, and they are not smart. Incidents like the one in which Deep Blue (the nickname given to an IBM
computer specially designed to play chess) won a chess game against world champion Garry Kasparov led to such
misperceptions. Consider the game of chess. Although there are many possible combinations, there is a given set of
moves, rules, and goals that makes it a perfect stage on which to display the potential of computers. Deep Blue is a very powerful
computer, capable of quickly analyzing hundreds of millions of possible moves and responding according to rules (known
as algorithms) that were part of its software. It used these qualities, not thinking in the human sense, to beat Kasparov.
The thought that only mathematical geniuses can use computers, although just as false, continues to flourish. This belief is
linked to the development of the first computers as a means to "crunch numbers," or process mathematical equations.
Hence, in colleges and universities, many computer departments are still housed in, or closely related to, the departments
of mathematics. It did not take experts long to translate the mathematical concepts into everyday language, an
accomplishment that made the computer available to everyone, regardless of level of proficiency in math.
The last myth, that computers make mistakes, provides a wonderful excuse for human error. This was well illustrated by a
cartoon in the early 1990s that showed a man saying, "It's wonderful to be able to blame my mistakes at the office on the
computer, I think I'll get a personal computer." Computers act on the information they are given. As one humorist said,
"Computers are designed to DWIS, or Do What I Say" As many a user will tell you, they resist with great determination
any inclination to DWIM, or "Do What I Mean!" Unlike a colleague to whom you only need to give partial instructions
because the person is able to fill in the rest, a computer requires complete, definitive, black-and-white directions. Unlike
humans, computers cannot perceive that a colon and semicolon are closely related, and in many cases, a computer
believes that an uppercase letter and a lowercase letter are as different as the letter A is from the letter X. This is known as
case sensitivity. There are no "almosts" with a computer.
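A short Python sketch illustrates how literally a computer compares text; the drug-name strings here are hypothetical examples used only for illustration.

# A minimal illustration of case sensitivity; the values are hypothetical.
drug_ordered = "Heparin"
drug_entered = "heparin"

# To the computer these two strings are simply different; it does not "know" they name the same drug.
print(drug_ordered == drug_entered)                   # prints False

# The programmer must explicitly tell the computer to ignore case.
print(drug_ordered.lower() == drug_entered.lower())   # prints True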
COMPUTER CHARACTERISTICS
A computer accomplishes many things that are impossible without it. When programmed properly, it is superb in
remembering and processing details, calculating accurately, printing reports, facilitating editing documents, and sparing
users many repetitive, tedious tasks, which frees time for more productive endeavors. Remember, however, that computers
are not infallible. Being electronic, they are subject to electrical problems. Humans build computers, program them, and
enter data into them. For these reasons, many situations can cause error and frustration. Two of the most common
challenges with computers are "glitches" and the "garbage in, garbage out" (GIGO) principle. That is, if the data entered
into the computer are faulty ("garbage in"), then the output produced from those data will be faulty as well ("garbage out").
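The following small Python sketch makes the GIGO principle concrete; the dose function, the 10 mg/kg figure, and the weight values are hypothetical teaching examples, not clinical guidance.

# GIGO illustration: the calculation is performed faithfully both times,
# but a data-entry error (pounds recorded as kilograms) produces a meaningless result.
def dose_mg(weight_kg, mg_per_kg=10.0):
    # Assumes the weight really is in kilograms.
    return weight_kg * mg_per_kg

correct_weight_kg = 70.0   # the patient actually weighs 70 kg
garbage_weight = 154.0     # the same patient, but pounds were entered by mistake

print(dose_mg(correct_weight_kg))  # 700.0 mg (sensible output)
print(dose_mg(garbage_weight))     # 1540.0 mg (garbage in, garbage out)

The computer did exactly what it was told in both cases; only the quality of the input differed.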
Anyone who has been using a computer when it crashed or "went down" may have experienced a guilty feeling that she
or he did something wrong. Even if the person's action appeared to trigger the crash, unless she or he was purposely engaged in
something destructive, that person did not cause it; she or he just found a flaw in the system that was inadvertently
created by the programmer(s). There are times, however, when crashes occur for seemingly no reason. Computers,
regardless of their manufacturer, will at some time, for unknown reasons, perform in a totally unexpected manner (Perry,
1982). This is as true today as when computers were new. The good news is that, despite the complexity of today's
computers, this is much less apt to happen.
The attitudes people have toward computers range from complete dislike and frustration to curiosity and excitement.
Although the mass media and personal acquaintances convey both perceptions, people seem to remember the negative
points more clearly. As with all new experiences, becoming acquainted with a computer or learning a new computer
application can produce anxiety. When computers in healthcare were fairly new, many studies were done that looked at
the attitude of nurses toward computers (Focus, 1995; McBride & Nagle, 1996; Simpson & Kenrick, 1997). Today, these
studies look at the attitudes and anxiety toward the electronic health record (Chan, 2007; Dillon, Blankenship, & Crews,
2005). It is highly likely that the same types of anxiety were present when the general population found it necessary to
learn to drive. There are, however, no known studies of this phenomenon dating from that time (Box 2-1).
Addressing these fears takes time for both a trainer and the individual experiencing the fears. One-on-one sessions for the
person affected may be necessary and save time in the long run by preventing frantic calls to the help center. Studies show
that the learning patterns of those afraid of computers can be improved by treating the bodily symptoms of anxiety and
providing distracting thought patterns (Bloom, 1985). Techniques such as teaching relaxation methods before starting any
hands-on training often help, as does giving the anxious trainee something to repeat internally, such as, "You're in
control, not the computer."
Other helpful techniques include recognizing and accepting fear. One method is to have trainees check off from a list of
possible feelings (e.g., panicky, lost, curious) those that they are feeling, a practice that can help them face their fears.
Inherent in all these terrors is the fear of failure and of looking incompetent in front of their peers. This may be especially
evident in people who see themselves as having a high degree of competence in their profession and to whom people look
for answers. Therefore, placing themselves in a learning situation can be very threatening to their self-image.
TYPES OF COMPUTERS
The progress in computers is measured by generations, each of which grew out of a new innovation. Computer sizes vary
from supercomputers intended to process large amounts of data for one user at a time to small palmtop computers. Each
type has its niche in healthcare. However, it is becoming increasingly difficult to classify the different types of computers,
because smaller ones take on the characteristics of their bigger brothers as the amount of space needed for processing
lessens.
Supercomputers
Technically, supercomputers are the most powerful type of computer, if power is judged by the ability to do numerical
calculations. Supercomputers can process hundreds of millions of instructions per second. They are used in applications
that require extensive mathematical calculations, such as weather forecasts, fluid dynamic calculations, and nuclear
energy research. Supercomputers are designed to execute only one task at a time; hence, they devote all their resources to
this one situation. This gives them the speed they need for their tasks.
Mainframes
The first computers were large, often taking up an entire room. They were known as mainframes and were designed to
serve many users and run many programs at the same time. These computers formed the backbone of many hospital
information systems. Users worked at what were technically called videotext terminals, often referred to as "dumb
terminals." A videotext terminal consisted of a display screen, keyboard, and modem or device that connected it to the
mainframe. Information was entered on the keyboard and transmitted to the mainframe, which was located somewhere
else, often in the basement in a secure, temperature-controlled room. Any processing done to the information was done by
the mainframe, which returned the results to the screen of the videotext terminal. This is why you still find the IS
departments in the basement of hospitals.
Minicomputers
As computers became more powerful, their size was reduced. The same work done by a mainframe became amenable to
being accomplished on what was termed a minicomputer. Minicomputers were like mainframes (i.e., they were multiuser
machines that originally served videotext terminals), but they were smaller and less costly. Unlike larger mainframes, they
did not require a special temperature-controlled room, and they were useful in situations with fewer users. As computers started
to be built in many sizes, it became impossible to classify one as a mainframe or another as a minicomputer; these terms
are seldom used today.
Servers
The functions served by mainframes and minicomputers today are performed by computers of varying sizes, which are
referred to as servers. The functions performed by servers and their powers are as varied as the needs of the users. These
functions vary from servers that operate in the mainframe/videotext format, especially in hospital information systems, to
those that are just a repository for files (a user-created item) that are "served" to users who have software that does the
processing on their personal computer (PC). Some servers in the middle of this continuum may also store the programs
that a user needs and provide them as needed by the user.
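As a small illustration of the file-serving end of this continuum, the Python sketch below (which uses only the standard library; the port number 8000 and the current directory are arbitrary example choices) runs a minimal server that simply hands files to clients, leaving any processing to the client's own software.

# A minimal file-server sketch: the server only hands out files from the current
# directory; the client (for example, a web browser) does the processing and display.
# The port number 8000 is an arbitrary example choice.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving files at http://localhost:8000 (press Ctrl+C to stop)")
server.serve_forever()

Opening http://localhost:8000 in a browser on the same machine retrieves a file or directory listing; the server itself does nothing with the file's contents.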
Thin Clients
Thin clients are today's version of the videotext terminal. These are computers without a hard drive and with limited, if
any, processing power. Besides costing much less, thin clients do not need to be upgraded when new software is made
available because they do not contain any applications. Because they do no processing, older PCs can function in this
capacity instead of being retired.
Personal Computers
PCs are designed for one individual to use at a time. PCs are based on microprocessor technology that enables
manufacturers to put an entire processing (controlling) unit on one chip, thus permitting the small size. When PCs were
adopted in business, they freed users from the resource limitations of the mainframe computer and allowed data
processing staff to concentrate on tasks that needed a large system. Today, although PCs are capable of functioning without
being connected to a network, in businesses, including healthcare, they are usually networked to other PCs or
servers. They still process information, but when networked, they can also share data. In information systems, PCs often
replace the old videotext terminal and handle the tasks of entering and retrieving information from the central computer or
server, although thin clients may be used for this purpose instead. When a full PC is available on the unit, personnel can
use application programs such as word processing. PCs are available in many different formats such as a desktop or tower
model, laptops, and tablet computers.
The original PC was a desktop model. You are probably familiar with these. They were placed horizontally on a desk and
a monitor was placed on top of them. As the use of computers became more common, users were reluctant to give up desk
space to their computer, and computers that stood upright, known as towers, were developed. With these, only the monitor
needed to be on the desk, while the computer itself stood on the floor, often under the desk.
Laptops/Notebooks
As computer usage became popular, people found that they needed the files and software on their computer to accomplish
tasks away from their desks. The first computer that made this practical was really more transportable than portable.
Developed in the 1980s, it was about the size of a desktop and had a built-in monitor. Toward the end of the 1980s, the
transportables were replaced by true portables: laptops, or computers small enough to fit on one's lap. As technology
continued to place more information on a chip, laptops became smaller. They are now referred to as notebook computers.
Some healthcare organizations use notebook computers for point-of-care (POC) data entry at the bedside or anyplace
where care is delivered.
Notebook computers do have some drawbacks. The screen is usually smaller than the one in a desktop, and the resolution
may not be as crisp. Keyboards are also smaller. The mouse, or pointing and selecting device, can be a button the size of a
pencil eraser in the middle of the keyboard, or a small square on the user end of the keyboard, or a small ball embedded in
the keyboard. Some users purchase docking stations for their notebook computers. The computer can be placed in the
docking station (may be called a port replicator, or notebook extender), which is connected to hardware such as a larger
monitor, keyboard, and regular mouse. This enables the user to have access to these devices when at their desk, but makes
it easy to remove the computer and enjoy its portability, albeit without the hardware connected to the docking station.
Again, as the processing power and storage capacity of computers increased, there was a demand for mobile computers
that were more versatile than laptops. The subnotebook and tablet computer are some of these, as are personal digital
assistants (PDAs).
2. Introduction II
Hardware is the term that describes the physical pieces of the computer, commonly grouped into five categories.
1. Input - Data must be placed into the computer before the computer can be useful.
2. Memory - All data processing takes place in memory.
3. Central processing unit (CPU) - This is the "brain" of the computer, which coordinates all the activities and does the
actual data processing.
4. Storage - The data and programs can be saved for future use.
5. Output - Processed data are of little value to people unless they can see the data in some form.
Hardware can be considered the anatomy of a computer, its physical, mechanical portion.
Software is the term that describes the nonphysical pieces. It can be grouped into two categories.
1. Operating system - This is the collection of standard computer activities that need to be done consistently and reliably.
These processes are the building blocks for computer functions and programs. Well known examples include Microsoft
Windows, Apple OS X, and the open source Linux flavors.
2. Application programs - These are packages of instructions that combine logic and mathematical processing using the
building blocks of the computer. Programs are what make computers valuable to people by transforming raw data into
information. Examples of applications include spreadsheets, word processors, games, photo managers, and so on.
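As a small illustration of how an application turns raw data into information, consider the Python sketch below; the temperature readings and the 38.0 C fever threshold are hypothetical examples.

# A minimal sketch of what application software does: turn raw data into information.
# The temperature readings and the 38.0 C fever threshold are hypothetical examples.
raw_readings_c = [36.8, 37.1, 38.6, 39.0, 37.4]   # raw data: one shift's temperature readings

average_c = sum(raw_readings_c) / len(raw_readings_c)
febrile = [t for t in raw_readings_c if t >= 38.0]

# Information: summarized, meaningful output that a nurse can act on.
print(f"Average temperature: {average_c:.1f} C")
print(f"Febrile readings: {len(febrile)} of {len(raw_readings_c)}")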
Software can be considered the physiology of a computer, the instructions that make its anatomy function properly. These
pieces of a computer are described later in more detail, but it is important to have a mental picture of a "computer" before
we proceed. It is also helpful to understand some computer terminology (or jargon) that often overwhelms or confuses.
Computer technology has had an explosive growth during the past several decades. The large computers that used to fill
their own special purpose rooms have in many cases been replaced by computers small enough to fit on a desk ("desktop"
model), on one's lap ("laptop" model), or in the palm of one's hand ("palmtop" computers). This trend is expected to
continue.
B. Hardware
1. Input
Keyboard
The keyboard is the primary data entry tool for every computer. Commonly, keyboards consist of an alpha layout
formatted like a typewriter. Most have an accounting/calculator type of numeric pad to the right of the alpha layout for easy
financial information entry. Additionally, there is a numeric/symbols row across the top of the alpha layout, very similar to
a typewriter.
Between the alpha and the calculator/numeric pad on a keyboard is a set of page command keys. These usually control the
page, such as Insert, Delete, Home, End, Page Up, and Page Down. In this general area of the keyboard also reside the
directional arrow keys that help you to navigate around the page of your document or website.
Across the very top of the keyboard is a row of Function Keys, usually designated as such with labels such as F1, F2, F3,
etc. These keys are used to accomplish specific functions within the context of each software program. When using a new
software package, it is best to read the Commands & Function Keys pages of the manual to learn how the F Keys work
inside of the particular program. And finally, there are keys along either side of the alpha layout which, similar to a
typewriter, help to navigate around the document. The Tab key helps to insert indents; the Caps Lock key will put you into an
all-capitals mode; the Backspace key will simultaneously back up your cursor and erase the text you just typed; the Shift key
temporarily produces the capital or upper character of a key; and the Enter key, also known as a hard return,
designates a deliberate line space.
Mouse
The mouse, or clicker as it is known in some places, is the small, palm-sized device created by the industry to help the user
shortcut command keys on the keyboard and to navigate around computer software, an Internet page, or a document.
Clicking is the primary function of the mouse. The user simply points and clicks on whatever he or she wants to see. For
files and software documents, a double click is usually required to open the item. Most mouse designs have a
left button, a right button, and a roll (scroll) button on the top center of the mouse. The left button is for the click and double-
click functions, and the right button typically opens a context menu with options such as copy and paste in Internet pages and software screens.
The roll button on the top and center of your mouse is for quick scrolling through a page. This saves the operator grief
when using the scroll bar on the right side of the monitor. Some scroll bars can be awkward. It is also helpful when
copying large amounts of text that require rolling the page down, since it makes it easier to copy and scroll without having
to use the side scroll bar.
2. Process
CPU
The CPU of a computer is often referred to as the brains or the brain center. Technically, the term CPU is an abbreviation
for central processing unit. Known among computer geeks as "the box," the metal housing that contains the CPU usually
also includes floppy drives, hard drives, and other peripherals.
The CPU sits on what is known as the motherboard of the computer - the central board which holds the processor, all of the
computer's RAM [random access memory], and related components such as cooling fans, resistors, and other
electronic parts that cause the computer to function properly. The computer's RAM size determines the size of the programs
the computer can run. Generally, a software package will state a RAM requirement - that is, the amount of RAM necessary
to operate the program. A computer operator must check the software's RAM requirement against the system's known
size. Most programs today require no more than 512MB [megabytes] of RAM to operate.
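That check is simple arithmetic. The rough Python sketch below shows the idea; it relies on the third-party psutil package, and the 512MB figure is only an example requirement, not a quote from any particular product.

# A rough sketch of comparing a stated RAM requirement with the system's installed RAM.
# Requires the third-party psutil package; the 512 MB figure is only an example.
import psutil

REQUIRED_MB = 512                                             # RAM quoted on the software package
installed_mb = psutil.virtual_memory().total / (1024 * 1024)  # total physical memory, in MB

if installed_mb >= REQUIRED_MB:
    print(f"OK: {installed_mb:.0f} MB installed, {REQUIRED_MB} MB required")
else:
    print(f"Insufficient RAM: {installed_mb:.0f} MB installed, {REQUIRED_MB} MB required")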
Also contained on the motherboard is the computer's ROM [read-only memory], another essential component of the
computer's brains. The ROM stores the program or programs which boot up [or start up] the computer. If the ROM on the
motherboard is damaged, the computer will not start up correctly, and possibly not at all.
Memory
The two basic types of memory, ROM and RAM, were defined earlier. Generally, a computer has a sufficient amount of
ROM built in by the manufacturer. ROM is preloaded with the low-level logic and processes needed to start the computer
when its power is turned on (a process called "booting up"). Most computers also have a starting amount of RAM
preinstalled. RAM can be purchased separately, though, and installed as needed. Application programs are loaded, when
called for, into RAM. The program executes there and stores information in other parts of RAM as is needed. Today,
application programs have growing RAM requirements as more logic and functions are packed in them.
Processors
There are several types of CPU chips. In the personal computer world, the Intel Corporation is probably the most
recognized manufacturer with, first, its 80x86 series of CPU chips (i.e., 80386, 80486, ...) and then its Pentium chip (i.e.,
Pentium 4 or P4) series. In the large computer world, IBM (International Business Machines) is probably the most
recognized name.
One measure of CPU processing capacity is called MIPS (millions of instructions per second). Although not a totally
accurate measure, it is useful to see the growth of processing capacity over time. Intel's 80386 chip, produced in 1985,
was rated at 5 MIPS. Intel's Pentium chip, introduced 8 years later in 1993, was rated at 100 MIPS, about 20 times faster.
Intel's Pentium 4 chip, made 7 years later in 2000, was rated at 1700 MIPS.
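A quick calculation in Python, using only the MIPS figures quoted above, shows where those comparisons come from.

# Comparing the MIPS ratings quoted above to show the growth in processing capacity.
mips = {"80386 (1985)": 5, "Pentium (1993)": 100, "Pentium 4 (2000)": 1700}

print(mips["Pentium (1993)"] / mips["80386 (1985)"])      # 20.0  -> about 20 times faster
print(mips["Pentium 4 (2000)"] / mips["Pentium (1993)"])  # 17.0  -> 17 times faster again
print(mips["Pentium 4 (2000)"] / mips["80386 (1985)"])    # 340.0 -> 340 times the 80386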
All CPUs have three basic elements: a control unit, an arithmetic logic unit (ALU), and an internal memory unit. The
ALU performs all the mathematical operations, the control unit determines where and when to send information being
used by the ALU, and the internal memory is used to hold and store information for those operations. The CPU has an
internal system clock that it uses to keep everything in synchronized order. The clock's speed is described in terms of
frequency, using megahertz (MHz) or gigahertz (GHz), so a CPU might be described as having a clock speed of 450 MHz
or 2.4 GHz. Generally, the faster the clock, the faster the CPU can process information.
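In the same spirit, the clock speeds mentioned above can be compared directly, because MHz and GHz are simply multiples of cycles per second; this is only an arithmetic sketch, and clock speed alone does not capture real-world performance.

# Clock speeds expressed as cycles per second, using the example frequencies above.
mhz_450 = 450 * 10**6     # 450 MHz = 450,000,000 cycles per second
ghz_2_4 = 2.4 * 10**9     # 2.4 GHz = 2,400,000,000 cycles per second

print(ghz_2_4 / mhz_450)  # about 5.3 -> the 2.4 GHz clock ticks more than five times faster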
3. Storage
The memory of a computer is not the place to store information and programs for a long time. ROM is read-only
(unwritable) and therefore of no use for saving your own work. RAM holds programs and information, but only so long as the computer is
turned on; once it is turned off, all information in RAM is gone. Therefore, other means are used for long-term storage, the
most common technologies being magnetic, optical, and special nonvolatile memory.
• Hard Drive is the computer's primary data storage unit. The hard drive generally holds all of your programs and all of
your created, downloaded and saved data.
• A flash drive is a small, portable drive that can hold as much as a small internal hard drive. Most flash drives are
the size of a small cigarette lighter and fit in the palm of your hand. They come in several popular sizes, most commonly:
512MB; 1GB; and 2GB. They range on average from $19.95 to $99.95 and are popular among the jet set, executive
crowd. Memorex gives theirs a standard dog-tag styled neck rope similar to company security passes that makes it easy to
carry and access.
Flash drives have largely replaced the older floppy disks; they are easier to use and they hold a lot more information.
They are quickly inserted, accessed, and removed from most small and personal computers. Older computers have their
USB ports in the rear of the CPU [where the power cord plugs in], while the newer computers have the USB port in the
front, next to the CD drive. The USB port is what the flash drive plugs into directly.
The cool thing about the flash drive, other than its portability, is that when you plug it in you do not have to go searching
to find it in the computer. The USB port on any computer will automatically prompt the computer to pull up the flash
drive for the user. From that point, you simply click and select your file to use or save.
• CD drives read documents saved on CDs [compact disk]. A CD will hold up to 700MB of information on it. Most CDs
are not erasable, so they do not yet have the flexibility of the earlier floppy drive. However, the CD is a more reliable
storage tool because it is sealed into a digital format. Unless you break your CD, the information is relatively safe.
Most computer programs and teaching programs are sold on CD. When you buy a new computer program, you install
it from the CD, and the CD serves as your backup copy. If your computer has a CD burner in it, then you can also
make CDs of anything you create. You create the file, save it, and burn it into a new CD. Then it is available to share with
others on their computers.
• Floppy Drive is an older type of information-saving device that is still occasionally in use. Generally, a floppy disk is a 3.5-inch disk encased in plastic with a sliding access panel through which the computer reads the information. Users insert a floppy disk into the floppy drive, usually located on the front panel of the CPU, and save data onto the floppy for use at their convenience and in other locations.
Today, the CD drive and the flash drive are replacing the floppy drive. However, there are still quite a number of small
businesses and personal computer users which rely on floppy drives to save their creations and company information.
• Removable Disk Drive is the same kind of device as a hard disk drive with similar storage capabilities. The difference is
that the magnetic storage media can be removed and replaced, just as with a floppy disk.
• Tape describes a medium that can magnetically encode a lot of information. In many ways, tape in a computer system is
used like audio tape. Computer tape is typically used to store a copy of important information, to be recovered in case of a
major problem with the computer. Tape is packaged in various ways, from large reels to small cartridges. Tape is reusable.
4. Output
Monitor is the visual display screen for the contents of the computer and the data being worked on or viewed. It is similar
to a television screen and it displays the computer's content at a comfortable brightness and resolution.
The older monitors are large and bulky, like a small office machine. Jokingly referred to by geeks as "boat anchors", the older, large monitors are on their way out of popular use. The newer monitors are similar to the newer televisions in that they are lightweight and slim. Generally, the newer monitors weigh under 3 pounds, are 4 inches or less in thickness, and are referred to as flat screens. The most common monitor sizes used in small business and home computer operations are 15", 17" and 19" screens.
Monitors can be adjusted in software to accommodate the operator's comfort level for viewing. The lower the resolution selected, the larger the lettering appears on the screen. Most monitors can be adjusted for a resolution between 800 x 600 pixels [pixels are the tiny picture elements that make up the screen image] and 1280 x 1024 pixels. There are two reasons to adjust the resolution:
1. Size and/or use of the monitor - larger screens are usually run at higher resolutions, and some programs, such as games, work best at a particular resolution.
2. To make the user more visually comfortable - a lower resolution causes the letters on any screen to appear larger.
To find the adjustment screen on a Windows XP computer, click the Start button at the bottom left of the Task Bar, go to Settings, and then click Control Panel. This opens the Control Panel's selection of options. Double-click Display, and you will open the Display Properties. From this point, select the Settings tab. On the left side of this screen, about midway down, you will see a slide bar that allows you to adjust the resolution. Slide it to smaller or larger, according to your need, and then click Apply. From there you can close the Control Panel, and your computer will automatically update according to your changes.
Visually impaired individuals now have options for their computer use that were not available 10 years ago. Included in these options are:
• Screen readers - which read the information displayed on the screen aloud to the visually impaired user;
• Text readers - which read aloud text typed at the user's keyboard; a text reader also reads aloud emails and common word-processed documents; and
• Screen magnifiers - which magnify the content of the screen from 2X to 16X its original size.
The monitor is the most common way a person sees the information and instructions on a computer. Historically, on desktop computers the monitor looks like a television screen and uses the same display technology as a television. On laptop computers ("small enough to fit in your lap"), the monitor is a flat screen that uses liquid crystal display (LCD) technology. This LCD technology is increasingly being used in desktop monitors as well. Some other names for the monitor are VDT (video display terminal), CRT (cathode ray tube), screen, and display.
• Printers and plotters are two ways by which the computer can put the processed information, such as a report or a chart, onto paper for people. The most common output device in offices is the laser printer, capable of putting either text (e.g., a report) or graphics (e.g., a chart) onto standard-size paper.
C. Software
There are (5) five basic types of operating systems (OS) used in small office and home computer systems today. They are:
1. DOS - Disk Operating System. An early command-line operating system that ran most home computers before graphical systems took over.
2. MS-DOS - Microsoft's version of DOS. The early versions of Microsoft Windows were built on top of MS-DOS.
3. UNIX - A widespread operating system used primarily in businesses. Originally developed by engineers at Bell Labs in the late 1960's.
4. Linux - A derivative of the UNIX system, developed into a more flexible system in the early 1990's. This is an up-and-coming OS that is starting to take hold in the marketplace; to learn more, go to: http://www.linux.org/.
5. The Mac OS series - Used by owners of Apple's Macintosh series of computers, a.k.a. Apple or Mac.
Although operating systems have experienced growth, transitions, and transformations since the early days of the 1960's, when they began to develop in competitive avenues of the marketplace, those listed above dominate the marketplace today.
Because GUIs are easy to use, most healthcare information systems use a GUI operating system, usually a version of
Microsoft Windows. For this reason, as well as the computer fluency demanded of healthcare professionals, it is
appropriate to review some basics of using Windows. When a PC is turned on and finishes booting (often a three- to five-
minute process!), the screen that appears is called the desktop. The desktop has icons representing many of the application
programs that are installed on the computer. Below these icons is the name of the application that the icon represents.
When "clicked" (putting the mouse pointer on the icon and left clicking), the program the icon represents will open.]
OPENING A PROGRAM IN A PC
As with all things with Microsoft Windows, there are several ways to start a program when using a PC. Clicking the icon
on the desktop, mentioned earlier, is often used when the computer is first turned on. If an icon for the desired program is
not on the desktop, the user should click the Start button, then click All Programs and click the name of the program on
that list. The Start button, so called because it "starts" processes, is located in the lower left corner of the screen. In
Windows Vista, the start button is the Windows logo surrounded by a blue circle. Some programs that are together in a
suite such as Corel Office, Lotus SmartSuite, or Microsoft Office may require that the folder containing all of them be
opened, then the program selected from the secondary menu contained in the folder. In XP, this will be indicated by a black triangle mark pointing to the right. In Windows 7, a container of more programs will be shown as a folder icon, as seen in Figure 3-1. In either case, clicking the container will show the programs within that folder, and you can make your selection from there.
All programs will have a symbol, which represents the mouse pointer, the shape of which may vary according to the task
and the program. In some cases it can be changed by the user. In word processing software it is often a vertical "I" bar; in
spreadsheets it may be a large plus sign; and in a presentation or graphics program, an arrow. Although many choices on
the top of the screen may be represented by icons, if you rest the mouse pointer on an icon, after a few seconds words
describing what the icon is will appear.
MULTIPLE WINDOWS
One of the advances of GUIs over DOS included the ability to have multiple programs open at the same time and to easily
move data between them. This makes it possible to copy a graph from a spreadsheet into a word processor.
Looking at the task bar at the bottom of the screen tells you how many instances of each Microsoft program are open, and
any other programs that are open. When so many programs are open that there is no room on the task bar, multiple
instances of the same program will be grouped under one icon meaning that you will have to click that tab and select
which file you wish to use for that program.
Navigating Between Programs and Files
One of the hardest things to grasp for new users of GUIs is that not only is it possible to have many different windows
open at one time, but using this option is an aid to productivity. As mentioned earlier, when you have more than one
program or file open, the program name will be on the task bar, which is at the bottom of the screen. If this line is not
visible, place your mouse pointer on the bottom of the screen, and it will appear.
Part of the left side of the task bar is the Quick Launch bar to which you can drag icons either from the desktop or the
program list for programs that you frequently use. Then instead of using the desktop or All Programs from the Start button
to open a program you can click the icon in the Quick Launch bar that represents the program you wish to open. If you
can't remember what the icons represent, rest your mouse pointer a second or two on it and the name of the program will
appear.
When a window is not maximized, you can place your mouse pointer on the sides of the window until it becomes a double-headed arrow, then drag the side to change the size of the window. To move a window, place the mouse pointer in the title bar, or
the top line of the window that has the name of the program and often of the file itself, and drag the window wherever you
want it. If the windows overlap, the window that is the active window, or the window in which you are working, will be
on top. Clicking in a window makes it the active window. If the windows are side by side, the active window is the one
where the mouse pointer is, although sometimes the title bar will be a little darker in color in the active window. Placing
windows side by side or one on top of the other can be very useful when working on two files in the same program, or
needing to synchronize files from different programs.
EXITING WINDOWS
In the earlier sections we have discussed how to close a file on which you are working, and how to close a program.
Before shutting down a computer, it is necessary to exit Windows. Although invisible to users, Microsoft Windows, like
all operating systems, actually works very hard behind the scenes to provide its many functions, as do many of the
application programs. To do this, application programs and Microsoft Windows have to have easy access to specific
information. Often, this is accomplished by creating files on the hard disk of the information that the application needs to
use. These files are temporary and unknown to the users. They are created, changed, and deleted as users change what
they are doing. Before quitting Windows, the application programs need to be able to shut these files down. If users do not
exit all programs and close Microsoft Windows properly, these temporary files are left on the disk and can cause problems
as well as fill up your hard disk. To exit Windows and shut down your computer, click the Start button in the lower left
corner of the screen, select Shut Down from the menu and let the computer go through the shutdown process, which may
take a minute or more. If, during the time you have used the computer, a program or the operating system has downloaded some updates, you may see a message that files are being updated and that you should not turn off the computer. The message may or may not add that the computer will turn itself off when the installation is complete, but in most cases this will occur.
3. Types of Software
• Application Programs - are packages of instructions and operations that take raw data and process them into information.
Applications focus on working with people to produce information that is important to them. Some examples of
applications are word processing, spreadsheets, and desktop publishing.
• Graphical User Interface (GUI) Software - is a special type of software in common use today. It can be part of the
operating system software, or it can be a complete application program on its own; at times, a GUI (pronounced "gooey")
seems to straddle the line between operating and application software. The basic design of any GUI is that it stands
between the operator of the computer and the computer itself and acts as the go between. Any GUI has two primary goals:
(1) to shield the operator from needing a great amount of technical knowledge to use the computer effectively and
correctly; and (2) to give a consistent "look-and-feel" to application programs (if they are designed for it).
Accomplishing the first goal means that an operator can perform all necessary technical tasks (e.g., copying data files
between disks, backing up information) by pointing at icons (small pictures) on the screen. These icons represent tasks
that can be done. For example, by pointing to an icon that represents a desired data file and then dragging that icon over
onto another icon that represents a printer, a person can print the file.
Note that because of the GUI's capabilities, the person did not have to know the correct operating system commands to
print the file. Accomplishing the second goal means that any application program can be designed so it is less difficult for
a person to learn how to use it. Basically, the GUI defines a standard set of functions that it can provide (e.g., open a data
file, save a data file, print a data file) and gives standard ways for application programs to use these functions. If
application programs are designed and built to use these functions, a person has to learn only once how to open a data file.
Any other program that uses the GUI functions has the same "look-and-feel"; that is, a person can open a data file in the
same manner. Doing things in a consistent, predictable way not only reduces a person's learning time but increases a
person's comfort level and productivity.
• Databases and Relational Database Management Systems - A database is a data file whose information is stored and
organized so it can be easily searched and retrieved. A simple analogy is a filing cabinet drawer.
The difference between a file and a database is the same difference between a file drawer that has reports dumped into it
in any old way and a drawer that has neatly labeled file folders, arranged in meaningful order, with an index that shows
where to store a report. In both cases, we know the information we need is in the file drawer; only in the second case (i.e.,
the database) we are confident that we can find that information quickly and easily.
A database management system (DBMS) is a set of functions that application programs use to store and retrieve
information in an organized way. Over the years, various ways to organize information have been used (e.g., hierarchical,
network, indexed). The way it is used most frequently now is called relational. A relational DBMS stores information in
tables (i.e., rows and columns of information). This approach allows powerful searches to be done quite easily.
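To make the idea of tables, rows, and columns concrete, here is a minimal sketch in Python using the built-in sqlite3 module; the "patients" table and its columns are made up purely for illustration and are not taken from any real healthcare system.

```python
import sqlite3

# A relational DBMS stores information in tables (rows and columns).
# The "patients" table and its columns are hypothetical examples.
conn = sqlite3.connect(":memory:")   # a temporary, in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, unit TEXT)")
cur.executemany(
    "INSERT INTO patients (name, unit) VALUES (?, ?)",
    [("A. Reyes", "3 West"), ("B. Santos", "ICU"), ("C. Cruz", "3 West")],
)
conn.commit()

# A search is simply a query against the organized table.
for (name,) in cur.execute("SELECT name FROM patients WHERE unit = ?", ("3 West",)):
    print(name)

conn.close()
```

Because every row follows the same column structure, the same kind of query can answer many different questions without reorganizing the "drawer."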
• Operating systems - are the basic control programs for a computer. All the basic logic required for using a computer's
hardware, such as the monitor, the printer, and the hard drive, is contained in the operating system. Because the operating
system handles those computer parts, it is unnecessary for application programs to do so. An example in the personal
computer world is Microsoft Corporation's Windows operating system.
1. Computer Terms I
The terms listed in this lesson are meant to be beginner's terms only. References for more advanced study of terms and computer jargon are listed at the end of the lesson. Included in some of the terms' definitions are helpful information and tips, particularly for those terms that require more detail.
Backup
A saved copy of your computer hard drive, temporary files or current project. Generally, a backup is a digitally saved copy
[on CD] of everything on your hard drive. Programs such as Ghost will also handle this task automatically although most
home operators perform the task manually themselves. It is essential to anyone who seriously works on computers to
backup their work. Computers crash, accidents happen, files get erased by the unsuspecting, and work can be forever lost
without a backup.
Boot up
To start up your computer is to boot up your computer. If your computer crashes you may need to use your boot disk, the
one that came with your computer system, in order to restore your system.
Bug
A bug is an error in programming that was overlooked. Bugs are the primary reason for updates that have to be
downloaded from the internet to keep your system up to date and running smoothly.
Burn
To copy your file or information onto a CD is to burn a CD. This requires a CD burner since not all CD drives have
burners in them.
Bytes
Eight bits is a byte. Each bit is designated by a 1 or zero; this is the basis of all written computer code. Technically, there
are differences, but to the novice, the easy way to think of it is that a byte is most commonly a letter, punctuation or space.
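As a small illustration of this, the following Python sketch shows that a single letter is typically stored as one 8-bit byte:

```python
# One byte is eight bits; each bit is a 1 or a 0.
letter = "A"
value = ord(letter)             # the numeric code for "A" (65)
bits = format(value, "08b")     # the same value written as eight bits
print(letter, value, bits)      # A 65 01000001

# A short word saved as plain text uses one byte per character.
print(len("nursing".encode("ascii")), "bytes")   # 7 bytes
```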
CD
Compact disk; a small, plastic disk which retains information in a digital format [there are a variety of digital formats
which can be recorded on a CD]. The CD superseded the floppy disk because it encases the saved information in a hard
plastic that is impenetrable without excessive force. It is virtually weatherproof and cannot be easily destroyed.
Chat Room
An online forum in which people [usually adults although some young adult forums exist] can talk to one another
instantly via a login and password system. Most chat rooms cover specific topics: consumer advice or advocacy; dating;
politics; religion; technical information; and others.
Crash
For the computer to crash is to have it suddenly stop working and refuse to restart. Indications of this are that you cannot
get it to boot up, you cannot get it to reboot, you cannot access your files, or it will not give you a prompt in DOS.
Usually, if your system crashes, the information on your hard drive is gone forever unless it is backed up somewhere.
Doc File
A doc file is usually a document saved by the Microsoft Word program with a file extension of .doc. Doc files are some of the most common files sent in emails and/or used to transfer information from person to person on the internet.
Download
A download is a program, a program update, a file, or anything else transferred to your computer over the internet. Some of the big search engines like Google and Yahoo provide downloads for their customers on a regular basis to update the customer's version of the search engine program. A file sent from one person to another over the internet is generally called a download. A download can also be a file that you paid for, such as an MP3 music file. If you go out to the internet, capture a file, and save it on your computer, then it is a download.
DOS
The abbreviation for "Disk Operating System"; also known as THE Microsoft language. Every product produced by
Microsoft is written in DOS.
DVD
An abbreviation for "Digital Versatile Disk". It is currently the most high powered portable storage disk available on the
market. The current maximum storage for a DVD is 4.7GB which is more than 5 times the storage space on a CD.
e-Book
A book made available online in a digital format; also known as an "electronic book". The e- Book is the most progressive
format for the book publishing industry, yet it suffers certain obstacles such as appeal to those over 30, portability, and the
comfort factor to name a few.
Email
Online electronic communication between people in different locations. Most commonly email is provided for and
supported by large search engines, individual websites and public library accounts. Some of the best places to look for
free email providers are:
• Google - http://mail.google.com
• Yahoo - http://www.yahoo.com
• Free Email Provider's Guide - http://www.fepg.net/providers.html
• EmailAddresses.com - http://www.emailaddresses.com
Flash Drive
A flash drive is a small, portable drive that can hold as much as small hard drives [internal drive]. Most flash drives are
the size of a small cigarette lighter and fit in the palm of your hand. They come in several popular sizes, most commonly:
512MB; 1GB; and 2GB.
Folder
A container used to organize and store your documents and files; located in the directory structure of your hard drive [a.k.a. the C drive].
Format a drive
To prepare a disk for use; formatting erases any data already stored on the disk.
Gigabytes
1024 megabytes or approximately 1 billion bytes [characters].
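A quick worked check of that arithmetic, sketched in Python:

```python
# 1 KB = 1,024 bytes, 1 MB = 1,024 KB, 1 GB = 1,024 MB.
kilobyte = 1024
megabyte = 1024 * kilobyte      # 1,048,576 bytes
gigabyte = 1024 * megabyte      # 1,073,741,824 bytes

print(f"{gigabyte:,} bytes in a gigabyte")   # roughly one billion characters
```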
2. Computer Terms II
Hard Drive
The hard drive is the computer's primary data storage unit. The hard drive generally holds all of your programs and all of
your created, downloaded and saved data.
Hardware
Hardware is all of the physical, tangible elements of your computer.
ISP
Internet Service Provider; the company that brings internet service to your house or business. ISPs are very similar to your phone or electric company in that they provide a specific service. Most ISPs use your existing phone cabling or your satellite/television cabling to provide their services.
JPEG File
Joint Photographic Experts Group is the name of the committee that wrote the file format that is now in widespread use
for most online photos that are shared or posted on public sites. This is a file type in which you will save most of the
photos that you use in your computer.
Laptop
A small, self-contained computer that can fit in your lap. Generally used for mobile purposes such as business or travel.
Megabytes
1024 kilobytes; 1 kilobyte is 1024 bytes.
Modem
A communications medium between your computer and the ISP, most commonly contained in a box that connects to your
computer with Ethernet cables and phone cables. The exception to this is the FIOS [fiber optic cable] services available
from Verizon.
Network
The computers in your home or business which are all connected together.
PDA
Also known as a personal digital assistant; the most popular ones are the Palm Pilot and the Blackberry. You can send and
receive email and other documents via your PDA.
PDF File
Technically, it means portable document format. The .pdf format is used primarily on Adobe's Acrobat programs. Adobe
owns the market on this format, and provides free readers to anyone online. Without the current Acrobat reader you cannot
see a .pdf file unless you purchase the Acrobat program for about $450. The .pdf file is a common and popular file type
for business forms, e-Books and general mainstream documents.
Phish
Hackers and others bait email users in an attempt to obtain the users' personal identity and credit card information for the purposes of perpetrating fraud and/or identity theft. Phishers also post fake websites that look identical to the real ones so they can trick users into giving their credit card information.
Program
Also known as software, a program is a set of commands, written in a computer language, which instructs the computer to
function in certain ways. Examples of common programs include the Windows collection of popularly used products such
as, MS Word for everyday document production, MS Excel for everyday accounting calculations and spreadsheets, and
MS Access for everyday database creations for small offices.
Reboot
Reboot is what a computer operator does when his or her computer crashes or suddenly shuts down and stops working.
Snail Mail
Snail mail is the opposite of email, which is delivered almost instantly and electronically. Snail mail is your regular paper mail delivery service in your country.
Surfing
Surfing is what computer operators do when they go online via the internet and do searches for various needs or topics.
Surfing is what you are doing when you are looking for what you need; similar to shopping, except it is the web version.
Text File
A text file [.txt] is one of the most commonly used file types. It is also the most adaptable format for anything you may want to do with the contents in the future.
Trojan
A Trojan is a program hidden inside another program that appears to serve a useful purpose. In reference to email and the internet, a Trojan is commonly a type of virus that comes in via a hidden method.
Virus
In computer usage, a virus is computer code [as a computer user, your concern is for the harmful ones] that is transferred
from one computer to another without your knowledge or permission. Most commonly, the harmful viruses that circulate
on the internet come into your computer [to destroy it or take personal information from it, etc.] via your email.
E. Your Data
Backup Systems
The most traditional method of backing up your data is the disk system. For home computers and small home offices a
floppy disk, CD, or flash drive is the best method for saving a backup copy of your created data.
Innumerable computer operators, authors, artists, creators of original works, and others have lost their work due to the
simple oversight of a basic backup system. Any creation of yours, in which you take time to build it and store it, is worth
saving to a recoverable file.
To the average home computer operator, I would suggest that you preserve your files in CD backups at least once every 3
months or in a flash drive every month. Be sure to store your backup copies in a location off of the premises of your
computer, preferably a safe deposit box or a relative's safe in their home. If your home or office were severely vandalized
or caught on fire, you want your data in an off-site location so that you can recover your work.
The same is true for small business operations, except that the backups should be weekly and definitely stored in a safe
deposit box in your bank or similar financial institution.
Small businesses are usually more concerned than individuals about these factors, although freelancers, independent contractors, and others who earn their living more independently than regular employees will also find protection a central concern.
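For readers comfortable experimenting, the backup routine described above can be automated with a short script. The following Python sketch is only an illustration: the Documents folder and the E: flash-drive path are assumptions, so substitute the locations that match your own setup.

```python
import shutil
from datetime import date
from pathlib import Path

# Copy a folder of work to a flash drive as a dated backup set.
# The source folder and the E: drive letter are assumptions for this example.
source = Path.home() / "Documents"
destination = Path("E:/") / f"backup-{date.today().isoformat()}"

shutil.copytree(source, destination)   # copies the whole folder tree
print("Backed up", source, "to", destination)
```

Each run creates a new dated folder, so older backup sets are kept rather than overwritten.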
• Terminals
In the early days of computer technology, an organization usually required only a single large-capacity computer to handle
its information needs. These computers were called "mainframes." They required a trained staff to maintain and run them
and were quite expensive to purchase and upgrade (e.g., add more memory, more disk storage). People gave information
and commands to the mainframe through a "terminal," essentially just a keyboard and monitor; the terminal had no
processing capability of its own. The number of terminals a mainframe could handle was limited, which created lineups of
people waiting their turn to submit computer requests.
• Workstations
Advances in computer technology, such as IBM's personal computer introduced in 1981, dramatically changed this
situation. Now it was possible to have a powerful computer right in the office and for far less money. What is more, all its
resources and power were under the control of, and totally available to, its user. As people began to move toward personal
computing, computer manufacturers built more powerful workstations. Soon, these powerful workstations became small
enough to be easily moved, promoting the idea of "mobile computing." Today, laptop computers easily allow computer
technology to be available at the point of care.
• Stand-alone
By "stand-alone" we mean that all the pieces of a computer that are needed to gather, process, display, possibly store, and
provide an output of the information are physically connected; moreover, if needed, they can be moved as a complete unit
to another location. This is the usual setup for most home and small business computer systems. Such a setup is
inexpensive and quite simple to manage. Although it makes sense to use a "stand-alone" computer, it is often better for a
computer to be part of a network.
A peripheral is any external device such as a keyboard, monitor, or mouse, which is connected to the computer. Because
they are external in desktops, it is easy to see that these are peripherals. On laptops, however, the connection of keyboards
and monitors is internal. In general, peripherals are the devices that allow inputting of data to a computer and outputting
of information from a computer. Besides these obvious peripherals, there are many others such as printers (output) and
scanners (input) that allow humans to use computers. Although the CPU would happily complete any instructions it
received without any visible output, users generally want some type of output, such as what is seen on a monitor or video
screen. There are also devices that both input and output information to a computer such as the port to which a printer is
connected.
• DIGITAL CAMERAS are finding a place in healthcare for purposes such as recording the healing progress of wounds.
Text descriptions cannot compare with a picture in letting clinicians and patients see healing progress.
• SCANNERS take a picture of a document and then allow users to save this as a file. Unless there is character
recognition software available, any text that is scanned will be in a picture format and uneditable. Additionally, some
healthcare agencies are inputting clinical records to electronic health records by scanning free text. Even when character
recognition software is used to translate the words in the "picture" to text, the result needs to be checked for accuracy.
Further, unless the form that is scanned is especially designed for scanning, this free text is unstructured and not easily
used for reporting information from data.
• CLINICAL MONITORS can be part of a network and monitored at a central location. They can also be programmed to
provide alarms, either at the central station or to individual pagers, when the monitor shows something beyond the norm.
Clinical monitors whether attached to a network, or not, can allow patient data such as that produced by cardiology or
fetal monitors to be directly input to a computer. The advantage of computerized clinical monitoring is that it allows one
person to monitor many patients at once as well as provide notification of problems. It should never be allowed to reduce
nurse-patient interaction.
Connecting Peripherals
A peripheral is connected to a computer through a port. Although today the USB port is the de facto standard for PCs, in
the past there were other types of ports such as serial and parallel. Computers manufactured in the last few years do not
have serial or parallel ports.
• USB PORT or universal serial bus is a standard that was originally created to connect phones to PCs in the mid-1990s
("Everything USB ... We Mean Everything!"). Lately, it has become the standard port for connecting peripherals to PCs.
These ports are thin slots, about half an inch by one fourth of an inch, found on the sides of laptops and on the front of newer desktops or towers. The port is designed with a solid piece in one part, usually the top, so that a USB device can only be inserted one way. The original USB port was capable of transmitting data at only 12 megabits per second (remember, it takes 8 bits to form a byte, which is required to transfer one letter), which made it useful only for mice and keyboards ("Everything USB... We Mean Everything!"). This easy connection method, however, created a revolution that
resulted in devices such as flash drives, external hard drives, and webcams, which needed faster transmission speeds. This
need was met by a new standard, the USB2.0 port, which transmits information at 480 megabits per second. The USB
connection has largely replaced older serial and parallel connectors.
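To put those transfer rates in perspective, here is a small worked comparison (remember that 8 bits make 1 byte); the 700 MB figure is simply the capacity of a full CD, and real-world speeds are lower than these theoretical maximums.

```python
# Rough time to move a 700 MB file (about one full CD) over each USB version.
file_size_mb = 700
for label, mbps in [("USB 1.x", 12), ("USB 2.0", 480)]:
    seconds = (file_size_mb * 8) / mbps   # megabits to move / megabits per second
    print(f"{label}: about {seconds:.0f} seconds")
# USB 1.x: about 467 seconds (nearly 8 minutes); USB 2.0: about 12 seconds.
```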
• FIREWIRE originated in the mid-1980s as a high-speed data transfer method for Macintosh internal hard drives
(Nathanael, 2006). Apple presented this technology to the Institute of Electrical and Electronics Engineers (IEEE) who in
December 1995 released IEEE 1394, which is an official Firewire standard. It was often referred to as Firewire 400 and
had transfer speeds of 100 to 400 megabits per second. In April 2002, the IEEE released a new standard for Firewire 800,
which can theoretically transfer data at up to 3.2 gigabits per second. Although its speed is faster than USB ports, it is
impractical for low-bandwidth devices. This fact, together with the knowledge that only Macintoshes include Firewire
ports by default, has kept it out of the mainstream. Because of its superior ability to transfer uncompressed video from
digital camcorders, it is now found in all modern digital camcorders. Most digital cameras, however, still use USB to
transfer images because USB ports are today standard equipment on computers.
• INFRARED PORT is a connection on a computer that uses IR signals to wirelessly transmit information between
devices such as a PDA and a computer. It has a range of about 5 to 10 feet. Most handheld devices have the capability to
communicate via IR ports that allow the device to directly interface with another device to exchange data.
Ergonomics
Ergonomics is the science of designing a work environment so that it is convenient to use and does not prove injurious to
health. Although it is an important consideration for preventive health, it is too often overlooked when setting up
computer hardware. This despite the fact that using a keyboard injures more workers in the United States than any other
workplace tool (Bailin, 1995) Even nurses who do not spend all day at the computer are affected. One study from a
Scandinavian journal, reported by Nielsen and Trinkoff (Nielsen & Trinkoff, 2003) found that some nurses, even those
who use a computer less than four hours a day, had a 32% prevalence of upper arm repetitive stress injury, 60% of which
was carpal tunnel syndrome.
Healthcare agencies, which should be very concerned about preventing injuries associated with any repetitive activity
such as typing, could save money by focusing more on ergonomics. Computers are supposed to facilitate data recording,
not impose additional burdens on healthcare personnel. Those planning a system should walk a day in the shoes of a user
or several days in the shoes of several users before making firm decisions about computer placement. Asking staff nurses
and other clinical computer users to participate in determining computer workstation design is another way of improving
ergonomics (Nielsen & Trinkoff, 2003).
Unfortunately, few studies have been done on nursing use of computers with most research concentrating on seated
workstations (Nielsen & Trinkoff, 2003). Some simple things could improve the work environment of nurses who
document with or otherwise use computers. For nurses who are on their feet all day, if they are to chart at the point of
care, they need a way to be able to sit while using the computer. If the computer is also used by those standing, an
adjustable computer stand could be employed so that a user who is standing does not have to stoop. Additionally, if the
computer is fixed at a height for users who are standing, a stool should be provided for the nurse who uses the computer
extensively. Touch screens and light pens may be ideal for quick entry, but for extended entries, they are very tiring to the
arm. Providing dual means of entry may solve this situation. Resolution of a screen is also important. The higher the
resolution, the easier the screen is on the eyes. It is also important to prevent glare on the screen. In situations where this is
impossible, it is possible to purchase screen filters that will cut down the glare.
More thought also needs to be given to how a workstation is designed for those who will use a computer for more than
four hours a day. Consideration needs to be given to the posture the user will be forced to adopt. The best chairs have
adequate support for the inward curve of the lumbar spine and the outward curve of the thoracic spine. Studies have shown that a 100- to 110-degree reclined position is better than an upright posture (Cornell University Ergonomics Web, 2007).
The feet should be flat on the floor, or a footrest should be provided (Figure 2-3). Wrists, knuckles, and the top of the
forearm should fall into a straight line while typing (Bailin, 1995). To promote circulation to the lower arm and hand, the
elbow angle should be open. Both of these can be accomplished with a negative tilt keyboard (Cornell University
Ergonomics Web). The monitor should be placed directly in front of the user to avoid neck twisting. Studies have found
that the best position for the monitor is for the center of the screen to be about 17.5 degrees below eye level and about an
arm's length away. The ideal placement of a mouse is on a flat, movable mouse platform positioned one to two inches
above the numeric keypad.
Laptops, under most use conditions, violate all ergonomic requirements for computers (Cornell University Ergonomics
Web, 2004). This is caused by the connection between the keyboard, screen, and computer. If the computer rests on a
table, the keyboard will be too high for proper arm positioning. If the computer is lower, the monitor placement may
require that the head be tilted forward for use. For these reasons, if a laptop is your primary computer, to provide more
ergonomic working conditions you should invest in a docking station. If you carry the laptop and all the required
accessories such as spare battery, power cord, or external drive weighing 10 pounds or more, consider a wheeled carrying
case.
Computerese
Many computer-related terms are used in discussion, instruction, and advertising. Although they are not strictly hardware
terms, they can often be confusing. If one watches a computer when it has just been turned on, one will see different types
of information flashing across the screen. This information is produced by what is called the "booting" process. Booting
refers to all the self-tests that a computer performs and the process of retrieving, either from the BIOS or a disk, the
instructions necessary to allow the user to start using the computer. The term "reboot" means to restart the computer. A
warm boot is restarting the computer without turning it off, a selection that is offered when one elects to start turning off
the computer. A cold boot is starting the computer when the power is completely off. Avoid cold boots if you can, because
the jolt of electricity received each time the computer is turned on may shorten its life span. Warm boots are often used
when a computer freezes or crashes. It erases information in RAM, which often eliminates memory conflicts that may
have caused the problem. These conflicts can be caused by different programs trying to store data in the same location. If
a warm boot fails to notify the programs that it is time to stop fighting for the same space and give control back to the
user, the machine must be turned off for a cold boot.
A bug is a defect in either the program or hardware that causes a malfunction. It may be as simple as presenting the user
with a blood pressure chart when a weight chart was requested, to a more serious defect that causes the entire system to
crash.
Compatibility refers to whether programs designed for one chip will work with an older or newer chip, or whether files
created with one version of a program will work with another version of the same program. Most computer chips and
software are backward compatible, that is, they will work with older versions of a program or files created with an older
version of a program. Some are not, however, forward compatible, or the situation in which an older program does not
recognize files created by a newer version of the same program. This is particularly true of spreadsheets, databases and
presentation programs.
A driver is a software program that allows data to be transmitted between the computer and a device that is connected to
the computer. Drivers are generally specific to the brand and model of the device. They may come with a new peripheral,
or can often be downloaded from the vendor's Web site.
Although the term hacker originally meant a person who enjoyed learning about computer systems and was often
considered an expert on the subject, mass media have turned it into a term to refer to individuals who gain unauthorized
access to computer systems for the purpose of stealing and corrupting data. The original term for such a person was
cracker. Today, differentiations may be made by using the term white hat hacker for a person who uses his or her computer
knowledge to benefit others. Black hat hacker is the term used for those who use their computer skills maliciously.
When used with a computer the terms logical and physical refer to where data are located in the computer. The physical
structure is the actual location, whereas a logical structure is how users see the data. For example, when a user requests
information about laboratory tests, he or she may see the indications for the test, the normal values, the cost of a test, and
the patient's test results. Although this information may be presented as one screen, which is a logical structure, different
pieces will have been retrieved from different files in different locations, which is the physical structure of the
information.
Another potentially confusing computer term is object. Although the more common use of the term "object" is for a
physical entity, or at least a picture on the screen, to a computer, an object is anything the computer can manipulate. That
is, an object can be a letter, word, sentence, paragraph, piece of a document, or an entire document. Objects can be nested,
that is, a word is an object nested within a sentence object. A paragraph is an object that is contained in a document. When
an object is selected, clicking the right mouse button presents a menu of properties of that object that can be changed.
H. Networks
1. Networks
A nurse encounters a patient with an unfamiliar disease. From an email message, the nurse learns that a document on a
computer in another country has information about caring for patients with this disease. Within 60 seconds of logging on
to the Internet, the nurse prints out the document. This ability to exchange information on a global scale is changing the
world. No longer do healthcare professionals have to wait for information to become available in a journal in the country
in which they live. Nurses and other healthcare professionals can and do use computers to network with colleagues all
over the world. Healthcare depends on communication: communication between the nurse and the patient, communication
between healthcare professionals, communication about organizational issues, and communication with the general
public. As you can see from the previous paragraph, the methods used to communicate in healthcare are today being
augmented with computer networking. Since the first computers talked to each other in the late 1960s, networking has
progressed to the point where not only computers in an organization are connected to each other, but also institutions are
connected to a worldwide network known as the Internet.
A network can range in size from a connection between a palmtop and a personal computer (PC) to the worldwide,
multiuser computer connection - the Internet. Variation in network size or the number and location of connected
computers is often seen in the name used to denote the network, such as a local area network (LAN) or a wide area
network (WAN). A LAN is a network in which the connected computers are physically close to one another, such as in the
same department or building. A WAN is a network in which the connections are farther apart. Often, a WAN is an
internetwork of LANs. WANs are sometimes referred to as enterprise networks because they connect all the computer
networks throughout the entire organization or enterprise.
NETWORK ARCHITECTURE
There are many different variations in how networks are constructed, or what is referred to as their architecture, often
depending on the purpose of the network. For a home network, a peer-to-peer network in which each connected computer
is a workstation is a normal approach. In this scheme, each computer can have a shared folder that is accessible by the
other computers. Often, the network is primarily for connecting to the Internet or for sharing hardware such as a printer.
Another type of architecture, often seen in healthcare agencies, is client/server architecture. The principles behind this
model vary, but for most healthcare applications they are similar to the "dumb terminal" model. A client computer has
software that allows it to request and receive information from the server. The server has software that can accept these
requests, find the appropriate information, and transmit it back to the client (Figure 5-1). The client views the information,
enters data, and sends it back to the server for processing. Under this model, the client computer does no processing.
Beyond making the initial request, rarely is any of this process visible to the user. Users sometimes have the
misperception that the software and data reside on the computer/terminal that they are using, instead of the server.
There are other variations for networks. A computer in a healthcare facility may function as a client for the patient care
information system, but may have application software that allows users to do things like word processing, in which case
it acts like a regular computer. This computer may also be networked to another server that stores the files created by the
networked computer, or the files may be stored on the computer that was used to create them. This computer may even be
connected to the Internet through another server. Printers are usually connected to a network so that more than one
computer/client can use them. Managing networks is an ongoing maintenance task performed by the network
administrator.
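The request-and-response exchange between client and server can be sketched in a few lines of Python using the standard socket module. This is only a toy example: the port number and the canned reply are arbitrary choices, not part of any real patient care system.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # local machine; arbitrary example port

# Set up the listening ("server") socket first so the client cannot
# connect before the server is ready.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def handle_one_request():
    conn, _ = srv.accept()                 # wait for one client request
    with conn:
        request = conn.recv(1024).decode()
        # The server looks up the "information" and transmits it back.
        conn.sendall(f"Results for {request}: pending".encode())

threading.Thread(target=handle_one_request, daemon=True).start()

# The client sends a request and displays whatever the server returns.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"patient 123 lab work")
    print(cli.recv(1024).decode())

srv.close()
```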
CONNECTIONS
Networks are connected physically with a variety of materials such as twisted-wire cables, phone lines, fiber optic lines,
or radio waves. Computers that are wired together are said to be hard wired. When you see the term "hard" with another
item, this means that the item is permanent, or that it physically exists. Most healthcare agency networks, even those that
use wireless, are hard wired to some extent.
Wireless transmissions are limited in distance, so they do not compete with other radio traffic. When a wireless system is
installed, nodes are placed at strategic locations throughout the institution, locations that are determined after a thorough
assessment of the building. A node in a wireless connection is a single point on the network that consists of a tiny router
with a few wireless cards and antennas. These nodes pick up the signals sent by a user and transmit them to the central
server, or even to another node for rebroadcast, and transmit signals back to the user's computer. Successful wireless
communication depends on an adequate number of nodes or hardware that send or rebroadcast the signal and on their
placement. The distance of a device from the node will affect both the speed of transmission and whether one can use the
network.
Wireless transmission is less secure than hard-wired transmission because the signal is available to anyone in range. Security depends on network administrators following procedures to secure the transmission. Many wireless networks, including those at home, use wired equivalent privacy (WEP), the goal of which is to prevent disclosure or modification of messages in transit. When WEP is employed, a user must enter the WEP key before being allowed to connect to the network. In many cases, once a WEP key is entered on a computer, the computer
remembers it and will automatically find it any time a user is in range of that network. Another newer, more secure form
of protection is WPA (Wi-Fi Protected Access) (Webopedia, 2007). It features improved data encryption and better user
authentication.
PROTOCOLS
For networks to function correctly, it is necessary that there be agreements known as protocols, which prescribe how data
will be exchanged between participating computers. These protocols include standards for tasks such as how the system
will check for transmission errors, whether to use data compression, and if so how, how the sending machine will indicate
that the message it has sent is complete, and how the receiving machine will indicate that it has received the message.
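One of these agreements, how the sending machine signals that a message is complete, can be illustrated with a simple length-prefix scheme. The four-byte length header below is an arbitrary convention invented for this sketch, not a real healthcare protocol.

```python
import struct

def frame(message: bytes) -> bytes:
    # Prefix the message with a 4-byte length field so the receiver
    # knows exactly how many bytes make up a complete message.
    return struct.pack("!I", len(message)) + message

def unframe(data: bytes) -> bytes:
    (length,) = struct.unpack("!I", data[:4])
    return data[4:4 + length]

packet = frame(b"vital signs recorded")
print(unframe(packet))   # b'vital signs recorded'
```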
• Definition
A network is a way to connect computers so several benefits can be realized. Local Area Networks (LANs) connect
computers that are physically close together (i.e., in the same local area). This means not only in the same room but also in
the same building, or in several buildings that are close together. LANs use three things to connect computers: a physical
connection (e.g.,
wire), a network operating system, and a communication scheme.
There are several ways to connect computers physically. The most common method is to use category 5 cable and the
latest way uses fiber optic cable (light is used in place of electricity). The very latest methods are wireless; they use either
radio transmission or infrared light for the connection. Each method is suited for different situations and is part of the
consideration when a network is built.
There are several network operating systems available today that provide the necessary processes to allow computers to
talk with each other and to share information and programs. The communication schemes are properly called protocols.
This is a standard method by which the computers in a network talk with each other and pass information around. There
are three main ways to connect computers in a network: star, ring, and bus configurations. These are called network
topologies and represent different physical arrangements of the computers. As with the physical connecting medium, each
topology has its strengths and weaknesses, which must be considered when a network is built.
Benefits of a Network
The important benefits of a network are shared information, shared programs, shared equipment, and easier
administration. It is technically possible for any computer on a network to read and write information that another
computer has in its storage (i.e., its hard disk). Whether that computer is allowed to do so is an administrative matter. This
means, though, that information can be shared among the computers on the network. Programs can also be used by
computers on the network, regardless of where those programs are physically stored. It is also possible (and usually
desirable) for computers on a network to share equipment such as printers.
Technically, any computer on the network can print its information on a printer that is physically connected to another
computer somewhere else. By sharing expensive office equipment, an organization reduces its expenses. Finally,
administration of computers on a network is simplified because all the other computers can be examined, helped, and
maintained from one computer.
Wide Area Networks (WANs) are extensions of Local Area Networks. There are two kinds of WAN. The first one attaches
or connects a single computer to a preexisting LAN; this kind is called "remote LAN attachment." The second one
connects, or "bridges," two or more preexisting LANs. Both WANs allow a computer to use information or equipment no
matter where they are located in the organization. An interesting point about WANs is the options that can be used to
connect the LANs. Instead of being limited by the length of cable that can be placed between computers, WANs can
communicate via satellite and earth stations. This literally means that a person could be using a computer in Africa and
working with information that is on a computer in Iceland, without knowing or caring about its origin. To that person, the
information appears to be on his or her computer.
Open Systems
"Open systems" is the idea that it should be possible to do two things: run a particular program on any brand of computer
and connect any collection of computers together in a network. However, because of the way computer technology developed, this has been difficult to accomplish.
Most computers were initially developed as "closed" systems; that is, a manufacturer built the computer, wrote the
operating system, and wrote the application programs to run on the computer. Each computer manufacturer saw
tremendous sales advantage from this strategy. The result was several computers that were similar in function but very
different in how those functions were executed. It was not easy to buy an application program from a vendor and run it on
two different brands of computers. It was a torturous exercise to get any two computer brands to "talk" with each other.
For people who simply want to buy and use computer technology, "plug and play" is the ideal mode. This means that a
computer could be purchased from vendor X, a second computer from vendor Y, a program from vendor Z, and a printer
from vendor A, and all these parts could be connected and used with the same ease that people expect with stereo system
components. The way to achieve this ideal is through standards. Just as stereo components are built to use a standard
voltage, produce or use a standard type of signal, and connect with standard plugs and cables, computers and application
programs need to use certain standards for communication protocols and file access. This "plug and play" mode is getting
closer today because of vendors and manufacturers' support and adoption of standards.
Client/Server Computing
As we have seen, computers come in a variety of sizes and with various processing capacities. Some computers are better
suited than others for different tasks. For example, personal computers, because of the physical size of their hard drives,
have a limit to their storage capacity. On the other hand, the large, mainframe-type computer was designed to handle
tremendous amounts of information and therefore has large storage capacity. Where does it make more sense to store a
large data set?
This brings us to client/server computing. The essence of "client/server computing" is to assign to each computer the tasks
for which it is best qualified or, in other words, to use the right tool for the job. Capitalize on the strengths of one
computer for task A and use a different computer more suited for task B. A personal computer works well with people; it
is fast and has color and good graphics display capability. It could be the primary interface device for people and
computer systems. Mainframe computers have huge storage capacity, great speed, and large processing power. This could
be the place to store, process, and retrieve information from the vast amount of data accumulated by a large organization.
In a network, client/server computing makes sense.
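To make the client/server idea concrete, the small Python sketch below runs a toy "server" that stores a piece of data and a "client" that requests it over the network. The port number and the sample census data are invented for illustration; a real hospital system would, of course, be far more elaborate.

# A minimal sketch of the client/server idea, using only Python's standard library.
# The "server" plays the role of the big machine that stores and serves data;
# the "client" plays the role of the personal computer that asks for it.
# The port number and the sample data are made up for illustration.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PATIENT_CENSUS = {"3 West": 24, "ICU": 9}   # pretend this lives on the big server


class CensusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server's job: look up the data and send it back.
        body = json.dumps(PATIENT_CENSUS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


server = HTTPServer(("localhost", 8080), CensusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client's job: ask the server for the data and display it.
with urlopen("http://localhost:8080/census") as reply:
    print(json.loads(reply.read()))   # {'3 West': 24, 'ICU': 9}

server.shutdown()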
This is a small section of miscellaneous short cuts and computer navigation tips. They are in no particular order, but will
help you as you go along in your computer learning.
JPEG Files
If you scan a photo into a .jpg file or .jpeg file, many times you will find that it is too large [too many pixels] to use on
public sites [like MySpace, eHarmony and others] where you would want to use it. The quick way to solve this issue is to
resize the .jpeg file using one of the online services such as Shrink Pictures. Simply go to: http://www.shrinkpictures.com/
and follow their directions. When you finish shrinking and saving your new .jpeg file, their page gives you an option to
delete the file from their server, which makes the entire practice fairly safe as far as internet services go.
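Online services are one option; if you prefer to shrink the photo on your own computer, the short Python sketch below does the same job using the Pillow imaging library. It assumes Pillow is installed, and the file names are placeholders for your own photo.

# A minimal sketch of shrinking a large photo on your own computer,
# assuming the Pillow imaging library is installed (pip install Pillow).
# File names are made up for illustration.

from PIL import Image

photo = Image.open("scanned_photo.jpg")
print("Original size:", photo.size)          # e.g. (3000, 2000) pixels

# thumbnail() shrinks the image in place so neither side exceeds 800 pixels,
# keeping the original proportions.
photo.thumbnail((800, 800))
photo.save("scanned_photo_small.jpg", quality=85)
print("New size:", photo.size)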
Acrobat Reader
You will not be able to read an Adobe Acrobat file without the current reader. Since Acrobat is one of the more popular
files used on the internet, this will prove to be an essential part of your computer system. Acrobat files are all the files you
will ever see which have a file name extension [the part of the file name on the other side of the period] of .pdf. To
download the current Acrobat reader, go to: http://www.adobe.com/products/acrobat/readstep2.html
Reboot
If the computer is still on, the easy way to reboot is to press and hold the CTRL+ALT+Delete keys together on your keyboard. This is the most common reboot command to give a personal computer. If the computer is not on, then
reboot will occur when you turn it back on.
Text File
If you copy text from something on the internet because you intend to post it in another place, the best file to save it to is a
.txt file. In the Windows programs, the .txt files can be created in Notepad usually under the Accessory panel under your
Start button. The reason .txt is the best file to save to is that when you go to copy and paste the content to another site, document or format, all of the previous program's formatting will be stripped out of it and will not confuse the new site or document. In other words, you will transfer only content, not programming. You can then more easily re-format the text to fit the current program you choose to use.
Virus
When using email, use an email system that will first scan your incoming attachments for viruses. If you open an email
attachment and it contains a virus, then you will have to clean your entire computer.
Example:
My business name is XYZ Corporation
Person I am writing the document to is named John Smith
This is the third version [or third edit] of the original document
Original document is saved/named as: Business Plan for John Smith
Today's date is March 31, 2008
XYZ.JS.v.3.Business Plan.033108.doc
At a glance in my file directory I can later tell the "who, what, when and document type" of the file. This helps to later
sort files according to timing and relevancy without having to open lots of files to find the most recent or most appropriate
file.
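If you name many files this way, the pattern can even be generated automatically. The short Python sketch below is only an illustration of the same "who, what, when and document type" convention; the function name and inputs are invented, and the output matches the example above.

# A small sketch that builds a file name following the "who, what, when, type"
# pattern described above. The company, client, and document names come from
# the example; the function itself is just an illustration.

from datetime import date


def make_file_name(company, client_initials, version, title, when, extension="doc"):
    # Dots separate the pieces, and the date is written as MMDDYY.
    return f"{company}.{client_initials}.v.{version}.{title}.{when:%m%d%y}.{extension}"


print(make_file_name("XYZ", "JS", 3, "Business Plan", date(2008, 3, 31)))
# XYZ.JS.v.3.Business Plan.033108.doc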
This lesson is a small run down of how to handle emergency issues which can occur. The ones listed here are those that
are most likely to create panic for new users. Techniques to solve each problem are offered; if they do not work in your
situation, then you will need to seek a local geek to help you.
• Windows - When you turn your computer on, immediately begin pressing your F8 key repeatedly until a screen pops up.
When the screen pops up it will give you several options. You want to select Restart in Safe Mode. This will allow the
computer to reboot and start up [so you can find your document(s) on the spot]. You do not want to create documents in
Safe Mode, and you do not want to perform any complex tasks while the system is on the blink. When you return home, or wherever
you have your OS disk, then you can use it to reinstall your OS if necessary.
• MAC - To reboot in Safe Mode on a Mac, hold down the Shift key as you reboot, or hold down the "S" key to reboot in
Single User Mode to look through your files and find your problem. If you are away from home, it is best to wait until you
have access to your OS disk before digging through the files to find the problem.
If your computer sounds like it is grinding or thinking and it takes too long to move from one point to another, then you
have system problems. The specific problem referred to here will prevent you from opening a document, or it will take 15
minutes for the system to boot up, or some similarly extreme delay in carrying out the command you have given the computer.
• You have contracted through the internet, or through another person's file which you have opened, a virus, Trojan, worm
or some other destructive tool and it has corrupted your computer's OS. In this instance, the virus or worm, etc. is
achieving its goal. You will have to immediately run a clean up program on your computer or risk it freezing up
completely. Programs such as Ghost or Norton 360 provide ways to regularly back up your work, so you should already have your information secured and available for emergency retrieval. If you haven't installed a program to back up your
data, then you may lose your documents. The first thing you should do is reboot in Safe Mode, and then perform a virus
scan in your system. Most of the antivirus and firewall products now have a systematic way to isolate viruses in your
system. Use this program tool and you should get back to normal.
• You have installed a program or tool that requires more RAM than your system currently has on it. Large or complex
programs use a lot of RAM when your system is up and running, and even more when the particular program is open and
being used. Too little RAM will freeze up your system. For today's home computers and small business computers, the
very minimum RAM a system should have is 512MB. If you install between 2GB and 4GB of RAM on your system, then
you should be somewhat safe from RAM freezes. Keep in mind that the amount of RAM your system can safely hold is
determined by the computer's motherboard and processor. If you are running on, say, an older Pentium III processor, then
the most your motherboard will operate with is 512MB of RAM. On the Windows system, you can find your processor
and RAM information by clicking through the Start button, and then the following sequence of clicks: Settings/Control
Panel/Administrative Tools/My Computer/View System Information/General Tab. At the bottom of the General Tab is the
information on your processor and RAM. Close out of these by clicking the X in the top right corner of each box. This
process is for viewing only. Do not make any changes to anything when you open this up to find it.
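The click path above applies to older versions of Windows; the same processor and memory facts can also be read by a short script. The Python sketch below uses the standard platform module and, if it happens to be installed, the third-party psutil package; it only reports information and changes nothing.

# A minimal sketch that reports the same processor and memory information
# from a script instead of the Control Panel. platform is part of Python's
# standard library; psutil is a third-party package (pip install psutil).

import platform

print("Processor:", platform.processor() or "unknown")
print("Operating system:", platform.system(), platform.release())

try:
    import psutil
    total_ram_gb = psutil.virtual_memory().total / (1024 ** 3)
    print(f"Installed RAM: {total_ram_gb:.1f} GB")
except ImportError:
    print("Install psutil to see how much RAM is installed.")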
The good news about RAM is that it is fairly inexpensive to purchase [about $60 per 512MB]. The bad news is that unless
and until you become fairly experienced with opening up your computer and making hardware changes to it, you will
need to pay a service center to install the RAM so that you do not inadvertently fry your motherboard or the RAM. This is
one area where it is good to heed the warning: If you do not know anything about the insides of a computer, then do not
attempt to install the RAM.
A. Introduction
"The Internet is a worldwide system of computer networks - a network of networks in which users at any one computer
can, if they have permission, get information from any other computer (and sometimes talk directly to users at other
computers)"
Our world has changed dramatically as the Internet has become more available to the majority of the citizens of Earth. As
non-technicians, what we were able to accomplish at our desks a mere thirty years ago cannot even compare with what we
are capable of today. When we thought we were in the jet age - using photocopiers in place of carbon sheets, and making
long distance calls via satellite - we were only on the edge of the true age of information, and a technology that would
change absolutely everything.
The Internet grew out of technology that was initially developed to connect small, institutional networks to one another; it became widely available for general use in the United States by the early 1990s. The concept, alone, is
astonishing, and its actual implementation even more incredible.
It is tempting to try to describe this global phenomenon in material, tangible terms. For instance, the former Alaska senator Ted Stevens once described the Internet as a "series of tubes." For those without a basic understanding of the
technology, the Internet is a mystery too deep to fathom. This course will provide you with comfortable and user-friendly
information that will dissolve some of the mystery, and give you the ability to speak and think intelligently about the
World Wide Web.
Before we address the ins and outs of using the Internet, it is very useful to first understand its inception and, in a basic
way, how and why it functions. As we go along, different terminology will be introduced to you, and will also be
incorporated at the end of each lesson for reference.
This course is designed with the assumption that the user is already familiar with computers and is seeking to gain a better
knowledge and grasp of the Internet and its workings. Therefore, general computer terminology and instructions are not
part of the course. If you are seeking basic operating instruction, refer to your computer's or Windows' tutorial.
B. History
Our ancestors would be awestruck by our ability to communicate almost instantly with people in every part of the globe
without a telephone or telegraph, or even a wire to bridge the connection. We are able to look up businesses in remote
countries; the answer to almost any question we can think of is at our very fingertips. We can exchange money and
products, and conduct business without ever leaving our desks; we can study and obtain a degree in higher education,
invest in the stock market and trace our genealogy with the touch of a button. We play video games with people on other
continents and, sometimes, find unprecedented wealth using this small plastic-encased artificial brain, and its connections
to the largest system of networks in the world, which links us to others around the globe.
After the Soviet Union launched Sputnik, the first artificial satellite, the United States government, not to be left behind, began to step up its efforts toward science and research in order to stay ahead of the game.
But, the development of the Internet did not happen overnight. The idea of a "Galactic Network" was introduced back in
1962 by J.C.R. Licklider, a head of the computer science department at MIT who was involved in the US government's
Defense Advanced Research Projects Agency (DARPA). He was able to convince some of his colleagues of the feasibility
of his idea, but what began to generate the reality of it was the development, by Paul Baran, of packet switching, in which data is broken into packets that can be routed between networks.
Initially, the objective of the research was to find a way to keep our communication systems intact in the event of an
attack on us by a nuclear weapon. The government wanted to be certain that there was a way to maintain control of our
ability to strike back.
The first packet switching among networks came in 1969 and connected four major institutions: the University of California at Los Angeles, SRI (the Stanford Research Institute), the University of California at Santa Barbara, and the University of Utah.
With the invention of Ethernet in the mid-1970s, connecting computers into Local Area Networks, or LANs, became possible. A LAN is a network within a location that connects its computers. For instance, your local hardware store probably has a LAN - a system that connects its computers together. Later, of course, came Wide Area Networks, or WANs, which connected users over larger geographical areas.
In the early 1990s, HTTP (Hypertext Transfer Protocol) was developed. HTTP is a set of rules that "...defines how
messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various
commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web
server directing it to fetch and transmit the requested Web page".
In other words, HTTP provided another set of rules that made the Internet easier to use for the non-technical user.
You can imagine that each technological piece of the Internet puzzle was developed and implemented fairly rapidly, once
the idea caught fire. We will look more closely at the nuts and bolts of connections later in the course, but for now, it is
enough to know that each piece of the development puzzle served to make the Internet more efficient and more accessible
to people like us!
The Internet is certainly not a series of tubes; however, it is also not simply a magical system of ethereal energy. Massive
super-computers, connected with thousands of smaller networks, in cooperation with one another, form an enormous
worldwide network that allows us to interconnect globally.
Glossary
Computer - A programmable machine that responds to a specific set of instructions in a well-defined manner and can
execute a prerecorded list of instructions (a program). Electronic and digital, the actual machinery -- wires, transistors, and
circuits -- is called hardware; the instructions and data are called software.
1. Introduction
Web pages are what make up the World Wide Web. These documents are written in HTML (hypertext markup language)
and are translated by your Web browser. Please note that a Web page is not the same thing as a Web site. A Web site is a
collection of pages. A Web page is an individual HTML document. This is a good distinction to know, as most techies
have little tolerance for people who mix up the two terms. http://www.techterms.com/definition/webpage
The browser is a program like Internet Explorer or Firefox. A web address, or URL, is the specific "location" of a website,
and looks something like this: http://www.yahoo.com. We will define a URL further below.
Browsers and search engines, such as Google™, can be seen as vehicles installed on your computer which help you
travel to the URL address on the Internet that you have specified - something like getting into a taxi and giving the driver
an address.
NOTE: We will use the word "Google" throughout this course, as we refer to searching for something. For example, we
may suggest to "try Googling rabbit hutch,"or some other terminology. Because Google is such a popular search engine, it
is commonly used as both a noun and verb!
What is the difference between a search engine and a browser? Sometimes the two are used interchangeably and can be
confusing. Here are Wikipedia's definitions of both:
SEARCH ENGINE: A program that searches documents for specified keywords and returns a list of the documents where
the keywords were found. Although search engine is really a general class of programs, the term is often used to
specifically describe systems like Google™, Alta Vista, and Excite that enable users to search for documents on the World
Wide Web and USENET newsgroups.
Typically, a search engine works by sending out a spider to fetch as many documents as possible. Another program called
an indexer then reads these documents and creates an index based on the words contained in each document. Each search
engine uses a proprietary algorithm to create its indices such that, ideally, only meaningful results are returned for each
query.
BROWSER: Short for Web browser, a software application used to locate and display Web pages. The two most popular
browsers are Microsoft Internet Explorer and Firefox. Both of these are graphical browsers, which means that they can
display graphics as well as text. In addition, most modern browsers can present multimedia information, including sound
and video, though they require plug-ins for some formats. www.Wikipedia.com
2. Smart Spiders
Your browser has transported you to a search engine. A search engine looks at your chosen keywords and sends out an
electronic "spider" that crawls the internet, searching through thousands of databases to find the websites that are most
relevant to your keywords, and then lists them in order of relevance to help you find the site that is as close as possible to
what you are looking for.
The address is where? You have seen the following rectangular box image at the top of your screen, and may be able to
look at it right now. It is the field in your browser that accepts an "address" to travel to your destination on the Internet.
URL stands for UNIFORM RESOURCE LOCATOR. The "http" part refers to the "protocol," which we will discuss a bit
later. The "www, of course," stands for the World Wide Web.
http://www.doggiecoupon.com
This box, or field displays the address of wherever you currently are on the web. But you can also simply type an address
like the one above, or copy and paste it into the URL field, then press Enter, and you will magically be flown to that
"address" on the Internet. Remember that when you are typing in a URL, it must be exact in order for the browser to find
the site. So, when someone asks you for a URL, you will know they simply want a website address!
To find Google™, you would type into your browser's URL field www.Google.com. When Google™ comes up on your screen, it offers you a simple window wherein you will type your key words.
Once you have typed the keywords "dog food coupon" into this field, Google's "spiders" will look all over the web to choose web pages that make the most sense, and will then list them for you. Here is what a typical search result looks like when your search engine (Google™) "finds" what you are searching for:
See that the search engine was able to locate an entry that contains the exact same words as your keywords, and this entry
is number one on the indexed list. In Google™, you could click your mouse on the dark blue hyperlink that the search engine offers, and you would be whisked off to the web address, or URL, that is displayed in red at the bottom of the entry. In this case, the website Google™ has suggested is www.doggiecoupon.com.
Is it a "hot" link? How can you tell if a word or sentence or image is an active "link?" Links usually show up in a royal
blue color. Another way to tell is when you place your mouse's cursor over the text or image, the cursor symbol changes
from an arrow to a hand. This is your indication that clicking your cursor will transport you to another spot.
What is a spider?
A spider is "software that gathers information from the Web and places it into a large database that can be searched by search engines" (Gralla p. 395).
In other words, a spider is a shopper that picks out and sorts data that is most relevant to your keywords, and then turns it
over to your search engine. Easy!
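For the curious, the toy Python sketch below imitates, in miniature, what the spider-and-indexer combination accomplishes: a few made-up "web pages" are indexed by the words they contain, and a keyword query then returns only the pages that match. Real search engines do this for billions of pages, but the basic idea is the same.

# A toy sketch of what a search engine's indexer does: read documents and
# build an index of which words appear in which documents. The "documents"
# here are made up; a real index covers billions of pages.

documents = {
    "doggiecoupon.com": "dog food coupon savings for every dog owner",
    "puppylove.org": "adopt a puppy and learn about dog care",
    "beachball.com": "inflatable beach ball toys",
}

index = {}
for address, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(address)

# A simple keyword query: find pages containing every keyword.
keywords = ["dog", "coupon"]
results = set.intersection(*(index.get(word, set()) for word in keywords))
print(results)   # {'doggiecoupon.com'}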
When you want to search for something on the Internet, try to think of the most basic specific words that describe the data
you want. Search engines are smart, but they cannot read your mind. Be as succinct as possible, without going into great
detail. For instance, "inflatable beach ball" is better than "12-inch colored blowup ball for playing at the beach," and far
better than just "ball." If you experiment for a while with key words, you will begin to see how the search engine takes
your requests quite literally. It's a good idea to "surf" the Internet in the beginning, to get the feel of how it is set up and
how to find what you want.
Glossary
Browser - Short for Web browser, a software application used to locate and display Web pages
HTML - Hyper-text Markup Language - Short for Hypertext Markup Language, the authoring language used to create
documents on the World Wide Web
Hyperlink - A link on a web page that sends you to another web page or resource
Key Words - Search words used for sorting and locating information on the Internet
Link - In hypertext systems, such as the World Wide Web, a link is a reference to another document. Links are sometimes called hot links because they take you to another document when you click on them.
Search engine - A website that allows you to perform searches throughout the Internet
Spider - Software that gathers information from the Web and puts it into a large database that can be searched by search
engines
Surfing - To move from place to place on the Internet searching for topics of interest
Website - A site (location) on the World Wide Web that contains a home page, which is the first document users see when they enter the site, and may also contain additional documents and files. Each site is owned and managed by an individual, company or organization
As we discussed earlier, when we create and send data, or electronic information, on the Internet, it is broken down into
smaller packets to make it compatible with all the places it needs to travel in order to reach its target.
Packets can be imagined as chunks of data, each carrying along signals which allow them to cross hubs, routers and other
directional hardware, as they quickly make their way to their final destination. (We will discuss packets in more depth
later in the course.) The discovery of packet technology was really what gave birth to the Internet as we know it. The
following excerpt from the Internet Society reflects the point at which a real Internet began to take form.
"The objective was to develop communication protocols which would allow networked computers to communicate
transparently across multiple, linked packet networks. This was called the Internetting project and the system of networks
which emerged from the research was known as the 'Internet'"
(http://www.isoc.org/internet/history/cerf.shtml).
The sets of rules that govern the transport of these packets are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). You can think of protocols as simply the rules that govern the transportation of data.
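The toy Python sketch below illustrates the packet idea in miniature: a message is broken into small numbered chunks, the chunks are scrambled to imitate taking different routes, and the receiving end uses the numbers to reassemble the original. The message and packet size are invented; real TCP/IP adds headers, checksums, and acknowledgements on top of this basic idea.

# A toy sketch of the packet idea: a message is broken into small, numbered
# chunks, the chunks may travel (and arrive) in any order, and the receiving
# end uses the numbers to put the message back together.

import random

message = "Patient education handout: hand hygiene saves lives."
PACKET_SIZE = 10   # characters per packet, chosen just for the demo

# Sender: break the message into numbered packets.
packets = [
    (sequence_number, message[i:i + PACKET_SIZE])
    for sequence_number, i in enumerate(range(0, len(message), PACKET_SIZE))
]

random.shuffle(packets)            # packets can take different routes...

# Receiver: sort by sequence number and reassemble.
reassembled = "".join(chunk for _, chunk in sorted(packets))
print(reassembled == message)      # True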
You have probably heard the term "hub" if you work in an office with inter-connected computers, or a LAN. The hub is
simply a piece of hardware that connects all the computers to each other on one network. If one computer is not connected
to the hub, it is off the network and cannot communicate with the others. A hub is generally just a central board containing
as many outlets as the network has nodes. The outlets look like a mass of telephone line plug-ins. In an office, it is
normally in a special room where the network's server resides.
A router, on the other hand, is most often used to connect networks together. Routers are smart. They examine data
packets to determine where they are headed. They then take a look at the volume of activity that is taking place on the
Internet, and make the decision to transfer the data to another router closer to the packet's final destination. Routers are
traffic cops of the Internet. They ensure that all data gets sent to where it's supposed to go via the most efficient route.
(Gralla)
The National Science Foundation, an early sponsor of networks built on this packet technology, funded one of the largest backbones (the vBNS, or very high-speed Backbone Network Service) for the Internet. Super-computers developed by colossal
organizations, generally the government, academia, and the military, form the "backbones" for thousands of internet
service providers (ISPs). While our personal computers continue to shrink in size, make no mistake: these super-computers
that form backbones of the internet take up the space of large rooms and consist of rows and rows of encased technology.
Regional commercial internet providers - like America Online and Microsoft and large utilities such as telephone
companies -- as well as smaller, regional networks of private and public agencies form a "loosely organized alliance,"
each one funded under different circumstances and for different consumption (Gralla).
Domain Name
A domain name is part of a URL, and is a name that is chosen by the owner of a web page or web site. Many domain
names represent more than one IP address. What that means is that when a browser takes you to a domain name such as
the one we have been looking at, it is often taking you just to the first page, or landing page, of a website. Each additional
"page" within site is labeled differently.
Why? Your clue is the extension, or suffix, which in this case is ".com" or dot com. What .com means is that this is a "commercial" web page, and you can be sure that a commercial site is in the business of selling something.
If you see a web address that looks like this: www.puppylove.org, you will know that the ".org" extension, or suffix,
indicates this site is an organization of some kind whose goal is not profit-making. Likewise, a ".gov" extension indicates a
government agency site, an ".edu" extension indicates an educational institution site, a ".net" extension indicates that the
site accommodates a network. Here is a quick list of some of the more common extensions, or suffixes you may see in a
URL:
.ca - Canada
.com - Commercial business
.edu - Educational institution
.gov - Government agency
.mil - Military
.net - Network organization
.org - Organization (nonprofit)
.th - Thailand
These suffixes indicate which "top level domain" or TLD an IP address belongs to, and there are only a limited number of
these top level domains.
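A short Python sketch, shown below, can pull these pieces out of a URL using the standard urllib.parse module. The address is the example used earlier in this lesson.

# A minimal sketch that pulls apart a URL into the pieces described above:
# the protocol, the domain name, and the top-level-domain suffix.

from urllib.parse import urlparse

url = "http://www.puppylove.org"
parts = urlparse(url)

protocol = parts.scheme                      # 'http'
domain = parts.netloc                        # 'www.puppylove.org'
suffix = domain.rsplit(".", 1)[-1]           # 'org'

print(protocol, domain, suffix)
print("Nonprofit organization?", suffix == "org")   # True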
HTTP
We briefly discussed HTTP, which stands for Hypertext Transfer Protocol, in Lesson One. It has no visible effect on your
Internet activities, but you should remember that HTTP defines how a message is transmitted and formatted - it is a set of
rules that gives a message to the web server and tells it to transmit data. A complete URL often, then, looks like this:
http://www.thepuppyshop.com
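For readers who like to peek under the hood, the small Python sketch below sends the same kind of HTTP command a browser sends when you enter a URL, then prints the server's reply. It uses the standard urllib.request module and the demonstration site example.com, and it assumes you are connected to the Internet.

# A minimal sketch of the HTTP command a browser sends when you enter a URL.
# It asks a web server for a page and prints the response. Requires an
# Internet connection; example.com is a site set aside for demonstrations.

from urllib.request import urlopen

with urlopen("http://example.com") as response:
    print("Status:", response.status)                    # 200 means "OK"
    print("Content type:", response.headers["Content-Type"])
    page = response.read().decode("utf-8", errors="replace")
    print(page[:80], "...")                               # first bit of the HTML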
Glossary
vBNS or Backbone - High-speed network infrastructure, some of it paid for by the federal government through large agencies such as the National Science Foundation, that connects supercomputer centers around the world
Data and Programs - Distinct pieces of information, usually formatted in a special way. All software is divided into two
general categories, data and programs. Programs are collections of instructions for manipulating data. Data can exist in a
variety of forms -- as numbers or text on pieces of paper, as bits and bytes stored in electronic memory, or as facts stored
in a person's mind
Hub - A device that connects several computers to one another on a network
ISP - Internet Service Provider - a company that provides individuals and organizations with access to the Internet
Petaflop - A petaflop is the ability of a computer to do one quadrillion floating point operations per second
Router - Hardware that sends packets to their proper destinations along the Internet.
Super-Computer - An extremely fast computer that can perform hundreds of millions of instructions per second
TCP/IP - Transmission Control Protocol/Internet Protocol - The communications protocols that underlie the Internet
TLD Extension - Top-Level Domain- refers to the suffix attached to Internet domain names, such as net, com, org, etc
1. Modems
Obviously, you have already connected to be able to take this course. But connections are an important part of Internet
knowledge, and we will discuss them in this lesson. Whether you are sitting at a computer that is connected to a LAN, or
you are at your private computer at home, there is a connection being made that is much more complex and fascinating
than it might seem.
Modem
Just because it is interesting to know, the word modem came from the words "modulator" and "demodulator," which gives
you an idea of the modem's job. A modem enables your computer to transmit its data to telephone or cable lines, which
connect you to the Internet. Why do we need a modem? It has to do with how data is stored and transmitted.
Digital/Analog
Your computer is only able to store its data as digital information, while data that is transported over telephone and cable
lines is done so in analog waves. The modem acts as a mediator, changing your data's digital structure to allow it to travel
in analog form. A clear and elegant explanation of the difference between digital and analog is as follows:
Computers are digital machines because at their most basic level they can distinguish between just two values, 0 and 1, or
off and on. There is no simple way to represent all the values in between, such as 0.25. All data that a computer processes
must be encoded digitally, as a series of zeroes and ones.
The opposite of digital is analog. A typical analog device is a clock in which the hands move continuously around the
face. Such a clock is capable of indicating every possible time of day. In contrast, a digital clock is capable of representing
only a finite number of times (every tenth of a second, for example).
In general, humans experience the world analogically. Vision, for example, is an analog experience because we perceive
infinitely smooth gradations of shapes and colors. Most analog events, however, can be simulated digitally.
Photographs in newspapers, for instance, consist of an array of dots that are either black or white. From afar, the viewer
does not see the dots (the digital form), but only lines and shading, which appear to be continuous.
Although digital representations are approximations of analog events, they are useful because they are relatively easy to
store and manipulate electronically. The trick is in converting from analog to digital, and back again.
www.webopedia.com
In other words, a digital format is a limited one that involves encoding with numbers, while that same data in analog form
becomes free to travel electromagnetically over waves. When the data finally reaches its destination - another computer - it is transformed by that computer's modem, or demodulated, back into the more mundane digital format.
Broadband Connections:
A broadband connection is a very fast Internet connection, such as cable modem or DSL. Although these terms may sound
confusing, once you have a basic understanding of the modem, it is easier to understand how data travels on the Internet
through different kinds of connections.
Coaxial:
The "coaxial" cable used by cable television companies provides a greater bandwidth than telephone lines, thus allowing a
much quicker connection to the Internet. Coaxial cable's central wire is surrounded by insulation, as well as by braided
grounding wire. All that insulation forms a shield which protects from interference by other electromagnetic waves, and
makes for a fast, efficient transmission. It was also the cable used for early Ethernet connections.
Cable Modem:
Although not a true modem, cable internet is a very fast Internet connection and perhaps the most popular, especially in
cities and more populated areas. Cable modem uses a network card inside your computer to connect to the cable line. At
this time, Cable Modem is likely to be your best choice for a fast, efficient Internet connection.
Satellite Connections:
You still need coaxial cable to get Satellite Internet service, but it needs only to stretch between your computer location
and the dish that you have specially installed outside your home. Your satellite company will install the dish and very
meticulously "point" it at the proper location in the sky for optimum reception. In some cases, a satellite television
company will provide Internet access on the same dish, but more often, a special satellite Internet company requires its
own equipment and setup.
Your Internet request travels to the satellite company's NOC, or Network Operations Center, which launches it up into the
sky to its satellite. The satellite hurls the info to the dish outside your home, which transmits it over the cable and attached
modem, which forwards it to a network card in your computer. Looking at it this way, it sounds like the process of using
satellite Internet would involve a day's wait. Not at all. You will be receiving data at 400 Kbps rather than the typical
28.8Kbps.
What is Kbps?
Short for kilobits per second, this is simply a measure of data transfer speed. For the purposes of this course, we will not
delve too deeply into transfer speeds, but it is good to know that there are differences in speed, depending upon your
equipment and your connection.
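A quick back-of-the-envelope calculation, sketched in Python below, shows why the difference matters: it compares how long a hypothetical one-megabyte photo takes to arrive at the two speeds mentioned above, treating a kilobit as 1,000 bits.

# A quick worked example of what the speeds above mean in practice:
# how long a 1-megabyte photo (about 8 million bits) takes to arrive
# at 28.8 Kbps versus 400 Kbps. A kilobit is taken as 1,000 bits.

file_size_bits = 1_000_000 * 8          # 1 MB expressed in bits

for speed_kbps in (28.8, 400):
    seconds = file_size_bits / (speed_kbps * 1_000)
    print(f"At {speed_kbps} Kbps: about {seconds:.0f} seconds")

# At 28.8 Kbps: about 278 seconds
# At 400 Kbps: about 20 seconds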
The fax machine gave us an indication of how modems work. Everyone knows the somewhat obnoxious sounds of a fax
being sent. But the sounds mean something: When a modem first sends a signal to another modem, there is a steady signal
that tells us the two modems are "talking," and have connected. The warbling tone that follows is called a "handshake"
between the two modems, which are, in a sense, making an agreement about how the information will be handled, at what
speed, and other factors. The sending modem then transforms the digital information to analog form and sends it; the
receiving modem receives it and transforms it back to digital form. Amazing!!
World of Wireless:
Any computer network that has no physical wired connection between the sender and receiver of data is called wireless.
The network is, instead, connected by "radio waves and/or microwaves". Although wireless technology does not use
copper or fiber optic lines, it is, nonetheless, reliant on a physical wired network, somewhere along the line. Wireless
devices depend on Radio Frequency (RF) waves that project a signal which is able to detect a wired network.
Glossary
Handshake - The indication that two modems are making an agreement about the terms of transfer of data
NOC - Network Operations Center - a central location for satellite Internet companies
RF - Radio frequency - waves used in wireless technology
1. Email I
Sending and receiving email on the computer is one of the primary uses of the Internet. Email offers many advantages to
nurses and other healthcare professionals. Before 1995, only those who shared the same network connection could
exchange email. For instance, those using CompuServe as their Internet service provider (ISP) could only email others
who also used CompuServe. Today, with most computer networks being part of the Internet, the exchange of email is
possible with anyone in the world who has an Internet email address. Being knowledgeable about how to use and manage
email makes you a more efficient professional communicator.
Email enhances communication with speed and convenience. Most messages arrive at their destination only a few
moments after they have been sent, even if the destination is halfway around the globe. Email offers a way to avoid
"telephone tag" and to create a written message that the receiver can read several times, convert into voice with special
software, or save. To use email, you need email software (called an email client) and an email account.
EMAIL SOFTWARE
An email client is software used to access email from an email server so it can be viewed and read. An email server is
simply a computer that uses server software to receive and make email available for those who have an account on that
server. Operating systems come with email software such as Windows Mail (formerly, Outlook Express). Other free email
clients usable by both personal computers (PCs) and Macs can be found at Eudora, or Mozilla's Thunderbird. There are
similarities in the menus for all of the email clients. Figure 6-2 shows an example of the Microsoft Outlook 2007 window.
Email software can keep a copy of all messages that are sent and received; however, the default is to delete these messages
when the email software is closed. Many people like to be able to search and view these messages after more than one
session. To accommodate this, the default is easily set to keep the messages. Because these folders can become huge, if
the default is reset to keep the messages, one needs to open and delete the messages on a regular schedule, perhaps every
month. Email software also provides a way to create folders so that incoming mail can be saved into a specific folder for
future reference. Email is ideal for communicating across time zones, particularly those where night and day are opposite,
not only because the sender and receiver can respond at times that are most convenient for them, but also because it saves
postage and telephone expenses.
EMAIL ACCOUNTS
Access to the Internet is through an ISP, either a private one at home or through a college, work, or establishment that
offers free Wi-Fi. There are many ways to create an email account and get an email address. ISPs provide email accounts,
usually two to five per account, for their customers. Colleges also provide email accounts for their students, and many
healthcare agencies create email accounts for their employees. It is also possible to get a free email account at Google,
Microsoft, or Yahoo by visiting their home page and opening an account.
• Email Addresses
Like postal mail, email requires sender and receiver addresses. At first glance, an email address may look forbidding, but a
closer look at the address will reveal the formatting rules. An email address has two parts separated by the @ sign. The
first part, or the letters before the @ sign, is called the user name (sometimes called the login name) and identifies the
sender. The part after the @ is the name of the email server where the email is received. The characters after the last dot in
an email address constitute the suffix that identifies the domain or main subdivision of the Internet to which the computer
belongs (see Figure 6-3).
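The small Python sketch below splits a made-up address into the same parts just described: the user name before the @ sign, the email server after it, and the domain suffix after the last dot.

# A minimal sketch that splits an email address into the parts described
# above: the user name, the email server, and the domain suffix.
# The address itself is made up for illustration.

address = "jane.dela.cruz@mailserver.university.edu"

user_name, server = address.split("@")      # parts before and after the @ sign
suffix = server.rsplit(".", 1)[-1]          # characters after the last dot

print("User name:", user_name)              # jane.dela.cruz
print("Email server:", server)              # mailserver.university.edu
print("Domain suffix:", suffix)             # edu  (an educational institution)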
• User Name
The computer name and domain suffix are dependent on which email server you are using; these are assigned by the ISP
that hosts the email account. Most institutions assign a login name (user id), which includes at least part of the user's
name. If you create an account for yourself, remember that the user name reflects on you. Email messages can be
forwarded anywhere, and the email address of the person who created it, as well as anyone who receives it, is also
forwarded. For this reason, healthcare providers should always select a user name that conveys their profession
appropriately; cutesy names are never appropriate.
Configuring an Email Client
To configure an email client that is on your PC to send and receive email, you need just five pieces of information: the
email account type, the name of the incoming server, the name of the outgoing server, your login user name, and your
logon password. The account type and name of the incoming and outgoing server are assigned by the ISP where you have
your account. The type of email account is usually POP3 (Post Office Protocol), but may also be IMAP (Internet Message
Access Protocol). Outgoing servers use SMTP (Simple Mail Transfer Protocol). In private accounts such as one from your
own PC or one created with one of the free email clients, you select your own login or user name and password when you
open an account. The logon user name and password allow you to retrieve your email from your account. To simplify this
process, Outlook 2007 automatically configures the email accounts using the email address and email server password.
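As a rough illustration of what those pieces of information do, the Python sketch below uses the standard smtplib module to hand a message to an outgoing (SMTP) server. The server name, port, user name, password, and addresses are all placeholders; the real values come from your ISP or email provider, and the script will only work once they are filled in.

# A minimal sketch of the "outgoing server" part of an email account, using
# Python's standard smtplib. The server name, port, user name, and password
# are placeholders; your ISP or email provider supplies the real values.

import smtplib
from email.message import EmailMessage

OUTGOING_SERVER = "smtp.example-isp.com"   # assigned by your ISP (placeholder)
PORT = 587                                  # a common SMTP submission port
USER_NAME = "jdoe"                          # your login user name (placeholder)
PASSWORD = "your-password-here"             # your logon password (placeholder)

message = EmailMessage()
message["From"] = "jdoe@example-isp.com"
message["To"] = "colleague@example-hospital.org"
message["Subject"] = "Unit meeting agenda"
message.set_content("Attached agenda to follow. Thanks!")

with smtplib.SMTP(OUTGOING_SERVER, PORT) as server:
    server.starttls()                       # switch to an encrypted connection
    server.login(USER_NAME, PASSWORD)       # the login name and password
    server.send_message(message)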
Email users should consider having several email accounts, each with a different purpose. One email account should be
used for official communication with coworkers and colleagues. Students should use an account dedicated to official
school communication with other students and faculty. A third email account, using free online email software or a home
ISP, should be used for personal communication. In addition, a fourth email account might be used for online shopping to
trap potential resulting spam.
If you have more than one email address, you may want to configure your email client to download from more than one
server. For example, if you create a free email account at Google or Yahoo, you can set your email client to download mail
from there by using the account settings. Unless you are an expert, use only one outgoing server.
When you activate the Write feature of your email client, your address will already be on the "From" line. Your first step
is to enter the address(es) of the receiver(s) on the "To" line. If you have put the recipient's name in your email client's
address book, when you start to enter the beginning of the name, the address book will automatically finish it for you.
After entering the recipient's name, enter a brief word or phrase about the message on the subject line. Emails with no
subject line may be classified as junk mail by the user's email server and not be received by the recipient. Finally, create
your message and add a signature that clearly identifies you.
There may be times when you want to send the same message to more than one person. This is simple if the message is a
reply to a message that was also addressed to others to whom you wish the reply to be sent; select "Reply All" to start
your message instead of "Reply." If you routinely send email to the same group of people, such as to members of a
committee that you are on, you can use the address book to create a group. With a group, you then place the name of the
group on the "To" line, instead of each name. Use Help to learn how to create a group of recipients. When sending a
duplicate of a message to others, you have the option of using "Cc" or "Bcc." The acronym "Cc" is derived from the carbon copies used in the earlier days of the typewriter. The "Bcc" indicates a blind copy where the receiver is unaware that others have also
received a copy of the message. Blind carbon copies should be used judiciously; the sender must consider the ethics of
secrecy when using the function.
Sometimes you want to forward a message that you have received to someone else. Before forwarding a message, give
thoughtful consideration to the sender and receiver. In consideration of the sender, send a copy of the email to the original
sender when forwarding the email to others. When forwarding a message, edit the message to remove the email addresses
of prior recipients. Avoid passing on chain letters; most people do not want them, and many agencies will terminate your
account if you do this. Messages warning about viruses or other dire things that will happen to your computer if you don't
do something are generally hoaxes.
EMAIL SIGNATURE
Email written by professionals should always include a signature with the sender's name, title, company name, and
geographical location. A signature is similar to the return address on a postal letter; however, personal information such as
street addresses and home phone numbers should be avoided. Most signatures are one to five lines; a signature may be
personalized by including a favorite quotation.
EMAIL FILE FORMATTING
There are three main file formats for creating and sending messages: plain text (TXT), rich text file (RTF), and hypertext
markup language (HTML). TXT files have no formatting, which makes them ideal for electronic mailing lists because any
email client can read them. RTF files allow for some formatting, but not the robust features of a word processor. HTML
files use HTML tags to display formatting of text. Not all email software can read HTML messages, especially in less
developed nations. For file formats that are either RTF or HTML, you can use the formatting functions, such as bold, italics,
and bullets in your email client.
As convenient as email is, it must be used appropriately. Care must be taken in deciding what is included in a message
that is sent via email; contents of email must always adhere to common decency. Email should never be used to give bad
news, such as a poor evaluation, work layoff, or pay decrease. A rule of thumb is not to include anything in an email
message that you wouldn't want to read on the front page of the newspaper since it is very easy for a recipient to either
accidentally or purposely forward your message to someone else. All messages carry headers that can be traced to the
original sender. Even if the sender uses a remailer, a service that strips the identifying header so that email can be sent
anonymously, sender information can be traced by contacting the remailer service (Freeman, 2007).
There is no guarantee that email that is sent using an educational facility or employer email server or email provider
service will be private. A growing trend is for institutions to monitor email to avoid potential litigation and investigations
from government agencies (AMA/ePolicy Institute, 2008; Clearinghouse, 2006). There is still no definitive case law on
whether students or employees have the right to privacy in email. This includes situations in which the institution has said
that your email is private because if the message can be construed as damaging to the institution, privacy promises may be
legally invalidated. There are several court cases that have set precedent and ruled in favor of the employer (Bourke v. Nissan Motor Corporation in the United States, 1993; Smyth v. The Pillsbury Company, 1996; Sykes, 2000).
2. Email II
EMAIL ETIQUETTE
Email etiquette, like regular mail letter etiquette, is essential for professional communication (Jones, 2006). The rules for
creating email are important. First of all, always include a short pertinent subject line. When replying to email, make sure
to include appropriate information from the prior message in the reply. Email is a special form of communication, not as
interactive as the telephone, but more interactive than written communication. Because it often seems very personal and
quick, there is a tendency to regard it as a verbal conversation and forget that the recipient may have been involved in
many complicated matters since he or she last sent you a message. For this reason, mailers provide an option to include
the prior message with the original when you reply. To prevent messages that rapidly become too large and
uncommunicative, edit the prior message so that only the parts pertinent to your reply are included. In general, email
should be short and to the point, but not too short. A message that is too short may be misinterpreted by the recipient, who
may feel that the sender was being abrupt or curt.
Use the appropriate font and case when writing email. According to email etiquette, use of all uppercase letters (all caps)
indicates that the user is shouting, so all caps should never be used for sentence construction. Use font colors thoughtfully.
Depending on the message, a red-colored font may be interpreted as swearing (Cleary & Freeman, 2005). Finally, email
should always be signed. Professional email should include a signature, title, and contact information, such as a mailing
address of the agency, phone number, and home page URL. (see Box 6-1).
There are some major differences between email and letters sent by postal mail. Postal mail letters generally have the
reader's full, undivided attention. In contrast, because of the sheer volume of email, the reader may not read the message
thoroughly, causing misunderstandings resulting in problems in relationships. If there is a chance for disagreement, or
email messages seem to be causing disagreements, use email to set up a time for either a person-to-person meeting or a
telephone conversation.
ACRONYMS AND EMOTICONS
Email is devoid of the nonverbal cues of face-to-face communication. Thus, expressions of subtle meaning and tone
are easily lost. With the informality of much email and the limited typing skills of many who send email messages, it is
only natural that common acronyms and icons have developed. They are only valuable when the recipients understand
them. Acronyms use the first letter of words or word parts to communicate a common phrase (see Table 6-1). They are
commonly used in informal email and instant messaging. Acronyms are not appropriate for professional communication.
To provide the appropriate body language tone, telecommunicators have devised icons called emoticons (emotional
icons), sometimes called smileys, which can be created on the keyboard. For example, one that is frequently used is :-).
When tilting your head to the left, which is the position for "reading" keyboard character emoticons, you can see a smiling
face (see Table 6-2). Some email clients include graphic emoticons. A classic graphic emoticon is a round, yellow button
with two dots for eyes and half circle for a smile. Use emoticons sparingly, if at all, in professional business
communication.
ATTACHMENTS
Many people use email to send files created with other software, such as a word processor or spreadsheet. When including
attachments in your email, use the following guidelines. First, always alert the receiver before sending an attachment and
verify that the receiver has the software that can view the attachment. For example, if a file is created and saved with
Publisher with the Publisher file extension ".pub," the recipient will not be able to view the file unless he or she has
Publisher on their computer. Be sure that the recipient has the appropriate version of the software that you are using; many
file formats from the same vendor's program are not backward compatible.
Attaching files to email messages increases the size of the message, which in turn increases the time required to both send
and receive it. If the attached file is large, consider "zipping" it. Zipping a file involves using a piece of software that
compresses the file to make it smaller. In general, text files can be compressed much more than graphics files. Files may
be zipped separately or as a group (archived). Before zipping a file, make sure that the recipient has unzip software in his
or her computer and that the person's email server allows for zipped attachments (some email servers remove attachments
with certain file formats, such as a zipped file, to prevent the spread of potentially harmful viruses). Most of today's
operating systems include some type of zip/unzip software. Use the Help feature to learn to use it.
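Most operating systems let you zip files with a right-click, but the idea can also be shown in a few lines of Python using the built-in zipfile module. The file names below are made up; the files would need to exist in the current folder for the sketch to run.

# A minimal sketch of "zipping" files before attaching them, using Python's
# built-in zipfile module. The file names are made up; the files must exist
# in the current folder for the script to run.

import os
import zipfile

files_to_send = ["care_plan.docx", "unit_statistics.xlsx"]

with zipfile.ZipFile("attachments.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    for name in files_to_send:
        archive.write(name)    # add each file to the archive, compressed

original = sum(os.path.getsize(name) for name in files_to_send)
zipped = os.path.getsize("attachments.zip")
print(f"Before: {original} bytes, after: {zipped} bytes")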
An alternative for attaching a large file or zipped file is to use file transfer Internet Web sites. Examples include
YouSendIt, SendThisFile, and LogMeIn. The file transfer Web sites provide free as well as fee-based services; both
require the user to register to receive a login and password. To send a file, go to the Web site, log in, enter the recipient's
email address and an email subject, browse (find in your folders) to the file, select it, click open on that window, and click
Send It. The recipient receives an email with a Web address (URL) to download the file. If you are in the habit of
exchanging large files with someone whose email account is limited in sending or receiving large files or if you are
limited in this respect, one or both of you may want to get a free email account where the size of attachments is not as
limited.
MANAGING EMAIL
Taking a few minutes to organize and manage email can save you much time and energy later (see Table 6-3). Whether
you use a standalone or Web-based client, the email management processes are similar. All email clients have Help menus
to guide you through the process of organizing email. You can use email alerts to assist in prioritizing email. Alerts
include flags, stars, and font colors. Use email filters to automatically file incoming email into designated folders and send
you personal alerts for email from certain senders. You can also use filters to filter unwanted mail such as spam
(unsolicited commercial email) to the deleted items folder.
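The toy Python sketch below shows the logic behind a filter rule: each incoming message's sender and subject are examined, and the message is routed to a folder accordingly. The messages and rules are invented; real email clients let you build equivalent rules through their menus.

# A toy sketch of what an email filter rule does: look at the sender and
# subject of each incoming message and decide which folder it belongs in.
# The messages and the rules are invented for illustration.

inbox = [
    {"from": "dean@university.edu", "subject": "Clinical schedule update"},
    {"from": "deals@superstore.example", "subject": "WIN A FREE CRUISE!!!"},
    {"from": "editor@nursinglist.org", "subject": "Digest: today's postings"},
]


def choose_folder(message):
    sender, subject = message["from"], message["subject"].lower()
    if "free" in subject or "win" in subject:
        return "Junk"                       # looks like spam
    if sender.endswith("@nursinglist.org"):
        return "Discussion Lists"           # file list mail automatically
    return "Inbox"                          # everything else stays put


for message in inbox:
    print(choose_folder(message), "<-", message["subject"])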
OUT-OF-OFFICE REPLIES
There are times when you will not be answering email, such as when out of town. This, however, does not stop your
email. When you do not answer, your correspondents may become upset. To show consideration for them, it is smart to
have an automatic response sent to them informing them that you are unable to read and respond to their message and
letting them know when you will be able to do so. Most ISPs provide this service; to activate it go to your account on their
server and use Help to learn how to do this. If you are using Gmail, look for "Vacation Responder" in the Settings menu.
SPAM
Spam (junk email) is not only a nuisance, it is potentially harmful. As annoying as junk email can be, it is important to
recognize it to proactively limit or eliminate it. A first clue of junk is the sender's email address. If you don't know the
sender, the email is probably junk. To proactively limit or eliminate junk mail, learn how to develop email filters and
rules. In Windows Mail 2007, the junk email options on the Tools menu can be used to filter spam to the junk folder and to decide what to do with mail determined to be junk. In Gmail the "Filters" window can be used to discard spam. Some
online email clients allow the user to report spam with the click of a button.
Spam is the electronic version of junk postal mail, except that it shifts the costs of advertising to the receiver; it exists
because it is a cheap way to advertise. It also fills the Internet with unwanted messages. Usually, spam originates from a
false address, so replying is a waste of time. If, however, you receive a spam message and you know from the email
address that the ISP exists, such as one of the online services, you can forward it to postmaster@online.service (substitute
the name of the service for online.service). Most services and ISPs take a very dim view of spamming and terminate the
account of anyone who sends junk email, or if someone's identity has been illegally assumed, the ISP will look for ways to
prevent this from happening again.
All email users must be knowledgeable about spamming practices and malware, such as phishing and pharming. One of
the best ways to avoid spam is to not give out an email address to any Web site. An alternative is to acquire a free mailbox
site and use that email address when completing online forms, for example, when shopping online. Records of online
shopping are then emailed to the free designated account along with any advertisements that they might send. When
visiting the free email account, legitimate email can be forwarded to another address and spam can be deleted. Users
should never open a known spam message! Finally, users should never purchase a product or service advertised from
unsolicited email (Project, 2007).
3. List Servs
An email discussion list, sometimes referred to as a listserv (no "e" at the end) from the first software that was used to
create one, is made up of a group of subscribers with a common interest. The software allows subscribers to receive
discussion postings via email. Subscribing, or joining, a list is generally free. Instructions for subscribing are specific to a
list and are generally found on the Web site for the list or as a click at a place where you find information about the list.
Once subscribed to a group, copies of any message posted to the group are sent to all members' mailboxes (see Figure 6-
4). Members can reply to any of the messages or send an entirely new message to the group. The tasks of keeping track of
subscribers and sending copies of messages are accomplished by a software program. Other kinds of email list software
include Majordomo, Mailbase, and Listproc.
Most lists are automated, that is, the software immediately sends out any messages posted to the group's posting address;
some groups are moderated, that is, the message is first vetted by the list owner before posting.
Most mailing list software offers many options for subscribers beyond just sending messages. For this reason, there are two
addresses for each group: one that manages the list, performing such tasks as subscribing a new member or invoking the
available options such as temporarily suspending mail from the group; and another address that is used only when one
posts a message to the group. This information is included in the welcome message that is sent to all new subscribers. It is
important to save this message because it not only provides you with the administrative address but also tells you how to
unsubscribe, find the names of other participants, digest your mail (so that you receive one message per day with all of
that day's postings), and temporarily stop your mail from the list (*very* important if you are not able to check your mail
for a while). The instructions for use of email lists vary. When replying to a list message, pay particular attention to
whether you need to use Reply or Reply All. With some lists, the Reply feature will send your reply to all list
participants; with other lists, you need to use Reply All to send the reply to more than the person who wrote the last
message. Make sure that you sign email posted to mailing lists with a standard "signature" that includes at least your
name and email address. Some lists "strip" the email addresses of the participants from the header.
FINDING A NURSING DISCUSSION LIST
There are electronic mailing lists for just about every nursing and other healthcare specialty imaginable; finding the right
one is not always easy. Although you can search the Web using the term "nursing discussion lists" or "nursing listserv,"
staying up-to-date on what mailing lists are active is almost impossible. Examples of popular nursing discussion lists are
as follows:
Regular listserv messages are not threaded, but if the group has an archives page, the messages are usually threaded on
that page. The threading concept works only when posters preserve the original subject, either by using the Reply or
Reply All function or by copying and pasting the original subject on the subject line prefaced by "Re:". It breaks down
when posters use the Reply function to start a new topic without changing the subject line. In online courses, messages are
also threaded, so learning to use threaded messages is an important skill.
New list users may think that a list is a way to avoid doing a needed literature search. Don't expect other list members to
do this for you. If you want information about a topic, when asking for help indicate the sources you have found. When a
member indicates that he or she has done homework and is willing to share, other members are very happy to provide
more help. Do remember to sign your email and use a contact signature, which includes your email address and the name
and address of your affiliating organization.
FLAME WARS
Messages sent to an electronic mailing list go to many people whom the sender does not know. The messages will be read
under a variety of conditions, by people in different moods. Given these circumstances, it is not surprising that
occasionally a member takes umbrage at what another says and posts an overheated message. A message of this sort,
called a flame, can easily deteriorate into a flame war when others respond derogatorily with other positions on the
disputed topic. In most groups, this ends on a conciliatory note when cooler heads prevail. Flame wars can happen in any
group, but tend to be more common in groups with controversial topics.
4. Blogs
Blogging has become so popular that many news reporters and editors use this format to influence and inform the public.
Blogging has had an impressive impact on political elections and public opinion over the last few years, and is now an
accepted part of Internet participation. Blog topics range from very personal reflections to music, art, travel, fashion,
education, and corporate life.
You can read and join blogs in virtually every area in which you are interested. But, you do not need to go to each blog
several times a day to see what has been updated or added. Technology called RSS, which stands for "Really Simple
Syndication," will feed to your computer the updates from the blogs you choose to subscribe to, every time there is an
update.
5. Social Networks
Few people have not heard of Facebook or MySpace. Most of us know of these social sites through word of mouth, the
news media or other stories of relationships made, broken or restarted through these incredible electronic hangout sites.
They are certainly not just for kids; many business professionals and companies have their own page on Facebook or
MySpace.
On March 11, 2009, Reuters reported that "Networking and blogging sites account for almost ten percent of time spent on
the internet -- more than on email." The article goes on to say, "The biggest growth in Facebook membership comes from
the 35-49 year old set. Facebook has added twice as many 50-64 year old visitors as it has visitors under 18."
A social network provides the user with a personal page, upon which he or she can post anything and everything about
himself, his life, his likes, interests, his poetry or any other expression of self, and especially, his friends! On your page,
you can "link" with "friends" who also have their own pages. Those friends will show up on other friends' pages, and
perhaps, those people will link up, as well. Thus, the "network" concept. Wikipedia's description, although wordier, is,
perhaps, more sophisticated:
Social Networks can increase the feeling of community among people. An ISN [internal social network] is a closed/private community that consists
of a group of people within a company, association, society, education provider and organization, or even an "invite only"
group created by a user in an ESN. An ESN [external social network] is open/public, available to all web users to communicate, and designed
to attract advertisers. ESNs can be smaller specialized communities (i.e., TheSocialGolfer, ACountryLife.Com, Great
Cooks Community) or they can be large generic social networking sites (e.g., MySpace, Facebook, etc.).
However, whether specialized or generic there is commonality across the general approach of social networking sites.
Users can upload pictures of themselves, create their 'profile' and can often be "friends" with other users. In most social
networking services, both users must confirm that they are friends before they are linked. For example, if Alice lists Bob
as a friend, then Bob would have to approve Alice's friend request before they are listed as friends. Some social
networking sites have a "favorites" feature that does not need approval from the other user. Social networks usually have
privacy controls that allow the user to choose who can view their profile or contact them.
All you need to do to join one of these networks is go to the site on the Internet and sign up, following its format, with a username
and a password. You can add as much or as little about yourself as you'd like. You may find the practice addictive, as you
see the things that people "post" on your "wall," and the photos of people you may or may not have heard of.
6. Downloads
Downloading
Downloading is the process of copying files from a larger network to a smaller system, like from the Internet to your PC.
For the most part, downloading has been made easy for the average user. We click a link and wait for a file to load onto
our local computers without having to do much else. But there is more going on behind the scenes: the Internet's File Transfer Protocol (FTP) software is what allows us to download from, as well as upload to, the
Internet. FTP is simply another set of rules, or protocols, that governs an Internet process. We will discuss FTP again
later when we learn about setting up a website.
Most files are compressed in order to save time in the download process, and must be opened, or unzipped, once they
reach their destination. Not to worry - it is a rare Internet site that demands we know all about FTP and its workings. For
now, just remember that some downloads can damage your computer if they are infected with a virus. Also, consider the
fact that your computer needs sufficient memory and disk space to handle whatever you are bringing from the Internet.
We can now easily download music, graphics, text and video, following the instructions provided with the download link.
Downloads are prepared with instructions for installation; they generally take care of themselves with a few
simple clicks from you, and they install themselves in the appropriate location once they reach your computer.
Some of the popular games at this time are World of Warcraft, Rock Star and Halo 3; many games require a game
console, such as Xbox or PlayStation, and charge a monthly fee to join the adventure.
Virtual Reality
A virtual reality is an artificially created "world" or "environment" that can be experienced on a computer. To a certain
degree, virtual reality is already available on the Internet, on sites that allow us to walk through virtual worlds. Preston
Gralla tells us we can "walk through a giant computer, explore bizarre art galleries, visit outer space, go to the sites of
what seem like ancient ruins, explore inside the human brain, and much more" (Gralla p. 275). Virtual reality, for the
average Internet user, is still on the horizon. Loading entire worlds, as you can imagine, takes a lot of memory because of
the humungous graphics involved, but it is coming. In the meantime, it is already being used in medicine and education,
allowing scientists to learn about worlds we cannot see with the human eye, and "territories," ranging from a molecule to
the cosmos. Currently, we can experience virtual reality with special gloves, goggles and other gadgets that are usually
available in entertainment centers. Virtual reality is definitely the wave of the future.
Introduction
The Internet has given us inexpensive asynchronous discussions, synchronous instant communication, email, electronic
mailing lists, and the library known as the World Wide Web (WWW). Creative users, not content to have the WWW as
just a repository of information, have given us social networking, interactive sites, instant news, and personal opinions not
regulated by traditional media.
The networking thus provided allows us to communicate with colleagues worldwide and to stay abreast of standards of
care and practice. History has taught us through recent devastating disasters that electronic networking can provide a
means for organizing and delivering healthcare provider volunteer assistance, pharmaceuticals and medical supplies, as
well as a way to provide care to those in need. Nurses trapped within the disasters have been able to connect to the
Internet and chronicle events using email and online blogs.
Chat is interactive, real-time communication that has been around for a long time, but it has morphed into newer types of instant
communication. Chat is an optional feature that is available in course LMSs. In this milieu, the computer screen shows a
list of the participants as they enter the chat room. Chat users use their real names or a handle (alias), type their
conversation, and tap the Enter key to send the message. Others in the chat room respond with their replies.
TELEPHONY
Telephony refers to computer software and hardware that can perform functions usually associated with a telephone.
Telephony products are often referred to as voice over Internet protocol (VoIP). VoIP provides a means to make a
telephone call anywhere in the world with voice and video using the Internet, thereby bypassing the phone company. The
free versions provide phone communication from computer to computer. All you need is a microphone, speakers, and a
sound card. If you have a video camera, you can have video calls. Some VoIP software includes conference and video calling.
The connection, computer processor, and software determine the number of people and quality of the connection. For a
small fee, you can choose to call telephone numbers on mobile phones and landlines using the Internet.
8. Web 2.0
In the quest to make the Web function in a more personal manner, a concept called Web 2.0 was born. It does not have
strict boundaries from the original Web but is about people interacting, sharing, and collaborating. The term "Web 2.0"
was first coined by O'Reilly and others to describe what online companies/services that survived the dot.com bubble burst
in 2000 had in common (O'Reilly, 2005). Web 2.0 is more of a philosophy than a technology and has many interpretations.
It is based on a common vision of its user community. The objective of all Web 2.0 services is to mutually maximize the
collective intelligence of the participants and add value for each participant by formalized and dynamic information
sharing and creation (Hogg, Meckel, Stanoevska-Slabeva, & Martignoni, 2006).
Web 2.0 services are an interactive development process (Hogg et al, 2006). Information sharing can be video, data, text,
or metadata such as annotations of history. The format is determined by the provider, with the basic idea being that
information is created and shared among many users, maximizing the collective intelligence. The results can be commonly
accepted content or opinion. This type of intelligence requires some type of regulation, often by users of the site. User
recommendations on eBay are one example, as are reviews of various travel facilities by those who have used them.
Web 2.0 provides a rich medium to nurses and other healthcare professionals for interactive networking. A professional
networking Web site is like a visit to a colleague's office or home where one can see personal pictures and other decor
that reflect his or her ideas and personality.
Wikis
A Wiki is one example of a collective intelligence application. A "Wiki" is a piece of server software that allows users to
freely create and edit a Web page's content using any Web browser. Wiki supports hyperlinks and has a simple text syntax
for creating new pages and crosslinks between internal pages on the fly (What is Wiki, 2002). Wikis are Web sites
designed for sharing and collaborative work on documents. They are used for committee work, clubs, classrooms, and
knowledge management. Wikis can be private or public; for example, a Wiki designed for committee work would be
private. The Wiki administrator identifies the membership using email addresses, and the Wiki automatically emails
members with information on how to register for the Wikispace.
The culture of the group is a factor that affects the effectiveness of collaboration (Watson & Harper, 2008). Users must be
willing to share knowledge and exchange ideas. They must be open-minded and willing to seek new knowledge. When
Wikis are used as an updatable knowledge management repository, users must have the technical skills to upload and edit
documents.
Wikipedia is an example of a public Wiki. It is a popular, free, online encyclopedia, which began in 2001. Within the first
six months of development, users had contributed 6000 articles (Neus, 2001); as of November 2008, there were over two
and a half million articles written in English (Wikipedia, 2008). Wikipedia articles are collaboratively created and
improved by users. To make any changes to Wikipedia articles, users must register and then log in to the site. Tabs at the
top of the article provide a means for collaborative discussion on the topic, editing the topic, or viewing the history of
changes (an audit trail). All changes are recorded and can always be undone. The strength of Wikipedia is that it provides
information on an ever-expanding number of topics. Critics are quick to point out that the quality of the articles is
inconsistent. For technical information, however, it is often one of the most up-to-date and best sources.
Sharing Media
The Sketchcast tool allows a registered user to draw and narrate an idea and then share it via a Web page or blog.
Slideshare allows users to embed a slideshow into a Web site or a blog, synch audio to slides, and join groups who share
the same interests. Presentations can be downloaded to the user's computer or shared with others. The search terms
"nursing" and "nursing research" at this site will find slideshows pertinent to the nursing profession. "Slide" at
http://www.slide.com/ also allows you to create online slideshows including music videos as well as create a guest book
for your site on which friends can share their pictures.
RSS FEEDS
Because blogs are updated, but generally not on a fixed schedule, you can subscribe to an RSS feed which will notify you
when there is new information on a blog. RSS is an acronym for both Really Simple Syndication and Rich Site
Summary (your choice). RSS feeds are not confined to blogs. You can have RSS feeds of new items from Web sites
including electronic library searches, news, and blogs sent to your personalized Web browser home Web page, such as
iGoogle or MyYahoo. The signal that a Web site is amenable to being subscribed to is the orange icon on the page itself,
or in the location bar (see Figure 6-1). Since December 2005, this icon has become the industry standard denoting RSS
feed availability. Using the Help menu in your Web browser or email application, you can easily subscribe to the RSS feed for
any site where you see the icon.
MASHUPS
A mashup is a Web application that takes data from more than one source and combines it into a single integrated tool
(Merrill, 2006). There are several varieties of mashups. One type involves mapping, in which data is overlaid on a map.
This type of mashup was given a push with the advent of Google Maps which, developers discovered, could be overlaid
with current data from various sources. This type of mashup could be used in healthcare to plot cases of infectious
disease on a map of a geographic area. On a smaller scale, overlaying data about infections on a unit over a blueprint of the unit's rooms
would provide more information than the data alone.
Another type of mashup involves associating videos or photos with the metadata about the picture such as who took the
picture and the location where it was taken. There are also search and shopping mashups such as the comparative
shopping tools that draw data from various sources. News mashups are another variety in which news from various
sources is synthesized into personalized news.
GOOGLE GROUPS
Google Groups discussion forums, similar to Wikispaces, allow for sharing of documents as well as a group discussion.
Unlike Wikispaces, Google Groups requires all users to have a Gmail address. Google Groups can be private or public, or serve
as an electronic bulletin board. Private Google Groups have a feature that alerts all the members when there is a change to
the workspace. Google Groups are also used for collaborative work such as committee work, classrooms, and clubs.
Yahoo offers a similar service with Yahoo Groups.
GRASS ROOTS MEDIA
The ability to share photographs and videos is a rapidly growing trend! The term Grass Roots Media refers to the
widespread creation and posting to the Web of media such as videos, slides, and blogs by nonprofessionals. There are
multiple ways to create digital photos or videos that can be posted to the Web. Many cell phones include both the ability
to take photos and videos. Inexpensive but "good-enough" digital camcorders are also available. One inexpensive digital
video camera is even small enough to fit inside a pocket (Pure Digital Technologies, n.d.). There are also disposable
camcorders that take up to 20 minutes of audio and video (Money.com, 2005).
Online Photos
Web sites such as Flickr and Shutterfly are specifically designed for photo sharing. They both allow photos to be shared
privately and publicly. Shared photos are an excellent way of sharing experiences from professional meetings and
conferences.
Podcasts
Although podcasts and vodcasts are often referred to with only the term "podcast," they are in different file formats. Many
MP3 players play both. Podcasts, like blogs, are available at just about every news Web site. Educators are taking
advantage of podcasting by recording their lectures and then uploading them as a podcast to the iTunes store or other
places. To subscribe to a podcast, open the subscribe command in podcatching software and then copy and paste the
URL for the podcast.
1. HTML
Introduction
Consider building your own website! You can have a family website that contains photos and information about your
family, or a business website that helps you sell the gadget you invented. You can establish a website that is focused on
frogs or pears, or Aunt Mary's needlepoint -- just about anything that interests you and that you want to share, sell,
promote or flaunt.
If you Google™ the words "make a website" you will be astonished by the number of companies that will help you build
a site, host it and give you a webpage - and some for free! But, don't be confused about this - free only goes so far. There
are conditions in most cases, and there is almost never a truly free website. But, before you get started, it is a good idea to
know what is involved, and plan a little for the creation of your very own web site and for owning your own domain
name.
HTML
Hypertext Markup Language (HTML) is the most commonly used authoring language for creating websites. It is
relatively easy to learn and far less complicated than a full programming language. It is based on embedding
tags and attributes within your page. That may sound confusing, but it is basically quite simple.
Before a document becomes a web page, it is put together just like a regular document, EXCEPT that it has certain codes
embedded in it that tell the browser how the page is going to look when it's online.
These codes, or tags, are enclosed in angle brackets, the characters that look like < and > on your keyboard,
usually the uppercase characters on your comma and period keys.
Writing HTML can be fun. For example, if you are about to write some text that you want to appear online as bold, you
would type in the tag that looks like this: <b>, and from that point on, the text will appear bold when it's online. At the
point where you want the bold formatting to end, you would simply type </b>, inserting a slash in the tag.
In this way you can create an entire web page, using graphics, different colors and sizes of font, hyperlinks, and even
animation. Of course there are lots of tags to learn, and a range of colors and tools to choose from. If this interests you,
take an online course in HTML, and learn how easy it is to create your own web pages - when you know how.
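To make the idea of tags concrete, here is a minimal sketch of a complete web page built from a handful of common tags. The title, heading, text, and link are made-up examples, and the address points to a placeholder site rather than a real one:

<html>
  <head>
    <!-- The title appears in the browser's title bar, not on the page itself -->
    <title>Aunt Mary's Needlepoint</title>
  </head>
  <body>
    <!-- h1 creates a large heading; p marks a paragraph -->
    <h1>Welcome to Aunt Mary's Needlepoint</h1>
    <p>This week's <b>featured pattern</b> is a rose garden sampler.</p>
    <!-- The a tag creates a hyperlink; href holds the address it points to -->
    <p>Visit <a href="http://www.example.com/patterns">our pattern page</a> for more designs.</p>
  </body>
</html>

If you save these lines in a plain text file whose name ends in .html and open that file in your browser, you will see the formatted page rather than the tags themselves.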
If you own a copy of web-authoring software such as Microsoft's FrontPage, you can easily create professional, custom web pages with
lots of help and instruction at your fingertips. However, it is still quite helpful to know HTML when using these packages.
Tip: If you want to see what a web page looks like in its raw HTML form, go to your VIEW menu and
click on "SOURCE." This option lets you see everything that went into making that web page.
Site builders
If you are not particularly interested in learning HTML, but still want to build a website, consider visiting a site that offers
"site builder" tools. These authoring tools allow you to type in regular format, and create your website just like you would
create any document, and they embed the tags for you, giving you choices of all kinds. Obviously, this is a much simpler
way to build a website, and unless you want to do something extremely fancy, these site builders are the way to go.
Just Google™ "website builder" and see all the choices!
When you go to register a domain name, you will be taken to a screen that allows you to try it out to see if it has already been claimed. Since
there are millions of websites now on the Internet, you may need to be very creative to find one that is original and not
already spoken for!
Be prepared to pay a fee for the use of your domain name, and also be prepared to pay each year to maintain it. Some cost
less than others. Also, be aware that you cannot violate copyright or trademark laws with your domain name. For example, you
cannot use domain names like www.proctor&gamblegames.com or colsanderskentuckyfriedblues.com, or any other
domain name that would infringe on someone else's copyright or trademark. Nor can you use a double set of golden
arches as your logo - you get the idea! (We will discuss copyrights in more depth in the next lesson.)
It is better not to use extras in your domain name, such as a hyphen, if you can avoid it. Websites with URLs such as
www.rubber-duck-place.com are a little more difficult for people to remember and type in, than www.rubberducky.com.
However, if you are forced to use a hyphen, don't panic. Your site will still be searchable on the web. Keep thinking and
trying out domain names until you come up with one that is most suited to your needs or products.
Remember that a website is made up of web pages, so technically, what you are creating is a set of web pages that each have a
specific purpose and are linked to one another by hyperlinks.
What kind of site do you want? Remember that if your website is going to be a place to purchase a product of any kind,
you must choose a .com domain name. If you want to set up an organization, you might want to consider an .org or .net
site.
3. Website Components
Most websites have an email link so that people can send you a message, an order or other communication. Commercial
sites often include a "shopping cart" and/or "storefront" that allow your visitors to feel like they are shopping for your
products. It also allows them an easy way to check out and pay you. Most pre-packaged site builders include these
features, and nowadays often contain a blogging feature and other conveniences that will help you make the best use of
your website.
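As a rough sketch of how an email link and a few other basics might look behind the scenes, here is a simple contact page for an imaginary storefront; the business name, logo file name, and email address are placeholders invented for this example rather than real ones:

<html>
  <head>
    <title>The Rubber Ducky Place</title>
  </head>
  <body>
    <!-- A placeholder logo image; the alt text appears if the image cannot load -->
    <img src="logo.gif" alt="The Rubber Ducky Place logo">
    <h1>The Rubber Ducky Place</h1>
    <p>About Us: We sell hand-painted rubber ducks for collectors.</p>
    <!-- A mailto link opens the visitor's email program with your address already filled in -->
    <p>Contact Us: <a href="mailto:orders@example.com">Send us a message or an order</a></p>
    <p>Copyright 2009 The Rubber Ducky Place</p>
  </body>
</html>

Pre-packaged site builders and shopping carts generate markup like this for you, but recognizing what a generated page contains makes it easier to adjust.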
When planning your website, consider the following components before you start:
• Domain name
• Logo
• Content-text and photos about your subject
• Colors - background and text
• Design scheme - sketch out your pages
• Contact Us feature and Email capture
• About Us page
• Testimonials about what you are selling, if applicable
• Samples/photos
• Copyright information
• Links to other sites
• Secure purchasing function and shopping cart
• Any free links, information or tools you can provide relative to your subject
Shelley Lowery's article on www.twospots.com describes some basic considerations about choosing colors for your web
pages:
• Use caution when selecting your background and text colors. Busy backgrounds make text difficult to read and draw the
attention away from the text. In addition, always be consistent with your background theme on each page of your site.
• Select your colors very carefully, as colors affect your mood and will have an effect on your visitors as well.
• Bright colors, such as yellow and orange, cause you to become more cheerful or happy. Colors such as blue and purple
have a calming effect. Dark colors, such as brown and black, have a depressing effect.
• A good rule of thumb is to use colors based on the type of effect you're trying to achieve. However, it's always best for
your text areas to have a white background with black text (a brief markup sketch of this follows the list).
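As a small illustration of the white-background, black-text advice above, here is one way to set page colors using the style attribute; the heading color and the wording are arbitrary choices made for the example:

<html>
  <head>
    <title>Color Example</title>
  </head>
  <body style="background-color: white; color: black">
    <!-- A calmer color, such as navy, can be reserved for headings -->
    <h1 style="color: navy">A Calm Navy Heading</h1>
    <!-- Body text appears black on a white background, as recommended above -->
    <p>Welcome to our site. This paragraph is easy to read.</p>
  </body>
</html>

Changing "white" and "black" to other color names (or color codes) changes the scheme for the entire page.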
So, planning ahead for your website is essential. Use search engines, books and instructional materials to make as many
decisions as possible before you begin to build your website. Look at sites that are similar to your subject areas and get
ideas (remembering to respect copyright and plagiarism laws and rules).
As with any created material, whether a piece of writing, a piece of art, or a toy, the material is owned by the person who created it.
A piece of original writing automatically carries copyright for the person who thought of and wrote those ideas.
Therefore, when you use someone else's writing verbatim without crediting them, or pass their ideas off as your own, you are
in violation of copyright laws and have plagiarized their material.
When you use another person's words, you must always provide the name of the owner of the material and where you got
the information. If you do not do this, you will be considered to have plagiarized the work, and will be in violation of
copyright laws, whether the copyright has been registered by the creator or not. In the world of writing, there are different
formats for doing this. For our purposes here, just know that you MUST credit the creator of the work, as you have seen
throughout this course.
When you use an image or term that has been created by someone else, you may be infringing on trademark rights. Maybe
you do not believe these rules apply to you, especially if you are just a regular person trying to set up a modest website.
However, think twice about this. You can be sued for infringing on others' rights, not to mention the fact that you can be
forced to remove the offending material.
There are certain texts, images and music that are considered to be in the "public domain," meaning that, due to age,
familiarity or other circumstances, they simply belong to everyone.
The following information, borrowed from http://cyfair.lonestar.edu, will provide you with a basic reference to consult
while you are setting up a website and will help prevent you from breaking rules as you create your masterpiece:
What is copyright?
Copyright is a form of protection provided by the laws of the United States to the authors of "original works of
authorship," including literary, dramatic, musical, artistic, and certain other intellectual works. Copyright is automatically
applied when the work is created and "fixed in a copy" in some format (e.g., paper, film, audio, etc.), even if it does not
mention or list the symbol or the word "copyright."
Only 5 images can be taken from one source (copied from web page or scanned from a book).
Cite where you found the pictures in your bibliography.
Free clip art sites will be labeled as such. If it doesn't say you can use the images for free, assume that someone owns
them.
What is plagiarism?
Presenting someone else's ideas as your own and/or not giving credit to sources that you use in any project is considered
plagiarism. Not only is plagiarism dishonest, it is illegal.
Public domain
Works in the public domain have no copyright protection. Government documents and works created before 1923 are
considered in the public domain. Stanford University Law School provides a searchable database for locating items in the
public domain: works that are freely available, or available for non-commercial use if the author is given credit.
SHERIFF
There are many established organizations that play a role in maintaining the integrity of the Internet, both in terms of
its technology and in terms of its guidelines and operating principles.
InterNic
The InterNIC, or Network Information Center, is a group of registrars who maintain all the domains that are registered on
the Internet. It operates under the U.S. Department of Commerce and is responsible for all of the domain names
that are registered.
InterNIC is the head honcho when it comes to filing complaints about domain name or trademark theft, viruses or other
problems. Of course, these kinds of complaints should first be made to your ISP, but ultimately, InterNIC is the governing
body for Internet problems.
ISOC
The Internet Society is a non-profit organization dedicated to monitoring, improving, and keeping tabs on Internet standards.
Following is the organization's description of itself:
The Internet Society (ISOC) is a nonprofit organization founded in 1992 to provide leadership in Internet related
standards, education, and policy. With offices in Washington, USA, and Geneva, Switzerland, it is dedicated to ensuring
the open development, evolution and use of the Internet for the benefit of people throughout the world.
Again, this is not an agency that profits from or governs the Internet, but one that voluntarily oversees the general
standards of how the Internet is used. Groups that determine and advise on standards for the Internet's infrastructure are
members of this society.
ICANN
The Internet Corporation for Assigned Names and Numbers is the organization that accredits the registrars who sell domain
names. It is getting a little complicated now, but really, all it means is that there are organizational bodies that have
formed just to handle the numbers of companies, organizations and people who participate in the use and upkeep of the
Internet.
Registrars
Registrars are private companies that register Internet domain names. There is a limited number of registrars; however,
the list has grown tremendously since the early 1990s. ICANN keeps a current list of accredited registrars and top-level domains (TLDs) on
its web page.
W3C
The World Wide Web Consortium, headed by Tim Berners-Lee, the inventor of the Web, describes itself as follows: "The
World Wide Web Consortium (W3C) develops interoperable technologies (specifications, guidelines, software, and tools)
to lead the Web to its full potential. W3C is a forum for information, commerce, communication, and collective
understanding". In other words, this is a group that tries to keep the Internet operating at its maximum potential
technologically, as well as monitoring and improving web standards.
The ultimate answer to the question of who runs the Internet is that there are hundreds, or thousands, or tens of thousands,
of organizations involved in provisioning for, and shaping, the Internet. These include commercial companies,
universities, statutory bodies, government departments, Internet Service Providers, and many more.
So, who is really in charge? Ultimately, as we continue to use the Internet with behaviors of integrity and respect, we all
are.
I. Viruses
1. Malware
In the early days of software programming, virus-like problems usually occurred as accidents or mistakes in program code; modern viruses, by contrast, are written deliberately. A virus is
able to spread from one computer to another over a network, and often causes inconvenience and even damage to a
computer's operating system. A virus that is sent over the Internet, such as by way of email, can spread to thousands of
computers in a very short time.
What is a worm?
The difference between a generic virus and a worm virus is that a worm does not require anyone to press a key or send an
email. An email virus, for instance, will die out if everyone deletes it and it is no longer allowed to spread. However, a
worm does not depend upon any actions of a user, and can infect thousands of systems simultaneously since it uses the
Internet to do its free-wheeling damage.
What is a Trojan?
Remember the Trojan horse? A Trojan virus is wrapped up in what appears to be a legitimate program, and when it is
installed, its damaging contents are liberated into your computer system or network!
For the purposes of this course, we will not go into depth about how many viruses are out there on the Internet and how
many kinds exist. Instead, you need to know that your computer must be protected from the constant onslaught of new
viruses that can be "caught" over the Internet!
Paul Bocij tells us in his book, The Dark Side of the Internet, that people create viruses for several reasons. Some virus
writers actually feel they are helping educate those in the world of technology, who are challenged to figure out how to
prevent the virus. Others simply enjoy doing something that will cause many people harm or inconvenience. Still others
are looking to elevate themselves as better virus writers than others. Another common reason for the creation of viruses
has been to achieve some political goal, such as spreading a belief system or propaganda by reaching thousands of people
with a certain message. Finally, some people want revenge for one reason or another, and find that sending a virus
accomplishes some sort of win (Bocij).
Solutions:
1. Purchase one of the most popular virus prevention software packages available. It is best to go to Google™ and search
for the best anti-virus software, since there are new companies with improved technology that may be available by the
time you take this course. These software packages not only clean your system of viruses, but will also allow you to
periodically download the repairs for new viruses as they appear on the Internet.
2. Go to the security settings in the Tools section of your Browser, and make sure your browser does not automatically
download and install ActiveX components. Also, it is important when on the Internet to be sure you have a firewall in
place, whether it is through your security software or Windows™.
3. If you receive email that is not automatically deposited into your SPAM folder, but which comes from an unfamiliar
sender, delete it! Some viruses cannot spread into your system unless you open the email that has delivered them.
Naturally, if something as effective as a virus becomes common knowledge, someone is going to figure out how to make
money from the idea. Thus, we now have Adware and Spyware invading our computer systems via the Internet.
Spyware has the capability of monitoring your activities, as well as obtaining data and information from your computer
system. Spyware is often difficult to detect and eliminate. It can collect information about your credit cards, bank account
numbers and financial dealings, leaving you vulnerable to identity theft. However, its primary purpose is to target specific
advertising to you, based on what it perceives to be your interests and habits.
Adware is similar to spyware, but the information it gathers is used to target potential customers by monitoring your
Internet activities. It may detect which Internet pages you have visited and determine what products you might be
attracted to so that the appropriate ads will pop up on your system.
Adware and spyware are often secretly attached to legitimate-seeming software that you might purchase or download.
The main purpose of creating adware and spyware is the potential for money and free marketing research. How much
adware and spyware is out there? Again, Paul Bocij gives a great example to contemplate:
"In 2005,.[researchers] took a brand new computer without any security software installed, connected it to the Internet,
and then browsed a number of Web sites aimed at children. After just one hour, the machine was examined for malware,
and it was reported that a total of 359 pieces of adware were found. Although unscientific, this experiment suggests that
some adware producers are deliberately targeting young people" (Bocij).
Considering that this little experiment was carried out in 2005, try to imagine the growth of this activity in the last few
years!
It is probable that every time you search the Internet you are inadvertently picking up some kind of adware or spyware.
What should you do?
There are too many varieties of web bugs to describe them all, but you can prevent them from doing major harm to you or
your system.
Solutions
Most anti-virus software now comes with anti-spyware and anti-adware features. There are also specific programs
available for download from the Internet that can effectively keep your system clean and free of this garbage. Beware,
though: some of the products, especially free ones, may come with their own hidden adware and spyware. Make sure
that you search for a reputable product that has been tested by outside sources and has been proven not to include these
insidious freeloaders.
Cookies
Cookies are chunks of data that a web server places on your computer's hard drive, for several reasons. The cookies store
information about forms you fill out, sites that you have "registered" on, and goods you have purchased over the Internet.
Cookies actually have the effect of smoothing transactions that you make on the web, and are not particularly harmful.
Spam
If you have an email account, you will get Spam. Spam is a slang term for unsolicited emails, which are intended to either
involve you in some kind of scheme or sell you something. Of course, it does not refer to all unsolicited emails, but
generally, it is simply electronic junk mail advertising something.
Spam filter
Almost all email programs have spam filters, which help detect emails that have been sent in bulk to mailing lists or newsgroups.
When you receive unwanted emails, send them to your spam folder, and the filter will catch them the next time an email comes
from that spammer. Right now, there is no real way to avoid receiving spam if you use the Internet, so, although it is
annoying, deleting it or letting your spam filter catch it may be your only remedy.
Hackers or Crackers
Hackers are "individuals who gain unauthorized access to computer systems for the purpose of stealing and corrupting
data. Hackers, themselves, maintain that the proper term for such individuals is cracker" (www.webopedia.com).
Government and military agencies have spent thousands of dollars recovering and repairing their networks and security
systems due to hacking. Large agencies are constantly on the lookout for breaches to their computer systems due to
hacking by people whose computer skills are outstanding, and whose intentions are not good. Unless you are running a
high-level organization through your computer, hacking is not necessarily a worry.
Glossary
Adware - Software that collects user information in order to target advertising to customers
Cookies - Small files saved on a user's hard disk when a website is visited
Cracker - What knowledgeable programmers who can break into computer systems call themselves
Hacker - Smart knowledgeable programmers who can break into complex computer systems
Malware - Any kind of malicious software
Spam - Term for unsolicited email
Spyware - Software that secretly captures confidential information
Temp files - Local files that contain information regarding your Internet browsing history
Trojan virus - Malicious software disguised as a legitimate program
Virus - A small program that attempts to copy itself from one computer to another, mostly destructive in nature
Worm - Similar to a virus, but does not attach itself to programs or data files, but exists as a separate entity
J. Internet Threats
1. Predators
The Dark Side- Predators, Scams, Hoaxes
We discussed the shady side of the Internet involving threats to computer systems and invasion of privacy through
software programs and hackers. However, more serious threats to Internet users involve scams, hoaxes, and predators who
prey on our vulnerability, gullibility or youthfulness. This lesson deals with the truly dark side of our technological age.
Although it is not necessary to avoid the Internet or be afraid of it, there are precautions to be taken, particularly with our
children, to make it a safe "place" to play.
Predators
When your children are playing for hours on the Internet, it is important to know what they are doing, and with whom
they are "chatting." Somewhat prevalent in chat rooms and social networks are predators who hope to get to know
children through communicating with them on the Internet. Once they establish a friendship and gain the child's trust, if
they are truly a predator, they may arrange to meet with the child for perverted or dangerous reasons. Children who have
met with Internet friends have been kidnapped, molested, brainwashed and mistreated in other unspeakable ways. This is
not to imply that there is a predator around every corner, but it is worth being very aware of what your children are up to
on the web.
Adolescents can be particularly vulnerable to the dark side of the Internet. Because they often feel isolated and misunderstood, they
are targets for cult members and self-styled rescuers who offer to help them overcome their unhappiness with themselves or their home
lives. Young women who put out the message that they are in distress are often sought out by recruiters for the sex trade.
Dating websites come with their share of hazards, as well. Never, ever agree to meet personally with someone you have
just met on the Internet. In fact, unless and until you are completely positive that this person is for real, don't meet at all. If
you do get to the point of wanting to meet someone, make sure you
• take along a friend, and tell someone where you are going.
We all want to think that we can trust our intuition regarding our judgment of others, but people who are ill or deviant can
be extremely convincing. There are, of course, horror stories that do not need to be told here. Just be smart!
Cyberstalking:
People sometimes use the Internet to harass or terrorize others. In addition, social networks have been breeding grounds
for gang-like behavior: groups ganging up on one vulnerable person and ridiculing the victim, sometimes even to the point of
suicide. Cyberstalkers have also been known to claim that their victim is harassing them as a way to avoid prosecution.
Never allow yourself to get too involved with people you do not know, especially if you feel vulnerable. And, certainly,
always monitor your children's actions on the web.
Fraud
One of the Internet's most commonly known hoaxes is called "419 fraud," and involves an offer of a large commission
in exchange for your help in getting money out of Nigeria or other African nations (Bocij p. 211). This
scheme is so elaborate that the perpetrators provide telephone numbers in the Netherlands of their attorneys and photos of
wealthy "victims" in Africa who are somehow being prevented from getting their money.
Email Hoax
One of the most detested email hoaxes involves pleas for help for a dying child. These can be quite convincing, and the
perpetrators are often able to collect thousands of dollars from sympathetic users before they suddenly disappear.
A milder, less destructive email hoax is the chain email, which promises large sums of money as long as you do not break the
chain, that is, as long as you forward the email on to many others. Know that this is only a convenient way of gathering
information - delete it!
Another annoying email hoax is the sappy message of kindness or good wishes that threatens to turn your luck bad if you do not
forward it to a certain number of people within a certain span of time.
Other hoaxes involve identity theft, where confidential financial information is sold to those who steal credit and debit
card account numbers and spend money that does not belong to them. We discussed earlier the kinds of spyware that can
gather information from your computer, as well. Always be cautious when providing information over the Internet. It is a
good idea to have one credit card that you use for Internet purchases; try to find one that protects you from fraud, and
limit its use to the Internet.
Scams
The Ponzi scheme, recently in the news because of high-profile prosecutions, is carried out on the Internet every day. A Ponzi scheme takes
money from new investors and uses it to pay older investors at high rates of interest. Beware of sites encouraging you to
invest.
This is a short description of some of the darker aspects of the Internet. Fortunately, they can all be avoided, and they don't
stop millions of people from using the Internet every day for constructive purposes. So, jump in, and enjoy the freedom to
connect with the world.
Vocabulary
419 Fraud - Attempts to get money by claiming to be wealthy and somehow stranded, usually in an African country
Chain email - An unsolicited email that asks you to forward it to others, under penalty of bad luck or promise of riches
Cyberstalker - Someone who watches and harasses another on the Internet
Dating Sites - Websites that provide information and connection with others for the purpose of forming a relationship
Email hoax - An unsolicited email that evokes sympathy and requests for money
Identity theft - Theft of a user's personal information for criminal purposes
Parental Control - Software tools provided by ISPs to protect children from unwanted content
Ponzi Scheme - Someone solicits your investment so that previous investors can be paid high rates of interest
Predator - A mentally ill or deviant person who preys on others to fulfill their needs
Scam - A scheme or hoax that presents itself as legitimate
It turns out that there are many people concerned about ethics when it comes to using the Internet. With a system so wide
open to every kind of culture and individual, it becomes important to come to some kind of consensus about what
considerate and ethical behavior on the Internet actually looks like. We will address abuses of the Internet in the following
lesson, but for now, let's take a look at the standards and ethics that have been at the core of discussions about the Internet.
Basic courtesies
Respecting others' privacy on the Internet, or using "netiquette," is an important aspect of practicing ethics.
For instance, do not stand behind someone and watch them enter a password. This is called "shoulder surfing" and is
frowned upon by most Internet users as an annoying invasion of privacy.
Do not post someone's photo on your social networking site without asking permission.
Do not engage in gossip about others or use the Internet to do emotional harm.