A System Explained
A system is a set of detailed procedures, methods, processes, or courses of action intended to achieve a specific result or carry out a particular activity. The components of a system and its interrelated steps work together for the good of the whole. A successful business attains results
that are consistent, measurable, and ultimately benefit customers. An accounting system, for
example, is found in every business. This particular system consists of several subsystems, e.g., accounts payable, accounts receivable, and payroll. Each subsystem has a well-defined goal and an essential purpose. Working together, these subsystems make up an
organization’s accounting system. This is just one of many different types of systems in
existence today. Almost all information management systems are made up of many subsystems
with sub-goals, all with the purpose of contributing to the organization’s primary goal.
These systems and processes are essential building blocks of a business or organization. Every facet of a business is part of a system that can be managed or improved by applying effective principles and making appropriate decisions based on accurate information.
According to the American Heritage Dictionary, the term "synergy" is derived from the Greek
word “sunergos,” meaning "working together." Positive synergy is often referred to as the 2 + 2
= 5 effect. Synergy refers to the combined effect produced by two or more parts, elements, or individuals working together, which is greater than the sum of their separate effects.
A computer cannot make independent decisions or formulate steps for solving problems unless
programmed to do so by users. Even with sophisticated artificial intelligence, the initial
programming must be completed by humans. Consequently, a human-computer combination
allows the results of human thought to be translated into the efficient processing of large
amounts of information.
This synergy between computers and humans is the backbone of modern business. Successful
companies will find ways to harness this power to maximize business and organizational success.
By analyzing existing business processes, one can better understand how a business actually works and how technology can improve its success. It is therefore important to examine business processes with the end goal of understanding how they might be enhanced through technology.
Transaction processing systems (TPS) are the most widely used information systems in the
world. The primary function of transaction processing systems is to record data collected where
an organization transacts business with other organizations. TPSs include point-of-sale (POS) machines, automated teller machines (ATMs), and purchase order systems.
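To make the idea of capturing transaction data concrete, the following minimal Python sketch records hypothetical point-of-sale transactions; the Transaction fields and TransactionLog class are invented for illustration and do not represent any particular vendor's TPS.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Transaction:
    """A single point-of-sale transaction record (hypothetical fields)."""
    item: str
    quantity: int
    unit_price: float
    timestamp: datetime = field(default_factory=datetime.now)

    @property
    def total(self) -> float:
        return self.quantity * self.unit_price

class TransactionLog:
    """Captures and stores transactions as they occur, the core task of a TPS."""
    def __init__(self) -> None:
        self._records: List[Transaction] = []

    def record(self, tx: Transaction) -> None:
        self._records.append(tx)

    def daily_total(self) -> float:
        return sum(tx.total for tx in self._records)

log = TransactionLog()
log.record(Transaction("notebook", quantity=3, unit_price=2.50))
log.record(Transaction("pen", quantity=10, unit_price=0.99))
print(f"Total recorded today: ${log.daily_total():.2f}")
```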
Business intelligence systems refer to data and software tools for organizing, analyzing, and
providing access to data with the goal of helping managers and other enterprise users make well-
informed decisions. Business intelligence systems address the decision-making needs of all
levels of management within a business.
The manager often has neither the time nor the resources to study and absorb long, detailed
reports of data. In order to bridge this gap, organizations often build information systems
specifically designed for analysis and decision-making within an organization. These systems are
called decision support systems (DSSs). Decision support systems rely on models and formulas
to produce concise information that can assist in decision making. Expert systems (ES) differ in
that they rely on artificial intelligence techniques to support knowledge-intensive decision-
making. It is essential to understand that a decision support system is only a decision aid, not an
alternative to human decision making. With expert systems, the expertise resides in the program
in the form of a knowledge base consisting of facts and relationships among the facts.
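The idea of a knowledge base of facts and relationships can be sketched in a few lines of Python. The facts and rules below are purely hypothetical, and the loop simply applies rules until no new conclusions can be drawn, a greatly simplified version of how an expert system's inference engine works.

```python
# Minimal forward-chaining sketch: a rule fires when all of its conditions are known facts.
facts = {"customer_is_repeat_buyer", "order_total_over_500"}

# Each rule pairs a set of required facts with a fact to conclude (hypothetical business rules).
rules = [
    ({"customer_is_repeat_buyer", "order_total_over_500"}, "offer_volume_discount"),
    ({"offer_volume_discount"}, "notify_account_manager"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes the derived conclusions
```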
An information system is a type of platform or collection of platforms that exist to manage a set
of information or a technology product. For instance, the hardware and software used to create,
maintain and access an electronic health record is an information system. The computers, hard
drives and other electronic devices used to store and distribute patient records are part of the
system. These electronic devices on their own may be referred to as an information technology
system, even though they are part of a particular information system.
Information systems can also be considered an overarching umbrella term for the systems,
people and processes that businesses use to create, store, manipulate, and distribute information.
IS is the bridge between technology and the user.
The fields of information technology (IT) and information systems (IS) are certainly related, as IT is a subset of IS. In order to understand information management systems, one needs to know how information technology and information systems are related to one another.
The Chief Security Officer (CSO) is in charge of information systems security for the business
and is responsible for enforcing the business’s information security policies. The CSO is
responsible for educating and training users and information systems specialists about security,
keeping management aware of security threats and breakdowns, and maintaining the tools and
policies chosen to implement security. Information systems security and the need to safeguard
personal data have become so important that corporations collecting vast quantities of personal
data have established positions for a Chief Privacy Officer (CPO). The CPO is responsible for
ensuring that the company complies with existing data privacy laws.
The Chief Knowledge Officer (CKO) is responsible for the business’s knowledge management
program. The CKO helps design programs and systems to find new sources of knowledge or to
make better use of existing knowledge in organizational and management processes.
End users are representatives of departments outside of the information systems group for whom
applications are developed. These users are playing an increasingly significant role in the design
and development of information systems. In the early years of computing, the information
systems group was composed mostly of programmers who performed highly specialized but
limited technical functions. Today, a growing proportion of staff members are systems analysts
and network specialists, with the information systems department acting as a powerful change
agent in the organization. The information systems department suggests new business strategies
and new information-based products and services and coordinates both the development of the
technology and the planned changes in the organization.
The invention of the Internet gave rise to social media. Today, social media is a
term known even in the most remote areas of the world. Social media, in its present form, has
been around a relatively short time. Social networking sites such as Facebook, LinkedIn, and Snapchat provide a virtual meeting place for people and an opportunity to
advertise. Businesses are also offering more products and services through mobile commerce or
m-commerce on a mobile device such as a smartphone. Location-based services and
geoinformation services provide mobile users instant, on-the-spot promotions. The Internet is not
only a place to conduct e-commerce, but also a far-reaching advertising medium gradually
replacing other media such as television and newspapers. Almost every brick-and-mortar
business has extended its operations to the Internet.
Ethical, social, and political issues are closely linked. The ethical dilemmas one may face as a
manager of information management systems typically are reflected in social and political
debate. Ethical issues long preceded information management systems. Nevertheless, information technology has heightened ethical concerns, taxed existing social arrangements, and made some laws obsolete or severely crippled them. A few key technological trends that drive these ethical stresses will be discussed throughout this course.
Claims to privacy are also involved in the workplace. Information technology and systems have
the potential to threaten individual claims to privacy by making the invasion of privacy
affordable, profitable, and effective. The claim to privacy is protected in the U.S., German,
and Canadian constitutions as well as in a variety of different ways in other countries through
various statutes. In the United States, the claim to privacy is protected in large part by the First Amendment's guarantees of freedom of speech and association, the Fourth Amendment's protection against unreasonable search and seizure of one's documents or home, and the guarantee of due process.
In modern society, the debate around privacy is a debate about modern freedoms. Technology
has always been intertwined with this right to privacy. Our capabilities to protect privacy are
greater than ever before, yet the capabilities for surveillance in existence today are without
precedent.
In Europe, privacy protection is much more stringent than in the United States. Unlike the United
States, European countries do not allow businesses to use personally identifiable information
without the consumers’ prior consent.
In January 2012, the E.U. issued significant proposed changes to its data protection rules. The
new rules would apply to all companies providing services in Europe, and require Internet
companies like Amazon, Facebook, Apple, Google, and others to obtain explicit consent from
consumers about the use of their data, and delete information at the user’s request based on the
“right to be forgotten.” The requirement for user consent covers cookies used for tracking purposes across the web (third-party cookies), but not cookies set by the website being visited itself (first-party cookies).
Working with the European Commission, the U.S. Department of Commerce developed a safe
harbor framework for U.S. businesses. A safe harbor is a private, self-regulating policy and
enforcement mechanism that meets the objectives of government regulators and legislation but
does not involve government regulation or enforcement. U.S. businesses would be allowed to use
personal data from EU countries if they develop privacy protection policies that meet EU
standards. Enforcement would occur in the United States using self-policing, regulation, and
government enforcement of fair-trade statutes.
The Internet is also essential to conducting modern business. That is why network connectivity has become crucial: employees depend on e-mail, web access, and even their phone service through Voice over Internet Protocol.
The most important component of an IT infrastructure is the human element, or the people in charge of all the other parts of the IT
infrastructure. Whether developers, system administrators, or network administrators, IT
professionals look at an organization’s needs and determine what hardware and software will
best support the needs of a business.
Every one of these components within an IT infrastructure depends on one another. Information
technology is the backbone of information management systems.
Evolution of IT Infrastructure
To date, there have been five stages in the computing platform evolution, each representing a
different configuration of computing power and infrastructure elements.
The Mainframe and Minicomputer Era (1959 to Present) was a period of highly centralized
computing. This particular type of computing is carried out by professional programmers and
systems operators. In recent years, the minicomputer has evolved into a midrange server and is
part of a network.
First introduced in 1981, the IBM PC is usually regarded as the beginning of the Personal
Computer Era. The IBM PC was the first to be widely adopted by American businesses, initially
using the DOS operating system and later the Microsoft Windows operating system. According
to Gartner Dataquest, a leading research and advisory company, in April 2002 the billionth
personal computer was shipped. The second billion mark was purportedly reached in
2007. (Gartner Dataquest, 2018)
In the Client/Server Era of computing (1983 to Present), desktop or laptop computers—which
are referred to as clients—are networked to powerful server computers that provide the client
computers with a variety of services. The term “server” references both the software application
and the physical computer on which the network software runs. Servers today are typically more
powerful versions of personal computers.
In the Enterprise Computing Era (1992 to Present), the use of the Internet was adopted within
businesses and developed into a trusted communications environment. Businesses began
seriously using the Transmission Control Protocol/Internet Protocol (TCP/IP) networking
standard to tie networks together. The resulting IT infrastructure links different pieces of
computer hardware into an enterprise-wide network enabling information to flow freely across
the organization.
The Cloud and Mobile Computing Era (2000 to Present) is the most recent stage, and its technologies are still considered new. Cloud
computing is a method of computing that provides access to a shared pool of computing
resources over a network or the Internet. These “clouds” of computing resources are accessed on
an as-needed basis from any connected device or geographic location.
In 1965, prior to co-founding Intel and becoming a billionaire, Gordon Moore was working for
Fairchild Semiconductor as director of research and development. Moore theorized that the
number of components in an integrated circuit doubled approximately every year. He later
revised his prediction to say that a doubling would occur every two years. This prediction—
known today as “Moore’s Law of Microprocessors”—has been remarkably accurate. Through
the years microprocessors have become smaller, cheaper, and more powerful. Today we have an
abundance of affordable, powerful electronics.
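A rough back-of-the-envelope calculation illustrates the effect of the two-year doubling; the starting transistor count and date below are only approximate reference points, not figures from this text.

```python
# Illustrative only: doubling every two years from an approximate starting point.
transistors = 2_300        # roughly the transistor count of Intel's 4004 chip (1971)
for year in range(1971, 2021, 2):
    transistors *= 2       # one doubling per two-year step
print(f"{transistors:,}")  # on the order of tens of billions after about 50 years
```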
A second technology driver of IT infrastructure is known as the Law of Mass Digital Storage. It
is estimated that the amount of digital information is roughly doubling every year. Fortunately,
the cost of storing digital information is falling at an exponential rate. In 2019, a 500-gigabyte
hard disk drive retails for about $30.
Another technology driver is a rapid decline in the expenses of communication and the
exponential growth of the size of the Internet. An estimated 4.2 billion people worldwide now
have Internet access (Internet World Stats, 2018). As communication costs decline, utilization of
communication and computing facilities will continue to explode.
The marketplace for computer hardware has increasingly become concentrated in top businesses
such as IBM, HP, Dell, Apple, and Oracle, and three chip producers: Intel, AMD, and IBM. Intel
tends to be the standard processor for business computing. Mainframes today continue to be used
to handle vast volumes of transactions or for analyzing substantial quantities of data. Mainframes
are also utilized for handling large workloads in data centers. The mainframe is utilized heavily
within banking and telecommunications networks.
Operating Systems
An operating system (OS) is the software that allows a user to run other applications on a
computing device. An operating system manages a computer's memory, processes, software, and
hardware. An example of an operating system is the Microsoft Windows operating system
(Windows OS). Windows is designed for desktop PCs, and at its height, Windows dominated the
personal computer world, running by some estimates on more than 90 percent of all personal
computers. (Statista, 2018)
Another popular operating system, the Macintosh Operating System (Mac OS) is designed by
Apple Inc. to be installed on the Apple Macintosh series of computers. First introduced in 1984, it is a graphical user interface (GUI)-based OS that has been released in many different versions over the years.
In the world of software, there are two main source models of development: open-source and
closed-source. Closed-source operating systems use code that is proprietary and kept secret to
prevent its use by other entities. Traditionally, they are sold for a profit. Open-source operating
systems use code that is freely-distributed and available to anyone to use, even for commercial
purposes. Examples of open-source operating systems include Linux, FreeBSD, and
OpenSolaris. Closed-source operating systems include Microsoft Windows and Solaris Unix.
MacOS is considered closed-source but does offer open-source components.
Mobile Operating System Platforms
Just as the Windows operating system serves the desktop PC, mobile devices also require an operating system. On mobile and tablet systems, closed-source operating systems include Apple iOS, BlackBerry OS, and the Symbian OS used primarily by Nokia. The now-discontinued Symbian was
the first popular smartphone operating system in the world. By 2009, it accounted for nearly half
of the global smartphone operating systems market. RIM/Blackberry, also a pioneer in this
market, held around 20 percent of the share. Despite the early start, both Symbian and RIM have
been almost driven out of the market as new operating systems have been released.
Android is based on the open-source Linux OS, though it has many proprietary, closed-source
extensions. Introduced by Google in 2007 (Google, 2019), as of 2016, Android is the most
popular smartphone operating system in the world. In 2009, 6.8 million Android smartphones
were sold. By 2015, this figure had increased to more than 1.16 billion. At the beginning of
2016, Android accounted for around 85 percent of all smartphone sales to end users worldwide.
(Statista, 2018) As of 2016, Apple’s iOS is the second most popular operating system for
smartphones.
Enterprise database management software is responsible for managing a business’s data so that it
can be efficiently accessed. Some of the leading database software providers are IBM (DB2),
Oracle, Microsoft (SQL Server), and Sybase (Adaptive Server Enterprise). Together, these providers supply more than 90 percent of the estimated $70 billion U.S. database software marketplace.
Dell EMC Corporation primarily dominates the physical data storage market for large-scale
systems. Other physical storage device manufacturers include Seagate, Toshiba, and Western
Digital. In addition to disk arrays and tape libraries, enterprises utilize network-based storage
technologies. A storage area network (SAN) is a high-speed network of storage devices that also connects those storage devices with servers.
SANs are especially useful for backing up data and disaster recovery initiatives. Within a SAN,
data can be transferred from one storage device to another without the need to connect to a
server. This allows the backup process to speed up and eliminates the need to use CPU cycles for
backup. Many SANs utilize fiber technology or other networking connectivity which allow
networks to span large geographic distances. This enables organizations to keep their backup
data in remote, geographically-dispersed locations.
The majority of local area networks and wide area enterprise networks use the TCP/IP protocol
suite as a standard. A protocol is a set of rules that govern how systems communicate. TCP/IP
(Transmission Control Protocol/Internet Protocol) is the underlying communication language
or protocol of the Internet. TCP/IP can also be used as a communications protocol within a
private network. When a computer is connected to the Internet, it runs a copy of the TCP/IP software, which enables it to communicate with every other connected computer. TCP/IP is the protocol suite that drives the Internet today.
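As a small, illustrative Python sketch of TCP/IP in action, the code below opens a TCP connection to a public web server and sends a minimal HTTP request; the host name is an example only, and network access is assumed.

```python
import socket

# Open a TCP connection (the "TCP" in TCP/IP) to a web server on port 80.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # Send a minimal HTTP request over the connection.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(1024)  # read the first kilobyte of the reply
print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```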
Today, the leading networking hardware providers are Cisco, Alcatel-Lucent, Nortel, and Juniper
Networks. Typical telecommunications platforms are provided by telecommunications/telephone
services companies offering voice and data connectivity, wide area networking, wireless
services, and Internet access. Leading telecommunications service vendors within the United
States include AT&T and Verizon. This market continues to grow with the appearance of new
providers of cellular wireless, high-speed Internet, and Internet telephone services.
The popularity of smartphones has created an upsurge in employees using their mobile devices in
the workplace. This growing trend, known as “bring your own device” (BYOD), has led businesses to rethink the way they manage information technology equipment and services.
Historically, at least in large businesses, the central IT department was responsible for selecting
and managing the information technology and applications used by the business and its
employees. Today, employees are playing a much more significant role in technology selection,
making it more difficult for a business to manage and control devices.
The trend of virtualization has provided the ability to host multiple systems on a single physical
machine. Virtualization helps organizations increase equipment utilization rates, conserving data
center space and energy usage. Virtualization also facilitates centralization and consolidation of
hardware administration.
One of the most significant computing trends in recent history is cloud computing. Cloud
computing is a model of computing in which computer processing, storage, and software are provided as a pool of virtualized resources. These “clouds” of computing resources are
accessed on an as-needed basis from any connected device and location over the Internet.
Businesses have access to applications and IT infrastructure anywhere, at any time, and on any
device.
As discussed in a previous section, open-source software has become increasingly popular. The
open-source movement has been growing for over 30 years and has demonstrated that it can
produce commercially acceptable high-quality software. The rise of open source software,
particularly Linux and the applications it supports, has profound implications for corporate
software platforms in the area of cost reduction.
Database technology reduces many of the issues of traditional file organization. A database is a
collection of data organized to serve many applications efficiently by centralizing the data and
controlling redundant data. Rather than storing data in separate files, data appears to users as
being stored in a single location. A single database can serve multiple applications.
System software for creating and managing databases is known as a database management
system (DBMS). The database management system provides users and programmers with a
systematic way to create, retrieve, update, and manage data. The database management system
serves as an interface between the database and end users or application programs, ensuring that data is
consistently organized and readily accessible.
The DBMS manages three crucial components: the data; the database engine, which allows the data to be accessed or modified; and the database schema, which defines the database’s logical structure. These foundational components help provide concurrency, security, data
integrity, and consistent administration procedures. Typical database administration tasks
supported by the database management system include change management, performance
monitoring, and backup and recovery procedures. Many database management systems also
provide features such as logging and auditing of activity which aids in ensuring only authorized
access to the data.
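A brief sketch using Python's built-in sqlite3 module illustrates the create, retrieve, and update operations a DBMS exposes to applications; the employees table and its columns are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway, in-memory database
cur = conn.cursor()

# Create: define the schema (the database's logical structure).
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)")

# Insert and update: the DBMS manages the underlying storage and consistency.
cur.execute("INSERT INTO employees (name, dept, salary) VALUES (?, ?, ?)", ("Ada", "IT", 78000))
cur.execute("UPDATE employees SET salary = salary * 1.05 WHERE dept = ?", ("IT",))

# Retrieve: applications query the data without caring how it is stored on disk.
for row in cur.execute("SELECT name, dept, salary FROM employees"):
    print(row)

conn.commit()
conn.close()
```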
The database management system is perhaps most useful for providing a centralized view of data
that can be accessed by multiple users. A database management system can limit what data each
end user sees by offering many views of a single database schema. Being able to restrict which employees can access which data is very important, for example, in the case of employee records.
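Continuing the same hypothetical sqlite3 example, a view can present a restricted slice of the schema, for instance an employee directory that hides salary information.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)")
cur.execute("INSERT INTO employees (name, dept, salary) VALUES ('Ada', 'IT', 78000)")

# A view exposes only the columns a given class of user is allowed to see.
cur.execute("CREATE VIEW employee_directory AS SELECT name, dept FROM employees")

print(cur.execute("SELECT * FROM employee_directory").fetchall())  # [('Ada', 'IT')] -- no salary column
conn.close()
```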
Big data does not refer to any specific quantity of data but usually refers to data in the petabyte and exabyte
range. In other words, billions to trillions of records, all from different sources constitute big
data. Big data are produced in much larger quantities and much more rapidly than traditional
data. While “tweets” are limited to 280 characters each, Twitter alone generates over eight
terabytes of data daily. According to the International Data Corporation (IDC), data are more than doubling
every two years. Today the amount of data available to organizations is growing at an
unprecedented rate.
Consequently, businesses are interested in big data because it can reveal more patterns and
anomalies than smaller data sets. This amount of data has the potential to provide new insights
into customer behavior, weather patterns, financial market activity, or other phenomena. For this
data to benefit businesses, organizations need new technologies and tools capable of managing
and analyzing non-traditional data along with their traditional enterprise data.
A wireless local area network (WLAN) functions like a LAN but utilizes wireless network technology, such as Wi-Fi. These types of networks do not require that devices be physically
connected to the network. Using routers and switches, LANs can connect to wide area networks
to rapidly and safely transfer data. A wide area network (WAN) is a geographically distributed
private telecommunications network that interconnects multiple local area networks (LANs). The
Internet itself is an example of a wide area network, connecting all computers around the world.
Private networks can also be extended across the Internet via a virtual private network (VPN).
A VPN lets its users send and receive data as if their devices were connected to the private
network, even if they are not. Through a virtual point-to-point connection, users can access a
private network remotely.
In the majority of Wi-Fi communication, wireless devices communicate with a wired LAN using
access points. An access point is a box consisting of a radio receiver/transmitter and antennas
that links to a wired network, router, or hub. Mobile access points such as Verizon's Mobile
Hotspots use the existing cellular network to create Wi-Fi connections.
Wi-Fi technology presents numerous challenges. One is Wi-Fi’s relatively weak security features, which can make these wireless networks vulnerable to intruders. Another challenge of Wi-Fi networks is
their susceptibility to interference from nearby systems operating in the same spectrum, such as
wireless phones, microwave ovens, or other wireless LANs.
Radio frequency identification (RFID) is a technology that uses small tags with embedded
microchips. These microchips store information that can be used to uniquely identify an item as
well as provide additional information about the item. The information can then be transmitted
via radio signals over a short distance to RFID readers. The RFID readers then pass the data over
a network to a computer for processing.
Wireless sensor networks (WSNs) are networks of interconnected wireless devices that are
embedded in the physical environment to provide measurements of many points over large
spaces. These devices have built-in processing, storage, and radio frequency sensors and
antennas. They are linked into an interconnected network that routes the data they capture to a
computer for analysis. Wireless sensor networks are valuable in areas such as monitoring
environmental changes, monitoring traffic or military activity, protecting property, establishing
security perimeters, monitoring supply chain management, or detecting chemical, biological, or
radiological material.
Several categories are used to describe domain names and their functions. At the top of the hierarchy is the root domain, the unnamed highest level of the tree, which is represented by a “.” and is normally invisible to the user. There are thirteen root name servers worldwide.
The top-level domain follows the root domain and describes a country, a region, or a type of organization; for example, .edu, .org, and .gov are top-level domains.
Second-level domains are names that are assigned under the appropriate top-level domains. For example, in the web address google.com, "google" is the second-level domain and ".com" is the top-level domain.
Sub-domains, or third-level domains, are derived from second-level domain names. In our google.com example, in mail.google.com, "mail" would be considered the third-level domain.
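A short Python sketch shows this hierarchy being resolved in practice; the host names are examples only, and Internet access is assumed.

```python
import socket

# Resolving a second-level domain and one of its sub-domains walks the same
# hierarchy: root -> top-level domain (.com) -> second-level (google) -> third-level (mail).
for host in ("google.com", "mail.google.com"):
    print(host, "->", socket.gethostbyname(host))
```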
Worldwide Internet policies have been established by several professional organizations and
government bodies. The Internet Architecture Board (IAB) helps define the overall structure
of the Internet. The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for assigning IP addresses. The World Wide Web Consortium (W3C) is responsible for setting Hypertext Markup Language and other programming standards for the
web.
VoIP Technology
Voice over IP (VoIP) technology enables traditional telephone services to operate over computer networks. VoIP delivers voice information in digital form using packet switching, and calls travel over the corporate network, thus avoiding the tolls charged by local and long-distance telephone providers. Voice calls can be initiated or received with a VoIP-enabled telephone or with a computer equipped with a microphone and speakers.
Unlike the traditional telephone network, VoIP phones can be added or moved to different
offices without the need for rewiring or reconfiguring the network. Implementing VoIP within a
business does not mean that all employees must use VoIP-enabled phones. Today, VoIP providers implement IP telephony in a manner that supports a business's previous investment in existing telephone equipment, allowing for a blend of digital and analog telephone stations.
In addition to traditional voice services, VoIP gives access to advanced applications that can
potentially help staff be more agile and productive. VoIP solutions aimed at businesses have
evolved into unified communications services that treat all communications—phone calls, faxes,
voice mail, e-mail, web conferences, and more—as discrete units that can all be delivered via
any means and to any handset, including cell phones.
Drucker described the knowledge worker’s labor as “ever-changing, dynamic, and autonomous.”
(Drucker, 1959) Knowledge work is all about problem-solving and thinking to answer the questions that arise. Knowledge workers are expected to be innovative and to have the ability to come up with new and better ways of doing things.
The Harvard Business Review (HBR) published an article in 2010 asking the question “Are all
employees knowledge workers?” (Hagel, Brown, & Davidson, 2010) Its authors answered in the affirmative, making the case against drawing artificial boundaries in the workforce and cautioning business leaders not to be too quick to write off some jobs as mindless and routine.
We can instantly begin the process of learning anything, anywhere. The knowledge worker of the
past is becoming the learning worker of today. This new movement is the age of “learning
workers.” These people typically have college degrees and advanced training, but what sets them
apart is their knowledge of how to learn. Forbes recently noted, “how we value workers is
changing, and the emphasis now is on an employee’s ability to learn and adapt, rather than their
readiness to come into a job with the skills required to do everything.” (Morgan, 2016)
For businesses, this means establishing a culture of learning throughout the entire organization is
even more critical. From onboarding to employee training, businesses must work to create
training environments that enable continuous learning and rapid knowledge sharing among all
employees.
Becoming a learning organization has previously required great dedication, time, energy, and
resources. Advancements in learning technology now present business leaders with new tools
designed to help make this task more achievable. Technological advancements in information
management systems enable the knowledge worker of the past to become the learning worker of
the present and future.
Knowledge Management
Knowledge management (KM) is the process or processes used to handle and oversee all
the knowledge that exists within a company. The goal of knowledge management is to codify
knowledge, retrieve knowledge, improve collaboration, and stimulate overall organizational
learning. Knowledge management relies on an understanding of knowledge, which consists of
discrete or intangible skills that a person possesses.
The field of knowledge management identifies two main types of knowledge. Explicit
knowledge is knowledge or skills which can be easily articulated and understood, and therefore
easily transferred to others. Anything that can be written down in a manual—for example,
instructions, mathematical equations, etc.—qualifies as explicit knowledge. In contrast, tacit
knowledge is the knowledge that is difficult to articulate, package, and transfer to others. These
are usually intuitive skillsets that are challenging to teach—such as body language, aesthetic
sense, or innovative thinking.
The concept, history, and frameworks of knowledge management fill textbooks and entire degree programs. For our purposes, we will focus our attention on the underlying information management system supporting knowledge management.
When choosing a knowledge management system, it is essential to ensure the platform of choice can support the file sizes, file types, and volume that the organization will require. The scalability of the platform is another crucial consideration.
The adoption of artificial intelligence (AI) into knowledge management systems can replace the human consultants who had been analyzing the data and monitoring the knowledge management processes. Today, cognitive computing, adaptive technology, and intelligent filtering tools have
enormous implications for codifying knowledge and have become increasingly adopted by
knowledge management systems. Artificial intelligence will be discussed in detail later in this
section.
Productivity Software
A category of application programs that allows a user to produce items such as documents,
spreadsheets, graphs, worksheets, and presentations is known as productivity software or office
suites. This software typically features user-friendly interfaces and is widely used as foundational software for businesses to improve productivity.
Many different types of office suites are in existence today. The most commonly used office
suites include Microsoft Office, Google's G Suite, Apache OpenOffice, and LibreOffice. Like all
decisions made within a business, one may ask "which is better?". The answer to this question is
determined based on what will best fit the needs of the company’s employees, business type, and
industry.
Enterprise application software is typically divided into four major categories. These categories
include enterprise systems, customer relationship management systems, supply chain
management systems, and knowledge management systems. By integrating these enterprise
applications, the performance of the entire organization is enhanced.
Unlike consumer or small business applications, an enterprise application provides business logic
and supports an entire organization in an effort to lower costs and improve both productivity and
efficiency within the enterprise. (Davenport, 1998)
There are different types of licensing models. Open source licenses allow software to be freely
used, modified, and shared. Perpetual and subscription-based licensing models, on the other
hand, require payment. A perpetual software license is a type of license that, once purchased, authorizes use of the program indefinitely. Traditionally, the perpetual license was the dominant
licensing model used by most software vendors. Today, just 43 percent of software producers say
perpetual software licenses contribute to half or more of their revenues. (Flexera Software, 2016)
Evidenced by the decline of perpetual software licensing, the enterprise software marketplace is
undergoing a transformation in pricing and licensing models. This transformation is driven in
part by shifts in customer demand and continuously changing technology.
Other factors influencing the shift from perpetual to subscription software licenses include the
adoption of software as a service (SaaS) and cloud computing. Subscription-based software
models often ease the financial burden of buying software for a business by providing ongoing
subscription payments instead of an initial significant capital investment.
Cloud Computing
As mentioned in earlier sections, cloud computing refers to a model of computing in which
access to a shared pool of computing resources is provided over a network or the Internet.
Simply put, cloud computing is the delivery of computing services including servers, storage,
databases, networking, software, analytics, and intelligence over the Internet. Cloud computing, at times simply referred to as “the cloud,” offers faster innovation, flexible resources, and scalability.
One of the many benefits of cloud computing is the elimination of much of the capital expense
for a business. A business avoids having to purchase hardware and software and operate on-site datacenters. Instead, the cloud service provider offers the infrastructure and the experts required
to manage it.
In typical payment models for cloud computing, a business will only pay for the cloud services
used. Pay-per-use pricing typically results in lower operating costs, infrastructure efficiencies,
and scalability as the needs of the business change. (Microsoft Azure, 2019)
Most cloud computing services are provided on demand. Businesses are provisioned with the
resources required within minutes. This provides the business with a great deal of flexibility and the ability to scale elastically. In cloud terms, “elastically” means delivering the right amount of IT resources right when they are needed.
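A purely illustrative calculation shows why pay-per-use pricing and elasticity matter; the hourly rate and the demand pattern below are invented and are not any provider's actual pricing.

```python
# Hypothetical pay-per-use example: servers are added only during busy hours.
rate_per_server_hour = 0.10              # invented rate, not a real provider's price
hourly_demand = [2] * 8 + [10] * 8 + [4] * 8   # servers needed across a 24-hour day

elastic_cost = sum(hourly_demand) * rate_per_server_hour
fixed_cost = max(hourly_demand) * 24 * rate_per_server_hour  # provisioning for the peak all day

print(f"elastic (pay-per-use): ${elastic_cost:.2f} per day")
print(f"fixed peak capacity:   ${fixed_cost:.2f} per day")
```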
Another cost-saving benefit of cloud computing is that the cloud service provider handles much of the routine work. This frees IT managers from hardware setup, software patching, and other time-consuming IT management chores, allowing them to focus on achieving higher-priority business goals.
Clouds that are operated by a third-party cloud service provider are known as public
clouds. These cloud service providers deliver computing resources over the Internet. Amazon
Web Services (AWS) and Microsoft Azure are examples of public cloud service providers. In a
public cloud, all hardware, software, and supporting infrastructure is owned and managed by the
cloud provider.
In a private cloud, the computing resources are used exclusively by a single business. A private
cloud may be physically located on the company’s on-site data center or hosted through a third-
party provider. The primary difference between a public and private cloud is that in a private
cloud the services and infrastructure are maintained by the business or organization and not by a
third-party provider.
Hybrid clouds, as the name implies, combine public and private clouds. In a hybrid cloud, data is configured to be shared between both clouds. By allowing data to move between private and public clouds, a hybrid cloud gives a business greater flexibility and more deployment options, and helps optimize its existing infrastructure, security, and compliance. (Microsoft Azure, 2019)
Infrastructure as a service (IaaS) is considered the most basic category of cloud computing
services. With IaaS, IT infrastructure—servers and virtual machines (VMs), storage, networks,
operating systems—is rented from a cloud provider. IaaS is most often billed on a pay-per-use
subscription and a level of technical knowledge is required during configuration.
The cloud computing services known as platform as a service (PaaS) supplies an on-demand
environment for testing, developing, delivering, and managing software applications. Platform as
a service is designed to make it easier to quickly create web or mobile apps without having to set
up or maintain the underlying infrastructure of servers, storage, network, and databases needed
for development. PaaS includes basic configurations and requires some technical knowledge to utilize, though not as much as IaaS.
A cloud computing method of delivering software applications over the Internet is known as
software as a service (SaaS). This service is provided on demand and is usually subscription-
based. With SaaS, the cloud provider hosts and manages the software application as well as the
underlying infrastructure. The cloud provider also handles any maintenance, like software
upgrades and security patching. The application is accessed via the Internet, usually with a web
browser on a phone, tablet, or PC. No technical knowledge is required as the provider manages
everything.
An intranet, in contrast to the public Internet, is an exclusive network that can be accessed only by a specific group of people. An intranet generally is a local network, meaning only people who are directly
connected to the intranet can access the information stored. Intranets, for example, may be used
for businesses not wanting their information to be accessed by people outside of the
organization. In many cases, you can only get intranet access if you are using a computer that is
directly linked to the organization's network.
The Internet and an intranet are not always separate. An extranet is a private network that uses Internet technology to share part of a business's intranet securely. An extranet only allows
access to specific information or access by certain people. The extranet provides a blend of the
confidentiality and access control allowed on an intranet along with the information availability
of the Internet.
Another significant advantage of a content management system is that it allows non-technical people who don’t know programming languages to create and manage their web content efficiently. The “what you see is what you get” (WYSIWYG) editors of a typical content
management platform allow users to change the content of a site without needing to have a web
development background.
When a company uses a content management system to publish pages, it reduces the dependence
on web developers to make changes to the website. In addition to streamlining the editing or
posting of new content, it is more cost effective.
There are many free and subscription-based content management system offerings available for
personal and enterprise use. Presently WordPress is the most popular content management
system as it runs 21% of the top 10 million websites. It’s open-source, meaning it’s developed
and maintained by a large community and is free to use. WordPress can be either hosted on-
premise or hosted by a third-party provider.
There are also many SaaS (software as a service) companies that provide an all-in-one solution
to site creation. Companies like Squarespace and Wix offer a subscription model and a very user-
friendly interface to build any site desired. The advantage of using these services is that they
provide an easy to use way to set up and customize your site.
Business Intelligence
The technology-driven process for analyzing and presenting information to support business
decision-makers is known as business intelligence (BI).
Business intelligence programs may also incorporate forms of advanced analytics, such as data mining, predictive analytics, statistical analysis, and big data analytics. Advanced analytics projects are
often conducted and managed by separate teams of statisticians, data scientists, and other skilled
analytics professionals.
Business intelligence platforms are increasingly used as front-end interfaces for big data systems.
Modern business intelligence software offers flexible backends, enabling the organization to
connect to custom data sources. This, along with simple user interfaces, allows users to develop a
unified view of diverse business data.
Artificial Intelligence
The branch of computer science known as artificial intelligence (AI), encompasses the creation
of intelligent machines that work and react like humans. Devices utilizing artificial intelligence
can learn from experience and adjust to new inputs. Artificial intelligence has become popular
today thanks to increased data volumes, advanced algorithms, improvements in computing
power and storage, and the proliferation of big data.
Artificial intelligence has evolved to provide many benefits within many different industries. AI
adapts through progressive learning algorithms to let data do the programming. It does so by
finding structure and patterns in data so that the algorithm can then predict future
actions. Artificial intelligence can also adjust data models based on new information in cases
where the predicted answer is not correct.
Artificial intelligence achieves incredible accuracy through deep neural networks. For example,
your interactions with Siri or Alexa are all based on deep learning, and they will continue to
become more accurate the more they are utilized.
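At its core, each layer of a neural network computes a weighted sum followed by a non-linear function. The tiny forward pass below, written with NumPy and random untrained weights, is a minimal sketch of that arithmetic rather than a working deep-learning model.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One neural-network layer: weighted sum plus bias, then a ReLU non-linearity."""
    return np.maximum(0, x @ weights + bias)

x = rng.random(4)                                  # a 4-feature input vector
h = layer(x, rng.random((4, 8)), rng.random(8))    # hidden layer with 8 units
output = h @ rng.random(8)                         # final weighted sum produces one score
print(output)
```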
Since the role of data in business is now more critical than ever, implementing artificial intelligence can create a competitive advantage. If a business has the best data in a competitive industry, then even if competitors are applying similar BI or AI techniques, the best, most comprehensive data will always win.
Machine Learning
The branch of artificial intelligence known as machine learning is based on the concept that
systems can learn from data by identifying patterns and making decisions with minimal human
intervention. For instance, a self-driving car should be able to recognize the presence of other
vehicles and objects, and then change its behavior to avoid an accident. The idea of a self-taught
computer program has been a part of the artificial intelligence field since at least the 1970s.
Machine learning has significantly expanded in the last decade due to the growth in computing
power available. Machine learning is used every day, but one may not recognize it easily. For
example, every online search is resolved using algorithms that rank the billions of web pages
based on your request. Search results will also vary based on your prior searches, and the links
previously clicked on.
Industries working with massive amounts of data recognize the value of machine learning. By
utilizing machine learning to extract insights from data, organizations can work more efficiently
and gain an advantage over competitors.
For example, every time something is purchased on Amazon, its recommender engine will suggest
other items that might interest the purchaser based on patterns in prior purchases, behavior on
other websites, and the purchases of others who share similar purchasing histories. Every time
you watch a movie on Netflix, a recommender system will come up with movies you might be
interested in based on a similar set of factors.
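A recommender of this kind can be sketched in a few lines of Python: items the customer has not yet bought are scored by how often they appear in the purchase histories of similar customers. The purchase data below is invented for illustration.

```python
from collections import Counter

# Hypothetical purchase histories: customer -> set of items bought.
purchases = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "mouse", "headphones"},
    "carol": {"laptop", "monitor"},
}

def recommend(customer: str) -> list:
    """Rank items the customer has not bought by co-occurrence with their purchases."""
    mine = purchases[customer]
    scores = Counter()
    for other, items in purchases.items():
        if other == customer:
            continue
        overlap = len(mine & items)      # shared purchase history measures similarity
        for item in items - mine:
            scores[item] += overlap      # weight suggestions by that similarity
    return [item for item, _ in scores.most_common()]

print(recommend("carol"))  # e.g. ['mouse', 'keyboard', 'headphones']
```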
Thirty to forty years ago, when information technology was in its infancy, the concept of hiring
dedicated information technology security personnel was almost unheard of. Instead, the most
knowledgeable and skilled systems architects, administrators, and programmers were mobilized in a crisis and had to deal with whatever challenge was presented. The information security
function was typically added on to other job roles, such as operations managers, system
administrators, or network administrators.
On March 26, 1999, everything changed. One highly publicized global event helped accelerate
the acceptance that security was no longer a backroom activity and needed consideration at the
highest levels of the boardroom. The Melissa virus was, in fact, a mass-mailing macro virus that
propagated at frightening speeds across the connected world, leveraging a weakness in Microsoft
Outlook that allowed it to send infected documents to the first fifty entries in each infected user's
address book.
The Melissa virus infected hundreds of thousands of computers within a matter of days, rapidly
spreading around the world. The virus struck with such ferocity that companies' e-mail servers
were overwhelmed with traffic; in some cases, leaving businesses without e-mail for weeks
while technicians eradicated the threat. The Melissa virus is credited with disrupting more than a
million e-mail accounts globally as well as costing companies an estimated $80 million.
This global event was one of the first to bring securing information management systems to the
forefront. In this section, we will focus on this critical aspect of information management
systems.
Malicious Software
Malicious software, better known as malware, is the term used to describe any malicious
program or code that is designed to be harmful to systems. Malware seeks to invade, damage, or
disable computer systems, networks, and mobile devices, often by taking control over a device’s
operations.
Malware is primarily aimed at illicitly making money off businesses or individuals. Although malware typically cannot damage the physical hardware of systems or network equipment, it can and does steal, encrypt, or delete data, alter core computer functions,
and spy on computer activity without the knowledge or permission of the user.
The most common ways that malware accesses a system are through the Internet and e-mail.
Malware can penetrate a computer when an individual unknowingly visits a hacked website,
clicks on a game demo, downloads an infected file, installs a new toolbar from an unfamiliar
provider, or opens a malicious e-mail attachment. Malware can hide in seemingly legitimate
applications causing damage to systems and even entire enterprises.
Adware is unwanted software designed to display advertisements, most often within a web
browser. Typically, adware uses a deceptive method to disguise itself as legitimate or is secretly
added onto another application to trick the user into installing it. Spyware is a similar malware
which secretly observes the activities of a user without permission and reports the activities back
to the spyware creator.
A virus is malware that, when executed, replicates itself by modifying other applications and infecting them with its own malicious bits of code. Similar to viruses, worms are a type of malware that self-replicates in order to spread to other computers over a network. Worms
usually cause harm by destroying valuable data and files.
A Trojan horse is one of the most dangerous types of malware. It usually presents itself as a useful piece of software in order to trick the user into installing it. Once a Trojan horse is on a
system, the attackers behind the Trojan gain unauthorized access to the affected computer.
Trojans are used to steal financial information or install threats like ransomware. The term Trojan horse comes from the large wooden horse the Greeks used to trick the Trojans into opening the gates to their fortified city during the Trojan War. Once inside the city
walls, Greek soldiers hidden in the horse revealed themselves and captured the city.
Ransomware is a widely used type of malware that locks a user out of a device and encrypts the
files rendering them unusable. The unsuspecting user is then forced to pay a ransom in order to
get access to the device. Ransomware is often considered the cybercriminal’s weapon of choice because it demands a quick, profitable payment in difficult-to-trace cryptocurrency. Defending against ransomware is very difficult as the code behind ransomware
is easy to obtain through online criminal marketplaces.
Forms of Malware
Another form of malware is the rootkit, which provides an attacker with administrator privileges on an infected system. It is typically designed to stay hidden from the user, other
software, and the operating system.
A keylogger is malware that records all of the user’s keystrokes on the keyboard. The
keylogger collects the gathered information and sends sensitive information such as usernames,
passwords, or credit card details back to the attacker.
An exploit takes advantage of a vulnerability in a system, allowing an attacker to gain access and take control of that system. Exploits are often linked to malvertising, which attacks through
legitimate websites that unknowingly contain malicious content from a bad site. The malicious
content then tries to install itself on a computer. Unfortunately, in this case, all one has to do is
visit a legitimate site on the wrong day.
Social networking sites such as Facebook have been significant targets for malware for many
years. According to the cybersecurity company Webroot, cybercriminals have taken to Facebook
Messenger to distribute malicious links and images through compromised accounts. These links
will take users to sites that pose as popular websites like YouTube, and demand that a malicious
browser extension is installed to properly view the website’s content (Moffitt, 2018). Internet
security business Symantec reported in 2018 that it had detected nearly 670 million new and
unique threats from malicious software in 2017, up from 357 million in 2016. Symantec also
reported that as many as 1 in 13 URLs in 2017 were malicious, as opposed to 1 in 20 in 2016.
(Symantec, 2018).
Many types of spyware also act as malicious software. These small programs install themselves
surreptitiously on computers to monitor user web surfing activity and serve up advertising.
Thousands of types of spyware have been documented, and countless more remain undocumented. Many
users find such spyware annoying, and some critics worry about its infringement on the
computer users’ privacy. Some forms of spyware are especially nefarious. Keyloggers, for
example, record every keystroke made on a computer to steal sensitive data, launch web-based
attacks, gain access to e-mail accounts, or to collect personal information such as credit card
numbers.
Internet Vulnerabilities
The Internet is so vast that when abuses do occur, they can have a pervasive impact. Consider
when a corporate network is connected to the Internet. Suddenly the organization’s information
management systems can be exponentially more vulnerable to outside abusers.
The increase in Internet vulnerabilities can also be attributed to the widespread use of e-mail,
instant messaging, and peer-to-peer file-sharing programs. E-mail may contain attachments that
serve as launch points for malicious software or unauthorized access to internal corporate
systems. Employees may use e-mail messages to transmit valuable trade secrets, financial data,
or confidential customer information to unauthorized recipients.
Is it safe to log onto a wireless network at an airport, library, or other public location? It depends
on how vigilant a user is. Even a home wireless network is vulnerable due to the fact that radio
frequency bands can be scanned.
Bluetooth and Wi-Fi networks are both susceptible to hacking by eavesdroppers. Local area
networks (LANs) can be easily penetrated by outsiders armed with laptops, wireless cards, and
hacking software. This practice is known as wardriving: searching for wireless networks, usually from a moving vehicle. Hackers utilize these tools to detect
unprotected networks, monitor network traffic, and in some cases, gain access to corporate
networks.
Wireless technology was designed to make it easy for stations to find and hear one another. The
service set identifiers (SSIDs) that identify the access points on a wireless network are
broadcast multiple times and can be detected fairly easily by an intruder’s sniffer programs.
In many cases, businesses do not have basic protections against wardriving. An intruder that has
connected to a wireless network is capable of accessing other resources on the network. For
example, the intruder could determine which other users are connected to the network and access
their data.
Computer Crime
The U.S. Department of Justice defines computer crime as “any violations of criminal law that
involve a knowledge of computer technology for their perpetration, investigation, or
prosecution.” (Kim, Newberger, & Shack, 2012) It is nearly impossible to estimate the
magnitude of the computer crime problem. According to the Ponemon Institute’s Second Annual
Cost of Cyber Crime Study sponsored by ArcSight, the median annual cost of cybercrime for the
organizations in the study was $11.7 million per year. (Ponemon Institute LLC, 2017) Businesses are often hesitant to report computer crimes because the crimes may involve employees, or
the business fears that publicizing its vulnerability will hurt its reputation.
Who are the nefarious actors of such heinous crimes? Hackers. A hacker is an individual or
group of individuals who intend to gain unauthorized access to a network or computer system.
Cracker is the term that is typically used to denote a hacker with criminal intent, although the
terms are often used interchangeably.
A sniffer is a type of eavesdropping program that monitors information traveling over a network.
A sniffer assists in identifying potential network issues or criminal activity when used
legitimately. When sniffers are used with criminal intent, they can be damaging and challenging
to detect. Sniffers enable hackers to steal information from anywhere on a network, including
proprietary company information, e-mail messages, and other confidential information.
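To make the idea of legitimate packet sniffing concrete, the short Python sketch below uses the third-party scapy library (an assumed dependency; any capture library would do) to count packets by protocol on a network one is authorized to monitor. It is a minimal diagnostic illustration, not a complete monitoring tool.

# Minimal sketch of a legitimate packet sniffer using scapy (assumed installed:
# pip install scapy). Run only on a network you own or are authorized to monitor.
from collections import Counter
from scapy.all import sniff, IP  # scapy provides sniff() and packet layers

protocol_counts = Counter()

def tally(packet):
    # Count packets by IP protocol number (6 = TCP, 17 = UDP, and so on).
    if IP in packet:
        protocol_counts[packet[IP].proto] += 1

# Capture 100 packets (typically requires administrator/root privileges), then report.
sniff(prn=tally, count=100)
print(protocol_counts)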
In a denial-of-service (DoS) attack, hackers flood a network server or web server with many
thousands of false requests for services in order to crash the network. The network receives so
many requests that it cannot keep up with them and is unavailable to service legitimate requests.
Hackers create botnets, or large groups of infected computers, by infecting other people’s computers with malware that gives the hacker control of those machines. The infected computer then becomes a slave, or zombie, serving a master computer belonging to someone else. Once hackers infect enough computers, they can use the amassed resources of the botnet to launch distributed denial-of-service (DDoS) attacks, phishing campaigns, or unsolicited “spam” e-mail. In 2017, over 12 million malicious
links were sent by botnets every day. (Symantec, 2018)
The fraudulent practice of directing Internet users to a bogus website is known as pharming.
The victim appears to go to the correct web address but is actually sent to a fake site, where he or she then unsuspectingly provides confidential information. Pharming is accomplished by gaining access to the Internet address information stored by Internet service providers whose servers run flawed or unsecured software.
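As a purely illustrative defense, the hedged Python sketch below compares the address a hostname currently resolves to against a known-good address recorded earlier; a mismatch is one very rough signal that name resolution may have been tampered with. The domain and the “expected” address are hypothetical placeholders, and legitimate sites often rotate addresses, so this is only a conceptual check.

# Naive pharming check: compare current DNS resolution against a previously
# recorded, known-good address. Hypothetical values for illustration only.
import socket

EXPECTED = {"example.com": "93.184.216.34"}  # assumed known-good mapping

def looks_suspicious(hostname: str) -> bool:
    resolved = socket.gethostbyname(hostname)   # address the resolver returns now
    expected = EXPECTED.get(hostname)
    return expected is not None and resolved != expected

if looks_suspicious("example.com"):
    print("Warning: example.com resolves to an unexpected address.")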
When a user clicks on an ad displayed by a search engine, the advertiser typically pays a fee for
each click. A potential buyer is then directed to the site. Click fraud occurs when an individual
or computer program fraudulently clicks on an online ad without any intention of making a
purchase. This forces advertisers to pay for clicks from visitors who will never become customers.
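Advertisers often screen for patterns such as many clicks arriving from the same source in a short window. The sketch below, written against a hypothetical in-memory click log, flags sources that exceed a simple threshold; real click-fraud detection is far more sophisticated.

# Toy click-fraud screen: flag any IP address that clicks an ad more than
# `threshold` times within `window` seconds. The click log is hypothetical.
from collections import defaultdict

def suspicious_sources(clicks, window=60, threshold=10):
    """clicks: list of (timestamp_seconds, ip_address) tuples."""
    by_ip = defaultdict(list)
    for ts, ip in clicks:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(ip)
                break
    return flagged

# 30 clicks from one address within 30 seconds is clearly suspicious.
print(suspicious_sources([(i, "203.0.113.7") for i in range(30)]))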
Identity theft can take many forms, from a stolen wallet to a store-wide data breach. Three
common causes include lost or stolen personal documents, insecure online data, and
companywide data breaches.
If a wallet or passport is stolen, the owner becomes susceptible to identity fraud. Identity thieves may
also steal mail or rummage through garbage to obtain documents containing social security or
bank account information.
An increasing amount of banking, shopping, and personal communication takes place online, which increases the risk of identity theft. For example, entering a credit card number on an insecure site or reusing the same password across multiple websites raises that risk further.
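One widely documented way for individuals to check whether a password has appeared in known breaches is the Pwned Passwords “k-anonymity” range API, which is sent only the first five characters of the password’s SHA-1 hash. The Python sketch below is a rough illustration of that flow; the endpoint and response format are taken from the service’s public documentation and should be verified before being relied upon.

# Rough sketch: check a password against the Pwned Passwords range API
# (https://api.pwnedpasswords.com). Only the first 5 hex characters of the
# SHA-1 hash leave the machine; the full password is never transmitted.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; look for our suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a commonly reused password scores very high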
Large data breaches are another prevalent way in which people fall victim to identity theft. Large retailers, for example, are targets for breaches that affect shoppers or registered users of large websites. In a typical data breach, cybercriminals hack into a company’s private records to obtain its customers’ personal data, which could include social security or credit card numbers.
There are warning signs that one has fallen victim to identity theft, such as strange charges on a credit card statement or a sudden additional line of credit. Companies affected by a data breach usually notify customers right away. At times, however, it can take months before identity fraud becomes apparent, for example, when the thief has used a victim’s personal information to set up a new line of credit.
The Dark Web is used by identity thieves and hackers alike to buy and sell compromised
personal information. Criminals can potentially use information that resides on the Dark Web to
carry out their nefarious activities.
Access to the dark web requires the use of a special anonymizing browser called Tor. Once a
user connects via the Tor browser, web page requests are routed through a series of proxy servers
operated by thousands of users around the globe. This anonymizing process renders a user’s IP address unidentifiable and untraceable, making it virtually impossible to identify the originating computer.
Dark web sites tend to look similar to any other site; however, there are some key differences. One distinct difference between the dark web and the regular Internet is the naming structure.
Instead of ending in .com or .org, dark web sites end in .onion. That is “a special-use top level
domain suffix designating an anonymous hidden service reachable via the Tor
network.” (Guccione, 2019) Browsers with the appropriate proxy can access these sites, while
other access attempts are denied.
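Assuming a local Tor client is running and exposing its usual SOCKS proxy on port 9050, a program can route ordinary web requests through the Tor network by pointing an HTTP library at that proxy. The sketch below uses the third-party requests library with SOCKS support (an assumed dependency, e.g. pip install requests[socks]); the .onion address shown is a placeholder, not a real hidden service.

# Sketch: routing an HTTP request through a locally running Tor client.
# Assumes Tor is listening on 127.0.0.1:9050 and requests[socks] is installed.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # "socks5h" so DNS lookups also go through Tor
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url: str) -> str:
    response = requests.get(url, proxies=TOR_PROXY, timeout=30)
    response.raise_for_status()
    return response.text[:200]  # first 200 characters, just to prove connectivity

# Placeholder .onion address; a real hidden-service URL would go here.
print(fetch_via_tor("http://exampleonionaddress.onion"))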
Dark web sites use a scrambled naming structure that creates difficult-to-remember URLs. In addition to having unintelligible URLs, dark web sites frequently change their addresses. A popular dark
web commerce site known as Dream Market goes by the unintelligible address of
“6khhxwj7viwe5xjm.onion.” Dream Market specializes in drugs, digital goods, and other
services.
Contrary to popular belief, there is a lighter side to the dark web. Not everything on the dark web
is illegal or nefarious in nature. The Tor network actually began as an anonymous
communications channel, and to this day still serves a valuable purpose in helping people
securely communicate in environments that are hostile to free speech.
The vulnerabilities of the Internet have also turned individuals and even entire nation states into
easy targets for politically-motivated hacking. Cyberwarfare is now a state-sponsored activity
designed to cripple and defeat other nations by penetrating computer networks with the intent of
causing damage and disruption.
In general, cyberwarfare attacks have become much more widespread, sophisticated, and
potentially devastating. The United States was the largest target for attacks in 2018, accounting
for 38% of all targeted attacks. (Symantec, 2018) Over the years, hackers have stolen plans for
missile tracking systems, satellite navigation devices, surveillance drones, and leading-edge jet
fighters. In 2017, the ransomware known as ‘WannaCry’ infected over 200,000 computers across
the globe in just three days. The ransomware forced hospitals and clinics in the United Kingdom to turn away patients because staff could not access their computers. ‘WannaCry’ affected at least 150 countries, including the United States, United
Kingdom, France, Spain, and Russia.
Cyberwarfare poses a severe threat to the infrastructure of modern societies, since their major financial, health, government, and industrial institutions rely on the Internet for daily operations.
Cryptocurrency Explained
The dark web has flourished thanks in part to Bitcoin. Bitcoin is the cryptocurrency that enables two parties to conduct a transaction without knowing each other’s identity. Bitcoin was first proposed in 2008 by Satoshi Nakamoto, a pseudonymous software developer, as an
electronic payment system based on mathematical proof. The goal was to produce a means of
currency exchange independent of any central authority.
Bitcoin is created and held electronically by computers all around the world and relies on
verification based on cryptography. While many cryptocurrencies exist today, Bitcoin is the first example of this growing asset class, which functions in some ways like traditional currency.
Bitcoin is decentralized, and no single institution controls the Bitcoin network. With Bitcoin,
there is a limited supply which is tightly controlled by the underlying algorithm. While senders
of traditional electronic payments can usually be traced, users of Bitcoin operate in semi-
anonymity. There is no central bank and users do not need to identify themselves when sending
Bitcoin to another user. In Bitcoin, when a transaction request is submitted, the protocol checks
all previous transactions to confirm that the sender has the necessary Bitcoin balance as well as
the authority to send them.
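The balance check described above can be illustrated with a toy ledger: to decide whether a transfer is valid, sum every prior transaction that credits or debits the sender’s address and compare the result with the amount being sent. The simplified Python sketch below ignores signatures, fees, and the unspent-transaction-output (UTXO) model that Bitcoin actually uses.

# Toy illustration of verifying that a sender has sufficient balance by
# replaying prior transactions. Real Bitcoin uses UTXOs and digital
# signatures; this is only a conceptual sketch with invented data.
ledger = [
    {"from": "coinbase", "to": "alice", "amount": 5.0},
    {"from": "alice", "to": "bob", "amount": 2.0},
]

def balance(address, transactions):
    received = sum(t["amount"] for t in transactions if t["to"] == address)
    spent = sum(t["amount"] for t in transactions if t["from"] == address)
    return received - spent

def validate_and_append(tx, transactions):
    # Accept the transaction only if the sender's replayed balance covers it.
    if balance(tx["from"], transactions) >= tx["amount"]:
        transactions.append(tx)
        return True
    return False

print(validate_and_append({"from": "alice", "to": "carol", "amount": 2.5}, ledger))  # True: 3.0 available
print(validate_and_append({"from": "alice", "to": "carol", "amount": 9.0}, ledger))  # False: only 0.5 left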
The Bitcoin wallet address is the only way of identifying each user. It is possible for transactions
to be tracked, but it is very difficult. Unlike traditional electronic transactions, Bitcoin transactions cannot be reversed.
Bitcoin has undoubtedly been a significant factor in the growth of the dark web with nearly all
dark web commerce sites conducting transactions in Bitcoin or some variant of cryptocurrency.
Both end users and information systems specialists are significant sources of vulnerabilities for
information management systems. End-users, for example, can introduce errors by entering
faulty data or by not following the proper instructions for processing data and using computer
equipment (He & King, 2008).
Businesses have valuable information assets to protect. Systems often house confidential
information about an individual, which can include taxes, financial assets, medical records, and personal information about family members, as well as job performance reviews. Systems can also
contain information on corporate operations, new product development plans, trade secrets, and
marketing strategies. Government systems may store information on intelligence operations or
weapons information. Each of these information assets has tremendous value, and the
repercussions can be devastating if they are lost or placed in the wrong hands. Systems that are
unable to function due to security breaches, disasters, or malfunctioning technology can
permanently impact a company’s financial health. According to Inc., 60 percent of all businesses will not recover from application or data losses, and almost 50 percent of small businesses fall victim to a cyber attack (Koulopoulos, 2017).
Inadequate security and controls can result in loss of company trade secrets or even serious legal
liability. Today businesses, in addition to their own information assets, must also protect those of
customers, employees, and business partners. Failure to do so may open the business to costly
litigation. For example, in 2014, Home Depot suffered a massive data breach that resulted in
over 50 million customers’ e-mails and/or credit card information being compromised. Three
years later, Home Depot settled with banks and agreed to pay $25 million in damages (Roberts,
2017).
Protecting information assets with sound security and control measures was once seen as an
unnecessary expense but should, in fact, be viewed as a high-yield return on investment.
With today’s technology, we can now uniquely identify individuals amidst mass data sets and
streams. It is possible for companies and governments to monitor every conversation conducted,
every commercial transaction undertaken, and every location visited. The right to privacy is
articulated in many of the major international and regional human rights instruments. The Universal Declaration of Human Rights (UDHR) 1948, Article 12: “No one shall be subjected
to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon
his honour and reputation. Everyone has the right to the protection of the law against such
interference or attacks.”
International Covenant on Civil and Political Rights (ICCPR) 1966, Article 17: “1. No one
shall be subjected to arbitrary or unlawful interference with his privacy, family, home or
correspondence, nor to unlawful attacks on his honour or reputation. 2. Everyone has the right to
the protection of the law against such interference or attacks.”
Global companies, unfortunately, experience hundreds of data breaches every year. News
about the misuse of user data has become increasingly concerning to stakeholders. In addition to
new privacy regulations that now require companies to protect user privacy, a growing number
of businesses are looking deeper into the issue of privacy. In modern business, privacy is
becoming a pillar of corporate responsibility.
In modern society, the debate around privacy is a debate about modern freedoms. While
technology has always been intertwined with this right, businesses now have more legal
responsibility to protect the privacy of both customers and employees. The capabilities that now
exist for surveillance are without precedent, making the need to protect privacy greater today than ever before.
Organizations currently face two powerful trends: the growing pressure of regulatory
requirements on data protection that are emerging worldwide, and the increasing awareness of
privacy rights by customers.
Recent U.S. government regulations are forcing businesses to take security and control more
seriously. This is done by mandating the protection of data from abuse, exposure, and
unauthorized access. Companies now face legal obligations for the retention and storage of
electronic records as well as for privacy protection.
In the health care industry, for example, a business will need to comply with the Health
Insurance Portability and Accountability Act (HIPAA) of 1996. HIPAA outlines medical
privacy and security rules and procedures for the administration of health care billing as well as
automating the transfer of health care data between health care providers, payers, and plans.
HIPAA requires members of the health care industry to retain patient information for six years
and ensure the confidentiality of those records. It specifies privacy, security, and electronic
transaction standards for health care providers handling patient information.
Businesses providing financial services need to comply with the Financial Services
Modernization Act of 1999, better known as the Gramm-Leach-Bliley Act after its congressional
sponsors. The Gramm-Leach-Bliley Act requires financial institutions to ensure the
confidentiality and security of customer data. Storage of data must be done on a secure medium,
and special security measures must be enforced to protect data on storage media and during
transmittal.
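As one illustration of such “special security measures” for data at rest, the sketch below encrypts a customer record before writing it to disk using the third-party cryptography package’s Fernet recipe (an assumed dependency). A real deployment would also have to address key management, access control, and encryption in transit.

# Minimal sketch of encrypting customer data before storage, using the
# `cryptography` package's Fernet recipe (assumed installed: pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, keep this in a key management system
cipher = Fernet(key)

record = b"account=12345678; ssn=000-00-0000"   # fabricated example data
token = cipher.encrypt(record)                   # ciphertext that is safe to write to disk

with open("customer_record.enc", "wb") as f:
    f.write(token)

# Later, an authorized process holding the key can recover the plaintext.
print(cipher.decrypt(token))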
Publicly traded companies must comply with the Public Company Accounting Reform and
Investor Protection Act of 2002, better known as the Sarbanes-Oxley Act. Following the
financial scandals of Enron and WorldCom, this act was designed to protect investors. It places
responsibility on companies and their management to safeguard the accuracy and integrity of
financial information.
Fundamentally, Sarbanes-Oxley is meant to ensure that internal controls are in place to govern the creation and documentation of information in financial statements. Because information systems
are used to generate, store, and transport such data, the legislation requires businesses to consider
the security of information management systems and other controls necessary to ensure integrity,
confidentiality, and accuracy of the data.
The European Union’s General Data Protection Regulation (GDPR) contains many essential items, including increased fines, breach notifications, opt-in consent, and responsibility for data transfer outside the European Union. GDPR applies to
all organizations holding and processing an EU resident’s personal data, regardless of geographic
location.
Fines for noncompliance can be as high as €20 million or 4% of a company’s total global
revenue, whichever is greater. The most severe violations include not having proper customer
consent to process data or by not providing privacy considerations when designing a particular
system. (Lahiri, 2018)
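The “whichever is greater” rule can be expressed directly: the maximum possible fine is the larger of €20 million and 4 percent of total global revenue. The short calculation below illustrates this with made-up revenue figures.

# GDPR maximum fine: the greater of EUR 20 million or 4% of global annual revenue.
def max_gdpr_fine(global_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * global_revenue_eur)

# Hypothetical companies, for illustration only:
print(max_gdpr_fine(2_000_000_000))   # 80,000,000 -> the 4% figure applies
print(max_gdpr_fine(100_000_000))     # 20,000,000 -> the flat EUR 20M floor applies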
The most common type of electronic evidence is e-mail. In the case of legal action, a business is
obligated to respond to a discovery request for access to information that could potentially be
used as evidence. The business is then required by law to produce the data requested. Courts can
now impose severe financial or even criminal penalties for the improper destruction of electronic
documents.
Electronic evidence may reside on computer storage media in forms that are not easily
recognized by the average user. Computer forensics experts try to recover hidden data for
presentation as evidence. A level of computer forensics awareness should be incorporated into a
business’s contingency planning process. As a best practice, the information management
systems staff and corporate legal counsel need to have a plan that can be executed if a legal need
arises.
Planning is the first phase of the systems development process. It is used to determine whether
there is a need for a new system to achieve a business’s strategic objectives and whether the
business has the ability to acquire the resources required to build a system. This step is used to
determine the scope of the problem and identify solutions. Resources, costs, time, benefits and
other expenses should be considered at this stage.
The second phase is systems analysis and requirements in which the business focuses on the
source of their problem or need for change. Possible solutions are carefully analyzed to help
determine whether they meet the functional requirements of the project. The needs of the end
users are also examined to ensure the new system can meet their expectations as well. Systems
analysis is vital in determining what the needs of the business are, how those needs will be met,
and who is responsible for parts of the project, as well as establishing a timeline.
The third phase, systems design, describes the necessary specifications, features, and operations
that will satisfy the functional requirements of the proposed system. In this step, end users will
discuss and determine their specific business information needs for the proposed system. It’s
during this phase that end users will consider the essential components, structure, processing, and
procedures for the system to accomplish its objectives.
The fourth phase, development, is when the system is actually built and the approved design is translated into program code. The fifth phase, integration and testing, involves systems integration and system testing, which is typically carried out by a quality assurance professional. It is determined in this phase whether the
proposed design meets the initial set of business goals. Testing will be repeated to check for
errors until the end user finds it acceptable. Verification and validation are also an essential part
of this phase, both of which will help ensure the program’s successful completion.
The sixth phase, implementation, involves the actual installation of the newly-developed system.
In this phase, the project is put into production by moving the data and components from the old
system into the new system. Once implementation is complete, the system is then said to be in
production.
Operations and maintenance is the seventh and final phase. It involves maintenance and
regularly required updates. End users can fine-tune the system to meet additional user
requirements if needed. Maintenance of a newly implemented system includes supporting needed
changes to a production system to correct errors, meet new requirements, or improve processing
as needed.
Like the systems development life cycle, there are many frameworks that can be adopted for
software development. The waterfall method, for example, is a steady sequence of activity that
flows in a downward direction much like its name suggests. This traditional engineering process
closes each phase upon completion.
An adaptation of the waterfall model is the v-shaped model in which testing is incorporated as
an important part to the close of each phase.
The prototype method builds numerous small-scale prototypes that allow different elements to be tested before they are fully developed.
A hybrid of the prototype method is the rapid application development (RAD) method. This
method reduces the focus on initial planning to rapidly prototype and test potential solutions.
The spiral method provides more process steps, which are graphically viewed in a spiral
formation and is considered to offer greater flexibility and process adaptation.
Agile methods are iterative development approaches that emphasize frequent feedback and include Kanban, Extreme Programming (XP), and the Dynamic Systems Development Method (DSDM).
When a full website designed for the desktop is shrunk to the size of a smartphone screen, it is difficult for the user to navigate the site. The user must continually zoom in and out and
scroll to find relevant material. Companies have resorted to designing websites specifically for
mobile interfaces to resolve this issue. At times this requires a developer to create multiple
mobile sites to meet the needs of smartphones, tablets, and desktop browsers.
One alternative to the problem of having three different websites is to utilize responsive web
design. Responsive websites automatically change sizes and layouts according to the visitor’s
screen resolution, whether on a desktop, tablet, or smartphone. This approach uses a mix of
flexible grids and layouts, flexible images, and media queries that optimize the design for
different viewing contexts. When the user switches from a laptop to a mobile device, for
example, the website automatically accommodates the changing resolution and image size. By
utilizing responsive web design, the need for separate design and development is eliminated.
Users will access a single source of content, laid out to be read and navigated with ease.
The application of knowledge, skills, tools, and techniques to achieve specific targets within the
specified time and budgetary constraints is referred to as project management. Activities
associated with project management include planning the work, assessing risk, estimating
resources required to accomplish the job, organizing the work, acquiring human and material
resources, assigning tasks, directing activities, controlling project execution, reporting progress,
and analyzing the results.
The scope of a project is created to define what work is or is not included in a project. When beginning a project, project management determines all the work required to complete it successfully. Project managers should ensure that the scope of a project does not grow beyond what
was initially intended.
Cost is another significant component of project management. Cost is based on the time it takes
to complete a project multiplied by the cost of human resources required to complete the project.
Information management systems project costs often include the cost of hardware, software, and
workspace. Project management not only develops a budget for the project but also monitors the
ongoing expenses of a project.
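A rough cost estimate along these lines simply multiplies effort by labor rates and adds the other project expenses. The figures in the sketch below are invented purely for illustration.

# Back-of-the-envelope project cost estimate (all figures hypothetical).
def project_cost(duration_weeks, team, hardware=0, software=0, workspace=0):
    """team: list of (hours_per_week, hourly_rate) tuples, one per person."""
    labor = sum(hours * rate * duration_weeks for hours, rate in team)
    return labor + hardware + software + workspace

team = [(40, 75.0), (40, 60.0), (20, 90.0)]   # e.g., developer, analyst, part-time architect
print(project_cost(duration_weeks=12, team=team,
                   hardware=15_000, software=8_000, workspace=5_000))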
Quality, another component of project management, serves as an indicator of how well the result
of a project satisfies the objectives of the project. The quality of information systems projects is
usually determined based on whether there is improved performance and decision making within
a business.
The final major component of project management is risk. Risk refers to potential problems
threatening the success of a project. These potential problems may prevent a project from
achieving its objectives by lowering the quality of project outputs, increasing time and cost, or
preventing the project from being completed entirely.
Project Risk
Information management systems projects differ dramatically in their size, scope, and level of
complexity. This is known as project risk. Project size, structure, and the level of technical
expertise of the project team members all influence project risk.
The larger the project size, the greater the project risk. Project size is typically indicated by the
dollars spent, the amount of implementation staff involved, the time allocated, and the number of
organizational components affected. Large-scale systems projects have a failure rate 50 to 75 percent higher than that of other projects because these projects are complex and difficult to control. The organizational complexity of the system (how many departments use it and how much it influences business processes) contributes to the complexity of large-scale systems
projects. In addition, there are few consistently reliable techniques for estimating the time and
cost to develop large-scale information systems.
Project structure also contributes to project risk. Some projects are more highly structured than
others. Highly structured projects provide clear requirements so outputs and processes can be
easily defined. In a highly structured project, users know exactly what they want and what the
system should do. Projects with structured requirements run a much lower risk than those with
undefined and constantly changing requirements.
The project risk increases if the project team and the information system staff lack the required
technical expertise. If the team lacks the appropriate experience with the technology proposed for the project, it is highly likely that the project will take more time to complete because of the need to first learn the technology. Other common risk factors within IMS projects are usually
organizational: dealing with the complexity of requirements, the project scope, and how much of
the organization will be affected by the project's impact.
A Gantt chart lists project activities as well as their corresponding start and completion dates.
The Gantt chart will also help visually represent the timing and duration of different tasks as well
as resource requirements. Each task is depicted as a horizontal bar in which the length is
proportional to the time required to complete it. While Gantt charts show when project activities
begin and end, they do not depict task dependencies, or how tasks should be ordered.
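A Gantt chart can be approximated in a few lines: each task gets a horizontal bar whose position and length come from its start date and duration. The sketch below prints a text-only version for three invented tasks.

# Text-only Gantt sketch: one row per task, bar length proportional to duration.
# Task data (name, start_week, duration_weeks) is invented for illustration.
tasks = [
    ("Requirements", 0, 3),
    ("Design",       3, 4),
    ("Build",        7, 6),
]

for name, start, duration in tasks:
    bar = " " * start + "#" * duration
    print(f"{name:<14}|{bar:<15}| weeks {start}-{start + duration}")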
Program Evaluation and Review Technique or a PERT chart graphically depicts project tasks
and their interrelationships. PERT charts list the specific activities that make up a project as well
as project dependencies. The PERT chart portrays a project as a diagram consisting of numbered
nodes representing project tasks. Each node is numbered and shows the task, duration, starting
date, and completion date. The direction of the arrows on the lines indicates the sequence of tasks and shows which activities must be completed before the next activity can begin.
PERT charts for large projects can be difficult to interpret at times, and project managers often
use both techniques. These project management techniques help managers identify bottlenecks in
the project and determine the impact specific problems will have on project completion times.
They can also help systems developers partition projects into smaller, more manageable
segments with defined, measurable business results. Standard control techniques can successfully
chart the progress of the project against budgets and target dates, so deviations from the plan can be spotted easily.
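The dependency information captured in a PERT chart is what makes it possible to compute the earliest finish time of the whole project and to spot the tasks that cannot slip without delaying it. The sketch below runs a simple forward pass over a small, invented task network.

# Forward pass over a PERT-style task network (durations in days, data invented).
# Each task lists its duration and the tasks that must finish before it starts.
tasks = {
    "A": {"duration": 3, "depends_on": []},
    "B": {"duration": 5, "depends_on": ["A"]},
    "C": {"duration": 2, "depends_on": ["A"]},
    "D": {"duration": 4, "depends_on": ["B", "C"]},
}

earliest_finish = {}

def finish(name):
    # Earliest finish = task duration + latest earliest-finish among predecessors.
    if name not in earliest_finish:
        preds = tasks[name]["depends_on"]
        start = max((finish(p) for p in preds), default=0)
        earliest_finish[name] = start + tasks[name]["duration"]
    return earliest_finish[name]

project_length = max(finish(t) for t in tasks)
print(earliest_finish)           # {'A': 3, 'B': 8, 'C': 5, 'D': 12}
print("Project length:", project_length)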
Microsoft Office Project has become the most widely used project management software today, with capabilities for producing PERT and Gantt charts and for supporting critical path analysis, resource allocation, project tracking, and status reporting. Microsoft Project also tracks the way
changes in one aspect of a project affect others. Project Professional provides collaborative
project management capabilities when used with Microsoft Office Project Server. Project Server
is tightly integrated with Microsoft’s SharePoint Services, a collaborative workspace platform.
Open source versions of project management software, such as Open Workbench and OpenProj,
provide a project management solution at an affordable cost.
While project management software helps organizations track individual projects, project
portfolio management software helps organizations manage entire portfolios of projects and the dependencies among different projects. Project portfolio management software helps managers
compare project proposals against resource capacity to determine the optimal mix and
sequencing of projects that best achieve the organization’s strategic goals.
Managing the change surrounding the introduction of a new information management system
effectively can be challenging. In the implementation process, typically the systems analyst is a
change agent within the organization. The change agent communicates with users, mediates
between competing interest groups, and ensures that the organizational adjustment to such
changes is complete.
Encouraging user participation in the design and operation of information systems can help
facilitate positive organizational change. For example, if users are heavily involved in systems
design, they have more opportunities to create the system according to their priorities. This
involvement typically results in a higher rate of buy-in by the user, and in turn, users are more
likely to react positively to the completed system.
If an information systems project has the backing and commitment of management, it is more
likely to be embraced by users within the organization. Management backing also ensures that a
systems project receives sufficient funding and the resources required to be successful. If a
manager considers a new system a priority, the system will more likely be treated that way by his
or her subordinates.
Governance is ensuring that organizational activities, like managing IT operations, are aligned
in a way that supports the organization's business goals. Risk management entails ensuring that any risk associated with corporate activities is identified and addressed in a way that supports the organization's business goals. In the IT context, this means having a comprehensive IT risk
management process that rolls into an organization's enterprise risk management function. The
focus on compliance is to ensure that organizational activities are operated lawfully and within
regulation. In the information management systems context, this means making sure that IT
systems, and the data contained in those systems, are secured and appropriately used.
IT Governance
Information management systems governance provides a structure for aligning IT strategy with
business strategy. Formal IT governance programs are implemented that provide a framework of
best practices and controls to ensure that organizations meet both internal and external
requirements. As an alternative to a business creating its own framework, it can be time-saving
to use a framework that has already been established by experts in the industry. Many of these
frameworks include implementation guides to help organizations phase in an IT governance
program with fewer challenges along the way.
The Capability Maturity Model Integration (CMMI) method was developed by the Software
Engineering Institute. CMMI is an approach that focuses on performance improvement. CMMI
uses a scale to gauge an organization's performance, quality and profitability maturity level.
Factor Analysis of Information Risk (FAIR), is a relatively new model that helps
organizations quantify risk. The focus is on cybersecurity and operational risk, with the goal of
making more well-informed decisions.
IT Compliance
In the United States, there are many regulatory statutes enacted by Congress. Businesses
providing products and services in the United States are expected to adhere to these regulations.
In some cases, business executives take on personal responsibility for lawful adherence and can
be held personally liable. In addition to federal policies, many companies must comply with
international standards. It can be challenging to identify which laws, regulations, or mandates are
required. Typically, the legal team, C-Suite executives, and compliance officer are charged with
determining the scope of compliance.
Some of the better-known compliance standards discussed previously include HIPAA, the Gramm-Leach-Bliley Act, the Sarbanes-Oxley Act, and GDPR.
The plan will need to contain a statement of corporate goals and specify how information technology will support the attainment of those goals. Specific target dates and milestones will be set in the plan that can be used later to evaluate the plan’s progress. The plan will need to indicate key management decisions related to hardware acquisition, telecommunications, and data, or any other technological change that could potentially impact the organization.
Organizational changes are also usually described in the information systems plan, including
training requirements, business process changes, and changes in authority, structure, or
management practice.