

UNIT 2

GENERAL NETWORK CONCEPTS

IP
As noted, IP is a ``network layer'' protocol.
This is the layer that allows the hosts to actually ``talk'' to each other.
Its responsibilities include carrying datagrams, mapping an Internet address (such as 10.2.3.4) to a physical
network address (such as 08:00:69:0a:ca:8f), and routing, which takes care of making sure that
all of the devices that have Internet connectivity can find the way to each other.
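One part of routing can be sketched concretely: a host first decides whether a destination address is on its own network or must be handed to a router. The following is a minimal illustration using Python's standard `ipaddress` module; the addresses and the /24 prefix are invented example values.

```python
import ipaddress

# A hypothetical host configuration: the host 10.2.3.4 sits on a /24 network.
network = ipaddress.ip_network("10.2.3.0/24")

def same_network(destination: str) -> bool:
    """Return True if the destination is directly reachable on the local
    network, False if the datagram must be handed to a router."""
    return ipaddress.ip_address(destination) in network

print(same_network("10.2.3.99"))      # on the same /24: deliver directly
print(same_network("93.184.216.34"))  # elsewhere: forward to a router
```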

Before we start talking about networking in any depth, we should define
some basic terms that you will see throughout this guide, and in other guides and
documentation regarding networking.

These terms will be expanded upon in the appropriate sections that follow:

Connection: In networking, a connection refers to pieces of related information that are
transferred through a network. This generally implies that a connection is built before the
data transfer (by following the procedures laid out in a protocol) and then is
deconstructed at the end of the data transfer.

Packet: A packet is, generally speaking, the most basic unit that is transferred over a network. When
communicating over a network, packets are the envelopes that carry your data (in pieces)
from one end point to the other.

Packets have a header portion that contains information about the packet, including the source and
destination, timestamps, network hops, and so forth. The main portion of a packet contains
the actual data being transferred. It is sometimes called the body or the payload.
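The header/payload split can be made concrete with a short sketch. The byte layout below is an invented toy format (not a real protocol header), chosen only to show a header carrying source, destination, a timestamp, and the payload length in front of the body.

```python
import struct

# A toy packet layout, for illustration only:
# 4-byte source address, 4-byte destination address,
# 4-byte timestamp, 2-byte payload length, then the payload itself.
HEADER = struct.Struct("!4s4sIH")

def build_packet(src: bytes, dst: bytes, ts: int, payload: bytes) -> bytes:
    return HEADER.pack(src, dst, ts, len(payload)) + payload

def parse_packet(packet: bytes):
    src, dst, ts, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return src, dst, ts, payload

pkt = build_packet(b"\x0a\x02\x03\x04", b"\x0a\x02\x03\x05", 1700000000, b"hello")
print(parse_packet(pkt))
```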

Network Interface: A network interface can refer to any kind of software interface to networking
hardware. For instance, if you have two network cards in your computer,
you can control and configure each network interface associated with them individually.

A network interface may be associated with a physical device, or it may be a representation of a virtual
interface. The "loopback" device, which is a virtual interface to the local machine, is an example of
this.
LAN: LAN stands for "local area network". It refers to a network or a portion of a network that
is not publicly accessible to the greater internet. A home or office network is an example of a LAN.

WAN: WAN stands for "wide area network". It means a network that is much more
extensive than a LAN. While WAN is the relevant term to use to describe large, dispersed
networks in general, it is usually meant to mean the internet, as a whole.

If an interface is said to be connected to the WAN, it is generally assumed
that it is reachable through the internet.

Protocol: A protocol is a set of rules and standards that basically define
a language that devices can use to communicate. There are a great number of
protocols in widespread use in networking, and they are often implemented in
different layers.

Some low-level protocols are TCP, UDP, IP, and ICMP. Some familiar examples of application
layer protocols, built on these lower protocols, are HTTP (for accessing web content),
SSH, TLS/SSL, and FTP.

Port: A port is an address on a single machine that can be tied to a specific piece of
software. It is not a physical interface or location, but it allows your server to be able
to communicate using more than one application.
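The idea that one machine can host several communicating applications, distinguished only by port number, can be sketched with two server sockets. The port numbers 50080 and 50022 below are arbitrary values chosen for the illustration.

```python
import socket

# Two applications on the same host, told apart only by their port numbers.
# Ports 50080 and 50022 are made-up example values for this sketch.
web_app = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssh_app = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

web_app.bind(("127.0.0.1", 50080))
ssh_app.bind(("127.0.0.1", 50022))

# Same IP address, two independent endpoints.
print(web_app.getsockname())  # ('127.0.0.1', 50080)
print(ssh_app.getsockname())  # ('127.0.0.1', 50022)

web_app.close()
ssh_app.close()
```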

Firewall: A firewall is a program that decides whether traffic coming into a server or going
out should be allowed. A firewall usually works by creating rules for which type of
traffic is acceptable on which ports. Generally, firewalls block ports that are not used
by a specific application on a server.
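A port-based rule table of the kind described here can be sketched in a few lines. The allowed ports (22 for SSH, 443 for HTTPS) are conventional examples, not a recommendation.

```python
# A toy rule table: allow only the ports that services on this host use.
# 22 (SSH) and 443 (HTTPS) are conventional example choices.
ALLOWED_PORTS = {22, 443}

def filter_packet(dst_port: int) -> str:
    """Return the action a simple port-based firewall would take."""
    return "ACCEPT" if dst_port in ALLOWED_PORTS else "DROP"

print(filter_packet(443))   # ACCEPT
print(filter_packet(8080))  # DROP
```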

NAT: NAT stands for network address translation. It is a way to translate requests that
are incoming into a routing server to the relevant devices or servers that it knows about in
the LAN. This is usually implemented in physical LANs as a way to route requests through
one IP address to the necessary backend servers.
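The translation step can be sketched as a lookup table that rewrites the destination of an incoming request. All addresses and ports below are invented example values (203.0.113.0/24 and 192.168.0.0/16 are documentation/private ranges).

```python
# A toy destination-NAT (port forwarding) table for a router that owns one
# public IP address. The addresses and ports are made-up examples.
PUBLIC_IP = "203.0.113.10"
FORWARDING = {
    (PUBLIC_IP, 80): ("192.168.1.20", 8080),  # web server inside the LAN
    (PUBLIC_IP, 25): ("192.168.1.30", 25),    # mail server inside the LAN
}

def translate(dst_ip: str, dst_port: int):
    """Rewrite an incoming request's destination, as a NAT router would;
    unknown destinations pass through unchanged."""
    return FORWARDING.get((dst_ip, dst_port), (dst_ip, dst_port))

print(translate("203.0.113.10", 80))  # ('192.168.1.20', 8080)
```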

VPN: VPN stands for virtual private network. It is a means of connecting separate LANs
through the internet, while maintaining privacy. This is used as a means of connecting
remote systems as if they were on a local network, often for security reasons.
There are many other terms that you may come across, and this list cannot be
exhaustive. We will explain other terms as we need them. At this point, you should understand some
basic, high-level concepts that will enable us to better discuss the topics to come.

NETWORK VULNERABILITIES

A vulnerability is any weakness in a system that can be exploited by attackers. Common network vulnerabilities include:


•Missing patches
•Weak or default passwords
•Misconfigured firewall rulebases
•Port Sealing
•Mobile devices
•Packet Capture
•USB Flash Drives

NETWORK SERVICES AND NETWORK DEVICES

•Network services run at the application layer of the network stack and above.

•Features they provide:
–data storage
–manipulation
–presentation
–communication
•Implemented using a client-server or peer-to-peer architecture

Other network services include:


•Directory services
•e-Mail
•File sharing
•Instant messaging
•Online game
•Printing
•File server
•Voice over IP
•Video on demand
•Video telephony
•World Wide Web
•Simple Network Management Protocol
•Time service
•Wireless sensor network
TYPES OF NETWORK SERVICES

•Remote Access
•File Management
•Configuration and Management
•Print Services
•Information
•Communication

NETWORK DEVICES

A hub is one of the most basic networking devices. It works at the physical layer and hence
connects networking devices physically together.

Hubs are fundamentally used in networks that use twisted-pair cabling to connect devices. They
are designed to transmit the packets to the other attached devices without altering any of the
transmitted packets received. They act as pathways along which electrical signals travel.
They transmit the information regardless of whether the data packet is destined for the attached
device or not.

Hubs fall into two categories:

Active Hub: Active hubs are smarter than passive hubs. They not only provide the path for the
data signals, they also regenerate, concentrate and strengthen the signals before sending
them on to their destinations. Active hubs are also termed 'repeaters'.

Passive Hub: Passive hubs are more like a contact point for the wires built into the physical network.
They have nothing to do with modifying the signals.
Ethernet Hubs

An Ethernet hub is a device connecting multiple Ethernet devices together so that they perform the
functions of a single unit. Hubs vary in speed in terms of data transfer rate.

Ethernet utilizes Carrier Sense Multiple Access with Collision Detection (CSMA/CD) to control media
access. An Ethernet hub communicates in half-duplex mode, where data collisions are
all but inevitable.

Switches

Switches are the linkage points of an Ethernet network. Just as with a hub, devices are
connected to a switch through twisted-pair cabling. The difference shows up in the way
the two devices treat the data they receive.

A hub works by sending the data to all the ports on the device, whereas a switch transfers it only to
the port which is connected to the destination device. A switch does so by having built-in
learning of the MAC addresses of the devices connected to it. Since the paths of data
signals are well defined in a switch, network performance is consequently enhanced.
Switches operate in full-duplex mode, where devices can send and receive data from the switch
simultaneously, unlike in half-duplex mode. Full-duplex operation effectively doubles the
throughput compared to an Ethernet hub, turning a 10Mbps connection into an effective 20Mbps and a
100Mbps connection into an effective 200Mbps. Performance improvements have been observed in
networks with the extensive usage of switches in modern days.
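The MAC-learning behaviour that separates a switch from a hub can be sketched in a few lines. The MAC addresses and port numbers below are made-up examples.

```python
# A toy learning switch: it records which port each source MAC address was
# seen on, and forwards a frame only to the learned port for its destination.
# Unknown destinations are flooded to all other ports, exactly like a hub.
mac_table = {}

def handle_frame(src_mac: str, dst_mac: str, in_port: int, num_ports: int = 4):
    """Return the list of ports the frame is sent out of."""
    mac_table[src_mac] = in_port  # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]  # known destination: one port, like a switch
    return [p for p in range(num_ports) if p != in_port]  # unknown: flood

print(handle_frame("aa:aa", "bb:bb", in_port=0))  # dst unknown: flood [1, 2, 3]
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # dst learned: [0]
```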

The following methods elucidate further how data transmission takes place via switches:

● Cut-through transmission: This allows packets to be forwarded as soon as they are
received. The method is prompt and quick, but error checking gets
overlooked in this kind of packet data transmission.
● Store and forward: In this switching environment the entire packet is received and
'checked' before being forwarded on. Errors are thus eliminated before being
propagated further. The downside of this process is that error checking takes
relatively longer, consequently making it a bit slower in processing and
delivering.
● Fragment free: In a fragment-free switching environment, a greater part of the packet
is examined so that the switch can determine whether the packet has been caught up
in a collision. After the collision status is determined, the packet is forwarded.
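The store-and-forward trade-off can be sketched as: receive the whole frame, verify its checksum, and forward only if it is clean. The trailing 4-byte CRC layout below is an invented illustration, not the real Ethernet frame format.

```python
import zlib

# A toy store-and-forward check: the whole frame is buffered, its checksum
# is verified, and only clean frames are forwarded. The trailing 4-byte CRC
# layout is invented for this sketch, not the real Ethernet frame format.
def make_frame(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward(frame: bytes) -> bool:
    """Return True if the frame checks out and would be forwarded."""
    payload, crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

good = make_frame(b"hello")
bad = b"\x00" + good[1:]          # simulate corruption in transit
print(store_and_forward(good))    # True: forwarded
print(store_and_forward(bad))     # False: error caught before propagating
```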

Bridges

A bridge is a computer networking device that builds a connection with other bridged
networks that use the same protocol. It works at the data link layer of the OSI model,
connects different networks together and develops communication between them. It connects
two local-area networks: two physical LANs into a larger logical LAN, or two segments of the
same LAN that use the same protocol.

Apart from building up larger networks, bridges are also used to segment larger networks into
smaller portions. The bridge does so by placing itself between the two portions of two physical
networks and controlling the flow of the data between them. Bridges decide whether to forward the
data after inspecting the MAC addresses of the devices connected to each segment. The
forwarding of the data depends on whether the destination
address resides on some other interface. A bridge has the capacity to block the incoming flow of data as
well. Today, learning bridges have been introduced that build a list of the MAC addresses on each
interface by observing the traffic on the network. This is a leap forward from
manually recording MAC addresses.

Types of Bridges:

There are mainly three types into which bridges can be characterized:

● Transparent Bridge: As the name signifies, it appears to be transparent to the
other devices on the network. The other devices are ignorant of its existence. It only
blocks or forwards the data as per the MAC address.
● Source Route Bridge: It derives its name from the fact that the path the packet
takes through the network is implanted within the packet. It is mainly used in
Token Ring networks.
● Translational Bridge: The process of conversion takes place via a translational
bridge. It converts the data format of one network to another, for instance
Token Ring to Ethernet and vice versa.
Switches superseding Bridges:

Ethernet switches are gaining ground compared to bridges. They are succeeding on
account of the logical divisions and segments they provide in the networking field. In fact,
switches are referred to as multi-port bridges because of their advanced functionality.

Routers

Routers are network layer devices and are particularly identified as Layer 3 devices of the OSI
model. They process logical addressing information in the network header of a packet, such as
IP addresses. A router is used to create larger, complex networks by complex traffic routing. It has
the ability to connect dissimilar LANs using the same protocol. It also has the ability to limit the
flow of broadcasts. A router primarily comprises a hardware device, or a computer
system that has more than one network interface and routing software.

Functionality:

When a router receives data, it determines the destination address by reading the header of
the packet. Once the address is determined, it searches its routing table to find out how to
reach the destination, and then forwards the packet to the next hop on the route. The hop could
be the final destination or another router.
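The routing-table lookup can be sketched using longest-prefix match, the rule real routers apply when several routes cover a destination. The routes and next hops below are invented examples.

```python
import ipaddress

# A toy routing table. When several routes match a destination, the most
# specific (longest-prefix) route wins. Addresses and hops are examples.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),  "next hop 10.0.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "next hop 10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "default gateway"),
]

def lookup(dst: str) -> str:
    """Pick the matching route with the longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in routes if addr in net]
    return max(matches)[1]

print(lookup("10.1.2.3"))       # next hop 10.1.0.1 (the /16 beats the /8)
print(lookup("93.184.216.34"))  # default gateway
```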

Brouters

Brouters are a combination of bridge and router. They take up the functionality of both
networking devices, serving as a bridge when forwarding data between networks and
serving as a router when routing data to individual systems. A brouter functions as a filter that
allows some data into the local network and redirects unknown data to the other network.
Brouters are rare, and their functionality is now commonly embedded into routers configured to act
as bridges as well.

Gateways

A gateway is a device which is used to connect multiple networks and passes packets from one
network to the other. Acting as the 'gateway' between different networking systems or
computer programs, a gateway is a device which forms a link between them. It allows
computer programs, either on the same computer or on different computers, to share information
across the network through protocols. A router is also a gateway, since it interprets data from one
network protocol to another.

Other devices, such as bridges, convert the data into different forms between two networking systems;
a software application then converts the data from one format into another. A gateway is a viable
tool to translate the data format, although the data itself remains unchanged. A gateway might be
installed in some other device to add its functionality to it.

Network card

Network cards, also known as Network Interface Cards (NICs), are hardware devices that connect
a computer to the network. They are installed on the motherboard. They are responsible for
developing a physical connection between the network and the computer. Computer data is
translated into electrical signals sent to the network via the Network Interface Card.

They can also manage some important data-conversion functions. These days network cards are
software configured, unlike in earlier days when they had to be configured manually. Even if
the NIC doesn't come with its software, the latest drivers or the associated software can
be downloaded from the internet as well.

Modems

A modem is a device which converts the digital signals generated by a computer into
analog signals so they can travel over phone lines. The 'modulator-demodulator', or modem,
can be used as a dial-up connection for a LAN or to connect to an ISP. Modems can be external, as in a
device which connects to the USB or the serial port of a computer, or proprietary devices for
handheld gadgets and other equipment, as well as internal, in the form of add-in expansion cards for
computers and PCMCIA cards for laptops.

Configuration differs for external and internal modems. For internal
modems, an IRQ (interrupt request) is used to configure the modem, along with an I/O
memory address. Typically, before the installation of a built-in modem, the integrated serial interfaces
are disabled, simultaneously assigning them the COM2 resources.

For an external modem, the modem assigns and uses the resources itself. This is
especially useful for USB port and laptop users, as the simpler nature of the
process makes it far more convenient for daily usage.

Upon installation, the second step to ensure the proper working of a modem is the installation of
drivers. The modem's working speed and processing depend on two factors:

● the speed of the UART (Universal Asynchronous Receiver/Transmitter) chip installed in
the computer to which the modem is connected
● the speed of the modem itself

INTERNET SECURITY AND VULNERABILITIES


https://www.insightsonindia.com/security-issues/cyber-security/various-cyber-threats/whats-the-difference-between-malware-trojan-virus-and-worm/

https://lansupport.co.uk/resources/what-is-malware/

Viruses use executable files to spread.

Worms take advantage of system flaws to carry out their attacks.

A Trojan horse is a type of malware that runs inside a program and is passed off as utility
software.

LONG ANSWER

● Copyright Infringement
● Child Pornography
● Piracy
● Cyberextortion
● DoS (Denial of Service)
● Identity Theft
● Phishing
● Carding
Malware is a blanket term for a malicious piece of software, such as a virus, a worm,
ransomware, or spyware. These are used by cybercriminals to cause destruction and
steal your sensitive data.
There are three main types of malware:
A worm – A worm is a one-off piece of malicious software that travels from computer to
computer by reproducing itself.
A trojan – Unlike a worm, a trojan does not reproduce itself. It masks itself as something
that you are familiar with, so you accidentally activate it, and then it starts spreading.
A virus – A virus is a specific piece of computer code that inserts itself into the code of another program
and starts spreading so the malicious action can be done.
Scareware
Scareware is something that plants itself into your system and immediately informs you that you have
hundreds of infections which you don't have. The idea is to trick you into purchasing a
bogus anti-malware product that claims to remove those threats. It is all about cheating you out of
your money, but the approach is a little different here because it scares you into buying.

Keylogger
A keylogger keeps a record of every keystroke you make on your keyboard. A keylogger is a
very powerful threat used to steal people's login credentials, such as usernames and passwords. It is also
usually a sub-function of a powerful Trojan.

Adware
Adware is a form of threat where your computer starts popping up a lot of advertisements. These can
range from non-adult to adult material, because any ads will make the host some money. It is
not a really harmful threat, but it can be pretty annoying.

Backdoor
A backdoor is not really malware in itself, but a method: once a system is vulnerable
to it, an attacker is able to bypass all the regular authentication services. It is usually
installed before any virus or Trojan infection, because having a backdoor installed eases the
transfer of those threats.

Wabbits
A wabbit is another self-replicating threat, but it does not work like a virus or a worm. It does not harm
your system like a virus, and it does not replicate via your LAN like a worm. An
example of a wabbit attack is the fork bomb, a form of DoS attack.
Exploit
An exploit is a piece of software which is programmed specifically to attack a certain vulnerability.
For instance, if your web browser is vulnerable through some out-dated, vulnerable Flash plugin, an
exploit will work only on that web browser and plugin. The way to avoid running into exploits is
to always patch your software, because software patches are there to fix vulnerabilities.

Botnet
A botnet is something which is installed by a botmaster to take control of all the computer bots via
the botnet infection. It mostly infects through drive-by downloads or even Trojan infection. The
result of this threat is that the victim's computer, which becomes a bot, will be used for a large-scale
attack like DDoS.

Obfuscated Spam
To be really honest, obfuscated spam is spam mail. It is obfuscated in the sense that it does not
look like a spam message, so that it can trick the potential victim into clicking it. Spam
mail today looks very genuine, and if you are not careful, you might just fall for what it is
offering.

Pharming
Pharming works more or less like phishing, but it is a little trickier. There are two types of
pharming. One is DNS poisoning, where your DNS lookups are compromised and all your
traffic is redirected to the attacker's server. The other type of pharming is to edit your HOSTS
file, so that even if you type www.google.com in your web browser, it will still redirect you to
another site. One thing they have in common is that both are equally dangerous.
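Why a poisoned HOSTS file works can be sketched in a few lines: hosts-style entries are consulted before any DNS query, so a single planted entry silently redirects a correctly typed name. The attacker address below is an invented example from a documentation range.

```python
# A toy name-resolution step: a hosts-style table is consulted before DNS,
# so one poisoned entry redirects a correctly typed name. The attacker IP
# 198.51.100.66 is an invented example address.
hosts_file = {
    "localhost": "127.0.0.1",
    "www.google.com": "198.51.100.66",  # poisoned entry added by malware
}

def resolve(name: str) -> str:
    if name in hosts_file:  # hosts entries win over DNS
        return hosts_file[name]
    return "(would fall through to a real DNS query)"

print(resolve("www.google.com"))  # 198.51.100.66, not Google at all
```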

Crimeware
Crimeware is a form of malware that takes control of your computer to commit a computer
crime. Instead of the hacker committing the crime himself, he plants a Trojan (or whatever the
malware may be called) to make your computer commit the crime instead. This keeps the hacker
himself clean of whatever crime has been committed.

Dialer
This threat is no longer popular today, but looking at the technology of ten or more years back, when
we still accessed the internet using dial-up modems, it was quite a popular threat. What it does is
make use of your internet modem to dial international numbers, which is pretty costly.
Today, this type of threat is more popular on Android, because malware can make use of the phone
to send SMS messages to premium numbers.

Dropper
As the name suggests, a dropper is designed to drop into a computer and install something useful
to the attacker, such as malware or a backdoor. There are two types of dropper. One
immediately drops and installs its payload so as to avoid antivirus detection. The other drops only a
small file, which then automatically triggers a download process to fetch the
malware.

Fake AV
The fake antivirus threat became very popular among Mac users. Because
Mac users seldom face a virus infection, scaring them with a message telling them
that their computer is infected with a virus is pretty effective, and often results in them purchasing a
bogus antivirus which does nothing.

Cookies
Cookies are not really malware. They are just something used by most websites to store data on
your computer. They are listed here because they have the ability to store things on your computer and
track your activities within the site. If you really don't like the existence of cookies, you can
choose to reject cookies for sites which you do not know.

Bluesnarfing
Bluesnarfing is all about gaining unauthorized access to a specific mobile phone, laptop, or
PDA via its Bluetooth connection. Through such unauthorized access, personal items such as
photos, calendar entries, contacts and SMS messages can all be revealed and probably even stolen.

Bluejacking
Bluejacking also uses Bluetooth technology, but it is not as serious as bluesnarfing. What it
does is connect to your Bluetooth device and send a message to another Bluetooth
device. It is not damaging to your privacy or device system compared to the
bluesnarfing threat.

Phishing and Countermeasures


Phishing email messages, websites, and phone calls are designed to steal money. Cybercriminals
can do this by installing malicious software on your computer or stealing personal information
off of your computer.
Cybercriminals also use social engineering to convince you to install malicious software or hand
over your personal information under false pretenses. They might email you, call you on the
phone, or convince you to download something off of a website.

What does a phishing email message look like?

Here is an example of what a phishing scam in an email message might look like.

Spelling and bad grammar: Cybercriminals are not known for their grammar and spelling.
Professional companies or organizations usually have a staff of copy editors that will not allow a
mass email like this to go out to their users. If you notice mistakes in an email, it might be a scam.
Beware of links in email: If you see a link in a suspicious email message, don't click on it. Rest
your mouse (but don't click) on the link to see if the address matches the link that was typed in
the message. Hovering often reveals the real web address, and a string of cryptic numbers that
looks nothing like the company's web address is a strong warning sign.

[Image: a phishing scam's masked web address]

Links might also lead you to .exe files. These kinds of files are known to spread malicious
software.
Threats: Have you ever received a threat that your account would be closed if you didn't
respond to an email message? The email message shown above is an example of the same trick.
Cybercriminals often use threats that your security has been compromised.
Spoofing popular websites or companies: Scam artists use graphics in email that appear to be
connected to legitimate websites but actually take you to phony scam sites or legitimate-looking
pop-up windows.
Cybercriminals also use web addresses that resemble the names of well-known companies but
are slightly altered.
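One of the telltales above, the link's visible text not matching where it really points, can be sketched as a small check. Both URLs used here are invented examples.

```python
from urllib.parse import urlparse

# A toy check for one phishing tell: the text a link displays naming a
# different host than the href it actually points to. Example URLs only.
def link_is_suspicious(display_text: str, real_href: str) -> bool:
    """Flag links whose visible text names a different host than the href."""
    shown = urlparse(display_text).hostname or display_text
    actual = urlparse(real_href).hostname or real_href
    return shown.lower() != actual.lower()

print(link_is_suspicious("https://www.example.com",
                         "https://www.example.com/login"))   # False: hosts match
print(link_is_suspicious("https://www.example.com",
                         "http://198.51.100.7/steal"))       # True: mismatch
```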
Denial of service
A denial of service attack (DoS attack) is a type of cybercrime in which an Internet site is made
unavailable, typically by using multiple computers to repeatedly make requests that tie up the
site and prevent it from responding to requests from legitimate users. In computing, a denial-of-service
attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network
resource unavailable to its intended users, for example by disrupting a wireless or
wired internet connection or causing long-term denial of access to the web or any internet services.

Virus
In biology, a virus is an infective agent that typically consists of a nucleic acid molecule in a protein coat, is too small
to be seen by light microscopy, and is able to multiply only within the living cells of a host.
OR
In computing, a virus is a piece of code which is capable of copying itself and typically has a detrimental effect,
such as corrupting the system or destroying data.
WORMS
A computer worm is a standalone malware computer program that replicates itself in order to
spread to other computers. Often, it uses a computer network to spread itself, relying on security
failures on the target computer to access it.

Trojan horse
In computing, a Trojan horse, or Trojan, is any malicious computer program which misleads
users as to its true intent. The term is derived from the Ancient Greek story of the deceptive
wooden horse that led to the fall of the city of Troy. Ransomware attacks are often carried out
using a Trojan.
Spyware
Spyware is software that aims to gather information about a person or organization without their
knowledge that may send such information to another entity without the consumer's consent, or
that asserts control over a device without the consumer's knowledge.
Adware
Adware is any software application in which advertising banners are displayed while a program
is running. The ads are delivered through pop-up windows or bars that appear on the program's
user interface. Adware is commonly created for computers, but may also be found on mobile
devices.

Malware
Short for "malicious software", malware refers to software programs designed to damage or perform
other unwanted actions on a computer system.

Different types of malware

The term malware includes viruses, worms, Trojan horses, rootkits, spyware, keyloggers and
more. To get an overview of the difference between all these types of threats and the way they
work, it makes sense to divide them into groups:

Viruses and worms – the contagious threat


Viruses and worms are defined by their behavior – malicious software designed to spread
without the user’s knowledge. A virus infects legitimate software and when this software is used
by the computer owner it spreads the virus – so viruses need you to act before they can spread.
Computer worms, on the other hand, spread without user action. Both viruses and worms can
carry a so-called “payload” – malicious code designed to do damage.

Trojans and Rootkits – the masked threat


Trojans and rootkits are grouped together as they both seek to conceal attacks on computers.
Trojan Horses are malignant pieces of software pretending to be benign applications. Users
therefore download them thinking they will get a useful piece of software and instead end up
with a malware infected computer. Rootkits are different. They are a masking technique for
malware, but do not contain damaging software. Rootkit techniques were invented by virus
writers to conceal malware, so it could go unnoticed by antivirus detection and removal
programs. Today, antivirus products, like BullGuard Internet Security, strike back as they come
with effective rootkit removal tools.

Spyware and key loggers – the financial threat


Spyware and keyloggers are malware used in malicious attacks like identity theft, phishing and
social engineering - threats designed to steal money from unknowing computer users, businesses
and banks.
The latest security reports for the first quarter of 2011 put Trojan infections at the top of the
malware list, with more than 70% of all malicious files detected on computer systems, followed
by the traditional viruses and worms.
The popularity of rogue antiviruses has been decreasing over the end of 2010 and beginning of
2011, but the number of downloader Trojans significantly increased. The detection rates of new
malware have increased 15% in the first quarter of 2011 compared to the last quarter of 2010.

HTTP
HTTP means HyperText Transfer Protocol. HTTP is the underlying protocol used by the World
Wide Web and this protocol defines how messages are formatted and transmitted, and what
actions Web servers and browsers should take in response to various commands. The Hypertext
Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, and
hypermedia information systems. HTTP is the foundation of data communication for the World
Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes
containing text.
For example, when you enter a URL in your browser, this actually sends an HTTP command to
the Web server directing it to fetch and transmit the requested Web page. The other main
standard that controls how the World Wide Web works is HTML, which covers how Web pages
are formatted and displayed.
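The request/response exchange described above can be made concrete. The sketch below shows the text a browser actually sends for a URL and the status line a server replies with; the host name is a generic example domain, and no network connection is made.

```python
# A minimal sketch of the message format HTTP defines: the request a
# browser sends when you enter a URL. www.example.com is a placeholder.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# The server's reply begins with a status line the browser acts on.
response_status = "HTTP/1.1 200 OK"
version, code, reason = response_status.split(" ", 2)
print(code, reason)  # 200 OK
```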
HTTPS
Hyper Text Transfer Protocol Secure (HTTPS) is the secure version of HTTP, the protocol over
which data is sent between your browser and the website that you are connected to. The 'S' at the
end of HTTPS stands for 'Secure'. It means all communications between your browser and the
website are encrypted. HTTPS (also called HTTP over Transport Layer Security [TLS],
HTTP over SSL, and HTTP Secure) is a communications protocol for secure communication
over a computer network which is widely used on the Internet. HTTPS consists of
communication over Hypertext Transfer Protocol (HTTP) within a connection encrypted by
Transport Layer Security, or its predecessor, Secure Sockets Layer. The main motivation for
HTTPS is authentication of the visited website and protection of the privacy and integrity of the
exchanged data. In its popular deployment on the internet, HTTPS provides authentication of the
website and associated web server with which one is communicating, which protects against
man-in-the-middle attacks. Additionally, it provides bidirectional encryption of communications
between a client and server, which protects against eavesdropping and tampering with or forging
the contents of the communication. In practice, this provides a reasonable guarantee that one is
communicating with precisely the website that one intended to communicate with (as opposed to
an impostor), as well as ensuring that the contents of communications between the user and site
cannot be read or forged by any third party.
Electronic Mail

Installation of authentication and encryption certificates on the e-mail system

Any user desiring to transfer secure e-mail with a specific identified external user may request to
exchange public keys with the external user by contacting the Privacy Officer or appropriate
personnel. Once verified, the certificate is installed on each recipient workstation, and the two
may safely exchange secure e-mail.

Use of WinZip encrypted and zipped e-mail

This software allows Practice personnel to exchange e-mail with remote users who have the
appropriate encryption software on their system. The two users exchange private keys that will
be used to both encrypt and decrypt each transmission. Any Practice staff member who desires to
utilize this technology may request this software from the Privacy Officer or appropriate
personnel.
Defining Security

Computer security means protecting information. It deals with the prevention and detection of
unauthorized actions by users of a computer. Lately it has been extended to include privacy,
confidentiality, and integrity. For example:

• Chinese Foreign Ministry spokesman Zhu Bangzao rejected allegations that China stole
U.S. nuclear secrets, saying such claims are meant to undermine China-U.S. relations.
Meanwhile, a CIA-led task force was assessing how much damage may have been done to U.S.
national security after a Chinese scientist at the Los Alamos National Laboratory in New Mexico
allegedly shared nuclear secrets.

• Two parties agree and seal their transaction using digital signatures. The signature cannot
be ruled invalid by state legislature or other law-making bodies because it uniquely identifies the
individuals involved.

• You visit a Website and the site collects more personal information than you are willing
to divulge or the site distributes data to outside parties. By doing this, it compromises your
privacy and opens your world to other parties.

This definition implies that you have to know the information and the value of that information
in order to develop protective measures. You also need to know which individuals need
unique identities and how much information may be divulged to the outside world. A rough
classification of protective measures in computer security is as follows:

• Prevention—Take measures that prevent your information from being damaged, altered,
or stolen. Preventive measures can range from locking the server room door to setting up high-
level security policies.

• Detection—Take measures that allow you to detect when information has been damaged,
altered, or stolen, how it has been damaged, altered, or stolen, and who has caused the damage.
Various tools are available to help detect intrusions, damage or alterations, and viruses.

• Reaction—Take measures that allow recovery of information, even if information is lost
or damaged.
The above measures are all very well, but if you do not understand how information may be
compromised, you cannot take measures to protect it. You must examine the ways in which
information can be compromised:

• Confidentiality. The prevention of unauthorized disclosure of information. This can be
the result of poor security measures or information leaks by personnel. An example of poor
security measures would be to allow anonymous access to sensitive information.

• Integrity. The prevention of erroneous modification of information. Authorized users are
probably the biggest cause of errors and omissions and the alteration of data. Storing incorrect
data within the system can be as bad as losing data. Malicious attackers also can modify, delete,
or corrupt information that is vital to the correct operation of business functions.

• Availability. The prevention of unauthorized withholding of information or resources.
This does not apply just to personnel withholding information. Information should be as freely
available as possible to authorized users.

• Authentication. The process of verifying that users are who they claim to be when
logging onto a system. Generally, the use of usernames and passwords accomplishes this. More
sophisticated is the use of smart cards and retina scanning. The process of authentication does
not grant the user access rights to resources—this is achieved through the authorization process.

• Authorization. The process of allowing only authorized users access to sensitive
information. An authorization process uses the appropriate security authority to determine
whether a user should have access to resources.
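The distinction between authentication and authorization can be made concrete in a few lines. In this hypothetical sketch (the user store, salt, and permission table are all invented for illustration), authentication verifies a salted password hash, while authorization separately consults a role table:

```python
import hashlib
import hmac

# Hypothetical user store: salted password hashes plus a role per user.
SALT = b"demo-salt"
USERS = {
    "alice": {"pw_hash": hashlib.sha256(SALT + b"s3cret").hexdigest(), "role": "admin"},
    "bob":   {"pw_hash": hashlib.sha256(SALT + b"hunter2").hexdigest(), "role": "clerk"},
}
# Which roles may access which resources (the "security authority").
PERMISSIONS = {"payroll": {"admin"}, "timesheet": {"admin", "clerk"}}

def authenticate(username: str, password: str) -> bool:
    """Verify the user is who they claim to be. Grants no access by itself."""
    user = USERS.get(username)
    if user is None:
        return False
    candidate = hashlib.sha256(SALT + password.encode()).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(candidate, user["pw_hash"])

def authorize(username: str, resource: str) -> bool:
    """Decide whether an already-authenticated user may use a resource."""
    user = USERS.get(username)
    return user is not None and user["role"] in PERMISSIONS.get(resource, set())
```

Note that a successful authenticate() call proves identity only; access to the payroll resource is still refused for bob, exactly as the two definitions above separate the processes.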

History of Security

Computers and networks originally were built to ease the exchange of information. Early
information technology (IT) infrastructures were built around central computers or mainframe
solutions, while others were developed around the personal computer. What some thought
impossible became reality, and today businesses are driven by the power of the personal
computer, which users access with just a username and password.
What will information security be like in the 21st century? The nature of computing has changed
over the last few years. Networks are designed and built to facilitate the sharing and distribution
of data and information. Controlling access to these resources can become a problem because
you need to balance the requirement for access to free information with the value of the content
of that information.

Some information is more sensitive in nature than other information; this leads to the need for
security requirements. Today, IT security has progressed to more than just usernames and
passwords. It involves digital identities, biometric authentication methods, and modular security
strategies.

The easiest of these to relate to is the smart card. These are tamper-resistant devices that store
security information. They are similar to a credit card with a built-in microprocessor and
memory used for identification or financial transactions. When the user inserts it into a reader, it
transfers data to and from a central computer. It is more secure than a magnetic stripe card and
can be programmed to self-destruct if the wrong password is entered too many times. As a
financial transaction card, it can be loaded with digital money and used like a traveler’s check,
except that variable amounts of money can be spent until the balance is zero.

The Need for Security

Administrators normally find that putting together a security policy that restricts both users and
attacks is time consuming and costly. Users also become disgruntled at heavy security
policies that make their work difficult for no discernible reason, causing bad politics within the
company. Planning an audit policy on huge networks takes up both server resources and time,
and often administrators take no note of the audited events. A common attitude among users is
that if no secret work is being performed, why bother implementing security?

There is a price to pay when a half-hearted security plan is put into action. It can result in
unexpected disaster. A password policy that allows users to use blank or weak passwords is a
hacker's paradise. No firewall or proxy protection between the organization's private local area
network (LAN) and the public Internet makes the company a target for cyber crime.
Organizations will need to determine the price they are willing to pay in order to protect data and
other assets. This cost must be weighed against the costs of losing information and hardware and
disrupting services. The idea is to find the correct balance. If the data needs minimal protection
and the loss of that data is not going to cost the company, then the cost of protecting that data
will be less. If the data is sensitive and needs maximum protection, then the opposite is normally
true.

Security Threats

Introduction

The first part of this section outlines security threats and briefly describes the methods, tools, and
techniques that intruders use to exploit vulnerabilities in systems to achieve their goals. The
section discusses a theoretical model and provides some real life scenarios. The appendixes give
detailed analyses of the various aspects and components that are discussed in this section.

Security Threats, Attacks, and Vulnerabilities

Information is the key asset in most organizations. Companies gain a competitive advantage by
knowing how to use that information. The threat comes from others who would like to acquire
the information or limit business opportunities by interfering with normal business processes.

The object of security is to protect valuable or sensitive organizational information while making
it readily available. Attackers trying to harm a system or disrupt normal business operations
exploit vulnerabilities by using various techniques, methods, and tools. System administrators
need to understand the various aspects of security to develop measures and policies to protect
assets and limit their vulnerabilities.

Attackers generally have motives or goals—for example, to disrupt normal business operations
or steal information. To achieve these motives or goals, they use various methods, tools, and
techniques to exploit vulnerabilities in a computer system or security policy and controls.

Goal + Method + Vulnerabilities = Attack.

These aspects will be discussed in more detail later in this section.


Security Threats

Figure 1 introduces a layout that can be used to break up security threats into different areas.

Natural Disasters

Nobody can stop nature from taking its course. Earthquakes, hurricanes, floods, lightning, and
fire can cause severe damage to computer systems. Information can be lost, downtime or loss of
productivity can occur, and damage to hardware can disrupt other essential services. Few
safeguards can be implemented against natural disasters. The best approach is to have disaster
recovery plans and contingency plans in place. Other threats such as riots, wars, and terrorist
attacks could be included here. Although they are human-caused threats, they are classified as
disastrous.

Human Threats

Malicious threats consist of inside attacks by disgruntled or malicious employees and outside
attacks by non-employees just looking to harm and disrupt an organization.

The most dangerous attackers are usually insiders (or former insiders), because they know many
of the codes and security measures that are already in place. Insiders are likely to have specific
goals and objectives, and have legitimate access to the system. Employees are the people most
familiar with the organization's computers and applications, and they are most likely to know
what actions might cause the most damage. Insiders can plant viruses, Trojan horses, or worms,
and they can browse through the file system.

The insider attack can affect all components of computer security. By browsing through a
system, confidential information could be revealed. Trojan horses are a threat to both the
integrity and confidentiality of information in the system. Insider attacks can affect availability
by overloading the system's processing or storage capacity, or by causing the system to crash.

People often refer to these individuals as "crackers" or "hackers." The definition of "hacker" has
changed over the years. A hacker was once thought of as any individual who enjoyed getting the
most out of the system he or she was using. A hacker would use a system extensively and study
it until he or she became proficient in all its nuances. This individual was respected as a source
of information for local computer users, someone referred to as a "guru" or "wizard."

Now, however, the term hacker refers to people who either break into systems for which they
have no authorization or intentionally overstep their bounds on systems for which they do not
have legitimate access.

The correct term to use for someone who breaks into systems is a "cracker." Common methods
for gaining access to a system include password cracking, exploiting known security weaknesses,
network spoofing, and social engineering.

Malicious attackers normally will have a specific goal, objective, or motive for an attack on a
system. These goals could be to disrupt services and the continuity of business operations by
using denial-of-service (DoS) attack tools. They might also want to steal information or even
steal hardware such as laptop computers. Hackers can sell information that can be useful to
competitors.
In 1996, a laptop computer was stolen from an employee of Visa International that contained
314,000 credit card accounts. The total cost to Visa for just canceling the numbers and replacing
the cards was $6 million.

Attackers are not the only ones who can harm an organization. The primary threat to data
integrity comes from authorized users who are not aware of the actions they are performing.
Errors and omissions can cause valuable data to be lost, damaged, or altered. Non-malicious
threats usually come from employees who are untrained in computers and are unaware of
security threats and vulnerabilities. Users who open Microsoft Word documents using
Notepad, edit the documents, and then save them could cause serious damage to the information
stored in the document.

Users, data entry clerks, system operators, and programmers frequently make unintentional
errors that contribute to security problems, directly and indirectly. Sometimes the error is the
threat, such as a data entry error or a programming error that crashes a system. In other cases,
errors create vulnerabilities. Errors can occur in all phases of the system life cycle.
The following table gives some examples of the various aspects discussed above.

Threats:
• Employees (malicious or ignorant)
• Non-employees (outside attackers)
• Natural disasters (floods, earthquakes, hurricanes, riots and wars)

Motives/Goals:
• Deny services
• Steal information
• Alter information
• Damage information
• Delete information
• Make a joke
• Show off

Methods:
• Social engineering
• Viruses, Trojan horses, worms
• Packet replay
• Packet modification
• IP spoofing
• Mail bombing
• Various hacking tools
• Password cracking

Security policies must weigh:
• Vulnerabilities
• Assets
• Information and data
• Productivity
• Hardware
• Personnel

With ignorant employees, the damage is accidental. Malicious attackers, however, can deceive
ignorant employees by using "social engineering" to gain entry. The attacker could masquerade
as an administrator and ask for passwords and usernames. Employees who are not well trained
and not security aware can fall for this.

Motives, Goals, and Objectives of Malicious Attackers

There is a strong overlap between physical security and data privacy and integrity. Indeed, the
goal of some attacks is not the physical destruction of the computer system but the penetration
and removal or copying of sensitive information. Attackers want to achieve these goals either for
personal satisfaction or for a reward.

Here are some methods that attackers use:

• Deleting and altering information. Malicious attackers who delete or alter information
normally do this to prove a point or take revenge for something that has happened to them. Inside
attackers normally do this to spite the organization because they are disgruntled about something.
Outside attackers might want to do this to prove that they can get into the system or for the fun
of it.

April 27, 2000: Cheng Tsz-chung, 22, was put behind bars last night after changing the password
on another user's account and then demanding $500 (Hong Kong currency) to change it back.
The victim paid the money and then contacted police. Cheng has pleaded guilty to one charge of
unauthorized access of a computer and two counts of theft. The magistrate remanded Cheng in
custody and said his sentence, which will be handed down on May 10 pending reports, must have
a deterrent effect. Cheng's lawyer told Magistrate Ian Candy that his client committed the
offenses "just for fun."

• Committing information theft and fraud. Information technology is increasingly used to
commit fraud and theft. Computer systems are exploited in numerous ways, both by automating
traditional methods of fraud and by using new methods. Financial systems are not the only ones
subject to fraud. Other targets are systems that control access to any resources, such as time and
attendance systems, inventory systems, school grading systems, or long-distance telephone
systems.

• Disrupting normal business operations. Attackers may want to disrupt normal business
operations. In any circumstance like this, the attacker has a specific goal to achieve. Attackers
use various methods for denial-of-service attacks; the section on methods, tools, and techniques
will discuss these.

Methods, Tools, and Techniques for Attacks


Attacks = motive + method + vulnerability.

Malicious attackers can gain access or deny services in numerous ways. Here are some of them:

• Viruses. Attackers can develop harmful code known as viruses. Using hacking
techniques, they can break into systems and plant viruses. Viruses in general are a threat to any
environment. They come in different forms and although not always malicious, they always take
up time. Viruses can also be spread via e-mail and disks.

• Trojan horses. These are malicious programs or software code hidden inside what looks
like a normal program. When a user runs the normal program, the hidden code runs as well. It
can then start deleting files and causing other damage to the computer. Trojan horses are
normally spread by e-mail attachments. The Melissa virus, which disrupted e-mail systems
worldwide in 1999 by overloading mail servers, spread in this way, hidden in what appeared to
be a normal Word document.

• Worms. These are programs that run independently and travel from computer to
computer across network connections. Worms may have portions of themselves running on
many different computers. Worms do not change other programs, although they may carry other
code that does.

• Password cracking. This is a technique attackers use to surreptitiously gain system
access through another user's account. This is possible because users often select weak
passwords. The two major problems with passwords are that they are easy to guess based on
knowledge of the user (for example, a wife's maiden name) and that they are susceptible to
dictionary attacks (that is, using a dictionary as the source of guesses).
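A dictionary attack succeeds because the attacker only needs to hash each candidate word with the same algorithm the system used and compare the results. A minimal sketch (the word list, the SHA-256 scheme, and the stolen hash are all illustrative assumptions):

```python
import hashlib

def crack(stolen_hash, wordlist):
    """Hash each dictionary word and compare against the stolen hash.
    Returns the recovered password, or None if no word matches."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == stolen_hash:
            return word
    return None

# A user picked a dictionary word as a password; the system stored only
# its hash, but the hash alone is enough for the attacker to test guesses.
stolen = hashlib.sha256(b"sunshine").hexdigest()
guesses = ["password", "letmein", "sunshine", "qwerty"]
```

Here crack(stolen, guesses) recovers "sunshine" on the third guess; a long random password would not appear in any word list, which is why password policies forbid dictionary words.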

• Denial-of-service attacks. This attack exploits the need to have a service available. It is a
growing trend on the Internet because Web sites in general are open doors ready for abuse.
People can easily flood a Web server with traffic in order to keep it busy. Companies connected
to the Internet should therefore prepare for DoS attacks, which are difficult to trace and can be
used to mask other types of attacks.

• E-mail hacking. Electronic mail is one of the most popular features of the Internet. With
access to Internet e-mail, someone can potentially correspond with any one of millions of people
worldwide. Some of the threats associated with e-mail are:
• Impersonation. The sender address on Internet e-mail cannot be trusted because the
sender can create a false return address. Someone could have modified the header in transit, or
the sender could have connected directly to the Simple Mail Transfer Protocol (SMTP) port on
the target computer to enter the e-mail.

• Eavesdropping. E-mail headers and contents are transmitted in clear text if no
encryption is used. As a result, the contents of a message can be read or altered in transit. The
header can be modified to hide or change the sender, or to redirect the message.

• Packet replay. This refers to the recording and retransmission of message packets in the
network. Packet replay is a significant threat for programs that require authentication sequences,
because an intruder could replay legitimate authentication sequence messages to gain access to a
system. Packet replay is frequently undetectable, but can be prevented by using packet time
stamping and packet sequence counting.
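The time-stamping and sequence-counting defense mentioned above can be sketched as a receiver that rejects any packet whose sequence number has already been seen or whose timestamp is too old. The 30-second freshness window is an arbitrary assumption for illustration:

```python
import time

class ReplayGuard:
    """Reject packets that are duplicates or too stale to be fresh."""

    def __init__(self, max_age_seconds=30.0):
        self.max_age = max_age_seconds
        self.seen_sequences = set()

    def accept(self, sequence, timestamp, now=None):
        """Return True only for fresh, never-before-seen packets."""
        now = time.time() if now is None else now
        if now - timestamp > self.max_age:
            return False  # too old: likely a recorded packet replayed later
        if sequence in self.seen_sequences:
            return False  # duplicate sequence number: a replay
        self.seen_sequences.add(sequence)
        return True
```

For this check to be meaningful in practice, the sequence number and timestamp must be covered by the message's authentication code, so an intruder cannot simply rewrite them on a recorded packet.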

• Packet modification. This involves one system intercepting and modifying a packet
destined for another system. Packet information may not only be modified, it could also be
destroyed.

• Eavesdropping. This allows a cracker (hacker) to make a complete copy of network
activity. As a result, a cracker can obtain sensitive information such as passwords, data, and
procedures for performing functions. It is possible for a cracker to eavesdrop by wiretapping,
using radio, or using auxiliary ports on terminals. It is also possible to eavesdrop using software
that monitors packets sent over the network. In most cases, it is difficult to detect eavesdropping.

• Social engineering. This is a common form of cracking. It can be used by outsiders and
by people within an organization. Social engineering is a hacker term for tricking people into
revealing their password or some form of security information.

• Intrusion attacks. In these attacks, a hacker uses various hacking tools to gain access to
systems. These can range from password-cracking tools to protocol hacking and manipulation
tools. Intrusion detection tools often can help to detect changes and variants that take place
within systems and networks.
• Network spoofing. In network spoofing, a system presents itself to the network as though
it were a different system (computer A impersonates computer B by sending B's address instead
of its own). The reason for doing this is that systems tend to operate within a group of other
trusted systems. Trust is imparted in a one-to-one fashion; computer A trusts computer B (this
does not imply that system B trusts system A). Implied with this trust is that the system
administrator of the trusted system is performing the job properly and maintaining an appropriate
level of security for the system. Network spoofing occurs in the following manner: if computer A
trusts computer B and computer C spoofs (impersonates) computer B, then computer C can gain
otherwise-denied access to computer A.
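The address-based trust failure described above can be shown directly: if the access decision keys only on the claimed source address, a spoofed packet from computer C is indistinguishable from a genuine one from computer B. The addresses below are placeholders invented for the sketch:

```python
# Hypothetical address-based trust table on computer A.
TRUSTED_ADDRESSES = {"10.0.0.2"}  # computer B's address

def accept_connection(claimed_source):
    """Computer A's naive check: trust is decided purely by the source
    address the packet claims, and the sender controls that field."""
    return claimed_source in TRUSTED_ADDRESSES

# Computer B connects normally and is accepted.
legit = accept_connection("10.0.0.2")
# Computer C (really 10.0.0.9) forges B's address and is also accepted:
# nothing in the check distinguishes the spoof from the genuine packet.
spoofed = accept_connection("10.0.0.2")
# An honest connection from C's own address is denied.
rejected = accept_connection("10.0.0.9")
```

This is why address-based trust must be reinforced with cryptographic authentication: the check needs to verify something the spoofer cannot forge, not just a field the sender writes.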

Application Security

Application security encompasses measures taken to improve the security of an application,
often by finding, fixing, and preventing security vulnerabilities.

Different techniques are used to surface such security vulnerabilities at different stages of an
application's lifecycle, such as design, development, deployment, upgrade, or maintenance.

An always-evolving but largely consistent set of common security flaws is seen across different
applications.

Techniques

Different techniques will find different subsets of the security vulnerabilities lurking in an
application and are most effective at different times in the software lifecycle. They each
represent different tradeoffs of time, effort, cost and vulnerabilities found.

Whitebox security review, or code review. A security engineer develops a deep understanding of
the application by manually reviewing the source code and noticing security flaws. Through this
comprehension, vulnerabilities unique to the application can be found.

Blackbox security audit. The application is tested for security vulnerabilities solely through use,
with no source code required.

Design review. Before code is written, the team works through a threat model of the application,
sometimes alongside a spec or design document.

Tooling. Many automated tools exist that test for security flaws, often with a higher false
positive rate than having a human involved.
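In its simplest static form, the tooling mentioned above is pattern matching over source text. The toy scanner below makes that concrete; the flagged patterns are illustrative inventions, and real static analyzers parse the code rather than grep it, which is how they reduce the false-positive rate:

```python
import re

# Hypothetical rule set: regex pattern -> why a match is flagged.
RULES = {
    r"\beval\s*\(": "eval() executes arbitrary code",
    r"\bos\.system\s*\(": "shell command built from program data",
    r"password\s*=\s*[\"']": "hard-coded credential",
}

def scan(source):
    """Return (line_number, reason) for every source line matching a rule.
    Like real static analyzers, this can report false positives."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings
```

Even this crude approach shows the trade-off named above: the scanner never tires and checks every line, but a human must still filter its findings and judge which ones are exploitable.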

Utilizing these techniques appropriately throughout the software development life cycle (SDLC)
to maximize security is the role of an application security team.

Mobile Application Security

The proportion of mobile devices providing open platform functionality is expected to continue
to increase in the future. The openness of these platforms offers significant opportunities to all
parts of the mobile ecosystem by delivering flexible program and service delivery
options that may be installed, removed, or refreshed multiple times in line with the user's needs
and requirements. However, with openness comes responsibility, and unrestricted access to
mobile resources and APIs by applications of unknown or untrusted origin could result in
damage to the user, the device, the network, or all of these, if not managed by suitable security
architectures and network precautions. Application security is provided in some form on most
open OS mobile devices through measures such as:

• Application whitelisting

• Ensuring transport layer security

• Strong authentication and authorization

• Encryption of data when written to memory

• Sandboxing of applications

• Granting application access on a per-API level

• Processes tied to a user ID


• Predefined interactions between the mobile application and the OS

• Requiring user input for privileged/elevated access

• Proper session handling

Security testing for Applications

Security testing techniques scour for vulnerabilities or security holes in applications. These
vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented
throughout the entire software development life cycle (SDLC) so that vulnerabilities may be
addressed in a timely and thorough manner. Unfortunately, testing is often conducted as an
afterthought at the end of the development cycle.

Vulnerability scanners, and more specifically web application scanners, otherwise known as
penetration testing tools (i.e., ethical hacking tools), have historically been used by security
organizations within corporations and by security consultants to automate the security testing of
HTTP requests/responses; however, this is not a substitute for actual source code review.
Physical code reviews of an application's source code can be accomplished manually or in an
automated fashion. Given the common size of individual programs (often 500,000 lines of code
or more), the human brain cannot execute the comprehensive data flow analysis needed to
completely check all circuitous paths of an application program for vulnerability points. The
human brain is better suited to filtering, interpreting, and reporting the outputs of commercially
available automated source code analysis tools than to tracing every possible path through a
compiled code base to find root-cause vulnerabilities.

The two types of automated tools associated with application vulnerability detection (application
vulnerability scanners) are Penetration Testing Tools (often categorized as Black Box Testing
Tools) and static code analysis tools (often categorized as White Box Testing Tools).

According to Gartner Research,[3] "...next-generation modern Web and mobile applications
requires a combination of SAST and DAST techniques, and new interactive application security
testing (IAST) approaches have emerged that combine static and dynamic techniques to improve
testing...". Because IAST combines SAST and DAST techniques, the results are highly
actionable, can be linked to the specific line of code, and can be recorded for replay later for
developers.

Banking and large e-commerce corporations have been the very early adopter customer profile
for these types of tools. It is commonly held within these firms that both Black Box testing and
White Box testing tools are needed in the pursuit of application security. Typically, Black
Box testing tools (meaning penetration testing tools) are ethical hacking tools used to attack the
application surface to expose vulnerabilities suspended within the source code hierarchy.
Penetration testing tools are executed on the already deployed application. White Box testing
tools (meaning source code analysis tools) are used by either the application security groups or
application development groups. Typically introduced into a company through the application
security organization, the White Box tools complement the Black Box testing tools in that they
give specific visibility into the specific root vulnerabilities within the source code in advance of
the source code being deployed. Vulnerabilities identified with White Box testing and Black Box
testing are typically in accordance with the OWASP taxonomy for software coding errors. White
Box testing vendors have recently introduced dynamic versions of their source code analysis
methods, which operate on deployed applications. Given that the White Box testing tools have
dynamic versions similar to the Black Box testing tools, both tools can be correlated in the same
software error detection paradigm, ensuring full application protection to the client company.

The advances in professional malware targeted at the Internet customers of online organizations
have driven a change in Web application design requirements since 2007. It is generally assumed
that a sizable percentage of Internet users will be compromised through malware and that any
data coming from their infected hosts may be tainted. Therefore, application security has begun
to manifest more advanced anti-fraud and heuristic detection systems in the back office, rather
than within the client-side or Web server code.

Virtualization

In computing, virtualization refers to the act of creating a virtual (rather than actual) version of
something, including virtual computer hardware platforms, storage devices, and computer
network resources.
Virtualization began in the 1960s, as a method of logically dividing the system resources
provided by mainframe computers between different applications. Since then, the meaning of the
term has broadened.

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that
acts like a real computer with an operating system. Software executed on these virtual machines
is separated from the underlying hardware resources. For example, a computer that is running
Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu
Linux operating system; Ubuntu-based software can be run on the virtual machine.[2][3]

In hardware virtualization, the host machine is the actual machine on which the virtualization
takes place, and the guest machine is the virtual machine. The words host and guest are used to
distinguish the software that runs on the physical machine from the software that runs on the
virtual machine. The software or firmware that creates a virtual machine on the host hardware is
called a hypervisor or Virtual Machine Manager.

Different types of hardware virtualization include:

• Full virtualization – almost complete simulation of the actual hardware to allow software,
which typically consists of a guest operating system, to run unmodified.

• Para virtualization – a hardware environment is not simulated; however, the guest
programs are executed in their own isolated domains, as if they were running on a separate
system. Guest programs need to be specifically modified to run in this environment.

• Hardware-assisted virtualization – a way of improving the overall efficiency of
virtualization. It involves CPUs that provide support for virtualization in hardware, and other
hardware components that help improve the performance of a guest environment.

Hardware virtualization can be viewed as part of an overall trend in enterprise IT that includes
autonomic computing, a scenario in which the IT environment will be able to manage itself
based on perceived activity, and utility computing, in which computer processing power is seen
as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize
administrative tasks while improving scalability and overall hardware-resource utilization. With
virtualization, several operating systems can be run in parallel on a single central processing unit
(CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which
involves running several programs on the same OS. Using virtualization, an enterprise can better
manage updates and rapid changes to the operating system and applications without disrupting
the user. "Ultimately, virtualization dramatically improves the efficiency and availability of
resources and applications in an organization. Instead of relying on the old model of 'one server,
one application' that leads to underutilized resources, virtual resources are dynamically applied
to meet business needs."

Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece
of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software)
imitates a particular piece of computer hardware or the entire computer. Furthermore, a
hypervisor is not the same as an emulator; both are computer programs that imitate hardware,
but their domains of use differ.

Monitoring & Auditing

How to Accomplish Them

The general goal of monitoring is to detect suspicious behavior by external users or employees,
or to detect malfunctions. An organization can do this directly, such as by monitoring for
specific events, or indirectly, such as by watching the state of a server over time and
investigating anomalous behavior.
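The direct and indirect approaches can be sketched side by side. In the sketch below, the watched event IDs, the event records, and the counter readings are all made up for illustration; they are not an authoritative list of significant events.

```python
from statistics import mean, stdev

# Direct monitoring: watch for specific, known-significant event IDs.
# These IDs are placeholders, not an authoritative list.
SUSPICIOUS_EVENT_IDS = {529, 539}

def direct_alerts(events):
    """Return the events whose ID is on the watch list."""
    return [e for e in events if e["id"] in SUSPICIOUS_EVENT_IDS]

# Indirect monitoring: compare a current reading against accumulated state.
def is_anomalous(history, current, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the mean."""
    m, s = mean(history), stdev(history)
    return s > 0 and abs(current - m) > threshold * s

events = [{"id": 529, "msg": "logon failure"}, {"id": 600, "msg": "process start"}]
print(len(direct_alerts(events)))              # 1
print(is_anomalous([50, 52, 49, 51, 50], 90))  # True
```

Direct monitoring fires as soon as a matching event appears; indirect monitoring only flags a reading once enough history has been collected to define "normal."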

Your security organization will have to determine its specific monitoring policy. Within this
policy, you will have to determine your organization's specific monitoring goals. Some questions
you will have to answer are:

• Are you going to baseline your server's performance? If so, what counters are you going
to collect and at what interval? How often are you going to take baselines?

• What objects are you going to audit? For each class of object, which accesses are you
going to audit? For each class of object, which instances are you going to audit?

• What other kinds of events are you going to audit?


• How are you going to manage your event logs? Are you going to use the tools Microsoft
provides, write your own, or purchase a third-party system?

• Which monitoring technologies are you going to use?

• How much are you going to monitor? Monitoring a system has a distinct impact on
system performance: the more you query or log a system's state, the more resources the system
must expend on these activities.
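As a sketch of the first question above (baselining a counter), the snippet below collects a fixed number of readings and summarizes them. The counter values are random stand-ins, not real performance data, and the collection interval is elided for brevity.

```python
import random

random.seed(1)  # deterministic for the example

def take_baseline(sample_fn, samples=30):
    """Collect `samples` readings of a counter and summarize them."""
    readings = [sample_fn() for _ in range(samples)]
    return {"min": min(readings), "max": max(readings),
            "avg": sum(readings) / len(readings)}

# Hypothetical counter: percent CPU time, simulated with random values here.
baseline = take_baseline(lambda: random.uniform(10, 30))
print(baseline["min"] <= baseline["avg"] <= baseline["max"])  # True
```

In practice the sampling function would query a real counter at a chosen interval, and baselines would be retaken periodically so that "normal" tracks genuine changes in workload.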

Monitoring Architecture

Built-in Mechanisms

Windows NT 4.0 and Windows 2000 have several built-in mechanisms that can be used for
monitoring:

• Event logging (auditing). The system itself and applications on the system can generate
records of significant events.

• Performance monitoring. The system and certain applications on the system report
performance information regularly.

• Simple Network Management Protocol (SNMP). The system supports SNMP 2.0, an
industry-standard, platform-independent method of monitoring networked devices.

• Windows Management Instrumentation (WMI). Windows 2000 includes WMI, which
can be installed on Windows NT 4.0. WMI is not a monitoring service, but rather a unified
management interface that integrates the various other monitoring mechanisms.
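The idea of a unified management layer over separate mechanisms can be illustrated with a toy dispatcher. The class, namespaces, and provider data below are invented for illustration and are not the WMI API.

```python
class ManagementInterface:
    """Toy analogue of a unified management layer: one query surface
    dispatching to several underlying monitoring providers."""

    def __init__(self):
        self._providers = {}

    def register(self, namespace, provider):
        self._providers[namespace] = provider

    def query(self, namespace):
        return self._providers[namespace]()

mgmt = ManagementInterface()
# Hypothetical providers standing in for event logging and performance data.
mgmt.register("eventlog", lambda: [{"id": 6005, "source": "EventLog"}])
mgmt.register("perf", lambda: {"cpu_percent": 12.5})

print(mgmt.query("perf")["cpu_percent"])  # 12.5
```

The point of such a layer is that a management tool queries one interface regardless of which underlying mechanism (event logs, performance counters, SNMP) supplies the data.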

Event Logging

Event log data is reported to the Event Log service by other parts of the system or by
applications running on the system. The Event Log service stores this data in .evt files in the
%systemroot%\system32\config directory. The built-in logs in Windows NT 4.0 and Windows
2000 Professional are the system log, the security log, and the application log. Windows 2000
Server installations may add logs for Domain Name System (DNS) and directory services. Any
application that needs to log events should register itself with the Event Log service.
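The register-before-logging behavior described above can be sketched with a toy service. The class below mirrors the built-in log names but is not the real Event Log API; "MyApp" and the event data are made up.

```python
class EventLogService:
    """Toy event log service: sources must register before reporting events."""

    def __init__(self):
        self._sources = set()
        self.logs = {"system": [], "security": [], "application": []}

    def register_source(self, source):
        self._sources.add(source)

    def report_event(self, log, source, event_id, data):
        if source not in self._sources:
            raise ValueError(f"source {source!r} is not registered")
        self.logs[log].append({"source": source, "id": event_id, "data": data})

svc = EventLogService()
svc.register_source("MyApp")  # "MyApp" is a made-up application name
svc.report_event("application", "MyApp", 1000, "service started")
print(len(svc.logs["application"]))  # 1
```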

Events stored in the event logs have a description field that displays text data describing the
event. Typically, this is the text of an error message. Usually, only information unique to a
particular instance of an event is stored with the event in the log; the text strings are stored in an
event message file (usually a .dll or .exe file) that is registered with the Event Log service. If,
when viewing an event, you see an error in the description field indicating that the event text is
not available, you will need to register the event message file with the Event Log service on the
computer at which you are trying to view the event.
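The way a description is assembled, with the per-event insertion strings substituted into a template held in the message file, can be sketched as follows. The %1, %2 placeholder style follows Windows message-file conventions; the template text itself is invented.

```python
import re

def render_description(template, insertion_strings):
    """Substitute %1, %2, ... placeholders with the per-event insertion
    strings stored in the log record, as an event viewer would."""
    def sub(match):
        return insertion_strings[int(match.group(1)) - 1]  # %1 is the first string
    return re.sub(r"%(\d+)", sub, template)

# Hypothetical template of the kind a registered message file would hold.
template = "The service %1 terminated with error %2."
print(render_description(template, ["Spooler", "0x5"]))
# The service Spooler terminated with error 0x5.
```

This split is why the log record alone is not enough to display the description: the record carries only the insertion strings, while the surrounding text lives in the registered message file.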

Event Monitoring

When a system is configured to collect many different kinds of events, the event logs tend to fill
up quickly. Conversely, if only a few kinds of events are logged, then important information
could be missed. Therefore, deciding what to log and what not to log is a complicated decision.

When deciding which event categories to log, it is important to consider the volume of events
that will be generated and the utility of capturing any specific information. It is best to decide
exactly what type of information you want to capture and then configure the logging mechanisms
to log the minimum necessary number of events to collect that information. A "shotgun"
approach—that is, turning logging up to the maximum setting—often results in a large amount of
data that is ignored or discarded because there is too much to analyze or store.
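Logging only what you plan to analyze can be expressed as a simple category filter. The policy table, category names, and event records below are hypothetical.

```python
# Hypothetical audit policy: log only the categories we plan to analyze.
AUDIT_POLICY = {"logon": True, "object_access": False, "policy_change": True}

def should_log(event):
    """Keep an event only if its category is explicitly enabled."""
    return AUDIT_POLICY.get(event["category"], False)

events = [
    {"category": "logon", "id": 528},
    {"category": "object_access", "id": 560},
    {"category": "policy_change", "id": 612},
]
kept = [e for e in events if should_log(e)]
print(len(kept))  # 2
```

Defaulting unknown categories to "do not log" keeps the policy explicit: every category that is captured is one somebody decided to capture.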

Windows NT 4.0 and Windows 2000 include a utility called Event Viewer that allows the event
logs to be viewed (in a filtered or non-filtered fashion), saved, and cleared. This utility also
allows an administrator to control whether or not new events are allowed to overwrite old events
when the log is full, and to limit the maximum size of the event logs. Neither Windows NT 4.0
nor Windows 2000 includes any utilities to centrally collect and manage event logs, although
various third-party vendors sell such utilities.
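The maximum-size and overwrite-when-full settings can be modeled with a small bounded log. This is a sketch of the policy only, not of the actual .evt file format.

```python
from collections import deque

class BoundedLog:
    """Log with a maximum size and an optional overwrite-oldest policy,
    mirroring the settings an administrator controls in Event Viewer."""

    def __init__(self, max_events, overwrite=True):
        self.max_events = max_events
        self.overwrite = overwrite
        self._events = deque()

    def append(self, event):
        if len(self._events) >= self.max_events:
            if not self.overwrite:
                raise RuntimeError("log full; clear it or raise its size")
            self._events.popleft()  # discard the oldest event
        self._events.append(event)

    def __len__(self):
        return len(self._events)

log = BoundedLog(max_events=3)
for i in range(5):
    log.append({"id": i})
print(len(log), log._events[0]["id"])  # 3 2
```

The trade-off mirrors the real setting: overwriting guarantees the log keeps accepting events but silently drops the oldest records, while refusing to overwrite preserves history at the cost of losing new events once the log fills.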

Event log management is one of the most important facets of monitoring an enterprise. At the
system level, a utility such as Event Viewer may be sufficient for all of your event-monitoring
needs. However, when monitoring many computers across an enterprise, the amount of data
collected will generally far exceed the ability of an administrator to manage, and so some central
collection mechanism is generally needed.
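Central collection ultimately means merging many per-host logs into one timeline. Assuming each host's log is already time-ordered (the hosts, timestamps, and messages below are made up), a minimal sketch is:

```python
import heapq

# Hypothetical per-host logs as (timestamp, host, message), each time-ordered.
host_a = [(1, "host-a", "logon"), (4, "host-a", "logoff")]
host_b = [(2, "host-b", "service start"), (3, "host-b", "disk error")]

# heapq.merge interleaves already-sorted streams into one central timeline.
central = list(heapq.merge(host_a, host_b))
print([t for t, _, _ in central])  # [1, 2, 3, 4]
```

A real collector adds transport, clock synchronization, and storage on top of this, but the core task is the same: producing a single ordered view of events from many machines.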
