Library Management System
1. INTRODUCTION
1.1.2 Modules:
User management
Book Inventory Module
Book Borrowing Module
Search Facility Management
Assign Book Module
Pending Overdue Books From Inventory
1.1.3 Users:
Admin
1.1.4 Keywords:
Generic Technology Keywords: Database, User Interface, Programming.
Specific Technology Keywords: ASP.NET 3.5, C#.NET, MS SQL Server 2005
1.2 Overview:
System Requirements:
Hardware:
Hardware consists of the physical components of the computer: input, storage,
processing, control and output devices. The software that manages the resources of the computer is
known as the operating system. A computer always includes an external storage system to store
data and programs. Popular storage media include floppy disks, hard disks and magnetic tape.
The hardware used in this project is:
Specification:
Processor: Intel Pentium or higher
Server Hardware:
Minimum 80 GB hard disk
RAM: 1 GB
Windows Server OS
Client Hardware:
Local Area Network
Software:
Software is a set of programs that performs a particular task. Software is an essential requirement of
computer systems. The kind of software used in this project is:
Specification:
Server Operating System: Windows XP or later
Server Software:
Windows XP with Service Pack 2
Microsoft Visual Studio 2008
ASP.NET with C#
IIS (Internet Information Services)
Microsoft SQL Server 2005
Provision for connectivity
Client Software:
Internet browser, version 6.0 or higher
2. FEASIBILITY ANALYSIS
Whatever we think of need not be feasible. It is wise to think about the feasibility of any
problem we undertake. Feasibility is the study of the impact that the development of a system
has on the organization. The impact can be either positive or negative. When the positives
dominate the negatives, the system is considered feasible. Here the feasibility study is
performed in two ways: technical feasibility and economic feasibility.
The pre-coded solutions that form the framework's Base Class Library cover a large
range of programming needs in areas including: user interface, data access, database
connectivity, cryptography, web application development, numeric algorithms, and network
communications. The class library is used by programmers who combine it with their own
code to produce applications.
Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. This runtime environment, which is also a part
of the .NET Framework, is known as the Common Language Runtime (CLR).
The .NET Framework is included with Windows Server 2003 and newer versions of Windows,
and can be installed on older versions of Windows.
3.1 Interoperability:
Because interaction between new and older applications is commonly required, the
.NET Framework provides means to access functionality that is implemented in programs
that execute outside the .NET environment. Access to COM components is provided in the
System.Runtime.InteropServices and System.EnterpriseServices namespaces of the
framework, and access to other functionality is provided using the P/Invoke feature.
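As a hedged illustration of the P/Invoke feature mentioned above, the following C# sketch
declares and calls the Win32 MessageBox function from user32.dll; the function and flags are
standard Win32, while the surrounding class is only an example.

using System;
using System.Runtime.InteropServices;

class NativeInteropDemo
{
    // P/Invoke declaration for the Win32 MessageBox function exported by user32.dll.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // Calls unmanaged code from managed code through the P/Invoke marshalling layer.
        MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke demo", 0);
    }
}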
3.6 Security:
The design is meant to address some of the vulnerabilities, such as buffer overflows,
that have been exploited by malicious software. Additionally, .NET provides a common
security model for all applications.
3.7 Portability:
The design of the .NET Framework allows for it to be platform agnostic, and thus be
cross platform compatible. That is, a program written to use the framework should run
without change on any type of system for which the framework is implemented.
In addition, Microsoft submits the specifications for the Common Language
Infrastructure (which includes the core class libraries, Common Type System, and the
Common Intermediate Language), and the C# language, and the C++/CLI language to both
ECMA and the ISO, making them available as open standards. This makes it possible for
third parties to create compatible implementations of the framework and its languages on
other platforms.
4. ARCHITECTURE
4.1 CLI:
The core aspects of the .NET framework lie within the Common Language
Infrastructure, or CLI. The purpose of the CLI is to provide a language-agnostic platform
for application development and execution, including functions for exception handling,
garbage collection, security, and interoperability. Microsoft's implementation of the CLI is
called the Common Language Runtime, or CLR.
4.2 Assemblies:
The intermediate CIL code is housed in .NET assemblies. As mandated by
specification, assemblies are stored in the Portable Executable (PE) format, common on the
Windows platform for all DLL and EXE files. The assembly consists of one or more files, but
one of these must contain the manifest, which has the metadata for the assembly.
The complete name of an assembly (not to be confused with the filename on disk)
contains its simple text name, version number, culture and public key token. The public key
token is a unique hash generated when the assembly is compiled; thus two assemblies with
the same public key token are guaranteed to be identical.
A private key, known only to the creator of the assembly, can also be specified; it is used
for strong naming and guarantees that a new version of the assembly comes from the same
author (this is required when adding an assembly to the Global Assembly Cache).
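The sketch below, using the standard System.Reflection API, shows how the parts of an
assembly name (simple name, version, culture and public key token) can be inspected at run
time; the printed values in the comments are examples only.

using System;
using System.Reflection;

class AssemblyNameDemo
{
    static void Main()
    {
        // The full (display) name contains the simple name, version, culture and public key token.
        Assembly asm = typeof(string).Assembly;   // mscorlib, a strongly named assembly
        Console.WriteLine(asm.FullName);
        // e.g. "mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"

        AssemblyName name = asm.GetName();
        Console.WriteLine(name.Version);
        Console.WriteLine(BitConverter.ToString(name.GetPublicKeyToken()));
    }
}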
4.3 Metadata:
All CIL is self-describing through .NET metadata. The CLR checks the metadata to
ensure that the correct method is called. Metadata is usually generated by language compilers
but developers can create their own metadata through custom attributes. Metadata also
contains information about the assembly. Metadata is also used to implement the reflective
programming capabilities of .NET Framework.
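A minimal, hypothetical example of both ideas: a custom attribute adds developer-defined
metadata to a class, and reflection reads that metadata back at run time. The attribute and
class names here are invented for illustration.

using System;

// A custom attribute adds developer-defined metadata to a type.
[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name { get; private set; }
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Library Team")]
class BookCatalogue { }

class MetadataDemo
{
    static void Main()
    {
        // Reflection reads the custom metadata back from the type at run time.
        object[] attrs = typeof(BookCatalogue).GetCustomAttributes(typeof(AuthorAttribute), false);
        foreach (AuthorAttribute a in attrs)
            Console.WriteLine("Author recorded in metadata: " + a.Name);
    }
}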
The Base Class Library (BCL) provides classes which encapsulate a number of common functions such as
file reading and writing, graphic rendering, database interaction, XML document
manipulation, and so forth. The BCL is much larger than other libraries, but has much more
functionality in one package.
4.5 Security:
.NET has its own security mechanism, with two general features: Code Access Security (CAS),
and validation and verification.
Code Access Security is based on evidence that is associated with a specific assembly.
Typically the evidence is the source of the assembly (whether it is installed on the local
machine, or has been downloaded from the intranet or Internet). Code Access Security uses
evidence to determine the permissions granted to the code. Other code can demand that
calling code is granted a specified permission.
The demand causes the CLR to perform a call stack walk: every assembly of each
method in the call stack is checked for the required permission and if any assembly is not
granted the permission then a security exception is thrown.
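A small sketch of a permission demand using the classic .NET Framework Code Access
Security API; the path used is hypothetical. If any caller in the stack lacks the demanded
FileIOPermission, the CLR throws a SecurityException.

using System;
using System.Security;
using System.Security.Permissions;

class CasDemandDemo
{
    static void Main()
    {
        try
        {
            // Demands read access to a (hypothetical) path; the CLR walks the call stack and
            // throws SecurityException if any assembly in the stack lacks the permission.
            FileIOPermission perm =
                new FileIOPermission(FileIOPermissionAccess.Read, @"C:\LibraryData");
            perm.Demand();
            Console.WriteLine("Permission granted.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("Permission denied by Code Access Security.");
        }
    }
}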
When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains
valid metadata and CIL, and it checks that the internal tables are correct. Verification is not
so exact. The verification mechanism checks to see if the code does anything
that is 'unsafe'. The algorithm used is quite conservative and hence sometimes code that is
'safe' is not verified. Unsafe code will only be executed if the assembly has the 'skip
verification' permission, which generally means code that is installed on the local machine.
The .NET Framework uses application domains as a mechanism for isolating code
running in a process. Application domains can be created, and code loaded into or unloaded
from them, independently of other application domains. This helps increase the fault tolerance
of the application, as faults or crashes in one application domain do not affect the rest of the
application.
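A brief sketch, using the classic AppDomain API of the .NET Framework, showing code being
run in a separate application domain and that domain then being unloaded independently of
the default domain.

using System;

class AppDomainDemo
{
    // A static method is used so the callback delegate can be marshalled across domains.
    static void PrintDomain()
    {
        Console.WriteLine("Running in: " + AppDomain.CurrentDomain.FriendlyName);
    }

    static void Main()
    {
        // Code executed via DoCallBack runs inside the new, isolated application domain.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        sandbox.DoCallBack(PrintDomain);
        // The domain, and everything loaded into it, is unloaded without stopping the process.
        AppDomain.Unload(sandbox);
    }
}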
Each .NET application has a set of roots, which are pointers maintained by the
CLR that point to objects on the managed heap (managed objects). These include references
to static objects, objects defined as local variables or method parameters currently in
scope, and objects referred to by CPU registers.
When the GC runs, it pauses the application, and for each object referred to in the
root set, it recursively enumerates all the objects reachable from the root objects and marks
them as reachable. It uses .NET metadata and reflection to discover the objects
encapsulated by an object, and then recursively walks them.
It then enumerates all the objects on the heap (which were initially allocated
contiguously) using reflection; all the objects not marked as reachable are garbage. This
is the mark phase. Since the memory held by garbage is not of any consequence, it is
considered free space. However, this leaves chunks of free space between objects which were
initially contiguous.
The objects are then compacted together, by using memcpy to copy them over to the
free space to make them contiguous again. Any reference to an object invalidated by moving
the object is updated to reflect the new location by the GC. The application is resumed after
the garbage collection is over.
The GC used by the .NET Framework is generational. Objects are assigned a
generation; newly created objects belong to Generation 0. Objects that survive a garbage
collection are tagged as Generation 1, and Generation 1 objects that survive another
collection become Generation 2.
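This behaviour can be observed with the GC class, as in the sketch below; the generation
numbers shown in the comments are typical rather than guaranteed, since collections may also
be triggered by the runtime itself.

using System;

class GcGenerationDemo
{
    static void Main()
    {
        object book = new object();
        Console.WriteLine(GC.GetGeneration(book));   // 0 - newly allocated

        GC.Collect();                                // object survives one collection
        Console.WriteLine(GC.GetGeneration(book));   // typically 1

        GC.Collect();                                // object survives another collection
        Console.WriteLine(GC.GetGeneration(book));   // typically 2
    }
}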
5.1 Architecture:
The architecture of Microsoft SQL Server is broadly divided into three components: the
Protocol Layer, the Relational Engine, and SQLOS.
The Protocol Layer implements the external interface to SQL Server.
SQLOS implements the basic services required by SQL Server, including thread
scheduling, memory management and I/O management.
The Relational Engine implements the relational database components, including support
for databases, tables, queries and stored procedures, as well as implementing the type system.
5.2 SQLOS:
SQLOS implements the basic services required by SQL Server, including thread
scheduling, memory management and I/O management.
Because the requirements of SQL Server are highly specialized, SQL Server implements
its own memory and thread management system, rather than using the generic ones already
implemented in the operating system. It divides all the operations it performs into a series of
Tasks - both background maintenance jobs as well as processing requests from clients.
Internally, a pool of worker threads is maintained, onto which the tasks are scheduled.
A task is associated with the thread until it is completed; only after its completion is
the thread freed and returned to the pool. If there are no free threads to assign the task to, the
task is temporarily blocked. Each worker thread is mapped onto either an operating system
thread or a fiber.
Fibers are user-mode threads that implement co-operative multitasking. Using fibers
means SQLOS does all the book-keeping of thread management itself, but it can then
optimize them for its particular use. SQLOS also includes synchronization primitives for
locking as well as monitoring for the worker threads to detect and recover from deadlocks.
SQLOS handles the memory requirements of SQL Server as well. Reducing disc I/O
is one of the primary goals of specialized memory management in SQL Server. It maintains a
buffer pool, which is used to cache data pages from the disc, and to satisfy the memory
requirements for the query processor, and for other internal data structures.
SQLOS monitors all the memory allocated from the buffer pool, ensuring that the
components return unused memory to the pool, and shuffles data out of the cache to make
room for newer data. For changes that are made to the data in the buffer, SQLOS writes the data
back to the disc lazily, that is, when the disc subsystem is either free or a significant number of
changes have been made to the cache, while still serving requests from the cache. For this, it
implements a Lazy Writer, which handles the task of writing the data back to persistent
storage.
The Relational engine implements the relational data store using the capabilities
provided by SQLOS, which is exposed to this layer via the private SQLOS API. It
implements the type system, to define the types of the data that can be stored in the tables, as
well as the different types of data items (such as tables, indexes, logs etc) that can be stored.
It includes the Storage Engine, which handles the way data is stored on persistent storage
devices and provides methods for fast access to the data.
The storage engine implements log-based transactions to ensure that any changes to the
data are ACID compliant. It also includes the query processor, which is the component that
retrieves data. SQL queries specify what data to retrieve, and the query processor optimizes
and translates the query into the sequence of operations needed to retrieve the data. The
operations are then performed by worker threads, which are scheduled for execution by
SQLOS.
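From a client application, such a log-backed transaction is used through ADO.NET as
sketched below; the connection string, table names and column names are hypothetical and
stand in for this project's actual schema.

using System;
using System.Data.SqlClient;

class TransactionDemo
{
    static void Main()
    {
        // Hypothetical connection string and tables; the point is that both statements are
        // committed (or rolled back) as one ACID unit, backed by the transaction log.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Library;Integrated Security=True"))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    new SqlCommand("UPDATE Books SET Available = 0 WHERE BookId = 1", conn, tx)
                        .ExecuteNonQuery();
                    new SqlCommand("INSERT INTO Borrowings (BookId, MemberId) VALUES (1, 42)", conn, tx)
                        .ExecuteNonQuery();
                    tx.Commit();   // changes become durable once the log records reach the disc
                }
                catch
                {
                    tx.Rollback(); // undo both statements if anything failed
                    throw;
                }
            }
        }
    }
}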
Data storage:
The main unit of data storage is a database, which is a collection of tables with typed
columns. SQL Server supports different data types, including primary types such as Integer,
Float, Decimal, Char (including character strings), Varchar (variable length character strings),
binary (for unstructured blobs of data), Text (for textual data) among others. It also allows
user-defined composite types (UDTs) to be defined and used.
SQL Server also makes server statistics available as virtual tables and views called
Dynamic Management Views or DMVs. A database can also contain other objects
including views, stored procedures, indexes and constraints, in addition to tables, along with
a transaction log.
An SQL Server database can contain a maximum of 2^31 objects, and can span
multiple OS-level files with a maximum file size of 2^20 TB. The data in the database are
stored in primary data files with an extension .mdf. Secondary data files, identified with an
.ndf extension, are used to store optional metadata. Log files are identified with the .ldf
extension.
A database object can either span all 8 pages in an extent ("uniform extent") or
share an extent with up to 7 more objects ("mixed extent"). A row in a database table cannot
span more than one page, so is limited to 8 KB in size. However, if the data exceeds 8 KB
and the row contains Varchar or Varbinary data, the data in those columns are moved to a
new page (or possibly a sequence of pages, called an Allocation Unit) and replaced with a pointer
to the data.
5.5 Buffer management:
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be
buffered in-memory, and the set of all pages currently buffered is called the buffer cache. The
amount of memory available to SQL Server decides how many pages will be cached in
memory.
The buffer cache is managed by the Buffer Manager. Either reading from or writing
to any page copies it to the buffer cache. Subsequent reads or writes are redirected to the in-
memory copy, rather than the on-disc version. The page is updated on the disc by the Buffer
Manager only if the in-memory cache has not been referenced for some time.
While writing pages back to disc, asynchronous I/O is used whereby the I/O operation
is done in a background thread so that other operations do not have to wait for the I/O
operation to complete. Each page is written along with its checksum. When
reading the page back, its checksum is computed again and matched with the stored version
to ensure the page has not been damaged or tampered with in the meantime.
Any changes made to any page will update the in-memory cache of the page;
simultaneously, all the operations performed are written to a log, along with the ID of the
transaction that the operation was a part of. Each log entry is identified by an increasing
Log Sequence Number (LSN) which ensures that no event overwrites another. SQL Server
ensures that the log will be written onto the disc before the actual page is written back. This
enables SQL Server to ensure integrity of the data, even if the system fails.
If both the log and the page were written before the failure, the entire data is on
persistent storage and integrity is ensured. If only the log was written (the page was either not
written or not written completely), then the actions can be read from the log and repeated to
restore integrity. If the log wasn't written, then integrity is also maintained, although the
database is in a state as if the transaction never occurred.
If it was only partially written, then the actions associated with the unfinished
transaction are discarded. Since the log was only partially written, the page is guaranteed to
have not been written, again ensuring data integrity. Removing the unfinished log entries
effectively undoes the transaction. SQL Server ensures consistency between the log and the
data every time an instance is restarted.
Shared locks are used when some data is being read - multiple users can read from
data locked with a shared lock, but not acquire an exclusive lock. The latter would have to
wait for all shared locks to be released. Locks can be applied on different levels of granularity
- on entire tables, pages, or even on a per-row basis on tables. For indexes, it can either be on
the entire index or on index leaves. The level of granularity to be used is defined on a per-
database basis by the database administrator. While a fine-grained locking system allows
more users to use the table or index simultaneously, it requires more resources, so it does not
automatically translate into a higher-performing solution.
SQL Server also includes two more lightweight mutual exclusion solutions - latches
and spinlocks - which are less robust than locks but are less resource intensive. SQL Server
uses them for DMVs and other resources that are usually not busy. SQL Server also monitors
all worker threads that acquire locks to ensure that they do not end up in deadlocks; if
they do, SQL Server takes remedial measures, which in many cases means killing one of the
threads entangled in the deadlock and rolling back the transaction it started.
To implement locking, SQL Server contains the Lock Manager. The Lock Manager
maintains an in-memory table that manages the database objects and locks, if any, on them
along with other metadata about the lock. Access to any shared object is mediated by the lock
manager, which either grants access to the resource or blocks it.
SQL Server also provides an optimistic concurrency control mechanism, which is
similar to the multi-version concurrency control used in other databases. The mechanism
allows a new version of a row to be created whenever the row is updated, as opposed to
overwriting the row; i.e., a row is additionally identified by the ID of the transaction that
created that version of the row.
Both the old as well as the new versions of the row are stored and maintained, though
the old versions are moved out of the database into a system database identified as Tempdb.
When a row is in the process of being updated, any other requests are not blocked (unlike
locking) but are executed on the older version of the row. If the other request is an update
statement, it will result in two different versions of the rows - both of them will be stored by
the database, identified by their respective transaction IDs.
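From the client side, this row-versioning mechanism is used through the Snapshot isolation
level, as in the hedged sketch below; it assumes a hypothetical Library database on which
ALLOW_SNAPSHOT_ISOLATION has already been enabled.

using System;
using System.Data;
using System.Data.SqlClient;

class SnapshotDemo
{
    static void Main()
    {
        // Hypothetical connection string; snapshot isolation must be enabled on the database.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Library;Integrated Security=True"))
        {
            conn.Open();
            // Readers in this transaction see the row versions that existed when it began,
            // instead of blocking on writers that hold exclusive locks.
            using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot))
            using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Books", conn, tx))
            {
                Console.WriteLine("Books visible to this snapshot: " + cmd.ExecuteScalar());
                tx.Commit();
            }
        }
    }
}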
The sequence of actions necessary to execute a query is called a query plan. There
might be multiple ways to process the same query. For example, for a query that contains a
join statement and a select statement, executing join on both the tables and then executing
select on the results would give the same result as selecting from each table and then
executing the join, but result in different execution plans. In such a case, SQL Server chooses
the plan that is expected to yield the results in the shortest possible time. This is called query
optimization and is performed by the query processor itself.
SQL Server includes a cost-based query optimizer which tries to optimize on the cost,
in terms of the resources it will take to execute the query. Given a query, the query optimizer
looks at the database schema, the database statistics and the system load at that time. It then
decides the sequence in which to access the tables referred to in the query, the sequence in
which to execute the operations, and the access method to be used to access the tables.
While concurrent execution is more costly in terms of total processor time, the fact that
the execution is split across different processors might mean it will execute faster. Once a
query plan is generated for a query, it is temporarily cached. For further invocations of the
same query, the cached plan is used. Unused plans are discarded after some time. SQL Server
also allows stored procedures to be defined. Stored procedures are parameterized T-SQL
queries, which are stored in the server itself (and not issued by the client application as is the
case with general queries). Stored procedures can accept values sent by the client as input
parameters, and send back results as output parameters.
They can also call other stored procedures, and can be selectively provided access to. Unlike
other queries, stored procedures have an associated name, which is used at runtime to resolve
into the actual queries. Also because the code need not be sent from the client every time (as
it can be accessed by name), it reduces network traffic and somewhat improves performance.
Execution plans for stored procedures are also cached as necessary.
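A hedged client-side sketch of calling a stored procedure with an input and an output
parameter through ADO.NET; the procedure name, parameters and connection string are
hypothetical.

using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static void Main()
    {
        // Hypothetical procedure "usp_GetOverdueCount" with one input and one output parameter.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Library;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand("usp_GetOverdueCount", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@MemberId", 42);
            SqlParameter count = cmd.Parameters.Add("@Count", SqlDbType.Int);
            count.Direction = ParameterDirection.Output;

            conn.Open();
            cmd.ExecuteNonQuery();
            Console.WriteLine("Overdue books: " + count.Value);
        }
    }
}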
5.9 Advantages
Supports name management and avoids duplication.
Stores organizational knowledge linking analysis, design and implementation.
[Use case diagram: Admin - Delete Book Details, Assign Book, Overdue Books]
SAMPLE CODE
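The original code listing is not reproduced here. The fragment below is only an illustrative
sketch, assuming a hypothetical Books table and connection string, of how the Book Inventory
module might insert a record using ADO.NET.

using System;
using System.Data.SqlClient;

public class BookInventory
{
    // Hypothetical connection string; in the real application this would come from Web.config.
    private const string ConnStr = "Data Source=.;Initial Catalog=Library;Integrated Security=True";

    // Adds a new book to the (hypothetical) Books table and returns the number of rows inserted.
    public int AddBook(string title, string author, string isbn)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Books (Title, Author, Isbn) VALUES (@Title, @Author, @Isbn)", conn))
        {
            cmd.Parameters.AddWithValue("@Title", title);
            cmd.Parameters.AddWithValue("@Author", author);
            cmd.Parameters.AddWithValue("@Isbn", isbn);
            conn.Open();
            return cmd.ExecuteNonQuery();
        }
    }
}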
8. SCREEN SHOTS
9. ENTITY RELATIONSHIP DIAGRAM
12.1 Entity-relationship Model:
The first stage of information system design uses entity-relationship models during the requirements
analysis to describe information needs or the type of information that is to be stored in a
database. The data modeling technique can be used to describe any ontology (i.e. an overview
and classifications of used terms and their relationships) for a certain universe of discourse
(i.e. area of interest). In the case of the design of an information system that is based on a
database, the conceptual data model is, at a later stage (usually called logical design), mapped
to a logical data model, such as the relational model; this in turn is mapped to a physical
model during physical design.
A relationship captures how two or more entities are related to one another. Relationships can
be thought of as verbs linking two or more nouns; for example, in this system the relationship
"borrows" links the entities Member and Book.
SOFTWARE TESTING
Is the menu bar displayed in the appropriate context? Are system-related features
included either in menus or tools? Do pull-down menu operations and toolbars work
properly? Are all menu functions and pull-down sub-functions properly listed? Is it possible
to invoke each menu function? Testing rests on the assumption that if all parts of the system
are correct, the goal will be successfully achieved. Inadequate testing or non-testing leads to
errors that may appear a few months later.
This creates two problems:
2. The effect of the system errors on files and records within the system
The purpose of system testing is to consider all the likely variations to which it will be
subjected and to push the system to its limits.
The testing process focuses on the logical internals of the software, ensuring that all
statements have been tested, and on the functional externals, conducting tests to uncover errors
and to ensure that defined inputs produce actual results that agree with the required results.
Program-level testing and module-level testing are integrated and carried out.
White-box testing, sometimes called "glass-box testing", is a test-case design method that uses
the control structure of the procedural design to derive test cases.
Using white-box testing methods, the following tests were made on the system:
a) All independent paths within a module have been exercised at least once. In our system,
all case structures were checked by ensuring that each case was selected and executed. The
bugs that were prevailing in some parts of the code were fixed.
b) All logical decisions were checked for the truth and falsity of their values.
Black-box testing focuses on the functional requirements of the software. Black-box
testing enables the software engineer to derive sets of input conditions that will fully
exercise all functional requirements for a program. Black-box testing is not an alternative to
white-box testing; rather, it is a complementary approach that is likely to uncover a different
class of errors than white-box methods, such as:
1) Interface errors
3) Performance errors
CONCLUSION
Our project is only a humble venture to satisfy the needs of a library. Several user-
friendly coding practices have also been adopted. This package shall prove to be a powerful
package in satisfying all the requirements of the organization.
The objective of software planning is to provide a framework that enables the manager to
make reasonable estimates within a limited time frame at the beginning of the software
project; these estimates should be updated regularly as the project progresses. Last but not
least, it is not the work alone that paved the way to success, but the ALMIGHTY.