Theoretical Background of The Project
1. INTRODUCTION
The Integration of Recruitment Process module is designed to collect multiple resumes
from job seekers. Its aim is to provide job providers with a large pool of candidate
data.
Software consultants and recruiting agencies conduct a test for the candidates applying
for a job. They then select the candidates who pass the test, arrange their interviews, fix
the dates, and complete the other formalities. This is then communicated to the
candidates. Interviews are conducted, and the candidates who pass are recruited as
employees in the firm.
The organization maintains data about the recruited employees: their personal details,
skills, experience, and so on. It also maintains information about the firms outsourcing
their recruitment, along with details of the vacancies in those firms and their
requirements.
The organization then matches the requirements of the firms with the skills of its
employees and assigns them to the concerned firms. It continues to maintain the details of
the assigned employees, because they are hired employees directly under the control of
the organization, and it is the organization that pays their salaries.
At present, the organization does all of this manually, writing down the data about
applicants, employees, and clients in paper records. This is a very tedious process,
requiring everything to be done by hand.
Enterprise Resource Information System addresses all of this by automating many of the
tasks of the organization. It relieves the employees of tedious work by letting them carry
out these jobs electronically.
The aim of the module is to design a dynamic search engine over the resume database,
which can provide data to both job seekers and job providers.
2. THEORETICAL BACKGROUND OF THE PROJECT
Web services
Web services provide a web-enabled user interface with tools that include various
hypertext markup language (HTML) controls and web controls. Web services also handle
various web protocols and security concerns.
When code is targeted for .NET, it is called managed code, which means that the code
automatically runs under a “contract of cooperation” with the common language runtime
(CLR). Managed code supplies the information necessary for the runtime to provide
services such as memory management, cross-language integration, code access security,
and the automatic lifetime control of objects.
Among the design goals of the .NET Framework are the following:
• To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.
• To build all communication on industry standards to ensure that code based on the
.NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and the
.NET Framework class library. The common language runtime is the foundation of the
.NET Framework. You can think of the runtime as an agent that manages code at
execution time, providing core services such as memory management, thread
management, and remoting, while also enforcing strict type safety and other forms of
code accuracy that ensure security and robustness. In fact, the concept of code
management is a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is known as
unmanaged code. The class library, the other main component of the .NET Framework, is
a comprehensive, object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such
as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code,
thereby creating a software environment that can exploit both managed and unmanaged
features. The .NET Framework not only provides several runtime hosts, but also
supports the development of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side environment
for managed code. ASP.NET works directly with the runtime to enable Web Forms
applications and XML Web services, both of which are discussed later in this topic.
Fig 1: .NET ARCHITECTURE
The above illustration shows the relationship of the common language runtime and the
class library to your applications and to the overall system. The illustration also shows
how managed code operates within a larger architecture.
The following sections describe the main components and features of the .NET
Framework in greater detail.
The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services. These features are intrinsic to
the managed code that runs on the common language runtime.
With regard to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not
be able to perform file-access operations, registry-access operations, or other sensitive
functions, even if it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but
cannot access their personal data, file system, or network. The security features of the
runtime thus enable legitimate Internet-deployed software to be exceptionally feature
rich.
The runtime also enforces code robustness by implementing a strict type- and code-
verification infrastructure called the common type system (CTS). The CTS ensures that
all managed code is self-describing. The various Microsoft and third-party language
compilers generate managed code that conforms to the CTS. This means that managed
code can consume other managed types and instances, while strictly enforcing type
fidelity and type safety.
In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages
references to objects, releasing them when they are no longer being used. This automatic
memory management resolves the two most common application errors, memory leaks
and invalid memory references.
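As a small illustration (a hypothetical sketch, not part of the original project code), a managed program simply stops referencing an object; there is no explicit free or delete:

```csharp
using System;

class Report
{
    public string Title;
    public Report(string title) { Title = title; }
}

class Program
{
    static void Main()
    {
        // The object is allocated on the managed heap.
        Report r = new Report("Monthly vacancies");
        Console.WriteLine(r.Title);

        // Dropping the last reference is all the programmer does;
        // the garbage collector reclaims the memory later, which is why
        // memory leaks and dangling references largely disappear.
        r = null;
        GC.Collect(); // normally unnecessary; shown only for illustration
    }
}
```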
The runtime also accelerates developer productivity. For example, programmers can
write applications in their development language of choice, yet take full advantage of the
runtime, the class library, and components written in other languages by other developers.
Any compiler vendor who chooses to target the runtime can do so. Language compilers
that target the .NET Framework make the features of the .NET Framework available to
existing code written in that language, greatly easing the migration process for existing
applications.
While the runtime is designed for the software of the future, it also supports software of
today and yesterday. Interoperability between managed and unmanaged code enables
developers to continue to use necessary COM components and DLLs.
Although the common language runtime provides many standard runtime services,
managed code is never interpreted. A
feature called just-in-time (JIT) compiling enables all managed code to run in the native
machine language of the system on which it is executing. Meanwhile, the memory
manager removes the possibilities of fragmented memory and increases memory locality-
of-reference to further increase performance.
The .NET Framework class library is a collection of reusable types that tightly integrate
with the common language runtime. The class library is object oriented, providing types
from which your own managed code can derive functionality. This not only makes the
.NET Framework types easy to use, but also reduces the time associated with learning
new features of the .NET Framework. In addition, third-party components can integrate
seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces that
you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
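For instance (an illustrative sketch; the class name is invented for this document), a custom collection that implements IEnumerable&lt;T&gt; can be used with foreach exactly like the built-in framework collections:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A minimal custom collection that plugs into the framework's
// collection interfaces; ResumeList is a hypothetical name.
class ResumeList : IEnumerable<string>
{
    private readonly List<string> items = new List<string>();

    public void Add(string resume) { items.Add(resume); }

    public IEnumerator<string> GetEnumerator() { return items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Demo
{
    static void Main()
    {
        var resumes = new ResumeList();
        resumes.Add("J. Smith - C# developer");

        // Because ResumeList implements IEnumerable<T>, foreach works
        // just as it does for the built-in collection classes.
        foreach (string r in resumes)
            Console.WriteLine(r);
    }
}
```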
As you would expect from an object-oriented class library, the .NET Framework types
enable you to accomplish a range of common programming tasks, including tasks such as
string management, data collection, database connectivity, and file access. In addition to
these common tasks, the class library includes types that support a variety of specialized
development scenarios. For example, you can use the .NET Framework to develop the
following types of applications and services:
•Console applications.
•Scripted or hosted applications.
•Windows GUI applications (Windows Forms).
•ASP.NET applications.
•XML Web services.
•Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that
vastly simplify Windows GUI development. If you write an ASP.NET Web Form
application, you can use the Web Forms classes.
Client applications typically employ windows, menus, buttons, and other GUI elements,
and they likely access local resources such as the file system and peripherals such as
printers.
Another kind of client application is the traditional ActiveX control (now replaced by the
managed Windows Forms control) deployed over the Internet as a Web page. This
application is much like other client applications: it is executed natively and has access
to local resources.
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the
features of the common language runtime and class library while gaining the performance
and scalability of the host server.
The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.
ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it
is a complete architecture for developing Web sites and Internet-distributed objects using
managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the
publishing mechanism for applications, and both have a collection of supporting classes
in the .NET Framework.
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services
are built on standards such as SOAP (a remote procedure-call protocol), XML (an
extensible data format), and WSDL (the Web Services Description Language). The .NET
Framework is built on these standards to promote interoperability with non-Microsoft
solutions.
ASP.NET is a programming framework built on the common language runtime that can
be used on a server to build powerful Web applications. ASP.NET offers several
important advantages over previous Web development models:
• Enhanced Performance. ASP.NET is compiled common language runtime code
running on the server. Unlike its interpreted predecessors, ASP.NET can take
advantage of early binding, just-in-time compilation, native optimization, and caching
services right out of the box. This amounts to dramatically better performance before
you ever write a line of code.
• World-Class Tool Support. The ASP.NET framework is complemented by a rich
toolbox and designer in the Visual Studio integrated development environment.
WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just
a few of the features this powerful tool provides.
• Simplicity. ASP.NET makes it easy to perform common tasks, from simple form
submission and client authentication to deployment and site configuration. For
example, the ASP.NET page framework allows you to build user interfaces that
cleanly separate application logic from presentation code and to handle events in a
simple, Visual Basic - like forms processing model. Additionally, the common
language runtime simplifies development, with managed code services such as
automatic reference counting and garbage collection.
• Scalability and Availability. ASP.NET has been designed with scalability in mind,
with features specifically tailored to improve performance in clustered and
multiprocessor environments. Further, processes are closely monitored and managed
by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process
can be created in its place, which helps keep your application constantly available to
handle requests.
• Customizability and Extensibility. ASP.NET delivers a well-factored architecture that
allows developers to plug in their code at the appropriate level. In fact, it is possible to
extend or replace any subcomponent of the ASP.NET runtime with your own
custom-written component. Implementing custom authentication or state services has
never been easier.
Language Support
The Microsoft .NET Platform currently offers built-in support for three languages: C#,
Visual Basic, and JScript.
What is ASP.NET Web Forms?
The ASP.NET Web Forms page framework is a scalable common language runtime
programming model that can be used on the server to dynamically generate Web pages.
• The ability to create and use reusable UI controls that can encapsulate common
functionality and thus reduce the amount of code that a page developer has to write.
• The ability for developers to cleanly structure their page logic in an orderly fashion
(not "spaghetti code").
• The ability for development tools to provide strong WYSIWYG design support for
pages (existing ASP code is opaque to tools).
ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be
deployed throughout an IIS virtual root directory tree. When a browser client requests
.aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET
Framework class. This class can then be used to dynamically process incoming requests.
(Note that the .aspx file is compiled only the first time it is accessed; the compiled type
instance is then reused across multiple requests).
An ASP.NET page can be created simply by taking an existing HTML file and changing
its file name extension to .aspx (no modification of code is required). For example, the
following sample demonstrates a simple HTML page that collects a user's name and
category preference and then performs a form postback to the originating page when a
button is clicked:
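The sample itself was not reproduced in this copy; a minimal page in that spirit (field and file names are illustrative) might look like this:

```aspx
<html>
  <body>
    <form action="intro.aspx" method="post">
      Name: <input name="Name" type="text" />
      Category: <select name="Category" size="1">
                  <option>psychology</option>
                  <option>business</option>
                  <option>popular_comp</option>
                </select>
      <input type="submit" value="Lookup" />
    </form>
  </body>
</html>
```

Renaming this file from .htm to .aspx is enough for ASP.NET to serve it; no code changes are required.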
ASP.NET provides syntax compatibility with existing ASP pages. This includes support
for <% %> code render blocks that can be intermixed with HTML content within an
.aspx file. These code blocks execute in a top-down manner at page render time.
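For example (a small illustrative fragment), a &lt;% %&gt; render block can emit repeated HTML at page render time:

```aspx
<html>
  <body>
    <% for (int i = 1; i <= 3; i++) { %>
      <font size="<%=i%>"> Welcome to ASP.NET </font> <br />
    <% } %>
  </body>
</html>
```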
Code-Behind Web Forms
ASP.NET supports two methods of authoring dynamic pages. The first is the method
shown in the preceding samples, where the page code is physically declared within the
originating .aspx file. An alternative approach, known as the code-behind method,
enables the page code to be more cleanly separated from the HTML content into an
entirely separate file.
In addition to (or instead of) using <% %> code blocks to program dynamic content,
ASP.NET page developers can use ASP.NET server controls to program Web pages.
Server controls are declared within an .aspx file using custom tags or intrinsic HTML
tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by
one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that
doesn't explicitly map to one of the controls is assigned the type of
System.Web.UI.HtmlControls.HtmlGenericControl.
Server controls automatically maintain any client-entered values between round trips to
the server. This control state is not stored on the server (it is instead stored within an
<input type="hidden"> form field that is round-tripped between requests). Note also
that no client-side script is required.
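A sketch of a page using server controls (control and handler names are illustrative): the runat="server" attribute turns the tags into server-side objects whose values survive the round trip:

```aspx
<html>
  <script language="C#" runat="server">
    void SubmitBtn_Click(Object sender, EventArgs e) {
        // Server-side event handler; Name.Value has been round-tripped
        // automatically via the hidden form field ASP.NET maintains.
        Message.InnerHtml = "Hello, " + Name.Value;
    }
  </script>
  <body>
    <form runat="server">
      Name: <input id="Name" type="text" runat="server" />
      <input type="submit" value="OK" OnServerClick="SubmitBtn_Click" runat="server" />
      <span id="Message" runat="server"></span>
    </form>
  </body>
</html>
```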
1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library
or cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls
built by third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel
HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server
controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server
data validation.
ADO.NET Overview
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the
web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and
also introduces new objects. Key new ADO.NET objects include the DataSet,
DataReader, and DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct
from any data stores. Because of that, the DataSet functions as a standalone entity. You
can think of the DataSet as an always disconnected recordset that knows nothing about
the source or destination of the data it contains. Inside a DataSet, much like in a
database, there are tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed
while the DataSet held the data. In the past, data processing has been primarily
connection-based. Now, in an effort to make multi-tiered apps more efficient, data
processing is turning to a message-based approach that revolves around chunks of
information. At the center of this approach is the DataAdapter, which provides a bridge
to retrieve and save data between a DataSet and its source data store. It accomplishes this
by means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as
collections and data types. No matter what the source of the data within the DataSet is, it
is manipulated through the same set of standard APIs exposed through the DataSet and
its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill, and
persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data
Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .NET
Framework provide four basic objects: the Command, Connection, DataReader and
DataAdapter. In the remaining sections of this document, we'll walk through each part
of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they
are, and how to program against them.
The following sections will introduce you to some objects that have evolved, and some
that are new. These objects are:
• Connections. For connecting to and managing transactions against a database.
• Commands. For issuing SQL commands against a database.
• DataReaders. For reading a forward-only stream of data records from a SQL
Server data source.
• DataSets. For storing, remoting and programming against flat data, XML data and
relational data.
• DataAdapters. For pushing data into a DataSet, and reconciling data against a
database.
When dealing with connections to a database, there are two different options: SQL Server
.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider.
The SQL Server .NET Data Provider is written to talk directly to Microsoft SQL Server,
while the OLE DB .NET Data Provider can talk to any OLE DB provider (it uses OLE
DB underneath).
Connections
Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SQLConnection. Commands travel over connections and resultsets are
returned in the form of streams which can be read by a DataReader object, or pushed
into a DataSet object.
Commands
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SQLCommand. A command can be a stored procedure
call, an UPDATE statement, or a statement that returns results. You can also use input and
output parameters, and return values as part of your command syntax. The example
below shows how to issue an INSERT statement against the Northwind database.
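The example promised above was lost in this copy of the document; a hedged reconstruction using the SQL Server provider (the connection string and table are illustrative) would be:

```csharp
using System;
using System.Data.SqlClient;

class InsertDemo
{
    static void Main()
    {
        // Connection string is illustrative; adjust for your server.
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)",
                conn);

            // Input parameters keep the statement typed and safe.
            cmd.Parameters.AddWithValue("@name", "Speedy Express 2");
            cmd.Parameters.AddWithValue("@phone", "(503) 555-0100");

            conn.Open();
            int rows = cmd.ExecuteNonQuery();
            Console.WriteLine("{0} row(s) inserted", rows);
        }
    }
}
```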
DataReaders
The DataReader object provides a forward-only, read-only stream of data returned
from the database. It holds its connection open while rows are being read, so it is best
suited to quickly retrieving results that do not need to be cached or updated.
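A typical forward-only read with a DataReader looks like this (query and connection string are illustrative):

```csharp
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);
            conn.Open();

            // The reader streams rows forward-only; the connection
            // remains open for the lifetime of the read.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
            }
        }
    }
}
```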
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns,
relationships, and constraints. However, though a DataSet can and does behave much
like a database, it is important to remember that DataSet objects do not interact directly
with databases, or other source data. This allows the developer to work with a
programming model that is always consistent, regardless of where the source data resides.
Data coming from a database, an XML file, from code, or user input can all be placed
into DataSet objects. Then, as changes are made to the DataSet they can be tracked and
verified before updating the source data. The GetChanges method of the DataSet object
actually creates a second DataSet that contains only the changes to the data. This DataSet
is then used by a DataAdapter (or other objects) to update the original data source.
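A sketch of that round trip, under the usual assumptions (illustrative connection string and table; SqlCommandBuilder generates the update commands here for brevity):

```csharp
using System.Data;
using System.Data.SqlClient;

class UpdateDemo
{
    static void Main()
    {
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true";
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerID, CompanyName FROM Customers", connStr);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Customers");          // disconnected copy of the data

        ds.Tables["Customers"].Rows[0]["CompanyName"] = "Renamed Ltd.";

        // GetChanges returns a second DataSet holding only modified rows;
        // the adapter then reconciles those changes with the database.
        DataSet changes = ds.GetChanges(DataRowState.Modified);
        adapter.Update(changes, "Customers");
        ds.AcceptChanges();
    }
}
```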
The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via Web services. In fact, a DataSet with a schema can actually be compiled
for type safety and statement completion.
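For example (a self-contained sketch; the table and file names are invented), a DataSet can round-trip its contents and schema as XML:

```csharp
using System;
using System.Data;

class XmlDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Resumes");
        DataTable t = ds.Tables.Add("Candidate");
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("J. Smith");

        // Produce XML data together with its XSD schema...
        ds.WriteXml("resumes.xml", XmlWriteMode.WriteSchema);

        // ...and consume it back into a fresh DataSet.
        DataSet copy = new DataSet();
        copy.ReadXml("resumes.xml");
        Console.WriteLine(copy.Tables["Candidate"].Rows.Count);
    }
}
```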
DataAdapters (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand
and SqlConnection) can increase overall performance when working with Microsoft
SQL Server databases. For other OLE DB-supported databases, you would use the
OleDbDataAdapter and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have
been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT
command; using the Update method calls the INSERT, UPDATE or DELETE command
for each changed row. You can explicitly set these commands in order to control the
statements used at runtime to resolve changes, including the use of stored procedures. For
ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon
a select statement. However, this run-time generation requires an extra round-trip to the
server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time
performance.
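To avoid the extra metadata round trip, the update commands can be supplied explicitly at design time (a sketch; the table and parameter names are illustrative):

```csharp
using System.Data;
using System.Data.SqlClient;

class ExplicitCommands
{
    static void Main()
    {
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true";
        SqlConnection conn = new SqlConnection(connStr);
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT ShipperID, Phone FROM Shippers", conn);

        // Explicit UpdateCommand: no run-time SQL generation is needed.
        SqlCommand update = new SqlCommand(
            "UPDATE Shippers SET Phone = @phone WHERE ShipperID = @id", conn);
        update.Parameters.Add("@phone", SqlDbType.NVarChar, 24, "Phone");
        update.Parameters.Add("@id", SqlDbType.Int, 4, "ShipperID");
        adapter.UpdateCommand = update;

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Shippers");
        ds.Tables["Shippers"].Rows[0]["Phone"] = "(503) 555-0199";
        adapter.Update(ds, "Shippers");
    }
}
```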
Overview of ADO.NET
ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as
well as data sources exposed via OLE DB and XML. Data-sharing consumer applications
can use ADO.NET to connect to these data sources and retrieve, manipulate, and update
the data they contain.
ADO.NET cleanly factors data access from data manipulation into discrete components
that can be used separately or in tandem. ADO.NET includes .NET data providers for
connecting to a database, executing commands, and retrieving results. Those results are
either processed directly, or placed in an ADO.NET DataSet object in order to be exposed
to the user in an ad-hoc manner, combined with data from multiple sources, or remoted
between tiers. The ADO.NET DataSet object can also be used independently of a .NET
data provider to manage data local to the application or sourced from XML.
The ADO.NET classes are found in System.Data.dll, and are integrated with the XML
classes found in System.XML.dll. When compiling code that uses the System.Data
namespace, reference both System.Data.dll and System.XML.dll.
ADO.NET provides functionality to developers writing managed code similar to the
functionality provided to native COM developers by ADO.
As application development has evolved, new applications have become loosely coupled
based on the Web application model. More and more of today's applications use XML to
encode data to be passed over network connections. Web applications use HTTP as the
fabric for communication between tiers, and therefore must explicitly handle maintaining
state between requests. This new model is very different from the connected, tightly
coupled style of programming that characterized the client/server era, where a connection
was held open for the duration of the program's lifetime and no special handling of state
was required.
ADO.NET was designed to meet the needs of this new programming model:
disconnected data architecture, tight integration with XML, common data representation
with the ability to combine data from multiple and varied data sources, and optimized
facilities for interacting with a database, all native to the .NET Framework.
The design for ADO.NET addresses many of the requirements of today's application
development model. At the same time, the programming model stays as similar as
possible to ADO, so current ADO developers do not have to start from the beginning in
learning a brand new data access technology. ADO.NET is an intrinsic part of the .NET
Framework without seeming completely foreign to the ADO programmer.
ADO.NET coexists with ADO. While most new .NET-based applications will be written
using ADO.NET, ADO remains available to the .NET programmer through .NET COM
interoperability services.
ADO.NET provides first-class support for the disconnected, n-tier programming
environment for which many new applications are written. The concept of working with a
disconnected set of data has become a focal point in the programming model. The
ADO.NET solution for n-tier programming is the DataSet.
XML and data access are intimately tied — XML is all about encoding data, and data
access is increasingly becoming all about XML. The .NET Framework does not just
support Web standards — it is built entirely on top of them.
XML support is built into ADO.NET at a very fundamental level. The XML classes in the
.NET Framework and ADO.NET are part of the same architecture — they integrate at
many different levels. You no longer have to choose between the data access set of
services and their XML counterparts; the ability to cross over from one to the other is
inherent in the design of both.
ADO.NET Architecture
ADO.NET and the XML classes in the .NET Framework converge in the DataSet object.
The DataSet can be populated with data from an XML source, whether it is a file or an
XML stream. The DataSet can be written as World Wide Web Consortium (W3C)
compliant XML, including its schema as XML Schema definition language (XSD)
schema, regardless of the source of the data in the DataSet. Because the native
serialization format of the DataSet is XML, it is an excellent medium for moving data
between tiers, making the DataSet an optimal choice for remoting data and schema
context to and from an XML Web service.
The DataSet can also be synchronized with an XMLDataDocument to provide relational
and hierarchical access to data in real time.
The design of the DataSet enables you to easily transport data to clients over the Web
using XML Web services, as well as allowing you to marshal data between .NET
components using .NET Remoting services. DataTable objects can also be used with
remoting services, but cannot be transported via an XML Web service.
This section provides an architectural overview of XML in the .NET Framework. The
design goals for the XML classes in the .NET Framework are:
• High-productivity.
• Standards-based.
• Multilingual support.
• Extensible.
• Pluggable architecture.
• Focused on performance, reliability, and scalability.
• Integration with ADO.NET.
The XML in the .NET Framework and ADO.NET provide a unified programming model
to access data represented as both XML data, text delimited by tags that structure the
data, and relational data, tables consisting of rows, columns, and constraints. The XML
classes read XML data from any data stream into DOM node trees, where data can be accessed
programmatically, while ADO.NET provides the means to access and manipulate
relational data within a DataSet object.
Of the classes that comprise XML in the .NET Framework and ADO.NET, the DataSet
represents a relational data source in ADO.NET, the XMLDocument implements the
DOM in XML, and the XMLDataDocument unifies ADO.NET and XML by
representing relational data from a DataSet and synchronizing it with the XML document
model.
The XML integration with relational data occurs with the XMLDataDocument, a class
derived from the XMLDocument. The XMLDataDocument maps XML to relational data
in an ADO.NET DataSet.
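A small sketch of that synchronization (the table and element names are invented for illustration):

```csharp
using System;
using System.Data;
using System.Xml;

class SyncDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("HR");
        DataTable t = ds.Tables.Add("Employee");
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("A. Jones");

        // The XmlDataDocument exposes the same data as a DOM tree,
        // kept in sync with the relational DataSet view.
        XmlDataDocument doc = new XmlDataDocument(ds);
        XmlNode name = doc.SelectSingleNode("//Employee/Name");
        Console.WriteLine(name.InnerText);  // the relational row seen as XML
    }
}
```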
A database typically has two components: the files holding the physical database and the
database management system (DBMS) software that applications use to access the data.
Among other things, the DBMS is responsible for:
• Ensuring that data is stored correctly, and that the rules defining data relationships are
not violated.
Database Objects
The database window has six object types.
a. Tables: Store data. Columns are fields and rows are records. Access is a relational
database in that it allows data to be stored in multiple tables, and data in one table
can be related to data in another table by a single field.
b. Queries: Used to select specific data from the tables. A query could be used to
select only students who are in a specific department, or only students who have
attended a certain year.
c. Forms: Used to enter and display data in the tables, making data entry easier and
customizing the way the data is viewed.
d. Reports: Used to output the data from tables and queries.
e. Macros: Automate tasks within the database.
f. Modules: Program code (Visual Basic).
Table Relationships
One-To-Many: Each value in the primary table field is unique. Each value in the related
table field matches a value in the primary table field, but can appear more than once in
the related table field. In a one-to-many relationship, the primary table field must be a
key field, but the related field does not have to be a primary key.
One-To-One: Each value in the primary table field is unique, and each value in the
related table field is also unique; both fields must be primary keys.
Table Design
1. Create tables in design view.
2. Toggle to the field properties by pressing F6.
Datasheet View
An efficient way to work with more than one record on the screen. Use the arrow-head
buttons on the status bar to scroll through records.
Form Design
The form design window displays labels and fields for the form. The form contains two
sections, the Form Header and the Detail section.
Report Design
Report Header
Includes data to be printed at the top of the first page of the report.
Page Header
Includes data to be printed at the top of each page of the report.
Detail
Displays the fields for the report.
Page Footer
At the bottom of each page of the report.
Report Footer
Data to be printed at the bottom of the last page of the report.
Creating a Database Table
Data Types:
Text: Maximum length of 255 characters.
Number: Numeric data used in calculations.
Date/Time: Dates and times.
Yes/No: Fields that will contain either a yes or no entry.
Memo: Lengthy free-form text entries.
More Tips
1. If there is not a unique field, don't let Access assign a primary key.
2. Indexed fields sort faster.
3. Open a file by shift-clicking on the filename if you need to get past a form.
4. Don't turn off the wizard button when using Tools.
Entering Records
1. Use the Tab key or arrow keys to advance from column to column; Shift+Tab to go
backwards.
Totals Query
1. Create a query with the fields you want to summarize
2. Click the Totals button on the toolbar
3. Click the Total row of the field you want to summarize
4. On the Total row choose the function
5. Click the Run button
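Choosing a function on the Total row corresponds to a SQL aggregate with GROUP BY. A minimal sketch, with invented table and field names and SQLite standing in for Access:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (client TEXT, vacancies INTEGER)")
con.executemany("INSERT INTO jobs VALUES (?, ?)",
                [("Acme", 3), ("Acme", 2), ("Globex", 4)])

# Choosing Sum on the Total row for 'vacancies', grouped by 'client',
# is equivalent to this aggregate query:
for client, total in con.execute(
        "SELECT client, SUM(vacancies) FROM jobs GROUP BY client ORDER BY client"):
    print(client, total)
# Acme 5
# Globex 4
```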
Append Query
1. Create a query that retrieves the records you want to copy into a table
2. Run the query to test it
3. Click the View button to return to Design View
4. Choose Append Query
5. Choose the name of the table that the records will be copied to
6. Append to: If the field names are different use the drop down list and choose the
appropriate field. (Data types must be the same)
7. Run Query
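An append query is the Access front end for SQL's INSERT INTO ... SELECT. The sketch below uses invented table and field names, with SQLite standing in for Access; note how a differing field name (name vs. candidate) is mapped in the column list, as in step 6.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE applicants (name TEXT, skill TEXT)")
con.execute("CREATE TABLE shortlist (candidate TEXT, skill TEXT)")
con.executemany("INSERT INTO applicants VALUES (?, ?)",
                [("Ravi", "Java"), ("Sita", "VB"), ("Arun", "Java")])

# Steps 1-2: a select query retrieving the records to copy.
# Steps 4-7: running it as an append query; the data types of the
# mapped columns must match.
con.execute("""INSERT INTO shortlist (candidate, skill)
               SELECT name, skill FROM applicants WHERE skill = 'Java'""")

print(con.execute("SELECT COUNT(*) FROM shortlist").fetchone()[0])  # 2
```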
Creating a Form
1. Forms Tab
2. Click on New
3. Choose Form Wizard
4. My favorite is: AutoForm: Columnar
5. Tables/Queries drop-down list, choose the first table or query
6. Choose the fields.
Requirement Gathering
3. REQUIREMENT GATHERING
• Use case diagrams are central to modeling the behavior of a system, a
subsystem, or a class.
• The class diagram is the most common diagram found in modeling object-oriented
systems
• It addresses the static design view of a system
• A sequence diagram is an interaction diagram that emphasizes the time ordering
of messages
• It is made up of objects and messages
The ERIS has UML diagrams representing the overall use case of the system, user-level
use cases, and sequence diagrams.
UML diagrams
[UML diagrams for the Enterprise Resource Information System appear here: the overall
system use case, user-level use cases, and sequence diagrams.]
Modeling Requirements through use cases
Job Providers
Brief Description:
This use case is used to add a new recruiter or modify the details of an existing recruiter
Flow Of Events:
Basic Flow
1. User enters the recruiter details
2. If the recruiter information does not exist, a new instance will be created;
otherwise the information about the recruiter will be modified
Alternative Flow
None
Special Requirements
None
Pre Conditions
Recruiter should log on to the system
Post Conditions
If the use case is successful a new recruiter is added or existing recruiter
information is modified or deleted.
Job Seeker
Brief Description:
This use case is used to add a new client, modify the details of existing clients
Flow Of Events:
Basic Flow
1. System asks the client information from the user
2. User enters the client details
3. If the client information does not exist a new instance will be created; otherwise the
information about the client will be modified
Alternative Flow
None
Special Requirements
None
Pre Conditions
Job Seeker should log on to the system.
Post Conditions
If the use case is successful a new client is added or an existing client's information is
modified.
POST JOB
Brief Description:
This use case describes the jobs available with the clients and their requirements, and is
used to modify and maintain the information pertaining to a job
Flow Of Events:
Basic Flow
1. The system asks the user to enter the job code and other related information about the
job (like the job title, description, etc.)
2. The user enters the required details about the job
3. The system then creates the new job order and stores all the details about it in the
database
Alternative Flow
None
Special Requirements
None
Pre Conditions
User should log on to the system
Post Conditions
If the use case is successful, the new job is added or the existing jobs information is
modified and stored in the database
UPDATE JOB ORDER
Brief Description:
This deals with updating job orders
Flow Of Events:
Basic Flow
1. The system asks the user for information regarding the updates
2. The user enters the required details
3. The System then updates the details of the existing job orders, if it already exists in
the system
Alternative Flow
None
Special Requirements
None
Pre Conditions
User should log on to the system
Post Conditions
If the use case is successful, the update is recorded in the database.
SKILL SET
Brief Description:
This is used to add skills of the applicants and also helps in its modification
Flow Of Events:
Basic Flow
1. The system asks the user to enter the skills of an applicant along with their id
and other related details
2. The user enters the required details about the applicant
3. The system then adds the skills of the applicant and stores all the details in the
database
Alternative Flow
None
Special Requirements
None
Pre Conditions
User should log on to the system
Post Conditions
If the use case is successful, the skills of the applicant are added and the information is
stored in the database
UPDATE SKILLS
Brief Description:
This use case deals with updating skills of existing applicants.
Flow Of Events:
Basic Flow
1. The system asks the user for information regarding the updates
2. The user enters the required details
3. The system then updates the skills if the applicant already exists in the system
Alternative Flow
None
Special Requirements
None
Pre Conditions
User should log on to the system
Post Conditions
If the use case is successful, the update is recorded in the database
SEARCH CANDIDATES
Brief Description:
This use case is used to search for or fetch job seekers according to their skills
Flow Of Events:
Basic Flow
1. The system asks the user to enter the skill for which it wants to perform the
search of applicants
2. The user enters the required skill
3. The system then displays all the applicants with the skill entered
Alternative Flow
None
Special Requirements
None
Pre Conditions
User should log on to the system
Post Conditions
If the use case is successful, applicants with the required skill are displayed else it
displays the message saying ‘no subjects of said skills’
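The flow above amounts to a filter over applicant records. A minimal sketch with in-memory data; the record layout and field names are invented for illustration:

```python
def search_candidates(applicants, skill):
    """Return the applicants possessing the given skill, else the not-found message."""
    matches = [a for a in applicants if skill in a["skills"]]
    if not matches:
        return "no subjects of said skills"
    return matches

applicants = [
    {"id": "JS01", "name": "Ravi", "skills": {"Java", "SQL"}},
    {"id": "JS02", "name": "Sita", "skills": {"VB", "ASP.NET"}},
]

print(search_candidates(applicants, "Java"))    # matches applicant JS01
print(search_candidates(applicants, "Python"))  # prints the not-found message
```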
Administrator
Brief Description:
This use case is used to generate reports based on the criteria mentioned by the user
Flow Of Events:
Basic Flow
1. The system asks the user to select any of the following reports based on their
authorizations
a. Client Reports
b. Application Reports
c. Job Order Reports
d. Skill set Reports
Analysis
4. ANALYSIS
Each user has a unique user ID; no two users share the same ID, even if their names are
the same.
The administrator of the consultancy can access the data of users. The administrator
maintains the database consisting of tables, which store the information about various
fields. The user can insert the information needed by the consultancy for the recruitment
process.
The applicant can communicate with the consultancy. This allows the user to post his
resume over the net, as well as to easily search for jobs in the various companies that are
providing vacancies; the user can also specify his skill set along with his
resume. This information is carried to the recruiter, who maintains the details of different
applicants in order to search for the vacancies.
The system comprises four modules:
1. Job seeker
2. Job provider
3. Job search/applicant search
4. Administrator
Job Seeker:
Applicants: These are the ones who apply for the jobs. The applicant can get registered
and log in with his id and password. He can give his details like personal data, skills and
resume. The job seeker has privileges to update his personal details, educational
details and skill details, and can also perform a job search.
Job Provider:
The job provider is the one who conducts the tests for the applicants and interviews the
selected candidates. The recruiter, once registered, can post a job and give details of the
job like job title, description and skills required. The job provider has privileges to update
his personal details, update job orders and perform a resume search.
Design
5. DESIGN
5.1. INPUT DESIGN:
The input design is the link that ties the information system into the real world of its
users. The inputs given by the user form the core of the processes, so the inputs have to
be carefully analyzed and care has to be taken to avoid incorrect inputs.
The guidelines that have been followed in designing the input forms are:
Minimize the number of inputs by collecting only required data and grouping similar or
related data.
5.2. OUTPUT DESIGN:
Computer output is the most important and direct source of information to the user.
Efficient, intelligible output design should provide the user with system relationships and
help in decision making.
The major form of output is hard copy from the printer; printouts are designed around
the output requirements of the user. The output devices to be considered depend on
factors such as compatibility of the device with the system, response time requirements,
expected print quality and number of copies needed.
The output design was carried out in consultation with the user. Reports which have
many columns are printed in compressed mode, which can be printed on laser printers,
and others in uncompressed mode so as to facilitate their use on line printers.
Table: compose
Field        Type
-----------  ------
to           TEXT
from         TEXT
subject      TEXT
attachfiles  TEXT
message      TEXT
Development
6. DEVELOPMENT
6.1. SAMPLE CODE
Inserting Data
Imports System.Data.OleDb

Partial Class jobseekersRegistration
    Inherits System.Web.UI.Page

    Dim con As New OleDbConnection("provider=microsoft.jet.oledb.4.0;data source=c:\erisdatabase.mdb")
    Dim cmd As New OleDbCommand

    Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
        con.Open()
        Try
            cmd.Connection = con
            ' A parameterized query avoids SQL injection from the form fields
            ' and handles the numeric last column without manual quoting.
            cmd.CommandText = "insert into jsr values (?, ?, ?, ?, ?, ?, ?, ?)"
            cmd.Parameters.Clear()
            cmd.Parameters.AddWithValue("@p1", TextBox1.Text)
            cmd.Parameters.AddWithValue("@p2", TextBox2.Text)
            cmd.Parameters.AddWithValue("@p3", TextBox3.Text)
            cmd.Parameters.AddWithValue("@p4", TextBox4.Text)
            cmd.Parameters.AddWithValue("@p5", TextBox5.Text)
            cmd.Parameters.AddWithValue("@p6", DropDownList1.SelectedItem.Text)
            cmd.Parameters.AddWithValue("@p7", TextBox6.Text)
            cmd.Parameters.AddWithValue("@p8", CInt(TextBox7.Text))
            cmd.ExecuteNonQuery()
        Finally
            con.Close()   ' close the connection before redirecting ends the response
        End Try
        Response.Redirect("loginsucc.aspx")
    End Sub
End Class
Imports System.Data.OleDb

Partial Class searchjobs
    Inherits System.Web.UI.Page

    Dim con As New OleDbConnection("provider=microsoft.jet.oledb.4.0;data source=c:\erisdatabase.mdb")
    Dim cmd As New OleDbCommand()
End Class
Testing
7. TESTING
TEST PLAN
Software testing is a critical element of software quality assurance and represents the
ultimate review of specification, design and coding.
TESTING OBJECTIVES:
The main objective of testing is to uncover a host of errors, systematically and with
minimum time and effort.
• A good test case is one that has a high probability of finding an error, if it exists.
But there is one thing that testing cannot do (just to quote a very famous sentence):
"Testing cannot show the absence of defects, it can only show that software defects are
present."
As the test results are gathered and evaluated they begin to give a qualitative indication of
the reliability of the software. If severe errors are detected, the overall quality of the
software is a natural suspect. If, on the other hand, all the errors that are encountered
are easily corrected, then one of two conclusions can be made:
• The software more or less conforms to the quality and reliability standards.
• The tests are inadequate to uncover severe errors.
For the purposes of the current project we are assuming that errors that are easily
corrected point to the latter possibility, since repeating the entire testing routine can be
very time consuming. What we propose to do instead is to get the software tested by one
or more persons who are not part of the development team but are well versed with the
subject and with the concept of software testing (alpha testing). If they can detect no
serious errors, it will enable us to state with more confidence that the software conforms
to the requirements.
TESTING STRATEGY:
A testing strategy is a road map describing how to conduct tests. Our testing strategy is
flexible enough to accommodate the customization that may be necessary in the course
of the development process. For instance, if during coding we find that a change in
design helps (e.g. a denormalized table makes a certain query easier to process), we
maintain a change log and refer to it at the appropriate time during testing. Software can
be tested in one of the following ways:
• Knowing the specific functions that the software is expected to perform, tests can be
conducted to demonstrate that each function is fully operational.
• Knowing the internal workings of the product, tests can be conducted to show that the
internal operations of the system perform according to the specifications and all
internal components have been adequately exercised.
The first approach is what is known as Black box testing and the second is called White
box testing. We will be using a mixed approach, more popularly known as sandwich
testing. We apply white box testing techniques to ascertain the functionalities top-down
and then we use black box testing to demonstrate that everything runs as expected.
Unit testing focuses verification effort on the smallest unit of software, i.e. the module.
Using the detailed design and the process specifications, testing is done to uncover errors
within the boundary of the module. All modules must pass the unit test before they are
integrated.
Integration testing is a systematic technique for constructing the program structure while
conducting tests at the same time to uncover errors associated with interfacing.
At the culmination of integration testing, when the software is complete as a package and
the interfacing errors have been uncovered and fixed, the final tests - validation testing -
may begin. Validation tests succeed when the software performs exactly in the manner
reasonably expected by the user.
Software validation is done by a series of black box tests that demonstrate conformance
with the requirements. Alpha and beta testing fall in this category. We will not do beta
testing, but alpha testing will certainly be done.
The techniques that are used in deriving the test cases are explained below.
Condition testing is a test case design method that exercises the logical conditions,
simple or compound, contained in a program module. The condition testing method
focuses on testing each condition in the program.
• We have used a special condition testing method called domain testing, where a
condition of the form E1 <relational operator> E2 is tested for three cases: E1 less
than E2, E1 equal to E2, and E1 greater than E2.
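As an example, a condition of the form E1 >= E2 gets one test case for each of the three relational outcomes. The function and values below are invented for illustration:

```python
def is_eligible(experience, required):
    # Condition under test, of the form E1 >= E2
    return experience >= required

# Domain testing: exercise E1 < E2, E1 == E2, and E1 > E2.
cases = [(2, 3, False),  # E1 less than E2
         (3, 3, True),   # E1 equal to E2
         (4, 3, True)]   # E1 greater than E2
for e1, e2, expected in cases:
    assert is_eligible(e1, e2) == expected
print("all three domain cases pass")
```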
Boundary value analysis leads to a selection of test cases that exercise the boundary
conditions or bounding values. It has been observed that a larger number of errors tend to
appear at the boundaries of the input domain than in its center. The guidelines for
boundary value analysis are:
• If the input condition has a low and a high range, then tests should be done at the
low and high boundaries. Values just above and just below these extremes should be
tested.
• Apply the same principle to output conditions. E.g.: test cases should be designed to
exercise the minimum and maximum output values produced by the program.
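For a range-bounded input, say years of experience limited to 0-40 (an illustrative range, not one taken from the system), the boundary test values are the two bounds and the values just outside them:

```python
LOW, HIGH = 0, 40  # illustrative input range

def in_range(x):
    return LOW <= x <= HIGH

# Boundary value analysis: test at the boundaries and just outside them.
boundary_cases = {
    LOW - 1:  False,  # just below the low boundary
    LOW:      True,   # at the low boundary
    HIGH:     True,   # at the high boundary
    HIGH + 1: False,  # just above the high boundary
}
for value, expected in boundary_cases.items():
    assert in_range(value) == expected
print("boundary cases pass")
```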
EQUIVALENCE PARTITIONING:
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived. A typical test case
uncovers a class of errors that might otherwise require many more test cases before the
error is observed.
Equivalence classes for input determine the valid and invalid inputs for the program.
Equivalence class test cases are generated using the following guidelines:
• If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
• If an input condition specifies a specific value, one valid and two invalid equivalence
classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid equivalence class
are defined.
Test cases should be selected so that the largest number of attributes of an equivalence
class are exercised at once.
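The range guideline can be illustrated on a single input. Assuming, for the sake of the example, an experience field restricted to 0-40 years, there is one valid class inside the range and two invalid classes, below and above it; one representative test case per class suffices:

```python
def classify_experience(years):
    """Partition the input domain of an experience field (0-40, illustrative)."""
    if years < 0:
        return "invalid: below range"
    if years > 40:
        return "invalid: above range"
    return "valid"

# One representative test case per equivalence class.
assert classify_experience(-5) == "invalid: below range"  # invalid class 1
assert classify_experience(10) == "valid"                 # valid class
assert classify_experience(55) == "invalid: above range"  # invalid class 2
print("one test per equivalence class passes")
```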
Sample Screens
8. SAMPLE SCREENS
[Screen shots of the following pages appear here:]
HOME PAGE
SEARCH PAGE
JOBSEEKERS REGISTRATION
POST RESUME
ADMINISTRATOR PAGE
UPDATE JOB
SEARCH JOB
CONTACT PAGE
JOBSEEKERS DETAILS
RESUMES POSTED
Conclusion
9.CONCLUSION
The task given to us was performed keeping in mind the goals we had to achieve, chief
among them being user-friendliness. This project, ERIS, is mainly useful for software
consultants.
The ERIS is expected to function as per the requirements and we expect that it will
satisfy the users. Working on such a project in the organization provided the professional
attitude and fuel needed to prepare for further jobs. The encounter with real life
professionals will surely pave the way towards a more successful, illustrious career.
Future Scope
10. FUTURE SCOPE
The current system is capable of handling the situation even when the organization grows
and establishes branches at various places in the world.
The database maintained here grows as time passes, so it has to be cleaned up at
frequent intervals in order to save memory; no special provisions for this are designed in
the system. If the user wants an option of backing up data at frequent intervals, it could
be added to the present system.
The information about the employees working in the organization, like payroll
(remunerations) and details of their work, is not incorporated in the system, as it does not
fall under the scope of this project. If payroll management is to be included, it can be
easily coupled with the present modules.
The system should be reviewed frequently for updating; as we know, maintenance is
more difficult than developing a system, so there should be proper feedback about the
usefulness of the system, otherwise the basic purpose of automation would not be
achieved.