C#.Net, ASP.Net, SQL Server FAQ's
• Framework
• OOPS
• C# Language features
• Access specifiers
• Constructor
• ADO.NET
• Asp.Net
• WebService & Remoting
• COM
• XML
• IIS
• Controls
• Programming
(Framework)
http://msdn.microsoft.com/netframework/technologyinfo/Overview/whatsnew.aspx
Note the hierarchy of code groups - the top of the hierarchy is the most general ('All
code'), which is then sub-divided into several groups, each of which in turn can be sub-
divided. Also note that (somewhat counter-intuitively) a sub-group can be associated with
a more permissive permission set than its parent.
How do I define my own code group?
Use caspol. For example, suppose you trust code from www.mydomain.com and you
want it to have full access to your system, but you want to keep the default restrictions for
all other internet sites. To achieve this, you would add a new code group as a sub-group
of the 'Zone - Internet' group, like this:
caspol -ag 1.3 -site www.mydomain.com FullTrust
Now if you run caspol -lg you will see that the new group has been added as group 1.3.1:
...
1.3. Zone - Internet: Internet
1.3.1. Site - www.mydomain.com: FullTrust
...
Note that the numeric label (1.3.1) is just a caspol invention to make the code groups
easy to manipulate from the command-line. The underlying runtime never sees it.
How do I change the permission set for a code group?
Use caspol. If you are the machine administrator, you can operate at the 'machine' level -
which means not only that the changes you make become the default for the machine,
but also that users cannot change the permissions to be more permissive. If you are a
normal (non-admin) user you can still modify the permissions, but only to make them
more restrictive. For example, to allow intranet code to do what it likes you might do this:
caspol -cg 1.2 FullTrust
Note that because this is more permissive than the default policy (on a standard system),
you should only do this at the machine level - doing it at the user level will have no effect.
Can I create my own permission set?
Yes. Use caspol -ap, specifying an XML file containing the permissions in the permission
set. To save you some time, here is a sample file corresponding to the 'Everything'
permission set - just edit to suit your needs. When you have edited the sample, add it to
the range of available permission sets like this:
caspol -ap samplepermset.xml
Then, to apply the permission set to a code group, do something like this:
caspol -cg 1.3 SamplePermSet (By default, 1.3 is the 'Internet' code group)
I'm having some trouble with CAS. How can I diagnose my problem?
Caspol has a couple of options that might help. First, you can ask caspol to tell you what
code group an assembly belongs to, using caspol -rsg. Similarly, you can ask what
permissions are being applied to a particular assembly using caspol -rsp.
I can't be bothered with all this CAS stuff. Can I turn it off?
Yes, as long as you are an administrator. Just run:
caspol -s off
http://www.codeproject.com/dotnet/UB_CAS_NET.asp
40. Which namespace is the base class for .net Class library?
Ans: System.Object (defined in the System namespace)
41. What are object pooling and connection pooling and difference? Where do we set
the Min and Max Pool size for connection pooling?
Object pooling is a COM+ service that enables you to reduce the overhead of creating
each object from scratch. When an object is activated, it is pulled from the pool. When
the object is deactivated, it is placed back into the pool to await the next request. You can
configure object pooling by applying the ObjectPoolingAttribute attribute to a class
that derives from the System.EnterpriseServices.ServicedComponent class.
Object pooling lets you control the number of connections you use, as opposed to
connection pooling, where you control the maximum number reached.
Apart from that, COM+ object pooling is nearly identical to what is used in .NET
Framework managed SQL Client connection pooling. For example, creation happens on a
different thread, and minimum and maximum pool sizes are enforced.
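A minimal sketch of a pooled serviced component as described above; the class name and
pool sizes are illustrative, and a reference to System.EnterpriseServices.dll is assumed:
using System.EnterpriseServices;

[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 10, CreationTimeout = 20000)]
public class AccountComponent : ServicedComponent
{
    // Called by COM+ to decide whether this instance may return to the pool.
    protected override bool CanBePooled()
    {
        return true;
    }
}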
24. What is serialization in .NET? What are the ways to control serialization?
Serialization is the process of converting an object into a stream of bytes.
Deserialization is the opposite process of creating an object from a stream of
bytes. Serialization/Deserialization is mostly used to transport objects (e.g. during
remoting), or to persist objects (e.g. to a file or database).Serialization can be
defined as the process of storing the state of an object to a storage medium.
During this process, the public and private fields of the object and the name of
the class, including the assembly containing the class, are converted to a stream
of bytes, which is then written to a data stream. When the object is subsequently
deserialized, an exact clone of the original object is created.
Binary serialization preserves type fidelity, which is useful for preserving the state
of an object between different invocations of an application. For example, you
can share an object between different applications by serializing it to the
clipboard. You can serialize an object to a stream, disk, memory, over the
network, and so forth. Remoting uses serialization to pass objects "by value"
from one computer or application domain to another.
XML serialization serializes only public properties and fields and does not
preserve type fidelity. This is useful when you want to provide or consume data
without restricting the application that uses the data. Because XML is an open
standard, it is an attractive choice for sharing data across the Web. SOAP is an
open standard, which makes it an attractive choice.
There are two separate mechanisms provided by the .NET class library - XmlSerializer
and SoapFormatter/BinaryFormatter. Microsoft uses XmlSerializer for Web Services, and
uses SoapFormatter/BinaryFormatter for remoting. Both are available for use in your own
code.
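A minimal binary serialization sketch; the Person class and the file name are illustrative:
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]                       // marks the type as serializable
public class Person
{
    public string Name;
    private int age = 30;            // private fields are included by binary serialization
    public int Age { get { return age; } }
}

class BinarySerializationDemo
{
    static void Main()
    {
        Person p = new Person();
        p.Name = "Sam";

        BinaryFormatter formatter = new BinaryFormatter();

        // Serialize the object graph to a file.
        using (FileStream fs = new FileStream("person.bin", FileMode.Create))
        {
            formatter.Serialize(fs, p);
        }

        // Deserialize it back into an exact clone.
        using (FileStream fs = new FileStream("person.bin", FileMode.Open))
        {
            Person clone = (Person)formatter.Deserialize(fs);
            Console.WriteLine(clone.Name + " " + clone.Age);
        }
    }
}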
Why do I get errors when I try to serialize a Hashtable?
XmlSerializer will refuse to serialize instances of any class that implements IDictionary,
e.g. Hashtable. SoapFormatter and BinaryFormatter do not have this restriction.
Assemblies can be static or dynamic. Static assemblies can include .NET Framework
types (interfaces and classes), as well as resources for the assembly (bitmaps, JPEG
files, resource files, and so on). Static assemblies are stored on disk in PE files. You can
also use the .NET Framework to create dynamic assemblies, which are run directly from
memory and are not saved to disk before execution. You can save dynamic assemblies
to disk after they have executed.
There are several ways to create assemblies. You can use development tools, such as
Visual Studio .NET, that you have used in the past to create .dll or .exe files. You can use
tools provided in the .NET Framework SDK to create assemblies with modules created in
other development environments. You can also use common language runtime APIs,
such as Reflection.Emit, to create dynamic assemblies.
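A minimal sketch of loading an assembly at run time and invoking a member through
reflection; the assembly path "MyLibrary.dll" and the type name are placeholders:
using System;
using System.Reflection;

class AssemblyLoadDemo
{
    static void Main()
    {
        // Load an assembly that is not referenced at compile time.
        Assembly asm = Assembly.LoadFrom("MyLibrary.dll");

        // Create an instance of a type from that assembly and invoke a method.
        Type t = asm.GetType("MyLibrary.Calculator");      // hypothetical type name
        object calc = Activator.CreateInstance(t);
        object result = t.InvokeMember("Add",
            BindingFlags.InvokeMethod, null, calc, new object[] { 2, 3 });
        Console.WriteLine(result);
    }
}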
42. What are Satellite Assemblies? How will you create them? How will you get
the different language strings?
Satellite assemblies are often used to deploy language-specific resources for an
application. These language-specific assemblies work in side-by-side execution
because the application has a separate product ID for each language and installs
satellite assemblies in a language-specific subdirectory for each language. When
uninstalling, the application removes only the satellite assemblies associated with
a given language and .NET Framework version. No core .NET Framework files
are removed unless the last language for that .NET Framework version is being
removed.
(For example, English and Japanese editions of the .NET Framework version 1.1
share the same core files. The Japanese .NET Framework version 1.1 adds
satellite assemblies with localized resources in a \ja subdirectory. An application
that supports the .NET Framework version 1.1, regardless of its language,
always uses the same core runtime files.)
http://www.ondotnet.com/lpt/a/2637
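To answer the last part of question 42 (getting the language-specific strings), a
ResourceManager is typically used; it probes the satellite assembly for the current UI
culture. A minimal sketch, where the resource base name "MyApp.Strings", the key
"Greeting" and the culture are illustrative placeholders:
using System.Globalization;
using System.Resources;
using System.Threading;

class ResourceDemo
{
    static void Main()
    {
        // The satellite assembly for the current UI culture is probed automatically.
        ResourceManager rm = new ResourceManager("MyApp.Strings",
                                                 typeof(ResourceDemo).Assembly);

        Thread.CurrentThread.CurrentUICulture = new CultureInfo("ja-JP");
        string greeting = rm.GetString("Greeting");   // hypothetical resource key
        System.Console.WriteLine(greeting);
    }
}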
**
43. How will you load an assembly dynamically? How will you create assemblies at run
time?
**
44. What is Assembly manifest? what all details the assembly manifest will
contain?
Every assembly, whether static or dynamic, contains a collection of data that
describes how the elements in the assembly relate to each other. The assembly
manifest contains this assembly metadata. An assembly manifest contains all the
metadata needed to specify the assembly's version requirements and security
identity, and all metadata needed to define the scope of the assembly and
resolve references to resources and classes. The assembly manifest can be
stored in either a PE file (an .exe or .dll) with Microsoft intermediate language
(MSIL) code or in a standalone PE file that contains only assembly manifest
information.
It contains Assembly name, Version number, Culture, Strong name information,
List of all files in the assembly, Type reference information, Information on
referenced assemblies.
45. Difference between assembly manifest & metadata?
assembly manifest - An integral part of every assembly that renders the
assembly self-describing. The assembly manifest contains the assembly's
metadata. The manifest establishes the assembly identity, specifies the files that
make up the assembly implementation, specifies the types and resources that
make up the assembly, itemizes the compile-time dependencies on other
assemblies, and specifies the set of permissions required for the assembly to run
properly. This information is used at run time to resolve references, enforce
version binding policy, and validate the integrity of loaded assemblies. The self-
describing nature of assemblies also helps make zero-impact install and
XCOPY deployment feasible.
metadata - Information that describes every element managed by the common
language runtime: an assembly, loadable file, type, method, and so on. This can
include information required for debugging and garbage collection, as well as
security attributes, marshaling data, extended class and member definitions,
version binding, and other information required by the runtime.
46. What is Global Assembly Cache (GAC) and what is the purpose of it? (How
to make an assembly public? Steps) How can more than one version of an
assembly be kept in the same place?
Each computer where the common language runtime is installed has a machine-
wide code cache called the global assembly cache. The global assembly cache
stores assemblies specifically designated to be shared by several applications on
the computer. You should share assemblies by installing them into the global
assembly cache only when you need to.
Steps
- Create a strong-name key pair using the sn.exe tool,
  e.g. sn -k keyPair.snk
- Within AssemblyInfo.cs, add the generated key file name,
  e.g. [assembly: AssemblyKeyFile("keyPair.snk")]
- Recompile the project, then install it into the GAC either by dragging and dropping it
  into the assembly folder (C:\WINDOWS\assembly or C:\WINNT\assembly, handled by the
  shfusion.dll shell extension)
  or by running
  gacutil -i abc.dll
47. If I have more than one version of an assembly, how will I use the old
version (how/where do I specify the version number?) in my application?
**
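One common way to answer this (not spelled out in the original) is to redirect the binding
in the application's configuration file. A minimal sketch; the assembly name, public key
token and version numbers below are illustrative:
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyLibrary"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- force the application to keep using the old 1.0.0.0 build -->
        <bindingRedirect oldVersion="2.0.0.0" newVersion="1.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>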
48. How to find methods of an assembly file (not using ILDASM)
Reflection
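A minimal reflection sketch that lists the methods of an assembly; the assembly path is a
placeholder:
using System;
using System.Reflection;

class ListMethods
{
    static void Main()
    {
        // "SomeAssembly.dll" stands in for the assembly you want to inspect.
        Assembly asm = Assembly.LoadFrom("SomeAssembly.dll");
        foreach (Type t in asm.GetTypes())
        {
            foreach (MethodInfo m in t.GetMethods())
            {
                Console.WriteLine(t.FullName + "." + m.Name);
            }
        }
    }
}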
49. What is Garbage Collection in .Net? Garbage collection process?
The process of transitively tracing through all pointers to actively used objects in
order to locate all objects that can be referenced, and then arranging to reuse
any heap memory that was not found during this trace. The common language
runtime garbage collector also compacts the memory that is in use to reduce the
working space needed for the heap.
50. What is Reflection in .NET? Namespace? How will you load an assembly
which is not referenced by current assembly?
All .NET compilers produce metadata about the types defined in the modules
they produce. This metadata is packaged along with the module (modules in turn
are packaged together in assemblies), and can be accessed by a mechanism
called reflection. The System.Reflection namespace contains classes that can
be used to interrogate the types for a module/assembly.
Using reflection to access .NET metadata is very similar to using
ITypeLib/ITypeInfo to access type library data in COM, and it is used for similar
purposes - e.g. determining data type sizes for marshaling data across
context/process/machine boundaries.
Reflection can also be used to dynamically invoke methods (see
System.Type.InvokeMember), or even create types dynamically at run-time (see
System.Reflection.Emit.TypeBuilder).
51. What is Custom attribute? How to create? If I'm having custom attribute in
an assembly, how to say that name in the code?
A: The primary steps to properly design a custom attribute class are: derive the class
from System.Attribute, apply the AttributeUsageAttribute to it, and expose its
parameters through constructors and properties.
The following example demonstrates the basic way of using reflection to get access to
custom attributes.
[System.AttributeUsage(System.AttributeTargets.Class)]
class SampleAttribute : System.Attribute
{
}

[Sample]
class MyClass
{
}

class MainClass
{
    public static void Main()
    {
        System.Reflection.MemberInfo info = typeof(MyClass);
        object[] attributes = info.GetCustomAttributes(true);
        for (int i = 0; i < attributes.Length; i++)
        {
            System.Console.WriteLine(attributes[i]);
        }
    }
}
1. Choosing a compiler.
To obtain the benefits provided by the common language runtime, you must use
one or more language compilers that target the runtime.
2. Compiling your code to Microsoft intermediate language (MSIL).
Compiling translates your source code into MSIL and generates the required
metadata.
3. Compiling MSIL to native code.
At execution time, a just-in-time (JIT) compiler translates the MSIL into native
code. During this compilation, code must pass a verification process that
examines the MSIL and metadata to find out whether the code can be
determined to be type safe.
4. Executing your code.
The common language runtime provides the infrastructure that enables execution
to take place as well as a variety of services that can be used during execution.
52. What is Active Directory? What is the namespace used to access the
Microsoft Active Directories? What are ADSI Directories?
Active Directory Service Interfaces (ADSI) is a programmatic interface for
Microsoft Windows Active Directory. It enables your applications to interact with
diverse directories on a network, using a single interface. Visual Studio .NET and
the .NET Framework make it easy to add ADSI functionality with the
DirectoryEntry and DirectorySearcher components.
Using ADSI, you can create applications that perform common administrative
tasks, such as backing up databases, accessing printers, and administering user
accounts. ADSI makes it possible for you to:
using System.DirectoryServices;
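A minimal DirectorySearcher sketch; the LDAP path and filter below are placeholders:
using System;
using System.DirectoryServices;

class AdsiDemo
{
    static void Main()
    {
        // Bind to a directory entry and search for a user account.
        DirectoryEntry root = new DirectoryEntry("LDAP://DC=mydomain,DC=com");
        DirectorySearcher searcher = new DirectorySearcher(root);
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=jsmith))";

        SearchResult result = searcher.FindOne();
        if (result != null)
        {
            Console.WriteLine(result.Path);
        }
    }
}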
1. The garbage collector searches for managed objects that are referenced in
managed code.
2. The garbage collector attempts to finalize objects that are not referenced.
3. The garbage collector frees objects that are not referenced and reclaims their
memory.
(COM)
68. Interop Services?
The common language runtime provides two mechanisms for interoperating with
unmanaged code:
Platform invoke, which enables managed code to call functions exported from an
unmanaged library.
COM interop, which enables managed code to interact with COM objects through
interfaces.
Both platform invoke and COM interop use interop marshaling to accurately move
method arguments between caller and callee and back, if required.
A proxy object generated by the common language runtime so that existing COM
applications can use managed classes, including .NET Framework classes, transparently.
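A minimal platform invoke sketch that calls a Win32 API from managed code:
using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Declare the unmanaged function exported from user32.dll.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from managed code", "Platform Invoke", 0);
    }
}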
Variables of reference types, referred to as objects, store references to the actual data.
The following are the reference types:
class
interface
delegate
object
string
17. What is Inheritance, Multiple Inheritance, Shared and Repeatable
Inheritance?
**
18. What is Method overloading?
Method overloading occurs when a class contains two methods with the same
name, but different signatures.
19. What is Method Overriding? How to override a function in C#?
Use the override modifier to modify a method, a property, an indexer, or an event.
An override method provides a new implementation of a member inherited from a
base class. The method overridden by an override declaration is known as the
overridden base method. The overridden base method must have the same
signature as the override method.
You cannot override a non-virtual or static method. The overridden base method
must be virtual, abstract, or override.
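A minimal sketch of overriding a virtual method; the Shape/Circle names are illustrative:
class Shape
{
    public virtual double Area()
    {
        return 0;
    }
}

class Circle : Shape
{
    public double Radius;

    // Provides a new implementation of the member inherited from Shape.
    public override double Area()
    {
        return System.Math.PI * Radius * Radius;
    }
}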
20. Can we call a base class method without creating an instance?
It's possible if it's a static method.
It's also possible by inheriting from that class.
It's possible from derived classes using the base keyword.
21. You have a virtual function in a base class; how will you call that function from
the derived class?
Ans:
class a
{
    public virtual int m()
    {
        return 1;
    }
}
class b : a
{
    public int j()
    {
        return m();
    }
}
C# Language features
How to implement getCommon method in class a? Are you seeing any problem in the
implementation?
Ans:
How to implement the Display method in the class PrintDoc (how to resolve the naming conflict)?
A: There is no naming conflict; a single public Display implementation satisfies both interfaces:
class PrintDoc:IPrint,IWrite
{
public string Display()
{
return "s";
}
}
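If the two interfaces needed different behaviour for Display, explicit interface
implementation could be used instead. A minimal sketch; the IPrint/IWrite member
declarations shown here are assumed, since the original does not include them:
interface IPrint { string Display(); }
interface IWrite { string Display(); }

class PrintDocExplicit : IPrint, IWrite
{
    // Explicit implementations: each interface gets its own Display.
    string IPrint.Display() { return "print"; }
    string IWrite.Display() { return "write"; }
}

// Callers must go through the interface:
// IPrint p = new PrintDocExplicit();  p.Display();   // "print"
// IWrite w = new PrintDocExplicit();  w.Display();   // "write"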
interface IList
{
    int Count { get; set; }
}
interface ICounter
{
    void Count(int i);
}
interface IListCounter : IList, ICounter {}
class C
{
    void Test(IListCounter x)
    {
        x.Count(1);               // Error
        x.Count = 1;              // Error
        ((IList)x).Count = 1;     // Ok, invokes IList.Count.set
        ((ICounter)x).Count(1);   // Ok, invokes ICounter.Count
    }
}
160.Write one code example for compile time binding and one for run time
binding? What is early/late binding?
An object is early bound when it is assigned to a variable declared to be of a
specific object type. Early bound objects allow the compiler to allocate memory
and perform other optimizations before an application executes.
' Create a variable to hold a new object.
Dim FS As FileStream
' Assign a new object to the variable.
FS = New FileStream("C:\tmp.txt", FileMode.Open)
By contrast, an object is late bound when it is assigned to a variable declared to
be of type Object. Objects of this type can hold references to any object, but lack
many of the advantages of early-bound objects.
Dim xlApp As Object
xlApp = CreateObject("Excel.Application")
161.Can you explain what inheritance is and an example of when you might
use it?
162.How can you write a class to restrict that only one object of this class can
be created (Singleton class)?
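A minimal sketch of one common singleton implementation (question 162); the eager
initialization shown here is only one of several valid approaches:
public sealed class Singleton
{
    // Single shared instance, created once by the CLR in a thread-safe way.
    private static readonly Singleton instance = new Singleton();

    // Private constructor prevents external instantiation.
    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }
}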
(Access specifiers)
(Constructor / Destructor)
165.Difference between type constructor and instance constructor? What is
static constructor, when it will be fired? And what is its use?
(Class constructor method is also known as type constructor or type initializer)
Instance constructor is executed when a new instance of type is created and the
class constructor is executed after the type is loaded and before any one of the
type members is accessed. (It will get executed only 1st time, when we call any
static methods/fields in the same class.) Class constructors are used for static
field initialization. Only one class constructor per type is permitted, and it cannot
use the vararg (variable argument) calling convention.
A static constructor is used to initialize a class. It is called automatically to
initialize the class before the first instance is created or any static members are
referenced.
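A minimal sketch illustrating the execution order described above; the class name and
messages are illustrative:
class Logger
{
    private static readonly string logFile;

    // Type (static) constructor: runs once, before the first use of the class.
    static Logger()
    {
        logFile = "app.log";          // static field initialization
        System.Console.WriteLine("Static constructor executed");
    }

    // Instance constructor: runs every time an object is created.
    public Logger()
    {
        System.Console.WriteLine("Instance constructor executed using " + logFile);
    }
}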
166.What is Private Constructor? and its use? Can you create instance of a
class which has Private Constructor?
A: When a class declares only private instance constructors, it is not possible for
classes outside the program to derive from the class or to directly create
instances of it. (Except Nested classes)
Make a constructor private if:
- You want it to be available only to the class itself. For example, you might have
a special constructor used only in the implementation of your class' Clone
method.
- You do not want instances of your component to be created. For example, you
may have a class containing nothing but Shared utility functions, and no instance
data. Creating instances of the class would waste memory.
167.I have 3 overloaded constructors in my class. In order to avoid making an
instance of the class, do I need to make all constructors private?
(yes)
168.Overloaded constructor will call default constructor internally?
(no)
169.What are virtual destructors?
170.Destructor and finalize
Generally in C++ the destructor is called when an object gets destroyed, and one
can explicitly call destructors in C++. Also, objects are destroyed in the reverse
order of the order in which they were created. So in C++ you have control over the
destructors.
In C# you can never call them; the reason is that you cannot destroy an object
yourself. So who has control over the destructor (in C#)? It's the .NET Framework's
Garbage Collector (GC). The GC destroys objects only when necessary, for example
when memory is exhausted or when the user explicitly calls the System.GC.Collect()
method.
Points to remember:
1. Destructors are invoked automatically, and cannot be invoked explicitly.
2. Destructors cannot be overloaded. Thus, a class can have, at most, one
destructor.
3. Destructors are not inherited. Thus, a class has no destructors other than the
one, which may be declared in it.
4. Destructors cannot be used with structs. They are only used with classes.
5. An instance becomes eligible for destruction when it is no longer possible for
any code to use the instance.
6. Execution of the destructor for the instance may occur at any time after the
instance becomes eligible for destruction.
7. When an instance is destructed, the destructors in its inheritance chain are
called, in order, from most derived to least derived.
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/cpguide/html/cpconfinalizemethodscdestructors.asp
171.What is the difference between Finalize and Dispose (Garbage collection)
Class instances often encapsulate control over resources that are not managed
by the runtime, such as window handles (HWND), database connections, and so
on. Therefore, you should provide both an explicit and an implicit way to free
those resources. Provide implicit control by implementing the protected Finalize
Method on an object (destructor syntax in C# and the Managed Extensions for
C++). The garbage collector calls this method at some point after there are no
longer any valid references to the object.
In some cases, you might want to provide programmers using an object with the
ability to explicitly release these external resources before the garbage collector
frees the object. If an external resource is scarce or expensive, better
performance can be achieved if the programmer explicitly releases resources
when they are no longer being used. To provide explicit control, implement the
Dispose method provided by the IDisposable Interface. The consumer of the
object should call this method when it is done using the object. Dispose can be
called even if other references to the object are alive.
Note that even when you provide explicit control by way of Dispose, you should
provide implicit cleanup using the Finalize method. Finalize provides a backup
to prevent resources from permanently leaking if the programmer fails to call
Dispose.
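A minimal sketch of the Dispose/Finalize pattern described above; the ResourceHolder name
and the IntPtr field are illustrative stand-ins for a real unmanaged resource:
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle;          // stand-in for an unmanaged resource
    private bool disposed = false;

    // Explicit cleanup, called by the consumer.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // the finalizer is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // release managed resources here
            }
            // release unmanaged resources here
            handle = IntPtr.Zero;
            disposed = true;
        }
    }

    // Implicit cleanup: backup in case Dispose is never called.
    ~ResourceHolder()
    {
        Dispose(false);
    }
}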
172.What is close method? How its different from Finalize & Dispose?
**
173.What is boxing & unboxing?
174.What is check/uncheck?
175.What is the use of base keyword? Tell me a practical example for base
keyword’s usage?
176.What are the different .net tools which you used in projects?
177.try
{
...
}
catch
{
...//exception occurred here. What'll happen?
}
finally
{
..
}
Ans: The finally block still executes, and the exception thrown inside the catch block then propagates to the caller.
178.What will you do to avoid the prior case?
Ans:
try
{
    try
    {
        ...
    }
    catch
    {
        ... // exception occurred here
    }
    finally
    {
        ...
    }
}
catch
{
    ...
}
finally
{
    ...
}
try
{
    ...
}
catch
{
    ...
}
finally
{
    ...
}
214.Will it go to the finally block if no exception has happened?
Ans: Yes. The finally block is useful for cleaning up any resources allocated in
the try block. Control is always passed to the finally block regardless of how the
try block exits.
215.Is goto statement supported in C#? How about Java?
Gotos are supported in C# to the fullest. In Java goto is a reserved keyword that
provides absolutely no functionality.
216.What’s different about switch statements in C#?
No fall-throughs allowed. Unlike the C++ switch statement, C# does not support
an explicit fall through from one case label to another. If you want, you can use
goto case or goto default instead:
// itemCount and cost are assumed to be declared earlier
switch (itemCount)
{
    case 1:
        cost += 25;
        break;
    case 2:
        cost += 25;
        goto case 1;
}
(ADO.NET)
217.Advantage of ADO.Net?
ADO.NET Does Not Depend On Continuously Live Connections
Database Interactions Are Performed Using Data Commands
Data Can Be Cached in Datasets
Datasets Are Independent of Data Sources
Data Is Persisted as XML
Schemas Define Data Structures
218.How would you connect to a database using .NET?
SqlConnection nwindConn = new SqlConnection(
    "Data Source=localhost; Integrated Security=SSPI; Initial Catalog=northwind");
nwindConn.Open();
219.What are relation objects in dataset and how & where to use them?
In a DataSet that contains multiple DataTable objects, you can use
DataRelation objects to relate one table to another, to navigate through the
tables, and to return child or parent rows from a related table. Adding a
DataRelation to a DataSet adds, by default, a UniqueConstraint to the parent
table and a ForeignKeyConstraint to the child table.
The following code example creates a DataRelation using two DataTable
objects in a DataSet. Each DataTable contains a column named CustID, which
serves as a link between the two DataTable objects. The example adds a single
DataRelation to the Relations collection of the DataSet. The first argument in
the example specifies the name of the DataRelation being created. The second
argument sets the parent DataColumn and the third argument sets the child
DataColumn.
custDS.Relations.Add("CustOrders",
custDS.Tables["Customers"].Columns["CustID"],
custDS.Tables["Orders"].Columns["CustID"]);
(ASP.NET)
237.Asp.net and asp – differences?
- ASP uses code render blocks; ASP.NET uses code declaration blocks.
- ASP is interpreted; ASP.NET is compiled.
- ASP follows a request/response model; ASP.NET is event driven.
- ASP.NET is object oriented (constructors/destructors, inheritance, overloading).
- ASP.NET has structured exception handling (try, catch, finally).
- ASP.NET adds down-level browser support, cultures, user controls, in-built client-side
  validation, in-built graphics support, garbage collection, and variables declared with a
  datatype.
- ASP sessions weren't transferable across servers and couldn't survive server crashes;
  ASP.NET session state can span across servers, survive server crashes, and work with
  browsers that don't support cookies.
- ASP was built on top of Windows and IIS, was always a separate entity, and its
  functionality was limited; ASP.NET is an integral part of the OS under the .NET
  framework, shares many of the same objects that traditional applications would use,
  and all .NET objects are available for ASP.NET's consumption.
238.How ASP and ASP.NET page works? Explain about asp.net page life cycle?
**
239.Order of events in an asp.net page? Control Execution Lifecycle?
Phase - what a control needs to do - method or event to override:
- Initialize: Initialize settings needed during the lifetime of the incoming Web request.
  (Init event / OnInit method)
- Load view state: At the end of this phase, the ViewState property of a control is
  automatically populated as described in Maintaining State in a Control. A control can
  override the default implementation of the LoadViewState method to customize state
  restoration. (LoadViewState method)
- Process postback data: Process incoming form data and update properties accordingly.
  (LoadPostData method, if IPostBackDataHandler is implemented)
- Load: Perform actions common to all requests, such as setting up a database query. At
  this point, server controls in the tree are created and initialized, the state is restored,
  and form controls reflect client-side data. (Load event / OnLoad method)
- Send postback change notifications: Raise change events in response to state changes
  between the current and previous postbacks. (RaisePostDataChangedEvent method, if
  IPostBackDataHandler is implemented)
- Handle postback events: Handle the client-side event that caused the postback and raise
  appropriate events on the server. (RaisePostBackEvent method, if
  IPostBackEventHandler is implemented)
- Prerender: Perform any updates before the output is rendered. Any changes made to the
  state of the control in the prerender phase can be saved, while changes made in the
  rendering phase are lost. (PreRender event / OnPreRender method)
- Save state: The ViewState property of a control is automatically persisted to a string
  object after this stage. This string object is sent to the client and back as a hidden
  variable. For improving efficiency, a control can override the SaveViewState method to
  modify the ViewState property. (SaveViewState method)
- Render: Generate output to be rendered to the client. (Render method)
- Dispose: Perform any final cleanup before the control is torn down. References to
  expensive resources such as database connections must be released in this phase.
  (Dispose method)
- Unload: Perform any final cleanup before the control is torn down. Control authors
  generally perform cleanup in Dispose and do not handle this event. (Unload event /
  OnUnload method)
240.Note: To override an EventName event, override the OnEventName method
(and call base.OnEventName).
If none of the existing ASP.NET server controls meet the specific requirements of your
applications, you can create either a Web user control or a Web custom control that
encapsulates the functionality you need. The main difference between the two controls
lies in ease of creation vs. ease of use at design time.
Web user controls are easy to make, but they can be less convenient to use in advanced
scenarios. You develop Web user controls almost exactly the same way that you develop
Web Forms pages. Like Web Forms, user controls can be created in the visual designer,
they can be written with code separated from the HTML, and they can handle execution
events. However, because Web user controls are compiled dynamically at run time they
cannot be added to the Toolbox, and they are represented by a simple placeholder glyph
when added to a page. This makes Web user controls harder to use if you are
accustomed to full Visual Studio .NET design-time support, including the Properties
window and Design view previews. Also, the only way to share the user control between
applications is to put a separate copy in each application, which takes more maintenance
if you make changes to the control.
Web custom controls are compiled code, which makes them easier to use but more
difficult to create; Web custom controls must be authored in code. Once you have created
the control, however, you can add it to the Toolbox and display it in a visual designer with
full Properties window support and all the other design-time features of ASP.NET server
controls. In addition, you can install a single copy of the Web custom control in the global
assembly cache and share it between applications, which makes maintenance easier.
(Session/State)
You can create handlers for these types of events in the Global.asax file.
Note that if you try the sample above with this setting, you can reset the Web server
(enter iisreset on the command line) and the session state value will persist.
**
(Security)
264.Security types in ASP/ASP.NET? Different Authentication modes?
265.How .Net has implemented security for web applications?
266.How to do Forms authentication in asp.net?
267.Explain authentication levels in .net ?
268.Explain autherization levels in .net ?
269.What is Role-Based security?
A role is a named set of principals that have the same privileges with respect to
security (such as a teller or a manager). A principal can be a member of one or
more roles. Therefore, applications can use role membership to determine
whether a principal is authorized to perform a requested action.
**
270.How will you do windows authentication and what is the namespace? If a
user is logged under integrated windows authentication mode, but he is
still not able to logon, what might be the possible cause for this? In
ASP.Net application how do you find the name of the logged in person
under windows authentication?
271.What are the different authentication modes in the .NET environment?
<authentication mode="Windows|Forms|Passport|None">
   <forms name="name"
          loginUrl="url"
          protection="All|None|Encryption|Validation"
          timeout="30"
          path="/"
          requireSSL="true|false"
          slidingExpiration="true|false">
      <credentials passwordFormat="Clear|SHA1|MD5">
         <user name="username" password="password"/>
      </credentials>
   </forms>
   <passport redirectUrl="internal"/>
</authentication>
Attribute / option - description:
- mode: Controls the default authentication mode for an application.
  - Windows: Specifies Windows authentication as the default authentication mode. Use
    this mode when using any form of Microsoft Internet Information Services (IIS)
    authentication: Basic, Digest, Integrated Windows authentication (NTLM/Kerberos),
    or certificates.
  - Forms: Specifies ASP.NET forms-based authentication as the default authentication
    mode.
  - Passport: Specifies Microsoft Passport authentication as the default authentication
    mode.
  - None: Specifies no authentication. Only anonymous users are expected, or
    applications can handle events to provide their own authentication.
284.How do you specify whether your data should be passed as Query string
and Forms (Mainly about POST and GET)
Through the method attribute of the form tag (method="GET" or method="POST").
285.What is the other method, other than GET and POST, in ASP.NET?
286.What are validators? Name the validation controls in asp.net. How do you
disable them? Will the asp.net validators run on the server side or the client side?
How do you do client-side validation in .Net? How do you disable a validator
control via client-side JavaScript?
A set of server controls included with ASP.NET that test user input in HTML and
Web server controls for programmer-defined requirements. Validation controls
perform input checking in server code. If the user is working with a browser that
supports DHTML, the validation controls can also perform validation
("EnableClientScript" property set to true/false) using client script.
The following validation controls are available in asp.net:
RequiredFieldValidator Control, CompareValidator Control, RangeValidator
Control, RegularExpressionValidator Control, CustomValidator Control,
ValidationSummary Control.
287.Which two properties are there on every validation control?
ControlToValidate, ErrorMessage
288.How do you use css in asp.net?
Within the <HEAD> section of an HTML document that will use these styles, add
a link to this external CSS style sheet that
follows this form:
<LINK REL="STYLESHEET" TYPE="text/css" HREF="MyStyles.css">
MyStyles.css is the name of your external CSS style sheet.
289.How do you implement postback with a text box? What is postback and
viewstate?
Set the AutoPostBack property to true.
290.How can you debug an ASP page, without touching the code?
291.What is SQL injection?
An SQL injection attack "injects" or manipulates SQL code by adding unexpected
SQL to a query.
Many web pages take parameters from the web user and build an SQL query to the
database from them. For instance, when a user logs in, the web page takes the user
name and password and makes an SQL query to the database to check whether the
user has a valid name and password. An attacker might enter:
Username: ' or 1=1 ---
Password: [Empty]
This would execute the following query against the users table:
select count(*) from users where userName='' or 1=1 --' and
userPass=''
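The usual mitigation is to pass user input as parameters instead of concatenating it into
the SQL text. A minimal sketch; the connection string, table and column names are
placeholders:
using System.Data.SqlClient;

class LoginCheck
{
    // Returns true when the supplied credentials match a row in the users table.
    static bool IsValidUser(string userName, string password)
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind"))
        using (SqlCommand cmd = new SqlCommand(
            "select count(*) from users where userName = @name and userPass = @pass", conn))
        {
            // Values are passed as parameters, never concatenated into the SQL text.
            cmd.Parameters.AddWithValue("@name", userName);
            cmd.Parameters.AddWithValue("@pass", password);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}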
292.How can you handle exceptions in Asp.Net?
293.How can you handle unmanaged code exceptions in ASP.Net?
294.Asp.net - How to find last error which occurred?
A: Server.GetLastError();
[C#]
Exception LastError;
String ErrMessage;
LastError = Server.GetLastError();
if (LastError != null)
ErrMessage = LastError.Message;
else
ErrMessage = "No Errors";
Response.Write("Last Error = " + ErrMessage);
295.How to do Caching in ASP?
A: <%@ OutputCache Duration="60" VaryByParam="None" %>
VaryByParam value - description:
- none: One version of page cached (only raw GET)
- *: n versions of page cached based on query string and/or POST body
- v1: n versions of page cached based on value of v1 variable in query string or POST body
- v1;v2: n versions of page cached based on value of v1 and v2 variables in query string
  or POST body
296.<%@ OutputCache Duration="60" VaryByParam="none" %>
<%@ OutputCache Duration="60" VaryByParam="*" %>
<%@ OutputCache Duration="60" VaryByParam="name;age" %>
The OutputCache directive supports several other cache varying options
VaryByHeader - maintain separate cache entry for header string changes
(UserAgent, UserLanguage, etc.)
VaryByControl - for user controls, maintain separate cache entry for properties
of a user control
VaryByCustom - can specify separate cache entries for browser types and
version or provide a custom GetVaryByCustomString method in
HttpApplicationderived class
297.What is the Global ASA(X) File?
298.Any alternative to avoid name collisions other than namespaces?
A scenario: two namespaces named N1 and N2 both contain the same class, say A. Now in
another class I've written
using N1; using N2;
and I am instantiating class A in this class. How will you avoid the name
collision?
Ans: using an alias
Eg: using MyAlias = MyCompany.Proj.Nested;
299.Which is the namespace used to write error message in event Log File?
300.What are the page level transaction and class level transaction?
301.What are different transaction options?
302.What is the namespace for encryption?
303.What is the difference between application and cache variables?
304.What is the difference between control and component?
305.You've defined one Page_Load event in the aspx page and the same Page_Load
event in the code-behind; how will the program run?
306.Where would you use an IHttpModule, and what are the limitations of any
approach you might take in implementing one?
307.Can you edit data in the Repeater control? Which template must you provide, in
order to display data in a Repeater control? How can you provide an alternating
color scheme in a Repeater control? What property must you set, and what
method must you call in your code, in order to bind the data from some data
source to the Repeater control?
308.What is the use of web.config? Difference between machine.config and
Web.config?
ASP.NET configuration files are XML-based text files--each named web.config--
that can appear in any directory on an ASP.NET
Web application server. Each web.config file applies configuration settings to the
directory it is located in and to all
virtual child directories beneath it. Settings in child directories can optionally
override or modify settings specified in
parent directories. The root configuration file--
WinNT\Microsoft.NET\Framework\<version>\config\machine.config--provides
default configuration settings for the entire machine. ASP.NET configures IIS to
prevent direct browser access to web.config
files to ensure that their values cannot become public (attempts to access them
will cause ASP.NET to return 403: Access
Forbidden).
At run time ASP.NET uses these web.config configuration files to hierarchically
compute a unique collection of settings for
each incoming URL target request (these settings are calculated only once and
then cached across subsequent requests; ASP.NET
automatically watches for file changes and will invalidate the cache if any of the
configuration files change).
http://samples.gotdotnet.com/quickstart/aspplus/doc/configformat.aspx
309.What is the use of sessionstate tag in the web.config file?
Configuring session state: Session state features can be configured via the
<sessionState> section in a web.config file. To double the default timeout of 20
minutes, you can add the following to the web.config file of an application:
<sessionState
timeout="40"
/>
310.What are the different modes for the sessionstates in the web.config file?
Off Indicates that session state is not enabled.
Inproc Indicates that session state is stored locally.
StateServer Indicates that session state is stored on a remote server.
SQLServer Indicates that session state is stored on the SQL Server.
311.What is smart navigation?
When a page is requested by an Internet Explorer 5 browser, or later, smart
navigation enhances the user's experience of the page by performing the
following:
eliminating the flash caused by navigation.
persisting the scroll position when moving from page to page.
persisting element focus between navigations.
retaining only the last page state in the browser's history.
Smart navigation is best used with ASP.NET pages that require frequent postbacks but
with visual content that does not change dramatically on return. Consider this carefully
when deciding whether to set this property to true.
Set the SmartNavigation attribute to true in the @ Page directive in the .aspx file. When
the page is requested, the dynamically generated class sets this property.
Open
machine.config(C:\WINDOWS\Microsoft.NET\Framework\v1.0.3705\CONFIG) &
add new extension under <httpHandlers> tag
<add verb="*" path="*.santhosh" type="System.Web.UI.PageHandlerFactory"/>
318.What is AutoEventWireup attribute for ?
From a command prompt, use Wsdl.exe to create a proxy class, specifying (at a
minimum) the URL to an XML Web service or a service description, or the path to
a saved service description.
Wsdl /language:language /protocol:protocol /namespace:myNameSpace /out:filename
     /username:username /password:password /domain:domain <url or path>
2. What is a proxy in web service? How do I use a proxy server when invoking
a Web service?
3. What does an asynchronous web service mean?
4. What are the events fired when web service called?
5. How will you do transactions in Web Services?
6. How does SOAP transport happen and what is the role of HTTP in it? How
you can access a webservice using soap?
7. What are the different formatters that can be used in both? Why? (binary/soap)
8. How you will protect / secure a web service?
For the most part, things that you do to secure a Web site can be used to secure
a Web Service. If you need to encrypt the data exchange, you use Secure
Sockets Layer (SSL) or a Virtual Private Network to keep the bits secure. For
authentication, use HTTP Basic or Digest authentication with Microsoft®
Windows® integration to figure out who the caller is.
However, these mechanisms cannot:
Parse a SOAP request for valid values
Authenticate access at the Web Method level (they can authenticate at the Web
Service level)
Stop reading a request as soon as it is recognized as invalid
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/cpguide/html/cpcontransactionsupportinaspnetwebservices.asp
using System.Web;
using System.Web.Services;
12. What is Remoting?
The process of communication between different operating system processes,
regardless of whether they are on the same computer. The .NET remoting
system is an architecture designed to simplify communication between objects
living in different application domains, whether on the same computer or not, and
between different contexts, whether in the same application domain or not.
13. Difference between web services & remoting?
ASP.NET Web Services vs .NET Remoting:
- Protocol: Web services can be accessed only over HTTP; .NET Remoting can be
  accessed over any protocol (including TCP, HTTP, SMTP and so on).
- State Management: Web services work in a stateless environment; .NET Remoting
  provides support for both stateful and stateless environments through Singleton and
  SingleCall objects.
- Type System: Web services support only the datatypes defined in the XSD type system,
  limiting the number of objects that can be serialized; using binary communication,
  .NET Remoting can provide support for a rich type system.
- Interoperability: Web services support interoperability across platforms and are ideal
  for heterogeneous environments; .NET Remoting requires the client to be built using
  .NET, enforcing a homogeneous environment.
- Reliability: Web services are highly reliable because they are always hosted in IIS;
  .NET Remoting can also take advantage of IIS for fault isolation, but if IIS is not used,
  the application needs to provide the plumbing to ensure reliability.
- Extensibility: Web services provide extensibility by allowing us to intercept the SOAP
  messages during the serialization and deserialization stages; .NET Remoting is very
  extensible by allowing us to customize the different components of the .NET remoting
  framework.
- Ease of programming: Web services are easy to create and deploy; .NET Remoting is
  complex to program.
14. Though both the .NET Remoting infrastructure and ASP.NET Web services can
enable cross-process communication, each is designed to benefit a different
target audience. ASP.NET Web services provide a simple programming model
and a wide reach. .NET Remoting provides a more complex programming model
and has a much narrower reach.
As explained before, the clear performance advantage provided by TCPChannel-
remoting should make you think about using this channel whenever you can
afford to do so. If you can create direct TCP connections from your clients to your
server and if you need to support only the .NET platform, you should go for this
channel. If you are going to go cross-platform or you have the requirement of
supporting SOAP via HTTP, you should definitely go for ASP.NET Web services.
Both the .NET remoting and ASP.NET Web services are powerful technologies
that provide a suitable framework for developing distributed applications. It is
important to understand how both technologies work and then choose the one
that is right for your application. For applications that require interoperability and
must function over public networks, Web services are probably the best bet. For
those that require communications with other .NET components and where
performance is a key priority, .NET Remoting is the best choice. In short, use
Web services when you need to send and receive data from different computing
platforms, use .NET Remoting when sending and receiving data between .NET
applications. In some architectural scenarios, you might also be able to use .NET
Remoting in conjunction with ASP.NET Web services and take advantage of the
best of both worlds.
The key difference between ASP.NET web services and .NET Remoting is how
they serialize data into messages and the format they choose for metadata.
ASP.NET uses the XML serializer for serializing (marshalling), and XSD is used for
metadata. .NET Remoting relies on
System.Runtime.Serialization.Formatters.Binary.BinaryFormatter and
System.Runtime.Serialization.Formatters.Soap.SoapFormatter, and relies on .NET
CLR runtime assemblies for metadata.
15. Can you pass SOAP messages through remoting?
16. CAO and SAO.
Client Activated objects are those remote objects whose Lifetime is directly
Controlled by the client. This is in direct contrast to SAO. Where the server, not
the client has complete control over the lifetime of the objects.
Client-activated objects are instantiated on the server as soon as the client
requests that the object be created. Unlike an SAO, a CAO doesn't delay the object
creation until the first method is called on the object. (In SAO the object is
instantiated when the client calls a method on the object.)
17. singleton and singlecall.
Singleton types never have more than one instance at any one time. If an
instance exists, all client requests are serviced by that instance.
Single Call types always have one instance per client request. The next method
invocation will be serviced by a different server instance, even if the previous
instance has not yet been recycled by the system.
18. What is Asynchronous Web Services?
19. Web Client class and its methods?
20. Flow of remoting?
21. What is the use of trace utility?
Using the SOAP Trace Utility
The Microsoft® Simple Object Access Protocol (SOAP) Toolkit 2.0 includes a
TCP/IP trace utility, MSSOAPT.EXE. You use this trace utility to view the SOAP
messages sent by HTTP between a SOAP client and a service on the server.
1. On the server, open the Web Services Description Language (WSDL) file.
2. In the WSDL file, locate the <soap:address> element that corresponds to the
service and change the location attribute for this element to port 8080. For
example, if the location attribute specifies <http://MyServer/VDir/Service.wsdl>
change this attribute to <http://MyServer:8080/VDir/Service.wsdl>.
3. Run MSSOAPT.exe.
4. On the File menu, point to New, and either click Formatted Trace (if you don't
want to see HTTP headers) or click Unformatted Trace (if you do want to see
HTTP headers).
5. In the Trace Setup dialog box, click OK to accept the default values.
(XML)
<?xml version="1.0"?>
<diffgr:diffgram
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<DataInstance>
</DataInstance>
<diffgr:before>
</diffgr:before>
<diffgr:errors>
</diffgr:errors>
</diffgr:diffgram>
<DataInstance>
The name of this element, DataInstance, is used for explanation purposes in this
documentation. A DataInstance element represents a DataSet or a row of a DataTable.
Instead of DataInstance, the element would contain the name of the DataSet or
DataTable. This block of the DiffGram format contains the current data, whether it has
been modified or not. An element, or row, that has been modified is identified with the
diffgr:hasChanges annotation.
<diffgr:before>
This block of the DiffGram format contains the original version of a row. Elements in this
block are matched to elements in the DataInstance block using the diffgr:id annotation.
<diffgr:errors>
This block of the DiffGram format contains error information for a particular row in the
DataInstance block. Elements in this block are matched to elements in the
DataInstance block using the diffgr:id annotation.
91. If I replace my SQL Server with XML files, how do I handle the same?
92. Write syntax to serialize class using XML Serializer?
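A minimal sketch for question 92; the Employee class and the file name are illustrative:
using System.IO;
using System.Xml.Serialization;

public class Employee
{
    public string Name;
    public int Id;
}

class XmlSerializeDemo
{
    static void Main()
    {
        Employee emp = new Employee();
        emp.Name = "Ravi";
        emp.Id = 7;

        XmlSerializer serializer = new XmlSerializer(typeof(Employee));

        // Serialize the public members to an XML file.
        using (StreamWriter writer = new StreamWriter("employee.xml"))
        {
            serializer.Serialize(writer, emp);
        }

        // Deserialize it back into an object.
        using (StreamReader reader = new StreamReader("employee.xml"))
        {
            Employee back = (Employee)serializer.Deserialize(reader);
            System.Console.WriteLine(back.Name);
        }
    }
}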
(IIS)
93. In which process does IIS run? (was asking about the EXE file)
inetinfo.exe is the Microsoft IIS server running, handling ASP.NET requests
among other things. When an ASP.NET request is received (usually a file with
.aspx extension), the ISAPI filter aspnet_isapi.dll takes care of it by passing the
request to the actual worker process aspnet_wp.exe.
94. Where are the IIS log files stored?
C:\WINDOWS\system32\Logfiles\W3SVC1
OR
c:\winnt\system32\LogFiles\W3SVC1
95. What are the different IIS authentication modes in IIS 5.0 and Explain?
Difference between basic and digest authentication modes?
IIS provides a variety of authentication schemes:
Anonymous
Anonymous authentication gives users access to the public areas of your Web site
without prompting them for a user name or password. Although listed as an
authentication scheme, it is not technically performing any client authentication because
the client is not required to supply any credentials. Instead, IIS provides stored
credentials to Windows using a special user account, IUSR_machinename. By default,
IIS controls the password for this account. Whether or not IIS controls the password
affects the permissions the anonymous user has. When IIS controls the password, a sub
authentication DLL (iissuba.dll) authenticates the user using a network logon. The
function of this DLL is to validate the password supplied by IIS and to inform Windows
that the password is valid, thereby authenticating the client. However, it does not actually
provide a password to Windows. When IIS does not control the password, IIS calls the
LogonUser() API in Windows and provides the account name, password and domain
name to log on the user using a local logon. After the logon, IIS caches the security token
and impersonates the account. A local logon makes it possible for the anonymous user to
access network resources, whereas a network logon does not.
Basic Authentication
IIS Basic authentication is an implementation of the basic authentication scheme found
in section 11 of the HTTP 1.0 specification.
As the specification makes clear, this method is, in and of itself, non-secure. The reason
is that Basic authentication assumes a trusted connection between client and server.
Thus, the username and password are transmitted in clear text. More specifically, they
are transmitted using Base64 encoding, which is trivially easy to decode. This makes
Basic authentication the wrong choice to use over a public network on its own.
Basic Authentication is a long-standing standard supported by nearly all browsers. It also
imposes no special requirements on the server side -- users can authenticate against any
NT domain, or even against accounts on the local machine. With SSL to shelter the
security credentials while they are in transmission, you have an authentication solution
that is both highly secure and quite flexible.
Digest Authentication
The Digest authentication option was added in Windows 2000 and IIS 5.0. Like Basic
authentication, this is an implementation of a technique suggested by Web standards,
namely RFC 2069 (superseded by RFC 2617).
Digest authentication also uses a challenge/response model, but it is much more secure
than Basic authentication (when used without SSL). It achieves this greater security not
by encrypting the secret (the password) before sending it, but rather by following a
different design pattern -- one that does not require the client to transmit the password
over the wire at all.
Instead of sending the password itself, the client transmits a one-way message digest (a
checksum) of the user's password, using (by default) the MD5 algorithm. The server then
fetches the password for that user from a Windows 2000 Domain Controller, reruns the
checksum algorithm on it, and compares the two digests. If they match, the server knows
that the client knows the correct password, even though the password itself was never
sent. (If you have ever wondered what the default ISAPI filter "md5filt" that is installed
with IIS 5.0 is used for, now you know.)
Integrated Windows Authentication
Integrated Windows authentication (formerly known as NTLM authentication and
Windows NT Challenge/Response authentication) can use either NTLM or Kerberos V5
authentication and only works with Internet Explorer 2.0 and later.
When Internet Explorer attempts to access a protected resource, IIS sends two WWW-
Authenticate headers, Negotiate and NTLM.
So, which mechanism is used depends upon a negotiation between Internet Explorer and
IIS.
When used in conjunction with Kerberos v5 authentication, IIS can delegate security
credentials among computers running Windows 2000 and later that are trusted and
configured for delegation. Delegation enables remote access of resources on behalf of
the delegated user.
Integrated Windows authentication is the best authentication scheme in an intranet
environment where users have Windows domain accounts, especially when using
Kerberos. Integrated Windows authentication, like digest authentication, does not pass
the user's password across the network. Instead, a hashed value is exchanged.
Client Certificate Mapping
A certificate is a digitally signed statement that contains information about an entity and
the entity's public key, thus binding these two pieces of information together. A trusted
organization (or entity) called a Certification Authority (CA) issues a certificate after the
CA verifies that the entity is who it says it is. Certificates can contain different types of
data. For example, an X.509 certificate includes the format of the certificate, the serial
number of the certificate, the algorithm used to sign the certificate, the name of the CA
that issued the certificate, the name and public key of the entity requesting the certificate,
and the CA's signature. X.509 client certificates simplify authentication for larger user
bases because they do not rely on a centralized account database. You can verify a
certificate simply by examining the certificate.
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/vsent7/html/vxconIISAuthentication.asp
When selecting an isolation level for your ASP application, keep in mind that out-of-process
settings - that is, Medium and High - are less efficient than in-process (Low). However,
out-of-process communication has been vastly improved under IIS5, and in fact IIS5's
Medium isolation level often delivers better results than IIS4's Low isolation. In practice,
you shouldn't set the Low isolation level for an IIS5 application unless you really need to
serve hundreds of pages per second.
Controls
Programming
8. Write a program in C# to check whether a given number is prime or not.
9. Write a program to find the angle between the hour and minute hands of a clock.
10. Write a C# program to find the factorial of n.
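Minimal sketches for questions 8-10 (the class and method names are illustrative, not prescribed by the questions):
using System;

class InterviewSnippets
{
    // Question 8: trial division up to sqrt(n); returns true when n is prime.
    public static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    // Question 9: the hour hand moves 30 degrees per hour plus 0.5 degrees per
    // minute; the minute hand moves 6 degrees per minute.
    public static double ClockAngle(int hours, int minutes)
    {
        double angle = Math.Abs(30 * (hours % 12) + 0.5 * minutes - 6 * minutes);
        return angle > 180 ? 360 - angle : angle;
    }

    // Question 10: iterative factorial (long overflows past n = 20).
    public static long Factorial(int n)
    {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }

    static void Main()
    {
        Console.WriteLine(IsPrime(29));       // True
        Console.WriteLine(ClockAngle(3, 15)); // 7.5
        Console.WriteLine(Factorial(5));      // 120
    }
}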
11. How do I upload a file from my ASP.NET page?
A: In order to perform file upload in your ASP.NET page, you will need to use two
classes: the System.Web.UI.HtmlControls.HtmlInputFile class and the
System.Web.HttpPostedFile class. The HtmlInputFile class represents an HTML
input control that the user will use on the client side to select a file to upload. The
HttpPostedFile class represents the uploaded file and is obtained from the
PostedFile property of the HtmlInputFile class. In order to use the HtmlInputFile
control, you need to add the enctype attribute to your form tag as follows:
<form id="upload" method="post" runat="server" enctype="multipart/form-data">
Also, remember that the /data directory is the only directory with Write
permissions enabled for the anonymous user. Therefore, you will need to make
sure that your code uploads the file to the /data directory or one of its
subdirectories.
Below is a simple example of how to upload a file via an ASP.NET page in C#
and VB.NET.
C#
<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.UI.HtmlControls" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Drawing" %>
<html>
<head>
<title>upload_cs</title>
</head>
<script language="C#" runat="server">
public void UploadFile(object sender, EventArgs e)
{
    if (loFile.PostedFile != null)
    {
        try
        {
            string strFileName, strFileNamePath, strFileFolder;
            strFileFolder = Context.Server.MapPath(@"data\");
            strFileName = loFile.PostedFile.FileName;
            strFileName = Path.GetFileName(strFileName);
            strFileNamePath = strFileFolder + strFileName;
            loFile.PostedFile.SaveAs(strFileNamePath);
            lblFileName.Text = strFileName;
            lblFileLength.Text = loFile.PostedFile.ContentLength.ToString();
            lblFileType.Text = loFile.PostedFile.ContentType;
            pnStatus.Visible = true;
        }
        catch (Exception x)
        {
            Label lblError = new Label();
            lblError.ForeColor = Color.Red;
            lblError.Text = "Exception occurred: " + x.Message;
            lblError.Visible = true;
            this.Controls.Add(lblError);
        }
    }
}
</script>
<body>
<form id="upload_cs" method="post" runat="server"
enctype="multipart/form-data">
<P>
<INPUT type="file" id="loFile" runat="server">
</P>
<P>
<asp:Button id="btnUpload" runat="server" Text=" Upload "
OnClick="UploadFile"></asp:Button></P>
<P>
<asp:Panel id="pnStatus" runat="server" Visible="False">
<asp:Label id="lblFileName" Font-Bold="True"
Runat="server"></asp:Label> uploaded<BR>
<asp:Label id="lblFileLength" Runat="server"></asp:Label>
bytes<BR>
<asp:Label id="lblFileType" Runat="server"></asp:Label>
</asp:Panel></P>
</form>
</body>
</html>
12. How do I send an email message from my ASP.NET page?
A: You can use the System.Web.Mail.MailMessage and the
System.Web.Mail.SmtpMail classes to send email from your ASPX pages. Below is a
simple example of using these classes to send mail in C# and VB.NET. In order to
send mail through the mail server, make sure to set the static SmtpServer property
of the SmtpMail class to the server's name (mail-fwd in this example).
C#
<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.Mail" %>
<HTML>
<HEAD>
<title>Mail Test</title>
</HEAD>
<script language="C#" runat="server">
private void Page_Load(Object sender, EventArgs e)
{
try
{
MailMessage mailObj = new MailMessage();
mailObj.From = "sales@joeswidgets.com";
mailObj.To = "ringleader@forexample-domain.com";
mailObj.Subject = "Your Widget Order";
mailObj.Body = "Your order was processed.";
mailObj.BodyFormat = MailFormat.Text;
SmtpMail.SmtpServer = "mail-fwd";
SmtpMail.Send(mailObj);
Response.Write("Mail sent successfully");
}
catch (Exception x)
{
Response.Write("Your message was not sent: " + x.Message);
}
}
</script>
<body>
<form id="mail_test" method="post" runat="server">
</form>
</body>
</HTML>
13. Write a program to create a user control with name and surname as data
members and login as a method, and also the code to call it. (Hint: use event
delegates.) This is a practical example of passing events to delegates.
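A minimal sketch, assuming a Web Forms user control (the names LoginHandler, UserLogin and LoggedIn are illustrative, not from the question):
using System;
using System.Web.UI;

// Delegate type through which the control publishes its event.
public delegate void LoginHandler(object sender, EventArgs e);

public class UserLogin : UserControl
{
    public string Name;      // data member
    public string Surname;   // data member

    public event LoginHandler LoggedIn;   // event exposed via the delegate

    public void Login()
    {
        // ... validate Name/Surname here ...
        if (LoggedIn != null)
            LoggedIn(this, EventArgs.Empty);   // notify all subscribers
    }
}

// Calling code, for example in the hosting page:
//   UserLogin ctl = (UserLogin)LoadControl("UserLogin.ascx");
//   ctl.Name = "John"; ctl.Surname = "Smith";
//   ctl.LoggedIn += new LoginHandler(OnLoggedIn);
//   ctl.Login();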
14. How can you read 3rd line from a text file?
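A minimal sketch for question 14 (the path is illustrative):
using System;
using System.IO;

class ReadThirdLine
{
    static void Main()
    {
        using (StreamReader reader = new StreamReader(@"c:\temp\sample.txt"))
        {
            string line = null;
            for (int i = 1; i <= 3; i++)
            {
                line = reader.ReadLine();
                if (line == null) break;   // file has fewer than 3 lines
            }
            Console.WriteLine(line);       // the 3rd line, or null if the file is shorter
        }
    }
}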
This topic lists common programming tasks that can be summarized with a language
keyword:
• Assign an object to an object variable: = (Visual Basic), = (C#)
• Function/method does not return a value: Sub (Visual Basic), void (C#)
• Overload a function or method (Visual Basic: overload a procedure or method): Overloads (Visual Basic); no language keyword is required for this purpose in C#
• Refer to the current object: Me (Visual Basic), this (C#)
Object-Oriented Programming
1. Indexes
2. Avoid too many triggers on the table
3. Avoid unnecessarily complicated joins
4. Correct use of the GROUP BY clause with the select list
5. In worst cases, denormalization
• Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs,
so the number of indexes should be kept small. Try to use a maximum of 4-5 indexes on
one table, not more. If you have a read-only table, then the number of indexes may be
increased.
• Keep your indexes as narrow as possible. This reduces the size of the index and reduces
the number of reads required to read the index.
• Try to create indexes on columns that have integer values rather than character values.
• If you create a composite (multi-column) index, the order of the columns in the key is
very important. Try to order the columns in the key to enhance selectivity, with the
most selective columns to the leftmost of the key.
• If you want to join several tables, try to create surrogate integer keys for this purpose and
create indexes on their columns.
• Create surrogate integer primary key (identity for example) if your table will not have
many insert operations.
• Clustered indexes are preferable to nonclustered indexes if you need to select by a
range of values or you need to sort the result set with GROUP BY or ORDER BY.
• If your application will be performing the same query over and over on the same table,
consider creating a covering index on the table.
• You can use the SQL Server Profiler Create Trace Wizard with "Identify Scans of Large
Tables" trace to determine which tables in your database may need indexes. This trace
will show which tables are being scanned by queries instead of using an index.
• You can use sp_MSforeachtable undocumented stored procedure to rebuild all
indexes in your database. Try to schedule it to execute during CPU idle time and slow
production periods.
sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
T-SQL Queries
1. Two tables:
Employee (empid, empname, salary, mgrid)
Phone (empid, phnumber)
2. Select all employees who don't have a phone.
SELECT empname
FROM Employee
WHERE (empid NOT IN
(SELECT DISTINCT empid
FROM phone))
3. Select the names of employees who have more than one phone
number.
SELECT empname
FROM employee
WHERE (empid IN
(SELECT empid
FROM phone
GROUP BY empid
HAVING COUNT(empid) > 1))
4. Select the details of 3 max salaried employees from employee
table.
SELECT TOP 3 empid, salary
FROM employee
ORDER BY salary DESC
5. Display all managers from the table. (manager id is same as emp
id)
SELECT empname
FROM employee
WHERE (empid IN
(SELECT DISTINCT mgrid
FROM employee))
6. Write a Select statement to list the Employee Name, Manager Name under a particular
manager?
SELECT e1.empname AS EmpName, e2.empname AS ManagerName
FROM Employee e1 INNER JOIN
Employee e2 ON e1.mgrid = e2.empid
ORDER BY e2.mgrid
7. Two tables, emp and phone.
emp fields are: empid, name.
Phone fields are: empid, office, mobile, home. Select all employees who don't have
any phone numbers.
SELECT *
FROM employee LEFT OUTER JOIN
phone ON employee.empid = phone.empid
WHERE (phone.office IS NULL OR phone.office = ' ')
AND (phone.mobile IS NULL OR phone.mobile = ' ')
AND (phone.home IS NULL OR phone.home = ' ')
8. Find employee who is living in more than one city.
Two Tables:
Emp (empid, empname, salary)
City (empid, city)
9. SELECT empname, fname, lname
FROM employee
WHERE (empid IN
(SELECT empid
FROM city
GROUP BY empid
HAVING COUNT(empid) > 1))
10. Find all employees who is living in the same city. (table is same as above)
SELECT fname
FROM employee
WHERE (empid IN
(SELECT empid
FROM city a
WHERE city IN
(SELECT city
FROM city b
GROUP BY city
HAVING COUNT(city) > 1)))
11. There is a table named MovieTable with three columns - moviename, person and role.
Write a query which gets the movie details where Mr. Amitabh and Mr. Vinod acted and
their role is actor.
SELECT DISTINCT m1.moviename
FROM MovieTable m1 INNER JOIN
MovieTable m2 ON m1.moviename = m2.moviename
WHERE (m1.person = 'amitabh' AND m2.person = 'vinod' OR
m2.person = 'amitabh' AND m1.person = 'vinod') AND (m1.role =
'actor') AND (m2.role = 'actor')
ORDER BY m1.moviename
12. There are two employee tables named emp1 and emp2. Both contain the same structure
(salary details). But emp2 salary details are incorrect and emp1 salary details are correct.
So, write a query which corrects the salary details of the table emp2.
UPDATE b SET b.sal = a.sal FROM emp1 a INNER JOIN emp2 b ON a.empid = b.empid
13. Given a table named "Students" which contains studentid, subjectid and marks, where
there are 10 subjects and 50 students, write a query to find out the maximum marks
obtained in each subject.
14. For the same table, now write a SQL query to also get the studentid, combined with the
previous result.
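No answers are given above; minimal sketches assuming the column names stated in the question:
-- Q13: maximum marks obtained in each subject
SELECT subjectid, MAX(marks) AS MaxMarks
FROM Students
GROUP BY subjectid

-- Q14: also return the studentid(s) who obtained that maximum, via a correlated subquery
SELECT s.studentid, s.subjectid, s.marks
FROM Students s
WHERE s.marks = (SELECT MAX(marks) FROM Students m WHERE m.subjectid = s.subjectid)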
15. Three tables – student , course, marks – how do go at finding name of the students who
got max marks in the diff courses.
SELECT student.name, course.name AS coursename, marks.sid,
marks.mark
FROM marks INNER JOIN
student ON marks.sid = student.sid INNER JOIN
course ON marks.cid = course.cid
WHERE (marks.mark =
(SELECT MAX(Mark)
FROM Marks MaxMark
WHERE MaxMark.cID = Marks.cID))
16. There is a table day_temp which has three columns dayid, day and temperature. How do
I write a query to get the difference of temperature among each other for seven days of a
week?
SELECT a.dayid, a.dday, a.tempe, a.tempe - b.tempe AS Difference
FROM day_temp a INNER JOIN
day_temp b ON a.dayid = b.dayid + 1
OR
Select a.day, a.degree-b.degree from temperature a, temperature b
where a.id=b.id+1
17. There is a table which contains the names like this. a1, a2, a3, a3, a4, a1, a1, a2 and
their salaries. Write a query to get grand total salary, and total salaries of individual
employees in one query.
SELECT empid, SUM(salary) AS salary
FROM employee
GROUP BY empid WITH ROLLUP
ORDER BY empid
18. How to know how many tables contains empno as a column in a database?
SELECT COUNT(*) AS Counter
FROM syscolumns
WHERE (name = 'empno')
19. Find duplicate rows in a table? OR I have a table with one column which has many
records which are not distinct. I need to find the distinct values from that column
and number of times it’s repeated.
SELECT sid, mark, COUNT(*) AS Counter
FROM marks
GROUP BY sid, mark
HAVING (COUNT(*) > 1)
20. How to delete the rows which are duplicate (don’t delete both duplicate records).
SET ROWCOUNT 1
DELETE yourtable
FROM yourtable a
WHERE (SELECT COUNT(*) FROM yourtable b WHERE b.name1 = a.name1
AND b.age1 = a.age1) > 1
WHILE @@rowcount > 0
DELETE yourtable
FROM yourtable a
WHERE (SELECT COUNT(*) FROM yourtable b WHERE b.name1 = a.name1
AND b.age1 = a.age1) > 1
SET ROWCOUNT 0
21. How to find 6th highest salary
SELECT TOP 1 salary
FROM (SELECT DISTINCT TOP 6 salary
FROM employee
ORDER BY salary DESC) a
ORDER BY salary
22. Find top salary among two tables
SELECT TOP 1 sal
FROM (SELECT MAX(sal) AS sal
FROM sal1
UNION
SELECT MAX(sal) AS sal
FROM sal2) a
ORDER BY sal DESC
23. Write a query to convert all the letters in a word to upper case
SELECT UPPER('test')
24. Write a query to round up the values of a number. For example even if the user
enters 7.1 it should be rounded up to 8.
SELECT CEILING (7.1)
25. Write a SQL Query to find first day of month?
SELECT DATENAME(dw, DATEADD(dd, - DATEPART(dd, GETDATE()) + 1,
GETDATE())) AS FirstDay
Datepart Abbreviations
year yy, yyyy
quarter qq, q
month mm, m
dayofyear dy, y
day dd, d
week wk, ww
weekday dw
hour hh
minute mi, n
second ss, s
millisecond ms
26. Table A contains column1 which is primary key and has 2 values (1, 2) and Table B
contains column1 which is primary key and has 2 values (2, 3). Write a query which
returns the values that are not common for the tables and the query should return one
column with 2 records.
SELECT tbla.a
FROM tbla, tblb
WHERE tbla.a <>
(SELECT tblb.a
FROM tbla, tblb
WHERE tbla.a = tblb.a)
UNION
SELECT tblb.a
FROM tbla, tblb
WHERE tblb.a <>
(SELECT tbla.a
FROM tbla, tblb
WHERE tbla.a = tblb.a)
OR (better approach)
SELECT a
FROM tbla
WHERE a NOT IN
(SELECT a
FROM tblb)
UNION ALL
SELECT a
FROM tblb
WHERE a NOT IN
(SELECT a
FROM tbla)
27. There are 3 tables Titles, Authors and Title-Authors (check PUBS db). Write the query to
get the author name and the number of books written by that author, the result should
start from the author who has written the maximum number of books and end with the
author who has written the minimum number of books.
SELECT authors.au_lname, COUNT(*) AS BooksCount
FROM authors INNER JOIN
titleauthor ON authors.au_id = titleauthor.au_id INNER JOIN
titles ON titles.title_id = titleauthor.title_id
GROUP BY authors.au_lname
ORDER BY BooksCount DESC
28.
UPDATE emp_master
SET emp_sal =
CASE
WHEN emp_sal > 0 AND emp_sal <= 20000 THEN (emp_sal * 1.01)
WHEN emp_sal > 20000 THEN (emp_sal * 1.02)
END
29. List all products with total quantity ordered, if quantity ordered is null show it as 0.
SELECT name, CASE WHEN SUM(qty) IS NULL THEN 0 WHEN SUM(qty) > 0
THEN SUM(qty) END AS tot
FROM [order] RIGHT OUTER JOIN
product ON [order].prodid = product.prodid
GROUP BY name
Result:
coke 60
mirinda 0
pepsi 10
30. ANY, SOME, or ALL?
ALL means greater than every value--in other words, greater than the maximum value.
For example, >ALL (1, 2, 3) means greater than 3.
ANY means greater than at least one value, that is, greater than the minimum. So >ANY
(1, 2, 3) means greater than 1. SOME is an SQL-92 standard equivalent for ANY.
31. IN & = (difference in correlated sub query)
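No answer is given above. Briefly: in a subquery comparison, = requires the subquery to return at most one value (otherwise SQL Server raises the "Subquery returned more than 1 value" error), while IN accepts any number of values. A small illustration against the pubs tables:
-- Works only if each author has at most one row in titleauthor
SELECT au_lname FROM authors a
WHERE a.au_id = (SELECT ta.au_id FROM titleauthor ta WHERE ta.au_id = a.au_id)

-- Works regardless of how many rows the subquery returns
SELECT au_lname FROM authors a
WHERE a.au_id IN (SELECT ta.au_id FROM titleauthor ta)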
INDEX
32. What is Index? It’s purpose?
Indexes in databases are similar to indexes in books. In a database, an index allows the
database program to find data in a table without scanning the entire table. An index in a
database is a list of values in a table with the storage locations of rows in the table that
contain each value. Indexes can be created on either a single column or a combination of
columns in a table and are implemented in the form of B-trees. An index contains an
entry with one or more columns (the search key) from each row in a table. A B-tree is
sorted on the search key, and can be searched efficiently on any leading subset of the
search key. For example, an index on columns A, B, C can be searched efficiently on A,
on A, B, and A, B, C.
33. Explain about Clustered and non clustered index? How to choose between a
Clustered Index and a Non-Clustered Index?
There are clustered and nonclustered indexes. A clustered index is a special type of
index that reorders the way records in the table are physically stored. Therefore table can
have only one clustered index. The leaf nodes of a clustered index contain the data
pages.
A nonclustered index is a special type of index in which the logical order of the index
does not match the physical stored order of the rows on disk. The leaf nodes of a
nonclustered index does not consist of the data pages. Instead, the leaf nodes contain
index rows.
Consider using a clustered index for:
o Columns that contain a large number of distinct values.
o Queries that return a range of values using operators such as BETWEEN, >, >=,
<, and <=.
o Columns that are accessed sequentially.
o Queries that return large result sets.
Non-clustered indexes have the same B-tree structure as clustered indexes, with
two significant differences:
o The data rows are not sorted and stored in order based on their non-clustered
keys.
o The leaf layer of a non-clustered index does not consist of the data pages.
Instead, the leaf nodes contain index rows. Each index row contains the non-
clustered key value and one or more row locators that point to the data row (or
rows if the index is not unique) having the key value.
o A table can have at most 249 nonclustered indexes.
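For reference, a minimal sketch of creating each kind of index (table, column and index names are illustrative):
CREATE CLUSTERED INDEX IX_Employee_EmpId ON Employee (empid)
CREATE NONCLUSTERED INDEX IX_Employee_EmpName ON Employee (empname)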
34. Disadvantage of index?
Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs,
so the number of indexes should be kept small.
35. Given a scenario where I have 10 clustered indexes in a table, one on each of its 10 columns,
what are the advantages and disadvantages?
A: Not possible - only 1 clustered index is allowed per table.
36. How can I enforce to use particular index?
You can use index hint (index=<index_name>) after the table name.
SELECT au_lname FROM authors (index=aunmind)
37. What is Index Tuning?
One of the hardest tasks facing database administrators is the selection of appropriate
columns for non-clustered indexes. You should consider creating non-clustered indexes
on any columns that are frequently referenced in the WHERE clauses of SQL statements.
Other good candidates are columns referenced by JOIN and GROUP BY operations.
You may wish to also consider creating non-clustered indexes that cover all of the
columns used by certain frequently issued queries. These queries are referred to as
“covered queries” and experience excellent performance gains.
Index Tuning is the process of finding appropriate column for non-clustered indexes.
SQL Server provides a wonderful facility known as the Index Tuning Wizard which greatly
enhances the index selection process.
38. Difference between Index defrag and Index rebuild?
When you create an index in the database, the index information used by queries is
stored in index pages. The sequential index pages are chained together by pointers from
one page to the next. When changes are made to the data that affect the index, the
information in the index can become scattered in the database. Rebuilding an index
reorganizes the storage of the index data (and table data in the case of a clustered index)
to remove fragmentation. This can improve disk performance by reducing the number of
page reads required to obtain the requested data
DBCC INDEXDEFRAG - Defragments clustered and secondary indexes of the specified
table or view.
**
39. What is sorting and what is the difference between sorting & clustered indexes?
The ORDER BY clause sorts query results by one or more columns up to 8,060 bytes.
This happens at the time the data is retrieved from the database. A clustered index, by
contrast, physically sorts the data as rows are inserted into or updated in the table.
40. What are statistics, under what circumstances they go out of date, how do you
update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique
values then the selectivity of that index is high, as opposed to an index with non-unique
values. The query optimizer uses these statistics in determining whether to choose an
index or not while executing a query.
Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed
(that is, if the distribution of key values has changed), or the table has been truncated
using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version
41. What is fill factor? What is its use? What happens when we ignore it? When
should you use a low fill factor?
When you create a clustered index, the data in the table is stored in the data pages of the
database according to the order of the values in the indexed columns. When new rows of
data are inserted into the table or the values in the indexed columns are changed,
Microsoft® SQL Server™ 2000 may have to reorganize the storage of the data in the
table to make room for the new row and maintain the ordered storage of the data. This
also applies to nonclustered indexes. When data is added or changed, SQL Server may
have to reorganize the storage of the data in the nonclustered index pages. When a new
row is added to a full index page, SQL Server moves approximately half the rows to a
new page to make room for the new row. This reorganization is known as a page split.
Page splitting can impair performance and fragment the storage of the data in a table.
When creating an index, you can specify a fill factor to leave extra gaps and reserve a
percentage of free space on each leaf level page of the index to accommodate future
expansion in the storage of the table's data and reduce the potential for page splits. The
fill factor value is a percentage from 0 to 100 that specifies how much to fill the data
pages after the index is created. A value of 100 means the pages will be full and will take
the least amount of storage space. This setting should be used only when there will be no
changes to the data, for example, on a read-only table. A lower value leaves more empty
space on the data pages, which reduces the need to split data pages as indexes grow
but requires more storage space. This setting is more appropriate when there will be
changes to the data in the table.
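For illustration, the fill factor is specified when the index is created (table, column and index names are assumed):
CREATE NONCLUSTERED INDEX IX_Employee_Salary
ON Employee (salary)
WITH FILLFACTOR = 80   -- leave roughly 20% free space on each leaf page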
DATA TYPES
42. What are the data types in SQL Server?
bigint, binary, bit, char, cursor, datetime, decimal, float, image, int, money, nchar, ntext,
nvarchar, real, smalldatetime, smallint, smallmoney, text, timestamp, tinyint, varbinary,
varchar, uniqueidentifier
43. Difference between char and nvarchar / char and varchar data-type?
char[(n)] - Fixed-length non-Unicode character data with length of n bytes. n must be a
value from 1 through 8,000. Storage size is n bytes. The SQL-92 synonym for char is
character.
nvarchar(n) - Variable-length Unicode character data of n characters. n must be a value
from 1 through 4,000. Storage size, in bytes, is two times the number of characters
entered. The data entered can be 0 characters in length. The SQL-92 synonyms for
nvarchar are national char varying and national character varying.
varchar[(n)] - Variable-length non-Unicode character data with length of n bytes. n must
be a value from 1 through 8,000. Storage size is the actual length in bytes of the data
entered, not n bytes. The data entered can be 0 characters in length. The SQL-92
synonyms for varchar are char varying or character varying.
44. GUID data size?
128 bits (16 bytes)
45. How GUID becoming unique across machines?
To ensure uniqueness across machines, the ID of the network card is used (among
others) to compute the number.
46. What is the difference between text and image data type?
Text and image. Use text for character data if you need to store more than 255 characters
in SQL Server 6.5, or more than 8000 in SQL Server 7.0. Use image for binary large
objects (BLOBs) such as digital images. With text and image data types, the data is not
stored in the row, so the limit of the page size does not apply. All that is stored in the row
is a pointer to the database pages that contain the data. Individual text, ntext, and image
values can be a maximum of 2 GB, which is too long to store in a single data row.
JOINS
47. What are joins?
Sometimes we have to select data from two or more tables to make our result complete.
We have to perform a join.
48. How many types of Joins?
Joins can be categorized as:
Inner joins (the typical join operation, which uses some comparison operator like
= or <>). These include equi-joins and natural joins.
Inner joins use a comparison operator to match rows from two tables based on
the values in common columns from each table. For example, retrieving all rows
where the student identification number is the same in both the students and
courses tables.
Outer joins. Outer joins can be a left, a right, or full outer join.
Outer joins are specified with one of the following sets of keywords when they are
specified in the FROM clause:
• LEFT JOIN or LEFT OUTER JOIN -The result set of a left outer join
includes all the rows from the left table specified in the LEFT OUTER
clause, not just the ones in which the joined columns match. When a row
in the left table has no matching rows in the right table, the associated
result set row contains null values for all select list columns coming from
the right table.
• RIGHT JOIN or RIGHT OUTER JOIN - A right outer join is the reverse of
a left outer join. All rows from the right table are returned. Null values are
returned for the left table any time a right table row has no matching row
in the left table.
• FULL JOIN or FULL OUTER JOIN - A full outer join returns all rows in
both the left and right tables. Any time a row has no match in the other
table, the select list columns from the other table contain null values.
When there is a match between the tables, the entire result set row
contains data values from the base tables.
Cross joins - Cross joins return all rows from the left table, each row from the left
table is combined with all rows from the right table. Cross joins are also called
Cartesian products. (A Cartesian join will get you a Cartesian product. A
Cartesian join is when you join every row of one table to every row of another
table. You can also get one by joining every row of a table to every row of itself.)
2. What is self join?
A table can be joined to itself in a self-join.
3. What are the differences between UNION and JOINS?
A join selects columns from 2 or more tables. A union selects rows.
4. Can I improve performance by using the ANSI-style joins instead of the old-
style joins?
Code Example 1:
select o.name, i.name
from sysobjects o, sysindexes i
where o.id = i.id
Code Example 2:
select o.name, i.name
from sysobjects o inner join sysindexes i
on o.id = i.id
You will not get any performance gain by switching to the ANSI-style JOIN
syntax.
Using the ANSI-JOIN syntax gives you an important advantage: Because the join
logic is cleanly separated from the filtering criteria, you can understand the query
logic more quickly.
The SQL Server old-style JOIN executes the filtering conditions before executing
the joins, whereas the ANSI-style JOIN reverses this procedure (join logic
precedes filtering).
Perhaps the most compelling argument for switching to the ANSI-style JOIN is
that Microsoft has explicitly stated that SQL Server will not support the old-style
OUTER JOIN syntax indefinitely. Another important consideration is that the
ANSI-style JOIN supports query constructions that the old-style JOIN syntax
does not support.
5. What is derived table?
Derived tables are SELECT statements in the FROM clause referred to by an
alias or a user-specified name. The result set of the SELECT in the FROM clause
forms a table used by the outer SELECT statement. For example, this SELECT
uses a derived table to find if any store carries all book titles in the pubs
database:
SELECT ST.stor_id, ST.stor_name
FROM stores AS ST,
(SELECT stor_id, COUNT(DISTINCT title_id) AS
title_count
FROM sales
GROUP BY stor_id
) AS SA
WHERE ST.stor_id = SA.stor_id
AND SA.title_count = (SELECT COUNT(*) FROM titles)
STORED PROCEDURE
6. What is Stored procedure?
A stored procedure is a set of Structured Query Language (SQL) statements that
you assign a name to and store in a database in compiled form so that you can
share it between a number of programs.
They allow modular programming.
They allow faster execution.
They can reduce network traffic.
They can be used as a security mechanism.
7. What are the different types of stored procedures?
Therefore, although the user-created stored procedure prefixed with sp_ may exist in the
current database, the master database is always checked first, even if the stored
procedure is qualified with the database name.
calling proc.
DECLARE @factorial int
EXEC dbo.sp_calcfactorial 4, @factorial OUT
SELECT @factorial
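The definition of sp_calcfactorial is not shown above; a minimal sketch that matches this calling convention (an iterative version, for illustration only):
CREATE PROCEDURE dbo.sp_calcfactorial
    @number int,
    @result int OUTPUT
AS
BEGIN
    SET @result = 1
    WHILE @number > 1
    BEGIN
        SET @result = @result * @number
        SET @number = @number - 1
    END
END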
12. Nested Triggers
Triggers are nested when a trigger performs an action that initiates another
trigger, which can initiate another trigger, and so on. Triggers can be nested up to
32 levels, and you can control whether triggers can be nested through the nested
triggers server configuration option.
13. What is an extended stored procedure? Can you instantiate a COM object
by using T-SQL?
An extended stored procedure is a function within a DLL (written in a
programming language like C, C++ using Open Data Services (ODS) API) that
can be called from T-SQL, just the way we call normal stored procedures using
the EXEC statement.
14. Difference between view and stored procedure?
A view can contain only a single SELECT statement (INSERT, UPDATE, DELETE and
CREATE statements are not allowed in a view definition). A view cannot contain SELECT INTO
or an ORDER BY clause (unless TOP is also used), and it cannot accept parameters, whereas a
stored procedure can contain multiple statements and take parameters.
15. What is a Function & what are the different user defined functions?
Function is a saved Transact-SQL routine that returns a value. User-defined
functions cannot be used to perform a set of actions that modify the global
database state. User-defined functions, like system functions, can be invoked
from a query. They also can be executed through an EXECUTE statement like
stored procedures.
1. Scalar Functions
Functions are scalar-valued if the RETURNS clause specifies one of the scalar
data types.
2. Inline Table-valued Functions
If the RETURNS clause specifies TABLE with no accompanying column list, the
function is an inline function.
3. Multi-statement Table-valued Functions
If the RETURNS clause specifies a TABLE type with columns and their data
types, the function is a multi-statement table-valued function.
2. What are the difference between a function and a stored procedure?
1. Functions can be used in a select statement where as procedures cannot
2. Procedure takes both input and output parameters but Functions takes only input
parameters
3. Functions cannot return values of type text, ntext, image & timestamps where as
procedures can
4. Functions can be used in a computed column definition in CREATE TABLE, but procedures
cannot.
Eg: CREATE TABLE <tablename> (name varchar(10), salary AS dbo.getsal(name))
Here getsal is a user-defined scalar function that returns a salary value. When the table is
created no storage is allotted for the salary column, and the getsal function is not
executed; but when we fetch rows from this table, the getsal function gets executed and its
return value is returned in the result set.
3. How to debug a stored procedure?
TRIGGER
4. What is Trigger? What is its use? What are the types of Triggers? What are
the new kinds of triggers in sql 2000?
Triggers are a special class of stored procedure defined to execute automatically
when an UPDATE, INSERT, or DELETE statement is issued against a table or
view. Triggers are powerful tools that sites can use to enforce their business
rules automatically when data is modified.
The CREATE TRIGGER statement can be defined with the FOR UPDATE, FOR
INSERT, or FOR DELETE clauses to target a trigger to a specific class of data
modification actions. When FOR UPDATE is specified, the IF UPDATE
(column_name) clause can be used to target a trigger to updates affecting a
particular column.
You can use the FOR clause to specify when a trigger is executed:
AFTER (default) - The trigger executes after the statement that triggered it
completes. If the statement fails with an error, such as a constraint violation or
syntax error, the trigger is not executed. AFTER triggers cannot be specified for
views.
INSTEAD OF -The trigger executes in place of the triggering action. INSTEAD
OF triggers can be specified on both tables and views. You can define only one
INSTEAD OF trigger for each triggering action (INSERT, UPDATE, and
DELETE). INSTEAD OF triggers can be used to perform enhanced integrity
checks on the data values supplied in INSERT and UPDATE statements.
INSTEAD OF triggers also let you specify actions that allow views, which would
normally not support updates, to be updatable.
An INSTEAD OF trigger can take actions such as:
• Ignoring parts of a batch.
• Not processing a part of a batch and logging the problem rows.
• Taking an alternative action if an error condition is encountered.
In SQL Server 6.5 you could define only 3 triggers per table, one for INSERT, one for
UPDATE and one for DELETE. From SQL Server 7.0 onwards, this restriction is gone,
and you could create multiple triggers per each action. But in 7.0 there's no way to control
the order in which the triggers fire. In SQL Server 2000 you could specify which trigger
fires first or fires last using sp_settriggerorder.
Till SQL Server 7.0, triggers fire only after the data modification operation happens. So in
a way, they are called post triggers. But in SQL Server 2000 you could create pre triggers
also.
LOCK
6. What are locks?
Microsoft® SQL Server™ 2000 uses locking to ensure transactional integrity and
database consistency. Locking prevents users from reading data being changed
by other users, and prevents multiple users from changing the same data at the
same time. If locking is not used, data within the database may become logically
incorrect, and queries executed against that data may produce unexpected
results.
7. What are the different types of locks?
SQL Server uses these resource lock modes:
Shared (S) - Used for operations that do not change or update data (read-only operations),
such as a SELECT statement.
Update (U) - Used on resources that can be updated. Prevents a common form of deadlock that
occurs when multiple sessions are reading, locking, and potentially updating resources later.
Exclusive (X) - Used for data-modification operations, such as INSERT, UPDATE, or DELETE.
Ensures that multiple updates cannot be made to the same resource at the same time.
Intent - Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS),
intent exclusive (IX), and shared with intent exclusive (SIX).
Schema - Used when an operation dependent on the schema of a table is executing. The types
of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).
Bulk Update (BU) - Used when bulk-copying data into a table and the TABLOCK hint is specified.
8. What is a dead lock? Give a practical sample? How you can minimize the
deadlock situation? What is a deadlock and what is a live lock? How will
you go about resolving deadlocks?
Deadlock is a situation when two processes, each having a lock on one piece of
data, attempt to acquire a lock on the other's piece. Each process would wait
indefinitely for the other to release the lock, unless one of the user processes is
terminated. SQL Server detects deadlocks and terminates one user's process.
A livelock is one, where a request for an exclusive lock is repeatedly denied
because a series of overlapping shared locks keeps interfering. SQL Server
detects the situation after four denials and refuses further shared locks. (A
livelock also occurs when read transactions monopolize a table or page, forcing a
write transaction to wait indefinitely.)
9. What is isolation level?
An isolation level determines the degree of isolation of data between concurrent
transactions. The default SQL Server isolation level is Read Committed. A lower
isolation level increases concurrency, but at the expense of data correctness.
Conversely, a higher isolation level ensures that data is correct, but can affect
concurrency negatively. The isolation level required by an application determines
the locking behavior SQL Server uses.
SQL-92 defines the following isolation levels, all of which are supported by SQL
Server:
Read uncommitted (the lowest level where transactions are isolated only enough
to ensure that physically corrupt data is not read).
Read committed (SQL Server default level).
Repeatable read.
Serializable (the highest level, where transactions are completely isolated from
one another).
Isolation level Dirty read Nonrepeatable read Phantom
Read uncommitted Yes Yes Yes
Read committed No Yes Yes
Repeatable read No No Yes
Serializable No No No
10. Uncommitted Dependency (Dirty Read) - Uncommitted dependency occurs when
a second transaction selects a row that is being updated by another transaction.
The second transaction is reading data that has not been committed yet and may
be changed by the transaction updating the row. For example, an editor is
making changes to an electronic document. During the changes, a second editor
takes a copy of the document that includes all the changes made so far, and
distributes the document to the intended audience.
Inconsistent Analysis (Nonrepeatable Read) Inconsistent analysis occurs when a
second transaction accesses the same row several times and reads different
data each time. Inconsistent analysis is similar to uncommitted dependency in
that another transaction is changing the data that a second transaction is
reading. However, in inconsistent analysis, the data read by the second
transaction was committed by the transaction that made the change. Also,
inconsistent analysis involves multiple reads (two or more) of the same row and
each time the information is changed by another transaction; thus, the term
nonrepeatable read. For example, an editor reads the same document twice, but
between each reading, the writer rewrites the document. When the editor reads
the document for the second time, it has changed.
Phantom Reads Phantom reads occur when an insert or delete action is
performed against a row that belongs to a range of rows being read by a
transaction. The transaction's first read of the range of rows shows a row that no
longer exists in the second or succeeding read, as a result of a deletion by a
different transaction. Similarly, as the result of an insert by a different transaction,
the transaction's second or succeeding read shows a row that did not exist in the
original read. For example, an editor makes changes to a document submitted by
a writer, but when the changes are incorporated into the master copy of the
document by the production department, they find that new unedited material has
been added to the document by the author. This problem could be avoided if no
one could add new material to the document until the editor and production
department finish working with the original document.
11. nolock? What is the difference between the REPEATABLE READ and
SERIALIZE isolation levels?
Locking Hints - A range of table-level locking hints can be specified using the
SELECT, INSERT, UPDATE, and DELETE statements to direct Microsoft® SQL
Server 2000 to the type of locks to be used. Table-level locking hints can be used
when a finer control of the types of locks acquired on an object is required. These
locking hints override the current transaction isolation level for the session.
Locking hint Description
HOLDLOCK Hold a shared lock until completion of the transaction
instead of releasing the lock as soon as the required table,
row, or data page is no longer required. HOLDLOCK is
equivalent to SERIALIZABLE.
NOLOCK Do not issue shared locks and do not honor exclusive locks.
When this option is in effect, it is possible to read an
uncommitted transaction or a set of pages that are rolled
back in the middle of a read. Dirty reads are possible. Only
applies to the SELECT statement.
PAGLOCK Use page locks where a single table lock would usually be
taken.
READCOMMITTED Perform a scan with the same locking semantics as a
transaction running at the READ COMMITTED isolation
level. By default, SQL Server 2000 operates at this isolation
level.
READPAST Skip locked rows. This option causes a transaction to skip
rows locked by other transactions that would ordinarily
appear in the result set, rather than block the transaction
waiting for the other transactions to release their locks on
these rows. The READPAST lock hint applies only to
transactions operating at READ COMMITTED isolation
and will read only past row-level locks. Applies only to the
SELECT statement.
READUNCOMMITTED Equivalent to NOLOCK.
REPEATABLEREAD Perform a scan with the same locking semantics as a
transaction running at the REPEATABLE READ isolation
level.
ROWLOCK Use row-level locks instead of the coarser-grained page- and
table-level locks.
SERIALIZABLE Perform a scan with the same locking semantics as a
transaction running at the SERIALIZABLE isolation level.
Equivalent to HOLDLOCK.
TABLOCK Use a table lock instead of the finer-grained row- or page-
level locks. SQL Server holds this lock until the end of the
statement. However, if you also specify HOLDLOCK, the
lock is held until the end of the transaction.
TABLOCKX Use an exclusive lock on a table. This lock prevents others
from reading or updating the table and is held until the end
of the statement or transaction.
UPDLOCK Use update locks instead of shared locks while reading a
table, and hold locks until the end of the statement or
transaction. UPDLOCK has the advantage of allowing you
to read data (without blocking other readers) and update it
later with the assurance that the data has not changed since
you last read it.
XLOCK Use an exclusive lock that will be held until the end of the
transaction on all data processed by the statement. This lock
can be specified with either PAGLOCK or TABLOCK, in
which case the exclusive lock applies to the appropriate
level of granularity.
12. For example, if the transaction isolation level is set to SERIALIZABLE, and the
table-level locking hint NOLOCK is used with the SELECT statement, key-range
locks typically used to maintain serializable transactions are not taken.
USE pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
SELECT au_lname FROM authors WITH (NOLOCK)
GO
13. What is escalation of locks?
Lock escalation is the process of converting a lot of low level locks (like row
locks, page locks) into higher level locks (like table locks). Every lock is a
memory structure; too many locks would mean more memory being occupied by
locks. To prevent this from happening, SQL Server escalates the many fine-grain
locks to fewer coarse-grain locks. Lock escalation threshold was definable in
SQL Server 6.5, but from SQL Server 7.0 onwards it's dynamically managed by
SQL Server.
VIEW
14. What is View? Use? Syntax of View?
A view is a virtual table made up of data from base tables and other views, but
not stored separately.
Views simplify users' perception of the database (they can be used to present only the
necessary information while hiding details in underlying relations)
Views improve data security by preventing undesired accesses
Views facilitate the provision of additional data independence
15. Does the View occupy memory space?
No
16. Can u drop a table if it has a view?
Views or tables participating in a view created with the SCHEMABINDING clause
cannot be dropped. If the view is not created using SCHEMABINDING, then we
can drop the table.
17. Why doesn't SQL Server permit an ORDER BY clause in the definition of a
view?
SQL Server excludes an ORDER BY clause from a view to comply with the ANSI
SQL-92 standard. Because analyzing the rationale for this standard requires a
discussion of the underlying structure of the structured query language (SQL)
and the mathematics upon which it is based, we can't fully explain the restriction
here. However, if you need to be able to specify an ORDER BY clause in a view,
consider using the following workaround:
USE pubs
GO
CREATE VIEW AuthorsByName
AS
SELECT TOP 100 PERCENT *
FROM authors
ORDER BY au_lname, au_fname
GO
The TOP construct, which Microsoft introduced in SQL Server 7.0, is most useful
when you combine it with the ORDER BY clause. The only time that SQL Server
supports an ORDER BY clause in a view is when it is used in conjunction with
the TOP keyword. (Note that the TOP keyword is a SQL Server extension to the
ANSI SQL-92 standard.)
TRANSACTION
18. What is Transaction?
A transaction is a sequence of operations performed as a single logical unit of
work. A logical unit of work must exhibit four properties, called the ACID
(Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a
transaction:
Atomicity - A transaction must be an atomic unit of work; either all of its data
modifications are performed or none of them is performed.
Consistency - When completed, a transaction must leave all data in a consistent
state. In a relational database, all rules must be applied to the transaction's
modifications to maintain all data integrity. All internal data structures, such as B-
tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation - Modifications made by concurrent transactions must be isolated from
the modifications made by any other concurrent transactions. A transaction either
sees data in the state it was in before another concurrent transaction modified it,
or it sees the data after the second transaction has completed, but it does not
see an intermediate state. This is referred to as serializability because it results in
the ability to reload the starting data and replay a series of transactions to end up
with the data in the same state it was in after the original transactions were
performed.
Durability - After a transaction has completed, its effects are permanently in
place in the system. The modifications persist even in the event of a system
failure.
19. After one BEGIN TRANSACTION, a TRUNCATE statement and a ROLLBACK
statement are issued. Will the truncate be rolled back? Since the TRUNCATE
statement does not perform a fully logged operation, how does it roll back?
It will roll back: TRUNCATE TABLE deallocates the data pages, and those page
deallocations are recorded in the transaction log, so the operation can be reversed.
**
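A quick way to verify this (a minimal sketch using a temporary table):
CREATE TABLE #demo (id int)
INSERT INTO #demo VALUES (1)
BEGIN TRAN
TRUNCATE TABLE #demo
SELECT COUNT(*) FROM #demo   -- 0: the table is empty inside the transaction
ROLLBACK
SELECT COUNT(*) FROM #demo   -- 1: the truncate was rolled back
DROP TABLE #demo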
20. Given a SQL like
Begin Tran
Select @@Rowcount
Begin Tran
Select @@Rowcount
Begin Tran
Select @@Rowcount
Commit Tran
Select @@Rowcount
RollBack
Select @@Rowcount
RollBack
Select @@Rowcount
What is the value of @@Rowcount at each stmt levels?
Ans : 0 – zero.
@@ROWCOUNT - Returns the number of rows affected by the last statement.
@@TRANCOUNT - Returns the number of active transactions for the current
connection.
Each Begin Tran will add count, each commit will reduce count and ONE rollback
will make it 0.
OTHER
21. What are the constraints on a table?
Constraints define rules regarding the values allowed in columns and are the standard
mechanism for enforcing integrity. SQL Server 2000 supports five classes of constraints:
NOT NULL
CHECK
UNIQUE
PRIMARY KEY
FOREIGN KEY
22. There are 50 columns in a table. Write a query to get first 25 columns
Ans: Each of the 25 column names needs to be listed explicitly in the SELECT list.
23. How to list all the tables in a particular database?
USE pubs
GO
sp_help
24. What are cursors? Explain different types of cursors. What are the
disadvantages of cursors? How can you avoid cursors?
Cursors allow row-by-row processing of the result sets.
Types of cursors: Static, Dynamic, Forward-only, Keyset-driven.
Disadvantages of cursors: Each time you fetch a row from the cursor, it results in
a network roundtrip. Cursors are also costly because they require more
resources and temporary storage (results in more IO operations). Further, there
are restrictions on the SELECT statements that can be used with some types of
cursors.
How to avoid cursor:
1. Most of the time, set-based operations can be used instead of cursors. Here is
an example: if you have to give a flat hike to your employees using the following
criteria:
Salary between 30000 and 40000 -- 5000 hike
Salary between 40000 and 55000 -- 7000 hike
Salary between 55000 and 65000 -- 9000 hike
In this situation many developers tend to use a cursor, determine each
employee's salary and update his salary according to the above formula. But the
same can be achieved by multiple update statements or can be combined in a
single UPDATE statement as shown below:
UPDATE tbl_emp SET salary =
CASE WHEN salary BETWEEN 30000 AND 40000 THEN salary + 5000
WHEN salary BETWEEN 40000 AND 55000 THEN salary + 7000
WHEN salary BETWEEN 55000 AND 65000 THEN salary + 9000
END
2. You need to call a stored procedure when a column in a particular row meets a
certain condition. You don't have to use cursors for this. This can be achieved
using a WHILE loop, as long as there is a unique key to identify each row (see the
sketch below).
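A minimal sketch of the WHILE-loop approach (the table tbl_emp, its empid key, the salary filter and the procedure usp_process_employee are all illustrative/hypothetical):
DECLARE @empid int
SELECT @empid = MIN(empid) FROM tbl_emp WHERE salary > 50000
WHILE @empid IS NOT NULL
BEGIN
    EXEC usp_process_employee @empid   -- hypothetical procedure called once per row
    SELECT @empid = MIN(empid) FROM tbl_emp
    WHERE salary > 50000 AND empid > @empid
END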
2. What is Dynamic Cursor? Suppose, I have a dynamic cursor attached to
table in a database. I have another means by which I will modify the table.
What do you think will the values in the cursor be?
Dynamic cursors reflect all changes made to the rows in their result set when
scrolling through the cursor. The data values, order, and membership of the rows
in the result set can change on each fetch. All UPDATE, INSERT, and DELETE
statements made by all users are visible through the cursor. Updates are visible
immediately if they are made through the cursor using either an API function
such as SQLSetPos or the Transact-SQL WHERE CURRENT OF clause.
Updates made outside the cursor are not visible until they are committed, unless
the cursor transaction isolation level is set to read uncommitted.
3. What is DATEPART?
Returns an integer representing the specified datepart of the specified date.
4. Difference between Delete and Truncate?
TRUNCATE TABLE is functionally identical to DELETE statement with no
WHERE clause: both remove all rows in the table.
(1) But TRUNCATE TABLE is faster and uses fewer system and transaction log
resources than DELETE. The DELETE statement removes rows one at a time
and records an entry in the transaction log for each deleted row. TRUNCATE
TABLE removes the data by deallocating the data pages used to store the table's
data, and only the page deallocations are recorded in the transaction log.
(2) Because TRUNCATE TABLE is not logged, it cannot activate a trigger.
(3) The counter used by an identity for new rows is reset to the seed for the
column. If you want to retain the identity counter, use DELETE instead.
Of course, TRUNCATE TABLE can be rolled back.
5. Given a scenario where two operations, a DELETE statement and a TRUNCATE
statement, were run against a table: the DELETE statement was successful but the
TRUNCATE statement failed. Can you judge why? (One common cause: TRUNCATE
TABLE cannot be used on a table that is referenced by a FOREIGN KEY constraint,
whereas DELETE can.)
**
6. What are global variables? Tell me some of them?
Transact-SQL global variables are a form of function and are now referred to as
functions. Examples: @@ERROR, @@IDENTITY, @@ROWCOUNT, @@TRANCOUNT,
@@VERSION, @@SERVERNAME.
7. What is DDL?
Data definition language (DDL) statements are SQL statements that support the
definition or declaration of database objects (for example, CREATE TABLE,
DROP TABLE, and ALTER TABLE).
You can use the ADO Command object to issue DDL statements. To differentiate
DDL statements from a table or stored procedure name, set the CommandType
property of the Command object to adCmdText. Because executing DDL queries
with this method does not generate any recordsets, there is no need for a
Recordset object.
8. What is DML?
Data Manipulation Language (DML), which is used to select, insert, update, and
delete data in the objects defined using DDL
9. What are keys in RDBMS? What is a primary key/ foreign key?
There are two kinds of keys.
A primary key is a set of columns from a table that are guaranteed to have
unique values for each row of that table.
Foreign keys are attributes of one table that have matching values in a primary
key in another table, allowing for relationships between tables.
10. What is the difference between Primary Key and Unique Key?
Both primary key and unique key enforce uniqueness of the column on which
they are defined. But by default a primary key creates a clustered index on the
column, whereas a unique key creates a nonclustered index by default. Another major
difference is that a primary key doesn't allow NULLs, but a unique key allows one
NULL only.
11. Define candidate key, alternate key, composite key?
A candidate key is one that can identify each row of a table uniquely. Generally a
candidate key becomes the primary key of the table. If the table has more than
one candidate key, one of them will become the primary key, and the rest are
called alternate keys.
A key formed by combining two or more columns is called a composite key.
12. What is the Referential Integrity?
Referential integrity refers to the consistency that must be maintained between
primary and foreign keys, i.e. every foreign key value must have a corresponding
primary key value.
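For illustration (table and column names are assumed), a foreign key that enforces referential integrity:
CREATE TABLE Dept (deptid int PRIMARY KEY)
CREATE TABLE Emp (
    empid  int PRIMARY KEY,
    deptid int FOREIGN KEY REFERENCES Dept (deptid)   -- every deptid here must exist in Dept
)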
13. What are defaults? Is there a column to which a default can't be bound?
A default is a value that will be used by a column, if no value is supplied to that
column while inserting data. IDENTITY columns and timestamp columns can't
have defaults bound to them.
14. What is query optimization? How is query performance tuning done?
15. What is the use of the trace utility?
**
16. What is the use of shell commands? xp_cmdshell
Executes a given command string as an operating-system command shell and
returns any output as rows of text. Grants nonadministrative users permissions to
execute xp_cmdshell.
17. What is the use of shrinking a database?
Microsoft® SQL Server 2000 allows each file within a database to be shrunk to
remove unused pages. Both data and transaction log files can be shrunk.
18. If the performance of a query suddenly decreased, where would you check?
19. What is a pass-through query?
Microsoft® SQL Server 2000 sends pass-through queries as un-interpreted query
strings to an OLE DB data source. The query must be in a syntax the OLE DB
data source will accept. A Transact-SQL statement uses the results from a pass-
through query as though it is a regular table reference.
This example uses a pass-through query to retrieve a result set from a Microsoft
Access version of the Northwind sample database.
SELECT *
FROM OpenRowset('Microsoft.Jet.OLEDB.4.0',
'c:\northwind.mdb';'admin'; '',
'SELECT CustomerID, CompanyName
FROM Customers
WHERE Region = ''WA'' ')
20. How do you differentiate Local and Global Temporary table?
You can create local and global temporary tables. Local temporary tables are
visible only in the current session; global temporary tables are visible to all
sessions. Prefix local temporary table names with single number sign
(#table_name), and prefix global temporary table names with a double number
sign (##table_name). SQL statements reference the temporary table using the
value specified for table_name in the CREATE TABLE statement:
CREATE TABLE #MyTempTable (cola INT PRIMARY KEY)
INSERT INTO #MyTempTable VALUES (1)
21. How does the EXISTS keyword work in SQL Server?
USE pubs
SELECT au_lname, au_fname
FROM authors
WHERE exists
(SELECT *
FROM publishers
WHERE authors.city = publishers.city)
When a subquery is introduced with the keyword EXISTS, it functions as an
existence test. The WHERE clause of the outer query tests for the existence of
rows returned by the subquery. The subquery does not actually produce any
data; it returns a value of TRUE or FALSE.
22. ANY?
USE pubs
SELECT au_lname, au_fname
FROM authors
WHERE city = ANY
(SELECT city
FROM publishers)
23. To select the date part only:
SELECT CONVERT(char(10),GetDate(),101)
--to select time part only
SELECT right(GetDate(),7)
24. How can I send a message to user from the SQL Server?
You can use the xp_cmdshell extended stored procedure to run net send
command. This is the example to send the 'Hello' message to JOHN:
EXEC master..xp_cmdshell "net send JOHN 'Hello'"
To get net send message on the Windows 9x machines, you should run the
WinPopup utility. You can place WinPopup in the Startup group under Program
Files.
25. What is normalization? Explain different levels of normalization. Explain
third normal form with an example.
The process of refining tables, keys, columns, and relationships to create an
efficient database is called normalization. This eliminates unnecessary
duplication and provides a rapid search path to all necessary information.
Some of the benefits of normalization are:
There are a few rules for database normalization. Each rule is called a "normal form." If
the first rule is observed, the database is said to be in "first normal form." If the first three
rules are observed, the database is considered to be in "third normal form." Although
other levels of normalization are possible, third normal form is considered the highest
level necessary for most applications.
Eliminate duplicative columns from the same table. Clearly, the Subordinate1-
Subordinate4 columns are duplicative. What happens when we need to add or
remove a subordinate?
Manager - Subordinates
Bob - Jim, Mary, Beth
Mary - Mike, Jason, Carol, Mark
Jim - Alan
This solution is closer, but it also falls short of the mark. The subordinates column
is still duplicative and non-atomic. What happens when we need to add or
remove a subordinate? We need to read and write the entire contents of the
table. That’s not a big deal in this situation, but what if one manager had one
hundred employees? Also, it complicates the process of selecting data from the
database in future queries.
Solution:
Manager    Subordinate
Bob        Jim
Bob        Mary
Bob        Beth
Mary       Mike
Mary       Jason
Mary       Carol
Mary       Mark
Jim        Alan
Records should not depend on anything other than a table's primary key (a
compound key, if necessary).
For example, consider a customer's address in an accounting system. The
address is needed by the Customers table, but also by the Orders, Shipping,
Invoices, Accounts Receivable, and Collections tables. Instead of storing the
customer's address as a separate entry in each of these tables, store it in one
place, either in the Customers table or in a separate Addresses table.
The Member table satisfies first normal form - it contains no repeating groups. It
satisfies second normal form - since it doesn't have a multivalued key. But the
key is MemberID, and the company name and location describe only a company,
not a member. To achieve third normal form, they must be moved into a separate
table. Since they describe a company, CompanyCode becomes the key of the
new "Company" table.
The motivation for this is the same as for second normal form: we want to avoid
update and delete anomalies. For example, suppose no members from IBM
were currently stored in the database. With the previous design, there would be
no record of its existence, even though 20 past members were from IBM!
Member Table
Company Table
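A minimal sketch of the third-normal-form split described above; the exact column
definitions are illustrative, but the idea is that company details live only in the Company
table and Member carries just the CompanyCode:
CREATE TABLE Company (
    CompanyCode int PRIMARY KEY,
    CompanyName varchar(50),
    Location varchar(50)
)
CREATE TABLE Member (
    MemberID int PRIMARY KEY,
    MemberName varchar(50),
    CompanyCode int REFERENCES Company(CompanyCode) -- company attributes moved out of Member
)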
6. The correct solution, to cause the model to be in 4th normal form, is to ensure
that all M:M relationships are resolved independently if they are indeed
independent.
**
Remember, these normalization guidelines are cumulative. For a database to be in 2NF,
it must first fulfill all the criteria of a 1NF database.
context - Specifies the execution context in which the newly created OLE object runs. If
specified, this value must be one of the following:
1 = In-process (.dll) OLE server only
4 = Local (.exe) OLE server only
5 = Both in-process and local OLE server allowed
Examples
A. Use Prog ID - This example creates a SQL-DMO SQLServer object by using its ProgID.
B. Use CLSID - This example creates a SQL-DMO SQLServer object by using its CLSID.
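A minimal sketch of example A using the OLE Automation stored procedures; error
handling via sp_OAGetErrorInfo is omitted, and example B would simply pass the CLSID
string instead of the ProgID:
DECLARE @object int, @hr int
-- Create the SQL-DMO SQLServer object by its ProgID
EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
IF @hr <> 0 PRINT 'sp_OACreate failed'
-- ... work with the object via sp_OAGetProperty / sp_OAMethod ...
EXEC sp_OADestroy @object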
-- Details about the pubs database: .mdf and .ldf file locations, size of the database
sp_helpdb pubs
Audit and review activity that occurred on an instance of SQL Server. This allows a
security administrator to review any of the auditing events, including the success and
failure of a login attempt and the success and failure of permissions in accessing
statements and objects.
Permissions
2. A user is a member of the Public role and the Sales role. The Public role has
permission to select on all the tables, but the Sales role does not have
select permission on some of the tables. Will that user be able to select
from all tables?
**
3. If a user does not have permission on a table, but he has permission to a
view created on it, will he be able to view the data in table?
Yes.
4. Describe Application Role and explain a scenario when you will use it?
**
5. After removing a table from a database, what other related objects have to be
dropped explicitly?
(view, SP)
6. You have a stored procedure named YourSP that contains a SELECT statement. You
also have a user named YourUser. What permissions will you give him for
accessing the SP?
**
7. Different Authentication modes in SQL Server? If a user is logged in under
Windows authentication mode, how do you find his user id?
There are two authentication modes in SQL Server: Windows Authentication mode and
Mixed Mode (Windows Authentication plus SQL Server Authentication). Under Windows
authentication, the current login can be found with SUSER_SNAME() or SYSTEM_USER.
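For example, the current login and database user can be inspected like this (under
Windows authentication SUSER_SNAME() returns the DOMAIN\user login):
SELECT SUSER_SNAME() AS LoginName,
       USER_NAME() AS DatabaseUser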
Administration
4. Explain the architecture of SQL Server?
**
5. Different types of Backups?
49. What are ‘jobs’ in SQL Server? How do we create one? What are tasks?
Using SQL Server Agent jobs, you can automate administrative tasks and run
them on a recurring basis.
**
50. What is database replication? What are the different types of replication
you can set up in SQL Server? How are they used? What is snapshot
replication how is it different from Transactional replication?
Replication is the process of copying/moving data between databases on the
same or different servers. SQL Server supports three types of replication:
snapshot, transactional, and merge replication. Snapshot replication distributes a
point-in-time copy of the published data, whereas transactional replication starts
from a snapshot and then propagates incremental changes as they occur.
51. What is RAID and what are different types of RAID configurations?
RAID stands for Redundant Array of Inexpensive Disks, used to provide fault
tolerance to database servers. There are six RAID levels, 0 through 5, offering
different levels of performance and fault tolerance.
52. How do you troubleshoot performance problems in SQL Server?
Some of the tools/ways that help you troubleshoot performance problems are:
SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO
ON, SQL Server Profiler, Windows NT /2000 Performance monitor, Graphical
execution plan in Query Analyzer.
53. How to determine the service pack currently installed on SQL Server?
The global variable @@Version stores the build number of the sqlservr.exe,
which is used to determine the service pack installed.
eg: Microsoft SQL Server 2000 - 8.00.760 (Intel X86) Dec 17 2002 14:22:05
Copyright (c) 1988-2003 Microsoft Corporation Enterprise Edition on Windows
NT 5.0 (Build 2195: Service Pack 3)
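A quick sketch of checking this from a query window; SERVERPROPERTY('ProductLevel')
returns values such as 'RTM' or 'SP3':
SELECT @@VERSION
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel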
54. What is the purpose of using COLLATE in a query?
The term collation refers to a set of rules that determine how data is sorted and
compared. In Microsoft® SQL Server 2000, it is not required to separately specify the
code page and sort order for character data and the collation for Unicode data.
Instead, you specify the collation name and sorting rules to use. Character data
is sorted using rules that define the correct character sequence, with options for
specifying case-sensitivity, accent marks, kana character types, and character
width. Microsoft SQL Server 2000 collations include these groupings:
Windows collations - Windows collations define rules for storing character data
based on the rules defined for an associated Windows locale. The base Windows
collation rules specify which alphabet or language is used when dictionary sorting
is applied, as well as the code page used to store non-Unicode character data.
For Windows collations, the nchar, nvarchar, and ntext data types have the
same sorting behavior as char, varchar, and text data types
SQL collations - SQL collations are provided for compatibility with sort orders in
earlier versions of Microsoft SQL Server.
Sort Order
Binary is the fastest sorting order, and is case-sensitive. If Binary is selected, the Case-
sensitive, Accent-sensitive, Kana-sensitive, and Width-sensitive options are not
available.
Use Latin1_General for the U.S. English character set (code page 1252).
Use Modern_Spanish for all variations of Spanish, which also use the same
character set as U.S. English (code page 1252).
Use Arabic for all variations of Arabic, which use the Arabic character set (code
page 1256).
Use Japanese_Unicode for the Unicode version of Japanese (code page 932),
which has a different sort order from Japanese, but the same code page (932).
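A small sketch of COLLATE in a query, using the pubs authors table referenced earlier;
the collation is applied per expression to override the column's default comparison and
sort rules:
SELECT au_lname, au_fname
FROM authors
WHERE au_lname COLLATE Latin1_General_CS_AS = 'Ringer'
ORDER BY au_lname COLLATE Latin1_General_CS_AS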
2. What is the STUFF Function and how does it differ from the REPLACE
function?
STUFF - Deletes a specified length of characters and inserts another set of
characters at a specified starting point.
SELECT STUFF('abcdef', 2, 3, 'ijklmn')
GO
Here is the result set:
---------
aijklmnef
REPLACE - Replaces all occurrences of the second given string expression in the first
string expression with a third expression.
SELECT REPLACE('abcdefghicde','cde','xxx')
GO
Here is the result set:
------------
abxxxfghixxx
3. What does it mean to have quoted_identifier on? What are the implications
of having it off?
When SET QUOTED_IDENTIFIER is OFF (default), literal strings in expressions
can be delimited by single or double quotation marks.
When SET QUOTED_IDENTIFIER is ON, all strings delimited by double
quotation marks are interpreted as object identifiers. Therefore, quoted identifiers
do not have to follow the Transact-SQL rules for identifiers.
SET QUOTED_IDENTIFIER must be ON when creating or manipulating indexes
on computed columns or indexed views. If SET QUOTED_IDENTIFIER is OFF,
CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on
computed columns or indexed views will fail.
The SQL Server ODBC driver and Microsoft OLE DB Provider for SQL Server
automatically set QUOTED_IDENTIFIER to ON when connecting.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET
ANSI_NULLS settings are captured and used for subsequent invocations of that
stored procedure. When executed inside a stored procedure, the setting of SET
QUOTED_IDENTIFIER is not changed.
SET QUOTED_IDENTIFIER OFF
GO
-- An attempt to create a table with a reserved keyword as a name
-- should fail.
CREATE TABLE "select" ("identity" int IDENTITY, "order" int)
GO
SET QUOTED_IDENTIFIER ON
GO
-- Will succeed.
CREATE TABLE "select" ("identity" int IDENTITY, "order" int)
GO
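A short sketch of the literal-string behaviour described above - with the setting OFF a
double-quoted value is treated as a string literal, while with it ON the same text is
treated as an identifier and fails unless a matching column exists:
SET QUOTED_IDENTIFIER OFF
GO
SELECT "hello" AS Greeting -- returns the literal string hello
GO
SET QUOTED_IDENTIFIER ON
GO
SELECT "hello" AS Greeting -- fails: invalid column name 'hello'
GO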
4. What is the purpose of UPDATE STATISTICS?
Updates information about the distribution of key values for one or more statistics
groups (collections) in the specified table or indexed view.
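A minimal usage sketch against the pubs sample tables used elsewhere in these answers:
-- Refresh all statistics on the authors table
UPDATE STATISTICS authors
-- Refresh statistics on titles, scanning every row
UPDATE STATISTICS titles WITH FULLSCAN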
5. Fundamentals of Data warehousing & olap?
6. What do u mean by OLAP server? What is the difference between OLAP
and OLTP?
7. What is a tuple?
A tuple is a single row (an instance of data) in a relational table.
8. Services and user Accounts maintenance
9. sp_configure commands?
Displays or changes global configuration settings for the current server.
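For example (the 'show advanced options' step is only required for the advanced settings):
EXEC sp_configure -- list current configuration values
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 1
RECONFIGURE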
10. What is the basic functions for master, msdb, tempdb databases?
Microsoft® SQL Server 2000 systems have four system databases:
master - The master database records all of the system level information for a
SQL Server system. It records all login accounts and all system configuration
settings. master is the database that records the existence of all other
databases, including the location of the database files.
tempdb - tempdb holds all temporary tables and temporary stored procedures. It
also fills any other temporary storage needs such as work tables generated by
SQL Server. tempdb is re-created every time SQL Server is started so the
system starts with a clean copy of the database.
By default, tempdb autogrows as needed while SQL Server is running. If the size
defined for tempdb is small, part of your system processing load may be taken
up with autogrowing tempdb to the size needed to support your workload each
time you restart SQL Server. You can avoid this overhead by using ALTER
DATABASE to increase the size of tempdb.
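A sketch of that ALTER DATABASE approach; tempdev is assumed to be the logical name
of the tempdb primary data file (the default - check sp_helpdb tempdb if it has been
renamed):
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 200MB) -- pre-size so it need not autogrow after each restart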
model - The model database is used as the template for all databases created
on a system. When a CREATE DATABASE statement is issued, the first part of
the database is created by copying in the contents of the model database, then
the remainder of the new database is filled with empty pages. Because tempdb
is created every time SQL Server is started, the model database must always
exist on a SQL Server system.
msdb - The msdb database is used by SQL Server Agent for scheduling alerts
and jobs, and recording operators.
11. What are sequence diagrams? What you will get out of this sequence
diagrams?
Sequence diagrams document the interactions between classes to achieve a
result, such as a use case. Because UML is designed for object-oriented
programming, these communications between classes are known as messages.
The sequence diagram lists objects horizontally, and time vertically, and models
these messages over time.
12. What are the new features of SQL 2000 than SQL 7? What are the new
datatypes in sql?
XML Support - The relational database engine can return data as Extensible
Markup Language (XML) documents. Additionally, XML can also be used to
insert, update, and delete values in the database (for example, FOR XML RAW
returns the result set as XML).
User-Defined Functions - The programmability of Transact-SQL can be extended
by creating your own Transact-SQL functions. A user-defined function can return
either a scalar value or a table.
Indexed Views - Indexed views can significantly improve the performance of an
application where queries frequently perform certain joins or aggregations. An
indexed view allows indexes to be created on views, where the result set of the
view is stored and indexed in the database.
New Data Types - SQL Server 2000 introduces three new data types. bigint is an
8-byte integer type. sql_variant is a type that allows the storage of data values
of different data types. table is a type that allows applications to store results
temporarily for later use. It is supported for variables, and as the return type for
user-defined functions.
INSTEAD OF and AFTER Triggers - INSTEAD OF triggers are executed instead
of the triggering action (for example, INSERT, UPDATE, DELETE). They can also
be defined on views, in which case they greatly extend the types of updates a
view can support. AFTER triggers fire after the triggering action. SQL Server
2000 introduces the ability to specify which AFTER triggers fire first and last.
Multiple Instances of SQL Server - SQL Server 2000 supports running multiple
instances of the relational database engine on the same computer. Each
computer can run one instance of the relational database engine from SQL
Server version 6.5 or 7.0, along with one or more instances of the database
engine from SQL Server 2000. Each instance has its own set of system and user
databases.
Index Enhancements - You can now create indexes on computed columns. You
can specify whether indexes are built in ascending or descending order, and if
the database engine should use parallel scanning and sorting during index
creation.
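Two of the features above in miniature - a scalar user-defined function and a FOR XML
query; the function name is illustrative:
CREATE FUNCTION dbo.fn_FullName (@first varchar(40), @last varchar(40))
RETURNS varchar(81)
AS
BEGIN
    RETURN @first + ' ' + @last
END
GO
-- Return a result set as XML
SELECT au_id, au_lname, au_fname
FROM authors
FOR XML RAW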
13. How do we open SQL Server in single user mode?
Start SQL Server with the -m startup option, for example by running sqlservr.exe -m
from the command prompt, or by adding -m to the service's startup parameters.
NEW
8. @@IDENTITY ?
Ans: Returns the last identity value inserted in the current session (across all scopes;
SCOPE_IDENTITY() restricts it to the current scope).
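A quick sketch using a throwaway table:
CREATE TABLE #Ident (id int IDENTITY(1,1), val varchar(10))
INSERT INTO #Ident (val) VALUES ('first')
SELECT @@IDENTITY AS LastIdentity -- last identity value generated in this session
SELECT SCOPE_IDENTITY() AS LastInScope -- limited to the current scope (SQL Server 2000)
DROP TABLE #Ident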
9. If a job is fail in sql server, how do find what went wrong?
10. Have you used Error handling in DTS?
http://www.smartdraw.com/resources/centers/software/erd.htm ER Diagram
1. How would you describe your personality? "I'm pretty even-tempered. I enjoy being
part of a team and feel comfortable both making decisions and following directions."
2. Tell me about yourself and your past experience? "I have been working in the Information
Technology industry for the last 4-5 years. I have had the opportunity to work in teams and
independently, and I am comfortable in both environments. I have developed
Client/Server and Intranet applications, involving full life cycle, which includes
Requirement Phase, Analysis/Design, Development, Unit Testing, Integrating Testing,
Customer/User Testing, and Rollout/Implementation. I have worked on various tools in
my career, which includes MS Access, MS SQL Server, SQL, VBA, JavaScript, VB
Script, COM/DCOM, ADO, Active X, ASP, HTML, DHTML, XML, XSL, CSS, Visual
InterDev, Visual Basic, FrontPage, Dreamweaver, Flash, Visual Source Safe and
Adobe PhotoShop. I have always met the project deadlines, for which I had to work
late hours and weekends. For the past two years, I have been working as a Web
Developer, where I have been involved from the beginning of the project, i.e., the
requirement-gathering phase. Online Job Search is a successful multi-tier web-based
application, which uses MS SQL Server 7.0 as backend data store and Visual Basic
6.0 and ASP as front-end. It provides a user-friendly platform for Job Seekers as well
as Job Submitters, from across the globe to register, search and submit Vacancies in
their local areas. We had meetings with the Customers/Users to get all the required
information. This application provides various functionalities (forgotten password, mailing
list, etc.) and different security levels for Users and Administrators. My strengths
are my client-relationship management skills, and my leadership ability."
3. What are your strengths and weaknesses? Pick a weakness that could also be
considered a strength. "Sometimes I'm overly concerned with doing a good job and my
boss tells me I drive myself too hard." Then mention your strengths: your ability to get
the job done efficiently and on time; your pride in your work. Or also you can tell, My
strength is my flexibility. As director of operations at a startup company, I've had to
deal with and handle changes and new policies constantly. As far as weaknesses, I
really enjoy my work, and sometimes I put in too much time on some projects. But by
being aware of my tendency, I have learned to work smarter.
4. Why are you leaving your current job? Forget about the fact that you hate your boss
and your co-workers drive you crazy. Instead, say, "I'm ready to take on more
responsibilities and learn more, but the opportunities at my current job are limited. Or
I've set some goals for myself and my career, and unfortunately I'm at a standstill in
my current situation. I have begun to explore options available before I spend too
much time in a job where I can't advance. My goal is to continue to take on new
responsibilities and be a key contributor to the success of an online venture."
5. Why do you change jobs so often? "Mainly to learn and advance. I understand there's
a lot of room for growth here, and I hope to stay a long time if I'm offered the job."
6. Did you get along with your previous boss? If you didn't, and know you can't use her
as a reference, be candid but not bitter or complaining. "She's very professional and
taught me a lot, and I'm grateful for that. But I would have liked more responsibilities
than I was offered."
7. How would your boss describe you and your work style? "First, she'd say I have a lot
of initiative - I see a big picture and do what has to be done to achieve results.
Secondly, that I have business savvy - I know the business side as well as the
technical side. And thirdly, I have a high work ethic - if I say I'm going to do something,
I do it."
8. Why didn't you go further in school? "At the time, earning a living was more important.
But I'm thinking of furthering my education now."
9. What do you do in your spare time? Say you keep up with current events and have
been reading a best-selling business book (do it). Talk about any community activities
you're involved in, but stress that those commitments won't interfere with work.
http://www.ezsoftech.com/interviewtips3.html
180 action verbs and phrases that may be useful when writing a resume
Accomplished Achieved Acted Adapted Addressed
Administered Advanced Advised Allocated Analyzed
Appraised Approved Arranged Assembled Assigned
Assisted Attained Audited Authored Automated
Balanced Budgeted Built Calculated Catalogued
Chaired Clarified Classified Coached Collected
Compiled Completed Composed Computed Conceptualized
Conducted Consolidated Contained Contracted Contributed
Controlled Coordinated Corresponded Counseled Created
Critiqued Cut Decreased Delegated Demonstrated
Designed Directed Developed Devised Diagnosed
Directed Dispatched Distinguished Diversified Drafted
Edited Educated Eliminated Enabled Encouraged
Engineered Enlisted Established Evaluated Examined
Executed Expanded Expedited Explained Extracted
Fabricated Facilitated Familiarized Fashioned Focused
Forecast Formulated Founded Generated Guided
Headed up Identified Illustrated Implemented Improved
Increased Indoctrinated Influenced Informed Initiated
Innovated Inspected Instituted Instructed Integrated
Interpreted Interviewed Introduced Invented Investigated
Launched Lectured Led Maintained Managed
Marketed Mediated Moderated Monitored Motivated
Negotiated Operated Organized Originated Overhauled
Oversaw Performed Persuaded Planned Prepared
Presented Prioritized Processed Produced Programmed
Projected Promoted Provided Publicized Published
Purchased Recommended Reconciled Recorded Recruited
Reduced Referred Regulated Rehabilitated Remodeled
Repaired Represented Researched Restored Restructured
Retrieved Reversed Reviewed Revitalized Saved
Scheduled Schooled Screened Set Shaped
Skilled Solidified Solved Specified Stimulated
Streamlined Strengthened Summarized Supervised Surveyed
Systemized Tabulated Taught Trained Translated
Traveled Trimmed Upgraded Validated Worked
50 Self-Descriptive Words: Provided by Raza Abbas
Active adaptable aggressive alert ambitious
Analytical attentive broad-minded conscientious consistent
constructive creative dependable determined diplomatic
Disciplined discreet economical efficient energetic
enterprising enthusiastic extroverted fair forceful
imaginative independent logical loyal mature
methodical objective optimistic perceptive personable
Pleasant positive practical productive realistic
Reliable resourceful receptive self-reliant sense of humor
Sincere sophisticated systematic tactful talented
A Test Specification defines exactly what tests will be performed and what their scope
and objectives will be. A Test Specification is produced as the first step in implementing a
Test Plan, prior to the onset of manual testing and/or automated test suite development. It
provides a repeatable, comprehensive definition of a testing campaign
1. Context Sensitive:
Records your operations in terms of the GUI objects in your application. WinRunner
identifies each object you click (window, menu, list or button), and the type of operation
you perform (press, enable, move or select).
2. Analog:
WinRunner records the exact co-ordinates travelled by the mouse, as well as
mouse clicks and keyboard inputs.
7. What are the check points in winrunner and Types of check points
Checkpoints allow you to compare the current behavior of the application being tested to
its behavior in an earlier version.
GUI Checkpoints verify information about GUI objects.
Bitmap Checkpoints take a "snapshot" of a window or area of application and compare
this to an image captured in an earlier version.
Text Checkpoints read text in GUI objects and in bitmaps and enable you to verify their
contents.
Database Checkpoints check the contents and number of rows and columns of a result
set, which is based on a query you create on your database.
8. What is TSL
When you record a test, a test script is generated in Mercury Interactive's Test
Script Language (TSL). Each TSL statement in the test script represents keyboard and/or mouse
input to the application being tested.
TSL is a C-like programming language designed for creating test scripts. It
combines functions developed specifically for testing with general purpose programming
language features such as variables, control-flow statements, arrays and user defined
functions. You can enhance a recorded test script simply by typing programming
elements into the test window.
Integration testing - An integration test verifies that all the parts of an application
"Integrate" together or work as expected together. This is important because after all the units are
tested individually we need to ensure that they are tested progressively.
22. As you said that you are leading a team of 2 members, how will you motivate the team?
Point out their plus points, even if they fail to complete activities. Tell them how to meet
commitments in future. Keep monitoring them, analyse, and give feedback.
23. If you need to do regression testing n number of times, do you think doing the same work
is boring?
25. Tell about Testing methodologies and their entry and exit criteria
26. How will you test a telephone set
27. How do you plan testing for a .Net project?
28. How did you manage testing in previous applications?
29. What is test estimation?
30. What are the possible criteria for testing an Indian ATM machine?
31. How do you test a remote control? How long would you take to test it?
32. What is a business requirement and how is it different from a functional specification?
33. How do you do estimates for a test plan?
34. How do you prepare an effort plan
35. What is software validation matrix
36. What is traceability matrix
37. What is negative testing and what is a boundary testing?
38. What is com and how is it different from business logic?
39. What’s cyclomatic complexity?
40. What’s CMM and why (Accenture became CMM level 5 by Dec 25)? What’s a KPA? What are the levels?
41. What’s the V-model?
42. Inner join, outer join, self join
43. How will you test a coffee machine?
Ad Hoc Testing. Testing carried out using no recognised test case design technique. [BCS]
Alpha Testing Testing of a software product or system conducted at the developer's site
by the customer.
Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about
the relationship between program variables into the program code. The truth of the
assertions is determined as the program executes.
Automated Testing Software testing which is assisted with software technology that
does not require operator (tester) input, analysis, or evaluation.
Background testing is the execution of normal functional testing while the SUT is
exercised by a realistic work load. This work load is being processed "in the background"
as far as the functional testing is concerned. [ Load Testing Terminology by Scott
Stirling ]
Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion,
elision. [B. Beizer, 1990], defect, issue, problem
Beta Testing. Testing conducted at one or more customer sites by the end-user of a
delivered software product or system.
Big-bang testing Integration testing where no incremental testing takes place prior to all
the system's components being combined to form the system.[BCS]
Black box testing. A testing method where the application under test is viewed as a black
box and the internal behavior of the program is completely ignored. Testing occurs based
upon the external specifications. Also known as behavioral testing, since only the external
behaviors of the program are evaluated and analyzed.
Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that
it focuses on "corner cases" or values that are usually out of range as defined by the
specification. This means that if a function expects all values in the range of negative 100 to
positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to
derive the value often used as a technique for stress, load or volume testing. This type of
validation is usually performed after positive functional validation has completed
(successfully) using requirements specifications and user documentation.
Breadth test. - A test suite that exercises the full scope of a system from a top-down
perspective, but does not test any aspect in detail [Dorothy Graham, 1999]
Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output
domains are partitioned into classes and analysis is performed to determine which input
classes cause which effect. A minimal set of inputs is chosen which will cover the entire
effect set. (2)A systematic method of generating test cases representing combinations of
conditions. See: testing, functional.[G. Myers]
Clean test. A test whose primary purpose is validation; that is, tests designed to
demonstrate the software`s correct working.(syn. positive test)[B. Beizer 1995]
Code Inspection. A manual [formal] testing [error detection] technique where the
programmer reads source code, statement by statement, to a group who ask questions
analyzing the program logic, analyzing the code with respect to a checklist of historically
common programming errors, and analyzing its compliance with coding standards.
Contrast with code audit, code review, code walkthrough. This technique can also be
applied to other software and configuration items. [G.Myers/NBS] Syn: Fagan Inspection
Code Walkthrough. A manual testing [error detection] technique where program [source
code] logic [structure] is traced manually [mentally] by a group with a small set of test
cases, while the state of program variables is manually monitored, to analyze the
programmer's logic and assumptions.[G.Myers/NBS] Contrast with code audit, code
inspection, code review.
Coexistence Testing. Coexistence isn’t enough. It also depends on load order, how virtual
space is mapped at the moment, hardware and software configurations, and the history of
what took place hours or days before. It’s probably an exponentially hard problem rather
than a square-law problem. [from Quality Is Not The Goal. By Boris Beizer, Ph. D.]
Compatibility Testing. The process of determining the ability of two or more systems to
exchange information. In a situation where the developed software replaces an already
working program, an investigation should be conducted to assess possible comparability
problems between the new software and other programs or systems.
Composability testing –testing the ability of the interface to let users do more complex
tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck,
‘Easy’ and other lies, eWEEK April 28, 2003]
Condition Coverage. A test coverage criteria requiring enough test cases such that each
condition in a decision takes on all possible outcomes at least once, and each point of
entry to a program or subroutine is invoked at least once. Contrast with branch coverage,
decision coverage, multiple condition coverage, path coverage, statement
coverage.[G.Myers]
CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and
deletion. [William E. Lewis, 2000]
Data flow testing Testing in which test cases are designed based on variable usage within
the code.[BCS]
Database testing. Check the integrity of database field values. [William E. Lewis, 2000]
Defect Also called a fault or a bug, a defect is an incorrect part of code that is caused by
an error. An error of commission causes a defect of wrong or extra code. An error of
omission results in a defect of missing code. A defect may cause one or more
failures.[Robert M. Poston, 1996.]
Depth test. A test case, that exercises some part of a system to a significant level of
detail. [Dorothy Graham, 1999]
Decision Coverage. A test coverage criteria requiring enough test cases such that each
decision has a true and false result at least once, and that each statement is executed at
least once. Syn: branch coverage. Contrast with condition coverage, multiple condition
coverage, path coverage, statement coverage.[G.Myers]
Dynamic testing. Testing, based on specific test cases, by execution of the test object or
running programs [Tim Koomen, 1999]
End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.
Errors: The amount by which a result is incorrect. Mistakes are usually a result of a
human action. Human mistakes (errors) often result in faults contained in the source
code, specification, documentation, or other product deliverable. Once a fault is
encountered, the end result will be a program failure. The failure usually has some
margin of error, either high, medium, or low.
Error guessing. A test case design technique where the experience of the tester is used to
postulate what faults exist, and to design tests specially to expose them [from BS7925-1]
Error seeding. The purposeful introduction of faults into a program to test effectiveness
of a test suite or other quality assurance program. [R. V. Binder, 1999]
Follow-up testing. We vary a test that yielded a less-than-spectacular failure. We vary the
operation, data, or environment, asking whether the underlying fault in the code can yield
a more serious failure or a failure under a broader range of circumstances.[Measuring the
Effectiveness of Software Testers,Cem Kaner, STAR East 2003]
Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures
that have been reviewed and approved by a customer, user, or designated level of
management. Antonym: informal testing.
Free Form Testing. Ad hoc or brainstorming using intuition to define test cases.
[William E. Lewis, 2000]
Functional Decomposition Approach. An automation method in which the test cases are
reduced to fundamental tasks, navigation, functional tests, data verification, and return
navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]
Functional testing Application of test data derived from the specified functional
requirements without regard to the final program structure. Also known as black-box
testing.
Gray box testing Tests involving inputs and outputs, but test design is educated by
information about the code or the program operation of a kind that would normally be out
of scope of view of the tester.[Cem Kaner]
Gray box testing Test designed based on the knowledge of algorithm, internal states,
architectures, or other high-level descriptions of the program behavior. [Doug Hoffman]
Gray box testing Examines the activity of back-end components during test case
execution. Two types of problems that can be encountered during gray-box testing are:
- A component encounters a failure of some kind, causing the operation to be aborted.
The user interface will typically indicate that an error has occurred.
- The test executes in full, but the content of the results is incorrect. Somewhere in the
system, a component processed data incorrectly, causing the error in the results.
[Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]
High-level tests. These tests involve testing whole, complete products [Kit, 1995]
Integration Testing. Testing conducted after unit and feature testing. The intent is to
expose faults in the interactions between software modules and functions. Either top-
down or bottom-up approaches can be used. A bottom-up method is preferred, since it
leads to earlier unit testing (step-level integration). This method is contrary to the big-
bang approach where all source modules are combined and tested in one step. The big-
bang approach to integration should be discouraged.
Interface Tests. Programs that provide test facilities for external interfaces and function
calls. Simulation is often used to test external interfaces that currently may not be
available for testing or are difficult to control. For example, hardware resources such as
hard disks and memory may be difficult to control. Therefore, simulation can provide the
characteristics or behaviors for a specific function.
Internationalization testing (I18N) - testing related to handling foreign text and data
within the program. This would include sorting, importing and exporting test and data,
correct handling of currency and date and time formats, string parsing, upper and lower
case handling and so forth. [Clinton De Young, 2003].
Latent bug A bug that has been dormant (unobserved) in two or more releases. [R. V.
Binder, 1999]
Lateral testing. A test design technique based on lateral thinking principals, to identify
faults. [Dorothy Graham, 1999]
Load testing Testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails.
Load-stress test. A test designed to determine how heavy a load the application can
handle.
Load-stability test. A test designed to determine whether a Web application will remain
serviceable over an extended time span.
Load-isolation test. The workload for this type of test is designed to contain only
the subset of test cases that caused the problem in previous testing.
Monkey Testing (smart monkey testing). Inputs are generated from probability
distributions that reflect actual expected usage statistics -- e.g., from user profiles. There
are different levels of IQ in smart monkey testing. In the simplest, each input is
considered independent of the other inputs. That is, a given test requires an input vector
with five components. In low IQ testing, these would be generated independently. In high
IQ monkey testing, the correlation (e.g., the covariance) between these input distribution
is taken into account. In all branches of smart monkey testing, the input is considered as a
single event.
Mutation testing. A testing strategy where small variations to a program are inserted (a
mutant), followed by execution of an existing test suite. If the test suite detects the
mutant, the mutant is "retired." If undetected, the test suite must be revised. [R.
V. Binder, 1999]
Multiple Condition Coverage. A test coverage criteria which requires enough test cases
such that all possible combinations of condition outcomes in each decision, and all points
of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition
coverage, decision coverage, path coverage, statement coverage.
Negative test. A test whose primary purpose is falsification; that is tests designed to
break the software[B.Beizer1995]
Orthogonal array testing: A technique that can be used to reduce the number of combinations
and provide maximum coverage with a minimum number of test cases. Pay attention to the fact
that it is an old and proven technique. The OAT was introduced for the first time by
Plackett and Burman in 1946 and was implemented by G. Taguchi, 1987
Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the
actual outcomes of the software under test [fromBS7925-1]
Parallel Testing Testing a new or an alternate data processing system with the same
source data that is used in another system. The other system is considered as the standard
of comparison. Syn: parallel run.[ISO]
Penetration testing The process of attacking a host from outside to ascertain remote
security vulnerabilities.
Performance Testing. Testing conducted to evaluate the compliance of a system or
component with specific performance requirements [BS7925-1]
Performance testing can be undertaken to: 1) show that the system meets specified
performance objectives, 2) tune the system, 3) determine the factors in hardware or
software that limit the system's performance, and 4) project the system's future load-
handling capacity in order to schedule its replacements" [Software System Testing and
Quality Assurance. Beizer, 1984, p. 256]
Prior Defect History Testing. Test cases are created or rerun for every defect found in
prior tests of the system. [William E. Lewis, 2000]
Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for
the consumer, to demonstrate that the software meets its specified requirements. See:
acceptance testing.
Quality Assurance (QA) Consists of planning, coordinating and other strategic activities
associated with measuring product quality against external requirements and
specifications (process-related activities).
Quality Control (QC) Consists of monitoring, controlling and other tactical activities
associated with the measurement of product quality goals.
Race condition defect. Many concurrent defects result from data-race conditions. A data-
race condition may be defined as two accesses to a shared variable, at least one of which
is a write, with no mechanism used by either to prevent simultaneous access. However,
not all race conditions are defects.
Recovery testing. Testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
Regression Testing. Testing conducted for the purpose of evaluating whether or not a
change to the system (all CM items) has introduced a new failure. Regression testing is
often accomplished through the construction, execution and analysis of product and
system tests.
Regression Testing. - testing that is performed after making a functional improvement or
repair to the program. Its purpose is to determine if the change has regressed other
aspects of the program [Glenford J.Myers, 1979]
Reliability testing. Verify the probability of failure free operation of a computer program
in a specified environment for a specified time.
Reliability of an object is defined as the probability that it will not fail under specified
conditions, over a period of time. The specified conditions are usually taken to be fixed,
while the time is taken as an independent variable. Thus reliability is often written R(t) as
a function of time t, the probability that the object will not fail within time t.
Any computer user would probably agree that most software is flawed, and the evidence
for this is that it does fail. All software flaws are designed in -- the software does not
break, rather it was always broken. But unless conditions are right to excite the flaw, it
will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet.
Ph.D.]
Range Testing. For each input identifies the range over which the system behavior
should be the same. [William E. Lewis, 2000]
Risk management. An organized process to identify what can go wrong, to quantify and
assess associated risks, and to implement/control the appropriate approach for preventing
or handling each risk identified.
Robust test. A test that compares a small amount of information, so that unexpected side
effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]
Sanity Testing - typically an initial testing effort to determine if a new software version
is performing well enough to accept it for a major testing effort. For example, if the new
software is often crashing systems, bogging down systems to a crawl, or destroying
databases, the software may not be in a 'sane' enough condition to warrant further testing
in its current state.
Sensitive test. A test that compares a large amount of information, so that it is more
likely to detect unexpected differences between the actual and expected outcomes of the
test. [Dorothy Graham, 1999]
Smoke test describes an initial set of tests that determine if a new version of an application
performs well enough for further testing.[Louise Tamres, 2002]
Spike testing. Testing performance or recovery behavior when the system under test
(SUT) is stressed with a sudden and sharp increase in load; should be considered a type of
load test. [Load Testing Terminology by Scott Stirling]
State-based testing Testing with test cases developed by modeling the system under test
as a state machine [R. V. Binder, 1999]
State Transition Testing. Technique in which the states of a system are first identified
and then test cases are written to test the triggers that cause a transition from one state
to another. [William E. Lewis, 2000]
Static testing. Source code analysis. Analysis of source code to expose potential defects.
Statistical testing. A test case design technique in which a model is used of the statistical
distribution of the input to construct representative test cases. [BCS]
Stealth bug. A bug that removes information useful for its diagnosis and correction. [R.
V. Binder, 1999]
Storage test. Study how memory and space is used by the program, either in resident
memory or on disk. If there are limits of these amounts, storage tests attempt to prove that
the program will exceed them. [Cem Kaner, 1999, p55]
Stress / Load / Volume test. Tests that provide a high degree of activity, either using
boundary conditions as inputs or multiple copies of a program executing in parallel as
examples.
Structural Testing. (1)(IEEE) Testing that takes into account the internal mechanism
[structure] of a system or component. Types include branch testing, path testing,
statement testing. (2) Testing to insure each program statement is made to execute during
testing and that each program statement performs its intended function. Contrast with
functional testing. Syn: white-box testing, glass-box testing, logic driven testing.
Table testing. Test access, security, and data integrity of table entries. [William E. Lewis,
2000]
Test Case. A set of test inputs, executions, and expected results developed for a particular
objective.
Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]
Test Coverage The degree to which a given test or set of tests addresses all specified test
cases for a given system or component.
Test Criteria. Decision rules used to determine whether software item or software
feature passes or fails a test.
Test data. The actual (set of) values used in the test or that are necessary to execute the
test. [Daniel J. Mosley, 2002]
Test Documentation. (IEEE) Documentation describing plans for, or results of, the
testing of a system or component, Types include test case specification, test incident
report, test log, test plan, test procedure, test report.
Test Driver A software module or application used to invoke a test item and, often,
provide test inputs (data), control and monitor execution. A test driver automates the
execution of test procedures.
Test Harness A system of test drivers and other tools to support test execution (e.g.,
stubs, executable test cases, and test drivers). See: test driver.
Test Log A chronological record of all relevant details about the execution of a
test.[IEEE]
Test Plan. A high-level document that defines a testing project so that it can be properly
measured and controlled. It defines the test strategy and organized elements of the test
life cycle, including resource requirements, project schedule, and test requirements
Test Procedure. A document, providing detailed instructions for the [manual] execution
of one or more test cases. [BS7925-1] Often called - a manual test script.
Test strategy. Describes the general approach and objectives of the test activities. [Daniel
J. Mosley, 2002]
Test Stub A dummy software component or object used (during development and
testing) to simulate the behaviour of a real component. The stub typically provides test
output.
Test Suites A test suite consists of multiple test cases (procedures and data) that are
combined and often managed by a test harness.
Testability. Attributes of software that bear on the effort needed for validating the
modified software [ISO 8402]
Testing. The execution of tests with the intent of proving that the system and
application under test does or does not perform according to the requirements
specification.
Unit Testing. Testing performed to isolate and expose faults and failures as soon as the
source code is available, regardless of the external interfaces that may be required.
Oftentimes, the detailed design and requirements documents are used as a basis to
compare how and what the unit is able to perform. White and black-box testing methods
are combined during unit testing.
Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer.
Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-
1]
Walkthrough. In the most usual form of the term, a walkthrough is a step by step simulation of
the execution of a procedure, as when walking through code line by line, with an
imagined set of inputs. The term has been extended to the review of material that is not
procedural, such as data descriptions, reference manuals, specifications, etc.
White Box Testing (glass-box). Testing done under a structural testing strategy that
requires complete access to the object's structure - that is, the source code. [B. Beizer, 1995
p8]
Why do so few projects succeed? Despite the decades of increasingly complex attempts to
manage projects, far too many managers overlook the 10 Unbreakable Rules for Project
Success. As outlined below, these common sense guidelines hold the key to increasing your
success rate and delivering greater consistency across your project's lifecycle.
Typically, projects are born from one of two notions: (1) there is an external force such as a
market demand or opportunity, or (2) there is an internal force such as operational inefficiencies
or manufacturing throughput problems. Either reason requires a project focused work effort. But
from a project perspective you need to know why you are doing what you are doing. This has
impact on creating metrics, identifying stakeholders and (possibly most important) creating a
comprehensive plan for execution. Is the project necessary? Does it align with the organization's
purpose and achieve major goals and objectives for the firm? Can you see the positive change it
has on the organization and its customers? If yes, proceed.
There has not been a single project that has succeeded under the guidance of the dishonest and
yours will not be the first. Trust in your team. Be proactively upfront and honest.
Remember Louis Pasteur: "Chance favours only the prepared mind," so forever be prepared.
Projects are too dynamic to depend on luck and chance to guide your way. Prepare to fail.
Prepare to be surprised. Prepare for the "what-ifs" you are sure to face throughout the lifecycle of
your project. And, prepare to succeed.
This theory still holds true today and is applicable not only from country to country as Smith
argued, but also from organization to organization. The point is, focus on your core competencies
and outsource or partner as much as possible with experts who possess a superior advantage.
Some enterprises follow a predetermined methodology for all projects. Many enterprises do not. If
you do not have a project methodology, know there is no easier way to fail than by just winging it.
The next easiest way to fail is to manage every project in a different way. With this approach, you
are sure to achieve lackluster performance and retain zero project-based knowledge. Find or
create a methodology that works for your organization's business and within your culture, then
manage and refine it as you grow.
Your plan needs to break down each work effort, allocate appropriate time for full completion of
each task and assign an owner responsible for successfully accomplishing the task. Please note:
You, as a project manager, need to understand that individuals responsible for task completion
must have the knowledge, skill and tools to achieve their tasks.
Knowing the who, what and when is not confined to only the project team. What about the
stakeholders? They have roles and responsibilities too, and therefore need coordinating. Whether
they are acknowledgers, advisers, critiquers or vetoers/approvers, you have to coordinate their
efforts and make sure they understand what they need to do, why and when.
Projects are fluid, dynamic, real. Hence, unexpected events are sure to arise and divert the
team from its original plan. These occurrences do not mean the project outcomes are destined to
be lived out only in the theories of blue skies. Rather, they are mere occurrences that must be
addressed by intelligent people who can navigate precisely through problems and issues.
To keep the team informed of project work underway and forthcoming events, sponsor
regular project meetings and share regular project status reports and/or scorecards.
Remember, projects do not always remain on course. To that end, not all communication
is favorable. If bad things happen, communicate the bad and reinforce the risk/issue
mitigation plan that is in place. Do not shy away from any needed communication, but
know when to stop talking and get back to acting upon the information just shared.
All projects ride on three pillars of strength: people, resources and knowledge. When you
have professional personnel, enough time and money, and the right information, quality
results ensue from each project engaged. However, if you have too few or too many of
these, you will struggle (or worse).
Further, projects tied to a key organizational goal or major objective seem to have a
greater chance of success. If your project is not tied to an organizational goal, refer
to Rule No. 2 and make sure you understand why you are involved with the project.
A positive attitude is a must. Project leaders and team members must believe a project
can succeed or it never will. As well, the organization must be set up to succeed, with
every project underway addressing the goals of the firm. As each project succeeds it
reinforces the organization's goals and strengthens its chances for success.
This rule is not meant to be a way out of difficult projects. Again, projects are supposed
to succeed. But this rule is here to get you to know what to look for in a failing project
and be able to respond quickly with the mitigation plan. Projects fail for many reasons:
lack of commitment from senior management, no clear vision, deliverables are not
defined, no plan for success let alone quantified risk mitigation, decisions are made based
widely on assumptions rather than business data and fact, stakeholders are passively
involved, no understanding of a work breakdown structure and poor communication.
If your project seems to be slipping away, review this list and enact change. Get back on
course. Deploy a sense of urgency and strive to succeed! If, despite your valiant efforts,
the project is beyond repair, learn from it. Glean the invaluable knowledge of failure and
next time you can avoid these missteps on your way to success.
Rule No. 9: Know when the project is over.
At the end of each phase and at key milestones throughout the project's lifecycle, the
project is atop a fulcrum and is poised to continue or not. It is at each of these major
points that the project manager and other sponsors need to pay close attention to the
metrics and dynamics of the project. Are the goals being met? Has the environment or
reasons for the project changed? Can we still succeed? It is at these points these questions
must be answered. If all is well, the project goes on. If there are concerns, the project may
be better off coming to a brisk halt.
Do not be afraid to stop a project if the reasoning for continuing is no longer sound. It is
far better to terminate a project early than to push through to the end with a product or
output that satisfies no one and has cost the organization dearly. And, this says nothing
about what it does to the project teams' psyche. If it is not going to work, kill it. Your time
and money are better spent on some greater cause.
The success achieved from project management is more than simply enacting a
methodology standard or carrying out a set of template-driven exercises. Success, rather,
is achieved through the intelligent application of sound principles guided by experienced
project professionals. If this sounds like common business sense, it is. As measured, all
successful projects have similar attributes for us all to learn from.
The .NET Framework FAQ was first posted in July 2000, and is regularly updated. It covers the
fundamentals of the .NET Framework including assemblies, garbage collection, security, interop with
COM and remoting. Newcomers to the .NET framework may wish to read the FAQ from top to bottom
as a tutorial. More experienced practitioners may prefer to consult the contents list for topics of
particular interest.
This FAQ was inspired by discussions on the DOTNET mailing list. The list has now been split into
several DOTNET-X lists - for details see http://discuss.develop.com/.
Christophe Lauer has translated the FAQ into French. Royal has translated the FAQ into Chinese.
If you like this FAQ, you might be interested in my C# FAQ for C++ Programmers.
Latest updates:
27-Jan-2005: Rewritten Should I implement Finalize on my class? Should I implement IDisposable?
25-Jan-2005: What's new in the .NET 2.0 class library?
21-Jan-2005: What size is a .NET object?
18-Jan-2005: When do I need to call GC.KeepAlive?
13-Jan-2005: What is the lapsed listener problem?
08-Jan-2005: What is the difference between an event and a delegate?
06-Jan-2005: New section on .NET 2.0
Contents
• 1. Introduction
o 1.1 What is .NET?
o 1.4 What operating systems does the .NET Framework run on?
• 2. Terminology
o 2.1 What is the CLI? Is it the same as the CLR?
o 2.2 What is the CTS, and how does it relate to the CLS?
• 3. Assemblies
o 3.1 What is an assembly?
o 3.3 What is the difference between a private assembly and a shared assembly?
• 4. Application Domains
o 4.1 What is an application domain?
• 5. Garbage Collection
o 5.1 What is garbage collection?
o 5.2 Is it true that objects don't always get destroyed immediately when the last
reference goes away?
o 5.3 Why doesn't the .NET runtime offer deterministic destruction?
o 5.7 How can I find out what the garbage collector is doing?
• 6. Serialization
o 6.1 What is serialization?
o 6.2 Does the .NET Framework have in-built support for serialization?
o 6.7 XmlSerializer is throwing a generic "There was an error reflecting MyClass" error.
How do I find out what the problem is?
o 6.8 Why am I getting an InvalidOperationException when I serialize an ArrayList?
• 7. Attributes
o 7.1 What are attributes?
o 8.7 I'm having some trouble with CAS. How can I troubleshoot the problem?
• 11. Miscellaneous
o 11.1 How does .NET remoting work?
o 11.2 How can I get at the Win32 API from a .NET program?
13.1.4 How do I know when my thread pool work item has completed?
o 14.3 Blogs
1. Introduction
.NET is a general-purpose software development platform, similar to Java. At its core is a virtual
machine that turns intermediate language (IL) into machine code. High-level language compilers for
C#, VB.NET and C++ are provided to turn source code into IL. C# is a new programming language,
very similar to Java. An extensive class library is included, featuring all the functionality one might
expect from a contemporary development platform - Windows GUI development (Windows Forms),
database access (ADO.NET), web development (ASP.NET), web services, XML etc.
Bill Gates delivered a keynote at Forum 2000, held June 22, 2000, outlining the .NET 'vision'. The July
2000 PDC had a number of sessions on .NET technology, and delegates were given CDs containing a
pre-release version of the .NET framework/SDK and Visual Studio.NET.
The final version of the 1.0 SDK and runtime was made publicly available around 6pm PST on 15-Jan-
2002. At the same time, the final version of Visual Studio.NET was made available to MSDN
subscribers.
.NET 1.1 was released in April 2003 - it's mostly bug fixes for 1.0.
1.4 What operating systems does the .NET Framework run on?
The runtime supports Windows Server 2003, Windows XP, Windows 2000, NT4 SP6a and Windows
ME/98. Windows 95 is not supported. Some parts of the framework do not work on all platforms - for
example, ASP.NET is only supported on XP and Windows 2000/2003. Windows 98/ME cannot be used
for development.
IIS is not supported on Windows XP Home Edition, and so cannot be used to host ASP.NET. However,
the ASP.NET Web Matrix web server does run on XP Home.
The .NET Compact Framework is a version of the .NET Framework for mobile devices, running
Windows CE or Windows Mobile.
The Mono project has a version of the .NET Framework that runs on Linux.
• The .NET Framework SDK is free and includes command-line compilers for C++, C#, and
VB.NET and various other utilities to aid development.
• ASP.NET Web Matrix is a free ASP.NET development environment from Microsoft. As well as
a GUI development environment, the download includes a simple web server that can be
used instead of IIS to host ASP.NET apps. This opens up ASP.NET development to users of
Windows XP Home Edition, which cannot run IIS.
• Microsoft Visual C# .NET Standard 2003 is a cheap (around $100) version of Visual Studio
limited to one language and also with limited wizard support. For example, there's no wizard
support for class libraries or custom UI controls. Useful for beginners to learn with, or for
savvy developers who can work around the deficiencies in the supplied wizards. As well as
C#, there are VB.NET and C++ versions.
• Microsoft Visual Studio.NET Professional 2003. If you have a license for Visual Studio 6.0,
you can get the upgrade. You can also upgrade from VS.NET 2002 for a token $30. Visual
Studio.NET includes support for all the MS languages (C#, C++, VB.NET) and has extensive
wizard support.
At the top end of the price spectrum are the Visual Studio.NET 2003 Enterprise and Enterprise
Architect editions. These offer extra features such as Visual Sourcesafe (version control), and
performance and analysis tools. Check out the Visual Studio.NET Feature Comparison at
http://msdn.microsoft.com/vstudio/howtobuy/choosing.asp
I don't know what they were thinking. They certainly weren't thinking of people using search tools. It's
meaningless marketing nonsense - best not to think about it.
2. Terminology
The CLI (Common Language Infrastructure) is the definition of the fundamentals of the .NET
framework - the Common Type System (CTS), metadata, the Virtual Execution Environment (VES)
and its use of intermediate language (IL), and the support of multiple programming languages via the
Common Language Specification (CLS). The CLI is documented through ECMA - see
http://msdn.microsoft.com/net/ecma/ for more details.
The CLR (Common Language Runtime) is Microsoft's primary implementation of the CLI. Microsoft
also have a shared source implementation known as ROTOR, for educational purposes, as well as the
.NET Compact Framework for mobile devices. Non-Microsoft CLI implementations include Mono and
DotGNU Portable.NET.
2.2 What is the CTS, and how does it relate to the CLS?
CTS = Common Type System. This is the full range of types that the .NET runtime understands. Not all
.NET languages support all the types in the CTS.
CLS = Common Language Specification. This is a subset of the CTS which all .NET languages are
expected to support. The idea is that any program which uses CLS-compliant types can interoperate
with any .NET program written in any language. This interop is very fine-grained - for example a
VB.NET class can inherit from a C# class.
IL = Intermediate Language. Also known as MSIL (Microsoft Intermediate Language) or CIL (Common
Intermediate Language). All .NET source code (of any language) is compiled to IL during development.
The IL is then converted to machine code at the point where the software is installed, or (more
commonly) at run-time by a Just-In-Time (JIT) compiler.
C# is a new language designed by Microsoft to work with the .NET framework. In their "Introduction to
C#" whitepaper, Microsoft describe C# as follows:
"C# is a simple, modern, object oriented, and type-safe programming language derived from C and
C++. C# (pronounced “C sharp”) is firmly planted in the C and C++ family tree of languages, and will
immediately be familiar to C and C++ programmers. C# aims to combine the high productivity of Visual
Basic and the raw power of C++."
Substitute 'Java' for 'C#' in the quote above, and you'll see that the statement still works pretty well :-).
If you are a C++ programmer, you might like to check out my C# FAQ.
The term 'managed' is the cause of much confusion. It is used in various places within .NET, meaning
slightly different things.
Managed code: The .NET framework provides several core run-time services to the programs that run
within it - for example exception handling and security. For these services to work, the code must
provide a minimum level of information to the runtime. Such code is called managed code.
Managed data: This is data that is allocated and freed by the .NET runtime's garbage collector.
Managed classes: This is usually referred to in the context of Managed Extensions (ME) for C++.
When using ME C++, a class can be marked with the __gc keyword. As the name suggests, this
means that the memory for instances of the class is managed by the garbage collector, but it also
means more than that. The class becomes a fully paid-up member of the .NET community with the
benefits and restrictions that brings. An example of a benefit is proper interop with classes written in
other languages - for example, a managed C++ class can inherit from a VB class. An example of a
restriction is that a managed class can only inherit from one base class.
2.6 What is reflection?
All .NET compilers produce metadata about the types defined in the modules they produce. This
metadata is packaged along with the module (modules in turn are packaged together in assemblies),
and can be accessed by a mechanism called reflection. The System.Reflection namespace contains
classes that can be used to interrogate the types for a module/assembly.
Using reflection to access .NET metadata is very similar to using ITypeLib/ITypeInfo to access type
library data in COM, and it is used for similar purposes - e.g. determining data type sizes for
marshaling data across context/process/machine boundaries.
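For example, here is a small sketch (the type being inspected is arbitrary) that uses System.Reflection to list the methods of a type and the types in the executing assembly:

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Interrogate a single type for its public methods.
        foreach( MethodInfo method in typeof(string).GetMethods() )
            Console.WriteLine( method.Name );

        // Interrogate an assembly for the types it contains.
        Assembly assembly = Assembly.GetExecutingAssembly();
        foreach( Type type in assembly.GetTypes() )
            Console.WriteLine( type.FullName );
    }
}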
3. Assemblies
An assembly is sometimes described as a logical .EXE or .DLL, and can be an application (with a main
entry point) or a library. An assembly consists of one or more files (dlls, exes, html files etc), and
represents a group of resources, type definitions, and implementations of those types. An assembly
may also contain references to other assemblies. These resources, types and references are
described in a block of data called a manifest. The manifest is part of the assembly, thus making the
assembly self-describing.
An important aspect of assemblies is that they are part of the identity of a type. The identity of a type is
the assembly that houses it combined with the type name. This means, for example, that if assembly A
exports a type called T, and assembly B exports a type called T, the .NET runtime sees these as two
completely different types. Furthermore, don't get confused between assemblies and namespaces -
namespaces are merely a hierarchical way of organising type names. To the runtime, type names are
type names, regardless of whether namespaces are used to organise the names. It's the assembly
plus the typename (regardless of whether the type name belongs to a namespace) that uniquely
identifies a type to the runtime.
Assemblies are also important in .NET with respect to security - many of the security restrictions are
enforced at the assembly boundary.
Finally, assemblies are the unit of versioning in .NET - more on this below.
The simplest way to produce an assembly is directly from a .NET compiler. For example, the following
C# program:
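For example, a trivial program such as this (the class name and message are arbitrary):

public class CTest
{
    public static void Main()
    {
        System.Console.WriteLine( "Hello from my assembly" );
    }
}

Compiling this with 'csc ctest.cs' produces ctest.exe, which is a single-file assembly.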
You can then view the contents of the assembly by running the "IL Disassembler" tool that comes with
the .NET SDK.
Alternatively you can compile your source into modules, and then combine the modules into an
assembly using the assembly linker (al.exe). For the C# compiler, the /target:module switch is used to
generate a module instead of an assembly.
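For example (the file names here are illustrative):

csc /target:module mylib.cs
al /target:library /out:mylib.dll mylib.netmodule

The first command produces mylib.netmodule; the second combines one or more modules into an assembly.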
3.3 What is the difference between a private assembly and a shared assembly?
• Location and visibility: A private assembly is normally used by a single application, and is
stored in the application's directory, or a sub-directory beneath. A shared assembly is normally
stored in the global assembly cache, which is a repository of assemblies maintained by the
.NET runtime. Shared assemblies are usually libraries of code which many applications will
find useful, e.g. the .NET framework classes.
• Versioning: The runtime enforces versioning constraints only on shared assemblies, not on
private assemblies.
By searching directory paths. There are several factors which can affect the path (such as the
AppDomain host, and application configuration files), but for private assemblies the search path is
normally the application's directory and its sub-directories. For shared assemblies, the search path is
normally the same as the private assembly path plus the shared assembly cache.
Each assembly has a version number called the compatibility version. Also each reference to an
assembly (from another assembly) includes both the name and version of the referenced assembly.
The version number has four numeric parts (e.g. 5.5.2.33). Assemblies with either of the first two parts
different are normally viewed as incompatible. If the first two parts are the same, but the third is
different, the assemblies are deemed as 'maybe compatible'. If only the fourth part is different, the
assemblies are deemed compatible. However, this is just the default guideline - it is the version policy
that decides to what extent these rules are enforced. The version policy can be specified via the
application configuration file.
3.6 How can I develop an application that automatically updates itself from the web?
For .NET 1.x, use the Updater Application Block. For .NET 2.x, use ClickOnce.
4. Application Domains
An AppDomain can be thought of as a lightweight process. Multiple AppDomains can exist inside a
Win32 process. The primary purpose of the AppDomain is to isolate applications from each other, and
so it is particularly useful in hosting scenarios such as ASP.NET. An AppDomain can be destroyed by
the host without affecting other AppDomains in the process.
Win32 processes provide isolation by having distinct memory address spaces. This is effective, but
expensive. The .NET runtime enforces AppDomain isolation by keeping control over the use of
memory - all memory in the AppDomain is managed by the .NET runtime, so the runtime can ensure
that AppDomains do not access each other's memory.
One non-obvious use of AppDomains is for unloading types. Currently the only way to unload a .NET
type is to destroy the AppDomain it is loaded into. This is particularly useful if you create and destroy
types on-the-fly via reflection.
AppDomains are usually created by hosts. Examples of hosts are the Windows Shell, ASP.NET and
IE. When you run a .NET application from the command-line, the host is the Shell. The Shell creates a
new AppDomain for every application.
AppDomains can also be explicitly created by .NET applications. Here is a C# sample which creates
an AppDomain, creates an instance of an object inside it, and then executes one of the object's
methods:
using System;
using System.Runtime.Remoting;
using System.Reflection;
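// A sketch of the idea - the type, domain and assembly names are illustrative.

public class RemoteType : MarshalByRefObject
{
    public void SayHello()
    {
        Console.WriteLine( "Hello from AppDomain: " + AppDomain.CurrentDomain.FriendlyName );
    }
}

public class CAppDomainDemo
{
    public static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain( "NewDomain" );

        // The object lives in the new AppDomain - because RemoteType derives from
        // MarshalByRefObject, we get back a proxy to it.
        ObjectHandle handle = domain.CreateInstance(
            typeof(RemoteType).Assembly.FullName, typeof(RemoteType).FullName );
        RemoteType remote = (RemoteType)handle.Unwrap();

        remote.SayHello();

        AppDomain.Unload( domain );
    }
}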
Yes. For an example of how to do this, take a look at the source for the dm.net moniker developed by
Jason Whittington and Don Box. There is also a code sample in the .NET SDK called CorHost.
5. Garbage Collection
5.2 Is it true that objects don't always get destroyed immediately when the last
reference goes away?
Yes. The garbage collector offers no guarantees about the time when an object will be destroyed and
its memory reclaimed.
There was an interesting thread on the DOTNET list, started by Chris Sells, about the implications of
non-deterministic destruction of objects in C#. In October 2000, Microsoft's Brian Harry posted a
lengthy analysis of the problem. Chris Sells' response to Brian's posting is here.
Because of the garbage collection algorithm. The .NET garbage collector works by periodically running
through a list of all the objects that are currently being referenced by an application. All the objects that
it doesn't find during this search are ready to be destroyed and the memory reclaimed. The implication
of this algorithm is that the runtime doesn't get notified immediately when the final reference on an
object goes away - it only finds out during the next 'sweep' of the heap.
Furthermore, this type of algorithm works best by performing the garbage collection sweep as rarely as
possible. Normally heap exhaustion is the trigger for a collection sweep.
It's certainly an issue that affects component design. If you have objects that maintain expensive or
scarce resources (e.g. database locks), you need to provide some way to tell the object to release the
resource when it is done. Microsoft recommend that you provide a method called Dispose() for this
purpose. However, this causes problems for distributed objects - in a distributed system who calls the
Dispose() method? Some form of reference-counting or ownership-management mechanism is
needed to handle distributed objects - unfortunately the runtime offers no help with this.
This issue is a little more complex than it first appears. There are really two categories of class that
require deterministic destruction - the first category manipulate unmanaged types directly (generally
via an IntPtr representing an OS handle), whereas the second category manipulate managed types
that require deterministic destruction. An example of the first category is a class with an IntPtr member
representing an OS file handle. An example of the second category is a class with a
System.IO.FileStream member.
For the first category, it makes sense to implement IDisposable and override Finalize. This allows the
object user to 'do the right thing' by calling Dispose, but also provides a fallback of freeing the
unmanaged resource in the Finalizer, should the calling code fail in its duty. However this logic does
not apply to the second category of class, with only managed resources. In this case implementing
Finalize is pointless, as managed member objects cannot be accessed in the Finalizer. This is
because there is no guarantee about the ordering of Finalizer execution. So only the Dispose method
should be implemented. (If you think about it, it doesn't really make sense to call Dispose on member
objects from a Finalizer anyway, as the member object's Finalizer will do the required cleanup
anyway.)
For classes that need to implement IDisposable and override Finalize, see Microsoft's documented
pattern.
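The pattern looks roughly like this (a sketch, not Microsoft's exact sample code):

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle;          // an unmanaged resource
    private bool disposed = false;

    public void Dispose()
    {
        Dispose( true );
        GC.SuppressFinalize( this );   // finalization is no longer needed
    }

    protected virtual void Dispose( bool disposing )
    {
        if( !disposed )
        {
            if( disposing )
            {
                // Free managed resources here - safe only when called from Dispose().
            }
            // Free unmanaged resources here - safe from both Dispose() and the Finalizer.
            disposed = true;
        }
    }

    ~ResourceHolder()
    {
        Dispose( false );
    }
}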
Note that some developers argue that implementing a Finalizer is always a bad idea, as it hides a bug
in your code (i.e. the lack of a Dispose call). A less radical approach is to implement Finalize but
include a Debug.Assert at the start, thus signalling the problem in developer builds but allowing the
cleanup to occur in release builds.
A little. For example the System.GC class exposes a Collect method, which forces the garbage
collector to collect all unreferenced objects immediately.
Also there is a gcConcurrent setting that can be specified via the application configuration file. This
specifies whether or not the garbage collector performs some of its collection activities on a separate
thread. The setting only applies on multi-processor machines, and defaults to true.
5.7 How can I find out what the garbage collector is doing?
Lots of interesting statistics are exported from the .NET runtime via the '.NET CLR xxx' performance
counters. Use Performance Monitor to view them.
The lapsed listener problem is one of the primary causes of leaks in .NET applications. It occurs when
a subscriber (or 'listener') signs up for a publisher's event, but fails to unsubscribe. The failure to
unsubscribe means that the publisher maintains a reference to the subscriber as long as the publisher
is alive. For some publishers, this may be the duration of the application.
This situation causes two problems. The obvious problem is the leakage of the subscriber object. The
other problem is the performance degradation due to the publisher sending redundant notifications to
'zombie' subscribers.
There are at least a couple of solutions to the problem. The simplest is to make sure the subscriber is
unsubscribed from the publisher, typically by adding an Unsubscribe() method to the subscriber.
Another solution, documented here by Shawn Van Ness, is to change the publisher to use weak
references in its subscriber list.
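For example (the publisher and event names are invented for illustration):

using System;

public class Publisher
{
    public event EventHandler DataChanged;

    public void RaiseDataChanged()
    {
        if( DataChanged != null )
            DataChanged( this, EventArgs.Empty );
    }
}

public class Subscriber
{
    private Publisher m_publisher;

    public Subscriber( Publisher publisher )
    {
        m_publisher = publisher;
        m_publisher.DataChanged += new EventHandler( OnDataChanged );
    }

    // Without this call, the publisher keeps the subscriber alive - the lapsed listener.
    public void Unsubscribe()
    {
        m_publisher.DataChanged -= new EventHandler( OnDataChanged );
    }

    private void OnDataChanged( object sender, EventArgs e )
    {
        Console.WriteLine( "notified" );
    }
}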
It's very unintuitive, but the runtime can decide that an object is garbage much sooner than you expect.
More specifically, an object can become garbage while a method is executing on the object, which is
contrary to most developers' expectations. Chris Brumme explains the issue on his blog. I've taken
Chris's code and expanded it into a full app that you can play with if you want to prove to yourself that
this is a real problem:
using System;
using System.Runtime.InteropServices;
class Win32
{
[DllImport("kernel32.dll")]
public static extern IntPtr CreateEvent( IntPtr lpEventAttributes,
bool bManualReset,bool bInitialState, string lpName);
[DllImport("kernel32.dll", SetLastError=true)]
public static extern bool CloseHandle(IntPtr hObject);
[DllImport("kernel32.dll")]
public static extern bool SetEvent(IntPtr hEvent);
}
class EventUser
{
    public EventUser()
    {
        hEvent = Win32.CreateEvent( IntPtr.Zero, false, false, null );
    }

    ~EventUser()
    {
        Win32.CloseHandle( hEvent );
        Console.WriteLine("EventUser finalized");
    }

    public void UseEvent()
    {
        UseEventInStatic( this.hEvent );
    }

    static void UseEventInStatic( IntPtr hEvent )
    {
        //GC.Collect();   // uncomment to provoke an early collection of the EventUser object
        bool bSuccess = Win32.SetEvent( hEvent );
        Console.WriteLine( "SetEvent " + (bSuccess ? "succeeded" : "FAILED!") );
    }

    IntPtr hEvent;
}
class App
{
static void Main(string[] args)
{
EventUser eventUser = new EventUser();
eventUser.UseEvent();
}
}
If you run this code, it'll probably work fine, and you'll get the following output:
SetEvent succeeded
EventUser finalized
However, if you uncomment the GC.Collect() call in the UseEventInStatic() method, you'll get this
output:
EventUser finalized
SetEvent FAILED!
(Note that you need to use a release build to reproduce this problem.)
So what's happening here? Well, at the point where UseEvent() calls UseEventInStatic(), a copy is
taken of the hEvent field, and there are no further references to the EventUser object anywhere in the
code. So as far as the runtime is concerned, the EventUser object is garbage and can be collected.
Normally of course the collection won't happen immediately, so you'll get away with it, but sooner or
later a collection will occur at the wrong time, and your app will fail.
A solution to this problem is to add a call to GC.KeepAlive(this) to the end of the UseEvent method, as
Chris explains.
6. Serialization
Serialization is the process of converting an object into a stream of bytes. Deserialization is the
opposite process, i.e. creating an object from a stream of bytes. Serialization/Deserialization is mostly
used to transport objects (e.g. during remoting), or to persist objects (e.g. to a file or database).
6.2 Does the .NET Framework have in-built support for serialization?
There are two separate mechanisms provided by the .NET class library - XmlSerializer and
SoapFormatter/BinaryFormatter. Microsoft uses XmlSerializer for Web Services, and
SoapFormatter/BinaryFormatter for remoting. Both are available for use in your own code.
It depends. XmlSerializer has severe limitations such as the requirement that the target class has a
parameterless constructor, and only public read/write properties and fields can be serialized. However,
on the plus side, XmlSerializer has good support for customising the XML document that is produced
or consumed. XmlSerializer's features mean that it is most suitable for cross-platform work, or for
constructing objects from existing XML documents.
SoapFormatter and BinaryFormatter have fewer limitations than XmlSerializer. They can serialize
private fields, for example. However they both require that the target class be marked with the
[Serializable] attribute, so like XmlSerializer the class needs to be written with serialization in mind.
Also there are some quirks to watch out for - for example on deserialization the constructor of the new
object is not invoked.
The choice between SoapFormatter and BinaryFormatter depends on the application. BinaryFormatter
makes sense where both serialization and deserialization will be performed on the .NET platform and
where performance is important. SoapFormatter generally makes more sense in all other cases, for
ease of debugging if nothing else.
Yes. XmlSerializer supports a range of attributes that can be used to configure serialization for a
particular class. For example, a field or property can be marked with the [XmlIgnore] attribute to
exclude it from serialization. Another example is the [XmlElement] attribute, which can be used to
specify the XML element name to be used for a particular property or field.
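For example (a hypothetical class, just to show the attributes in use):

using System.Xml.Serialization;

public class Person
{
    // Serialized as <forename> rather than <FirstName>.
    [XmlElement("forename")]
    public string FirstName;

    // Not serialized at all.
    [XmlIgnore]
    public int Age;
}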
There is a once-per-process-per-type overhead with XmlSerializer. So the first time you serialize or
deserialize an object of a given type in an application, there is a significant delay. This normally doesn't
matter, but it may mean, for example, that XmlSerializer is a poor choice for loading configuration
settings during startup of a GUI application.
XmlSerializer will refuse to serialize instances of any class that implements IDictionary, e.g. Hashtable.
SoapFormatter and BinaryFormatter do not have this restriction.
6.7 XmlSerializer is throwing a generic "There was an error reflecting MyClass" error.
How do I find out what the problem is?
Look at the InnerException property of the exception that is thrown to get a more specific error
message.
XmlSerializer needs to know in advance what type of objects it will find in an ArrayList. To specify the
type, use the XmlArrayItem attribute like this:
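(The class and element types here are invented for illustration.)

using System.Collections;
using System.Xml.Serialization;

public class Department
{
    // Tells XmlSerializer that this ArrayList will only ever hold Employee objects.
    [XmlArrayItem(typeof(Employee))]
    public ArrayList Employees = new ArrayList();
}

public class Employee
{
    public string Name;
}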
7. Attributes
There are at least two types of .NET attribute. The first type I will refer to as a metadata attribute - it
allows some data to be attached to a class or method. This data becomes part of the metadata for the
class, and (like other class metadata) can be accessed via reflection. An example of a metadata
attribute is [serializable], which can be attached to a class and means that instances of the class can
be serialized.
The other type of attribute is a context attribute. Context attributes use a similar syntax to metadata
attributes but they are fundamentally different. Context attributes provide an interception mechanism
whereby instance activation and method calls can be pre- and/or post-processed. If you have
encountered Keith Brown's universal delegator you'll be familiar with this idea.
Yes. Simply derive a class from System.Attribute and mark it with the AttributeUsage attribute. For
example:
[AttributeUsage(AttributeTargets.Class)]
public class InspiredByAttribute : System.Attribute
{
    public string InspiredBy;
}

// (The attribute value below is illustrative.)
[InspiredBy(InspiredBy = "the .NET FAQ")]
class CTest
{
}

class CApp
{
    public static void Main()
    {
        object[] atts = typeof(CTest).GetCustomAttributes(true);
        foreach( object att in atts )
            System.Console.WriteLine( att );
    }
}
CAS is the part of the .NET security model that determines whether or not code is allowed to run, and
what resources it can use when it is running. For example, it is CAS that will prevent a .NET web
applet from formatting your hard disk.
The CAS security policy revolves around two key concepts - code groups and permissions. Each .NET
assembly is a member of a particular code group, and each code group is granted the permissions
specified in a named permission set.
For example, using the default security policy, a control downloaded from a web site belongs to the
'Zone - Internet' code group, which adheres to the permissions defined by the 'Internet' named
permission set. (Naturally the 'Internet' named permission set represents a very restrictive range of
permissions.)
Microsoft defines some default ones, but you can modify these and even create your own. To see the
code groups defined on your system, run 'caspol -lg' from the command-line. On a default system the
output looks something like this (simplified - the exact groups vary between framework versions):
Level = Machine
Code Groups:
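1.  All code: Nothing
   1.1.  Zone - MyComputer: FullTrust
   1.2.  Zone - Intranet: LocalIntranet
   1.3.  Zone - Internet: Internet
   1.4.  Zone - Untrusted: Nothing
   1.5.  Zone - Trusted: Internet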
Note the hierarchy of code groups - the top of the hierarchy is the most general ('All code'), which is
then sub-divided into several groups, each of which in turn can be sub-divided. Also note that
(somewhat counter-intuitively) a sub-group can be associated with a more permissive permission set
than its parent.
Use caspol. For example, suppose you trust code from www.mydomain.com and you want it have full
access to your system, but you want to keep the default restrictions for all other internet sites. To
achieve this, you would add a new code group as a sub-group of the 'Zone - Internet' group, like this:
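caspol -ag 1.3 -site www.mydomain.com FullTrust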
Now if you run caspol -lg you will see that the new group has been added as group 1.3.1:
...
1.3. Zone - Internet: Internet
1.3.1. Site - www.mydomain.com: FullTrust
...
Note that the numeric label (1.3.1) is just a caspol invention to make the code groups easy to
manipulate from the command-line. The underlying runtime never sees it.
Use caspol. If you are the machine administrator, you can operate at the 'machine' level - which means
not only that the changes you make become the default for the machine, but also that users cannot
change the permissions to be more permissive. If you are a normal (non-admin) user you can still
modify the permissions, but only to make them more restrictive. For example, to allow intranet code to
do what it likes you might do this:
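caspol -cg 1.2 FullTrust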
Note that because this is more permissive than the default policy (on a standard system), you should
only do this at the machine level - doing it at the user level will have no effect.
Yes. Use caspol -ap, specifying an XML file containing the permissions in the permission set. To save
you some time, here is a sample file corresponding to the 'Everything' permission set - just edit to suit
your needs. When you have edited the sample, add it to the range of available permission sets like
this:
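caspol -ap samplepermset.xml
Then, to apply the permission set to a code group, do something like this:
caspol -cg 1.3 SamplePermSet (By default, 1.3 is the 'Internet' code group)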
8.7 I'm having some trouble with CAS. How can I troubleshoot the problem?
Caspol has a couple of options that might help. First, you can ask caspol to tell you what code group
an assembly belongs to, using caspol -rsg. Similarly, you can ask what permissions are being applied
to a particular assembly using caspol -rsp.
To turn CAS off entirely (administrators only), run:
caspol -s off
Yes. MS supply a tool called Ildasm that can be used to view the metadata and IL for an assembly.
Yes, it is often relatively straightforward to regenerate high-level source from IL. Lutz Roeder's
Reflector does a very good job of turning IL into C# or VB.NET.
You can buy an IL obfuscation tool. These tools work by 'optimising' the IL in such a way that reverse-
engineering becomes much more difficult.
Of course if you are writing web services then reverse-engineering is not a problem as clients do not
have access to your IL.
Yes. Peter Drayton posted this simple example to the DOTNET mailing list:
.assembly MyAssembly {}
.class MyApp {
.method static void Main() {
.entrypoint
ldstr "Hello, IL!"
call void System.Console::WriteLine(class System.Object)
ret
}
}
Just put this into a file called hello.il, and then run ilasm hello.il. An exe assembly will be generated.
9.5 Can I do things in IL that I can't do in C#?
Yes. A couple of simple examples are that you can throw exceptions that are not derived from
System.Exception, and you can have non-zero-based arrays.
This subject causes a lot of controversy, as you'll see if you read the mailing list archives. Take a look
at the following two threads:
http://discuss.develop.com/archives/wa.exe?A2=ind0007&L=DOTNET&D=0&P=68241
http://discuss.develop.com/archives/wa.exe?A2=ind0007&L=DOTNET&P=R60761
The bottom line is that .NET has its own mechanisms for type interaction, and they don't use COM. No
IUnknown, no IDL, no typelibs, no registry-based activation. This is mostly good, as a lot of COM was
ugly. Generally speaking, .NET allows you to package and use components in a similar way to COM,
but makes the whole thing a bit easier.
Pretty much, for .NET developers. The .NET Framework has a new remoting model which is not based
on DCOM. DCOM was pretty much dead anyway, once firewalls became widespread and Microsoft
got SOAP fever. Of course DCOM will still be used in interop scenarios.
Not immediately. The approach for .NET 1.0 was to provide access to the existing COM+ services
(through an interop layer) rather than replace the services with native .NET ones. Various tools and
attributes were provided to make this as painless as possible. Over time it is expected that interop will
become more seamless - this may mean that some services become a core part of the CLR, and/or it
may mean that some services will be rewritten as managed code which runs on top of the CLR.
For more on this topic, search for postings by Joe Long in the archives - Joe is the MS group manager
for COM+. Start with this message:
http://discuss.develop.com/archives/wa.exe?A2=ind0007&L=DOTNET&P=R68370
Yes. COM components are accessed from the .NET runtime via a Runtime Callable Wrapper (RCW).
This wrapper turns the COM interfaces exposed by the COM component into .NET-compatible
interfaces. For oleautomation interfaces, the RCW can be generated automatically from a type library.
For non-oleautomation interfaces, it may be necessary to develop a custom RCW which manually
maps the types exposed by the COM interface to .NET-compatible types.
Here's a simple example for those familiar with ATL. First, create an ATL component which implements
the following IDL:
import "oaidl.idl";
import "ocidl.idl";
[
object,
uuid(EA013F93-487A-4403-86EC-FD9FEE5E6206),
helpstring("ICppName Interface"),
pointer_default(unique),
oleautomation
]
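// A minimal interface declaration consistent with the client code shown below:
interface ICppName : IUnknown
{
    [helpstring("method SetName")] HRESULT SetName( [in] BSTR name );
    [helpstring("method GetName")] HRESULT GetName( [out,retval] BSTR *pName );
};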
[
uuid(F5E4C61D-D93A-4295-A4B4-2453D4A4484D),
version(1.0),
helpstring("cppcomserver 1.0 Type Library")
]
library CPPCOMSERVERLib
{
importlib("stdole32.tlb");
importlib("stdole2.tlb");
[
uuid(600CE6D9-5ED7-4B4D-BB49-E8D5D5096F70),
helpstring("CppName Class")
]
coclass CppName
{
[default] interface ICppName;
};
};
When you've built the component, you should get a typelibrary. Run the TLBIMP utility on the
type library, like this:
tlbimp cppcomserver.tlb
You now need a .NET client - let's use C#. Create a .cs file containing the following code:
using System;
using CPPCOMSERVERLib;
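// A minimal client - CppNameClass is the wrapper class generated by tlbimp
// for the CppName coclass.
public class ComClient
{
    public static void Main()
    {
        CppName cppName = new CppNameClass();
        cppName.SetName( "bob" );
        Console.WriteLine( "Name is " + cppName.GetName() );
    }
}

Compile the client, referencing the assembly produced by tlbimp, and run it. You should see: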
Name is bob
Yes. .NET components are accessed from COM via a COM Callable Wrapper (CCW). This is similar to
a RCW (see previous question), but works in the opposite direction. Again, if the wrapper cannot be
automatically generated by the .NET development tools, or if the automatic behaviour is not desirable,
a custom CCW can be developed. Also, for COM to 'see' the .NET component, the .NET component
must be registered in the registry.
Here's a simple example. Create a C# file called testcomserver.cs and put the following in it:
using System;
using System.Runtime.InteropServices;
namespace AndyMc
{
[ClassInterface(ClassInterfaceType.AutoDual)]
public class CSharpCOMServer
{
public CSharpCOMServer() {}
public void SetName( string name ) { m_name = name; }
public string GetName() { return m_name; }
private string m_name;
}
}
Now you need to create a client to test your .NET COM component. VBScript will do - put the following
in a file called comclient.vbs:
Dim dotNetObj
Set dotNetObj = CreateObject("AndyMc.CSharpCOMServer")
dotNetObj.SetName ("bob")
MsgBox "Name is " & dotNetObj.GetName()
Compile the C# file into a DLL, register it for COM interop (for example with regasm), and then run the
script:
wscript comclient.vbs
And hey presto you should get a message box displayed with the text "Name is bob".
An alternative to the approach above it to use the dm.net moniker developed by Jason Whittington and
Don Box.
10.6 Is ATL redundant in the .NET world?
Yes. ATL will continue to be valuable for writing COM components for some time, but it has no place in
the .NET world.
11. Miscellaneous
.NET remoting involves sending messages along channels. Two of the standard channels are HTTP
and TCP. TCP is intended for LANs only - HTTP can be used for LANs or WANs (internet).
Support is provided for multiple message serialization formats. Examples are SOAP (XML-based) and
binary. By default, the HTTP channel uses SOAP (via the .NET runtime Serialization SOAP Formatter),
and the TCP channel uses binary (via the .NET runtime Serialization Binary Formatter). But either
channel can use either serialization format.
• SingleCall. Each incoming request from a client is serviced by a new object. The object is
thrown away when the request has finished.
• Singleton. All incoming requests from clients are processed by a single server object.
• Client-activated object. This is the old stateful (D)COM model whereby the client receives a
reference to the remote object and holds that reference (thus keeping the remote object alive)
until it is finished with it.
Distributed garbage collection of objects is managed by a system called 'lease-based lifetime'. Each
object has a lease time, and when that time expires the object is disconnected from the .NET runtime
remoting infrastructure. Objects have a default renew time - the lease is renewed when a successful
call is made from the client to the object. The client can also explicitly renew the lease.
If you're interested in using XML-RPC as an alternative to SOAP, take a look at Charles Cook's XML-
RPC.Net.
11.2 How can I get at the Win32 API from a .NET program?
Use P/Invoke. This uses similar technology to COM Interop, but is used to access static DLL entry
points instead of COM objects. Here is an example of C# calling the Win32 MessageBox function:
using System;
using System.Runtime.InteropServices;
class MainApp
{
[DllImport("user32.dll", EntryPoint="MessageBox", SetLastError=true,
CharSet=CharSet.Auto)]
public static extern int MessageBox(int hWnd, String strMessage, String
strCaption, uint uiType);
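    // A Main method to exercise the declaration above (the message text is illustrative):
    public static void Main()
    {
        MessageBox( 0, "Hello from P/Invoke", ".NET", 0 );
    }
}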
An event is just a wrapper for a multicast delegate. Adding a public event to a class is almost the same
as adding a public multicast delegate field. In both cases, subscriber objects can register for
notifications, and in both cases the publisher object can send notifications to the subscribers. However,
a public multicast delegate has the undesirable property that external objects can invoke the delegate,
something we'd normally want to restrict to the publisher. Hence events - an event adds public
methods to the containing class to add and remove receivers, but does not make the invocation
mechanism public.
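For example (the delegate and class names are invented for illustration):

public delegate void NotifyHandler( string message );

public class Publisher
{
    // A public delegate field - any external code could invoke it directly.
    public NotifyHandler NotifyField;

    // An event - external code can only add/remove handlers; only Publisher can raise it.
    public event NotifyHandler Notify;

    public void DoSomething()
    {
        if( Notify != null )
            Notify( "something happened" );
    }
}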
Each instance of a reference type has two fields maintained by the runtime - a method table pointer
and a sync block. These are 4 bytes each on a 32-bit system, making a total of 8 bytes per object
overhead. Obviously the instance data for the type must be added to this to get the overall size of the
object. So, for example, instances of the following class are 12 bytes each:
class MyInt
{
...
private int x;
}
Generics, anonymous methods, partial classes, iterators, property visibility (separate visibility for get
and set) and static classes. See http://msdn.microsoft.com/msdnmag/issues/04/05/C20/default.aspx
for more information about these features.
Generics are useful for writing efficient type-independent code, particularly where the types might
include value types. The obvious application is container classes, and the .NET 2.0 class library
includes a suite of generic container classes in the System.Collections.Generic namespace. Here's a
simple example of a generic container class being used:
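(A sketch using List<T> and Dictionary<K,V> from System.Collections.Generic:)

using System;
using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        // A strongly-typed list of ints - no boxing and no casting needed.
        List<int> numbers = new List<int>();
        numbers.Add( 42 );
        numbers.Add( 7 );

        foreach( int n in numbers )
            Console.WriteLine( n );

        // A strongly-typed map from string to int.
        Dictionary<string,int> ages = new Dictionary<string,int>();
        ages["alice"] = 30;
        Console.WriteLine( ages["alice"] );
    }
}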
Anonymous methods reduce the amount of code you have to write when using delegates, and are
therefore especially useful for GUI programming. Here's an example:
AppDomain.CurrentDomain.ProcessExit += delegate
{ Console.WriteLine("Process ending ..."); };
Partial classes are a useful feature for separating machine-generated code from hand-written code in
the same class, and will therefore be heavily used by development tools such as Visual Studio.
Iterators reduce the amount of code you need to write to implement IEnumerable/IEnumerator - the
compiler generates the enumerator implementation for you from a method containing 'yield return'
statements.
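Here's a minimal sketch of the idea (the container class and its contents are invented for illustration):

using System;
using System.Collections;

class CountContainer : IEnumerable
{
    int m_size = 3;

    public IEnumerator GetEnumerator()
    {
        // 'yield return' makes the compiler generate the IEnumerator implementation.
        for( int i = 0; i < m_size; i++ )
            yield return i;
    }
}

class IteratorDemo
{
    static void Main()
    {
        foreach( int i in new CountContainer() )
            Console.WriteLine( i );   // prints 0, 1, 2
    }
}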
The use of 'yield return' is rather strange at first sight. It effectively synthesises an implementation of
IEnumerator, something we had to do manually in .NET 1.x.
.NET generics work great for container classes. But what about other uses? Well, it turns out that .NET
generics have a major limitation - they require the type parameter to be constrained. For example, you
cannot do this:
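(A sketch of the kind of code meant - the Disposer class referred to later in this section:)

class Disposer<T>
{
    public void Dispose( T obj )
    {
        obj.Dispose();   // compile error: 'T' does not contain a definition for 'Dispose'
    }
}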
The C# compiler will refuse to compile this code, as the type T has not been constrained, and
therefore only supports the methods of System.Object. Dispose is not a method on System.Object, so
the compilation fails. To fix this code, we need to add a where clause, to reassure the compiler that our
type T does indeed have a Dispose method:
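class Disposer<T> where T : IDisposable
{
    public void Dispose( T obj )
    {
        obj.Dispose();   // OK - the constraint guarantees that T implements IDisposable
    }
}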
The problem is that the requirement for explicit constraints is very limiting. We can use constraints to
say that T implements a particular interface, but we can't dilute that to simply say that T implements a
particular method. Contrast this with C++ templates (for example), where no constraint at all is
required - it is assumed (and verified at compile time) that if the code invokes the Dispose() method on
a type, then the type will support the method.
In fact, after writing generic code with interface constraints, we quickly see that we haven't gained
much over non-generic interface-based programming. For example, we can easily rewrite the Disposer
class without generics:
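For example, something like:

class Disposer
{
    public void Dispose( IDisposable obj )
    {
        obj.Dispose();
    }
}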
Here is a selection of new features in the .NET 2.0 class library (beta 1):
13.1 Threads
class MyThread
{
public MyThread( string initData )
{
m_data = initData;
m_thread = new Thread( new ThreadStart(ThreadMain) );
m_thread.Start();
}
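    // ThreadMain() runs on the new thread (the class requires 'using System' and 'using System.Threading').
    private void ThreadMain()
    {
        Console.WriteLine( m_data );
    }

    public void WaitUntilFinished()
    {
        m_thread.Join();
    }

    private Thread m_thread;
    private string m_data;
}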
In this case creating an instance of the MyThread class is sufficient to spawn the thread and execute
the MyThread.ThreadMain() method:
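MyThread t = new MyThread( "Hello, world." );
t.WaitUntilFinished();   // wait for ThreadMain to finish (using the class shown above)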
There are several options. First, you can use your own communication mechanism to tell the
ThreadStart method to finish. Alternatively the Thread class has in-built support for instructing the
thread to stop. The two principal methods are Thread.Interrupt() and Thread.Abort(). The former will
cause a ThreadInterruptedException to be thrown on the thread when it next enters the
WaitSleepJoin state. In other words, Thread.Interrupt is a polite way of asking the thread to stop when
it is no longer doing any useful work. In contrast, Thread.Abort() throws a ThreadAbortException
regardless of what the thread is doing. Furthermore, the ThreadAbortException cannot normally be
caught (though the ThreadStart method's finally blocks will be executed). Thread.Abort() is a heavy-handed
mechanism which should not normally be required.
class CApp
{
static void Main()
{
string s = "Hello, World";
ThreadPool.QueueUserWorkItem( new WaitCallback( DoWork ), s );
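        // Give the thread-pool thread a moment to run before the process exits.
        Thread.Sleep( 1000 );
    }

    // DoWork is executed on a thread from the thread pool (requires 'using System.Threading').
    static void DoWork( object state )
    {
        Console.WriteLine( (string)state );
    }
}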
13.1.4 How do I know when my thread pool work item has completed?
There is no way to query the thread pool for this information. You must put code into the WaitCallback
method to signal that it has completed. Events are useful for this.
class C
{
public void f()
{
try
{
Monitor.Enter(this);
...
}
finally
{
Monitor.Exit(this);
}
}
}
C# has a 'lock' keyword which provides a convenient shorthand for the code above:
class C
{
public void f()
{
lock(this)
{
...
}
}
}
Note that calling Monitor.Enter(myObject) does NOT mean that all access to myObject is serialized. It
means that the synchronisation lock associated with myObject has been acquired, and no other thread
can acquire that lock until Monitor.Exit(myObject) is called. In other words, this class is functionally equivalent
to the classes above:
class C
{
    public void f()
    {
        lock( m_object )
        {
            ...
        }
    }

    private object m_object = new object();
}
Actually, it could be argued that this version of the code is superior, as the lock is totally encapsulated
within the class, and not accessible to the user of the object.
13.2 Tracing
Yes. The Debug and Trace classes both have a Listeners property, which is a collection of sinks that
receive the tracing that you send via Debug.WriteLine and Trace.WriteLine respectively. By default the
Listeners collection contains a single sink, which is an instance of the DefaultTraceListener class. This
sends output to the Win32 OutputDebugString() function and also the
System.Diagnostics.Debugger.Log() method. This is useful when debugging, but if you're trying to
trace a problem at a customer site, redirecting the output to a file is more appropriate. Fortunately, the
TextWriterTraceListener class is provided for this purpose.
Here's how to use the TextWriterTraceListener class to redirect Trace output to a file:
Trace.Listeners.Clear();
FileStream fs = new FileStream( @"c:\log.txt", FileMode.Create,
FileAccess.Write );
Trace.Listeners.Add( new TextWriterTraceListener( fs ) );
Note the use of Trace.Listeners.Clear() to remove the default listener. If you don't do this, the output
will go to the file and OutputDebugString(). Typically this is not what you want, because
OutputDebugString() imposes a big performance hit.
Yes. You can write your own TraceListener-derived class, and direct all output through it. Here's a
simple example, which derives from TextWriterTraceListener (and therefore has in-built support for
writing to files, as shown above) and adds timing information and the thread ID for each trace line:
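Here's a rough sketch of such a listener (the output format and class name are illustrative):

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class MyListener : TextWriterTraceListener
{
    public MyListener( Stream s ) : base( s ) {}

    public override void WriteLine( string message )
    {
        // Prefix each line with a timestamp and an identifier for the current thread.
        string prefix = String.Format( "{0:u} [{1}] ",
            DateTime.Now, Thread.CurrentThread.GetHashCode() );
        base.WriteLine( prefix + message );
    }
}

An instance is then added to Trace.Listeners in the same way as the TextWriterTraceListener above.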
(Note that this implementation is not complete - the TraceListener.Write method is not overridden for
example.)
The beauty of this approach is that when an instance of MyListener is added to the Trace.Listeners
collection, all calls to Trace.WriteLine() go through MyListener, including calls made by referenced
assemblies that know nothing about the MyListener class.
14. Resources
I recommend the following books, either because I personally like them, or because I think they are
well regarded by other .NET developers. (Note that I get a commission from Amazon if you buy a book
after following one of these links.)
14.3 Blogs
Active Server Pages (ASP)—A Microsoft technology for creating server-side, Web-
based application services. ASP applications are typically written using a scripting
language, such as JScript, VBScript, or PerlScript. ASP first appeared as part of
Internet Information Server 2.0 and was code-named Denali.
ADO (ActiveX Data Objects)—A set of COM components used to access data objects
through an OLEDB provider. ADO is commonly used to manipulate data in databases,
such as Microsoft SQL Server 2000, Oracle, and Microsoft Access.
ADO.NET (ActiveX Data Objects for .NET)—The set of .NET classes and data
providers used to manipulate databases, such as Microsoft SQL Server
2000. ADO.NET was formerly known as ADO+. ADO.NET can be used by any .NET
language.
Aero—The code name for the user experience provided by Microsoft's Longhorn
Operating System.
Application Center 2000—A deployment and management package for Web sites,
Web services, and COM components. Application Center is a key B2B and B2C
component of the .NET Enterprise Server product family.
Application domain—The logical and physical boundary created around every .NET
application by the CLR. The CLR can allow multiple .NET applications to be run in a
single process by loading them into separate application domains. The CLR isolates
each application domain from all other application domains and prevents the
configuration, security, or stability of a running .NET application from affecting other
applications. Objects can only be moved between application domains by the use of
remoting.
Application Manifest—The part of an application that provides information to
describe the components that the application uses.
Array—A collection of objects of the same type, all of which are referenced by a
single identifier and an indexer. In the .NET Framework, all arrays inherit from the
Array class that is located in the System namespace.
ASP.NET (Active Server Pages for .NET)—A set of .NET classes used to create Web-
based, client-side (Web Form) and server-side (Web Service) applications. ASP.NET
was derived from the Microsoft Active Server Pages (ASP) Web technology and
adapted for use in the .NET Framework. Also called managed ASP and formerly
known as ASP+.
Assembly—All of the files that comprise a .NET application, including the resource,
security management, versioning, sharing, deployment information, and the actual
MSIL code executed by the CLR. An assembly may appear as a single DLL or EXE file,
or as multiple files, and is roughly the equivalent of a COM module. See assembly
manifest, private assembly, shared assembly.
Avalon—The code name for the graphical subsystem (User Interface framework)
of Longhorn. It is worth noting that this will be a vector-based system.
BackOffice Server 2000—A suite of Microsoft server applications used for B2B and
B2C services. Included in this suite are Windows 2000 Server, Exchange Server
2000, SQL Server 2000, Internet Security and Acceleration Server 2000, Host
Integration Server 2000, and Systems Management Server 2.0. These server
applications are now referred to as the .NET Enterprise Server product family.
Base class—The parent class of a derived class. Classes may be used to create
other classes. A class that is used to create (or derive) another class is called the
base class or super class. See Derived Class, Inheritance.
BizTalk Server 2000—A set of Microsoft Server applications that allow the
integration, automation, and management of different applications and data within
and between business organizations. BizTalk Server is a key B2B component of the
.NET Enterprise Server product family.
Class—In .NET languages, classes are templates used for defining new types.
Classes describe both the properties and behaviors of objects. Properties contain the
data that are exposed by the class. Behaviors are the functionality of the object, and
are defined by the public methods (also called member functions) and events of the
class. Collectively, the public properties and methods of a class are known as the
object interface. Classes themselves are not objects, but instead they are used to
instantiate (i.e., create) objects in memory. See structure.
Code Access Security (CAS)—The common language runtime's security model for
applications. This is the core security model for new features of the Longhorn
Operating System.
COM+—The "next generation" of the COM and DCOM software architectures. COM+
(pronounced "COM plus") makes it easier to design and construct distributed,
transactional, and component-based applications using a multi-tiered architecture.
COM+ also supports the use of many new services, such as Just-in-Time Activation,
object pooling, and Microsoft Transaction Server (MTS) 2.0. The use of COM, DCOM,
and COM+ in application design will eventually be entirely replaced by the Microsoft
.NET Framework.
COM+ 2.0—This was one of the pre-release names for the original Microsoft .NET
Framework. See also Web Services Platform.
COM Callable Wrapper (CCW)—A metadata wrapper that allows COM components
to access managed .NET objects. The CCW is generated at runtime when a COM
client loads a .NET object. The .NET assembly must first be registered using the
Assembly Registration Tool. See Runtime Callable Wrapper (RCW).
Common Type System (CTS)—The .NET Framework specification which defines the
rules of how the Common Language Runtime defines, declares, and manages types,
regardless of the programming language. All .NET components must comply with the
CTS specification.
Data provider—A set of classes in the .NET Framework that allow access to the
information in a data source. The data may be located in a file, in the Windows registry,
or in any type of database server or network resource. A .NET data provider also
allows information in a data source to be accessed as an ADO.NET DataSet.
Programmers may also author their own data providers for use with the .NET
Framework. See Managed providers.
Derived class—A class that was created based on a previously existing class (i.e.,
base class). A derived class inherits all of the member variables and methods of the
base class it is derived from. Also called a derived type.
DOM (Document Object Model)—A programming interface that allows HTML pages
and XML documents to be created and modified as if they were program objects.
DOM makes the elements of these documents available to a program as data
structures, and supplies methods that may be invoked to perform common
operations upon the document's structure and data. DOM is both platform- and
language-neutral and is a standard of the World Wide Web Consortium (W3C).
DISCO—A Microsoft-created XML protocol used for discovering Web Services. Much
of DISCO is now a subset of the newer, more universal UDDI protocol. It is expected
that DISCO will become obsolete in favor of UDDI.
DTD (Document Type Definition)—A document defining the format of the contents
present between the tags in an HTML, XML, or SGML document, and how the content
should be interpreted by the application reading the document. Applications will use
a document's DTD to properly read and display a document's contents. Changes in
the format of the document can be easily made by modifying the DTD.
Everett—The pre-release code name of Visual Studio .NET 2003. Everett offers
increased performance over Visual Studio .NET 1.0, integration with Windows Server
2003 and SQL Server 2003 (Yukon), extended support for XML Web services, MS
Office programmability (the Visual Studio Tools for Office Development), improved
migration tools for VB6 code, new managed data providers for Oracle and ODBC, and
the addition of the Enterprise Instrumentation Framework (EIF) and mobile device
support in the form of the .NET Compact Framework.
Framework Class Library (FCL)—The collective name for the thousands of classes
that compose the .NET Framework. The services provided by the FCL include runtime
core functionality (basic types and collections, file and network I/O, accessing system
services, etc.), interaction with databases, consuming and producing XML, and
support for building Web-based (Web Form) and desktop-based (Windows Form)
client applications, and SOAP-based XML Web services.
GDI (Graphics Device Interface)—A Win32 API that provides Windows applications
the ability to access graphical device drivers for displaying 2D graphics and formatted
text on both the video and printer output devices. GDI (pronounced "gee dee eye") is
found on all versions of Windows. See GDI+.
GDI+ (Graphics Device Interface Plus)—The next generation graphics subsystem for
Windows. GDI+ (pronounced "gee dee eye plus") provides a set of APIs for rendering
2D graphics, images, and text, and adds new features and an improved
programming model not found in its predecessor GDI. GDI+ is found natively in
Windows XP and the Windows Server 2003 family, and as a separate installation for
Windows 2000, NT, 98, and ME. GDI+ is currently the only drawing API used by
the .NET Framework.
Global Assembly Cache (GAC)—A machine-wide repository used to store the shared
assemblies available to all of the .NET applications on a specific machine. The GAC is
necessary for side-by-side execution and for the sharing of assemblies among
multiple applications. To reside in the GAC, an assembly must be public (i.e., a
shared assembly) and have a strong name. Assemblies are added and removed from
the GAC using the Global Assembly Cache Tool.
Heap—An area of memory reserved for use by the CLR for a running program.
In .NET languages, reference types are allocated on the heap. See Stack.
Indigo —The code name for the communications portion of Longhorn that is built
around Web services. This communications technology focuses on areas spanning
transports, security, messaging patterns, encoding, networking and hosting, and
more.
Indexer—A CLR language feature that allows array-like access to the properties of
an object using getter and setter methods and an index value. This construct is
identical to operator[] in C++. See Property.
Inheritance—The ability of a class to be created from another class. The new class,
called a derived class or subclass, is an exact copy of the base class or superclass
and may extend the functionality of the base class by both adding additional types
and methods and overriding existing ones.
Isolated storage—A data storage mechanism used by the CLR to ensure isolation
and type safety by defining standardized ways of associating code with saved data.
Data contained in isolated storage is always identified by user and by assembly,
rather than by an address in memory, or the name and path of a file on disk. Other
forms of security credentials, such as the application domain, can also be used to
identify the isolated data.
Isolated storage tool—A .NET programming tool (Storeadm.exe) used to list and
remove all existing stores for the current user. See Isolated storage.
J2SE (Java 2 Standard Edition)—A Java-based, runtime platform that provides many
features for developing Web-based Java applications, including database access
(JDBC API), CORBA interface technology, and security for both local network and
Internet use. J2SE is the core Java technology platform and is a competitor to the
Microsoft .NET Framework.
Java Virtual Machine (JVM)—A component of the Java runtime environment that
JIT-compiles Java bytecodes, manages memory, schedules threads, and interacts
with the host operating environment (e.g., a Web browser running the Java
program). The JVM is the Java equivalent of the .NET Framework's CLR.
Just In Time (JIT)—The concept of only compiling units of code just as they are
needed at runtime. The JIT compiler in the CLR compiles MSIL instructions to native
machine code as a .NET application is executed. The compilation occurs as each
method is called; the JIT-compiled code is cached in memory and is not
recompiled during the program's execution.
Local assembly cache—The assembly cache that stores the compiled classes and
methods specific to an application. Each application directory contains a \bin
subdirectory which stores the files of the local assembly cache. Also called the
application assembly cache. See Global Assembly Cache.
Locale—A collection of rules and data specific to a spoken and/or written language
and/or a geographic area. Locale information includes human languages, date and
time formats, numeric and monetary conventions, sorting rules, cultural and regional
contexts (semantics), and character classification. See Localization.
Make Utility—A .NET programming tool (nmake.exe) used to interpret script files
(i.e., makefiles) that contain instructions that detail how to build applications, resolve
file dependency information, and access a source code control system. Microsoft's
nmake program has no relation to the nmake program originally created by AT&T
Bell Labs and now maintained by Lucent. Although identical in name and purpose
these two tools are not compatible. See Lucent nmake Web site.
Managed data—Memory that is allocated and released by the CLR using Garbage
Collection. Managed data can only be accessed by managed code.
Managed execution—The process used by the CLR to execute managed code. Each
time a method in an object is called for the first time, its MSIL-encoded instructions
are JIT-compiled to the native code of the processor. Each subsequent time the same
method is called, the previously JIT-compiled code is executed. Compilation and
execution continue until the program terminates.
Managed pointer types—An object reference that is managed by the CLR. Used to
point to unmanaged data, such as COM objects and some parameters of Win32 API
functions.
Member variables—Typed memory locations used to store values. Also called fields.
Metadata—All information used by the CLR to describe and reference types and
assemblies. Metadata is independent of any programming language, and is an
interchange medium for program information between tools (e.g., compilers and
debuggers) and execution environments. See MSIL.
Method—A function defined within a class. Methods (along with events) define the
behavior of an object.
MSDE 2000 (Microsoft Data Engine)—A lightweight release of the SQL Server 7.0
data engine. The MSDE is used as a relational data store on many Microsoft
products, including BizTalk Server 2000, Host Integration Server 2000, SQL Server
2000, Visual Studio .NET, and the .NET Framework. The MSDE is a modern replacement
for the older Microsoft Jet database technology.
.NET Compact Framework—A port of the .NET Framework to Windows CE, allowing
embedded and mobile devices to run .NET applications. See Smart Device
Extensions.
NGWS—Next Generation Web Service—This was one of the pre-release names for
.NET before its release.
Object type—The most fundamental base type (System.Object) that all other .NET
Framework types are derived from.
Orcas—The code name for the version of Visual Studio .NET to be released near the
time Microsoft Longhorn is released. This follows the release of Visual Studio .NET
Whidbey.
Pre-JIT compiler—Another name for the Native Image Generator tool (Ngen.exe) used to
convert MSIL and metadata assemblies to native machine code executables.
Private assembly—An assembly that is used only by a single application. A private
assembly will run only with the application with which it was built and deployed.
References to the private assembly will only be resolved locally to the application
directory it is installed in. See Shared assembly.
Pointer—A variable that contains the address of a location in memory. The location
is the starting point of an allocated object, such as an object or value type, or the
element of an array.
Portable Executable (PE) file—The file format defining the structure that all
executable files (EXE) and Dynamic Link Libraries (DLL) must use to allow them to be
loaded and executed by Windows. PE is derived from the Microsoft Common Object
File Format (COFF). The EXE and DLL files created using the .NET Framework obey
the PE/COFF formats and also add additional header and data sections to the files
that are only used by the CLR. The specification for the PE/COFF file formats is
available at www.microsoft.com/hwdev/hardware/PECOFF.asp.
Pre-defined types—Types defined by the CLR in the System namespace. The pre-
defined value types are integer, floating point, decimal, character, and boolean
values. Pre-defined reference types are object and string references. See User-
defined types.
Property—A CLR language feature that allows the value of a member variable
to be read and modified through getter and setter methods defined in a class or structure. See
Indexer.
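A minimal C# sketch (the class and property names are hypothetical):
public class Employee
{
    private string name;        // backing member variable

    public string Name          // property exposing the field through a getter and setter
    {
        get { return name; }
        set { name = value; }
    }
}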
R2—The codename for the Windows Server 2003 Update due in 2005.
Seamless Computing—A term indicating that a user should be able to find and use
information effortlessly. The hardware and software within a system should work in
an intuitive manner to make it seamless for the user. Seamless computing is being
realized with the improvements in hardware (voice, ink, multimedia) and software.
Shared name utility—A .NET programming tool (Sn.exe) used to verify assemblies
and their key information and to generate key files. This utility is also used to create
strong names for assemblies.
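Typical invocations (the file and assembly names are placeholders):
sn -k MyKeyPair.snk      (generate a new public/private key pair file)
sn -v MyAssembly.dll     (verify the strong name signature on an assembly)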
Side-by-Side Execution—Running multiple versions of the same assembly
simultaneously on the same computer, or even in the same process. Assemblies
must be specifically (and carefully) coded to make use of side-by-side execution.
Smart Device Extensions (SDE)—An installable SDK that allows Visual Studio .NET
1.0 to be used for developing .NET applications for the Pocket PC and other handheld
devices that support the Microsoft Windows CE .NET operating system and the
Microsoft .NET Compact Framework. SDE will be fully integrated into Visual Studio
.NET 2003.
Stack—An area of program memory used to store local program variables, method
parameters, and return values. In .NET languages, value types are allocated on the
stack. See Heap.
Static fields—Member variables that are associated with a type rather than with an
instance of the type. Static fields may be accessed without first instantiating their
associated type.
Starlite—A code name for the original Microsoft .NET Compact Framework.
Static methods—Methods that are associated with a type rather than with an
instance of the type. Static methods may be called without first
instantiating their associated type.
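A small C# sketch covering both of the preceding entries (the type is hypothetical):
public class Counter
{
    public static int Total;                      // static field: one copy shared by the whole type

    public static void Increment() { Total++; }   // static method: callable without an instance
}

Usage, with no Counter object ever created: Counter.Increment(); int n = Counter.Total;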
Strong name—An assembly name that is globally unique among all .NET
assemblies. A public key encryption scheme is used to create a digital signature to
ensure that the strong name is truly different from all other names created at any time
and anywhere in the known universe. The digital signature also makes it possible to
authenticate who created the assembly and to validate that
the assembly hasn't been corrupted or tampered with. Strong names are created
using the Shared name utility.
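A hedged sketch of the .NET 1.x convention for signing an assembly from AssemblyInfo.cs (the key file name is a placeholder, generated beforehand with the Shared name utility):
using System.Reflection;

// Point the compiler at the key pair produced by "sn -k MyKeyPair.snk".
[assembly: AssemblyKeyFile("MyKeyPair.snk")]
[assembly: AssemblyVersion("1.0.0.0")]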
Strongly-typed—A programming language is said to be strongly-typed when it pre-
defines specific primitive data types, requires that all constants and variables be
declared of a specific type, and enforces their proper use by imposing rigorous rules
upon the programmer for the sake of creating robust code that is consistent in its
execution.
Structure—In .NET languages, structures are lightweight classes that are simpler,
have less overhead, and are less demanding on the CLR. Structures are typically
used for creating user-defined types that contain only public fields and no properties
(identical to structures in the C language). But .NET structures, like classes, also
support properties, access modifiers, constructors, methods, operators, nested
types, and indexers. Unlike classes, however, structures do not support inheritance,
parameterless custom constructors, destructors (or a Finalize method), or compile-time
initialization of instance fields. It is important to note that a structure is a value type,
while a class is a reference type. Performance will suffer when a structure is used in a
situation where a reference is expected (e.g., in a collection), because the structure must
be boxed and unboxed each time it is used.
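A minimal C# structure (the type is hypothetical):
public struct Point
{
    public int X;
    public int Y;

    // A parameterized constructor is allowed; a parameterless one is not.
    public Point(int x, int y) { X = x; Y = y; }
}

Assigning one Point variable to another copies the data itself, not a reference to it.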
Types—A set of data and function members that are combined to form the modular
units used to build .NET applications. Pre-defined types exist within the CLR and
user-defined types are created by programmers. Types include enumerations,
structures, classes, standard modules, interfaces, and delegates. See Type
members.
Type library—A compiled file (.tlb) containing metadata that describes interfaces
and data types. Type libraries can be used to describe vtable interfaces, regular
functions, COM components, and DLL modules. Type libraries are compiled from
Interface Definition Language (IDL) files using the MIDL compiler.
Unmanaged data—Data (i.e. memory) that is allocated outside of the control of the
CLR. Unmanaged data can be accessed by both managed and unmanaged code.
Unmanaged pointer types—Any pointer type that is not managed by the CLR. That
is, a pointer that stores a reference to an unmanaged object or area of memory.
Unsafe—Same as unmanaged.
Value types—A variable that stores actual data rather than a reference to data,
which is stored elsewhere in memory. Simple value types include the integer, floating
point number, decimal, character, and boolean types. Value types have minimal
memory overhead and are the fastest to access. See Reference types, Pointer types.
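A small illustration of value (copy) semantics:
int a = 5;     // the variable holds the data itself
int b = a;     // b receives an independent copy of the value
b = 7;         // changing b does not affect a; a is still 5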
Variable—A typed storage location in memory. The type of the variable determines
what kind of data it can store. Examples of variables include local variables,
parameters, array elements, static fields and instance fields. See Types.
Vienna—Code name for the Microsoft Office Live Communications Server 2005 (LCS
2005) beta.
Visual C++ .NET—A Microsoft-supported language for .NET Framework. Visual C++
.NET allows developers to use the C++ language to write managed applications, and
to easily migrate legacy C++ code to the .NET Framework. Code written in Visual
C++ .NET is also referred to as managed C++; code written in the legacy Visual
C++ language is sometimes referred to as unmanaged C++.
Visual Studio .NET (VS .NET)—A full-featured, Integrated Development
Environment (IDE) created by Microsoft for the development of .NET applications. VS
.NET makes a better alternative to Visual Notepad for creating .NET applications.
Officially called Microsoft Visual Studio .NET 2002.
Visual Studio .NET 2003 (VS .NET)—The second version of Visual Studio .NET that
was also known by the code name Everett.
Visual Studio 2005(VS .NET)—The third version of Visual Studio .NET that was also
known by the code name Whidbey. This version is due to release in 2005.
Visual Studio Team System 2005 (VS .NET)—A high-end SKU of Visual Studio
2005 that includes enterprise-level tools and more. The code name for this
product was "Burton".
The Web Matrix Project—A free WYSIWYG development product (IDE) for doing
ASP.NET development that was released as a community project. The most recent
version—The Web Matrix Project (Revisited)—can be found here.
Web service—An application hosted on a Web server that provides information and
services to other network applications using the HTTP and XML protocols. A Web
service is conceptually a URL-addressable library of functionality that is completely
independent of the consumer and stateless in its operation.
Web service consumer—An application that uses Internet protocols to access the
information and functionality made available by a Web service provider. See Web
service.
Web Service Platform—This was one of the pre-release names for the original
Microsoft .NET Framework. See also COM+ 2.0.
Whidbey—The pre-release code name for the "next generation" release of Visual
Studio after Everett and prior to Longhorn.
Whitehorse—The code name for the set of modeling tools included in Microsoft Visual
Studio 2005 ("Whidbey"). See An Overview of Microsoft's Whitehorse.
Windows .NET Server 2003—The original name of Windows Server 2003. The
".NET" was dropped as part of an attempt to remarket the concept of .NET not as a
product, but instead as a business strategy.
Windows Server 2003—The next generation of Windows 2000 Server that offers
tighter integration with the .NET Framework, and greater support for Web services
using Internet Information Server 6.0 and XML and UDDI services. This product was
formerly known as Windows .NET Server 2003.
WinFS—("Windows Future System") The code name for the new type-aware,
transactional, unified file system and programming model that will be a key part of
Longhorn. WinFS allows various kinds of data and information stored on your
machine to be associated and categorized. You can associate relationships between
information and these associations can be used to access what is stored on your
machine.
WinFX—The new Windows API that will be released with the Microsoft Longhorn
Operating System. This will include features for Avalon, Indigo, and WinFS as well as
a number of fundamental routines.
XCOPY—An MS-DOS file copy program used to deploy .NET applications. Because
.NET assemblies are self-describing and not bound to the Windows registry as COM-
based applications are, most .NET applications can be installed by simply being copied
from one location (e.g., directory, machine, CD-ROM, etc.) to another. Applications
requiring more complex tasks to be performed during installation require the use of
the Microsoft Windows Installer.
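A typical deployment command (the paths are placeholders); /E copies all subdirectories, including empty ones, and /I treats the destination as a directory:
xcopy C:\build\MyApp D:\deploy\MyApp /E /I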
XDR (XML Data-Reduced)—A reduced version of XML Schema used prior to the
release of XML Schema 1.0.
XLink (XML Linking Language)—A language that allows links to other resources to
be embedded in XML documents, similar to the hyperlinks found in HTML Web
pages. See the document XML Linking Language (XLink) Version 1.0.
XML Web services—Web-based .NET applications that provide services (i.e., data
and functionality) to other Web-based applications (i.e. Web service consumers).
XML Web services are accessed via standard Web protocols and data formats such as
HTTP, XML, and SOAP.
XPath (XML Path Language)—A language that uses path expressions to specify the
locations of structures and data within an XML document. XPath information is
processed using XSLT or XPointer. See the document XML Path Language (XPath)
Version 1.0.
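A hedged C# sketch using the System.Xml classes (the XML content is made up):
using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<books><book price='5'/><book price='15'/></books>");

        // XPath: select every <book> element whose price attribute exceeds 10.
        XmlNodeList expensive = doc.SelectNodes("//book[@price > 10]");
        Console.WriteLine(expensive.Count);   // prints 1
    }
}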
XPointer (XML Pointer Language)—A language that supports addressing into the
internal structures of XML documents. XPointer allows traversal of an XML
document tree and selection of its internal parts based on element types, attribute
values, character content, and relative position. XPointer is based on the XML Path
Language (XPath). See the document XML Pointer Language (XPointer).
XSD (XML Schema Definition)—A language used to describe the structure of an XML
document. XSD is used to define classes that are in turn used to create instances of
XML documents which conform to the schema. See the document XML Schema Part
0: Primer.
XSL (eXtensible Stylesheet Language)—A language used for creating stylesheets for
XML documents. XSL consists of languages for transforming XML documents (XPath
and XSLT) and an XML vocabulary for specifying formatting semantics. See the
document Extensible Stylesheet Language (XSL) Version 1.0.
XQL (XML Query Language)—A query language used to extract data from XML
documents. XQL uses XML as a data model and is very similar to the pattern
matching semantics of XSL. See the document XML Query Language (XQL).
Yukon—The code name for the release of Microsoft SQL Server 2003 (a.k.a., SQL Server 9).
Yukon offers a tighter integration with both the .NET Framework and the Visual Studio .NET IDE.
Yukon will include full support for ADO.NET and the CLR, allowing .NET languages to
be used for writing stored procedures.
Z
No entries.