

What is page thrashing?

Some operating systems (such as UNIX or Windows in enhanced mode) use virtual
memory. Virtual memory is a technique for making a machine behave as if it had
more memory than it really has, by using disk space to simulate RAM (random-access
memory). The 80386 and higher Intel CPU chips, like most other modern
microprocessors (such as the Motorola 68030, SPARC, and PowerPC), contain a piece
of hardware called the Memory Management Unit, or MMU.
The MMU treats memory as if it were composed of a series of pages. A page of
memory is a block of contiguous bytes of a certain size, usually 4096 or 8192 bytes.
The operating system sets up and maintains a table for each running program
called the Process Memory Map, or PMM. This is a table of all the pages of memory
that program can access and where each is really located.
Every time your program accesses any portion of memory, the address (called a
virtual address) is processed by the MMU. The MMU looks in the PMM to find out
where the memory is really located (called the physical address). The physical
address can be any location in memory or on disk that the operating system has
assigned for it. If the location the program wants to access is on disk, the page
containing it must be read from disk into memory, and the PMM must be updated
to reflect this action (this is called a page fault).
Because accessing the disk is so much slower than accessing RAM, the operating
system tries to keep as much of the virtual memory as possible in RAM. If you're
running a large enough program (or several small programs at once), there might
not be enough RAM to hold all the memory used by the programs, so some of it
must be moved out of RAM and onto disk (this action is called paging out). The
operating system tries to guess which areas of memory aren't likely to be used for a
while (usually based on how the memory has been used in the past). If it guesses
wrong, or if your programs are accessing lots of memory in lots of places, many
page faults will occur in order to read in the pages that were paged out. Because all
of RAM is being used, for each page read in to be accessed, another page must be
paged out. This can lead to more page faults, because now a different page of
memory has been moved to disk.
The problem of many page faults occurring in a short time, called page thrashing,
can drastically cut the performance of a system. Programs that frequently access
many widely separated locations in memory are more likely to cause page thrashing
on a system. So is running many small programs that all continue to run even
when you are not actively using them. To reduce page thrashing, you can run fewer
programs simultaneously. Or you can try changing the way a large program works
to maximize the capability of the operating system to guess which pages won't be
needed. You can achieve this effect by caching values or changing lookup
algorithms in large data structures, or sometimes by changing to a memory
allocation library which provides an implementation of malloc() that allocates
memory more efficiently. Finally, you might consider adding more RAM to the
system to reduce the need to page out.
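As a rough illustration of how access patterns affect paging (the array and page
sizes here are assumptions made for the example), the same work can have very
different locality:

#define ROWS 4096
#define COLS 4096              /* 64 MB of ints: large enough to span many pages */

/* Row-by-row traversal: consecutive accesses stay within the same page,
   so each page needs to be faulted in at most once. */
long sum_by_rows(int (*a)[COLS])
{
    long sum = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];
    return sum;
}

/* Column-by-column traversal: successive accesses jump to a different row,
   and therefore usually a different page, which is far more likely to force
   pages out and back in when RAM is scarce. */
long sum_by_columns(int (*a)[COLS])
{
    long sum = 0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum += a[i][j];
    return sum;
}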

How do you override a defined macro?


You can use the #undef preprocessor directive to undefine (override) a previously
defined macro.
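For example (a minimal sketch), a macro can be removed and then redefined with a
new value:

#define BUFFER_SIZE 512
/* ... code that uses the old value ... */
#undef BUFFER_SIZE            /* remove the previous definition */
#define BUFFER_SIZE 1024      /* redefine it with a new value */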

How can you check to see whether a symbol is defined?


You can use the #ifdef and #ifndef preprocessor directives to check whether a
symbol has been defined (#ifdef) or whether it has not been defined (#ifndef).
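For example, testing a hypothetical DEBUG symbol:

#ifdef DEBUG
    printf("built with debugging enabled\n");
#endif

#ifndef DEBUG
    /* this branch is compiled only when DEBUG has not been defined */
#endif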

Can you define which header file to include at compile time?


Yes. This can be done by using the #if, #else, and #endif preprocessor directives.
For example, certain compilers use different names for header files. One such case
is Borland C++, which uses the header file alloc.h, and Microsoft C++, which uses
the header file malloc.h. Both of these headers serve the same purpose, and each
contains roughly the same definitions. If, however, you are writing a program that
is to support both Borland C++ and Microsoft C++, you must define which header to
include at compile time. The following example shows how this can be done:
#ifdef __BORLANDC__
#include <alloc.h>
#else
#include <malloc.h>
#endif

Can a variable be both const and volatile?


Yes. The const modifier means that this code cannot change the value of the
variable, but that does not mean that the value cannot be changed by means
outside this code. For instance, in the example in FAQ 8, the timer structure was
accessed through a volatile const pointer. The function itself did not change the
value of the timer, so it was declared const. However, the value was changed by
hardware on the computer, so it was declared volatile. If a variable is both const
and volatile, the two modifiers can appear in either order.
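A minimal sketch of the idea (the address and type of the hardware register are
made up for illustration):

unsigned long read_timer(void)
{
    const volatile unsigned long *timer = (const volatile unsigned long *)0xFF00;
    return *timer;    /* const: this code never writes the timer;
                         volatile: the hardware may change it, so re-read it each time */
}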

Can include files be nested?


Yes. Include files can be nested any number of times. As long as you take
precautionary measures (such as the include guards sketched below), you can avoid
including the same file twice. In the past, nesting header files was seen as bad
programming practice, because it complicates the dependency tracking function of
the MAKE program and thus slows down compilation. Many of today's popular
compilers make up for this difficulty by implementing a concept called precompiled
headers, in which all headers and associated dependencies are stored in a
precompiled state.
Many programmers like to create a custom header file that has #include statements
for every header needed for each module. This is perfectly acceptable and can help
avoid potential problems relating to #include files, such as accidentally omitting an
#include file in a module.
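A common precaution is an include guard, which makes a header harmless to include
more than once (the header and macro names here are just examples):

/* mydefs.h */
#ifndef MYDEFS_H
#define MYDEFS_H

/* declarations, macros, and typedefs go here */

#endif /* MYDEFS_H */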

Write the equivalent expression for x%8?


x&7
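Note that the two expressions are guaranteed to agree only for unsigned or
non-negative values; for a negative signed x, x % 8 may be negative while x & 7 is
not. A quick sketch:

unsigned int x = 29;
int same = ((x % 8) == (x & 7));    /* 1: both expressions yield 5 */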

When does the compiler not implicitly generate the address of the first element
of an array?
Whenever an array name appears in an expression such as
- array as an operand of the sizeof operator
- array as an operand of the & operator
- array as a string literal initializer for a character array
then the compiler does not implicitly generate the address of the first element of
the array.
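A short sketch of these contexts:

#include <stddef.h>

void array_contexts(void)
{
    int a[10];

    size_t n = sizeof(a);    /* whole-array size (10 * sizeof(int)), not a pointer size */
    int (*p)[10] = &a;       /* address of the array itself, type int (*)[10] */
    char s[] = "hello";      /* string literal initializing a char array, copied element-wise */

    int *q = a;              /* in most other expressions the name decays to &a[0] */
    (void)n; (void)p; (void)s; (void)q;
}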

What is the benefit of using #define to declare a constant?


Using the #define method of declaring a constant enables you to declare a constant
in one place and use it throughout your program. This helps make your programs
more maintainable, because you need to maintain only the #define statement and
not several instances of individual constants throughout your program.
For instance, if your program used the value of pi (approximately 3.14159) several
times, you might want to declare a constant for pi as follows:
#define PI 3.14159
Using the #define method of declaring a constant is probably the most familiar way
of declaring constants to traditional C programmers. Besides being the most
common method of declaring constants, it also takes up the least memory.
Constants defined in this manner are simply placed directly into your source code,
with no variable space allocated in memory. Unfortunately, this is one reason why
most debuggers cannot inspect constants created using the #define method.
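For instance, a sketch of using such a constant in several places:

#define PI 3.14159

double circle_area(double r)
{
    return PI * r * r;       /* the preprocessor substitutes 3.14159 here */
}

double circle_circumference(double r)
{
    return 2.0 * PI * r;     /* change the one #define and both functions follow */
}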

How can I search for data in a linked list?


Unfortunately, the only way to search a linked list is with a linear search, because
the only way a linked list's members can be accessed is sequentially. Sometimes it
is quicker to take the data from a linked list and store it in a different data
structure so that searches can be more efficient.
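A minimal sketch of such a linear search (the node layout is an assumption for the
example):

#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Walk the list from the head until the value is found or the list ends. */
struct node *find(struct node *head, int target)
{
    for (struct node *p = head; p != NULL; p = p->next)
        if (p->value == target)
            return p;
    return NULL;    /* not found */
}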

Why should we assign NULL to the elements (pointer) after freeing them?
This is paranoia based on long experience. After a pointer has been freed, you can
no longer use the pointed-to data. The pointer is said to dangle; it doesn't point at
anything useful. If you NULL out or zero out a pointer immediately after freeing it,
your program can no longer get in trouble by using that pointer. True, you might go
indirect on the null pointer instead, but that's something your debugger might be
able to help you with immediately. Also, there still might be copies of the pointer
that refer to the memory that has been deallocated; that's the nature of C. Zeroing
out pointers after freeing them won't solve all problems.
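A minimal sketch of the habit:

#include <stdlib.h>

void example(void)
{
    char *buf = malloc(128);
    if (buf == NULL)
        return;
    /* ... use buf ... */
    free(buf);
    buf = NULL;    /* any accidental later use through buf now hits a null pointer,
                      which a debugger can catch immediately */
}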

What is a null pointer assignment error? What are bus errors, memory faults,
and core dumps?
These are all serious errors, symptoms of a wild pointer or subscript.
Null pointer assignment is a message you might get when an MS-DOS program
finishes executing. Some such programs can arrange for a small amount of memory
to be available where the NULL pointer points to (so to speak). If the program tries
to write to that area, it will overwrite the data put there by the compiler.
When the program is done, code generated by the compiler examines that area. If
that data has been changed, the compiler-generated code complains with null
pointer assignment.
This message carries only enough information to get you worried. There's no way to
tell, just from a null pointer assignment message, what part of your program is
responsible for the error. Some debuggers, and some compilers, can give you more
help in finding the problem.
Bus error: core dumped and Memory fault: core dumped are messages you might
see from a program running under UNIX. They're more programmer-friendly. Both
mean that a pointer or an array subscript was wildly out of bounds. You can get
these messages on a read or on a write. They aren't restricted to null pointer
problems.
The core dumped part of the message is telling you about a file, called core, that
has just been written in your current directory. This is a dump of everything on the
stack and in the heap at the time the program was running. With the help of a
debugger, you can use the core dump to find where the bad pointer was used.
That might not tell you why the pointer was bad, but it's a step in the right
direction. If you don't have write permission in the current directory, you won't get
a core file, or the core dumped message.
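For illustration, a sketch of the kind of code that typically produces these
symptoms (both marked lines are undefined behavior and will usually crash):

#include <string.h>

void crash_examples(void)
{
    char *p = NULL;
    strcpy(p, "hello");    /* write through a null pointer: "Null pointer assignment"
                              under MS-DOS, "Memory fault" or "Bus error" under UNIX */

    int a[10];
    a[100000] = 42;        /* wildly out-of-bounds subscript */
}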
