Kernel Internals Student Notes
Front cover
Student Notebook
ERC 4.0
Trademarks

The reader should recognize that the following terms, which appear in the content of this training document, are official trademarks of IBM or other companies: IBM is a registered trademark of International Business Machines Corporation. The following are trademarks or registered trademarks of International Business Machines Corporation in the United States, or other countries, or both: AIX, AIX 5L, AS/400, Chipkill, DB2, DFS, Electronic Service Agent, IBM, iSeries, LoadLeveler, NUMA-Q, PowerPC, pSeries, PTX, RS/6000, S/370, Sequent, SP, zSeries.
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both. Intel is a trademark of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States and other countries. Other company, product and service names may be trademarks or service marks of others.
V2.0.0.3
Contents
Trademarks ... ix
Course Description ... xi
Agenda ... xiii

Unit 1. Introduction to the AIX 5L Kernel ... 1-1
    Unit Objectives ... 1-2
    Operating System and the Kernel ... 1-3
    Kernel Components ... 1-5
    Address Space ... 1-7
    Mode and Context ... 1-9
    Context Switches ... 1-11
    Interrupt Processing ... 1-13
    AIX 5L Kernel Characteristics ... 1-16
    AIX 5L Execution Environment ... 1-18
    System Header Files ... 1-20
    Conditional Compile Values ... 1-22
    Checkpoint ... 1-24
    Exercise ... 1-25
    Unit Summary ... 1-26

Unit 2. Kernel Analysis Tools ... 2-1
    Unit Objectives ... 2-2
    What tools will you be using in this class? ... 2-3
    The Major Functions of KDB ... 2-4
    Enabling the Kernel Debugger ... 2-6
    Verifying the Debugger is Enabled ... 2-8
    Starting the Debugger ... 2-9
    System Dumps ... 2-10
    kdb ... 2-13
    Checkpoint ... 2-15
    Exercise ... 2-16
    Unit Summary ... 2-17

Unit 3. Process Management ... 3-1
    Unit Objectives ... 3-2
    Parts of a Process ... 3-3
    Threads ... 3-5
    1:1 Thread Model ... 3-7
    M:1 Thread Model ... 3-8
    M:N Thread Model ... 3-9
    Creating Processes ... 3-11
    Creating Threads ... 3-13
    Process State Transitions ... 3-15
    The Process Table ... 3-18
    pvproc ... 3-20
    pv_stat ... 3-21
    Table Management ... 3-22
    Extending the pvproc ... 3-24
    PID Format ... 3-26
    Finding the Slot Number ... 3-28
    Kernel Processes ... 3-29
    Thread Table ... 3-31
    pvthread Elements ... 3-33
    TID Format ... 3-34
    u-block ... 3-35
    Six Structures ... 3-37
    Thread Scheduling Topics ... 3-39
    Thread State Transitions ... 3-40
    Thread Priority ... 3-43
    Run Queues ... 3-45
    Dispatcher and Scheduler Functions ... 3-46
    Dispatcher ... 3-47
    Scheduler ... 3-48
    Preemption ... 3-49
    Preemptive Kernels ... 3-51
    Scheduling Algorithms ... 3-53
    SMP - Multiple Run Queues ... 3-56
    NUMA ... 3-58
    Memory Affinity ... 3-60
    Global Run Queues ... 3-62
    Checkpoint ... 3-64
    Exercise ... 3-65
    Unit Summary ... 3-66

Unit 4. Addressing Memory ... 4-1
    Unit Objectives ... 4-2
    Memory Management Definitions ... 4-3
    Pages and Frames ... 4-4
    Address Space ... 4-6
    Translating Addresses ... 4-8
    Segments ... 4-9
    Segment Addressing ... 4-11
    32-bit Hardware Address Resolution ... 4-13
    64-bit Hardware Address Resolution ... 4-15
    Segment Types ... 4-16
    Shared Memory ... 4-19
    shmat Memory Services ... 4-21
    Memory Mapped Files ... 4-23
    32-bit User Address Space ... 4-26
    32-bit Kernel Address Space ... 4-28
iv Kernel Internals Copyright IBM Corp. 2001, 2003
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit 5. Memory Management ... 5-1
    Unit Objectives ... 5-2
    Virtual Memory Management (VMM) ... 5-3
    Object Types ... 5-5
    Demand Paging ... 5-7
    Data Structures ... 5-10
    Hardware Page Mapping ... 5-12
    Page not in Hardware Table ... 5-13
    Page on Paging Space ... 5-15
    External Page Table (XPT) ... 5-16
    Loading Pages From the File System ... 5-18
    Object Type / Backing Store ... 5-20
    Paging Space Management Process ... 5-21
    Paging Space Allocation Policy ... 5-23
    Free Memory ... 5-25
    Clock Hand Algorithm ... 5-27
    Fatal Memory Exceptions ... 5-29
    Checkpoint ... 5-30
    Exercise ... 5-31
    Unit Summary ... 5-32

Unit 6. Logical Partitioning ... 6-1
    Unit Objectives ... 6-2
    Partitioning ... 6-3
    Physical Partitioning ... 6-5
    Logical Partitioning ... 6-7
    Components Required for LPAR ... 6-9
    Operating System Interfaces ... 6-13
    Virtual Memory Manager ... 6-14
    Real Address Range ... 6-15
    Real Mode Memory ... 6-17
    Operating System Real Mode Issues ... 6-19
    Address Translation ... 6-21
    Allocating Physical Memory ... 6-23
    Partition Page Tables ... 6-25
    Translation Control Entries ... 6-27
    Hypervisor ... 6-29
    Dividing Physical Memory ... 6-31
    Checkpoint ... 6-33
    Unit Summary ... 6-34

Unit 7. LFS, VFS and LVM ... 7-1
    Unit Objectives ... 7-2
    What is the Purpose of LFS/VFS? ... 7-3
    Kernel I/O Layers ... 7-5
    Major Data Structures ... 7-7
    Logical File System Structures ... 7-9
    User File Descriptor ... 7-11
    The file Structure ... 7-13
    vnode/vfs Interface ... 7-15
    vnode ... 7-17
    vfs ... 7-19
    root (/) and usr File Systems ... 7-21
    vmount ... 7-23
    File and File System Operations ... 7-25
    gfs ... 7-27
    vnodeops ... 7-29
    vfsops ... 7-31
    gnode ... 7-33
    kdb devsw Subcommand Output ... 7-35
    kdb volgrp Subcommand Output ... 7-37
    AIX lsvg Command Output ... 7-39
    kdb lvol Subcommand Output ... 7-40
    AIX lslv Command Output ... 7-44
    kdb pvol Subcommand Output ... 7-46
    AIX lspv Command Output ... 7-48
    Checkpoint (1 of 2) ... 7-49
    Checkpoint (2 of 2) ... 7-50
    Exercise ... 7-51
    Unit Summary ... 7-52

Unit 8. Journaled File System ... 8-1
    Unit Objectives ... 8-2
    JFS File System ... 8-3
    Reserved Inodes ... 8-7
    Disk Inode Structure ... 8-9
    In-core Inodes ... 8-11
    Direct (No Indirect Blocks) ... 8-15
    Single Indirect ... 8-17
    Double Indirect ... 8-18
    Checkpoint ... 8-19
    Unit Summary ... 8-20

Unit 9. Enhanced Journaled File System ... 9-1
    Unit Objectives ... 9-2
    Numbers ... 9-3
    Aggregate and Fileset ... 9-4
    Aggregate ... 9-6
    Allocation Group ... 9-9
    Fileset ... 9-11
    Inode Allocation Map ... 9-13
    Extents ... 9-14
    Increasing an Allocation ... 9-16
    Binary Tree of Extents ... 9-18
    Inodes ... 9-20
    Inline Data ... 9-26
    Binary Trees ... 9-27
    More Extents ... 9-28
    Continuing to Add Extents ... 9-29
    Another Split ... 9-30
    fsdb Utility ... 9-32
    Exercise ... 9-34
    Directory ... 9-35
    Directory Root Header ... 9-37
    Directory Slot Array ... 9-39
    Small Directory Example ... 9-41
    Adding a File ... 9-42
    Adding a Leaf Node ... 9-43
    Adding an Internal Node ... 9-44
    Checkpoint ... 9-45
    Exercise ... 9-46
    Unit Summary ... 9-47
Unit 10. Kernel Extensions ... 10-1
    Unit Objectives ... 10-2
    Kernel Extensions ... 10-3
    Relationship With the Kernel Nucleus ... 10-5
    Global Kernel Name Space ... 10-6
    Why Export Symbols? ... 10-9
    Kernel Libraries ... 10-11
    Configuration Routines ... 10-13
    Compiling and Linking Kernel Extensions ... 10-15
    How to Build a Dual Binary Extension ... 10-19
    Loading Extensions ... 10-21
    sysconfig() - Loading and Unloading ... 10-22
    sysconfig() - Configuration ... 10-23
    sysconfig() - Device Driver Configuration ... 10-24
    The loadext() Routine ... 10-26
    System Calls ... 10-28
    Sample System Call - Export/Import File ... 10-30
    Sample System Call - question.c ... 10-31
    Sample System Call - Makefile ... 10-32
    Argument Passing ... 10-33
    User Memory Access ... 10-35
    Checkpoint ... 10-38
    Exercise ... 10-39
    Unit Summary ... 10-40
Appendix A. Checkpoint Solutions ... A-1

Appendix B. KI Crash Dump ... B-1
    Unit Objectives ... B-2
    Crash Dumps ... B-3
    Process Flow ... B-5
    About This Exercise ... B-6
Course Description
AIX 5L Kernel Internals Concepts

Duration: 5 days

Purpose
This is a course in basic AIX 5L Kernel concepts. It is designed to provide background information useful to support engineers and AIX development/application engineers who are new to the AIX 5L Kernel environment as implemented in AIX releases 5.1 and 5.2. This course also provides background knowledge helpful for those planning to attend the AIX 5L Device Driver (Q1330) course.
Audience
AIX technical support personnel
Application developers who want to achieve a conceptual understanding of AIX 5L Kernel Internals
Prerequisites
Students are expected to have programming knowledge in the C programming language, a working knowledge of AIX system calls, and user-level working knowledge of AIX/UNIX, including editors, shells, pipes, and Input/Output (I/O) redirection. Additionally, basic system administration skills are required, such as the use of SMIT, configuring file systems, and configuring dump devices. These skills can be obtained by attending the following courses or through equivalent experience:

Introduction to C Programming - AIX/UNIX (Q1070)
AIX 5L System Administration II: Problem Determination (AU16/Q1316)

In addition, the following courses are helpful:

KornShell Programming (AU23/Q1123)
AIX Application Programming Environment (AU25/Q1125)
Course Description
xi
Student Notebook
Objectives
At the end of this course you will be able to:

List the major features of the AIX 5L kernel
Quickly traverse the system header files to find data structures
Use the kdb command to examine data structures in the memory image of a running system or a system dump
Understand the structures used by the kernel to manage processes and threads, and the relationships between them
Describe the layout of the segmented addressing model, and how logical-to-physical address translation is achieved
Describe the operation of the VMM subsystem and the different paging algorithms
Describe the mechanisms used to implement logical partitioning
Understand the purpose of the logical file system and virtual file system layers and the data structures they use
List and describe the components and functions of the JFS2 and JFS file systems
Identify the steps required to compile, link, and load kernel extensions
Agenda
Day 1
- Welcome
- Unit 1 - Introduction to the AIX 5L Kernel lecture
- Exercise 1 - Introduction to the AIX 5L Kernel
- Unit 2 - Kernel Analysis Tools lecture
- Exercise 2 - Kernel Analysis Tools
Day 2
- Daily review
- Unit 3 - Process Management lecture
- Exercise 3 - Process Management
- Unit 4 - Addressing Memory lecture
Day 3
- Daily review
- Exercise 4 - Addressing Memory
- Unit 5 - Memory Management lecture
- Exercise 5 - Memory Management
- Unit 6 - Logical Partitioning lecture
Day 4
- Daily review
- Unit 7 - LFS, VFS and LVM lecture
- Exercise 6 - LFS, VFS and LVM
- Unit 8 - Journaled File System lecture
- Unit 9 - Enhanced Journaled File System - Topic 1 lecture
- Exercise 7 - Enhanced Journaled File System - Topic 1
- Unit 9 - Enhanced Journaled File System - Topic 2 lecture
- Exercise 8 - Enhanced Journaled File System - Topic 2
Day 5
- Daily review
- Unit 10 - Kernel Extensions lecture
- Exercise 9 - Kernel Extensions
Unit 1. Introduction to the AIX 5L Kernel
References
- The Design of the UNIX Operating System, by Maurice J. Bach, ISBN: 0132017997
- AIX Online Documentation: http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/aix.htm
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit Objectives
At the end of this unit you should be able to:
- Describe the role the kernel plays in an operating system
- Define user and kernel mode and list the operations that can only be performed in kernel mode
- Describe when the kernel must make a context switch
- Describe the role of the mstsave area in a context switch
- Name the execution environments available on each of the platforms supported by AIX 5L
- Using the system header files, identify data element types for each of the available kernels in AIX 5L
BE0070XS4.0
Notes:
Operating System and the Kernel

(Figure: application processes in user space request services from the kernel, which manages hardware such as the CPU and tty devices.)
The kernel is the base program of the operating system. It acts as an intermediary between the application programs and the computer hardware. It provides the system call interface, allowing programs to request use of the hardware. The kernel prioritizes these requests and manages the hardware through its hardware interface.
Kernel Components
(Figure: kernel components. Applications run in user space; the kernel contains the file systems, disk space management (LVM), the I/O subsystem with buffered and raw I/O, process management, and device drivers, sitting above the CPU, disk, and tty hardware.)
Notes: Introduction
The kernel may be broken up into several sections based on the services provided to application programs. Each of these sections is discussed in this class. The kernel components are shown in the visual above.
Process management
The process management function of the kernel is responsible for the creation and termination of processes and threads, along with scheduling threads on CPUs.
Memory management

The memory management portion of the kernel is responsible for system buffering and for keeping track of which process memory is resident in physical memory and which is stored on disk.
I/O subsystem
Parts of the kernel that interact directly with I/O devices are called device drivers. Typically each type of device installed on the system will require its own device driver. Device drivers are covered in detail in a separate class on writing device drivers.
File system
AIX supports several types of file systems including JFS, JFS2, NFS and several CD-ROM file systems. The file system software interacts with the disk space management software. This class covers the JFS and JFS2 file systems.
Address Space
(Figure: processes A, B, and C each have their own address space in user mode, above the shared kernel.)
Notes: Introduction
AIX implements a virtual memory system. Addresses referenced by a user program do not directly reference physical memory; instead they reference a virtual address.
Memory management
Virtual addresses are mapped by the hardware to a physical memory address. Translation tables are used by the hardware to map virtual to physical addresses. The address translation tables are controlled by the kernel. One set of address translation tables is kept for each process. To switch from one process address space to another, the kernel loads the appropriate address translation table into the hardware.

Copyright IBM Corp. 2001, 2003
Interrupt Environment
(Figure: mode/environment combinations. A hardware interrupt always runs in kernel mode; an interrupt running in user mode is an invalid combination.)
Notes: Introduction
Two key concepts, mode and environment, are described in this section.
Mode
The computer hardware provides two modes of execution: a privileged kernel mode and a less-privileged user mode. Application programs must run in user mode and thus are given limited access to the hardware. The kernel, as you would expect, runs in kernel mode. The following table compares these two modes.
User mode:
- Memory access is limited to the user's private memory; kernel memory is not accessible.
- I/O instructions are blocked.
- Hardware registers related to memory management cannot be modified.

Kernel mode:
- All memory on the system can be accessed.
- All I/O is performed in kernel mode.
- Memory management registers may be modified.
- Interrupts must be handled in kernel mode.
Environment
The AIX kernel may execute in one of two environments: process environment or interrupt environment. In process environment, the kernel is running on behalf of a user process. This generally occurs when a user program makes a system call, although it is also possible to create a kernel-mode only process. When the kernel responds to an interrupt, it is running in the interrupt environment. In this context the kernel cannot access the user address space or any kernel data related to the user process that was running on the processor just before the interrupt occurred.
Context Switches
(Figure: a context switch. The CPU's registers, stack pointer, and instruction pointer of the running thread are saved in its mstsave area.)
Notes: Introduction
A context switch is the action of exchanging one thread of execution on a CPU for another.
Thread of execution
Threads of execution are simply logical paths through the instructions of a program. The AIX kernel manages many threads of execution by switching the CPUs between the different threads on the system.
Context switches
Context switches can occur at two points:
a. A hardware interrupt occurs.
b. Execution of the thread is blocked waiting for the completion of an event.
mstsave
The context of the running thread must be saved when a context switch occurs. This context includes information such as the values of the CPU registers, the instruction address register and stack pointer. This information is saved in a structure called the mstsave (machine state save) structure. Each thread of execution has an associated mstsave structure.
Restoring a context
When a thread is restored (switched in), the system register values stored in the mstsave of the thread are loaded into the CPU. The CPU then performs a branch instruction to the address of the saved instruction pointer.
Interrupt Processing
(Figure: interrupt processing. The CPU's current save area (csa) pointer heads a chain of mstsave areas linked back to the thread's own mstsave.)
Notes: Introduction
A hardware interrupt results in a temporary context switch. Each time an interrupt occurs, the current context of the processor must be saved so that processing can be continued after handling the interrupt.
mstsave pool
Interrupts can occur while the CPU is already processing an interrupt; therefore, multiple mstsave areas are needed to save the context of each interrupt. AIX keeps a pool of mstsave areas for this purpose: a thread structure has an mstsave structure of its own, but an interrupt is a transient entity with no thread structure, so it must borrow a save area from the pool.
csa pointer
Each processor has a pointer to the mstsave area it should use when an interrupt occurs. This pointer is called the current save area, or csa pointer.
Interrupt history
When AIX receives an interrupt that is of higher priority than the one it is currently handling it must save the current state in a new mstsave area linking the new save area to the previous one. This forms a history of interrupt processing.
Interrupt processing
Saving context
When an interrupt occurs, AIX takes the following steps to save the currently running context:
1. Save the current context in the mstsave area pointed to by the CPU's csa.
2. Get the next available mstsave area from the pool.
3. Link the just-used mstsave to the new mstsave.
4. Update the CPU's csa pointer to point to the new mstsave area.
AIX 5L Kernel Characteristics
Notes: Introduction
The AIX kernel was the first mainstream UNIX operating system to implement several important features. These features are listed above.
Preemptable
Preemptable means that the kernel can be running in kernel mode (running a system call for example) and be interrupted by another more important task. Preemption causes a context switch to another thread inside the kernel. Many other UNIX kernels will not allow preemption to occur when running in kernel mode. This can result in long delays in the processing of real time threads. AIX improves real time processing by allowing for preemption in kernel mode. As an example, Linux does not support preemption when in kernel mode.
Pageable
Not all of the kernel's virtual memory space needs to be resident in physical memory at all times. Portions of the kernel memory may be paged out to disk when not needed. This allows for better utilization of physical memory. The ability to page kernel memory is a feature not found in all UNIX kernels. Most kernels support the paging of user virtual address space; AIX supports paging both user and kernel address space. As an example, the kernel memory of the Linux operating system is resident in physical memory at all times.
Pinning memory
Some areas of the kernel's memory must stay resident, meaning they may not be paged to disk. Areas of memory that are not subject to paging are called pinned memory; for example, portions of device drivers must be pinned in memory.
Extensible
The AIX kernel is dynamically extensible. This means that not all the code required for the kernel needs to be included in a single binary (/unix). Portions of the kernel's code are loaded at runtime. Dynamically loaded modules are called kernel extensions. Kernel extensions typically add functionality that may not be needed by all systems. This keeps the kernel smaller and requires less memory. Kernel extensions can include:
- Device drivers
- Extended system calls
- File systems
AIX 5L Execution Environment

(Figure: 32-bit hardware runs 32-bit applications on the 32-bit kernel; 64-bit hardware runs both 32-bit and 64-bit applications on either the 32-bit or the 64-bit kernel.)
Notes: Introduction
AIX 5L supports both 32-bit and 64-bit execution environments. On 32-bit hardware platforms only the 32-bit environment can be used, but on 64-bit platforms either can be used. The key to this 64-bit platform flexibility is that a 64-bit VMM (Virtual Memory Manager) is run in both cases, using left zero fill of addresses for the 32-bit kernel environment.
Selecting a kernel
The file /unix is a link to the kernel image file that is loaded at boot time. Depending on the hardware type and kernel type (32-bit or 64-bit) the link will point to the appropriate file, as shown in this table:

Hardware platform    Kernel type    Kernel file
32-bit or 64-bit     32-bit         /usr/lib/boot/unix_up or /usr/lib/boot/unix_mp
64-bit               64-bit         /usr/lib/boot/unix_64
User applications
Both 32-bit and 64-bit applications are supported when running on 64-bit hardware, regardless of the kernel that is running.
User commands
User-level commands included with the AIX 5L operating system are designed to work with either the 32-bit or 64-bit kernel. However, some commands require both a 32-bit and a 64-bit version; these are typically commands that must work directly with the internal structures of the kernel. For these commands, the steps are:
1. The 32-bit version of the command is run by the user.
2. The 32-bit command checks the kernel type (32- or 64-bit).
3. If a 64-bit kernel is detected, the 64-bit version of the command is run. For example, under the initial release of AIX 5.1 the command vmstat would run the command vmstat64. (In later versions of AIX 5.1, and in AIX 5.2, vmstat, along with other performance commands, uses a performance tools API.)
4. If a 32-bit kernel is detected, the 32-bit command completes its execution.
Kernel extensions
Only 64-bit kernel extensions are supported under the 64-bit kernel. Only 32-bit kernel extensions are supported under the 32-bit kernel. All kernel extensions must be SMP safe. Earlier versions of AIX supported running non-SMP safe kernel extensions on SMP hardware using a mechanism called funneling. Funneling is not supported on the 64-bit AIX 5L kernel.
System Header Files

(Figure: header file directory tree showing /usr/include, with subdirectories sys, jfs, and j2.)
Notes: Introduction
The system header files contain the definition of structures that are used by the AIX kernel. We will reference these files throughout this class, since they contain the C language definitions of the structures we will be describing.
Conditional Compile Values

- _KERNSYS
- _KERNEL
- _64BIT_KERNEL
- _64BIT
Example
Shown here is a portion of the definition of a struct thread. The compiler directive #ifndef __64BIT_KERNEL is used to create different definitions for the 32-bit and 64-bit kernels.
struct thread {
        /* identifier fields */
        tid_t t_tid;                      /* unique thread identifier */
        tid_t t_vtid;                     /* Virtual tid */

        /* related data structures */
        struct pvthread *t_pvthreadp;     /* my pvthread struct */
        struct proc *t_procp;             /* owner process */
        struct t_uaddress {
                struct uthread *uthreadp; /* local data */
                struct user *userp;       /* owner process' ublock (const) */
        } t_uaddress;                     /* user addresses */
#ifndef __64BIT_KERNEL
        uint t_ulock64;                   /* high order 32-bits */
        uint t_ulock;                     /* user addr - lock or cv */
        uint t_uchan64;                   /* high order 32-bits */
        uint t_uchan;                     /* key of user addr */
        uint t_userdata64;                /* high order 32-bits if 64-bit mode */
        int t_userdata;                   /* user-owned data */
        uint t_cv64;                      /* high order 32-bits if 64-bit mode */
        int t_cv;                         /* User condition variable */
        uint t_stackp64;                  /* high order 32-bits if 64-bit mode */
        char *t_stackp;                   /* saved user stack pointer */
        uint t_scp64;                     /* high order 32-bits if 64-bit mode */
        struct sigcontext *t_scp;         /* sigctx location in user space */
#else
        long t_ulock;                     /* user addr - lock or cv */
        long t_uchan;                     /* key of user addr */
        long t_userdata;                  /* user-owned data */
        long t_cv;                        /* User condition variable */
        char *t_stackp;                   /* saved user stack pointer */
        struct sigcontext *t_scp;         /* sigctx location in user space */
#endif
        . . . .
Checkpoint
1. The ______ is the base program of the operating system.
2. The processor runs interrupt routines in ______ mode.
3. The AIX kernel is _______, ________ and __________.
4. The 64-bit AIX kernel supports only _______ kernel extensions, and only runs on _______ hardware.
5. The 32-bit kernel supports 64-bit user applications when running on ________ hardware.
Notes:
Exercise
Complete exercise one
- Consists of theory and hands-on
- Ask questions at any time
What you will do:
- Use the cscope tool to examine system header files
Notes:
Turn to your lab workbook and complete exercise one.
Unit Summary
- Describe the role the kernel plays in an operating system
- Define user and kernel mode and list the operations that can only be performed in kernel mode
- Describe when the kernel must make a context switch
- Describe the role of the mstsave area in a context switch
- Name the execution environments available on each of the platforms supported by AIX 5L
- Using the system header files, identify data element types for each of the available kernels in AIX 5L
Notes:
Unit 2. Kernel Analysis Tools
References
AIX Documentation: Kernel Extensions and Device Support Programming Concepts
Unit Objectives
At the end of this unit you should be able to:
- List the tools available for analyzing the AIX 5L kernel
- Use KDB to display and modify memory locations and interpret a stack trace
- Use basic kdb navigation to explore a crash dump and a live system
Notes:
Typographic conventions
In this class an uppercase KDB will be used when referring to the kernel debugger, and lowercase kdb is used when referring to the image analysis command.
Notes: Introduction
This section describes the kernel debugger available in AIX 5L.
Overview
The kernel debugger is built into the AIX 5L production kernel. For the debugger to be used it must be enabled prior to booting.
Concept
When KDB is invoked, it is the only running program until you exit the debugger. All processes are stopped and interrupts are disabled. The kernel debugger runs with its own Machine State Save Area (mst) and a special stack. In addition, the kernel debugger does not run operating system routines. This requires that some kernel code be duplicated within the debugger, but it means breakpoints can be set anywhere within the kernel code. When the kernel debugger is exited, all processes continue to run unless the debugger was entered via a system halt.
bosboot syntax
The syntax of the bosboot command is: bosboot -a [-D | -I] -d device
Argument     Description
-d device    Specifies the boot device. The current boot disk is represented by the device /dev/ipldevice.
-D           Loads the kernel debugger. The kernel debugger will not automatically be invoked when the system boots.
-I           Loads and invokes the kernel debugger. The kernel debugger will be invoked immediately on boot.
-a           Creates a complete boot image.
Example
The following command will build a new boot image with the kernel debugger loaded:

# bosboot -a -D -d /dev/ipldevice

The system must be rebooted for the change to take effect.
bosdebug
Attributes in the SWservAt ODM database can be set so that bosboot will enable the kernel debugger regardless of the command-line argument used when building the boot image. The bosdebug command is used to view or set these attributes. To view the setting of the debug flags in the ODM database, use the command:

# bosdebug
Memory debugger           off
Memory sizes              0
Network memory sizes      0
Kernel debugger           on
Real Time Kernel          off
To set the kernel debugger attribute on, use the command:

# bosdebug -D

To set the kernel debugger attribute off, use the command:

# bosdebug -o

Note: All this command does is set attributes in the SWservAt ODM database. The bosboot command reads these values and sets up the boot image accordingly.
To check how the debugger was loaded on a running system:
1. Start the kdb command:
   # kdb
2. View the dbg_avail memory flag:
   (0)> dw dbg_avail 1
   dbg_avail + 000000: 00000002
3. Compare the value of dbg_avail against the mask values in this table:

   Mask    Description
           Do invoke at bootup.
           Don't invoke at boot, but debugger is still invokable.
           Debugger is never to be called.
System Dumps
A dump image is not actually a full image of system memory, but a set of memory areas copied out by the dump routines.
- What is in a system dump?
- What is the effect of kernel paging?
- What is the role of the Master Dump Table?
- What tools are used to analyze system dumps?
Paged memory
The dump facility cannot page in memory, so only what is currently in physical memory can be dumped. Normally this is not a problem since most of the kernel data structures are in memory. The process and thread tables are pinned, and the uthread and ublock structures of the running thread are pinned as well.
Analyzing dumps
System dumps can be examined using the kdb command.
Process overview
The following steps are used to write a dump to the dump device:
1. Interrupts are disabled.
2. 0c9 or 0c2 is written to the LED display, if present.
3. Header information about the dump is written to the dump device. The kernel then steps through each entry in the Master Dump Table, calling each Component Dump routine twice: once to indicate that the kernel is starting to dump this component (1 is passed as a parameter), and again to say that the dump process is complete (2 is passed as a parameter).
4. After the first call to a Component Dump routine, the kernel processes the CDT that was returned. For each CDT entry, the kernel:
   - checks every page in the identified data area to see if it is in memory or paged out
   - builds a bitmap indicating each page's status
   - writes a header, the bitmap, and those pages which are in memory to the dump device
5. Once all dump routines have been called, the kernel enters an infinite loop, displaying 0c0 or flashing 888.
kdb
The kdb command allows examination of an operating system image.
- Requires a system image and /unix
- Can be run on a running system using /dev/mem
Typical invocations:

# kdb -m vmcore.X -u /usr/lib/boot/unix

or

# kdb
Notes:
kdb Command
Files needed
The kdb command requires both a memory image (dump device, vmcore or /dev/mem) and a copy of /unix to operate. The /unix file provides the necessary symbol mapping needed to analyze the memory image file. It is imperative that the /unix file supplied is the one that was running at the time the memory image was created. The memory image (whether a device such as /dev/dumplv or a file such as vmcore.0) must not be compressed.
Parameters
The kdb command may be used with the following parameters:
With no parameters, kdb uses /dev/mem as the system image file and /usr/lib/boot/unix as the kernel file; in this case root permissions are required. Other parameters include:

-m    Use the image file provided.
-u    Use the kernel file provided. This is required to analyze a system dump on a different system.

Additional options add the kernel modules listed, view an XCOFF object, print CDT entries, print help, and disable in-line more (useful when running a noninteractive session).
Example
To run kdb against a vmcore file, use the following command line:

# kdb -m vmcore.X -u /unix

To run kdb against the live (running) kernel, no parameters are required:

# kdb
Checkpoint
1. _____ is used for live system debugging.
2. _____ is used for system image analysis.
3. The value of the _______ kernel variable indicates how the debugger is loaded.
4. A system dump image contains everything that was in the kernel at the time of the crash. True or False?
Notes:
Exercise
Complete exercise two
- Consists of theory and hands-on
- Ask questions at any time
What you will do:
- Enable and start the kernel debugger
- Display and interpret stack traces
- Display and modify variables in kernel memory
- Perform basic kdb navigation on a live system and a crash dump
Notes: Introduction
Turn to your lab workbook and complete exercise two. Read the information blocks included with the exercises. They will provide you with information needed to do the exercise.
Unit Summary
- List the tools available for analyzing the AIX 5L kernel
- Use KDB to display and modify memory locations and interpret a stack trace
- Use basic kdb navigation to explore a crash dump and a live system
Notes:
Unit 3. Process Management
References
AIX Documentation: Performance Management Guide AIX Documentation: System Management Guide: Operating System and Devices
Unit Objectives
At the end of this unit you should be able to:
- List the three thread models available in AIX 5L
- Identify the relationship between the six internal structures: pvproc, proc, pv_thread, thread, user and u_thread
- Use the kernel debugging tools in AIX to locate and examine a process's proc, thread, user and u_thread data structures
- Identify the states of processes and threads on a live system and in a crash dump
- Analyze a crash dump caused by a runaway process
- Identify the features of the AIX scheduling algorithms
- Identify the primary features of the AIX scheduler supporting SMP and large system architectures
- Identify the action the threads of a process will take when a signal is received by the process
Notes:
Parts of a Process
(Figure: parts of a process. The process resources, including the address space, open file pointers, user credentials, and management data, are shared by one or more threads, each with its own stack and CPU registers.)
Process
A process can be divided into two components:
- A collection of resources
- A set of one or more threads
Resources
The resources making up a process are shared by all threads in the process. The resources are:
- Address space (program text, data and heap)
- A set of open file pointers
- User credentials
- Management data
Threads
A thread can be thought of as a path of execution through the instructions of the process. Each thread has a private execution context that includes:
- A stack
- CPU register values (loaded into the CPU when the thread is running)
Threads
Three types of threads are available in AIX:
- Kernel
- Kernel-managed
- User

Three thread programming models are available for user threads:
- 1:1
- M:1
- M:N
Notes:
Threads
Threads provide the execution context to the process.
Kernel threads
Kernel threads are not associated with a user process and therefore have no user context. Kernel threads run completely in kernel mode and have their own kernel stack. They are cheap to create and manage, and thus are typically used to perform a specific function such as asynchronous I/O.
Kernel-managed threads
Kernel-managed threads are sometimes called Light Weight Processes or LWPs and are the fundamental unit of execution in AIX. Each user process contains one or more kernel-managed threads. The scheduling and running of kernel-managed threads is managed by the kernel. Each thread is scheduled to run on a CPU independent of the other threads of the process. On SMP systems, the threads of one process can run concurrently.
User threads
User threads are an abstraction entirely at the user level. The kernel has no knowledge of their existence. They are managed by a user-level threads library and their scheduling and execution are managed at the user level.
Programming models
AIX 5L provides three models for mapping user threads on top of kernel-managed threads. The application developer can choose between the 1:1, M:1 and M:N models.
(Figure: the 1:1 model, in which the thread library maps each user thread directly to its own kernel-managed thread.)
(Figure: the M:1 model, in which the library scheduler multiplexes all user threads onto a single kernel-managed thread.)
Notes: M:1
In the M:1 model all user threads are mapped to one kernel-managed thread. The scheduling and management of the user threads are completely handled by the thread library.
(Figure: the M:N model, in which the library scheduler maps many user threads onto a pool of kernel-managed threads.)
Notes: M:N
In the M:N model, user threads are mapped to a pool of kernel-managed threads. A user thread may be bound to a specific kernel-managed thread. An additional hidden user scheduler thread may be started by the library to handle mapping user threads onto kernel managed threads.
# export AIXTHREAD_SCOPE=S
# <your_program>

There are many similar options available for thread tuning. See the Performance Management Guide in the AIX online documentation.
Creating Processes
When a process is created it is given:
- A process table entry
- A process identifier (PID)
- An address space (its contents are copied from the parent process), including the user area, program text, data, and user and kernel stacks
- A single kernel-managed thread (even if the parent process had many threads)
Exec
When a process is first created it is running the same program as its parent. One of the exec() class of system calls is normally used to load a new program into the process address space.
Example
Here is an example of fork and exec to start a new program:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main()
    {
        pid_t child;

        if ((child = fork()) == -1) {
            perror("could not fork a child process");
            exit(1);
        }
        if (child == 0) {                    /* child */
            /* exec a new program; the argument after the path is
               argv[0], the program name */
            if (execl("/bin/ls", "ls", "-l", (char *)NULL) == -1) {
                perror("error on execl");
                exit(1);
            }
        } else {                             /* parent */
            wait(NULL);   /* ensure parent terminates after child */
        }
        return 0;
    }
Creating Threads
A new thread is created by the thread_create() system call. When created, the thread is assigned:
- A thread table entry
- A thread identifier
- An execution context (stack pointer and CPU registers)
Thread library
AIX provides a thread library to assist programmers with the creation and management of threads. Typically, the library function pthread_create() is used to create threads rather than calling thread_create() directly. The thread library allows for creation and management of both kernel-managed threads and user threads using the same interface.
pthread_create example
Here is an example of creating a new thread using pthread_create:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    void *new_thread(void *arg);

    int main()
    {
        pthread_t threadId;

        /* start up a new thread */
        if (pthread_create(&threadId, NULL, new_thread, NULL)) {
            perror("pthread_create");
            exit(errno);
        }
        /* main thread code here */
        pthread_join(threadId, NULL);
        return 0;
    }

    void *new_thread(void *arg)
    {
        /* new thread code here */
        return NULL;
    }
(Figure: process state transitions between idle, active, stopped, swapped, zombie and non-existent.)
States

The five process states are described in this table:

  Idle     A process is started with a fork() system call. During creation
           the process is in the idle state. This state is temporary,
           lasting until all of the necessary resources have been
           allocated.
  Active   Once creation of the process is complete, it is placed in the
           active state. This is the normal process state; the threads of
           the process can now be scheduled to run on a CPU.
  Stopped  When a process receives a SIGSTOP signal, it is placed in the
           stopped state. If a process is stopped, all its threads are
           stopped and will not be scheduled on a CPU. A stopped process
           can be restarted by the SIGCONT signal.
  Swapped  A swapped process has lost its memory resources and its address
           space has been moved onto disk. It cannot run until it is
           swapped back into memory.
  Zombie   When a process terminates, some of its resources are not
           automatically released. The process is placed in the zombie
           state until its parent cleans up after it and frees the
           resources. The parent must execute a wait() system call to
           retrieve the process exit status before the process is removed
           from the process table.
Zombie process
Sometimes a Zombie process will stay in the process list for a long time. One example of this situation could be that a process has exited, but the parent process is busy or waiting in the kernel and unable to read the return code. If the parent process no longer exists when a child process exits, the init process (PID 1) frees the remaining resources held by the child.
State

Sample process table entries showing four active processes:

  swapper  ACTIVE  00000 00000 00000 00000 00004812
  init     ACTIVE  00001 00000 00000 00000 0000342D
  wait     ACTIVE  00204 00000 00000 00000 00004C13
  netm     ACTIVE  00306 00000 00000 00000 0000282A
(Figure: the process table: an array of pvproc entries indexed by slot number, 1 through NPROC; each pvproc's pv_procp pointer extends it to a proc structure.)
Process table
The process table is a fixed-length array of pvproc structures allocated from kernel memory. For the 64-bit kernel, this table is divided into a number of sections called zones. At system startup, one zone is allocated on each SRAD (see later topic, Table Management).
proc structure
The proc structure is an extension of the pvproc structure. The pv_procp field in the pvproc points to its associated proc structure. The proc and pvproc structures were split to accommodate large system architectures.
Slot number
Each entry in the process table is referred to by its slot number.
pvproc

Some key elements of the pvproc structure:

  Element         Description
  pv_pid          Unique process identifier (PID)
  pv_ppid         Parent's process identifier (PPID)
  pv_uid          User identifier
  pv_stat         Process state
  pv_flags        Process flags
  *pv_procp       Pointer to the proc entry
  *pv_threadlist  Head of the list of threads
  *pv_child       Head of the list of children
  *pv_siblings    NULL-terminated sibling list

Figure 3-11. pvproc
pv_stat

  Value    Meaning
  SNONE    Slot is not being used
  SIDL     Process is being created
  SACTIVE  Process has at least one active thread
  SSWAP    Process is swapped out
  SSTOP    Process is stopped
  SZOMB    Process is a zombie
Notes: pv_stat
The process state is stored in the pvproc->pv_stat data element. Values for pv_stat are defined in /usr/include/sys/proc.h as shown in this table.
Table Management

(Figure: the process table divided into zones, Zone 0 through Zone 32; within a zone, slots 0 through 8192, with pages pinned up to a high water mark.)
Zones
The process table used in the 64-bit kernel is split into equal-sized sections called zones. Each zone contains a fixed number of process slots. The number of zones, and the number of process slots per zone, is version dependent. The details can be determined by examining the value of PM_NUMSRAD_ZONES, defined in the header file <sys/pmzone.h>. At system startup, one zone is allocated on each SRAD in the system. When a zone on an SRAD fills up (that is, all of the process slots in that zone are used), another zone is allocated to the SRAD and added to the pool. At the moment, there is only one SRAD per system.
32-bit kernel
The process table on 32-bit kernels has only one zone encompassing the entire process table. A single high water mark is used and pages are pinned as explained above.
Large systems
On some systems (64-bit kernel only) a zone would typically be associated with a single RAD (a group of resources connected together by some physical proximity).
Details
Two structures are used to manage the process table. Both are defined in /usr/include/sys/pmzone.h. The table is defined by a struct pm_heap_global. This structure has pointers to several pm_heap structures, one for each zone in the table. The high water mark for the zone is found in the pm_heap.
(Figure: groups of CPUs, each with physically proximate memory.)
History
In older versions of AIX, the process table was made from an array of proc structures. In AIX 5L, each process is represented by two structures; the proc and a smaller pvproc.
Large systems
In some systems, physical memory is divided into pools that have a degree of physical proximity to particular processors. Access to memory hosted by another processor may be slower than access to memory hosted by the local processor. Using one large proc structure table could result in many "remote" accesses. The AIX 5L design allows the use of RADs (Resource Affinity Domains), a collection of resources grouped by some degree of physical proximity. An SRAD (scheduler RAD) is a RAD large enough to warrant a dedicated scheduler thread. The table of pvproc structures is separated into zones, which allows each zone to reside on its own SRAD and refer to proc structures for processes running on that SRAD.
PID Format

The format of a PID is:

32-bit kernel:
  bits 31-26  zero
  bits 25-8   process table slot index
  bits 7-1    generation count
  bit 0       zero

64-bit kernel:
  bits 63-26  zero
  bits 25-13  low-order bits of the process table slot index
  bits 12-8   SRAD (upper bits of the index)
  bits 7-1    generation count
  bit 0       zero
  Bit 0                     Always zero, making all PIDs even numbers,
                            apart from init, which is a special case and
                            always has process ID 1.
  Generation count          Used to prevent the rapid re-use of PIDs.
  Process table slot index  The process table slot number.
  SRAD                      These bits select the zone in the process
                            table. The number of SRAD bits is version
                            dependent, defined by PM_NUMSRAD_BITS in
                            <sys/pmzone.h>; AIX 5.1 uses 5 bits, AIX 5.2
                            currently uses 4 bits.
  Remaining bits            Set to zero.
pid_t
Process identifiers are stored internally using the pid_t typedef.
Kernel Processes
Kernel processes:
- Are created by the kernel
- Have a private u-area and kernel stack
- Share text and data with the rest of the kernel
- Are not affected by signals
- Cannot use shared library object code or other user-protection-domain code
- Run in the kernel protection domain
- Can have multiple threads, as can user processes
- Are scheduled like user processes, but tend to have higher priorities
Thread Table

(Figure: the thread table: an array of pvthread entries indexed by slot number, 1 through NTHREAD; each pvthread's tv_threadp pointer extends it to a thread structure.)
thread structure
The thread structure is an extension of the pvthread structure. The tv_threadp item in the pvthread points to its associated thread structure. The thread and pvthread structures were split to accommodate large system architectures.
pvthread Elements

  Element         Description
  tv_tid          Unique thread identifier (TID)
  *tv_threadp     Pointer to the thread structure
  *tv_pvprocp     Pointer to the pvproc for this thread
  *tv_nextthread  Pointer to the next thread (pvthread) in the process
  *tv_prevthread  Pointer to the previous thread (pvthread) in the process
  tv_state        Thread state
Elements
Some of the key elements of the pvthread structure are shown above.
Table management
The memory pages for the thread table are managed using the same mechanism that was described for the process table. The thread table is split into multiple zones. Each zone contains a high water mark representing the largest slot number used since system boot. All memory pages for the slots up to the high water mark are pinned. The size of each zone, and the number of zones are version dependent.
Copyright IBM Corp. 2001, 2003. Unit 3. Process Management.
TID Format

The format of a TID is:

32-bit kernel:
  bits 31-27  zero
  bits 26-8   thread table slot index
  bits 7-1    generation count
  bit 0       one

64-bit kernel:
  bits 63-27  zero
  bits 26-13  low-order bits of the thread table slot index
  bits 12-8   SRAD (upper bits of the index)
  bits 7-1    generation count
  bit 0       one
Notes:
Thread identifier
Introduction
The thread identifier or TID is a unique number assigned to a thread. The format of a TID is similar to that of a PID except that all TIDs are odd numbers and PIDs are even numbers. The format of a TID is shown above.
tid_t
Thread identifiers are stored internally using the tid_t typedef.
u-block
- Location: process private memory segment
- Definition: /usr/include/sys/user.h

uthread
- Thread private data
- Stack pointers
- mstsave
Notes: Introduction
Each process (including a kernel process) contains a u-block area. The u-block is made up of a user structure (one per process) and one or more uthreads (one per thread).
Access
The u-block is part of the process private memory segment; however, it is only accessible when in kernel mode. It maintains the process state information which is only required when the process is running; therefore, it need not be accessible when the process is not running. It need not be in memory when the process is swapped out. It is pinned when the process is swapped into memory, and unpinned when the process is swapped out.
Definitions
The u-block is described in the file /usr/include/sys/user.h.
user
Each process has one user structure. Information stored in the user structure is global and shared between all threads in the process. For example, the file descriptor table and the user credentials are kept in the user structure.
uthread
Each thread of a process has its own uthread structure. Threads are responsible for storing execution context; therefore, the uthread holds execution-specific items like the stack pointers and CPU registers. When a thread is interrupted or a context switch occurs the stack pointers and CPU registers of the interrupted thread are stored in the mst-save area of the uthread. When execution of the thread continues the stack pointers and registers are loaded from the mst-save area.
Six Structures

(Figure: one process with three kernel-managed threads: the pvproc heads its list of pvthreads via pv_threadlist, each pvthread chains to the next via tv_nextthread and points back to the pvproc via tv_pvprocp; pv_procp extends the pvproc to the proc, tv_threadp extends each pvthread to a thread, and t_procp and t_pvthreadp link each thread to the proc and its pvthread; the u-block holds the user structure and the uthreads.)
Notes: Introduction
This unit has discussed the AIX 5L data structures: pvproc, proc, pvthread, thread, uthread and user. This section describes how these six structures are tied together.
Diagram
The above diagram depicts the structures for a single process containing three kernel-managed threads.
The pvproc is extended into the proc structure via the pv_procp pointer. Similarly, the pvthread structures are extended into the thread structures via tv_threadp.
u-block
The u-block is divided into uthread sections, one per thread, plus one process-wide user structure. Pointers in the thread structure point to both of these sections. Data that is private to the thread, such as stack pointers, is kept in the uthread. Process-wide data is kept in the user area; for example, the file descriptor table. This allows all threads in a process to share the same open files.
Thread Scheduling
Notes: Introduction
The object of thread scheduling is to manage the CPU resources of the system, sharing these resources between all the threads.
(Figure: thread state transitions between ready to run, running, sleeping, stopped by a signal, and zombie.)
Notes: Introduction
In AIX, the kernel allows many threads to run at the same time, but there can be only one thread actually executing on each CPU at one time. The thread state shows if a thread is currently running or is inactive.
State transitions
Threads can be in one of several states. A thread typically changes its state between running, ready to run, sleeping and stopped several times during its lifetime. The diagram above shows all the state transitions a thread can make.
States

All the thread states are described in this table:

  Idle          When first created, a thread is placed in the idle state.
                This state is temporary, lasting until all of the
                necessary resources for the thread have been allocated.
  Ready to Run  Once creation is complete, the thread is placed in the
                ready to run state, where it waits until it is run.
  Running       A thread in the running state is executing on a CPU. The
                thread moves between running and ready to run until it
                finishes execution, when it goes to the zombie state.
  Sleeping      Whenever the thread is waiting for an event, it is said
                to be sleeping.
  Stopped       A stopped thread has been stopped by the SIGSTOP signal.
                Stopped threads can be restarted by the SIGCONT signal.
  Swapped       Swapping takes place at the process level, and all
                threads of a process are swapped at the same time; the
                thread table is updated whenever the thread is swapped.
  Zombie        The zombie state is an intermediate state lasting only
                until all the resources owned by the thread are given up.
tv_state

The thread state is kept in the tv_state flag of the pvthread structure. The defined values for this flag are:

  Flag     Meaning
  TSNONE   Slot is available
  TSIDL    Being created (idle)
  TSRUN    Runnable (or running)
  TSSLEEP  Awaiting an event (sleeping)
  TSSWAP   Swapped
  TSSTOP   Stopped
  TSZOMB   Being deleted (zombie)
Running threads
No tv_state flag value has been defined for the running state. The running state is implied when a thread is currently being run; therefore a flag is not necessary. The value of the tv_state flag for running threads will be shown as ready to run (TSRUN). A thread must be ready to run before it can be run. A thread that is ready to run has a state of TSRUN, and a wait type of TWCPU, i.e. the thread is waiting for CPU access. A thread that is actually running has a state of TSRUN, and a wait type of TNOWAIT.
Thread Priority

(Figure 3-25. Thread Priority: priority values run from 0, the highest priority, to 255, the lowest; PUSER = 40 separates kernel priorities from user priorities.)
Notes: Introduction
All threads are assigned a priority value and a nice value. The dispatcher examines these values to determine what thread to run.
Thread priority
Each thread is assigned a priority number between 0 and 255. CPU time is made available to threads according to their priority number, with precedence given to the thread with the lowest priority number. The highest priority at which a thread can run in user mode is defined as PUSER, or 40. Priorities above PUSER (that is, numerically lower) are used for real-time threads.
nice
Each process is assigned a nice value between 0 and 39. The nice value is used to adjust thread priority. A process nice value is saved in the proc structure as p_nice=nice+PUSER. The default value for nice is 20. The nice value of a process can be set using the nice command or changed using the renice command.
Run Queues

(Figure: a run queue: one list per priority value, 0 through 255, each holding runnable threads; the wait thread sits at priority 255.)
Notes: Introduction
All runnable threads on the system (except the currently running threads) are listed on a run queue. A run queue is arranged as a set of doubly-linked lists, with one linked list for each thread priority. Since there are 256 different thread priorities, a single run queue consists of 256 linked lists. AIX selects the next thread to run by searching the run queues for the highest priority (that is, numerically lowest) runnable thread. A single CPU system has one run queue.
Wait thread
The wait thread is always ready to run, and has a priority value of 255. It is the only thread on the system that will run at priority 255. If AIX finds no other ready to run thread, it will run the wait thread.
Notes: Introduction
The scheduling and running of threads are the jobs of the dispatcher and scheduler. AIX is designed to handle many simultaneous threads.
Clock ticks
A clock tick is 1/100 of a second. The number of clock ticks a thread has accumulated is used by the scheduler to calculate a new priority for the thread. Generally, a thread that has accumulated many clock ticks will have its priority decreased (that is, the priority value will grow larger).
Dispatcher

  Step  Action
  1     If invoked because a clock tick has passed, increment the t_cpu
        element of the currently running thread. t_cpu is limited to a
        maximum value of T_CPU_MAX:
            if (thread->t_cpu < T_CPU_MAX)
                thread->t_cpu++;
  2     Scan the run queue(s) looking for the highest priority
        ready-to-run thread.
  3     If the selected thread is different from the currently running
        thread, place the currently running thread back on the run
        queue, and place the selected thread at the end of the MST chain.
  4     Resume execution of the thread at the end of the MST chain.

Figure 3-28. Dispatcher
Notes: Dispatcher
The dispatcher runs under the following circumstances: - A time interval has passed (1/100 sec). - A thread has voluntarily given up the CPU. - A thread (from a non-threaded process) that has been boosted is returning to user mode from kernel mode. - A thread has been made runnable by an interrupt and the processor is about to finish interrupt processing and return to INTBASE. The steps the dispatcher takes are listed above.
Scheduler

  Step  Action
  1     If the value of nice is greater than the default value of 20,
        double its value, making it possible to more strongly
        discriminate against upwardly nice'd threads. Recall that the
        value of p_nice is nice + PUSER, given PUSER = 40 and
        0 <= nice <= 40.
  2     Calculate the new priority using the equation:
            priority = new_nice + t_cpu * (r/32) * ((new_nice + 4)/64)
  3     Degrade the value of t_cpu so that ticks the thread used in the
        past have less effect than recent ticks:
            t_cpu = t_cpu * (d/32)

Figure 3-29. Scheduler
Notes: Scheduler
The scheduler runs every second. Its job is to recalculate the priority of all runnable threads on the system. The priority of a sleeping thread will not be changed. The steps the scheduler uses to calculate thread priorities are shown in the table above.
r and d
The values of r and d can be set using the schedo command. The r and d values control how a process is impacted by the run time; r impacts how severely a process is penalized by used CPU time, while d controls how fast the system forgives previous CPU consumption. 0 <= r,d <= 32 The default value for r,d is 16.
Preemption
- What is preemption?
- Non-preemptive kernel vs. preemptive kernel
- Preventing deadlock in preemptive kernels
- Priority boost
Notes:
Preemption
Definition
When the dispatcher runs and finds a runnable thread with a higher priority than the current running thread the running context is switched to the higher priority thread. The thread that was displaced before its time slice expired is said to have been preempted.
Non-preemptive kernel
Most UNIX systems will not allow pre-emption to occur when running in kernel mode. If the current running thread is in kernel mode and a higher priority thread becomes ready to run, it will not be granted CPU time until the running thread returns to user mode and voluntarily gives up the CPU. This can result in long delays in processing high-priority or real-time threads.
Preemptive Kernels

(Figure: thread A, a low priority thread, running while holding a lock.)
1. Thread A, a low priority thread, has obtained access to an exclusive resource lock.
2. Thread B, running at a higher priority, is waiting to obtain the same resource lock. This thread cannot continue until thread A releases the lock.
3. Thread C's priority is higher than thread A's, and it is ready to run. When the dispatcher runs, thread C preempts thread A.
4. Thread A is still holding the resource lock. Even though thread B is the highest priority thread on the system, it can't proceed until it obtains the resource held by thread A. Thread A is not running, so it can't release the lock.
Priority boost
To resolve this situation, priority boost was added to AIX. Priority boost increases the priority of threads holding locks.
- When a high priority thread has to wait for a lock, it changes the priority of the thread that is holding the lock to its own priority.
- The priority boost applies to the low priority thread only while it is holding the lock. The priority is set back to the original value when any of the following occurs:
  - The scheduler notices that the boosted thread is no longer holding any locks.
  - The boosted thread returns to user mode from kernel mode.
  - The high priority thread that was waiting for the lock obtains the lock.
Priority boost applies to both kernel locks and user (pthreads library) locks. A thread running in kernel mode must release any kernel locks it holds before returning to user mode.
Scheduling Algorithms
- SCHED_RR: fixed priority; threads are timesliced
- SCHED_FIFO: fixed priority; threads ignore timeslicing
- SCHED_OTHER: the default policy; priority based on CPU time and nice value
Notes: Introduction
AIX has three main scheduling algorithms that affect how a thread's priority is calculated by the scheduler. The main algorithms, as defined in <sys/sched.h>, are listed in the visual above.
SCHED_RR
This is a round robin scheduling mechanism in which the thread is time-sliced at a fixed priority. The amount of CPU time and the nice value have no effect on the thread's priority.
- This scheme is similar to creating a fixed-priority, real-time process.
- The thread must have root authority to use this scheduling mechanism.
- It is possible to create a SCHED_RR thread with a high enough priority that it could monopolize the processor, if it is always runnable and there are no other runnable threads with the same (or higher) priority.
SCHED_FIFO
Similar to SCHED_RR; however:
- The thread runs at a fixed priority and is not time-sliced.
- It is allowed to run on a processor until it voluntarily relinquishes the CPU by blocking or yielding, or until a higher priority thread is made runnable.
- A thread must have root authority to use SCHED_FIFO.
- It is possible to create a SCHED_FIFO thread with a high enough priority that it could monopolize the processor if it is always runnable.
There are actually three other related policies: SCHED_FIFO2, SCHED_FIFO3 and SCHED_FIFO4. The FIFO policies differ in how they return threads to the run queue, and thereby provide a way of differentiating between their effective priorities. See the Performance Management Guide in the AIX online documentation for more details.
SCHED_OTHER
This is the default AIX scheduling policy that was discussed earlier. Thread priority is constantly being adjusted based on the value of nice and the amount of CPU time a thread has received. Priority degrades with CPU usage.
t_policy
The scheduling policy a thread is using is stored in: thread->t_policy.
(Figure: per-CPU run queues: CPU 0, CPU 1 and CPU 2, each with its own run queue.)
Notes: Introduction
On Symmetric Multi-Processing systems (SMP) per-CPU run queues are used to compensate for the multiple memory caches used on these systems.
Memory cache
Each CPU in a symmetric multi-processing system has its own memory cache. The purpose of the cache is to speed up processing by pre-loading blocks of physical memory into the higher speed cache.
Cache warmth
A thread is said to have gained cache warmth on a CPU when a portion of the process memory has been loaded into that CPU's cache. In an SMP system, threads can be scheduled onto any CPU. The best performance is achieved when a thread runs on a CPU where it has gained some cache warmth. The AIX thread scheduler takes advantage of cache warmth by attempting to schedule a thread on the same CPU it ran on last.
Hard affinity
Threads can be bound to a single CPU, meaning they are never placed in the global run queue. This is called hard affinity. The bindprocessor() subroutine is used to give a single thread, or all threads of a process, hard affinity to a CPU. Hard affinity (or binding) is recorded in thread->t_cpuid. If t_cpuid is set to PROCESSOR_CLASS_ANY (-1), the thread is not using hard affinity (note that t_cpuid=0 means bound to CPU 0).
RT_GRQ
If a thread has exported the environment variable RT_GRQ=ON, it will sacrifice soft cache affinity. The thread will be placed only in the global run queue and hence run on the first available CPU.
Load balancing
The system uses load balancing techniques to ensure that work is distributed evenly between all of the CPUs in the system.
NUMA

(Figure 3-34. NUMA: three nodes, each containing CPUs, I/O, and a remote cache, connected by a memory interconnect.)
Notes:
In a true SMP architecture, the S stands for symmetric. This means that any CPU can access any piece of memory with virtually the same cost in terms of latency and bandwidth. The SMP architecture has a limit on the size to which it can grow, both in terms of the number of CPUs and the amount of memory. The limits grow over time as individual technologies improve (such as processor speed and memory bandwidth); however, there is still a point at which adding more CPUs, or adding more memory, actually degrades performance. One approach that has been taken in the past to allow the development of large systems is to use building blocks of SMP systems and couple them together into a single system. A good example of this is the NUMA-Q systems developed by Sequent. NUMA stands for Non-Uniform Memory Access. The memory in a NUMA system is effectively divided into two classes: local memory, which is on the same system building block as the CPU trying to access it, and remote memory, which is located on a different system building block. In a NUMA architecture, there are relatively large differences in access latency (approximately one order of magnitude) and bandwidth between local and remote memory.
3-58 Kernel Internals Copyright IBM Corp. 2001, 2003
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
V2.0.0.3
Memory Affinity
(The visual shows the pSeries 690 layout: four multi-chip modules, MCM 0 through MCM 3, each containing processors with L2 caches, attached L3 caches and GX buses, plus memory slots and GX slots.)
Notes:
The visual above shows the system architecture of the pSeries 690. This system is an SMP system that has some characteristics of a NUMA system: some memory is 'local' to a processor, and other parts of memory are 'remote'. The major difference between this architecture and a true NUMA one is that the latency and bandwidth differences between local and remote access are much smaller.

Looking at this diagram, we could consider this architecture to be a single system (since all of the components are inside a single cabinet). However, if we examine the diagram more closely, we can see that each MCM has two attached memory cards. We could consider an MCM and its two memory cards to be a RAD, since these resources have a degree of physical proximity when compared to other parts of memory or other processors.
Definitions
This section defines some additional terms:
- RAD - Resource Affinity Domain: a group of resources connected together by some physical proximity.
- SRAD - Scheduler RAD: the RAD that the scheduler operates on; usually a physical node.
- SDL - System Decomposition Level: a RAD exists at multiple levels. The top level is the entire system; the bottom, or atomic, level consists of individual CPUs and memory. The SDL determines how small a RAD will be.
(The visual shows two SRADs: one containing CPU 0 through CPU 3 and the other containing CPU 4 through CPU 7, each CPU with its own run queue.)
Notes: Introduction
This section describes design enhancements that facilitate future systems. The goal of the thread scheduler is to balance the process load among all the CPUs in the system and to reduce the time a runnable thread waits to be dispatched while other CPUs are idle.
Run queues
The design of the AIX 5L thread scheduler has been extended to allow per-node run queues and one global run queue.
Process placement
For most applications the most frequent memory access is to the process text. Other frequent accesses include private data, stack and some kernel data structures. To
minimize memory access time the process text, data, stack and kernel data structures are allocated from memory on the RAD containing the CPUs that will execute the threads belonging to that process. This RAD or set of RADs is called the process home RAD.
Process migration
In order to keep the system efficient, AIX occasionally migrates a process between SRADs. For a process to migrate, its memory must be copied to the process's new home RAD.
Logical attachment
Processes that share resources may be logically attached. Logically attached processes are required to run on the same RAD. An API is provided for the control of logical attachments.
Physical attachment
Processes can be attached to a physical collection of resources (CPU and memory) called an RSet. Processes attached to an RSet can only migrate between members of the RSet.
Checkpoint
1. AIX provides _____ programming models for user threads.
2. A new thread is created by the __________ system call.
3. The process table is an _____ of _______ structures.
4. All process IDs (except pid 1) are _____.
5. A thread table slot number is included in a thread ID. True or False?
6. A thread holding a lock may have its priority _______.
Notes:
Exercise
Complete exercise 3:
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a

What you will do:
- Examine the process and thread structures using kdb
- Apply what you learned to the analysis of a crash dump
- Learn about and configure system hang detection
- Explore how signal information is stored and used in AIX
Notes: Introduction
Turn to your lab workbook and complete exercise three. Read the information blocks contained within the exercise. They provide you with information you need to do the exercise.
Unit Summary
- The primary unit of execution in AIX is the thread.
- AIX has three thread programming models available: 1:1, M:1, M:N.
- The dispatcher selects which thread to run.
- The scheduler adjusts thread priority based on: nice value and CPU time.
- Scheduling algorithms are SCHED_RR, SCHED_FIFO, SCHED_OTHER.
- The six structures of a process are: pvproc, proc, pv_thread, thread, user, u_thread.
- Processes can handle or ignore signals; threads can mask signals.
Notes:
References
PowerPC Microprocessor Family: The Programmer's Reference Guide. Available from http://www-3.ibm.com/chips/techlib/techlib.nsf/productfamilies/PowerPC
Unit Objectives
At the end of this lesson you should be able to:
- List the types of addressing spaces used by AIX 5L.
- List the attributes associated with each segment type.
- Given the effective address of a memory object, identify the segment number and object type.
Notes:
Pages and Frames
Notes: Introduction
AIX manages memory in 4096-byte chunks called pages. Pages are organized and stored in real (physical) memory chunks called frames.
Page
A page is a fixed-sized chunk of contiguous storage that is treated as the basic entity transferred between memory and disk. Pages stay separate from each other; they do not overlap in virtual address space. AIX 5L uses a fixed page size of 4096 bytes. The smallest unit of memory managed by hardware and software is one page.
Frame
The place in real memory used to hold the page is called the frame. Whereas a page is a collection of information, a frame is the place in memory to hold that information.
Address Space
(The visual shows two processes' effective address spaces mapping through virtual memory to physical memory, file system pages, and paging space.)
Notes: Introduction
An address space is memory (real or virtual) defined by a range of addresses. AIX 5L defines several different address spaces:
- Effective address space
- Virtual address space
- Physical address space
Paging space
The paging space is the disk area used by the memory manager to hold inactive memory pages with no other home. In AIX, the paging space is mainly used to hold the pages from working storage (process data pages). If a memory page is not in physical memory, it may be loaded from disk; this is called a page-in. Writing a modified page to disk is called a page-out.
Translating Addresses
Step 1: The effective address is referenced by a process or by the kernel.
Step 2: The hardware translates the address into a system-wide virtual address.
Step 3: The page containing the virtual address is located in physical memory or on disk.
Step 4: If the page is currently located on disk, a free frame is found in physical memory and the page is loaded into this frame.
Step 5: The memory operation requested by the process or kernel is completed on the physical memory.
Notes: Introduction
When a program accesses an effective address, the hardware translates the address into a physical address using the above process.
Segments
(The visual shows the effective address space divided into 256 MB segments, numbered 0 through n.)
Notes: Introduction
Effective memory address space in AIX 5L is divided into 256 MB objects called segments.
Segments
The maximum number of segments available to a process depends on the effective address space size (32-bit or 64-bit).
Available memory
A process can control how much of its effective address space is available in two ways. A process can create or destroy segments in its address space. A process can adjust the number of pages in a single segment (up to 256 MB).
Segment Addressing
An effective address is broken down into the following three components:

- Segment number (ESID): 4 bits (32-bit) or 36 bits (64-bit)
- Virtual page index: 16 bits
- Byte offset: 12 bits

The first 4 bits (32-bit address) or 36 bits (64-bit address) form the ESID and select the segment register or STAB table slot. The next 16 bits select the page within the segment. The final 12 bits select the byte offset within the page.

Figure 4-7. Segment Addressing
Notes: Introduction
This section discusses how memory segments are addressed.
Segment addressing
Both the 64-bit and 32-bit effective address spaces are divided into 256 MB segments. Each segment has a segment number, or Effective Segment ID (ESID).

In the 32-bit model, this number is 4 bits long, allowing for 16 segments; the ESID identifies one of 16 segment registers. In the 64-bit model, 36 bits are used for the ESID, allowing for 2^36 (more than 64 billion) segments; in this case the value identifies an entry in the STAB table, which is pointed to by the ASR (Address Space Register).

In both cases the main item in the register/table entry is called a Virtual Segment ID (VSID). The virtual page index and byte offset are used together with the VSID to resolve the effective address. The address resolution information that follows describes this process.
Copyright IBM Corp. 2001, 2003 Unit 4. Addressing Memory 4-11
(The visual shows 32-bit address resolution: the 40-bit virtual page number is looked up through the Translation Look-Aside Buffer (TLB), Hash Anchor Table (HAT), hardware Page Frame Table (PFT), and software Page Frame Table to yield a 20-bit real page number and, with the page offset, a 32-bit real address.)
Notes: Introduction
As already noted, the effective address segment number identifies a register or table value. We call this table value the Virtual Segment ID (VSID), and it is 24/52 bits long for 32/64 bit hardware. This value together with the remaining effective address information (segment page number and page offset) is used to resolve our effective address to a machine-usable address. This visual, as well as the following visual illustrate this process. Note that the virtual address space is larger than the effective or real address spaces (it is 52/80 bits wide on 32/64 bit hardware platforms, respectively).
These 24 bits are used with the 16 bit segment page number from the original address to yield a 40 bit virtual page number. Combine this with the 12 bit page offset, and we get a 52 bit virtual address which is used internally by the processor. The 40 bit virtual page number is then used in a lookup mechanism to find a 20 bit real page number, which is combined with the 12 bit page offset to end up with a 32 bit real address.
(The visual shows 64-bit address resolution: the 52-bit segment ID and 16-bit page index form a 68-bit virtual page number which, with the 12-bit page offset, yields an 80-bit virtual address. The same TLB/HAT/PFT lookup structures produce a 52-bit real page number and a 64-bit real address.)
Segment Types
- Kernel segment
- User text
- Process private
- Shared library text
- Shared data
- Shared library data
Notes: Introduction
Several segment types are used in a process's address space. The segment types are listed in the visual above.
Kernel segments
Kernel segments are segments that are shared by all processes on the system. These segments can only be accessed by code running in the kernel protection domain.
User text
The user text segments contain the code of the program. Threads in user mode have read-only access to text segments to prevent modification during program execution. This protection allows a single copy of a text segment to be shared by all processes associated with the same program. For example, if two processes in the system are running the ls command, then the instructions of ls are shared between them.
Running a debugger
When running a debugger, a private read/write copy of the text segment is used. This allows debuggers to set breakpoints directly in code. In that case, the status of the text segment is changed from shared to private.
Performance advantage
When a process calls fork, the process private segment of the child process is created as a copy-on-write segment. It shares its contents with the process private segment of the parent process. Whenever the parent or child process modifies a page that is part of the process private segment, the page is actually copied into the segment for the child process. This results in a major performance advantage for the kernel, especially in the (very common) situation where the newly created child process immediately performs an exec() call to start running a different program.
Shared data
Mapped memory regions, also called shared memory areas, can serve as large pools for exchanging data among processes.
- A process can create and/or attach a shared data segment that is accessible by other processes.
- A shared data segment can represent a single memory object or a collection of memory objects.
- Shared memory can be attached read-only or read-write.
Shared Memory
(The visual shows process effective address spaces attaching shared memory segments through virtual memory.)
Notes: Introduction
Shared memory areas can be most beneficial when the amount of data to be exchanged between processes is too large to transfer with messages, or when many processes maintain a common large database.
Methods of sharing
The system provides two methods of sharing memory:
- Mapping file data into the process address space (mmap() services)
- Creating and attaching shared memory segments (shmat() services)
Serialization
There is no implicit serialization support when two or more processes access the same shared data segment. The available subroutines do not provide locks or access control among the processes. Therefore, processes using shared memory areas must set up a signal or semaphore control method to prevent access conflicts and to keep one process from changing data that another is using.
Notes: Introduction
The shmat services are typically used to create and use shared memory objects from a program.
shmat functions
A program can use the following functions to create and manage shared memory segments.
Using shmat
The shmget() system call is used to create a shared memory region; when an object larger than 256 MB is requested, the shared memory region is created from multiple segments.
The shmat() system call is used to gain addressability to a shared memory region.
EXTSHM
The environment variable EXTSHM=ON allows shared memory regions to be created with page granularity instead of the default segment granularity. This allows more shared memory regions within the same-sized address space, with no increase in the total amount of shared memory region space.
Notes: Introduction
Memory segments can be used to map any ordinary file directly into memory. Instead of reading and writing the file with system calls, the program simply accesses variables stored in the segment.
mmap ()
The mmap() service is normally used to map disk files into a process address space; however, shmat() can also be used to map disk files.
Advantages
Memory-mapped files provide easy random access, as the file data is always available. This avoids the system call overhead of read() and write(). This single-level store approach can also greatly improve performance by creating a form of
Direct Memory Access (DMA) file access. Instead of buffering the data in the kernel and copying it from kernel to user, the file data is mapped directly into the user's address space.
Shared files
A mapped file can be shared among multiple processes, even if some use mapping and others use the read/write system call interface. Of course, this may require synchronization between the processes.
mmap services
The mmap() services are typically used for mapping files, although they may also be used for creating shared memory segments.

Service      Description
madvise()    Advises the system of a process' expected paging behavior.
mincore()    Determines residency of memory pages.
mmap()       Maps an object file into virtual memory.
mprotect()   Modifies the access protections of a memory mapping.
msync()      Synchronizes a mapped file with its underlying storage device.
munmap()     Un-maps a mapped memory region.
Both the mmap()and shmat() services provide the capability for multiple processes to map the same region of an object so that they share addressability to that object. However, the mmap() subroutine extends this capability beyond that provided by the shmat() subroutine by allowing a relatively unlimited number of such mappings to be established.
- For 32-bit applications, when eleven or fewer files are mapped simultaneously and each is smaller than 256 MB. - When mapping shared memory regions which need to be shared among unrelated processes (no parent-child relationship). - When mapping entire files.
Mapping types
There are three mapping types:
- Read-write mapping
- Read-only mapping
- Deferred-update mapping

Read-write mapping allows loads and stores in the segment to behave like reads and writes to the corresponding file.

Read-only mapping allows only loads from the segment. The operating system generates a SIGSEGV signal if a program attempts an access that exceeds the access permission given to a memory region. Just as with read-write access, a thread that loads beyond the end of the file loads zero values.

Deferred-update mapping also allows loads and stores to the segment to behave like reads and writes to the corresponding file. The difference from read-write mapping is that the modifications are delayed: storing into the segment modifies the segment, but does not modify the corresponding file. With deferred update (the O_DEFER flag set on file open), the application can begin modifying the file data (by memory-mapped loads and stores) and then either commit the modifications to the file system (via fsync()) or discard them completely. This can greatly simplify error recovery, and allows the application to avoid a costly temporary file that might otherwise be required. If all processes that have a file open with the O_DEFER flag set close that file before an fsync() or synchronous update operation is made against the file, then the file is not updated.
Notes: Introduction
For the 32-bit hardware platform, segment numbers (Effective Segment IDs) have different uses in user and kernel modes.
Segments 3-12 are used for shmat() and mmap() areas. Segment 14 provides an additional segment for shmat() and mmap(). Segment 13 contains the text for shared libraries (library code). Segment 15 holds the shared library data.
(The visual lists segment numbers 0, 1, 2, 3, 7-10, 14, and 15.)
Notes:
64-bit layout
The 64-bit model adds many more segments to the effective address space. Also, for the 64-bit case one segment layout applies to both user and kernel modes.
Kernel segments
Segment 0 is the first kernel segment. The segments from 0xF00000000 and up may be used for additional kernel segments.
Checkpoint
1. AIX divides physical memory into ______.
2. The _____________ provides each process with its own _______ address space.
3. A segment can be up to ______ in size.
4. A 32-bit effective address contains a ______ segment number.
5. Shared library data segments can be shared between processes. True or False?
6. The 32-bit user address space layout is the same as the 32-bit kernel address space layout. True or False?
Notes:
Exercise
Complete exercise four:
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a

What you will do:
Given the address of a memory object you will identify what segment the address belongs to and speculate as to how the object was created.
Notes:
Turn to your lab workbook and complete exercise four.
Unit Summary
- Page size = 4096 bytes
- Virtual memory management
- Address spaces: effective, virtual, physical
- Segment size = 256 MB
- 32-bit vs. 64-bit segment layout
Notes:
References
PowerPC Microprocessor Family: The Programmer's Reference Guide. Available from http://www-3.ibm.com/chips/techlib/techlib.nsf/productfamilies/PowerPC
Unit Objectives
At the end of this lesson you should be able to:
- Identify the key functions of the AIX virtual memory management system
- Given a memory object type, identify the location of the backing store the VMM will use for this object
- Describe the effect that different paging space allocation policies have on applications and the system
- Find the current paging space usage on the system
- Identify the paging characteristics of a system from a vmcore file
Notes:
(The visual repeats the address space diagram: process effective addresses mapping to physical memory, file system pages, and paging space.)
Notes: Introduction
In the Addressing Memory lesson we saw how AIX 5L manages the effective address space for both the user and kernel. This lesson focuses on the management of the virtual address space by the Virtual Memory Manager (VMM).
Each time a process accesses a virtual address, the virtual address is mapped (if it is not already mapped) by the VMM to a physical address (where the data is located).
Access protection
Another function of the VMM is to provide for access protection that prevents illegal access to data. This function protects programs from incorrectly accessing kernel memory or memory belonging to other programs. Access protection also allows programs to set up memory that may be shared between processes.
Object Types
- Working objects
- Persistent objects
- Client objects
- Log objects
- Mapping objects
Notes:
Working objects
Working objects (also called working storage and working segments) are temporary segments, used during the execution of a program, such as stack and data areas. Process data is created by the loader at run time and is paged in and out of paging space. The working storage segment holds the amount of paging space allocated to
pages in the segment. Part of the AIX kernel is also pageable and is part of the working storage.
Persistent objects
The VMM is used for performing I/O operations for file systems. Persistent objects are used to hold file data for the local file systems. When a process opens a file, the data pages are paged in. When the contents of a file change, the page is marked as modified and eventually paged out directly to its original disk location.

File system reads and writes occur by attaching the appropriate file system object and performing loads/stores between the mapped object and the user buffer. File data pages and program text are both part of persistent storage. Program text pages are read-only pages; they are paged in, and never paged out to disk. Persistent pages do not use paging space.
Client objects
Client objects are used for pages of client file systems. When remote pages are modified, they are marked and eventually paged-out to the original disk location across the network. Remote program text pages (read-only pages) page-out to paging space, from where they can be paged-in later if needed.
Log objects
Log objects are used for writing or reading journaled file systems file logs during journaling operations.
Mapping objects
Mapping objects are used to support the mmap() interfaces, which allow an application to map multiple objects to the same memory segment.
Demand Paging
(The visual shows physical memory with a pinned region; a page fault is resolved from backing store: file system pages or paging space.)
Notes: Introduction
AIX is a demand paging system. Physical pages (frames) are not allocated for virtual pages until they are needed (referenced).
How it works
Data is copied into a physical page only when it is referenced by a program or by the kernel. A reference to an unallocated page results in a page fault. Paging is done on the fly and is invisible to the program causing the page fault.
Page faults
A page fault occurs when a thread tries to access a page that is not currently in physical memory.
The mapping of effective addresses to physical addresses is done in the hardware on a page-by-page basis. When the hardware finds that there is no mapping to physical memory, it raises a page fault condition.
Page validity
The VMM checks to ensure that the effective address being referenced is part of the valid address range of the segment that contains it. There are a number of possible scenarios:

- The effective address is outside the valid address range for the segment. In this case, the page fault cannot be resolved. If the processor is running in kernel mode, an unresolvable page fault results in a system crash. If the processor is running in user mode, the unresolved page fault results in the running process being sent either a SIGSEGV (segmentation violation) or a SIGBUS (bus error), depending on the address being referenced.

- The effective address is within the valid address range for the segment, and the page containing the effective address has already been instantiated. The actions of the VMM in this case are described over the next few pages of the class.

- The effective address is within the valid address range for the segment, but the page containing the effective address has not been instantiated. For example, this happens when an application performs a large malloc() operation: the pages for the malloc'ed space are not instantiated until they are referenced for the first time. In this case, the VMM allocates a physical frame for use by the page, and then updates the segment information to indicate that the page has been allocated. It then updates the hardware page frame table to reflect the physical location of the page, and allows the faulting thread to continue.
Advantages
The demand paging system in AIX allows more virtual pages to be allocated than can be stored in physical memory. Demand paging also saves much of the overhead of creating new processes because the pages for execution do not have to be loaded until they are needed. If a process never makes use of a portion of its virtual space, valuable physical memory will never be used.
Pageable kernel
The AIX kernel is pageable; only some of the kernel is in physical memory at any one time. Kernel pages that are not currently being used can be paged out.
Pinned pages
Some parts of the kernel are required to stay in memory because it is not possible to perform a page-in while those pieces of code execute. These pages are said to be pinned. The interrupt processing portion of a device driver is pinned. Only a small part of the kernel is required to be pinned.
Data Structures
(The visual shows the effective address space mapping through the hardware Page Frame Table to physical memory and file system pages.)
Notes: Introduction
The main function of the VMM is to make translations from the effective address to the physical address. Address translation requires both hardware and software components. This section covers the relationship between the hardware and software components of the VMM.
Data structures
The diagram above shows the overall relationships between the major AIX data structures involved in mapping a virtual page to a physical page or to paging space.
Page faults
A page fault causes the AIX VMM to do the bulk of its work. The VMM handles the fault by first verifying that the requested page is valid. If the page is valid, the VMM determines the location of the page, recovers the page if necessary, and updates the hardware's page frame table with the location of the page. A faulted page is recovered from one of the following locations:
- Physical memory (but not in the hardware PFT)
- Paging disk (working object)
- File system object (persistent object)
Notes: Introduction
In a normal situation, an effective address refers to a piece of memory that is currently in real memory. We say the memory is paged in.
Illustration
The flow of the best case address translation is illustrated above.
Notes: Introduction
The size of the hardware Page Frame Table is limited; therefore, the hardware cannot satisfy all address translation requests. The VMM software must supplement the hardware table with a software-managed page table.
Illustration
When a translation cannot be found in the hardware table, a page fault is generated. The physical page may be resident in memory; however, the translation entry is not in the hardware table. The VMM must be called to update the hardware tables. The procedure is shown in the table above.
Procedure
These steps assume that the memory page is in memory but not in the hardware Page Frame Table.
Step Action
1. The hardware Page Frame Table is searched for a page translation and none is found.
2. The hardware generates a page fault, causing the VMM to be called.
3. The VMM first verifies that the requested page is valid. If the page is not valid, a kernel exception is generated.
4. If the page is valid, the VMM searches the software PFT for the page. This process resembles hardware processing, but uses a software page table instead. Only some parts of the software PFT are pinned.
5. If the page is found, the hardware Page Frame Table is updated with the real page number for this page, and the process resumes execution. No page-in of the page occurs, since it is already in memory.
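The reload path of this procedure (steps 4 and 5) can be sketched as a toy model. All names here (swpte, hwpte, reload_translation) are illustrative, not actual AIX structures:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: a larger software page table supplements a small
 * "hardware" table. Sizes are arbitrary for illustration. */
#define SW_ENTRIES 8
#define HW_ENTRIES 2

struct swpte { int valid; unsigned sid, pno, frame; };
struct hwpte { int valid; unsigned sid, pno, frame; };

/* Search the software PFT for (sid, pno); on a hit, install the
 * translation into a free (or victim) slot of the hardware PFT and
 * return the frame number. Returns -1 when the page is not resident,
 * meaning a page-in from backing store would be required. */
static int reload_translation(struct swpte *swpft, struct hwpte *hwpft,
                              unsigned sid, unsigned pno)
{
    for (int i = 0; i < SW_ENTRIES; i++) {
        if (swpft[i].valid && swpft[i].sid == sid && swpft[i].pno == pno) {
            struct hwpte *slot = &hwpft[0];          /* trivial victim choice */
            for (int j = 0; j < HW_ENTRIES; j++)
                if (!hwpft[j].valid) { slot = &hwpft[j]; break; }
            slot->valid = 1;
            slot->sid   = sid;
            slot->pno   = pno;
            slot->frame = swpft[i].frame;
            return (int)swpft[i].frame;              /* no page-in needed */
        }
    }
    return -1;  /* not resident: fall through to the page-in procedures */
}
```

The point of the sketch is step 5's shortcut: a hit in the software table only updates the hardware table, with no disk I/O.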
Notes: Introduction
If a page is not found in physical memory, the VMM determines whether it is on paging space or elsewhere on disk. If the page is in paging space, then the disk block containing the page is located, and the page is loaded into a free memory page.
Illustration
Working pages are mapped to disk blocks in the paging space. The procedure for loading a page from paging space is shown in the visual on the previous page.
Copyright IBM Corp. 2001, 2003 Unit 5. Memory Management 5-15
(Diagram: an XPT root block whose words point to 256 XPT direct blocks; each direct block covers 1 MB of the segment, one word per page, from page 0 through page 65535.)
Structure
Each segment that is mapped to paging space has the following XPT structure.
Description
The first level of the tree is the XPT root block. The second level consists of 256 direct blocks. Each word in the root block is a pointer to one of the direct blocks. Each word in a direct block represents a single page in the segment; it contains the page's state and disk block information. Each XPT direct block covers 1 MB of the 256 MB segment.
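The indexing arithmetic this structure implies can be sketched as follows. The constants and function names are illustrative, derived from the description above (4 KB pages, 1 MB per direct block), not taken from AIX source:

```c
#include <assert.h>

/* Illustrative arithmetic for the two-level XPT described above:
 * a root block of 256 words, each pointing at a direct block that
 * covers 1 MB (256 pages of 4 KB) of the 256 MB segment. */
#define XPT_PAGE_SIZE   4096u
#define XPT_DIRECT_SPAN (1024u * 1024u)   /* 1 MB covered per direct block */

/* Which root-block word (direct block) covers this segment offset? */
static unsigned xpt_root_index(unsigned offset)
{
    return offset / XPT_DIRECT_SPAN;                    /* 0 .. 255 */
}

/* Which word within that direct block describes the page? */
static unsigned xpt_direct_index(unsigned offset)
{
    return (offset % XPT_DIRECT_SPAN) / XPT_PAGE_SIZE;  /* 0 .. 255 */
}
```

For example, the last byte range of the segment (page 65535) lands in root word 255, direct word 255, matching the diagram.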
Procedure
In this procedure the faulting thread must be suspended until I/O for the faulting page has completed.
Step Action
1. The thread causing the fault is suspended.
2. The VMM looks up the object ID for this address in the Segment ID table and gets the External Page Table (XPT) root pointer.
3. The VMM finds the correct XPT direct block from the XPT root.
4. The VMM gets the paging space disk block number from the XPT direct block.
5. The VMM takes the first available frame from the free frame list. The free list contains one entry for each free frame of real memory.
6. The VMM issues an I/O request to the device with the logical block and physical address of the page to be loaded.
7. When the I/O completes, the VMM is notified and updates the hardware PFT.
8. The thread waiting on the frame is awakened and resumes at the faulting instruction.
The net effect is that the process or thread has no knowledge that a page fault occurred except for a delay in its processing.
Notes: Introduction
Persistent pages do not use external page tables. The VMM uses the information contained in a file's inode structure to locate the pages for the file.
Procedure
Persistent pages are mapped to local files located on file systems. The effective address for the mapped page of the local file is indexed in the Segment Information Table (SID). The SID entry points to the inode, allowing the VMM to find and page in the faulting block.
Persistent pages
AIX uses a large portion of memory as the file system buffer cache. The pages for files compete for storage the same way as other pages. The VMM schedules the modified persistent pages to be written to their original location on disk when:
- The VMM needs the frame for another page.
- The file is closed.
- The sync operation is performed.
Scheduling a page to be written does not mean that the data is written to disk immediately. A sync() operation flushes all scheduled pages to disk. The sync() operation is performed by the syncd daemon every 60 seconds by default, or by a user running the sync command.
Object Type
A. Working B. Persistent C. Client
Backing Store
1. A regular disk file 2. An NFS disk file 3. Paging disk
Notes: Introduction
Paging provides automatic backup copies of memory objects on disk. This copy is called the backing store and can be located on a paging disk, a regular disk file, or even on a network accessible disk file.
Questions
Using what you know about memory object types, match the object types on the left with the location of its backing store on the right in the visual above.
Notes: Introduction
Proper management of paging space is required for the system to perform well. Low paging space can result in failed applications and system crashes.
SIGDANGER
Application programs can ask AIX to notify them when paging space runs low by registering to receive a SIGDANGER signal. This feature allows applications to release memory or take other appropriate actions when paging space runs low. The default action for SIGDANGER is to ignore the signal.
Threshold
AIX has two paging space thresholds; they are: - Paging space warning level
- Paging space kill level
Application programs can monitor these thresholds and free paging space using the psdanger() function. Both thresholds are set with the vmtune (AIX 5.1) and vmo (AIX 5.2) commands.
Process
The table above describes the actions AIX takes when paging space becomes low.
Nokilluid
The SIGKILL signal is only sent to processes that do not have a handler for SIGDANGER and where the UID of the process is greater than or equal to the kernel variable nokilluid, which can be set with the vmtune (AIX 5.1) and vmo (AIX 5.2) commands. The value of nokilluid is 0 by default, which means processes owned by root are eligible to be sent a SIGKILL.
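The eligibility rule just described can be written out schematically. This is a model of the rule as stated in the text, not AIX kernel source:

```c
#include <assert.h>

/* Schematic of the kill-eligibility rule above: a process is sent
 * SIGKILL at the kill level only when it has no SIGDANGER handler
 * and its UID is greater than or equal to the nokilluid tunable. */
static int eligible_for_sigkill(int has_danger_handler,
                                unsigned uid, unsigned nokilluid)
{
    return !has_danger_handler && uid >= nokilluid;
}
```

With the default nokilluid of 0, even root-owned processes (UID 0) are eligible, exactly as the text notes.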
Example
The init process (pid 1) registers a signal handler for the SIGDANGER signal. The handler prints a warning message on the system console and attempts to free memory by unloading unused modules.

int danger(void)
{
    if (own_pid == SPECIALPID) {
        console(NOLOG, M_DANGER, "Paging space low!\n");
        unload(L_PURGE);    /* unload and remove any unused modules
                             * in kernel or library */
    }
    return (0);
}
PSALLOC=early
Causes paging space to be allocated as soon as the memory request is made. This helps to ensure that the paging space will be available if it is needed. Note that this policy holds only for this process and is not system-wide; otherwise the system-wide default applies. For AIX 4.3.2 and later releases the system default is Deferred Paging Space Allocation (DPSA), which means that paging space is not allocated until a page-out occurs. This can be controlled with vmtune -d {0,1}; vmtune -d 0 turns DPSA off, which means paging space is allocated when requested memory is first accessed. Note that this setting is a system-wide policy and applies to all processes running on the system.
BE0070XS4.0
Notes: Introduction
Individual processes may select when paging space will be allocated for them. This is called a paging space policy.
PSALLOC
A process that has the environment variable PSALLOC=early will cause the VMM to allocate paging space for any memory which is requested, whether or not the memory is accessed. This is the algorithm that was used on AIX v3.1.
When early allocation is selected, the SPEARLYALLOC flag is set in proc->p_flag. This flag is defined in proc.h as:

#define SPEARLYALLOC 0x04000000 /* allocates paging space early */
This flag can be seen through kdb by running the p <slot_number> subcommand. If the flag is set it will show up in the second set of FLAGS indicated by the name: SPEARLYALLOC.
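Testing the flag is a simple bitwise AND against p_flag, which can be sketched as follows (the helper function is illustrative; the flag value is the one quoted from proc.h above):

```c
#include <assert.h>

/* Flag value as quoted from proc.h in the text above. */
#define SPEARLYALLOC 0x04000000  /* allocates paging space early */

/* Does this p_flag word indicate early paging space allocation? */
static int uses_early_allocation(unsigned p_flag)
{
    return (p_flag & SPEARLYALLOC) != 0;
}
```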
Free Memory
(Diagram: when the number of free pages drops below minfree, pages are stolen until the free list reaches maxfree.)
Notes: Introduction
To maintain system performance, the VMM always wants some physical memory to be available for page-ins. This section describes the free memory list and the algorithms used to keep pages on the list.
Page stealer
The page stealer is invoked when the number of memory pages on the free list drops below the threshold defined by the value of minfree. The page stealer attempts to
replenish the free list until it reaches the high threshold defined by maxfree. The values of maxfree and minfree can be viewed or adjusted on AIX 5.1 with the vmtune command (/usr/samples/kernel/vmtune), and on AIX 5.2 with the vmo command.
Evidence
The page stealer is visible as the lrud kernel process.
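The two-threshold behavior can be written out schematically. This models the hysteresis described above (start below minfree, run until maxfree); it is not lrud source code:

```c
#include <assert.h>

/* Schematic of the minfree/maxfree thresholds: the stealer starts
 * when free pages drop below minfree and, once running, keeps
 * replenishing the free list until it reaches maxfree. */
static int stealer_should_run(unsigned free_pages,
                              unsigned minfree, unsigned maxfree,
                              int already_running)
{
    if (free_pages < minfree)
        return 1;                     /* wake the page stealer */
    if (already_running && free_pages < maxfree)
        return 1;                     /* keep replenishing */
    return 0;                         /* target reached: stop */
}
```

The gap between the two thresholds prevents the stealer from thrashing on and off around a single watermark.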
(Diagram: the clock hand rotates over physical pages; a page's reference bit is changed to zero when the clock hand passes.)
Process
This algorithm is commonly used in operating systems when the hardware provides only a reference bit for each page of physical memory. The hardware automatically sets the reference bit for a page translation whenever the page is referenced.
Step Action
1. Each time a page is referenced, the hardware sets the reference bit in the PTE (Page Table Entry) for that page.
2. The clock hand algorithm scans all PTEs, checking the reference bit.
3. If the reference bit is found set, the bit is reset.
4. If the reference bit is found reset, the page is stolen.
5. The process continues until the number of free pages reaches maxfree.
Bucket size
The clock hand algorithm examines a set of frames at a time. If it were to examine all memory frames in the system in one cycle, then it is likely that all frames would have been referenced by the time the algorithm starts its second pass. The number of frames considered in each cycle is known as the lrud bucket size.
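The sweep itself can be sketched as a minimal second-chance loop over an array of reference bits. This is a textbook model of the clock algorithm described above, not lrud source; names and the two-pass bound are illustrative:

```c
#include <assert.h>

/* Minimal "clock hand" sweep: referenced pages get their bit cleared
 * (a second chance); unreferenced pages are stolen, until `needed`
 * frames have been freed or two full passes have been made.
 * Returns the number of pages actually stolen. */
static unsigned clock_sweep(unsigned char *ref, unsigned char *stolen,
                            unsigned npages, unsigned *hand, unsigned needed)
{
    unsigned freed = 0, scanned = 0;
    while (freed < needed && scanned < 2 * npages) {
        unsigned i = *hand;
        *hand = (*hand + 1) % npages;   /* advance the hand */
        scanned++;
        if (stolen[i])
            continue;                   /* already on the free list */
        if (ref[i])
            ref[i] = 0;                 /* second chance: clear the bit */
        else {
            stolen[i] = 1;              /* steal: add frame to free list */
            freed++;
        }
    }
    return freed;
}
```

A real implementation works on sets of frames (the lrud bucket) rather than single pages, but the clear-or-steal decision per frame is the same.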
Notes: Introduction
Not all page and protection faults can be handled by the OS. When a fault occurs that cannot be handled by the OS, the system will panic and immediately halt.
Checkpoint
1. The system hardware maintains a table of recently referenced ______ to ______ address translations.
2. The S_____ P____ F____ T____ contains information on all pages resident in _______ _______.
3. Each ______ _______ has an XPT.
4. A _________ signal is sent to every process when the free paging space drops below the warning threshold.
5. The ________ environment variable can be used to change the paging space policy of a process.
6. A ______ ______ when interrupts are disabled will cause the system to crash.
Notes:
Exercise
Complete exercise five
Consists of theory and hands-on
Ask questions at any time
What you will do:
Observe the effect of the AIX paging space allocation policies on an application program
Investigate what effect running out of paging space has on applications and the system
Diagnose a crash dump from a system with paging space depletion
Notes:
Turn to your lab workbook and complete exercise five.
Unit Summary
Virtual memory management
Memory object types
Demand paging system
Backing store
Paging space allocation policies
Free memory list - clock hand
Notes:
References
AIX Documentation: AIX Installation in a Partitioned Environment
Hardware Management Console for pSeries Installation and Operations Guide
Available from http://www-1.ibm.com/servers/eserver/pseries/library/hardware_docs/hmc.html
Unit Objectives
At the end of this lesson you should be able to:
Describe the implementation of logical partitioning
List the components required to support partitioning
Understand the terminology relating to partitions
Notes:
Partitioning
Subdivision of a single machine to run multiple operating system instances
Collection of resources able to run an operating system image:
- Processors
- Memory
- I/O devices
Physical partition: building blocks
Logical partition: independent assignment of resources
Notes: Introduction
Partitioning is the term used to describe the ability to run multiple independent operating system images on a single server machine. Each partition has its own allocation of processors, memory and I/O devices. A large system that can be partitioned to run multiple images offers more flexibility than using a collection of smaller individual systems.
Partitioning types
In the UNIX marketplace, there are two main types of partitioning available:
- Physical partitioning
- Logical partitioning
There are a number of distinct differences between the two implementations.
Physical Partitioning
(Diagram: three building blocks, each with dedicated CPU, memory and I/O, joined by an interconnect; two building blocks form one physical partition running one operating system, and the third forms a second physical partition running another.)
Notes: Introduction
Physical partitioning is the term used to describe a system where the partitions are based around physical building blocks. Each building block contains a number of processors, system memory and I/O device connections. A partition consists of one or more physical building blocks. The diagram shows a system that contains three building block units. The system currently is configured to run two partitions. One partition consists of all of the resources (CPU, memory, I/O) on two physical building blocks. The other partition contains all of the resources on the remaining building block.
Properties
A system that implements physical partitioning has the following characteristics:
- Multiple memory coherence domains, each with an OS image. A memory coherence domain is a group of processors that are accessing the same physical system memory. Memory coherence traffic (such as cache line invalidation and snooping) is shared between the processors in the domain.
- Separation controlled by interfaces between physical units. Memory coherence information stays within the physical building blocks allocated to the partition. A processor that is part of one building block cannot access the memory on another building block that is not part of the memory coherence domain (partition).
- Strong software isolation, strong hardware fault isolation. Applications running inside an operating system instance have no impact on applications running inside another partition. A failure of a component on one system building block will not (or should not) impact a partition running on other building blocks. However, the system as a whole still contains components that could impact multiple partitions in the event of failure, for example a failure of the backplane interconnect.
- Granularity of allocation at the physical building block level. A partition that does not have enough resources can only be grown by incorporating whole building blocks, and therefore will include all of the resources on the building block, even though they may not be desired. For example, a partition that needs more processors will need to add another building block. By doing so, the partition will also incorporate the memory and I/O devices on that building block.
- Resources allocated only by contents of complete physical group. The granularity of growing individual resources (CPU, memory, I/O) is determined by the amount of each resource on the physical building block being added to the partition. For example, in a system where each building block contains 4 processors, a partition that required more CPU power would receive an increment of 4 processors, even though perhaps only 1 or 2 would be sufficient.
Example
The Sun Enterprise 10000 and Sun Fire 15K are examples of systems that use physical partitioning. In the case of Sun machines, the term domain is used instead of partition.
Logical Partitioning
(Diagram: a managed system whose processors and memory are divided by the hypervisor among LPAR 1, LPAR 2 and LPAR 3, each running its own OS; the HMC attaches over RS232/RS422 serial links and Ethernet.)
Notes: Introduction
Logical partitioning is the term used to describe a system where the partitions are created independently of any physical boundaries. The diagram shows a system configured with three partitions. Each partition contains an amount of resource (CPU, memory, I/O slots) that is independent of the physical layout of the hardware. In the case of pSeries systems, an additional system, the Hardware Management Console for pSeries (HMC), is required for configuring and administering a partitioned server. The HMC connects to the system through a dedicated serial link connection to the service processor. Additionally, applications running on the HMC communicate over an Ethernet connection with the operating system instances in the partitions to provide service functionality, and in the case of AIX 5.2, dynamic partitioning capabilities.
Properties
A system that implements logical partitioning has the following characteristics:
- One memory coherence domain with multiple OS images. This basically means that all processors in the system are aware of the physical memory addresses being accessed by the other processors, even if they are in a different partition. Since each partition is allocated its own portion of physical memory, this has no real performance impact.
- Separation controlled mainly by address mapping mechanisms. Rather than using physical boundaries between components to control the memory access available to each partition, a set of address mapping mechanisms provided by hardware and firmware features is used. The operating system running in each partition is restricted in its ability to access physical memory, and is only permitted to access physical memory that has been explicitly assigned to that partition.
- Strong software isolation, fair-to-strong hardware fault isolation. Applications running inside an operating system instance have no impact on applications running inside another partition. The failure of the operating system in one partition has no impact on the others.
- Granularity of allocation at the logical resource level (or below). In the case of pSeries systems, the current unit of allocation for each resource type is: one CPU, an individual I/O slot, or 256 MB of memory.
- Resources allocated in almost any combinations or amounts. The amount of memory allocated to a partition is independent of the number of CPUs or I/O slots. Each resource quantity is based on the system administrator's understanding of the needs of the partition, rather than the physical layout of the machine.
- Some resources can even be shared. In the case of pSeries systems, some resources are shared by all partitions. These are divided into two classes: physical resources (such as power supplies) that are visible to each partition, and logical resources, where each partition is given its own instance, for example, the operator panel and virtual console devices provided by the HMC.
Firmware
Global firmware image
Partition specific firmware instance
Hypervisor code

Operating System
Use of Hypervisor callout by VMM
Means no LPAR support for older operating systems (e.g. AIX 4.3)
Notes: Introduction
No single feature determines whether a pSeries system is capable of implementing LPAR or not. Rather, it is a combination of features provided by different components, all of which must be present.
Hardware
The following hardware features are required for LPAR support: - Interrupt controller hardware The interrupt controller hardware on the system directs interrupts to a CPU for processing. In the case of a partitioned system, the interrupt controller hardware must be capable of maintaining multiple global interrupt queues, one for each partition. The hardware must be capable of recognizing the source of an interrupt and determining which partition should receive the interrupt notification. For
Copyright IBM Corp. 2001, 2003 Unit 6. Logical Partitioning 6-9
example, an interrupt from a SCSI adapter card must be sent to the partition that controls the card and the devices connected to it. If the interrupt were sent to a CPU that is part of a different partition, the CPU would be unable to access the device to process the interrupt.
- Processor support. A processor requires three new registers in order to be used in a partitioned environment. The POWER4 processor is the first CPU used in pSeries systems that has the required capabilities. The registers are:
Real Mode Offset (RMO) register. The RMO register is used by the processor when referencing an address in real mode. All processors in the same partition have the same value loaded in the RMO register. The use of the register is described in detail in a later part of this unit.
Real Mode Limit (RML) register. The RML register is also used when the processor is referencing an address in real mode. All processors in the same partition have the same value loaded in the RML register. The use of the register is described in detail in a later part of this unit.
Logical Partition Identity (LPI) register. The LPI register contains a value that indicates the partition to which the processor is assigned. All processors in the same partition have the same value loaded in the LPI register.
- Hypervisor support. In order to implement the required isolation between partitions, a processor must have hypervisor support. The hypervisor is described in detail later. A processor implements hypervisor support by recognizing the HV bit in the Machine Status Register (MSR). The HV bit of the MSR, along with the Problem State bit, indicates whether the processor is in hypervisor mode. Hypervisor mode is implemented in a similar fashion to the system call mechanism used to transition the processor between Problem State (user mode) and Supervisor State (kernel mode). Hypervisor mode can only be invoked from Supervisor State; in other words, only kernel code can make hypervisor calls.
Firmware
The job of firmware in a system is to:
- Identify and configure system components
- Create a device tree
- Initialize/Reset system components
- Locate an operating system boot image
- Load the boot image into memory and transfer control
- When the operating system is running, it has control over the hardware. In order to allow AIX to run on different hardware platform types, it uses a component of firmware called Run-Time Abstraction Services (RTAS) to interact with the hardware. The RTAS functions are provided by pSeries RISC Platform Architecture (RPA) platforms to insulate the operating system from having to know about and manipulate a number of key functions which ordinarily would require platform-dependent code. The OS calls these functions rather than manipulating hardware registers directly, reducing the need for hard-coding the OS for each platform. Examples of RTAS functions include accessing the time-of-day clock and updating the boot list in NVRAM.
- When the operating system image is terminated, control is returned to firmware.
Since firmware in a partitioned system now has to deal with multiple operating system images, a special version is required that provides additional functionality. The functionality of firmware is now divided into two parts, known as global firmware and partition firmware. The global firmware is initialized when the system is first powered on. It identifies and configures all of the hardware components in the system, and creates a global device tree that contains information on all devices. When a partition is started, a partition-specific instance of firmware is created. The partition-specific instance contains a device tree that is a subset of the global device tree, and contains only the devices that have been assigned to the partition. It then continues with the task of locating an operating system image and loading it. The RTAS functionality provided by partition firmware performs validation checks and locking to ensure that the partition is permitted to access the particular hardware feature being used, and that its use does not conflict with that of another partition. An additional component of firmware required for LPAR support is the hypervisor function.
Hypervisor
The hypervisor can be considered as a special set of RTAS features that run in hypervisor mode on the processor. The hypervisor is trusted code that allows a partition to manipulate physical memory that is outside the region assigned to the partition. The hypervisor code performs partition and argument validation before allowing the requested action to take place. The hypervisor provides the following functions:
- Page Table access. Page tables are described later in this unit when we examine the changes in translating a virtual address to a physical address in the LPAR environment.
- Virtual console serial device. When multiple partitions are running on a system, each partition requires an I/O device to act as the console. Most pSeries systems have two or three native serial ports, so it would be impractical to insist that each partition have its own native serial port, or to enforce the addition of extra serial adapters. The hypervisor provides a
virtual serial console interface to each partition. The I/O from the virtual device is communicated to the HMC via the serial link from the service processor in the partitioned system.
- Debugger support. The hypervisor also provides support that permits the debugger running on the system to access specific memory and register locations.
Operating system
The operating system that will run in a partition needs to be modified to use hypervisor calls to manipulate the Page Frame Table (PFT), rather than maintain the table directly in memory. A few other low level kernel components are aware of the fact that the OS is running inside a partition. The vast bulk of the kernel however is unaware, since there is no need for any changes. This allows the operating system to present a consistent interface to the application layer, regardless of whether it is running in a partition or running as the only operating system on a regular standalone machine. The net effect of the required changes is that an operating system not designed for use in a partitioned environment will fail to boot. This means that older operating systems (such as AIX 4.3) will not work in a partition.
(Diagram: AIX components - Boot/Config, VMM, kernel, virtual debugger, TTY dev & dump driver - interacting with firmware through register and memory access and TTY data streams.)
Notes: Introduction
The diagram summarizes the interfaces used by the operating system to interact with the hardware platform. It details the different components of the OS that interact with each function provided by the platform firmware. The Platform Adaptation Layer (PAL) is an operating system component similar in function to the RTAS layer provided by firmware. In other words, its job is to mask the differences between hardware platforms from other parts of the kernel.
(Diagram: the effective address spaces of Process 1 and Process 2 mapped by the VMM to physical memory, filesystem pages and paging space.)
Notes: Introduction
The job of the Virtual Memory Manager (VMM) component of the operating system is to manage the effective address space of each process on the system, and ensure that pages are mapped to physical memory when required so that they can be accessed by the processors. The translation of a virtual address to a physical address is an area of the operating system that has undergone some changes to allow the implementation of a partitioned environment, since there are now multiple operating system images co-existing in a single machine.
Notes: Introduction
Before examining the changes in address translation for the LPAR environment, we first take a closer look at the memory layout on a non-LPAR system.
Device I/O
The hardware provides memory-mapped access to I/O devices. A system has at least one Host Bridge (HB), which is mapped to a region in the address map. When the processor writes to specific addresses, the data is passed to the Host Bridge, rather than being stored in the DRAMs or other components used to implement physical memory. The Host Bridge device allocates portions of its address space to each I/O adapter plugged into a slot it controls. Data written to the Host Bridge is passed to a specific I/O adapter card, based on the address being written. Each HB is allocated a unique portion of the system address space.
Physical memory
Another feature of the diagram worth noting is that the address range of physical memory in the system is not necessarily contiguous. The physical memory in the system always starts with address zero; however, depending on the total amount of memory and the number of Host Bridge devices in the system, the physical address range may be divided into multiple components. In other words, there appear to be holes in the physical address range used by the system. This is perfectly normal, and the VMM system of AIX (and most other modern operating systems) is designed to cope with it. As an example, a system with 8 GB total of physical memory may address 3 GB of that memory using physical addresses in the range 0 to 3 GB, and the remaining part of memory using addresses 4.5 GB to 9.5 GB.
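The 8 GB example can be sketched as a simple mapping from an offset into installed memory to the physical address the hardware decodes. The ranges and the helper are illustrative only, taken from the example above:

```c
#include <assert.h>

#define GB (1024ULL * 1024 * 1024)

/* Sketch of the example above: 8 GB of installed memory addressed as
 * two ranges, 0-3 GB and 4.5-9.5 GB, with a hole in between reserved
 * for Host Bridge (I/O) mappings. Maps a linear offset into installed
 * memory to the corresponding physical address. */
static unsigned long long phys_addr(unsigned long long offset)
{
    if (offset < 3 * GB)
        return offset;                        /* first range: 0 - 3 GB   */
    return offset - 3 * GB + (9 * GB / 2);    /* second: 4.5 - 9.5 GB    */
}
```

Walking off the end of the first range jumps straight over the 1.5 GB hole, which is exactly the discontiguity the VMM must tolerate.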
- On non-LPAR systems, real address = physical address
- On LPAR systems, real address != physical address
- Only one physical address zero in the system
  - Physical address zero used by hypervisor
  - Each partition requires its own address zero
  - Requires mapping from the real address generated by the partition to the physical address used by the memory hardware
Notes: Introduction
In addition to considering the ranges used when addressing memory, another important distinction to make is the type of access being performed. The function of the VMM is to translate a virtual address into a real (or physical) address. Address translation can be enabled or disabled, and the status of this is indicated by bits in the MSR. Address translation for instructions and data can be enabled or disabled independently.
Real address
A real address is an address that is generated by the processor when address translation is disabled. Typically real addresses are used by specialized parts of kernel code, such as the boot process (before the VMM is initialized) or interrupt/exception handler code. Real mode memory starts at address zero. The size of real mode memory is dependent on the requirements of the operating system. Another important thing to note is that on a non-LPAR system, a real address is equivalent to a physical address.
Copyright IBM Corp. 2001, 2003 Unit 6. Logical Partitioning 6-17
LPAR changes
The assertion that a real address is the same as a physical address no longer holds true in the partitioned environment, since a system only has a single overall physical address range (although it may be split into multiple sections). Each partition requires its own address zero, but there is only one true physical address zero inside a system. In actual fact, physical address zero is used by the hypervisor, but we can generalize the statement as: for any given address n, each partition expects to be able to access address n. Obviously they can't all access the same physical address n, so something needs to be done to accommodate this. The details come later, but for now, just know that:
- For real mode addresses, this is where the RMO register of the processor is used.
- For virtual addresses, partition page tables are used to translate the partition-specific address n into a system-wide physical address.
Notes: Introduction
The amount of real mode memory required by a partition depends upon two factors:
1) The version of the operating system.
2) The amount of memory allocated to the partition.
Alignment
Physical memory allocated in a partitioned environment for use as Real Mode memory by a partition must be contiguous, and aligned on an address boundary that is divisible by the size of the real mode region. For example, a 16GB real mode region must be aligned on an address boundary divisible by 16GB (i.e. 16GB, 32GB, 48GB, 64GB etc.). As we will see later, address 0 cannot be used, since it is used by the hypervisor.
AIX 5.1
The VMM in AIX 5.1 maintains tables in real mode memory that scale with the total amount of memory allocated to the partition. As a result, partitions running AIX 5.1 may need 256MB, 1GB or 16GB of real mode memory, rather than the 256MB required by AIX 5.2 and Linux. The alignment requirements of the 1GB and 16GB real mode regions can cause problems on systems that are using a large percentage of their physical memory. In these situations, the order in which partitions are started can determine whether all partitions can be started.
Address Translation
If address translation is enabled, the VMM converts the virtual address to a real address:
- Treats the address as segment ID, page number and page offset
- Determines the physical page starting address from the segment ID and page number
  - Non-LPAR systems use the software PFT (page frame table)
  - LPAR systems use partition page tables (stored outside the partition)
- Adds the page offset to the physical page address
- Value of the RMO register is not used
Notes: Introduction
The method used by a partition to interpret an address depends on whether virtual address translation is currently enabled or disabled.
Translation enabled
When address translation is enabled, the VMM is in charge. In a normal non-LPAR system, the VMM is effectively translating the virtual address to a real address, but because a real address is the same as a physical address, there is no problem. In a partitioned environment, the VMM uses a slightly different method to convert a virtual address into a true system-wide physical address. The VMM converts the virtual address into a real address, however the real address is a logical address within all of the memory assigned to the partition. The VMM then performs an additional step, and converts the partition-specific real address into a system-wide physical address. It accomplishes this using partition page tables.
Translation disabled
When address translation is disabled, the RMO (Real Memory Offset) register of the processor is used in the address calculation. The processor knows when it is dealing with a real address, as indicated by the status bits in the MSR. When dealing with a real address, the processor automatically (and without the knowledge of the operating system) adds the value loaded in the RMO register to the address, converting the partition-specific real address into a true system-wide physical address before submitting it to the memory controller hardware as part of the request to read or write the memory location. The RML (Real Mode Limit) register is used to limit the amount of memory that a partition can access in real mode.
- Multiple PMBs assigned to provide the logical address space for a partition
  - e.g. a 2GB partition requires 8 PMBs
  - PMBs assigned to a partition need not be contiguous
- Logical Memory Block (LMB) is the name given to a block of memory when viewed from the partition perspective
  - Each LMB has a unique ID within a partition, and is associated with a PMB
- Some PMBs are used for special purposes, and cannot be allocated to partitions:
  - Partition page tables
  - TCE space
  - Hypervisor
Figure 6-12. Allocating Physical Memory
Notes: Introduction
The physical memory of a partitioned system must be divided up between the partitions that are to be started.
Terminology
The physical memory of the system is divided up into 256MB chunks called Physical Memory Blocks (PMBs). Each PMB has a unique ID within the system, so that the hypervisor can track which PMBs are allocated for specific purposes. In order to be activated, a partition will be allocated sufficient PMBs to satisfy the minimum memory requirement as indicated by the partition profile being activated. The PMBs assigned to a partition need not be contiguous. The partition views the memory assigned to it as a number of logical memory blocks (LMBs). Each LMB has an ID that is unique within the partition.
Some PMBs in the system are used for special purposes, and cannot be allocated for use by partitions. The number of PMBs allocated for these special purposes depends upon many factors.
- Placed in contiguous physical memory
- Aligned on an address boundary divisible by the table size
  - e.g. a 64MB page table aligned on a 64MB address boundary
Notes: Introduction
As mentioned previously, each partition is allocated space for a partition page table. The table is used by the VMM in the partition to translate a partition specific virtual address into a system-wide physical address.
A page table requirement that is not a power of two is rounded up to the next size that is a power of two. So a partition that has 2.5GB of memory has a page table requirement of 40MB, but this would be rounded up to 64MB, the next power of two. Page tables must be allocated on an address boundary that is divisible by the size of the page table. In addition, page tables must be allocated in contiguous physical memory. The hypervisor will attempt to place multiple page tables of 128MB or smaller inside a single PMB that has been allocated for page table use. If existing PMBs allocated for page table use do not contain sufficient space (or sufficient contiguous space), then the hypervisor will allocate more PMBs for page table use. The size of the page table allocated to a partition is large enough to handle the maximum memory amount the partition may grow to. The maximum memory amount is an attribute of the partition that is used in limiting the extent of dynamic LPAR operations.
Performance penalty
There is a small performance penalty associated with the action of the VMM in a partition accessing the partition page tables. This performance penalty is only experienced when a virtual page is mapped into physical memory. If the virtual page is already in physical memory, then the VMM can perform the virtual to physical address translation by accessing the Translation Lookaside Buffer (TLB), a processor specific cache of the most recently accessed virtual to physical translations. The performance penalty is only really noticeable when a partition is performing heavy paging activity, since this means the page tables are being accessed frequently.
- TCE space allocated at the top of physical memory
- Amount of TCE space depends on the number of PCI slots/drawers
  - 512MB for 5-8 I/O drawers on p690
  - 256MB for all others
Figure 6-14. Translation Control Entries
Notes: Introduction
Host bridge devices use Translation Control Entries (TCEs) to allow a PCI adapter that can only generate a 32-bit address (i.e. an address in the range 0 to 4GB) to access system memory above the 4GB address range. The translation entries are used to convert the 32-bit I/O address generated by the adapter card on the I/O bus into a 64-bit address that the host bridge will submit to the system memory controller.
TCE tables
TCE tables contain information on the current TCE mappings for each host bridge device. In a standalone system, the operating system controls all host bridge devices in the system, therefore all PCI slots are controlled by a single operating system instance. In this case, the TCE tables exist within the memory image of the operating system.
LPAR changes
In the partitioned environment, there is no requirement for all of the slots of a single host bridge device to be under the control of a single partition. As an example, a single host bridge device may support 4 PCI slots, and each slot may be assigned to a different partition. Since the TCEs need to be manipulated by the operating system as it establishes a mapping to the adapter card, we now have a situation where multiple partitions need to access adjacent memory locations.

Rather than having the TCE tables under the control of a special partition, they are placed under the control of the hypervisor. The memory locations are not under the control of any specific partition. The hypervisor allocates each partition valid windows into the TCE address space that relate to the adapter slots assigned to the partition. Access to the TCE tables is performed by the partition in a manner similar to accessing partition page tables: the partition makes a hypervisor call (similar to a system call), and after validating the permissions and arguments, the hypervisor performs the requested action on the TCE table entry.

Another benefit of having the TCE space under the control of the hypervisor is that the windows that are valid for each partition can be changed on the fly, which is a requirement for the ability to dynamically reassign an I/O slot from one partition to another with a DLPAR operation.

The amount of memory allocated for TCE space depends on the number of host bridge devices (and PCI slots) in the system. Currently a p690 system that has between 5 and 8 I/O drawers will use 512MB of memory (2 PMBs) for TCE space. p690 systems with fewer than 5 I/O drawers, and all other LPAR-capable pSeries systems, use 256MB (1 PMB) for TCE space. TCE space is always located at the top of physical memory.
Hypervisor
- Similar to the system call mechanism
  - Hypervisor bit in the MSR indicates the processor mode
  - Can only be invoked from Supervisor (kernel) mode
- Hypervisor code validates arguments and ensures each partition can only access its allocated page table and TCE space
  - Checks tables of PMBs allocated to each partition
  - Prevents a partition from accessing physical memory not assigned to the partition
Notes: Introduction
The hypervisor is the name given to code that runs under the hypervisor mode of the processor. The hypervisor code is supplied as part of the firmware image loaded onto the system. It is loaded in the first PMB in the system, starting at physical address zero.
Hypervisor mode
Hypervisor mode is entered using a mechanism similar to that used when a user application makes a system call. When a user application makes a system call, the processor state transitions between Problem State (user mode) and Supervisor State (kernel mode), and the kernel segment becomes visible. The transition to hypervisor mode can only be made from Supervisor State (i.e. kernel mode). Making a hypervisor call from user mode results in a permission denied error.
Purpose
The hypervisor is trusted code that allows a partition to manipulate memory that is outside the bounds of that allocated to the partition. The operating system must be modified for use in the LPAR environment to make use of hypervisor calls to maintain page frame tables and TCE tables that would normally be managed by the OS directly if it were running on a non-LPAR system. This means that the parts of the VMM used for page table management and device I/O mapping are aware of the fact that the operating system is running within a partition. The hypervisor routines first validate that the calling partition is permitted to access the requested memory before performing the requested action.
[Diagram: physical memory map of a system with two active partitions. LPAR 1's real mode region starts at physical address N (RMO = N); LPAR 2's starts at physical address M (RMO = M). The hypervisor occupies physical address 0.]
Notes: Introduction
The diagram above shows a sample system that has two active partitions. The first PMB is allocated to the hypervisor, and the PMB at the top of physical memory is allocated for TCE space.
LPAR 1
LPAR 1 has 4.5GB of memory allocated to the partition. The partition needs to run AIX 5.1, so this means it has a real mode memory requirement of 1GB, which must be contiguous. This means the first set of PMBs allocated to the partition must be contiguous for at least 1GB, and aligned on a 1GB address boundary. The remaining 3.5GB allocated to the partition consists of 14 PMBs, which may or may not be contiguous.
A partition with 4.5GB of memory has a page table requirement of 72MB, which will be rounded up to 128MB. If this is the first partition to be activated, the page table will be placed in a PMB that is marked by the hypervisor for use as page table storage. The partition will be permitted to access the portions of TCE space that are used to map the I/O slots that are assigned to the partition.
LPAR 2
LPAR 2 has 2GB of memory assigned. It is running AIX 5.2, so is quite happy with just 256MB of real mode memory. Since this is the same size as a PMB, it effectively means that a partition running AIX 5.2 can consist of just the number of PMBs required to satisfy the requested memory amount. The allocated PMBs need not be contiguous, however the system firmware will allocate them in a contiguous fashion where possible.

A partition with 2GB of memory (and an attribute of a maximum of 2GB) requires a page table of 32MB. This is already a power of two, and so at partition activation time, the firmware allocates a page table of 32MB. It only allocates a new PMB for page tables if free space inside a PMB already being used for page tables cannot be found. In this example, there was one PMB allocated for page tables, and only 128MB was being used. This means the 32MB page table for LPAR 2 shares the same PMB as the 128MB page table for LPAR 1.

LPAR 2 is permitted to access the portions of TCE space required for mapping the I/O slots assigned to the partition.
Typical example
The diagram shows a situation where multiple partitions may have been activated and then terminated, resulting in the seemingly sparse allocation of PMBs. The algorithms used by the firmware to allocate PMBs try to make best use of those available, and are careful to avoid encroaching on a 16GB-aligned, 16GB contiguous group of PMBs if it can be avoided, since these are required for AIX 5.1 partitions that are 16GB or larger in size.
Checkpoint
1) What processor features are required in a partitioned system?
2) Memory is allocated to partitions in units of __________MB.
3) All partitions have the same real mode memory requirements. True or False?
4) In a partitioned environment, a real address is the same as a physical address. True or False?
5) Any piece of code can make hypervisor calls. True or False?
6) Which physical addresses in the system can a partition access?
Notes: Introduction
Answer all of the questions above. We will review them as a group when everyone has finished.
Unit Summary
- Hardware and software (operating system) changes are required for LPAR
  - Can't run LPAR on just any system
  - Can't use just any OS inside a partition
- Resources (CPU, memory, I/O slots) are allocated to partitions independently of one another
  - A partition can receive as much (or as little) of each resource as it needs
- Multiple partitions on a single machine imply changes to the addressing mechanism used by the operating system
  - Can't have all partitions using the same physical address range
- The hypervisor is special code called by the operating system that allows it to modify memory outside the partition
Notes:
References
AIX Documentation: Kernel Extensions and Device Support Programming Concepts
Unit Objectives
At the end of this lesson you should be able to:
- List the design objectives of the logical and virtual file systems.
- Identify the data structures that make up the logical and virtual file systems.
- Use kdb to identify the data structures representing an open file.
- Use kdb to identify the data structures representing a mounted file system.
- Given a file descriptor of a running process, locate the file and the file system the descriptor represents.
- Identify the basic kernel structures for tracking LVM volume groups, logical and physical volumes.
- Identify the kdb subcommands for displaying these structures.
Notes:
Notes: Introduction
This unit covers the interface, services and data structures that are provided by the Logical File System (LFS) and the Virtual File System (VFS).
- A CD-ROM file system, which supports ISO-9660, High Sierra and Rock Ridge formats
Extensible
The LFS/VFS interface also provides a relatively easy means by which third party file system types can be added without any changes to the LFS.
[Diagram: file I/O layering, from the read()/write() system calls at the top, through the file system (JFS, JFS2), the VMM and LVM, down to the device driver and the device.]
Notes: Introduction
Several layers of the AIX kernel are involved in supporting file system I/O, as described in this section.
Hierarchy
Access to files and directories by a process is controlled by the various layers in the AIX 5L kernel, as illustrated above.
Layers
The layers involved in file I/O are described in this table:

System call interface: A user application can access files using the standard interface of the read() and write() system calls.
Logical file system: The system call interface is supported in the LFS with a standard set of operations.
Virtual file system: The VFS defines a generic set of operations that can be performed on a file system. Different physical file systems (JFS, JFS2, NFS) can handle the request. The file system type is invisible to the user.
File system: Files are mapped to virtual memory. I/O to a file causes a page fault and is resolved by the VMM fault handler.
Device drivers: Device driver code interfaces with the device. It is invoked by the page fault handler. The LVM is the device driver for JFS and JFS2.
[Diagram: the major data structures discussed in this unit: the u-block, vnode, gnode, inode and vfs structures, and the underlying file system.]
Notes: Introduction
This illustration shows the major data structures that will be discussed in this unit. The illustration is repeated throughout the unit highlighting the areas being discussed.
File system
Each file system type extension provides functions to perform operations on the file system and its files. Pointers to these functions are stored in the vfsops (file system operations) and vnodeops (file operations) structures.
[Diagram: n = open("file") returns a file descriptor indexing the process-private user file descriptor table in the u-block; each entry's fp field points to a system file table entry, whose f_data field points to a global vnode.]
Notes: Introduction
The user file descriptor table and the system file table are the key databases used by the LFS. These memory structures and their relationship to vnodes are discussed in this section.
vnode
The vnode provides the connection between the LFS and the VFS. It is the primary structure the kernel uses to reference files. Each time an object is located, a vnode for that object is created. The vnode will be covered in more detail later.
struct ufd {
        struct file     *fp;
        unsigned short  flags;
        unsigned short  count;
#ifdef __64BIT_KERNEL
        unsigned int    reserved;
#endif /* __64BIT_KERNEL */
};
Notes: Introduction
The user file descriptor table is private to a process and located in the process u-area. When a process opens a file, an entry is created in the user's file descriptor table. The index of the entry in the table is returned by open() as a file descriptor.
Table management
One or more slots of the file descriptor table are used for each open file. The file descriptor table can extend beyond the first page of the u-block, and is pageable. There
is a fixed upper limit of 65534 open file descriptors per process (defined as OPEN_MAX in /usr/include/sys/limits.h). This value is fixed, and may not be changed.
Notes: Introduction
The system file table is a global resource and is shared by all processes on the system. One entry is allocated for each unique open of a file, device, or socket in the system.
Structure definition
The file structure is described in /usr/include/sys/file.h. In the visual above the fileops definitions for __FULL_PROTO have been omitted for clarity.
Table management
The system file table is a large array of file structures. The array is partly initialized. It grows on demand and is never shrunk. Once entries are freed, they are added back onto the free list. The table can contain a maximum of 1,000,000 entries and is not configurable. The head of the free list is pointed to by ffreelist.
Copyright IBM Corp. 2001, 2003 Unit 7. LFS, VFS and LVM 7-13
Table entries
The file table array consists of struct file data elements. Several of the key members of this data structure are described in this table:

f_count: A reference count field detailing the current number of opens on the file. This value is increased each time the file is opened, and decremented on each close(). Once the reference count is zero, the slot is considered free, and may be re-used.
f_flag: Various flags described in fcntl.h.
f_type: A type field describing the type of file:

        /* f_type values */
        #define DTYPE_VNODE     1       /* file */
        #define DTYPE_SOCKET    2       /* communications endpoint */
        #define DTYPE_GNODE     3       /* device */
        #define DTYPE_OTHER     -1      /* unknown */

f_offset: A read/write pointer.
f_data: Defined as f_up.f_uvnode, it is a pointer to another data structure representing the object (typically the vnode structure).
f_ops: A structure containing pointers to functions for the following file operations: rw (read/write), ioctl, select, close and fstat.
vnode/vfs Interface
Notes: Introduction
The interface between the logical file system and the underlying file system implementations is referred to as the vnode/vfs interface. This interface provides a logical boundary between generic objects understood at the LFS layer, and the file system specific objects that the underlying file system implementation must manage.
Data structures
vnodes and vfs structures are the primary data structures used to communicate through the interface (with help from vmount).
Description
Descriptions of the vnode, vfs and vmount structures are given in this table:
vnode: Represents a single file or directory
vfs: Represents a mounted file system
vmount: Contains specifics of the mount request
vnode
struct vnode {
        ushort          v_flag;
        ulong32int64    v_count;      /* the use count of this vnode */
        int             v_vfsgen;     /* generation number for the vfs */
        Simple_lock     v_lock;       /* lock on the structure */
        struct vfs      *v_vfsp;      /* pointer to the vfs of this vnode */
        struct vfs      *v_mvfsp;     /* pointer to vfs which was mounted over */
                                      /* this vnode; NULL if not mounted */
        struct gnode    *v_gnode;     /* ptr to implementation gnode */
        struct vnode    *v_next;      /* ptr to other vnodes that share same gnode */
        struct vnode    *v_vfsnext;   /* ptr to next vnode on list off of vfs */
        struct vnode    *v_vfsprev;   /* ptr to prev vnode on list off of vfs */
        union v_data {
                void            *_v_socket;    /* vnode associated data */
                struct vnode    *_v_pfsvnode;  /* vnode in pfs for spec */
        } _v_data;
        char            *v_audit;     /* ptr to audit object */
};
Notes: Introduction
A vnode represents an active file or directory in the kernel. Each time a file is located, a vnode for that object is located or created. Several vnodes may be created as a result of path resolution.
Structure definition
The vnode structure is defined in /usr/include/sys/vnode.h.
vnode management
vnodes are created by the vfs-specific code when needed, using the vn_get kernel service. vnodes are deleted with the vn_free kernel service. vnodes are created as the result of a path resolution.
Detail
Each time an object (file) within a file system is located (even if it is not opened), a vnode for that object is located (if already in existence), or created, as are the vnodes for any directory that has to be searched to resolve the path to the object. As a file is created, a vnode is also created, and will be re-used for every subsequent reference made to the file by a path name. Every path name known to the logical file system can be associated with, at most, one file system object, and each file system object can have several names because it can be mounted in different locations. Symbolic links and hard links to an object always get the same vnode if accessed through the same mount point.
vfs
struct vfs {
        struct vfs      *vfs_next;      /* vfs's are a linked list */
        struct gfs      *vfs_gfs;       /* ptr to gfs of vfs */
        struct vnode    *vfs_mntd;      /* pointer to mounted vnode, */
                                        /* the root of this vfs */
        struct vnode    *vfs_mntdover;  /* pointer to mounted-over vnode */
        struct vnode    *vfs_vnodes;    /* all vnodes in this vfs */
        int             vfs_count;      /* number of users of this vfs */
        caddr_t         vfs_data;       /* private data area pointer */
        unsigned int    vfs_number;     /* serial number to help distinguish between */
                                        /* different mounts of the same object */
        int             vfs_bsize;      /* native block size */
#ifdef _SUN
        short           vfs_exflags;    /* for SUN, exported fs flags */
        unsigned short  vfs_exroot;     /* for SUN, exported fs uid 0 mapping */
#else
        short           vfs_rsvd1;      /* Reserved */
        unsigned short  vfs_rsvd2;      /* Reserved */
#endif /* _SUN */
        struct vmount   *vfs_mdata;     /* record of mount arguments */
        Simple_lock     vfs_lock;       /* lock to serialize vnode list */
};
Notes: Introduction
There is one vfs structure for each file system currently mounted. The vfs structure connects the vnodes with the vmount information and the gfs structure, which together help define the operations that can be performed on the file system and its files.
Structure definition
The vfs structure is defined in /usr/include/sys/vfs.h.
Key elements
Several key elements of the vfs structure are described in this table:

vfs_next: The next mounted file system.
vfs_mntd: Points to the vnode within the file system which generally represents the root directory of the file system.
vfs_mntdover: Points to a vnode within another file system, usually representing a directory, which indicates where the file system is mounted.
vfs_vnodes: The pointer to all vnodes for this file system.
vfs_gfs: The path back to the gfs structure and its file system specific subroutines.
vfs_mdata: The pointer to the vmount structure providing mount information for this file system.
[Diagram: rootvfs and the chain of mounted file systems; vfs_mntd and vfs_mntdover link vfs structures to vnodes, while v_vfsp and v_mvfsp point back from vnodes to vfs structures. The numbered items are described below.]
Description
The numbered items below match the numbers in the illustration:
1. The global address rootvfs points to the vfs for the root file system.
2. The vfs_next pointers create a linked list of mounted file systems.
3. The vfs_mntd points to the vnode representing the root of the file system.
4. The vfs_mntdover points to the vnode of the directory the file system is mounted over.
vmount
struct vmount {
        uint    vmt_revision;   /* I revision level, currently 1        */
        uint    vmt_length;     /* I total length of structure & data   */
        fsid_t  vmt_fsid;       /* O id of file system                  */
        int     vmt_vfsnumber;  /* O unique mount id of file system     */
        uint    vmt_time;       /* O time of mount                      */
        uint    vmt_timepad;    /* O (in future, time is 2 longs)       */
        int     vmt_flags;      /* I general mount flags                */
                                /* O MNT_REMOTE is output only          */
        int     vmt_gfstype;    /* I type of gfs, see MNT_XXX above     */
        struct vmt_data {
                short   vmt_off;   /* I offset of data, word aligned    */
                short   vmt_size;  /* I actual size of data in bytes    */
        } vmt_data[VMT_LASTINDEX + 1];
};
Notes: Introduction
The vmount structure contains specifics of the mount request. The vfs and vmount are created as pairs and linked together.
Structure definition
The vmount structure is defined in /usr/include/sys/vmount.h.
vfs management
The mount helper creates the vmount structure and calls the vmount subroutine. The vmount subroutine then creates the vfs structure, partially populates it, and invokes the file system dependent vfs_mount subroutine, which completes the vfs structure and performs any operations required internally by the particular file system implementation.
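The hand-off from the generic vmount subroutine to the file-system-dependent vfs_mount routine can be sketched as follows. This is a user-space model: the struct layouts, the jfs_mount body, and the return conventions are simplified assumptions, not the kernel's code.

```c
#include <stddef.h>

/* User-space sketch of the dispatch described above: generic code fills
 * in common vfs fields, then completes the mount through the file
 * system's vfs_mount function pointer. */
struct vfs;
struct vfsops { int (*vfs_mount)(struct vfs *); };

struct vfs {
    const struct vfsops *ops;
    int vfs_count;        /* filled in by the generic vmount code      */
    int fs_private_ready; /* stand-in for fs-specific completion work  */
};

static int jfs_mount(struct vfs *vfsp)   /* file-system-dependent part */
{
    vfsp->fs_private_ready = 1;          /* "completes the vfs structure" */
    return 0;
}

static const struct vfsops jfs_ops = { jfs_mount };

static int do_vmount(struct vfs *vfsp, const struct vfsops *ops)
{
    vfsp->ops = ops;                     /* generic initialization */
    vfsp->vfs_count = 0;
    return ops->vfs_mount(vfsp);         /* fs-specific completion */
}
```

The point of the indirection is that the generic layer never needs to know which file system type it is mounting.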
(Figure: the chain of structures for an open file, from the process's u-block through the vnode, gnode and inode to the vfs and the file system.)
Notes: Introduction
Each file system type extension provides functions to perform operations on the file system and its files. Pointers to these functions are stored in the vfsops (file system operations) and vnodeops (file operations) structures.
Data structures
For each file system type installed, one group of these three data structures shown above will be created.
Structure descriptions
Descriptions of gfs, vnodeops, and vfsops are given in this table:

Part      Function
gfs       Holds pointers to the vnodeops and vfsops structures
vnodeops  Contains pointers to file system dependent operations on files (open, close, read, write, etc.)
vfsops    Contains pointers to file system dependent operations on the file system (mount, umount, etc.)
gfs
(Figure: the vfs_gfs field of each vfs points to its gfs; the gfs's gfs_ops field points to the vfsops structure, and its gn_ops field points to the vnodeops structure.)
Notes: Introduction
The gfs structure holds the pointers to the vnodeops and the vfsops structures.
Structure definition
The gfs structure is defined in /usr/include/sys/gfs.h:
struct gfs {
        struct vfsops   *gfs_ops;
        struct vnodeops *gn_ops;
        int             gfs_type;      /* type of gfs (from vmount.h)  */
        char            gfs_name[16];  /* name of vfs (eg. "jfs","nfs")*/
        int             (*gfs_init)(); /* ( gfsp ) - if ! NULL,        */
                                       /* called once to init gfs      */
        int             gfs_flags;     /* flags for gfs capabilities   */
        caddr_t         gfs_data;      /* gfs private config data      */
        int             (*gfs_rinit)();
        int             gfs_hold;      /* count of mounts              */
};
gfs management
The gfs structures are stored within a global array accessible only by the kernel. The gfs entries are inserted with the gfsadd() kernel service, and only one gfs entry of a given gfs_type can be inserted into the array. Generally, gfs entries are added by the CFG_INIT section of the configuration code of the file system kernel extension. The gfs entries are removed with the gfsdel() kernel service. This is usually done within the CFG_TERM section of the configuration code of the file system kernel extension.
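The add/delete rules can be modeled in user space. Only the "global array, at most one entry per gfs_type" behavior comes from the text; GFS_MAX, the used flag, and the return codes below are inventions of the sketch (the real gfsadd()/gfsdel() are kernel services with their own interfaces).

```c
#include <string.h>

/* User-space model of the global gfs table and its uniqueness rule. */
#define GFS_MAX 16                       /* assumed table size */

struct gfs_entry { int gfs_type; char gfs_name[16]; int used; };
static struct gfs_entry gfs_table[GFS_MAX];

static int gfs_add(int type, const char *name)  /* models gfsadd() */
{
    int free_slot = -1;
    for (int i = 0; i < GFS_MAX; i++) {
        if (gfs_table[i].used && gfs_table[i].gfs_type == type)
            return -1;                   /* duplicate gfs_type: rejected */
        if (!gfs_table[i].used && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return -1;                       /* table full */
    gfs_table[free_slot].used = 1;
    gfs_table[free_slot].gfs_type = type;
    strncpy(gfs_table[free_slot].gfs_name, name, 15);
    gfs_table[free_slot].gfs_name[15] = '\0';
    return 0;
}

static int gfs_del(int type)             /* models gfsdel() */
{
    for (int i = 0; i < GFS_MAX; i++)
        if (gfs_table[i].used && gfs_table[i].gfs_type == type) {
            gfs_table[i].used = 0;
            return 0;
        }
    return -1;                           /* no such type registered */
}
```

This matches the CFG_INIT/CFG_TERM pairing: a file system extension registers its type once at configuration time and removes it at termination.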
vnodeops
(Figure: from the vfs, vfs_gfs leads to the gfs, whose gn_ops field points to the vnodeops structure containing vn_link(), vn_mkdir(), vn_open(), vn_close(), vn_remove(), vn_rmdir(), vn_lookup(), and the other file operations.)
Notes: vnodeops
The vnodeops structure contains pointers to the file system dependent operations that can be performed on the vnode, such as link, mkdir, mknod, open, close and remove.
Structure definition
The vnodeops structure is defined in /usr/include/sys/vnode.h. Due to the size of this structure, only a few lines are detailed below:
struct vnodeops {
        /* creation/naming/deletion */
        int     (*vn_link)(struct vnode *, struct vnode *, char *,
                        struct ucred *);
        int     (*vn_mkdir)(struct vnode *, char *, int32long64_t,
                        struct ucred *);
        int     (*vn_mknod)(struct vnode *, caddr_t, int32long64_t, dev_t,
                        struct ucred *);
        int     (*vn_remove)(struct vnode *, struct vnode *, char *,
                        struct ucred *);
        int     (*vn_rename)(struct vnode *, struct vnode *, caddr_t,
                        struct vnode *, struct vnode *, caddr_t,
                        struct ucred *);
        . . .
};
vfsops
(Figure: from the vfs, vfs_gfs leads to the gfs, whose gfs_ops field points to the vfsops structure containing vfs_mount(), vfs_unmount(), vfs_root(), vfs_sync(), vfs_vget(), vfs_cntl(), vfs_quotactl(), and the other file system operations.)
Notes: vfsops
The vfsops structure contains pointers to the file system dependent operations that can be performed on the vfs, such as mount, unmount, or sync.
Structure definition
The vfsops structure is defined in /usr/include/sys/vfs.h:
struct vfsops {
        /* mount a file system */
        int     (*vfs_mount)(struct vfs *, struct ucred *);
        /* unmount a file system */
        int     (*vfs_unmount)(struct vfs *, int, struct ucred *);
        /* get the root vnode of a file system */
        int     (*vfs_root)(struct vfs *, struct vnode **, struct ucred *);
        /* get file system information */
        int     (*vfs_statfs)(struct vfs *, struct statfs *, struct ucred *);
        /* sync all file systems of this type */
        int     (*vfs_sync)();
        /* get a vnode matching a file id */
        int     (*vfs_vget)(struct vfs *, struct vnode **, struct fileid *,
                        struct ucred *);
        /* do specified command to file system */
        int     (*vfs_cntl)(struct vfs *, int, caddr_t, size_t,
                        struct ucred *);
        /* manage file system quotas */
        int     (*vfs_quotactl)(struct vfs *, int, uid_t, caddr_t,
                        struct ucred *);
};
gnode
(Figure: three containers for gnodes. A local file's vnode points through v_gnode to a gnode embedded in an in-core inode; a special file's vnode points to a gnode embedded in a specnode; an NFS file's vnode points to a gnode embedded in an rnode.)
Notes: Introduction
gnodes are generic objects pointed to by vnodes but may be contained in different structures depending on the file system type.
Location
The gnode is contained in an in-core inode for a file on a local file system. Special files (such as /dev/tty) have gnodes contained in specnodes. NFS files have gnodes contained within rnodes.
Structure definition
The gnode structure is defined in /usr/include/sys/vnode.h:
struct gnode {
        enum vtype      gn_type;        /* type of object: VDIR,VREG etc */
        short           gn_flags;       /* attributes of object          */
        ulong           gn_seg;         /* segment into which file is mapped */
        long32int64     gn_mwrcnt;      /* count of map for write        */
        long32int64     gn_mrdcnt;      /* count of map for read         */
        long32int64     gn_rdcnt;       /* total opens for read          */
        long32int64     gn_wrcnt;       /* total opens for write         */
        long32int64     gn_excnt;       /* total opens for exec          */
        long32int64     gn_rshcnt;      /* total opens for read share    */
        struct vnodeops *gn_ops;
        struct vnode    *gn_vnode;      /* ptr to list of vnodes per this gnode */
        dev_t           gn_rdev;        /* for devices, their "dev_t"    */
        chan_t          gn_chan;        /* for devices, their "chan", minor */
        Simple_lock     gn_reclk_lock;  /* lock for filocks list         */
        int             gn_reclk_event; /* event list for file locking   */
        struct filock   *gn_filocks;    /* locked region list            */
        caddr_t         gn_data;        /* ptr to private data (usually contiguous) */
};
Key elements
Some of the key elements of the gnode are described below:

Element   Description
gn_type   Identifies the type of object represented by the gnode. Some examples are directory, character, and block.
gn_ops    Identifies the set of operations that can be performed on the object.
gn_seg    Segment number to which the file is mapped.
gn_data   Pointer to private data; points to the start of the structure (for example, the inode) in which the gnode is embedded.
Detail
Each file system implementation is responsible for allocating and destroying gnodes. Calls to the file system implementation serve as requests to perform an operation on a specific gnode. A gnode is needed, in addition to the file system inode, because some file system implementations may not include the concept of an inode. Thus the gnode structure substitutes for whatever structure the file system implementation may have used to uniquely identify a file system object. gnodes are created as needed by file-system-specific code, at the same time as the implementation-specific structures are created. This is normally followed immediately by a call to the vn_get kernel service to create a matching vnode. The gnode structure is usually deleted either when the file it refers to is deleted, or when the implementation-specific structure is being reused for another file.
Notes: Introduction
The file systems discussed earlier in this unit are contained within Logical Volume Manager (LVM) Logical Volumes. The data defining LVM entities (including Volume Groups, Logical Volumes and Physical Volumes) is maintained both on disks and in the ODM. This architecture is discussed in other classes. Here we would like to introduce three kernel structures which maintain LVM data, and the kdb commands that display these structures. The structures are volgrp, lvol and pvol (defined in src/bos/kernel/sys/dasd.h, which is not distributed with the AIX product). The kdb subcommands to display these structures have corresponding names: volgrp, lvol and pvol. In the above visual we will illustrate the structure definitions with example output from the kdb subcommands and corresponding AIX commands. All definitions are from src/bos/kernel/sys/dasd.h unless otherwise noted.
volgrp structure
The administrative unit of LVM is the volume group. The kernel describes it in the volgrp structure. Portions of the structure definition follow:

struct volgrp {
        Simple_lock      vg_lock;       /* lock for all vg structures   */
        struct unique_id vg_id;         /* volume group id              */
        int              major_num;     /* major number of volume group */
        short            open_count;    /* count of active logical volumes */
        . . .
        struct volgrp    *nextvg;       /* volgrp linked list           */
        struct lvol      *lvols[NEW_MAXLVS]; /* logical volume struct array  */
        struct pvol      *pvols[NEW_MAXPVS]; /* physical volume struct array */
        . . .
};

The items in bold are defined as:
- vg_id: the 32 character volume group id.
- open_count: the count of active logical volumes in this volume group.
- *nextvg: the volgrp linked list pointer. A value of zero means this is the last or only volume group.
- *lvols[NEW_MAXLVS]: the array of lvol structure pointers for this volume group, indexed by logical volume minor number.
- *pvols[NEW_MAXPVS]: the array of pvol structure pointers for this volume group, indexed by physical volume minor number.
Other items above in bold give:
- major_num = 0xA means this is the rootvg volume group.
- The vg_id value is rootvg's volume group id.
- *nextvg = 0 means this volume group is the last or only one on the volgrp linked list.
struct lvol {
        . . .
        struct part     *parts[3];      /* partition arrays for each mirror */
        int             maxsize;        /* max number of pp allowed in lv   */
        uint            tot_rds;        /* total number of reads to LV      */
        . . .
        int             parent_minor_num; /* if this is an online backup copy, */
                                        /* this is the minor number of the real */
                                        /* or parent logical volume         */
        /* These fields of the lvol structure are read and/or written by
         * the bottom half of the LVDD; and therefore must be carefully
         * modified.
         */
        int             complcnt;       /* completion count - used to quiesce */
        tid_t           waitlist;       /* event list for quiesce of LV     */
        struct file     *fp;            /* file ptr for lv mir bkp open/close */
        unsigned int    stripe_exp;     /* 2**stripe_block_exp = stripe     */
                                        /* block size                       */
        Simple_lock     lvol_intlock;
        uchar           lv_behavior;    /* special conditions lv may be under */
        struct io_stat  *io_stats[3];   /* collect io statistics here       */
        unsigned int    syncing;        /* count of SYNC requests           */
        unsigned int    blocked;        /* count of blocked requests        */
};
Items shown in bold:
- lv_status: 0 => closed, 1 => trying to close, 2 => open, 3 => being deleted.
- lv_options is a flag word. Some of the flags are: 0x0001 => write verify, 0x0020 => read-only, 0x0040 => dump in progress to this logical volume, 0x0080 => this logical volume is a dump device, 0x1000 => original default (not passive) mwcc (mirror write consistency check) on.
- nparts: number of copies (1 => no mirror, 2 => single mirror, 3 => two mirrors). This gives the number of *parts array elements that are meaningful.
- i_sched: scheduling policy for this logical volume. Values include: 0 => regular, non-mirrored LV, 1 => sequential write, sequential read, 2 => parallel write, read closest, 3 => sequential write, read closest, 4 => parallel write, sequential read, 5 => striped.
- n_blocks: number of 512 byte blocks in this logical volume.
- *parts[3]: each parts element is a part structure pointer, which points to an array of part structures defining the physical volume storage for one logical volume copy. Each of these part structures points to a pvol structure and a disk start address for one part of the logical volume data. The structure is defined as follows:
struct part {
        struct pvol     *pvol;      /* containing physical volume          */
        daddr_t         start;      /* starting physical disk address      */
        int             sync_trk;   /* current LTG being resynced          */
        char            ppstate;    /* physical partition state            */
        char            sync_msk;   /* current LTG sync mask               */
};
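The lv_options flag word described above can be tested with simple bit masks. The macro and function names below are made up for this example; only the bit values come from the notes.

```c
/* Masks for the lv_options flag word (values from the notes above). */
#define LV_WRITE_VERIFY 0x0001u  /* write verify                        */
#define LV_READ_ONLY    0x0020u  /* read-only logical volume            */
#define LV_DUMP_IN_PROG 0x0040u  /* dump in progress to this LV         */
#define LV_DUMP_DEVICE  0x0080u  /* this LV is a dump device            */
#define LV_MWCC_ON      0x1000u  /* default (not passive) mwcc on       */

static int lv_is_read_only(unsigned int lv_options)
{
    return (lv_options & LV_READ_ONLY) != 0;
}

static int lv_is_mwcc_on(unsigned int lv_options)
{
    return (lv_options & LV_MWCC_ON) != 0;
}
```

For example, an lv_options value of 0x1020 is a read-only logical volume with mirror write consistency checking on.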
- It points to the pvol structure at 0x310e4600. The physical volume major/minor numbers are 0x19 (decimal 25)/0. The disk start address is 0x00DE1100. The ls -l command on /dev/hd* tells us this is the major/minor number of hdisk0.
- SCHED POLICY: parallel is technically incorrect here. But it has no meaning because this logical volume is not mirrored. The i_sched=00000000 value from kdb correctly reflects this (SCH_REGULAR = 0 => regular, non-mirrored logical volume).
(0)> pvol 310e4600
PVOL............... 310E4600
dev................ 00190000
xfcnt.............. 00000000
pvstate............ 00000000
pvnum.............. 00000000
vg_num............. 0000000A
fp................. 10000C60
flags.............. 00000000
num_bbdir_ent...... 00000000
fst_usr_blk........ 00001100
beg_relblk......... 021E6B9F
next_relblk........ 021E6B9F
max_relblk......... 021E6C9E
defect_tbl......... 310E4800
sa_area[0]....... @ 310E4638
sa_area[1]....... @ 310E4640
pv_pbuf.......... @ 310E4648
oclvm............ @ 310E46F0
struct pvol {
        . . .
        daddr_t         beg_relblk;     /* block in reloc blk pool at end */
                                        /* of PV                          */
        daddr_t         max_relblk;     /* largest blkno avail for reloc  */
        struct defect_tbl *defect_tbl;  /* pointer to defect table        */
        struct sa_pv_whl {              /* VGSA information for this PV   */
                daddr_t lsn;            /* SA logical sector number - LV 0 */
                ushort  sa_seq_num;     /* SA wheel sequence number       */
                char    nukesa;         /* flag set if SA to be deleted   */
        } sa_area[2];                   /* one for each possible SA on PV */
        struct pbuf     pv_pbuf;        /* pbuf struct for writing cache  */
        short           bad_read;       /* changed to 1 on first bad read */
#ifdef CLVM_2_3
        struct clvm_2_3pv *oclvm;       /* ptr to old CLVM pv struct      */
#endif /* CLVM_2_3 */
        int             xfcnt;          /* transfer count for this pv     */
};
Items shown in bold:
- dev: major/minor device number for this disk (dev(31-16) = major, dev(15-0) = minor). Defined in /usr/include/sys/types.h.
- pvstate: physical volume state (0 => normal, 1 => cannot be accessed, 2 => no hw/sw relocation allowed, 3 => pv involved in snapshot).
- vg_num: volume group major number.
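The dev field split can be expressed directly. Applied to the dev value 0x00190000 from the kdb output above, it yields major 0x19 (decimal 25, hdisk0) and minor 0. dev_t32 is a stand-in name for the 32-bit dev_t; it is not a real AIX type.

```c
/* The split described above: dev(31-16) = major, dev(15-0) = minor. */
typedef unsigned int dev_t32;   /* stand-in for the 32-bit dev_t */

static unsigned int dev_major(dev_t32 dev) { return (dev >> 16) & 0xFFFFu; }
static unsigned int dev_minor(dev_t32 dev) { return dev & 0xFFFFu; }
```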
Checkpoint (1 of 2)
1. Each user process contains a private F___ D______ T____.
2. The kernel maintains a _______ structure and a _______ structure for each mounted file system.
3. There is one gfs structure for each mounted file system. True or False?
4. The three kernel structures __________, __________ and __________ are used to track LVM volume group, logical volume and physical volume data, respectively.
5. The kdb subcommand __________ and the AIX command _________ both reflect volume group information.
Checkpoint (2 of 2)
1. There is one vmount/vfs structure pair for each mounted filesystem. True or False?
2. Every open file in a filesystem is represented by exactly one file structure. True or False?
3. The inode number given by ls -id /usr is _____. Why?
4. Each vnode for an open file points to a _______ structure.
Exercise
Complete exercise six
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a

What you will do:
- Test what you have learned about the LFS and VFS
- Locate the LFS/VFS structures for an open file
- Identify what file a process has opened
Turn to your lab workbook and complete exercise six.
Unit Summary
The LFS and VFS provide support for many different file system types simultaneously
The LFS/VFS allows different types of file systems to be mounted together, forming a single homogeneous view
The LFS services the system call interface for read()/write()
The VFS defines files (vnodes) and file systems (vfs)
Each file system type provides unique functions for file and file system operations. Operations are defined by the vnodeops and vfsops structures.
The gnode is a generic object connecting the VFS with the file system specific inode
kdb has special subcommands for viewing LFS/VFS structures
The kernel tracks LVM data in the structures volgrp, lvol and pvol. There are kdb subcommands for displaying these structures.
References
AIX Documentation: System Management Guide: Operating System and Devices
Unit Objectives
At the end of this lesson you should be able to:
Describe basic concepts of the JFS disk layout
Describe JFS elements: inodes, allocation groups, superblock, indirect block and double indirect block
Contrast on-disk and in-core inode structures
Describe the relationship between JFS and LVM in performing I/O
Boot Block
The boot block occupies the first 4096 bytes of a JFS starting at byte offset 0. This area is from the original Berkeley Software Distribution (BSD) Fast File System design, and is not used in AIX.
Superblock
The superblock is 4096 bytes in size and starts at byte offset 4096. The superblock maintains information about the entire JFS and includes the following fields:
- Size
- Number of data blocks
- A flag indicating the state
- Allocation group sizes
The superblock is critical to the JFS; if corrupted, it will prevent the file system from being mounted. For this reason a backup copy of the superblock is always written in block 31.
Blocks
A block is a 4096 byte data allocation unit.
Fragments
The journaled file system is organized in a contiguous series of fragments. JFS fragments are the basic allocation unit and the disk is addressed at the fragment level. JFS fragment support allows disk space to be divided into allocation units that are smaller than the default size of 4096 bytes. Smaller allocation units or fragments minimize wasted disk space by more efficiently storing the data in a file or directory's partial logical blocks. The functional behavior of JFS fragment support is based on that provided by Berkeley Software Distribution (BSD) fragment support.
Inodes
The disk inode is the anchor for files in a JFS. There is a one to one correspondence between a disk inode, an i-number, and a file. The inode records file information such as size, allocation, owner, and so on. However, it is disjoint from the name, since many different names can refer to the same inode via the inode number. The collection of disk inodes can be referred to as the disk inode table.
Allocation groups
The set of fragments making up a JFS are divided into one or more fixed-sized units of contiguous fragments. These are called allocation groups. An allocation group is similar to BSD cylinder groups. The first 4096 bytes of the first allocation group holds the boot block and the second 4096 bytes holds the superblock. Each allocation group contains disk inodes and free blocks. This permits inodes and data blocks to be dispersed throughout the file system and allows file data to lie in closer proximity to its inode. Despite the fact that the inodes are distributed through the disk, a disk inode can be located using a simple formula based on the i-number and the allocation group information contained in the super block. For the first allocation group, the inodes occupy the fragments immediately following the reserved block area. For subsequent groups, the inodes are found at the start of each group. Inodes are 128 bytes in size and are identified by a unique inode number. The inode number maps an inode to its location on the disk or to an inode within its allocation group.
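To make the idea of locating an inode "using a simple formula" concrete, here is a hypothetical locator. Only the division of i-number space into fixed-size allocation groups and the 128-byte inode size come from the text; inodes_per_ag as a parameter, the offset convention, and the omission of the group-0 reserved area are assumptions of the sketch, not the actual JFS computation.

```c
/* Illustrative only: the real JFS locator also accounts for the
 * reserved boot block / superblock area in the first allocation group. */
enum { JFS_INODE_SIZE = 128 };   /* from the text: inodes are 128 bytes */

/* Which allocation group holds this i-number (hypothetical layout). */
static unsigned long inode_ag(unsigned long inumber,
                              unsigned long inodes_per_ag)
{
    return inumber / inodes_per_ag;
}

/* Byte offset of this inode within its group's inode area. */
static unsigned long inode_offset_in_ag(unsigned long inumber,
                                        unsigned long inodes_per_ag)
{
    return (inumber % inodes_per_ag) * JFS_INODE_SIZE;
}
```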
Virtual memory
AIX exploits the segment architecture to implement its JFS physical file system. Just as virtual memory looks contiguous to a user program but may be scattered about real memory or paging space, disk files are made to look contiguous to the user program even though the physical disk blocks may be very scattered. When AIX needs to create a segment of virtual memory, it creates an External Page Table (XPT), which contains a collection of XPT blocks. When the physical file system creates a file, it creates a disk inode and possibly indirect blocks to describe the file. Disk inodes (and indirect blocks), and XPT blocks, make their respective user-level resources appear contiguous. The JFS maps all file system information into virtual memory, including user data blocks. The read and write operations are much simplified in that they merely initialize the mapping and then copy the data. Likewise, a directory lookup operation merely maps the directory into virtual memory and then walks through the directory structure. This greatly simplifies the code by separating the algorithmic problem of searching directory entries from the task of performing disk I/O operations and managing a buffer cache. The I/O function is handled by the Virtual Memory Manager (VMM). When a page fault occurs on a mapped file object, the VMM is able to determine what file is being accessed, examine the inode to determine where the data is, and initiate a page-in to transfer the data from the file system into memory. Once completed, the faulting process can be resumed and the operation continues, oblivious to the fact that a memory mapped access caused a disk operation.
Reserved Inodes
Inode  Use
0      Not used
1      Superblock (.superblock)
2      Root directory of file system
3      Disk inodes (.inodes)
4      Indirect blocks (.indirect)
5      Disk inode allocation map (.inodemap)
6      Disk block allocation map (.diskmap)
7      Disk inode extensions (.inodex)
8      Inode extension map (.inodexmap)
9-15   Reserved
Introduction
A unique feature of the JFS implementation is the implementation of file system data as unnamed files that reside in the file system. Every JFS file system has inodes 0-15 reserved. Most of these file names begin with a dot (.) because they are hidden files. However, these hidden files do not appear in any directory. This is done by manipulating the inodes so they do not require a directory entry to support their link count value. Every open file is represented by a segment in the VMM. Most of these reserved inodes never actually exist on the disk, but are only present in the VMM when a file system is mounted.
Superblock

Inode 1 is reserved for a file named .superblock. The superblock holds a concise description of the JFS: its size, allocation information, and an indication of the consistency of on-disk data structures. The inode points to two data blocks, 1 and 31. Data block 31 is a spare copy of the superblock at data block 1.

Root directory

Inode 2 is always used for the JFS root directory.

Disk inodes

Inode 3 is reserved for a file named .inodes. Every JFS object is described by a disk inode. Each disk inode is a fixed size: 128 bytes.

Indirect blocks

Inode 4 is reserved for a file named .indirect. The most common JFS object is a regular file. For a regular file, the inode holds a list of the data blocks which compose the file. It would be impractical to allocate inodes large enough to directly hold this entire list. The list of physical blocks is held in a tree structure, rather than an array. The intermediate nodes of this tree are the indirect blocks.

Disk inode allocation map

Inode 5 is reserved for a virtual file named .inodemap. This allocation map has bit flags turned on or off showing whether an inode is in use or free.

Disk block allocation map

Inode 6 is reserved for a virtual file named .diskmap. This bit map indicates whether each block on the logical volume is in use or free.

Disk inode extensions

Inode 7 is reserved for a virtual file named .inodex. This file contains information about inode extensions, which are used by access control lists.

Inode extension map

Inode 8 is reserved for the virtual file named .inodexmap. This bit map is used to keep track of free and allocated inode extensions.

Future use

Inodes 9 through 15 are reserved for future extensions.
Inode types
The private portion of the inode depends on its type. The types are defined in /usr/include/sys/mode.h and compose portions of the di_mode field. Inode types are:

Type      Description
S_IFREG   Regular file. The format of the private portion of an inode for a data file (including some symbolic links and directories) depends on the size of the file. The AIX file system always allocates full blocks to data files.
S_IFDIR   Directory. The private portion of a directory inode is identical to that of a regular file.
S_IFBLK   Block device. Block device inodes have only the dev_t.
S_IFCHR   Character device. Character device inodes have only the dev_t.
S_IFLNK   Symbolic link.
S_IFSOCK  A UNIX domain socket.
S_IFIFO   FIFO. A FIFO inode has no persistent private data.
In-core Inodes
When a file is opened, an in-core inode is created in memory
The in-core inode structure is defined in /usr/include/jfs/inode.h
In-core inodes include:
- An exclusive-use lock
- Use count
- Open counts
- State flags
- Exclusion counts
- Hash table links
- Free list links
- Mount table entry
In-core inode states:
- Active
- Cached
- Free
In-core inodes
Introduction
When a JFS file is opened, an in-core inode is created in memory. The in-core inode contains a copy of all the fields defined in the disk inode in addition to fields for keeping track of the in-core inode.
The items listed on the visual are described below:

Exclusive-use lock
- Must be held before the in-core inode is updated
- Actually implemented with a simple lock

Use count
- The in-core inode cannot be destroyed while it has a non-zero use count

Open counts
- Separate reader and writer counts are maintained in the gnode in the in-core inode
- Are incremented at each open, and decremented at close
- A process which has opened the file for both reading and writing is counted as both a reader and a writer

State flags
- Maintain miscellaneous in-core inode state
- A bit indicates that the file has been opened for exclusive access

Exclusion counts
- A separate count of the number of readers who have specified read-only sharing (precluded writers) is also maintained
- If a process attempts to open the inode with a mode which conflicts with the current open status, it can be placed on a wait list for the inode (if the O_DELAY open flag was specified)

Hash table links
- All existing in-core inodes are kept in a hash table, accessed by device and index
- This allows finding an inode by file handle, and assures that multiple inodes are not created for the same object

Free list links
- All unused in-core inodes are kept in a free list

Mount table entry
- If an object is in use, its underlying device must currently be mounted
- Each in-core inode points back to its mount table entry to avoid searching the mount table to find the entry for this object
Inode locking
The JFS serializes operations by obtaining an exclusive lock on each inode involved in the operation. For all operations which require locking more than one inode, all involved inodes are known at the start of the operation. The ilocklist() routine sorts these into a descending order before locking (highest inode number is locked first). This prevents deadlock conditions. Note: The iget() routine does not return a locked inode; nor does iput() free any lock on the inode.
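The ordering step can be modeled with a descending sort of the inode numbers involved. This is a user-space sketch of the idea only: the real ilocklist() sorts and locks kernel inode structures, not bare numbers.

```c
#include <stdlib.h>

/* Sort the inode numbers involved in an operation into descending
 * order, so every thread acquires locks highest-number-first.  A single
 * global acquisition order makes the circular wait needed for deadlock
 * impossible. */
static int desc_cmp(const void *a, const void *b)
{
    unsigned long x = *(const unsigned long *)a;
    unsigned long y = *(const unsigned long *)b;
    return (x < y) - (x > y);            /* descending order */
}

static void order_for_locking(unsigned long *inums, size_t n)
{
    qsort(inums, n, sizeof inums[0], desc_cmp);
}
```

Two operations that both involve inodes 5 and 17 will both lock 17 first, so neither can hold one of the pair while waiting for the other in the opposite order.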
Indirect blocks
Introduction
JFS uses indirect blocks to address the disk space allocated to larger files. There are three methods for addressing the disk space:
- Direct
- Single indirect
- Double indirect
Beginning in AIX 4.2, file systems enabled for large files allow a maximum file size of slightly less than 64 gigabytes (68589453312 bytes). The first double indirect block contains 4096 byte fragments, and all subsequent double indirect blocks contain (32 x 4096 = 131072) byte fragments. The following produces the maximum file size for file systems enabling large files:

(1 * (1024 * 4096)) + (511 * (1024 * 131072))

The fragment allocation assigned to a directory is divided into records of 512 bytes each and grows in accordance with the allocation of these records.
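The arithmetic above can be checked directly:

```c
/* Reproduces the formula in the text: one double-indirect slot mapping
 * 1024 pages of 4096-byte fragments, plus 511 slots each mapping 1024
 * pages of 131072-byte (32 x 4096) fragments. */
static unsigned long long jfs_large_file_max(void)
{
    unsigned long long first = 1ULL * 1024 * 4096;      /* 4 MB           */
    unsigned long long rest  = 511ULL * 1024 * 131072;  /* remaining slots */
    return first + rest;    /* 68589453312 bytes, just under 64 GB */
}
```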
Direct
The first eight addresses point directly to a single allocation of disk fragments. Each disk fragment is 4 KB in size. (8 x 4KB = 32 KB). This method is used for files that are less than 32 KB in size.
Single Indirect
The visual shows the inode disk addresses for file sizes between 32 KB and 4 MB: the inode's indirect field holds the page index (in .indirect) of an indirect page, and the indirect page contains entries indir[0] through indir[1023], each addressing a 4 KB fragment (1024 x 4 KB = 4 MB).
Double Indirect
The visual shows the inode disk addresses for file sizes greater than 4 MB: the inode's indirect field (page index in .indirect) points to an indirect root whose entries indir[0] through indir[511] each address an indirect page, and each indirect page in turn addresses the data fragments.
Checkpoint
1. An allocation group contains __________ and __________.
2. The basic allocation unit in JFS is a disk block. True or False?
3. The root inode number of a filesystem is always 1. True or False?
4. The last 128 bytes of an in core JFS inode is a copy of the disk inode. True or False?
5. JFS maps user data blocks and directory information into virtual memory. True or False?
Notes:
Unit Summary
The principal components of the JFS are allocation groups, inodes, data blocks and indirect blocks.
A JFS allocation group contains inodes and related data blocks.
A JFS in-core inode contains the disk inode data together with activity information, such as the open count, and in-core inode state information. The state information indicates whether the structure is active or available for reuse.
JFS accomplishes I/O by mapping all file system information into virtual memory, thus relying on the VMM to do the actual I/O operations.
Notes:
References
AIX Documentation: System Management Guide: Operating System and Devices
Unit Objectives
At the end of this lesson you should be able to:
List the difference between the terms aggregate and fileset.
Identify the various data structures that make up the JFS2 filesystem.
Use the fsdb command to trace the various data structures that make up files and directories.
Notes:
Numbers
Function                                Value
Block size                              512 - 4096 (configurable block size)
Architectural max. file size
  (this is not the supported size!)     4 Petabytes
Max. file system size (supported)       1 Terabyte (16 Terabytes on AIX 5.2)
Max. file size (supported)              1 Terabyte (16 Terabytes on AIX 5.2)
Number of inodes                        Dynamic, limited by disk space
Directory organization                  B+ tree
Notes: Introduction
The Enhanced Journaled File System (JFS2) is an extent-based journaled file system. It is the default file system for the 64-bit kernel of AIX 5L. The table above lists some general information about JFS2.
Notes: Introduction
The term aggregate is defined in this section. The layout of a JFS2 aggregate is also described.
Definitions
JFS2 separates the notion of a disk space allocation pool, called an aggregate, from the notion of a mountable file system sub-tree, called a fileset. The rules that define aggregates and filesets in JFS2 are listed above in the visual.
The aggregate block size must be no smaller than the physical block size (currently 512 bytes). Legal aggregate block sizes are:
- 512 bytes
- 1024 bytes
- 2048 bytes
- 4096 bytes

Do not confuse the aggregate block size with the logical volume block size, which defines the smallest unit of I/O.
Aggregate
Note: Aggregate Block Size is 1K in this example.
The visual shows an example aggregate with a 1 KB aggregate block size (one aggregate block). The aggregate inode table is shown with its inode numbers: a control page and an IAG page precede an extent of 32 inodes (16 KB). Aggregate inode #2 describes the block map; aggregate inode #16 describes fileset 0; aggregate inode #17 would describe a fileset 1.
Part and function of each aggregate component:

Secondary aggregate superblock: A direct copy of the primary aggregate superblock, used if the primary aggregate superblock is corrupted. Both primary and secondary superblocks are located at fixed locations. This allows the superblocks to be found without depending on any other information.

Aggregate inode table: Contains inodes that describe the aggregate-wide control structures. Inodes will be described later.

Secondary aggregate inode table: Contains replicated inodes from the aggregate inode table. Since the inodes in the aggregate inode table are critical for finding file system information, they are replicated in the secondary aggregate inode table. The actual data for the inodes is not repeated, just the addressing structures used to find the data and the inode itself.

Aggregate inode allocation map: Describes the aggregate inode table. It contains allocation state information on the aggregate inodes as well as their on-disk location.

Secondary aggregate inode allocation map: Describes the secondary aggregate inode table.

Block allocation map: Describes the control structures for allocating and freeing aggregate disk blocks within the aggregate. The block allocation map maps one-to-one with the aggregate disk blocks.

fsck working space: Provides space for fsck to track the aggregate block allocations. This space is necessary because, for a very large aggregate, there might not be enough memory to track this information in memory when fsck is run. The space is described by the superblock. One bit is needed for every aggregate block. The fsck working space always exists at the end of the aggregate.

In-line log: Provides space for logging the meta-data changes of the aggregate. The space is described by the superblock. The in-line log always exists after the fsck working space.
Aggregate inodes
When the aggregate is initially created, the first inode extent is allocated; additional inode extents are allocated and de-allocated dynamically as needed. Each of these aggregate inodes describes certain aspects of the aggregate itself, as follows:

Inode #   Description
0         Reserved.
1         Called the self inode, this inode describes the aggregate disk blocks comprising the aggregate inode map. This is a circular representation, in that aggregate inode one is itself in the file that it describes. The obvious circular representation problem is handled by forcing at least the first aggregate inode extent to appear at a well-known location, namely 4 KB after the primary aggregate superblock. Therefore, JFS2 can easily find aggregate inode one, and from there it can find the rest of the aggregate inode table by following the B+tree in inode one.
2         Describes the block allocation map.
3         Describes the In-line Log when mounted. This inode is allocated, but no data is saved to disk.
4 - 15    Reserved for future extensions.
16 -      Starting at aggregate inode 16, there is one inode per fileset (the fileset allocation map inode). These inodes describe the control structures that represent each fileset. As additional filesets are added to the aggregate, the aggregate inode table itself may have to grow to accommodate additional fileset inodes. Note that as of the AIX 5.2 release there can be only one fileset. The preceding graphic shows a fileset 17; this is included to show design potential, and is not realizable at present.
Allocation Group
The maximum number of allocation groups per aggregate is 128. The minimum allocation group size is 8192 aggregate blocks. The allocation group size must always be a power-of-2 multiple of the number of blocks described by one dmap page (for example, 1, 2, 4, 8, ... dmap pages).
BE0070XS4.0
Notes: Introduction
Allocation Groups (AG) divide the space on an aggregate into chunks. Allocation groups are used for heuristics only. Allocation groups allow JFS2 resource allocation policies to use well known methods for achieving good JFS2 I/O performance.
Allocation policies
When locating data on the disk, JFS2 will attempt to:
- Group disk blocks for related data and inodes close together.
- Distribute unrelated data throughout the aggregate.
Fileset
The visual shows the layout of a fileset: a fileset inode table composed of a control page, IAG pages, and extents of inodes; the fileset inode allocation map with its control section (iagnum), working map, persistent map, and ixd section (extent lengths and addresses); the IAG free list; and the per-allocation-group free inode lists (inofree, extfree, numinos, numfree). Fileset inode #2 is the root directory, whose idotdot field is 2.
Notes: Introduction
A fileset is a set of files and directories that form an independently mountable sub-tree that is equivalent to a UNIX file system file hierarchy. A fileset is completely contained within a single aggregate. The visual above and the table below detail the layout of a fileset.

Part                          Function
Fileset inode table           Contains inodes describing the fileset-wide control structures. The Fileset Inode Table logically contains an array of inodes.
Fileset inode allocation map  Describes the Fileset Inode Table. The Fileset Inode allocation map contains allocation state information on the fileset inodes, as well as their on-disk location.
Part      Function
Inodes    Every JFS2 object is represented by an inode, which contains the expected object-specific information such as time stamps and file type (regular or directory). Inodes also contain a B+tree to record the allocation of extents. Note that all JFS2 meta-data structures (except for the superblock) are represented as files. By reusing the inode structure for this data, the data format (on-disk layout) becomes inherently extensible.
Fileset Inode #   Description
0                 Reserved.
1                 Additional fileset information that would not fit in the fileset allocation map inode in the aggregate inode table.
2                 The root directory inode for the fileset.
3                 The ACL file for the fileset.
4 -               Fileset inodes from four onwards are used by ordinary fileset objects: user files, directories, and symbolic links.
Inodes
Every file and directory in a fileset is described by an on-disk inode. When the fileset is initially created, the first inode extent is allocated; additional inode extents are allocated and de-allocated dynamically as needed. The inodes in a fileset are allocated as shown above in the visual.
Extents
XADs for a file
The xad structure is defined in /usr/include/j2/j2_xtree.h:

struct xad {
        uint8   xad_flag;       /* flag */
        uint16  xad_reserved;   /* reserved */
        uint40  xad_offset;
        uint24  xad_length;
        uint40  xad_address;
};

The visual shows an example XAD with offset=0, len=3, addr=101.
Notes: Introduction
Disk space in a JFS2 file system is allocated in a sequence of contiguous aggregate blocks called an extent.
Extent rules
An extent:
- Is made up of a series of contiguous aggregate blocks.
- Is variable in size, and can range from 1 to 2^24 - 1 aggregate blocks.
- Is wholly contained within a single aggregate.
- Is indexed in a B+-tree.
- May span multiple allocation groups, in the case of large extents.
xad description
The elements of the xad structure are described in this table.

Member        Description
xad_flag      Flags set on this extent. See /usr/include/j2/j2_xtree.h for a list of flags.
xad_reserved  Reserved for future use.
xad_offset    Extents are generally grouped together to form a larger group of disk blocks. The xad_offset describes the logical block offset this extent represents in the larger group.
xad_length    A 24-bit field, containing the length of the extent in aggregate blocks. An extent can range in size from 1 to 2^24 - 1 aggregate blocks.
xad_address   A 40-bit field, containing the address of the first block of the extent. The address is in units of aggregate blocks, and is the block offset from the beginning of the aggregate.
Increasing an Allocation
File system disk blocks
Notes: Introduction
In general, the allocation policy for JFS2 tries to maximize contiguous allocation by allocating a minimum number of extents, keeping each extent as large and contiguous as possible. This allows for larger I/O transfers, resulting in improved performance.
Exceptions
In special cases, it is not always possible to keep extent allocation contiguous. For example, a copy-on-write clone of a segment will cause a contiguous extent to be partitioned into a sequence of smaller contiguous extents. Another case is restriction of the extent size. For example, the extent size is restricted for compressed files, since the entire extent must be read into memory and decompressed. Only a limited amount of memory is available, so there must be enough room for the decompressed extent.
Fragmentation
The user can configure a JFS2 aggregate with a small aggregate block size of 512 bytes to minimize internal fragmentation for aggregates with large numbers of small size files. The defragfs utility can be used to defragment a JFS2 file system.
Notes: Introduction
Objects in JFS2 are stored in groups of extents arranged in binary trees. The concepts of binary trees are introduced in this section.
Trees
Binary trees consist of nodes arranged in a tree structure. Each node contains a header describing the node. A flag in the node header identifies the role of the node in the tree. As shown in subsequent material, these headers reside in the second inode quadrant and in 4 KB blocks referenced by the inode.
Header flags
This table describes the binary tree header flags:

Flag          Description
BT_ROOT       The root or top of the tree.
BT_LEAF       The bottom of a branch of a tree. Leaf nodes point to the extents containing the object's data.
BT_INTERNAL   An internal node points to two or more leaf nodes or other internal nodes.
Why B+-tree?
B+trees are used in JFS2, and help performance by:
- Providing fast reading and writing of extents; the most common operations.
- Providing fast search for reading a particular extent of a file.
- Providing efficient append or insert of an extent in a file.
- Being efficient for traversal of an entire B+tree.
B+-tree index
There is one generic B+tree index structure for all index objects in JFS2 (except for directories). The data being indexed depends upon the object. The B+tree is keyed by the offset of the xad structure of the data being described by the tree. The entries are sorted by the offsets of the xad structures, each of which is an entry in a node of a B+tree.
Inodes
Inode Layout:
Section 1 - POSIX attributes
Section 2 - extended attributes, block allocation maps, inode allocation maps, headers describing the inode data
Section 3 - in-line data or xad's
Section 4 - extended attributes, or more in-line data, or additional xad's
Notes: Overview
Every file on a JFS2 file system is described by an on-disk inode. The inode holds the root header for the extent binary tree. File attribute data and block allocation maps are also kept in the inode.
Inode layout
The inode is a 512-byte structure, split into four 128-byte sections. The sections of the inode are described in this table.

Section   Description
1         Describes the POSIX attributes of the JFS2 object, including the inode and fileset number, object type, object size, user ID, group ID, access time, modified time, created time, and more.
2         Contains several parts:
          - Descriptors for extended attributes.
          - Block allocation maps.
          - Inode allocation maps.
          - Header pointing to the data (B+-tree root, directory, in-line data).
3         Can contain one of the following:
          - In-line file data for very small files (up to 128 bytes).
          - The first eight xad structures describing the extents for this file.
4         Extends section 3 by providing additional storage for more attributes, xad structures or in-line data.

Copyright IBM Corp. 2001, 2003
9-21
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Student Notebook
Structure
The current definition of the on-disk inode structure is:
struct dinode {
        /*
         *      I. base area (128 bytes)
         *      ------------------------
         *
         * define generic/POSIX attributes
         */
        ino64_t  di_number;    /* 8: inode number, aka file serial number */
        uint32   di_gen;       /* 4: inode generation number */
        uint32   di_fileset;   /* 4: fileset #, inode # of inode map file */
        uint32   di_inostamp;  /* 4: stamp to show inode belongs to fileset */
        uint32   di_rsv1;      /* 4: */

        pxd_t    di_ixpxd;     /* 8: inode extent descriptor */
        int64    di_size;      /* 8: size */
        int64    di_nblocks;   /* 8: number of blocks allocated */
        uint32   di_uid;       /* 4: uid_t user id of owner */
        uint32   di_gid;       /* 4: gid_t group id of owner */
        int32    di_nlink;     /* 4: number of links to the object */
        uint32   di_mode;      /* 4: mode_t attribute format and permission */
        j2time_t di_atime;     /* 16: time last data accessed */
        j2time_t di_ctime;     /* 16: time last status changed */
        j2time_t di_mtime;     /* 16: time last data modified */
        j2time_t di_otime;     /* 16: time created */

        /*
         *      II. extension area (128 bytes)
         *      ------------------------------
         */
        /*
         * extended attributes for file system (96);
         */
        ead_t    di_ea;        /* 16: ea descriptor */

        union {
                uint8   _data[80];

                /*
                 * block allocation map
                 */
                struct {
                        struct bmap *__bmap;     /* incore bmap descriptor */
                } _bmap;
#define di_bmap _data2._bmap.__bmap

                /*
                 * inode allocation map (fileset inode 1st half)
                 */
                struct {
                        uint32  _gengen;         /* di_gen generator */
                        struct inode *__ipimap2; /* replica */
                        struct inomap *__imap;   /* incore imap control */
                } _imap;
        } _data2;
#define di_gengen _data2._imap._gengen
#define di_ipimap2 _data2._imap.__ipimap2
#define di_imap _data2._imap.__imap

        /*
         * B+-tree root header (32)
         *
         * B+-tree root node header, or dtroot_t for directory,
         * or data extent descriptor for inline data;
         * N.B. must be on 8-byte boundary.
         */
        union {
                struct {
                        int32   _di_rsrvd[4];    /* 16: */
                        dxd_t   _di_dxd;         /* 16: data extent descriptor */
                } _xd;
                int32   _di_btroot[8];           /* 32: xtpage_t or dtroot_t */
                ino64_t _di_parent;              /* 8: idotdot in dtroot_t */
        } _data2r;
#define di_dxd _data2r._xd._di_dxd
#define di_btroot _data2r._di_btroot
#define di_dtroot _data2r._di_btroot
#define di_xtroot _data2r._di_btroot
#define di_parent _data2r._di_parent

        /*
         *      III. type-dependent area (128 bytes)
         *      ------------------------------------
         *
         * B+-tree root node xad array or inline data
         */
        union {
                uint8   _data[128];
#define di_inlinedata _data3._data

                /*
                 * regular file or directory
                 *
                 * B+-tree root node/inline data area
                 */
                struct {
                        uint8   _xad[128];
                } _file;

                /*
                 * device special file
                 */
                struct {
                        dev_t   _rdev;           /* device number; member type not preserved in this listing */
                } _specfile;
#define di_rdev _data3._specfile._rdev

                /*
                 * symbolic link.
                 *
                 * link is stored in inode if its length is less than
                 * IDATASIZE. Otherwise stored like a regular file.
                 */
                struct {
                        uint8   _fastsymlink[128];
                } _symlink;
#define di_fastsymlink _data3._symlink._fastsymlink
        } _data3;

        /*
         *      IV. type-dependent extension area (128 bytes)
         *      ---------------------------------------------
         *
         * user-defined attribute, or
         * inline data continuation, or
         * B+-tree root node continuation
         */
        union {
                uint8   _data[128];
#define di_inlineea _data4._data
        } _data4;
};
typedef struct dinode dinode_t;

Unit 9. Enhanced Journaled File System
Allocation policy
JFS2 allocates inodes dynamically, which provides the following advantages:
- Allows placement of inode disk blocks at any disk address, which decouples the inode number from the location. This decoupling simplifies supporting aggregate and fileset reorganization (to enable shrinking the aggregate). The inodes can be moved and still retain the same number, which makes it unnecessary to search the directory structure to update the inode numbers.
- There is no need to allocate ten times as many inodes as you will ever need, as with file systems that contain a fixed number of inodes; thus, file system space utilization is optimized. This is especially important with the larger inode size of 512 bytes in JFS2.
- File allocation for large files can consume multiple allocation groups and still be contiguous. Static allocation forces a gap containing the initially allocated inodes in each allocation group. With dynamic allocation, all the blocks contained in an allocation group can be used for data.
Dynamic inode allocation causes a number of problems, including:
- With static allocation, the geometry of the file system implicitly describes the layout of inodes on disk. With dynamic allocation, separate mapping structures are required.
- The inode mapping structures are critical to JFS2 integrity. Due to the overhead involved in replicating these structures, the risk of losing the maps themselves is accepted; however, the B+tree addressing structures are replicated, which allows the maps to be found.
Inode extents
Inodes are allocated dynamically by allocating inode extents that are simply a contiguous chunk of inodes on the disk. By definition, a JFS2 inode extent contains 32 inodes. With a 512 byte inode size, an inode extent occupies 16 KB on the disk.
Inode initialization
When a new inode extent is allocated, the inodes in the extent are initialized, i.e. their inode numbers and extent addresses are set, and the mode and link count fields are set to zero. Information about the inode extent is also added to the inode allocation map.
Inline Data
In-line data
Binary Trees
The visual shows an inode whose B+-tree header holds xad entries addressing the data directly: offset: 0, addr: 68, length: 16 (16 KB of data at block 68) and offset: 84, addr: 4096, length: 48 (48 KB of data at block 4096), along with 8 KB of data at block 26624 and in-line data in the inode.
INLINEEA bit
Once the 8 xad structures in the inode are filled, an attempt is made to use the last quadrant of the inode for more xad structures. If the INLINEEA bit is set in the di_mode field of the inode, then the last quadrant of the inode is available for 8 more xad structures. This design feature has not been implemented yet.
More Extents
The visual shows the inode after its in-line xad entries are moved out: the inode's B+-tree header now contains a single entry (offset: 0, addr: 412, length: 4) pointing to a 4-block leaf node at block 412, which holds the xad entries for the data extents (16 KB at block 68, 48 KB at block 4096, 8 KB at block 26624).
The next visual shows a second leaf node, at block 560, added to hold further xad entries as extents are appended; the data extents at blocks 68, 4096 (48 KB) and 26624 (8 KB) are unchanged.
Another Split
The visual shows the tree after another split: the inode's B+-tree header now contains two entries (offset: 0, addr: 380, length: 4 and offset: 8340, addr: 212, length: 4) pointing to internal nodes, which in turn reference the leaf nodes (at blocks 412 and 560) holding the xad entries for the data extents (16 KB at block 68, 48 KB at block 4096, 8 KB at block 26624).
As extents continue to be added, additional leaf nodes are created to contain the xad structures for the extents, and these leaf nodes are added to the internal node. Once the first internal node is filled, a second internal node is allocated, and the inode's second xad structure is updated to point to the new internal node. This behavior continues until all eight of the inode's xad structures contain internal nodes.
fsdb Utility
# fsdb /dev/lv00
Aggregate Block Size: 512
> help
Xpeek Commands
a[lter] <block> <offset> <hex string>
b[tree] <block> [<offset>]
dir[ectory] <inode number> [<fileset>]
d[isplay] [<block> [<offset> [<format> [<count>]]]]
dm[ap] [<block number>]
dt[ree] <inode number> [<fileset>]
h[elp] [<command>]
ia[g] [<IAG number>] [a | <fileset>]
i[node] [<inode number>] [a | <fileset>]
q[uit]
su[perblock] [p | s]
Notes: Introduction
The fsdb command enables you to examine, alter, and debug a file system.
Starting fsdb
It is best to run fsdb against an unmounted file system. Use the following syntax to start fsdb:

fsdb <path to logical volume>

For example:

# fsdb /dev/lv00
Aggregate Block Size: 512
>
Commands
The commands available in fsdb can be viewed with the help command as shown in the visual.
Exercise
Complete exercise seven
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a
What you will do:
- Use the fsdb utility to examine a JFS2 file system
- Identify a file's inode number
- Identify extent descriptors
- Locate the data extents that hold the contents of a file
Notes:
Turn to your lab workbook and complete exercise seven.
Directory
Notes: Introduction
In addition to files, an inode can represent a directory. A directory is a journaled meta-data file in JFS2, and is composed of directory entries, which indicate the files and sub-directories contained in the directory.
Directory entry
Stored in an array, the directory entries link the names of the objects in the directory to an inode number. The directory entry is a 32-byte structure and has the members shown here.

Member    Description
inumber   Inode number.
next      If more than 22 characters are needed for the name, additional entries are linked using the next pointer. (1 byte)
namlen    Length of the name. (1 byte)
name      File name, up to 22 characters.
The directory header (stored at the start of the in-line data area) contains, per the slot comments: the parent inode number (idotdot, 8 bytes), the next free entry in the stbl (1 byte), a free count (1 byte), a freelist header (1 byte), and the sorted entry index table (stbl, 8 bytes), within a 32-byte header slot.
Description (continued):
- The slot number of the head of the free list.
- The indices to the directory entry slots that are currently in use; the entries are sorted alphabetically by name.
- The array of directory entries. There are eight entries; the header is stored in the first slot.
Example
In the example shown above, the directory entry table contains four files. The stbl table contains the slot numbers of the entries, ordering the entries alphabetically.
. and .. directories
A directory does not contain specific entries for the self (.) and parent (..) directories. Instead, these are represented in the inode itself: self is the directory's own inode number, and the parent inode number is held in the idotdot field in the header.
The visual shows a small directory stored in-line. The header reads flag: BT_ROOT BT_LEAF, nextindex: 4, freecnt: 3, freelist: 6, idotdot: 2, stbl: {1,2,3,4,0,0,0}, followed by entries for foobar1 (inumber 69652), foobar12 (inumber 69653), foobar2 (inumber 69654), and longnamedfilewithover22charsinitsname (inumber 69655), whose long name continues in a second slot (next: 5).
Notes: Introduction
This section demonstrates how the directory structures change over time.
Small directories
Initial directory entries are stored in the directory inode's in-line data area. Examine the example of a small directory shown above: all the directory information fits into the in-line data area. Note that the file with a long name has its name split across two slots.
Adding a File
# ls -ai
69651 .
    2 ..
69656 afile
69652 foobar1
69653 foobar2
69654 foobar3
69655 longnamedfilewithover22charsinitsname
The visual shows the directory after adding afile: the header now reads flag: BT_ROOT BT_LEAF, nextindex: 5, freecnt: 2, freelist: 7, idotdot: 2, stbl: {6,1,2,3,4,0,0,0}, and slot 6 holds the new entry (inumber: 69656, namelen: 5, name: afile); the stbl is re-sorted so that afile sorts first.
In a larger directory, additional slots hold further entries; in the visual, slots 19 and 20 contain file18 (inumber 23) and file19 (inumber 24).
Block 118
flag: BT_ROOT BT_INTERNAL  nextindex: 4  freecnt: 4  freelist: 5  idotdot: 2
stbl: {1,3,4,2,6,7,2,8}
1 xd.len: 1  xd.addr1: 0  xd.addr2: 118   next: -1  namelen: 0  name: file0
  xd.len: 1  xd.addr1: 0  xd.addr2: 1204  next: -1  namelen: 8  name: file4845
  xd.len: 1  xd.addr1: 0  xd.addr2: 1991  next: -1  namelen: 9  name: file13833
  xd.len: 1  xd.addr1: 0  xd.addr2: 2609  next: -1  namelen: 8  name: file17723

flag: BT_INTERNAL  nextindex: 64  freecnt: 59  freelist: 76  maxslot: 128
stbl: {1,19,18, ... 7,8}
  1 xd.len: 1  xd.addr1: 0  xd.addr2: 52    next: -1  namelen: 0  name: file0
  ...
126 xd.len: 1  xd.addr1: 0  xd.addr2: 1473  next: -1  namelen: 8  name: file1472

Block 52
flag: BT_LEAF  nextindex: 64  freecnt: 59  freelist: 21  maxslot: 128
stbl: {1,2,15 ... 113,112}
  1 inumber: 5      next: -1  namelen: 5  name: file0
    inumber: 6      next: -1  namelen: 5  name: file1
    inumber: 15     next: -1  namelen: 6  name: file10
  ...
127 inumber: 10057  next: -1  namelen: 9  name: file10052
    inumber: 10041  next: -1  namelen: 9  name: file10036
Checkpoint
1. There is ____ aggregate per logical volume.
2. An allocation group is at least ____ aggregate blocks.
3. The number of inodes in a JFS2 file system is fixed. True or False?
4. The data contents of a file are stored in objects called _____.
5. A single extent can be up to ____ in size.
6. A JFS2 directory contains directory entries for the . and .. directories. True or False?
Notes:
Exercise
Complete exercise eight
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a
What you will do:
Use fsdb to examine the structures of directories in a JFS2 file system
Notes:
Turn to your lab workbook and complete exercise eight.
Unit Summary
- An aggregate is a pool of space allocated to filesets
- A fileset is a mountable file system
- The contents of files and directories are stored in extents
- Extents are arranged in B+ trees for fast file and directory traversal
Notes:
References
- AIX Documentation: Kernel Extensions and Device Support Programming Concepts
- AIX Documentation: Technical Reference: Kernel and Subsystems, Volume 1
- AIX Documentation: Technical Reference: Kernel and Subsystems, Volume 2
Unit Objectives
At the end of this lesson you should be able to:
- List the three uses for kernel extensions
- Build a kernel extension from scratch
- Compose an export file
- Create an extended system call
Notes:
Kernel Extensions
Kernel extensions can include:
- Device drivers
- System calls
- Virtual file systems
- Kernel processes
- Other device driver management routines

Kernel extensions run within the protection domain of the kernel.
Extensions can be loaded into the kernel during:
- system boot
- runtime
Notes: Introduction
The AIX kernel is dynamically extensible and can be extended by adding additional routines called kernel extensions. A kernel extension could best be described as a dynamically loadable module that adds functionality to the kernel.
Loading extensions
Extensions can be added at system boot or while the system is in operation. Extensions are loaded and removed from the running kernel using the sysconfig() system call.
Advantages
Allowing kernel extensions to be loaded and unloaded allows a system administrator to customize a system for particular environments and applications. Rather than bundling all possible options into the kernel at compile time (and creating a large kernel), kernel extensions allow maximum flexibility. The option of loading and unloading kernel extensions at runtime increases system availability and ease of use. In addition, development time is reduced since a new kernel does not have to be compiled and installed for each development cycle.
Disadvantages
Importing new code into the kernel allows the possibility of an unlimited number of runtime errors to be introduced into the system. Such issues as execution environment, path length, pageability, and serialization must be taken into account when writing extensions to the kernel.
[Figure: the global kernel name space. The core kernel (/unix) exports symbols, listed in /usr/lib/kernex.exp, and extended system calls. Kernel extensions and device drivers import these symbols and can in turn export their own symbols to other extensions; private routines remain outside the name space.]
Notes: Introduction
This section describes how symbol names are shared between the kernel and kernel extensions.
Name space
The kernel contains many functions and storage locations that are represented by symbols. The set of symbols used by the kernel makes up the kernel's name space. Some of these symbols are private to the parts of the kernel that use them. Others are made available for other parts of the kernel and for kernel extensions to use.
Exported symbols
The kernel makes symbols available to kernel extensions by exporting them. If a kernel extension or other program wants to reference these symbols, it must import them. Extensions can make symbols they define visible to other extensions by exporting those symbols.
Export file
The kernel export file has the following format:

#!/unix
* list of kernel exports
devswadd
devswchg
devswdel
devswqry
devwrite
e_assert_wait
e_block_thread
e_clear_wait
System calls
There is an additional file that lists the system calls that are exported from the kernel (/usr/lib/syscalls.exp).
The format of the syscalls.exp file is similar to the format of the kernel exports file, except for an additional tag on each system call. This tag indicates the ability of the system call to interact with 64-bit processes, distinguishing the following cases:

- This system call does not pass any arguments by reference (address).
- This system call is a 32-bit system call and passes 32-bit addresses.
- This system call is only available in the 64-bit kernel.
- This system call supports both 32-bit and 64-bit applications.
Notes: Introduction
Kernel extensions can export symbols that they define, which makes those symbols available for reference outside the extension. By default, all symbols within a kernel extension remain private: other kernel extensions cannot use the routines and variables within the extension. This default can be changed by creating an export file for the extension, listing the symbols you want exported. The format of an export file is identical to the format of an import file, so the export file of one kernel extension can be used as an import file by other kernel extensions that wish to use its symbols. Any symbols exported by a kernel extension are automatically added to the kernel global name space when the module is explicitly loaded.
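As a sketch, an export file for a hypothetical extension might look like the following (the module path and symbol names here are invented for illustration; the #! line names the module that importers will resolve these symbols against):

```
#!/usr/lib/drivers/myext
* symbols exported by the myext extension
myext_open
myext_close
myext_ioctl
```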
#!/usr/lib/drivers/pci/scsi_ddpin
Kernel Libraries
libcsys.a
a64l      l64a      memmove   strchr    strncat   strspn
atoi      memccpy   memset    strcmp    strncmp   strstr
bcmp      memchr    ovbcopy   strcpy    strncpy   strtok
bcopy     memcmp    remque    strcspn   strpbrk
bzero     memcpy    strcat    strlen    strrchr
libsys.a
d_align       newstack      xdump
d_roundup     secs_to_date  date_to_jul
timeout       date_to_secs  timeoutcf
untimeout
Notes: Introduction
Normal C applications are linked with the C library, libc.a, which provides a set of useful programming routines. The C library for application programs is a shared object, and it is not possible to access this user-level library from within the kernel protection domain. For this reason, kernel extensions should not be linked with the normal C library. Instead, a kernel extension may link with the libraries libcsys.a and libsys.a. These are static libraries (ar format archives of static .o files) that contain special kernel-safe versions of some useful routines, such as atoi() and strlen(), that are normally found in the regular C library. Note that the routines provided by libcsys.a are only a very small subset of those provided in the normal C library.
Kernel libraries
Libraries available to kernel extensions are shown in the visual on the previous page.
Reference
Additional information on libcsys.a and libsys.a is available in the AIX online documentation.
Configuration Routines
Kernel extension
int module_entry(cmd, uiop)
int cmd;
struct uio *uiop;
Device Driver
int dd_entry(dev, cmd, uiop)
dev_t dev;
int cmd;
struct uio *uiop;
Notes: Introduction
Unlike a normal user-level C language application, a kernel extension does not have a routine called main. Instead it has a configuration routine and one or more entry points. These routines can have any name, and are automatically exported to the global name space. In order to avoid conflicts in the kernel name space, it is normally best to prepend the names of exported symbols with something that indicates the extension which defines the symbol. For example, the symbol nfs_config is the entry point routine for the NFS kernel extension.
Configuration routine
An extension configuration routine is typically executed shortly after the extension is loaded. When linking the extension, the configuration routine is specified with the -e option of the ld command. The format of the configuration routine is shown in the visual. The uio structure is used to pass arguments from the configuration method. The value of cmd depends on the operation the configuration method is being requested to perform. See the later section on sysconfig() for details.
Entry points
Kernel extensions typically define one or more entry points. These are routines that could be called as a result of a system call or other action that invokes the kernel extension.
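As a user-space model (not real kernel code), a configuration routine typically dispatches on cmd. The command names and values below are invented for this sketch; a real AIX configuration routine has the (int cmd, struct uio *uiop) signature shown in the visual and receives its commands through sysconfig():

```c
#include <errno.h>

/* Hypothetical command codes, invented for this sketch; a real
 * extension defines its own command values. */
enum { MYEXT_CFG_INIT = 1, MYEXT_CFG_TERM = 2 };

static int myext_ready;

/* Model of a configuration routine: dispatch on the requested command
 * and return 0 on success or an errno value on failure. */
int myext_config(int cmd)
{
    switch (cmd) {
    case MYEXT_CFG_INIT:        /* prepare the extension for use */
        myext_ready = 1;
        return 0;
    case MYEXT_CFG_TERM:        /* release any resources */
        myext_ready = 0;
        return 0;
    default:                    /* unknown command */
        return EINVAL;
    }
}
```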
Link
ld -b64 -o ext64 ext64.o -e init_routine -bE:extension.exp \
    -bI:/usr/lib/kernex.exp -lsys -lcsys
Notes: Introduction
Compiling and linking a kernel extension is split into two phases:
1) Compile each source file to create an object file.
2) Link the required object files to create the extension binary.
Compiler command
A number of different commands can be used to invoke the compiler on AIX. The commands call the same compiler core with a different set of options. In general, kernel code should be compiled with either the cc or xlc commands.
Compiler options
The default mode for the compiler is 32-bit. In order to compile 64-bit code, the -q64 option should be used. Other compiler options may be used to generate additional information about the source files being compiled.

Compiler option    Meaning
-q64               Generate 64-bit object files (-q32 is the default).
-qlist             Produce an object listing; output goes to a .lst file.
-qsource           Produce a source listing; output goes to a .lst file.
-c                 Do not send files to the linkage editor.
-D<name>[=<def>]   Define <name> as in a #define directive. If <def> is not specified, 1 is assumed.
-M                 Generate information to be included in a make description file.
-O                 Generate optimized code.
-S                 Produce a .s output file (assembler source).
-v                 Display language processing commands as they are invoked by the compiler; output goes to stdout.
-qcpluscmt         Allow C++ style comments (//).
-qwarn64           Enable checking for possible long-to-integer or pointer-to-integer truncation.
Linking
Once you have created all of the object files, use the linker (ld) to create the kernel extension binary. Some linker options will always be used when creating the binary; some are optional, and some are platform dependent.

Linker option    Meaning
-b64             Generate a 64-bit executable.
-b32             Generate a 32-bit executable.
-eLabel          Set the entry point of the executable to Label.
-lcsys, -lsys    Link the libcsys.a and libsys.a libraries with the kernel extension.
-oName           Name the output file Name.
-bE:FileID       Export the external symbols listed in the file FileID.
-bI:FileID       Import the symbols listed in the file FileID.
ld -e entry_point [import files] [export files] \
    -o output_file object1.o object2.o -lcsys -lsys

The order of arguments is not important.
Step  Action
1     Compile a 32-bit object file using the -q32 compiler option.
          cc -q32 -o ext32.o -c ext.c -D_KERNEL -D_KERNSYS
2     Link a 32-bit module using the -b32 linker option.
          ld -b32 -o ext32 ext32.o -e ext_init \
              -bI:/usr/lib/kernex.exp -lcsys
3     Build a 64-bit object file from the same source file as step 1.
          cc -q64 -o ext64.o -c ext.c -D_KERNEL -D_KERNSYS \
              -D__64BIT_KERNEL
4     Link a 64-bit module using the -b64 linker option.
          ld -b64 -o ext64 ext64.o -e ext_init \
              -bI:/usr/lib/kernex.exp -lcsys
5     Create an archive containing both the 32- and 64-bit extensions.
          ar -X32_64 -r -v ext ext32 ext64
Notes: Introduction
Machines with 64-bit hardware can run either the 32-bit kernel or the 64-bit kernel. A kernel extension must be of the same binary type as the kernel. A kernel extension that supports both 32-bit and 64-bit kernels is packaged as an ar format archive library. The library contains both the 32-bit and 64-bit binary versions of the kernel extension. When the extension is loaded, if the kernel detects that the file is an ar format library, it will load the appropriate binary for the type of kernel. For example, a 64-bit kernel will extract the 64-bit binary from the library.
Loading Extensions
The sysconfig() system call can be used to:
- Load kernel extensions
- Unload kernel extensions
- Invoke the extension's entry point
- Query the kernel to determine if an extension is loaded

The loadext() library routine can be used to:
- Load kernel extensions
- Unload kernel extensions
- Query the kernel to determine if an extension is loaded
Notes: Introduction
A user-level program called a Configuration Method is used to load a kernel extension into the kernel. The program is normally a 32-bit executable, even on systems running the 64-bit kernel.
struct cfg_load {
        caddr_t path;      /* ptr to object module pathname */
        caddr_t libpath;   /* ptr to a substitute libpath   */
        mid_t   kmid;      /* kernel module id (returned)   */
};
sysconfig() - Configuration
sysconfig(SYS_CFGKMOD, &cfg_kmod, sizeof(cfg_kmod))

struct cfg_kmod {
        mid_t   kmid;      /* module ID of module to call      */
        int     cmd;       /* command parameter for module     */
        caddr_t mdiptr;    /* pointer to module dependent info */
        int     mdilen;    /* length of module dependent info  */
};
struct cfg_dd {
        mid_t   kmid;      /* module ID of device driver     */
        dev_t   devno;     /* device major/minor number      */
        int     cmd;       /* config command code for device */
        caddr_t ddsptr;    /* pointer to DD structure        */
        int     ddslen;    /* length of DD structure         */
};
sysconfig() commands
This table provides a complete list of commands for the sysconfig() system call:
Cmd Value        Result
SYS_KLOAD        Loads a kernel extension object file into kernel memory.
SYS_SINGLELOAD   Loads a kernel extension object file only if it is not already loaded.
SYS_QUERYLOAD    Determines if a specified kernel object file is loaded.
SYS_KULOAD       Unloads a previously loaded kernel object file.
SYS_QDVSW        Checks the status of a device switch entry in the device switch table.
SYS_CFGDD        Calls the specified device driver configuration routine (module entry point).
SYS_CFGKMOD      Calls the specified module at its module entry point for configuration purposes.
SYS_GETPARMS     Returns a structure containing the current values of run-time system parameters found in the var structure.
SYS_SETPARMS     Sets run-time system parameters from a caller-provided structure.
SYS_64BIT        When running on the 32-bit kernel, this flag can be bit-wise OR'ed with the cmd parameter (if the cmd parameter is SYS_KLOAD or SYS_SINGLELOAD). For kernel extensions, it indicates that the extension does not export 64-bit system calls, but that all of its 32-bit system calls also work for 64-bit applications. For device drivers, it indicates that the device driver can be used by 64-bit applications.
Notes: Introduction
The loadext() routine, defined in the libcfg.a library, is often used to perform the task of loading the extension code into the kernel. It uses a boolean logic interface to perform the query, load and unload of kernel extensions.
dd_name
The dd_name string specifies the pathname of the extension module to load. If the dd_name string is not a relative or absolute path name (in other words, it does not start with ./, ../, or a /), then it is concatenated to the string /usr/lib/drivers/. For example, PCI device drivers are normally stored in the /usr/lib/drivers/pci directory. The dd_name argument pci/fred would result in the loadext routine trying to load the file /usr/lib/drivers/pci/fred into the kernel.
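The pathname rule described above can be modeled in a few lines of C (resolve_dd_name is a name invented here for illustration; it is not part of libcfg.a):

```c
#include <stdio.h>
#include <string.h>

/* Model of the loadext() pathname rule: names that do not begin with
 * "/", "./" or "../" are taken relative to /usr/lib/drivers/. */
void resolve_dd_name(const char *dd_name, char *out, size_t outlen)
{
    if (dd_name[0] == '/' ||
        strncmp(dd_name, "./", 2) == 0 ||
        strncmp(dd_name, "../", 3) == 0)
        snprintf(out, outlen, "%s", dd_name);  /* use as given */
    else
        snprintf(out, outlen, "/usr/lib/drivers/%s", dd_name);
}
```

For example, the dd_name argument pci/fred resolves to /usr/lib/drivers/pci/fred, while /tmp/myext is used unchanged.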
Multiple copies
If you require multiple copies of a kernel extension to be loaded, you should use the sysconfig interface with the SYS_KLOAD command, since loadext uses SYS_SINGLELOAD, which will only load the extension if it is not already loaded.
System Calls
1) Switch the protection domain from user to kernel.
2) Switch to the kernel stack.
3) Execute the system call code.
Notes: Introduction
A system call is a function called by user-process code that runs in the kernel protection domain.
Notes: Introduction
This section describes the creation of a very simple kernel extension that adds a new system call to the kernel. The extended system call created here is called question().
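The full source is worked through in the lab exercises; a minimal sketch of the two routines such an extension needs might look like the following. The return value and the name question_init are invented here; real code would give the configuration routine the (int cmd, struct uio *uiop) signature shown earlier:

```c
/* question.c -- sketch of a trivial extended system call. */

/* Entry point: listed in the extension's export file and callable
 * from user programs once the extension is loaded. */
int question(void)
{
    return 42;   /* illustrative answer */
}

/* Configuration routine, named with the ld -e option at link time.
 * Simplified: it ignores configuration commands and reports success. */
int question_init(void)
{
    return 0;
}
```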
Argument Passing
[Figure: argument passing. 32-bit and 64-bit user processes invoke sys_call(int *) on both the 32-bit and the 64-bit kernel; the pointer argument must be handled correctly in each combination.]
Notes: Introduction
System calls can accept up to 8 arguments. Often these arguments are 64 bits long, or are pointers to buffers in the user's address space. Because AIX supports a mix of 32-bit and 64-bit environments, care must be taken when processing 64-bit arguments.
64-bit kernels
When running a 64-bit kernel, pointer arguments passed from a 32-bit process will be zero extended. This case requires no special handling.
32-bit kernels
In the 32-bit kernel, a kernel service that accepts a pointer as a parameter expects a 32-bit value. When dealing with a 64-bit user process, however, things are different. Although the kernel expects (and indeed receives) 32-bit values as the arguments, the parameters in the user process itself are 64-bit. The system call handler copies the low-order 32 bits of all parameters onto the kernel stack it creates before entering the system call. The high-order 32 bits are stored elsewhere. A new kernel service called get64bitparm() is used to retrieve the stored high-order 32 bits and reconstruct the 64-bit value inside the kernel.
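The splitting and rejoining itself is ordinary bit arithmetic. A user-space sketch of what the handler and get64bitparm() accomplish between them (the function name below is invented for illustration):

```c
#include <stdint.h>

/* The system call handler keeps only the low-order 32 bits of each
 * argument on the kernel stack; the high-order 32 bits are saved
 * elsewhere.  Rebuilding the original 64-bit value is a shift-and-OR. */
uint64_t rebuild64(uint32_t low32, uint32_t saved_high32)
{
    return ((uint64_t)saved_high32 << 32) | low32;
}
```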
get64bitparm()
The get64bitparm() kernel service is declared in the header file <sys/remap.h>.

The get64bitparm() kernel service is used to reconstruct a 64-bit long pointer that was passed (and truncated) from a 64-bit user process to the 32-bit kernel. The 64-bit system call handler stores the high-order 32 bits of all system call arguments. Once the 64-bit value has been reconstructed, the extension may use it for whatever purpose it deems necessary. In the following material we demonstrate the use of this service in forming a 64-bit address which is then used to read parameter data from a 64-bit process into a 32-bit kernel extension. In this case the get64bitparm() call is used to obtain a user space address which is then accessed by the copyin64() kernel service.
[Figure: data transfer between address spaces. copyin moves data from user_buffer to kernel_buffer; copyout moves data from kernel_buffer back to user_buffer.]
Notes: Introduction
Within the kernel, a number of services can be used to copy data from user space to kernel space, and from kernel space to user space.
Overview
User applications reside in the user protection domain and cannot directly access kernel memory. Kernel extensions reside in the kernel protection domain and cannot directly access user space memory.
List of services
The following services can be used to transfer data between user and kernel address space. Prototypes are defined for the services in the header file <sys/uio.h>.
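A user-space model of the copyin() contract may make the interface clearer (real kernel services validate the user address range; here an invalid address is crudely modeled as NULL, and the function name is invented):

```c
#include <errno.h>
#include <string.h>

/* Model of copyin(): copy count bytes from a "user" buffer into a
 * "kernel" buffer, returning 0 on success or EFAULT on a bad address. */
int copyin_model(const void *uaddr, void *kaddr, int count)
{
    if (uaddr == NULL || kaddr == NULL || count < 0)
        return EFAULT;
    memcpy(kaddr, uaddr, (size_t)count);
    return 0;
}
```

copyout() has the mirror-image contract, copying from the kernel buffer back to user space.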
32-bit kernels
Additional services can be used by 32-bit kernels when dealing with a 64-bit user process:

copyin64(unsigned long long uaddr, char *kaddr, int count);
copyinstr64(unsigned long long uaddr, caddr_t kaddr, uint max, uint *actual);
fubyte64(unsigned long long uaddr);
fuword64(unsigned long long uaddr);
copyout64(char *kaddr, unsigned long long uaddr, int count);
subyte64(unsigned long long uaddr, uchar val);
suword64(unsigned long long uaddr, int val);
IS64U
The macro IS64U can be used by system call code to determine if the calling process is 64-bit or 32-bit. The macro evaluates to true if the calling process is 64-bit. It checks the U_64bit member of the user structure described earlier. Both the user structure and the IS64U macro are defined in /usr/include/sys/user.h.
Checkpoint
1. Kernel extensions can be loaded at _____ _____ and during _______.
2. A kernel extension can be compiled and linked like a regular user application. True or False?
3. A kernel extension must supply a routine called main(). True or False?
4. Kernel extensions are used mainly for D_____ D_____, F_____ S______ and S______ C______.
5. The ________ system call is used to invoke the entry point of a kernel extension.
Notes:
Exercise
Complete exercise ten
- Consists of theory and hands-on
- Ask questions at any time
- Activities are identified by a
What you will do:
- Compile, link and load a kernel extension
- Write your own system call
- Write a kernel extension that creates kernel processes
- Create your own ps command
Notes:
Developing code for the kernel environment is very different from developing a user-level application. In general, kernel services perform very little (or no) checking of arguments for error conditions. The consequences of invoking a kernel service with incorrect arguments include data corruption, or even a system crash. This is in stark contrast to similar problems in a user-level application, which would normally just cause the application to terminate with a SIGSEGV signal. Turn to your lab workbook and complete exercise ten.
Unit Summary
- Kernel extensions are used to implement device drivers, file systems and extended system calls
- Kernel extensions can be loaded at boot time or runtime, and can be unloaded at runtime
- Kernel extensions require special compile and link steps
- Kernel extensions need to match the binary type of the running kernel
- Kernel extension code must take into account that the kernel is pageable
Notes:
Unit 1
Checkpoint Solutions
1. The kernel is the base program of the operating system.
2. The processor runs interrupt routines in kernel mode.
3. The AIX kernel is preemptable, pageable and dynamically extendable.
4. The 64-bit AIX kernel supports only 64-bit kernel extensions, and only runs on 64-bit hardware.
5. The 32-bit kernel supports 64-bit user applications when running on 64-bit hardware.
Unit 2
Checkpoint Solutions
1. KDB is used for live system debugging.
2. kdb is used for system image analysis.
3. The value of the dbg_avail kernel variable indicates how the debugger is loaded.
4. A system dump image contains everything that was in the kernel at the time of the crash. True or False? False. The system dump image contains only selected areas of kernel memory.
Unit 3
Checkpoint Solutions
1. AIX provides three programming models for user threads.
2. A new thread is created by the thread_create() system call.
3. The process table is an array of pvproc structures.
4. All process IDs (except pid 1) are even.
5. A thread table slot number is included in a thread ID. True or False? True.
6. A thread holding a lock may have its priority boosted.
Unit 4
Checkpoint Solutions
1. AIX divides physical memory into frames.
2. The virtual memory manager provides each process with its own effective address space.
3. A segment can be up to 256MB in size.
4. A 32-bit effective address contains a 4-bit segment number.
5. Shared library data segments can be shared between processes. True or False? False. The shared library text segments are shared, but the data segments are private.
6. The 32-bit user address space layout is the same as the 32-bit kernel address space layout. True or False? False.
Unit 5
Checkpoint Solutions
1. The system hardware maintains a table of recently referenced virtual to physical address translations.
2. The Software Page Frame Table contains information on all pages resident in physical memory.
3. Each working storage segment has an XPT.
4. A SIGDANGER signal is sent to every process when the free paging space drops below the warning threshold.
5. The PSALLOC environment variable can be used to change the paging space policy of a process.
6. A page fault when interrupts are disabled will cause the system to crash.
Unit 6
Checkpoint Solutions
1) What processor features are required in a partitioned system? RMO, RML and LPI registers are needed in a partitioned system.
2) Memory is allocated to partitions in units of 256MB.
3) All partitions have the same real mode memory requirements. True or False? False. AIX 5.2 and Linux need 256MB. AIX 5.1 requires 256MB, 1GB or 16GB, depending on the amount of memory allocated to the partition.
4) In a partitioned environment, a real address is the same as a physical address. True or False? False. A real address is not equivalent to a physical address in the partitioned environment.
5) Any piece of code can make hypervisor calls. True or False? False. Only kernel code can make hypervisor calls.
6) Which physical addresses in the system can a partition access? A partition can access the PMBs allocated to the partition, (and with hypervisor assistance) the partition's own page table, and the TCE windows for the allocated I/O slots.
Unit 7
Checkpoint Solutions (1 of 2)
1. Each user process contains a private File Descriptor Table.
2. The kernel maintains a vfs structure and a vmount structure for each mounted file system.
3. There is one gfs structure for each mounted file system. True or False? False. There is one gfs structure for each file system type registered with the kernel.
4. The three kernel structures volgrp, lvol and pvol are used to track LVM volume group, logical volume and physical volume data, respectively.
5. The kdb subcommand volgrp and the AIX command lsvg both reflect volume group information.
Unit 7 (continued)
Checkpoint Solutions (2 of 2)
1. There is one vmount/vfs structure pair for each mounted filesystem. True or False? True.
2. Every open file in a filesystem is represented by exactly one file structure. True or False? False. There is one file structure (system file table entry) for each unique open() of a file. So, a given file may be represented by several file structures.
3. The inode number given by ls -id /usr is shown as 2. Why? The reason is that ls is giving us the root inode of the /usr filesystem, not the inode of the /usr directory in the / (root) filesystem. To obtain this directory inode we need to follow the vfs_mntdover pointer in the vfs structure of the /usr filesystem. This points us to the vnode structure of the directory /usr in the root filesystem, which contains the directory inode number.
Unit 8
Checkpoint Solutions
1. An allocation group contains disk inodes and fragments.
2. The basic allocation unit in JFS is a disk block. True or False? False. The basic allocation unit is a fragment.
3. The root inode number of a filesystem is always 1. True or False? False. The root inode number is always 2.
4. The last 128 bytes of an in-core JFS inode is a copy of the disk inode. True or False? True. The first part of an in-core JFS inode contains data relevant only when the associated object is being referenced. This includes such items as open count and in-core inode state.
5. JFS maps user data blocks and directory information into virtual memory. True or False? True. JFS itself does copy operations and relies on VMM to do the actual I/O operations. This is a reason for JFS I/O efficiency.
Unit 9
Checkpoint Solutions
1. There is one aggregate per logical volume.
2. An allocation group is at least 8192 aggregate blocks.
3. The number of inodes in a JFS2 file system is fixed. True or False? False.
4. The data contents of a file are stored in objects called extents.
5. A single extent can be up to 2^24 - 1 blocks in size.
6. A JFS2 directory contains directory entries for the . and .. directories. True or False? False. The information for . and .. is contained in the inode of the directory.
Unit 10
Checkpoint Solutions
1. Kernel extensions can be loaded at system boot and during runtime.
2. A kernel extension can be compiled and linked like a regular user application. True or False?
   False.
3. A kernel extension must supply a routine called main(). True or False?
   False. A kernel extension supplies a configuration entry point instead.
4. Kernel extensions are used mainly for device drivers, file systems, and system calls.
5. The sysconfig system call is used to invoke the entry point of a kernel extension.
References
Unit Objectives
At the end of this unit you should be able to:
- Configure a system to perform a system dump
- Test the system dump configuration of a system
- Validate a dump file
BE0070XS4.0
Notes:
Crash Dumps
- What is a crash dump?
- When is a crash dump created?
- What is a crash dump used for?
Process Flow
1. AIX 5L is running in production.
2. The system panics.
3. The memory dumper is run: memory is copied to the disk location specified in the SWservAt ODM object class.
4. On the next boot, copycore copies the dump into /var/adm/ras. copycore is called by rc.boot.
Exercise
Complete exercise A:
- The exercise consists of theory and hands-on activities
- Ask questions at any time
- Activities are identified by a

What you will do:
- Learn about the sysdumpdev command
- Configure your lab system to perform a system dump
- Test the crash dump configuration
- Verify you have obtained a successful system dump