The Linux kernel acts as an interface between applications and hardware, managing system resources and providing access through system calls. It uses a monolithic design in which all components run in kernel space; this yields high performance but can make the kernel difficult to maintain, although Linux mitigates the problem by supporting dynamic loading of modules. Device drivers interface hardware such as keyboards, hard disks, and network devices with the operating system; they are implemented as loadable kernel modules that are compiled using special makefiles and registered with the kernel through system calls.
This document outlines the key components of a Linux character device driver, including modules, major and minor numbers, data structures like struct file and struct file_operations, driver registration, and core functions like open, release, read and write. It provides an introduction to character device drivers in Linux and their basic architecture.
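A minimal sketch of how these pieces fit together, assuming an illustrative driver name (demo), a hypothetical free major number (240), and the classic register_chrdev() interface; this builds only against kernel headers with a module makefile (obj-m := demo.o), not as an ordinary user-space program:

```c
/* Sketch of a character driver's core: a struct file_operations
 * wired to open/release/read handlers and registered under a
 * major number. All demo_* names and the major number 240 are
 * illustrative choices, not taken from the document. */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/uaccess.h>

#define DEMO_MAJOR 240                      /* hypothetical free major */

static char demo_buf[] = "hello from demo\n";

static int demo_open(struct inode *inode, struct file *filp)
{
    return 0;                               /* nothing to set up */
}

static int demo_release(struct inode *inode, struct file *filp)
{
    return 0;
}

static ssize_t demo_read(struct file *filp, char __user *buf,
                         size_t count, loff_t *f_pos)
{
    size_t len = strlen(demo_buf);

    if (*f_pos >= len)
        return 0;                           /* EOF */
    if (count > len - *f_pos)
        count = len - *f_pos;
    if (copy_to_user(buf, demo_buf + *f_pos, count))
        return -EFAULT;
    *f_pos += count;
    return count;
}

static const struct file_operations demo_fops = {
    .owner   = THIS_MODULE,
    .open    = demo_open,
    .release = demo_release,
    .read    = demo_read,
};

static int __init demo_init(void)
{
    /* Claim the major number; a device node is then created with
     * mknod /dev/demo c 240 0 (or by udev). */
    return register_chrdev(DEMO_MAJOR, "demo", &demo_fops);
}

static void __exit demo_exit(void)
{
    unregister_chrdev(DEMO_MAJOR, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

The major number identifies the driver and the minor number the specific device instance; the file_operations table is what maps user-space open/read/release calls onto these handlers.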
Linux is a widely used open source operating system kernel that can also refer to full operating system distributions. It is commonly used in embedded systems due to its portability, modularity, and ability to run on hardware with limited resources. Device drivers can be dynamically loaded and unloaded from the Linux kernel as modules, allowing new functionality to be added without rebooting the system. This makes Linux well-suited for embedded device development.
The board support package (BSP) provides hardware abstraction and initialization routines for the Linux kernel. It hides low-level hardware details and allows drivers and the kernel to work across different boards. Key BSP components include microprocessor support, board-specific routines for boot loading, memory mapping, interrupts, timers and power management. The BSP establishes the interface between the hardware and operating system.
A device driver allows operating systems and programs to interact with hardware devices. This document discusses device drivers in Linux, including that drivers are loaded as kernel modules, communicate between user and kernel space, and have character, block, and network classes. It provides examples of initializing and removing a sample "memory" driver that allows reading from and writing to a character device memory buffer.
The document provides an overview of the key components of the Linux operating system, including:
1) The Linux kernel, which acts as a resource manager for processes, memory, and hardware devices.
2) Process and memory management systems that control how processes are allocated and memory is allocated and freed.
3) The file system which organizes how files are stored and accessed.
4) Device drivers that allow the operating system to interface with hardware.
5) The network stack which handles network protocols and connections.
6) Architecture-dependent code that provides hardware-specific implementations of core functions.
This presentation covers compiling the Linux kernel from source, how device drivers are implemented in Linux, udev, and the loading and unloading of kernel modules.
by Shreyas MM (www.shreyasmm.com)
Drivers do more than just I/O - 15% perform data processing. Only 23% of code is for I/O and interrupts, with 36% for initialization and cleanup. 44% of drivers do not conform to class definitions due to unique parameters, ioctls, and interactions. There are opportunities to improve abstractions, as at least 8% of code is similar across drivers. Better abstractions could reduce code size and bugs.
Linux device drivers act as an interface between hardware devices and user programs. They communicate with hardware devices and expose an interface to user applications through system calls. Device drivers can be loaded as kernel modules and provide access to devices through special files in the /dev directory. Common operations for drivers include handling read and write requests either through interrupt-driven or polling-based I/O.
The document provides an overview of the Linux kernel architecture and processes. It discusses key kernel concepts like the monolithic kernel design, system calls, loadable modules, virtual memory, and preemptive multitasking. It also covers kernel functions, layers, and context switching between processes. The CPU scheduler, multi-threading, inter-process communication techniques, and tunable kernel parameters are summarized as well.
The document summarizes the architecture of the Linux operating system. It discusses the main components of Linux including the kernel, process management, memory management, file systems, device drivers, network stack, and architecture-dependent code. The kernel is at the core and acts as a resource manager. It uses a monolithic design. Process and memory management are handled via data structures like task_struct and buddy allocation. Virtual memory is implemented using page tables. File systems organize files in a hierarchy with inodes. Device drivers interface with hardware. The network stack follows a layered model. Architecture code is separated by subdirectory.
Part 02: Linux Kernel Module Programming, by Tushar B Kute
Presentation on "Linux Kernel Module Programming".
Presented at Army Institute of Technology, Pune for FDP on "Basics of Linux Kernel Programming". by Tushar B Kute (http://tusharkute.com).
Some basic knowledge required for beginners writing a Linux kernel module, with a description of the Linux source tree so the reader gets an idea of where and how development happens. The working of the insmod and rmmod commands is also described.
The document provides an introduction to Linux and device drivers. It discusses Linux directory structure, kernel components, kernel modules, character drivers, and registering drivers. Key topics include dynamically loading modules, major and minor numbers, private data, and communicating with hardware via I/O ports and memory mapping.
The document provides an introduction to Linux kernel modules. It discusses that kernel modules extend the capabilities of the Linux kernel by executing code as part of the kernel. It then describes the anatomy of a kernel module, including initialization and cleanup functions. The document demonstrates a simple "hello world" kernel module example and how to build, load and unload kernel modules. It also introduces the idea of character device drivers as a more advanced kernel module example.
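The "hello world" module mentioned above can be sketched as follows; the names are illustrative, and the code builds only in a kernel build environment (a makefile with obj-m := hello.o against the kernel headers), so it is a sketch rather than a drop-in:

```c
/* Minimal kernel module showing the anatomy described above:
 * an initialization function run at load time and a cleanup
 * function run at unload time. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello-world example module");

/* Runs when the module is loaded (insmod hello.ko). */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                /* non-zero would abort loading */
}

/* Runs when the module is unloaded (rmmod hello). */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

The printk output lands in the kernel log (visible with dmesg), not on the terminal, which is the first practical difference newcomers notice between kernel and user-space code.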
There are three ARM core variants for memory models:
- Flat memory model provides direct access to all memory but is not suitable for multi-tasking. Examples are ARM7TDMI and ARM7EJ-S.
- Memory Protection Unit (MPU) uses regions of memory to actively protect system resources for multi-tasking platforms. Examples are ARM946E-S.
- Memory Management Unit (MMU) provides protection of resources and enables virtual memory. The ARM1026EJ-S uses an MMU.
The document discusses Linux device drivers, including how they interface between user programs and hardware using a file interface, are implemented as loadable kernel modules (LKMs) that are inserted into the kernel, and how common operations like open, read, write, and release are mapped to kernel functions through a file operations structure.
This document discusses several important Android kernel modules. It begins by outlining key network modules like Netlink for IPC, network schedulers for packet sequencing, Netfilter for packet filtering, and Nftables as a new packet filtering framework. For sound, it describes ALSA as the software API for sound card drivers. Regarding graphics, it mentions the Direct Rendering Manager for interfacing with GPUs and configuring display modes, and the Graphics Execution Manager for managing graphics memory. Finally, it notes the evdev input driver handles input from devices like keyboards and mice.
The document discusses kernel mode and user mode in operating systems. It defines the kernel as the core software that manages computer resources and allows users to share them. The kernel can interact directly with hardware and runs with privileged access, while user applications run in user mode with restrictions to prevent crashes. A process switches from user to kernel mode by making a system call to access privileged resources like hardware, files or memory. Interrupts from devices can also trigger a switch to kernel mode to handle the interrupt.
This course gets you started with writing device drivers in Linux by providing real-time hardware exposure. It equips you with real-time tools, debugging techniques, and industry practice in a hands-on manner, using dedicated hardware from Emertxe's device-driver learning kit, with a special focus on character and USB device drivers.
The document discusses device drivers and their modeling for real-time schedulability analysis. It provides an overview of device drivers, their design and how they interact with hardware and operating systems. It then discusses challenges device drivers pose for real-time systems, where all tasks must complete within specified time constraints. It presents an analysis of the Linux e1000 network interface driver as a case study and references additional resources on the topic.
The document discusses macro processors and their functions. It covers:
- The basic functions of a macro processor including macro directives, prototypes, and expansion.
- One-pass and two-pass macro processing algorithms. One-pass can handle nested macros recursively.
- Data structures for storing macro definitions and arguments during expansion.
- Techniques for handling nested macros, generating unique labels, and conditional and looping expansion.
- Machine-independent features like concatenation, null arguments, and keyword parameters.
This document discusses software tools, defining them as system programs that interface programs with input/output data or help develop other programs. It describes two main types of software tools: tools for program development like program generators and editors, and user interface tools. The document then provides detailed explanations and examples of different tools used for program design, testing, debugging, documentation, and performance enhancement.
The macro processor detects macro triggers like % and & in the code and handles macro code and variable substitution. It stores macro variables and their values in a symbol table. When it detects a macro variable reference &variable, it looks up the variable name in the symbol table and substitutes the variable value into the code before passing it to the compiler. This allows macros to generate dynamic code with variable data.
The document provides an introduction to compilers. It discusses that compilers are language translators that take source code as input and convert it to another language as output. The compilation process involves multiple phases including lexical analysis, syntax analysis, semantic analysis, code generation, and code optimization. It describes the different phases of compilation in detail and explains concepts like intermediate code representation, symbol tables, and grammars.
Loader is a utility program that takes object code as input, prepares it for execution by allocating and relocating code into memory, and initiates the execution process. It can be time consuming as it must load and link all subroutines each time. Though smaller than an assembler, a loader still uses a considerable amount of space. Dividing the loading process into a binder and module loader can help address these problems by making the loading process more efficient.
This document discusses inherited and synthesized attributes in semantic analysis using syntax-directed translation (SDT). It covers:
- Synthesized attributes are defined by semantic rules associated with productions and rely only on child nodes, while inherited attributes rely on parent/sibling nodes.
- Terminals can have synthesized attributes from lexing but not inherited attributes. Nonterminals can have both.
- Annotated parse trees show attribute values, while dependency graphs determine evaluation order.
- S-attributed definitions rely only on synthesized attributes and evaluate bottom-up. L-attributed definitions restrict inherited attributes to avoid cycles.
- SDTs can construct syntax trees during parsing to decouple parsing from translation.
The document discusses the role of the parser in compiler design. It explains that the parser takes a stream of tokens from the lexical analyzer and checks if the source program satisfies the rules of the context-free grammar. If so, it creates a parse tree representing the syntactic structure. Parsers are categorized as top-down or bottom-up based on the direction they build the parse tree. The document also covers context-free grammars, derivations, parse trees, ambiguity, and techniques for eliminating left-recursion from grammars.
Lex is a tool that generates lexical analyzers (scanners) that are used to break input text streams into tokens. It allows rapid development of scanners by specifying patterns and actions in a lex source file. The lex source file contains three sections - definitions, translation rules, and user subroutines. The translation rules specify patterns and corresponding actions. Lex compiles the source file to a C program that performs the tokenization. Example lex programs are provided to tokenize input based on regular expressions and generate output.
Introduction to Embedded Linux Device Driver and Firmware, by definecareer
This document provides an introduction to embedded Linux and device drivers. It defines an embedded system as a special-purpose computer designed to perform dedicated functions with real-time constraints, often embedded as part of a complete device. Linux is commonly used in embedded systems due to advantages like reuse of existing components, community support, and low cost. The document outlines the typical hardware and software requirements for developing embedded Linux systems, and provides an overview of the Linux kernel architecture for device drivers, including the unified device model, bus drivers, and platform devices.
The document discusses syntax-directed translation using attribute grammars. Attribute grammars assign semantic values or attributes to the symbols in a context-free grammar. A depth-first traversal of the parse tree executes semantic rules that calculate the values of attributes. There are two types of attributes: synthesized attributes which are computed bottom-up and inherited attributes which are passed top-down. L-attributed grammars allow efficient evaluation by passing inherited attributes left-to-right during a depth-first traversal.
The document discusses top-down and bottom-up parsing techniques. Top-down parsing constructs a parse tree starting from the root node and progresses depth-first. It can require backtracking. Bottom-up parsing uses shift-reduce parsing, shifting input symbols onto a stack until they can be reduced based on grammar rules.
A macro processor allows programmers to define macros, which are single line abbreviations for blocks of code. The macro processor performs macro expansion by replacing macro calls with the corresponding block of instructions defined in the macro. It uses a two pass approach, where the first pass identifies macro definitions and saves them to a table, and the second pass identifies macro calls and replaces them with the defined code, substituting any arguments.
This document provides an overview of a system software course, including:
- The course will cover compilers, assemblers, linkers, loaders, macro processors, and file/process management under Windows.
- The objective is to gain a deep understanding of how computers work by examining the relationship between system software and machine architecture, and how system software aids in program development and execution.
- Key topics will include an introduction to system software, compilers, loaders, operating systems, and the programming process from coding to running a program.
Compiler vs Interpreter: Compiler Design, by Md Hossen
This document presents a comparison between compilers and interpreters. It discusses that both compilers and interpreters translate high-level code into machine-readable code, but they differ in their execution process. Compilers translate entire programs at once during compilation, while interpreters translate code line-by-line at runtime. As a result, compiled code generally runs faster but cannot be altered as easily during execution as interpreted code. The document provides examples of compiler and interpreter code and outlines advantages of each approach.
The document discusses the different phases of a compiler including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It provides details on each phase and the techniques involved. The overall structure of a compiler is given as taking a source program through various representations until target machine code is generated. Key terms related to compilers like tokens, lexemes, and parsing techniques are also introduced.
Phases of the Compiler: Systems Programming, by Mukesh Tekwani
The document describes the various phases of compilation:
1. Lexical analysis scans the source code and groups characters into tokens.
2. Syntax analysis checks syntax and constructs parse trees.
3. Semantic analysis generates intermediate code, checks for semantic errors using symbol tables, and enforces type checking.
4. Optional optimization improves programs by making them more efficient.
The document discusses operating system concepts like processor modes, system calls, inter-process communication (IPC), process creation, and linking and loading of processes. It defines an operating system as an interface between the user and computer hardware that manages system resources efficiently. It explains that the CPU has two modes - kernel mode and user mode - to distinguish between system and user code. System calls allow user programs to request services from the operating system by triggering an interrupt to switch to kernel mode. Common IPC mechanisms and their related system calls are also outlined.
Part 04: Creating a System Call in Linux, by Tushar B Kute
Presentation on "System Call creation in Linux".
Presented at Army Institute of Technology, Pune for FDP on "Basics of Linux Kernel Programming". by Tushar B Kute (http://tusharkute.com).
The document discusses trap handling in Linux, focusing on system calls. It begins with background on interrupts, traps, and system calls. It then describes the function call flow from start_kernel() and initialization of the Interrupt Descriptor Table (IDT). Next, it covers system call entry and initialization of the system call table. Finally, it discusses the system call procedure from a user application using glibc and the Linux kernel. Key topics include IDT structure and gates, MSR register usage for system calls, fast vs slow system call paths, and how system calls are invoked and handled in the kernel.
Operating System Concepts Presentation, by Nitish Jadia
Operating System Concepts was presented by Nitish Jadia at the Bhopal null meetup, to make people aware of the internal workings of the OS they use.
The contents and explanation of this presentation were inspired by and taken from Operating System Concepts by Silberschatz, Galvin, and Gagne.
This document provides an overview of kernel APIs and system calls. It discusses that the kernel acts as an interface between user applications and hardware, and its objectives include managing communication between software and hardware, controlling tasks, and establishing communication. It describes different types of kernels like monolithic, micro, and hybrid kernels. It also explains important concepts like system calls, which allow processes to request services from the kernel, and examples of common system calls like wait, fork, exec, kill, and exit. Finally, it gives examples of kernel APIs for managing linked lists, including adding/removing entries and moving between lists.
This document provides an overview of Unix fundamentals, including computer hardware components, storage terminology, processing power terminology, what an operating system is, the difference between single-user and multi-user systems, multitasking and timesharing in Unix, components of Unix like the kernel and shell, the history and versions of Unix including Linux, how to log in and out of Unix, the Unix interface, Xwindows, common windows managers, the Unix bootup sequence, an overview of the Unix file system structure including inodes and data blocks, how permissions are checked, the process of reading and writing files, how the open system call works to retrieve an inode, and the structure and contents of directories.
This document discusses the key components of a file system and disk drive architecture. It covers disk structure, scheduling, management, and swap space. It describes concepts like disk formatting, boot blocks, bad blocks, and swap space location. The document also provides a high-level overview of the Windows 2000 operating system, covering its architecture, kernel, processes and threads, exceptions, and subsystems.
This document discusses the key components of a file system and disk drive operation. It covers disk structure, scheduling, management, and swap space. Disks are made up of logical blocks that are mapped to physical sectors. Scheduling algorithms like FCFS, SSTF, SCAN and C-SCAN are used to optimize disk head movement. Disk formatting partitions disks and writes file system structures. Swap space is used for virtual memory paging.
The document discusses various components and structures of operating systems. It covers topics like process management, memory management, file management, I/O management, networking, protection systems, and system calls. Operating systems are commonly structured using layers or modules to separate mechanisms from policies and simplify debugging. Well-known examples like MS-DOS, UNIX, and OS/2 are discussed to illustrate different approaches to system design and implementation.
The document discusses operating system layers and components. It describes how kernels and servers work together to perform invocation-related tasks like communication, scheduling, and resource management. Kernels set protection for physical resources through memory management and address spaces. Processes have separate user and kernel address spaces to safely access resources. Threads and processes are core components managed by the operating system.
Unit I (8 Hrs)
Introduction to System Software , Overview of all system software’s: Operating system
I/O manager, Assembler, Compiler, Linker ,Loader.
Introductory Concepts: Operating system functions and characteristics, historical evolution
of operating systems, Real time systems, Distributed systems.
Unit II (8 Hrs)
Operating Systems: Methodologies for implementation of O/S services, system calls,
system programs, Interrupt mechanisms.
Process - Concept of process and threads, Process states, Process management, Context
switching
Interaction between processes and OS Multithreading Process Control, Job schedulers,
Job Scheduling, scheduling criteria, scheduling algorithms
Unit III (8 Hrs)
Concurrency Control : Concurrency and Race Conditions, Mutual exclusion requirements
Software and hardware solutions, Semaphores, Monitors, Classical IPC problems and
solutions.
Deadlock : Characterization, Detection, Recovery, Avoidance and Prevention.
Unit IV (8 Hrs)
Memory management: Contiguous and non-contiguous, Swapping, Paging, Segmentation
and demand Paging, Virtual Memory, Management of Virtual memory: allocation, fetch and
replacement
Unit V (8 Hrs)
File Management: Concept, Access methods, Directory Structure, Protection, File System
implementation, Directory Implementation, Allocation methods, Free Space management,
efficiency and performance
IO systems: disk structure, disk scheduling, disk management.
Unit VI (8 Hrs)
Case Study of Linux: Structure of LINUX, design principles, kernel, process management and
scheduling, file systems installing requirement, basic architecture of UNIX/Linux system, Kernel,
Shell Commands for files and directories cd, cp, mv, rm, mkdir, more, less, creating and viewing
files, using cat, file comparisons, View files, disk related commands, checking disk free spaces,
Essential linux commands.
Understanding shells, Processes in linux – process fundamentals, connecting processes with pipes,
Redirecting input output, manual help, Background processing, managing multiple processes,
changing process priority, scheduling of processes at command, batch commands, kill, ps, who,
sleep, Printing commands, grep, fgrep, find, sort, cal, banner, touch, file, file related commands – ws, sat, cut, grep, dd, etc. Mathematical commands – bc, expr, factor, units. Editors: vi, joe, vim
This document discusses how to add a system call to the Ubuntu operating system. It begins with an introduction to Ubuntu and explains what a kernel and system calls are. It then provides step-by-step instructions for adding a new system call, including editing relevant files, adding code, and recompiling the kernel. Sample code for calling the new system call is also included. The document concludes with instructions for replacing the current kernel with a new one that has been compiled.
Basic Organisation and Fundamentals of Computer, by hasanbashar400
This document provides an introduction to the basic organization of a computer system. It discusses the central processing unit (CPU) which includes the arithmetic logic unit (ALU), control unit, and registers. It describes the instruction cycle and how pipelining improves processor efficiency. Memory, both primary and secondary, is covered as well as input/output devices. The bus system that connects the main components is defined, including address lines, data lines, and control lines. Finally, operating systems and their key functions in managing hardware and software are introduced.
The document discusses the boot sequence of a computer system. It examines each step including the PROM monitor, boot block, secondary boot loader, and OS kernel initialization. It also covers modifying the boot process, selecting alternate boot devices, different boot loaders, and proper system shutdown procedures.
The document discusses the boot sequence of a computer system. It examines each step including the PROM monitor, boot block, secondary boot loader, the OS kernel, and start-up scripts. The administrator should understand this boot process as well as how to modify the boot sequence, select alternate devices, and properly shut down the system.
2. Kernel: The central component of the operating system and the bridge between applications and the hardware. It manages the system's resources, providing the lowest-level abstraction layer (especially for processors and I/O devices), and makes these facilities available through inter-process communication mechanisms and system calls.
3. Monolithic Kernel: All OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access and increases the speed of the system. The main disadvantages are the dependencies between system components and the fact that large kernels can become very difficult to maintain.
4. Micro Kernel: Defines a simple abstraction over the hardware, with a set of system calls to implement minimal OS services such as memory management. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers. Microkernels are easier to maintain, but the large number of system calls and context switches might slow the system down, because they typically generate more overhead than plain function calls.
5. Linux Kernel: The Linux kernel is monolithic, with the capability of dynamically loading modules.
6. System Call Interface: Each call within the libc library is generally a _syscallX macro, where X is the number of parameters used by the actual routine. Each _syscallX macro expands to an assembly routine which sets up the calling stack frame and calls _system_call() through an interrupt, via the instruction int $0x80, which transfers control to the kernel. Since the system call interface is exclusively register-parametered, at most six parameters can be used with a single system call: %eax holds the syscall number, while %ebx, %ecx, %edx, %esi, %edi and %ebp are the six generic registers used as param0-5. %esp cannot be used because it is overwritten by the kernel when it enters ring 0 (i.e., kernel mode). If more parameters are needed, a structure can be placed anywhere within the address space and pointed to from a register (not the instruction pointer, nor the stack pointer; the kernel-space functions use the stack for parameters and local variables). This case is extremely rare, though; most system calls have either no parameters or only one.
7. An Example read(fd,buffer,size) corresponds to a system call with three arguments, so it is expanded by the _syscall3 macro. A statement _syscall3(int,read,int,fd,char *,buf,off_t,len) has been added in the header file for the macro expansion to take place. After the expansion, the system call number is in register %eax and the arguments to the system call are in the general purpose registers of the processor. The macro then executes the int 0x80 instruction after loading the registers, so kernel mode is entered and the kernel executes on behalf of the process that initiated the system call. The int 0x80 instruction invokes the system call handler. Each system call has a service routine defined in the kernel; the address of each of these routines is stored in an array named sys_call_table (code in the file /usr/src/linux-/arch/i386/kernel/entry.S; the path name has to be filled in accordingly for different kernel versions). The system call handler calls the service routine corresponding to the system call by looking at the system call number loaded in %eax, so the service routine corresponding to the read system call is executed. After the service routine finishes, control comes back to the system call handler, which then gives control back to the user process, and the mode of operation is changed back to user mode.
8. Interrupt Vector 0x80 is used to transfer control to the kernel. This interrupt vector is initialized during system startup, along with other important vectors such as the system clock vector. The startup_32() code found in /usr/src/linux/boot/head.S starts everything off by calling setup_idt(). This routine sets up an IDT (Interrupt Descriptor Table) with 256 entries. No interrupt entry points are actually loaded by this routine, as that is done only after paging has been enabled and the kernel has been moved to 0xC0000000. An IDT has 256 entries, each 8 bytes long, for a total of 2048 bytes. When start_kernel() (found in /usr/src/linux/init/main.c) is called, it invokes trap_init() (found in /usr/src/linux/kernel/traps.c). trap_init() sets up the IDT via the macro set_trap_gate() (found in /usr/include/asm/system.h) and initializes the interrupt descriptor table.
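In the 2.x kernels, trap_init() looked roughly like the following (an abridged sketch from memory, not a verbatim excerpt; vector numbers and handler names may differ between versions). The important line for system calls is the one installing vector 0x80 via set_system_gate(), a gate type that, unlike set_trap_gate(), is callable from ring 3 so user code can reach it:

```c
void trap_init(void)
{
        set_trap_gate(0, &divide_error);      /* CPU exception vectors */
        set_trap_gate(1, &debug);
        set_trap_gate(2, &nmi);
        /* ... further exception and fault vectors ... */
        set_system_gate(0x80, &system_call);  /* the system call gate,
                                                 callable from user mode */
}
```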
9. User/kernel space User space is that portion of system memory in which user processes run. This contrasts with kernel space, the portion of memory in which the kernel (i.e., the core of the operating system) executes and provides its services.
10. Characteristics of Kernel No conventional C library; the kernel has its own kernel-space versions of the C library routines. No floating-point support. No memory protection: a fault in kernel code can bring down the whole system. The Linux kernel (scheduler) is preemptive.
11. LINUX Kernel Versioning The version number is x.y.z: x – major version number; y – minor version number (even means a stable/production series, odd means an unstable/development series); z – release number.
12. Device Drivers Software that interfaces hardware with the OS. Depending on the unit of data transmission there are three types: char device – byte-by-byte transmission (keyboard); block device – block transmission (hard disk); network device – packet transmission (router, bridge). Drivers are implemented in two ways: statically built in – loaded automatically with the OS image at boot time; dynamically loadable – added to kernel space on demand.
13. Compiling Driver Programs Driver programs can't be compiled in the conventional way using cc; you need to write a makefile that includes the kernel's special build files. GNU make searches for makefiles in the priority order GNUmakefile, makefile, Makefile. To use a customized makefile with another name, run make -f my_make_file.
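A minimal out-of-tree module makefile might look like the following sketch (the source file name memory.o and the kernel build directory path are assumptions for your setup; the kbuild system under the kernel build directory does the real work):

```make
# Sketch of a kbuild makefile for an out-of-tree module.
# obj-m lists objects to build as loadable modules (.ko).
obj-m := memory.o

KDIR  := /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

If saved under a custom name, invoke it as make -f my_make_file.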
14. Programming Drivers Driver programs are called modules. Like main() in an application, the entry point of a driver program is init_module() and the exit point is cleanup_module(). After compiling, a .ko file is generated. Using the utility insmod you can load your module into the kernel. insmod acts as a runtime linker: it allocates memory, resolves the .ko object against the running kernel, and adds the resulting code to kernel space.
15. Programming Drivers II Using macros we can rename the entry/exit points: module_init(my_entry) module_exit(my_exit)
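Putting the two slides together, a minimal module skeleton might look like this sketch (the names my_entry and my_exit are arbitrary; build it with a kbuild makefile and load it with insmod):

```c
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

/* Called by insmod; return 0 on success, a negative errno on failure. */
static int __init my_entry(void)
{
        printk(KERN_INFO "sample module loaded\n");
        return 0;
}

/* Called by rmmod. */
static void __exit my_exit(void)
{
        printk(KERN_INFO "sample module unloaded\n");
}

module_init(my_entry);
module_exit(my_exit);
```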
18. Passing parameters to module Processes and threads are the same in Linux (in UNIX they are different). In any kernel code we can use struct task_struct *current to see the info of the process executing the current call, e.g. current->pid.
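For example, a module's init routine can log which process triggered the load (a sketch; current is only meaningful inside kernel code, and the pid reported is that of the insmod process that issued the call):

```c
#include <linux/module.h>
#include <linux/sched.h>   /* struct task_struct, current */

MODULE_LICENSE("GPL");

static int __init show_caller(void)
{
        /* current points at the task_struct of the process whose
         * system call brought us into the kernel (here, insmod). */
        printk(KERN_INFO "loaded by pid %d (%s)\n",
               current->pid, current->comm);
        return 0;
}

static void __exit show_caller_exit(void) { }

module_init(show_caller);
module_exit(show_caller_exit);
```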
19. Registration of modules In init_module(), use register_chrdev(major_number, "name", &fops). It returns 0 on success and a negative value on failure. Use register_blkdev for a block device. The declaration struct file_operations fops describes the driver: fops contains the mapping from the file functions (open, close, read, write) to the driver functions. The major numbers 60-63, 120-127 and 240-254 are reserved for dynamic/local modules. The register function can also be used for dynamic registration (set the first argument to zero), in which case the returned positive number is the assigned major number (negative on failure). The minor number is used to distinguish different devices handled by the same driver. A network device is treated as a socket instead. We can create a node for a virtual device using the mknod command: mknod LED c 254 0. In the dev_t encoding, 12 bits hold the major number and 20 bits the minor number.
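A registration sketch following the slide (major number 254 and the name "memory" are illustrative choices from the local/dynamic range; this uses the classic register_chrdev interface described above):

```c
#include <linux/fs.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

#define MEMORY_MAJOR 254          /* in the 240-254 local range */

static int memory_open(struct inode *inode, struct file *filp)
{
        return 0;                 /* nothing to set up in this sketch */
}

static int memory_release(struct inode *inode, struct file *filp)
{
        return 0;
}

/* Map the file functions onto the driver functions. */
static struct file_operations fops = {
        .owner   = THIS_MODULE,
        .open    = memory_open,
        .release = memory_release,
        /* .read and .write would complete the mapping */
};

static int __init memory_init(void)
{
        int result = register_chrdev(MEMORY_MAJOR, "memory", &fops);
        if (result < 0)
                return result;    /* negative means registration failed */
        return 0;
}

static void __exit memory_exit(void)
{
        unregister_chrdev(MEMORY_MAJOR, "memory");
}

module_init(memory_init);
module_exit(memory_exit);
```

After loading the module, the device node would be created with mknod /dev/memory c 254 0.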
20. File System Information in LINUX Every open file has an integer file descriptor value that the operating system uses as an index to a 1024-entry file descriptor table located in the u (user) area for the process. The per-process file descriptor table references a system file table, which is located in kernel space. In turn, the system file table maps to a System Inode table that contains a reference to a more complete internal description of the file.