
Report I

COS40003

Nadelin Nop 103487208

Week 1
Concurrent Programming
Parallel Programming
Concurrent vs Parallel Programming
Example of Concurrent and Parallel Programming
Distributed Computing
Cluster Computing
Grid Computing
Cloud Computing
Fog/edge computing
Quantum computing
Week 2
Process
Process APIs
Adverse Effects of Context Switching on Modern OS
Week 3
Week 4
Threads
Threads vs Processes
User-Level vs. Kernel-Level Threads
Advantages of Threads
Disadvantages of Threads
Threads in Java
Thread pools
Thread creation in current applications
Week 5
Locks implementation in Java
Lock Interface
Synchronized keywords
Adverse effects of placing loops inside locks
Week 6
Controlling interrupts
Loads/Store
Test and Set
Compare and Swap
Fetch and Add
Yield()
Queues
References

Week 1

Concurrent Programming
Concurrent programming is the approach in which processes are designed to run concurrently, meaning they are executed during overlapping time periods even though they may not be processed simultaneously. This can occur when the system has limited resources, such as a single CPU, which allows only one task to be in progress at any given time. The order in which these tasks are processed depends on the architecture of the system.

A key part of concurrent computing is time-sharing, which is a way to manage how tasks
share the computer's processor, and it helps improve the system response time. Another
important aspect is concurrency control, which manages how tasks access shared resources
to prevent problems like race conditions and deadlocks.

Parallel Programming
This is a programming approach that involves executing processes simultaneously, usually
by breaking down large tasks into smaller subtasks that can be processed independently on
multiple processors and then combined. In parallel programming, the main objective is to
maximise resource utilisation by executing multiple tasks simultaneously on different
processors or cores. Consequently, more CPUs or cores are required to keep up with the workload.

Concurrent vs Parallel Programming


Overall, the difference between parallel programming and concurrent programming lies in how they complete tasks. Concurrent programming juggles multiple tasks within a single processor, whereas parallel programming, as mentioned earlier, divides tasks into smaller ones and distributes them across multiple processors. As shown in the example below, this can be depicted with multiple tasks sharing one processor for concurrency, and two processors, each with its own tasks, running in parallel.

Figure 1 - Task distribution of CPU between concurrent and parallel programming (Kumar, 2019)

Example of Concurrent and Parallel Programming

An example of concurrent programming is a single chef in a kitchen multitasking by switching between dishes, working on one dish at a time but handling many overall. Parallel programming, by contrast, is like having 10 chefs each working on a different dish at the same time, so all the dishes are finished sooner.

Distributed Computing
It is a system where computers with their own local memory communicate with each other
through message passing. The computers work together, often geographically dispersed, to
perform tasks collectively.

Cluster Computing
It is a system where a number of low-cost computers are interconnected via a fast local area
network to provide high performance computing power.

Grid Computing
Grids are composed of loosely coupled networks of heterogeneous computers that work
together for resource-sharing and solving large-scale tasks. It facilitates the sharing and
utilization of computing resources, including processing power, storage, and network
bandwidth. They are often geographically dispersed.

Cloud Computing
Cloud computing is a model for delivering on-demand computing resources over the internet,
such as servers, storage, databases, software, and networking. Instead of owning and
maintaining physical infrastructure locally, organisations can access these resources from
cloud service providers on a pay-as-you-go basis.

It delivers services over the internet, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It aims to hide computation details from users by leveraging distributed computing techniques. It offers many benefits, such as:

● Scalability
● Cost effectiveness
● Reliability
● Global accessibility
● Security

Fog/edge computing

Fog and edge computing bring computing resources closer to where data is generated,
reducing the need for constant communication with centralized cloud servers. Fog
computing typically involves intermediate nodes between end devices and the cloud, while
edge computing focuses on processing data directly at the source. Both paradigms aim to
minimize latency, optimize bandwidth usage, and improve system efficiency, making them
essential for applications requiring real-time processing and low-latency communication,
such as IoT deployments and autonomous systems.

Quantum computing

Quantum computing uses principles of quantum physics to process information. Instead of using bits to encode information, quantum computers use qubits, which can exist in multiple states simultaneously due to a phenomenon known as superposition. This allows quantum computers to perform many calculations at once, drastically increasing their computational power. Furthermore, qubits can be entangled, meaning the state of one qubit can depend on the state of another, no matter the distance between them, a property that can lead to unprecedented processing speeds.

Week 2

Process
To understand what a process is, it is essential to understand the difference between a
process and a program first. A program itself is the binary machine code, whereas the
process is the code being executed. It involves the current program and its activity, as
opposed to the program itself, which is a set of static instructions.

Processes in programming are necessary for managing tasks and running multiple programs
concurrently. They allow for efficient use of the CPU by executing multiple tasks during
overlapping time periods. This is especially crucial in systems with limited resources.
Moreover, processes facilitate the execution of different parts of a program simultaneously
on different processors or cores, maximising resource utilisation. Furthermore,
process-related APIs enable the creation of new processes, waiting for child processes to exit, and running new programs within existing processes, contributing to the overall control
and efficiency of task execution.

A process includes the image of the program, memory, CPU state and the operating system
state. It also includes the process stack, which contains temporary data such as subroutine
parameters, return addresses, and local variables, and a data section, which contains global
variables.

Process address space is the memory range within which a process is defined and can
operate. This space is divided into several segments, such as the program code, stack space, and data space. A process can also have a heap, which is memory dynamically allocated
during process runtime.

A context switch is the process of storing and restoring the state of a process or thread so
that execution can be resumed from the same point at a later time. This allows multiple
processes to share a single CPU.

Figure 2.1: Phase 1 of context switching (Swinburne University of Technology 2, 2024, p21)

A process passes through four stages during its lifetime:

1. The 'New' state, where the process is being created.
2. The 'Ready' state, where the process is waiting to be assigned to a processor.
3. The 'Running' state, where instructions are being executed.
4. The 'Terminated' state, where the process has finished execution.

Process abstraction is the concept of designing a process without considering the low-level
details of the implementation. It allows us to focus on what a process does rather than how it
does it.

A process can be in one of the following states:

● 'Ready': The process is waiting to be assigned to a processor.
● 'Running': Instructions are being executed.
● 'Blocked': The process is waiting for an event (such as I/O completion) before it can run again.

Process state transitions occur when circumstances change, and they are usually triggered
by events or conditions such as switches from running to waiting, waiting to ready, or running
to ready. Consider a simple program that reads a file and processes its contents. If the file is
large, the process may enter a blocked state, waiting for the disk to provide the data (I/O
request). Once the data is available, it moves back to the 'running' state, processing the data
until the task is complete. After processing, the process is terminated.

Figure 2.2: State Transitioning (Swinburne University of Technology 2, 2024, p27)

Process APIs

● fork(): This API is used to create a new process.
● wait(): This API causes the parent process to pause until one of its child processes terminates.
● exec(): This API is used to execute a new program within a process. When a process calls exec(), the existing process image is replaced with a new process image, effectively changing the process's activity. This is different from fork(), which creates an entirely new process. exec() merely changes the current activity of an existing process.
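fork(), wait(), and exec() are POSIX system calls normally used from C, and Java does not expose them directly. As a rough, hedged analogue of the "create a child process, run a program, wait for it" pattern described above, Java's ProcessBuilder can be used; the command below ("sleep") is purely illustrative and assumes a Unix-like system:

```java
import java.io.IOException;

// Hedged sketch: Java has no fork()/exec(), but ProcessBuilder covers a similar
// "create a child process, run a program, wait for it" pattern.
public class ProcessApiSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Roughly analogous to fork() + exec(): start a new process running another program.
        ProcessBuilder builder = new ProcessBuilder("sleep", "1"); // command is illustrative only
        Process child = builder.start();

        // Roughly analogous to wait(): the parent pauses until the child terminates.
        int exitCode = child.waitFor();
        System.out.println("Child exited with code " + exitCode);
    }
}
```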

Adverse Effects of Context Switching on Modern OS


Context switching can adversely impact the performance of applications, particularly in
environments with high concurrency.

● The overhead associated with saving and loading process states during a switch
consumes valuable CPU cycles, reducing overall system throughput.
● Additionally, frequent context switches can mean more time is spent switching between processes than executing them, which degrades overall performance. This is especially problematic for real-time applications such as video streaming or online gaming, where even minimal delays can degrade the user experience.

To mitigate these issues, developers may use advanced scheduling algorithms such as priority-based scheduling or implement a form of process synchronisation.

Week 3

Scheduling methods are used to place processes onto the CPU based on priorities or execution time. The goal is to ensure efficient CPU allocation, which can be measured by metrics such as:
● CPU utilization - measures the workload of the CPU
● Turnaround Time - time from when a job arrives until it completes
● Response Time - time from when a job arrives until it first runs on the CPU
● Missed deadlines - tasks that are not completed by their scheduled deadlines

There are 6 main scheduling methods:

First Come, First Serve (FIFO): This is the simplest scheduling algorithm. The process that
requests the CPU first is the one that gets the CPU. The main disadvantage of this approach
is that it can lead to poor performance if a long process arrives ahead of short ones.

Advantage

● simple to implement

Drawback

● A long process can block the queue, especially if shorter tasks are lined up behind it.
This can lead to inefficient utilization of CPU resources.
● It needs to know the arrival time for implementation.
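To make the FIFO drawback above concrete, here is a minimal sketch (burst times are purely illustrative, and all jobs are assumed to arrive at time 0) showing how a long job arriving first inflates the average turnaround time:

```java
// Minimal sketch of the FCFS "convoy effect": a long job arriving first
// inflates the average turnaround time of the short jobs behind it.
public class FcfsTurnaround {
    public static void main(String[] args) {
        int[] burstTimes = {100, 10, 10}; // long job first, all arriving at time 0
        int currentTime = 0;
        int totalTurnaround = 0;
        for (int burst : burstTimes) {
            currentTime += burst;           // each job runs to completion in arrival order
            totalTurnaround += currentTime; // turnaround = completion - arrival (arrival = 0)
        }
        // (100 + 110 + 120) / 3 = 110, even though two jobs only needed 10 units each
        System.out.println("Average turnaround: " + (double) totalTurnaround / burstTimes.length);
    }
}
```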

Shortest Job First (SJF): This scheduling algorithm assigns the CPU to the process with the smallest total execution time. While it minimizes average waiting time, it requires knowing the execution time of each job in advance, which is rarely possible in practice.

Advantage

● Shorter jobs are completed faster, which can significantly reduce the overall
turnaround time for all processes.

Drawback

● Longer processes may suffer starvation because shorter processes are always given
preference. This can lead to indefinite blocking or very long wait times for those
longer processes.
● It also needs to know the duration and arrival time of all tasks.

Preemptive Shortest Job First/Shortest Time-To-Completion First (PSJF/STCF): This is a variant of SJF, where the running process is interrupted if a new process with a shorter time to completion arrives. This approach minimizes waiting time for all processes.

Round Robin (RR): This scheduling algorithm allocates a fixed time slot or quantum for
each process in the system. The CPU is assigned to each process for a time quantum or
until it finishes, whichever comes first. This ensures all processes get a fair share of the
CPU.

Figure 3: RR assigned processes (Swinburne University of Technology 3, 2024, p87)

Lottery Scheduling: This is a probabilistic scheduling algorithm where the scheduler assigns 'lottery tickets' to each process and draws a random ticket to select the next process. The more tickets a process has, the more likely it is to get CPU time. This reduces bias in the system.

Multi-Level Feedback Queue (MLFQ): This scheduling algorithm separates processes according to their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue. This method prevents short processes from waiting for long ones to finish. The five rules of Multi-Level Feedback Queue (MLFQ) scheduling are:

1. If Priority(A) > Priority(B), A runs (B doesn't).
2. If Priority(A) = Priority(B), A & B run in round-robin.
3. When a job enters the system, it is placed at the highest priority.
4. If a job uses its entire time slice at the given level, its priority level is reduced.
5. If a job yields the CPU before its time slice is up, it stays at the same priority level.

Other scheduling algorithms include:

● Rate Monotonic Scheduling (RMS): This is used for real-time systems where tasks
have a certain periodicity. The task with the shortest period gets the highest priority.
● Deadline Scheduling: This is another real-time scheduling algorithm where the
tasks are scheduled based on their deadlines. The task with the earliest deadline
gets the highest priority.

Week 4

Threads
A thread is essentially a lightweight process that can run concurrently with other threads within the same process. Threads exist because, before they were introduced, a single process would execute its tasks strictly sequentially, causing delays. Additionally, separate processes have separate address spaces, which leads to higher data-sharing and communication costs. To mitigate these problems, threads were introduced as sub-tasks of a process that share the same memory space and resources as their parent process.

Threads vs Processes
● A process is a running program, whereas a thread is a sub-task of a program. Therefore processes allow the OS to multitask, whereas threads allow a process to multitask.
● Processes have their own resources allocated by the operating system. Threads of a process share resources and memory addresses.
● A process takes more time for context switching, whereas a thread takes less time.

User-Level vs. Kernel-Level Threads:


After introducing the concept of threads, it is important to distinguish between the two
primary types of threads used in modern operating systems:
● User-Level Threads: Managed entirely in user space by the application, without OS intervention.
● Kernel-Level Threads: Managed directly by the operating system; visible to and scheduled by the OS.

Advantages of Threads
● Overlapping I/O with computation, allowing better use of CPU time
● Less context-switching overhead, since threads share the same memory space, making them more cost-effective
● Better mapping to multiprocessors, leading to better performance

Disadvantages of Threads
● Unwanted thread interactions due to the shared memory space
● Complexity of debugging
● Complexity of multi-threaded programming
● Incompatibility with existing code, since older code isn't built for concurrent execution

Threads in Java
In Java, threads can be created either by extending the `Thread` class or implementing the
`Runnable` interface, with the latter approach providing greater flexibility. Once instantiated,
threads are started using the `start()` method, which in turn invokes their `run()` method to
execute the designated task. Java offers a suite of thread management functions, such as
`wait()`, `notify()`, and `join()`, to handle synchronization and execution order, while
deprecated methods like `stop()` are avoided for safe operation.
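As a minimal sketch of the two creation styles just described (class and variable names are illustrative):

```java
// Minimal sketch of the two thread-creation styles; names are illustrative.
public class ThreadCreationSketch {
    public static void main(String[] args) throws InterruptedException {
        // Style 1: pass a Runnable (generally preferred; leaves the class free to extend others)
        Thread viaRunnable = new Thread(() -> System.out.println("Running via Runnable"));

        // Style 2: extend Thread and override run()
        Thread viaSubclass = new Thread() {
            @Override
            public void run() {
                System.out.println("Running via Thread subclass");
            }
        };

        viaRunnable.start();   // start() schedules the thread, which then calls its run() method
        viaSubclass.start();
        viaRunnable.join();    // join() waits for the thread to finish before main continues
        viaSubclass.join();
    }
}
```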

Concurrency is further harnessed in multi-core systems, where threads can run in parallel,
significantly boosting performance. However, this causes an issue with thread safety,
particularly when dealing with non-thread-safe Java classes, such as those in the `java.util`
package. Since operations like incrementing an integer aren't atomic, synchronization is
needed to prevent inconsistencies when threads share variables.

Thread implementation in Java is illustrated by the code snippet below (Figure 4), which highlights how threading allows multitasking, the sharing of resources, and the need for thread safety.

Figure 4: Java threading and shared data example (Swinburne University of Technology Lecture Code Example 3, 2024)
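The figure itself is not reproduced in this report, so the following is a hedged reconstruction based on the description that follows: the field names (classData, instanceData, localData) and the 10,000,000 loop bound come from that description, while everything else is assumed and may differ from the original snippet.

```java
// Hedged reconstruction of the lecture example, based on the description below;
// details of the original snippet may differ.
public class IncrementTest implements Runnable {
    static int classData = 0;   // shared by all instances (and therefore by all threads)
    int instanceData = 0;       // shared here only because both threads receive the same instance

    @Override
    public void run() {
        int localData = 0;      // local to each thread's stack, never shared
        for (int i = 0; i < 10_000_000; i++) {
            localData++;
            instanceData++;     // not atomic: the read-modify-write can be interleaved
            classData++;        // not atomic: increments can be lost between threads
        }
        System.out.println("local=" + localData
                + " instance=" + instanceData + " class=" + classData);
    }

    public static void main(String[] args) throws InterruptedException {
        IncrementTest shared = new IncrementTest();
        Thread t1 = new Thread(shared);
        Thread t2 = new Thread(shared); // the same instance is passed to both threads
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```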

The code defines a class `IncrementTest` that can be run by threads. It demonstrates basic
threading and shared resource access in Java:
● classData: a variable shared by all instances of the class and thus shared by all
threads created from these instances.
● instanceData: a variable, which would normally be unique to each instance of the
class. However, in this code, because both threads are given the same instance, they
both access and modify this same variable.
● localData: A local variable that is not shared.
When the `run()` method is called, each thread will:
1. Increment localData from 0 to 10,000,000.
2. At the same time, increment instanceData and classData
However, since both instanceData and classData are shared between the threads, increments to these variables are not thread-safe. This means that without proper synchronization, the threads may interfere with each other's increments, leading to an incorrect total value when they print out their results. This shows how thread safety can be compromised without proper synchronisation.

The main method creates two Thread objects and passes the same instance of the IncrementTest class to both, meaning that both threads operate on the same instance and share the same instanceData and classData variables. When the program runs, both threads execute the run() method of IncrementTest and increment the variables 10,000,000 times each in a loop, which should result in 20,000,000 increments in total.

Thread pools
To manage the overhead associated with numerous short-lived threads, Java employs
thread pools. These pools maintain a set of threads ready to execute tasks, reducing the
time and memory cost of thread creation. Thread pools work with a task queue system,
where worker threads pick up tasks as they become available, with synchronization
primitives like `notify()` used to alert threads of new tasks in an empty queue.

The size of thread pools and task queues is critical to balance; too many threads can waste
resources, while too few can lead to processing delays. Thus, thread pools must be carefully
sized according to the workload and expected task characteristics to ensure efficient
operation and avoid potential bottlenecks or resource starvation in high-demand scenarios.
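As a minimal sketch of this idea using java.util.concurrent (the pool size and task count below are illustrative only):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal thread-pool sketch; pool size and task count are illustrative.
public class ThreadPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reusable worker threads

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            // Tasks are placed on the pool's queue; idle workers pick them up.
            pool.submit(() -> System.out.println(
                    "Task " + taskId + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued tasks to finish
    }
}
```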

Thread creation in current applications


● C# - threads can be created and managed using the System.Threading.Thread class.
● Python-provides the threading module, which can be used to create threads. Due to
the Global Interpreter Lock (GIL), threading in Python is typically used for I/O-bound
tasks rather than CPU-bound tasks.

Week 5

Based on the earlier example regarding threads, it has already been mentioned that there is a conflict between the two threads, since they share resources - specifically the classData and instanceData variables. To understand this better, it helps to define a few key terms:
● Critical section - a snippet of code that accesses shared resources and shouldn't be executed by more than one thread at a time.
● Race condition - occurs when multiple threads enter the critical section at the same time, leading to unpredictable outcomes.
● Indeterminate - a program that has one or more race conditions, leading to different program output on each run.
● Mutual Exclusion - the principle that ensures a race condition doesn't occur, meaning only one thread enters the critical section at a time.
● Atomicity - ensures operations inside critical sections run to completion without interruption.
With these concepts in mind, the conflict in our example arises from both threads attempting
to increment classData and instanceData without mutual exclusion. In essence, if two
threads read the same value simultaneously, increment it, and then write it back, they end up
overwriting each other's work, leading to an incorrect total increment. This manifests as
non-deterministic and often incorrect results upon program execution, where the final values
of classData and instanceData do not match the expected outcomes.

In order to mitigate this issue, a lock is required. A lock serves as a mechanism to ensure
mutual exclusion by only allowing one thread at a time to execute a critical section of the
code. By locking classData and instanceData before attempting to increment them, a thread
can safely complete its operation without interference from other threads. After the operation, the thread releases the lock, allowing the next thread to proceed. This
sequentializes access to the shared variables, effectively resolving the race condition and
ensuring that operations on shared resources are atomic, thereby maintaining data integrity
and consistency across threads.

Locks implementation in Java

There are two ways this can be done: the Lock interface and the synchronized keyword.

Lock Interface
A class that is widely used for this purpose is ReentrantLock. We can instantiate a ReentrantLock and call its lock() and unlock() methods within a function. The critical section of the code is enclosed in a try block with unlock() in the corresponding finally block, which is important because it guarantees the lock is released as soon as the thread finishes the critical section, even if an exception occurs (a sketch of this pattern follows the list of key methods below).

Figure 5.1: Lock Interface implementation (Swinburne University of Technology 5, 2024, p48)

Key methods of the Lock interface:

● lock(): Acquires the lock.
● unlock(): Releases the lock, typically placed in a finally block to ensure the lock is released even if an exception occurs.
● tryLock(): Attempts to acquire the lock without blocking.
● lockInterruptibly(): Acquires the lock unless the thread is interrupted.
● tryLock(long time, TimeUnit unit): Attempts to acquire the lock within a given waiting time.
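A minimal sketch of the try/finally pattern described above (the counter field is illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the try/finally locking pattern; the counter is illustrative.
public class ReentrantLockSketch {
    private final Lock lock = new ReentrantLock();
    private int counter = 0;

    public void increment() {
        lock.lock();            // acquire the lock before entering the critical section
        try {
            counter++;          // critical section: only one thread at a time
        } finally {
            lock.unlock();      // always released, even if an exception is thrown
        }
    }
}
```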

Synchronized keywords
For simpler use cases, Java also allows the use of the synchronized keyword to automate
lock acquisition and release for code blocks or methods, ensuring mutual exclusion with less
control compared to explicit locks. Each object actually has an associated intrinsic lock.
When a synchronized method is called on an object, the method must acquire that object's intrinsic lock, which is initially unowned. If no other thread currently holds the intrinsic lock on
the object, the thread calling the synchronized method acquires the lock and proceeds to
execute the method. This ensures that while the method is executing, no other thread can
execute any synchronized method on the same object. Essentially, the lock acts as a
gatekeeper, allowing only one thread at a time to execute synchronized methods of the given
object. There are two ways this can be implemented:

Synchronized Statement

Figure 5.2: Synchronized Statements implementation (Swinburne University of Technology 5, 2024, p55)
A synchronized statement, also known as a synchronized block, is a block of code that is
synchronized on a particular object. Unlike synchronized methods, which lock on the entire
method, synchronized blocks can lock on any object, providing a more granular level of
control over the blocks of code that need synchronization.
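A minimal sketch of a synchronized block locking on a dedicated lock object (names are illustrative):

```java
// Minimal sketch of a synchronized statement; the lock object and counter are illustrative.
public class SynchronizedBlockSketch {
    private final Object counterLock = new Object(); // dedicated lock object
    private int counter = 0;

    public void increment() {
        // Only the increment is synchronized, not the whole method,
        // giving finer-grained control over the locked region.
        synchronized (counterLock) {
            counter++;
        }
    }
}
```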

Synchronized Methods

Figure 5.3: Synchronized Methods implementation (Swinburne University of Technology 5, 2024, p52)

A synchronized method is a method that is defined with the synchronized keyword. This
ensures that only one thread can execute the method at a time per object instance. When a
thread invokes a synchronized method, it automatically acquires a lock on the object that
contains the method (if the method is an instance method) or on the class (if the method is a
static method).
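A minimal sketch of synchronized methods (the counter is illustrative):

```java
// Minimal sketch of synchronized methods; the counter is illustrative.
public class SynchronizedMethodSketch {
    private int counter = 0;

    // Locks on "this": only one thread at a time per instance of this class.
    public synchronized void increment() {
        counter++;
    }

    // A static synchronized method would instead lock on SynchronizedMethodSketch.class.
    public synchronized int get() {
        return counter;
    }
}
```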

Ultimately, when deciding between the Lock interface and the synchronized keyword, the Lock interface is generally the more capable option, as it provides additional functionality such as:
● A more flexible structure: a thread can attempt to acquire a lock without waiting indefinitely using methods like tryLock(), and do something else if the lock could not be acquired. This leads to better performance for applications with high concurrency demands.
● With methods like lockInterruptibly(), the Lock interface allows a thread waiting for a lock to be interrupted and proceed with alternative actions if necessary. This is particularly useful in applications where timely response to user actions or events is crucial, as it prevents a thread from being stuck indefinitely while trying to acquire a lock.

Adverse effects of placing loops inside locks
Placing loops inside locks can significantly impact performance and scalability in Java
applications. This practice can lead to:

● Holding a lock for a long time, which increases the time other threads must wait to acquire the lock, leading to potential bottlenecks.
● Holding the lock for a long period can also serialize access to resources that might otherwise be accessed concurrently, reducing the application's overall parallelism and efficiency.
● Incorrectly managing locks within loops can increase the risk of deadlocks, especially if multiple locks are acquired in a nested manner without careful ordering.

Week 6
Building a lock requires both hardware support and operating system support. However, to build a good lock, certain criteria have to be met:
● Mutual exclusion - preventing multiple threads from entering critical sections of code. Examples include ReentrantLock.
● Fairness - each thread waiting for the lock acquires it in a fair and predictable manner.
● Performance - the lock adds little time overhead. Examples include the intrinsic lock.
Some examples of lock implementations include:

Controlling interrupts
This involves disabling interrupts to prevent the CPU from switching to another thread while
the current thread is in a critical section. It's a direct way to achieve mutual exclusion by
making sure the current execution cannot be preemptively interrupted. However, this approach requires too much trust in programs, since a greedy program can call lock() and monopolise the CPU, which violates the fairness criterion. It also doesn't work on multiple processors. Turning off interrupts for a period can also lead to system problems, such as the system being unable to respond to other hardware interrupts, delaying their processing.

Loads/Store
It uses a basic boolean flag to indicate whether the lock has been acquired.

Figure 6: No Mutual Exclusion (Swinburne University of Technology 6, 2024, p29)


The main issue with this code is that there is no guarantee of atomicity between the test and the set operations. This means that even though a thread might find the lock (the flag) to be available (i.e., set to 0), by the time it sets the flag to 1, another thread could have already changed the flag's value. This breaks the mutual exclusion principle and makes this implementation a failed one. Another issue is performance: a waiting thread endlessly checks the value of the flag (spin-waiting), wasting CPU time while it waits for the lock.
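Since Figure 6 is not reproduced here, the following is a hedged sketch of the same broken flag-based idea written in Java form; it is deliberately not thread-safe and only illustrates the gap between the test and the set:

```java
// Hedged sketch of the broken flag-based lock; deliberately NOT thread-safe.
public class BrokenFlagLock {
    private int flag = 0; // 0 = free, 1 = held (not even volatile, so visibility is also not guaranteed)

    public void lock() {
        while (flag == 1) {
            // spin-wait: burn CPU until the flag appears free
        }
        // Danger: another thread can also observe flag == 0 before this line runs,
        // so two threads may both "acquire" the lock.
        flag = 1;
    }

    public void unlock() {
        flag = 0;
    }
}
```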

Test and Set


This is an extension of the previous implementation: an atomic operation that tests a value and sets it to a new value in a single step. To prevent the mutual exclusion issue described earlier, the test and the set are combined into a single atomic operation, ensuring that only one thread acquires the lock.
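Java does not expose the raw test-and-set instruction, but AtomicBoolean.getAndSet() provides the same atomic behaviour; a minimal spin-lock sketch under that assumption:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Test-and-set spin lock sketch; getAndSet plays the role of the atomic instruction.
public class TestAndSetLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Atomically set to true and return the old value;
        // only the thread that observed "false" enters the critical section.
        while (held.getAndSet(true)) {
            // spin until the lock becomes free
        }
    }

    public void unlock() {
        held.set(false);
    }
}
```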

Compare and Swap


Similar to test-and-set, compare-and-swap is an atomic operation but with an added
condition. It only sets a new value if the current value matches an expected value.
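In Java, compareAndSet(expected, new) on the atomic classes plays this role; a minimal lock sketch built on it (field names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Compare-and-swap lock sketch; compareAndSet only succeeds when the current
// value matches the expected value.
public class CompareAndSwapLock {
    private final AtomicInteger flag = new AtomicInteger(0); // 0 = free, 1 = held

    public void lock() {
        // Succeeds only if flag is currently 0; otherwise keep spinning.
        while (!flag.compareAndSet(0, 1)) {
            // spin
        }
    }

    public void unlock() {
        flag.set(0);
    }
}
```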

Fetch and Add


An atomic operation that increments a variable and returns its previous value. This can be used for constructing ticket locks, where each thread obtains a unique ticket number when it requests the lock and must wait until its turn comes up before entering the critical section.
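A minimal ticket-lock sketch, using AtomicInteger.getAndIncrement() as the fetch-and-add primitive (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Ticket lock sketch built on fetch-and-add (getAndIncrement returns the old value).
public class TicketLock {
    private final AtomicInteger nextTicket = new AtomicInteger(0);
    private final AtomicInteger nowServing = new AtomicInteger(0);

    public void lock() {
        int myTicket = nextTicket.getAndIncrement(); // atomically take a ticket
        while (nowServing.get() != myTicket) {
            // wait until it is this thread's turn
        }
    }

    public void unlock() {
        nowServing.incrementAndGet(); // hand the lock to the next ticket holder
    }
}
```

Because tickets are handed out in order, threads acquire this lock in the order they requested it, which is why fetch-and-add locks are considered fair.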

Yield()
It is assumed that the operating system has a primitive that allows threads to yield control of
the CPU. The yield function is a concurrency mechanism that a thread can call when it wants
to give up the CPU voluntarily. This is typically used in a situation where the thread enters a
busy-wait state while trying to acquire a lock (also known as spinning). By yielding, the
thread signals to the scheduler that it can switch to running another thread, potentially
reducing wasted CPU cycles that would occur if it just spun waiting for the lock to become
available.
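A minimal sketch of the same spin lock as above, but yielding while it waits (a simplification, not a complete solution):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Spin lock variant that yields instead of burning its whole time slice while waiting.
public class YieldingLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        while (held.getAndSet(true)) {
            Thread.yield(); // hint to the scheduler: run another thread for now
        }
    }

    public void unlock() {
        held.set(false);
    }
}
```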

Queues
Yielding was an improvement over busy-waiting because it allowed other threads to run
instead of letting one thread use up the entire CPU slice without doing any meaningful work.
However, this approach isn't completely efficient. Consider a situation with many threads,
like a hundred, all competing for the same lock. Once a thread acquires the lock, the others,
which will inevitably fail to obtain the lock, end up using their allocated CPU time just to yield.
This leads to a lot of unnecessary context-switching, which consumes valuable CPU cycles
that could be used for threads that are ready to perform actual work.

To improve upon this, a more sophisticated mechanism provided by the operating system is
necessary. This mechanism involves two key functions:
● park() - allows a thread that cannot acquire a lock to go into a sleeping state, not just yielding control but actually stopping execution and not using any CPU time. The thread remains asleep until it is specifically woken up, which avoids the inefficiency of repeatedly checking for lock availability.
● unpark(threadID) - used to wake up a sleeping thread. When the lock becomes available, instead of all waiting threads rushing to acquire the lock and then yielding when they fail, the thread that is next in line is specifically woken up using unpark(). This ensures that only one thread attempts to acquire the lock at any given time, reducing context-switching overhead and making better use of CPU resources.
This queuing mechanism is far more efficient than either busy-waiting or yielding because it
reduces unnecessary CPU consumption and context switching, allowing the CPU to focus
on threads that can actually make progress.
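In Java, this corresponds to LockSupport.park() and LockSupport.unpark(). The sketch below follows the FIFO-mutex pattern shown in the JDK's LockSupport documentation, simplified by omitting interrupt handling:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Simplified queue-based lock in the spirit of the JDK LockSupport FIFO-mutex example;
// interrupt handling is omitted for brevity.
public class QueueLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        Thread current = Thread.currentThread();
        waiters.add(current);
        // Only the thread at the head of the queue may try to take the lock.
        while (waiters.peek() != current || !held.compareAndSet(false, true)) {
            LockSupport.park(this); // sleep; no CPU is used while parked
            // spurious wakeups are possible, so the condition is rechecked in the loop
        }
        waiters.remove();           // lock acquired; leave the wait queue
    }

    public void unlock() {
        held.set(false);
        LockSupport.unpark(waiters.peek()); // wake only the next waiter, if any
    }
}
```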
Overview of different lock implementations

| Implementation         | Correctness | Fairness | Performance                                                                       |
|------------------------|-------------|----------|-----------------------------------------------------------------------------------|
| Controlling Interrupts | No          | No       | No                                                                                |
| Load/Store             | No          | No       | No                                                                                |
| Test and Set           | Yes         | No       | Bad (single processor); all right (multiprocessor)                                |
| Compare and Swap       | Yes         | No       | Better than Test and Set; bad (single processor); all right (multiprocessor)      |
| Fetch and Add          | Yes         | Yes      | Better than Compare and Swap; bad (single processor); all right (multiprocessor)  |
| Yield()                | Yes         | Yes      | Yes                                                                               |
| Queues                 | Yes         | Yes      | Yes (park() and unpark() usage)                                                   |
| Semaphore              | Yes         | Yes      | Varies depending on contention                                                    |
| Stamped lock           | Yes         | No       | Good for scenarios with more reads than writes                                    |

References
● 'Context Switching in OS, Common Issues & Challenges' 2023, DataTrained, viewed <https://datatrained.com/post/context-switching-in-os/#:~:text=Overhead%3A%20Context%20Switching%20in%20OS>.
● Guo, X, Liu, S & Wang, X 2017, 'Impact of phase-locked loop on stability of active damped LCL-filter-based grid-connected inverters with capacitor voltage feedback', Journal of Modern Power Systems and Clean Energy, vol. 5, no. 4, pp. 574–583.
● 'Deadline Scheduling' n.d., www.ibm.com, viewed 14 April 2024, <https://www.ibm.com/docs/tr/zos/2.4.0?topic=scheduling-deadline>.
● Kumar, S 2019, 'Parallel Programming vs. Concurrent Programming', Medium, viewed <https://medium.com/@sanju.skm/parallel-programming-vs-concurrent-programming-f993d3f9ceea>.
● Lu, D 2019, 'What is a quantum computer?', New Scientist, viewed <https://www.newscientist.com/question/what-is-a-quantum-computer/>.
● 'Rate-monotonic scheduling' 2020, GeeksforGeeks, viewed <https://www.geeksforgeeks.org/rate-monotonic-scheduling/>.
