AI (My Variant)
Symbolic AI
1. Turing machine and the Turing test
A Turing machine is a theoretical computing device that reads and writes symbols on a tape and moves left or right along it according to a set of rules. It is a simple model of a computer used to understand the fundamental principles of computation and to study the limits of what can be computed.
The Turing test is a method for evaluating the intelligence of a machine. It is based on
the idea that if a machine can engage in a conversation with a human in such a way that
the human cannot tell that they are talking to a machine, then the machine can be
considered intelligent.
1. Simple reflex agents: These agents are based on simple reflexes and do not have
any internal state. They act based on the current state of the environment and do
not consider past experiences.
2. Model-based agents: These agents maintain a model of the environment and use
this model to make predictions and plan their actions.
3. Goal-based agents: These agents have a specific goal in mind and take actions
to try to achieve this goal.
4. Utility-based agents: These agents assign a value or utility to each possible
action and choose the action with the highest expected utility.
5. Learning agents: These agents are able to learn from their experiences and
improve their performance over time.
6. Autonomous agents: These agents are able to operate independently and make
decisions on their own, without direct human intervention.
7. Multi-agent systems: These systems consist of multiple agents that interact with
each other and the environment to achieve a common goal.
8. Bounded rational agents: These agents make decisions based on limited
information and computational resources, rather than considering all possible
options.
9. Rational agents: These agents make decisions based on a systematic and logical
process, considering all available information and options.
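The first agent type above can be made concrete with a small sketch. This is a minimal, hypothetical vacuum-world example (the percept format and action names are invented for illustration): a simple reflex agent maps the current percept directly to an action, with no internal state or memory of past percepts.

```python
# Simple reflex agent sketch: the action depends only on the current percept.
# The (location, status) percept format and action names are illustrative.

def simple_reflex_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "dirty")."""
    location, status = percept
    if status == "dirty":
        return "suck"                 # react to dirt immediately
    return "move_right" if location == "A" else "move_left"

print(simple_reflex_agent(("A", "dirty")))   # suck
print(simple_reflex_agent(("B", "clean")))   # move_left
```

A model-based agent would extend this by keeping an internal state (e.g. which squares it has already cleaned) and consulting it before acting.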
3. When, by whom, and on what occasion the term artificial intelligence was introduced
The term "artificial intelligence" was coined in 1956 by John McCarthy, a computer
scientist at Dartmouth College, during a conference on the topic. McCarthy defined
artificial intelligence as "the science and engineering of making intelligent machines."
In a state space, the states of the system are represented by points, and the transitions
between states are represented by edges. The edges can be labeled with the conditions
or actions that cause the transition to occur.
For example, consider a simple system that has two states: "on" and "off". The state
space for this system would consist of two points, representing the two states, and one
edge connecting the two points. The edge could be labeled with the condition "flip
switch" to indicate that flipping the switch causes the system to transition from one
state to the other.
A state space can be used to represent a wide range of systems, including physical
systems, software systems, and biological systems. It is a useful tool for understanding
and predicting the behavior of complex systems, and it can be used to design control
systems that can regulate and optimize the performance of those systems.
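The on/off example above can be written down directly as a state space in code: states are the points, and each labeled edge becomes an entry in a transition table. A minimal sketch:

```python
# The two-state "on"/"off" system from the text as an explicit state space:
# states are points, labeled edges are transitions.

transitions = {
    ("off", "flip switch"): "on",
    ("on", "flip switch"): "off",
}

def step(state, action):
    """Follow the edge labeled `action` out of `state`."""
    return transitions[(state, action)]

state = "off"
state = step(state, "flip switch")  # -> "on"
state = step(state, "flip switch")  # -> "off"
```

The same table-of-transitions idea scales to any finite state space; search algorithms then explore this table to find paths between states.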
5. How the problem we are solving can be defined
A problem that we are trying to solve can be defined as a gap between the current state
and a desired state, and the goal of problem-solving is to find a solution that closes this
gap.
There are several key elements to consider when defining a problem that we are trying
to solve:
1. The current state: This is the starting point, or the situation we are in right now. It
is important to clearly define and understand the current state in order to identify
the problem and the gap between the current and desired states.
2. The desired state: This is the state that we want to reach or the goal we are trying
to achieve. It is important to define the desired state clearly and specifically in
order to have a clear target to work towards.
3. The gap: The gap is the difference between the current and desired states. It is
important to identify the gap in order to understand what needs to be done to
solve the problem.
4. The constraints: Constraints are any factors that limit our ability to solve the
problem or that must be taken into account when finding a solution. It is
important to identify and understand the constraints in order to find a feasible
solution.
1. Depth-first search: This is the most basic form of DFS, in which the algorithm
explores the nodes of the state space in a depth-first manner, following a path to
its maximum depth before backtracking and exploring other paths.
2. Depth-first search with memoization: This type of DFS uses a technique called
memoization to store the results of subproblems in order to avoid re-computing
them. It can be used to improve the efficiency of DFS in problems with
overlapping subproblems.
3. Iterative deepening search: This type of DFS uses a depth-first search approach,
but it repeats the search at increasing depth levels until the goal is found. It is
useful for finding solutions in large state spaces, as it avoids the cost of
expanding all the nodes at once.
4. Depth-limited search: This type of DFS is similar to iterative deepening search,
but it only searches to a fixed depth level and does not continue searching at
deeper levels if the goal is not found.
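Variants 3 and 4 above can be sketched together, since iterative deepening simply repeats a depth-limited search with growing limits. The graph below is an invented example; the function names are illustrative, not a standard API.

```python
# Depth-limited search and iterative deepening on a graph given as an
# adjacency dict. The example graph is illustrative.

def depth_limited(graph, node, goal, limit, path=None):
    """Depth-first search that gives up below the depth limit."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                       # cut off: do not expand deeper
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1, path)
        if result:
            return result
    return None

def iterative_deepening(graph, start, goal, max_depth=10):
    """Repeat depth-limited search with limits 0, 1, 2, ... until found."""
    for depth in range(max_depth + 1):
        result = depth_limited(graph, start, goal, depth)
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(iterative_deepening(graph, "A", "E"))  # ['A', 'C', 'E']
```

Because it restarts from depth 0, iterative deepening finds the shallowest solution, like breadth-first search, while using only depth-first memory.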
Two common heuristic search algorithms are:
1. Best-first search: Best-first search is a heuristic search algorithm that expands the
nodes in the state space in order of their estimated cost to the goal. It always
chooses the node that is estimated to be the closest to the goal, based on a
heuristic function that estimates the cost of reaching the goal from each node.
2. Greedy search: Greedy search is a heuristic search algorithm that always chooses
the node that appears to be the best option at the current moment, without
considering the long-term consequences of that choice. It may not always find the
optimal solution, but it is often faster than other heuristic search algorithms.
Heuristic search algorithms can be an effective tool for solving problems in artificial
intelligence and computer science, as they can find good solutions quickly in situations
where an optimal solution is not necessary or is not possible to compute.
The A* search algorithm uses a heuristic function, also known as a "heuristic estimate," to estimate the cost of reaching the goal from each node. The heuristic function should never overestimate the actual cost of reaching the goal (it should be admissible), in order to ensure that A* search finds the optimal solution if one exists.
11. Heuristic function (the true heuristic function, the heuristic estimate and its components)
A heuristic function, also known as a "heuristic estimate," is a function that is used to
estimate the cost of reaching the goal from a given node in a state space. It is used by
heuristic search algorithms, such as A* search, to guide the search for a solution to a
problem.
1. The cost of reaching the node: This is the actual cost of reaching the node from
the start state, and it is typically calculated using a cost function that assigns a
cost to each action.
2. The heuristic estimate of the cost to the goal: This is the estimated cost of
reaching the goal from the node, based on a heuristic function that estimates the
cost of reaching the goal from each node.
3. The total estimated cost: The total estimated cost is the sum of the cost of
reaching the node and the heuristic estimate of the cost to the goal. It is used to
guide the search for a solution and to determine the order in which the nodes are
expanded.
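The three components above are usually written f(n) = g(n) + h(n), and A* expands nodes in order of f. A minimal sketch over an invented weighted graph and an invented (admissible) heuristic table:

```python
import heapq

# A* sketch: f(n) = g(n) + h(n), where g is the actual cost from the start
# and h is an admissible heuristic estimate of the cost to the goal.
# The graph, edge costs, and heuristic values are illustrative.

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # lowest f first
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 4, "B": 1, "G": 0}               # never overestimates
print(a_star(graph, h, "S", "G"))                  # (['S', 'B', 'G'], 5)
```

Note that the cheaper-looking first step through A (g = 1) is eventually beaten by the path through B, because A* keeps comparing total estimated costs f rather than committing greedily.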
1. Knowledge base: It contains all the information and knowledge that the system uses to perform its tasks.
2. Inference engine: It applies a set of rules or algorithms to the knowledge base to
infer new conclusions or make decisions based on the information that is
available.
3. User interface: The user interface is the part of the expert system that allows users to interact with the system and input data or ask questions.
There are several different types of knowledge that are used in knowledge-based
systems, including:
A production rule system, also known as a rule-based system, is a system that uses a set of rules or "productions" to make decisions or draw conclusions based on the information that is available.
4. Forward and backward chaining of rules
Forward chaining is an approach to reasoning that starts with the available data and uses
a set of rules to draw conclusions or make decisions based on that data.
Backward chaining is an approach to reasoning that starts with the goal and uses a set of
rules to work backwards to the data, gathering the necessary information along the way.
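Forward chaining can be sketched in a few lines: rules of the form (premises, conclusion) fire whenever all their premises are known facts, until no new fact can be derived. The rules and facts below are an invented toy example.

```python
# Minimal forward-chaining sketch: fire rules until the fact base stops
# growing. Rules and facts are illustrative.

rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]
facts = {"has_fever", "has_cough"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # fire the rule if all premises hold and the conclusion is new
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # the derived facts now include 'has_flu' and 'needs_rest'
```

Backward chaining would run in the opposite direction: start from the goal "needs_rest", find a rule concluding it, and recursively try to establish that rule's premises.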
There are several different ways to traverse an inference network, including depth-first
search, breadth-first search, and best-first search.
There are several different ways to handle uncertain knowledge in AI systems, including:
8. Explanation mechanism
Explanatory mechanisms are important in AI because they allow users to understand
how the system arrived at a particular conclusion or decision.
Evaluating a KBS involves, among other things, protecting the system and its data from unauthorized access or modification.
10. Comparison of a KBS with a human problem-solver and application possibilities of KBS
1. One major difference between a KBS and a human problem-solver is that a KBS relies on a predetermined set of rules or knowledge, whereas a human problem-solver can use a variety of approaches.
2. A KBS can process and analyze large amounts of data much faster than a human. However, a KBS may lack the ability to make intuitive or creative leaps.
Some potential applications for KBS include: medicine, engineering, computer systems, spam filtering, image recognition, financial planning, weather forecasting, customer behavior prediction, etc.
Machine learning
1. Definition of machine learning (ML)
Machine learning (ML) is a type of artificial intelligence (AI) that allows systems to learn
and improve their performance without being explicitly programmed.
Numeric representation: This is often used for data that can be naturally represented as numbers.
1. Supervised learning algorithms: Use labeled training data to learn a mapping from
input data to output labels.
2. Unsupervised learning algorithms: Use unlabeled data to learn patterns and
relationships in the data.
3. Reinforcement learning algorithms: Learn by interacting with an environment and receiving feedback in the form of rewards or penalties.
4. Deep learning algorithms: Use multiple layers of artificial neural networks to learn complex relationships in the data.
5. Transfer learning algorithms: A type of supervised learning algorithm that can use knowledge learned from one task to improve performance on a related task.
Decision trees are a type of machine learning model that uses a tree-like structure to
make decisions based on input data.
To build a decision tree, an algorithm is used to split the data into smaller and smaller subsets based on the values of the input features. Decision trees are commonly used for classification tasks, although they can also be used for regression tasks.
6. Principles the ID3 algorithm works on
The ID3 (Iterative Dichotomiser 3) algorithm is a decision tree learning algorithm that was
developed by Ross Quinlan in the 1980s. It is based on the concept of entropy, which measures
the uncertainty or randomness in a dataset. The ID3 algorithm works by iteratively dividing the
data into smaller and smaller subsets based on the values of the input features, with the goal of
creating leaf nodes that are as pure as possible.
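The entropy measure at the heart of ID3 can be computed directly: H(S) = -Σ p_i · log2(p_i) over the class proportions p_i. ID3 splits on the attribute that maximizes information gain, i.e. the reduction in entropy. A minimal sketch:

```python
import math
from collections import Counter

# Entropy of a list of class labels: H(S) = -sum(p_i * log2(p_i)).
# A 50/50 split has maximum entropy (1 bit); a pure subset has entropy 0.

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy(["yes", "no", "yes", "no"]))  # 1.0  (maximum uncertainty)
```

ID3 evaluates each candidate attribute by the weighted average entropy of the subsets it produces, and picks the attribute whose split lowers entropy the most.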
The Naive Bayes classifier is a machine learning model based on Bayes' theorem of probability, often used for classification tasks.
The Naive Bayes classifier works by making predictions based on the probability of an
event occurring given the presence of certain features. For example, if we want to classify
an email as spam or not spam, the Naive Bayes classifier would estimate the probability
of an email being spam given the presence of certain words or phrases in the email.
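The spam example above can be sketched numerically. All the word probabilities and priors below are invented for illustration; a real classifier would estimate them from a training corpus and handle many more words.

```python
# Toy Naive Bayes sketch for spam filtering. The per-class word
# probabilities and the 0.5 priors are invented for illustration.

p_word_given_spam = {"free": 0.8, "meeting": 0.1}
p_word_given_ham = {"free": 0.1, "meeting": 0.7}
p_spam, p_ham = 0.5, 0.5

def spam_score(words):
    """Posterior P(spam | words), multiplying the per-word likelihoods
    under the 'naive' assumption that words are independent."""
    s, h = p_spam, p_ham
    for w in words:
        s *= p_word_given_spam.get(w, 0.5)   # unknown words are neutral
        h *= p_word_given_ham.get(w, 0.5)
    return s / (s + h)

print(spam_score(["free"]))     # ~0.89: 'free' pushes the email towards spam
print(spam_score(["meeting"]))  # ~0.13: 'meeting' pushes it towards ham
```

The "naive" part is the independence assumption: the joint likelihood of the words is taken as the product of the individual word likelihoods, which is rarely true but works well in practice.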
10. Classification error and measures of effectiveness
Classification error is defined as the proportion of misclassified data points in a dataset. For example, if a model correctly classifies 90 out of 100 data points, its classification error is 10%.
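The 90-out-of-100 example can be computed directly; the helper name below is illustrative:

```python
# Classification error = fraction of predictions that disagree with the
# true labels. The labels below reproduce the 90-out-of-100 example.

def classification_error(y_true, y_pred):
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

y_true = [1] * 100
y_pred = [1] * 90 + [0] * 10      # 10 misclassified points
print(classification_error(y_true, y_pred))  # 0.1
```

Accuracy is simply one minus this error; other effectiveness measures (precision, recall) additionally distinguish which kind of mistake was made.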
Subsymbolic AI often relies on statistical and probabilistic methods, and is typically used
for tasks that require learning from data or adapting to changing environments.
In machine learning, a feature space is a set of all possible values that a feature can take
on.
The relationship between function approximation and logic is that logic can be used to
encode rules or constraints that the function should satisfy.
3. Biological motivation of subsymbolic artificial intelligence
A "black box" model is a model whose inner workings are hidden from the user; a "white box" model, on the other hand, is a model with its inner workings visible to the user.
Explainable AI (XAI) focuses on developing and using algorithms and models that can
provide explanations for their predictions and decisions.
Trustworthy AI refers to the use of AI in a way that is ethical, responsible, and transparent,
and that does not harm or discriminate against individuals or groups.
The Machine Trust Index (MTI) is a measure of the level of trust that users have in a
particular AI system.
Brain-like systems, also known as artificial neural networks, are computational models
that are inspired by the structure and function of the human brain.
AI inference computers are computers that are specifically designed to perform tasks
related to artificial intelligence.
AI learning computers are computers that are able to learn from data and adapt to new
situations.
7. Basics of creating a human-like system, an understandable system at the input and output, elements of uncertainty in subsymbolic AI, fuzzy sets
The creation of a human-like artificial intelligence (AI) system would require natural
language processing, perception, reasoning, and decision-making.
An AI system would need an understandable input and output system. This would require the development of advanced natural language processing and speech recognition capabilities.
There are several elements of uncertainty that can be present in AI systems, including data uncertainty and prediction uncertainty.
Fuzzy sets are characterized by a membership function, which defines the degree to
which an element belongs to the set.
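A membership function can be sketched directly. The fuzzy set "tall" below, with its 160-190 cm thresholds and piecewise-linear shape, is an invented illustration:

```python
# Membership function for an illustrative fuzzy set "tall":
# 0.0 below 160 cm, 1.0 above 190 cm, linear in between.

def tall(height_cm):
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30   # degree of membership in [0, 1]

print(tall(175))  # 0.5: partially a member of the fuzzy set "tall"
```

Unlike a classical set, where membership is a yes/no question, the fuzzy set assigns every height a degree of membership between 0 and 1.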
Search algorithms are algorithms that are used to find a solution to a problem by
exploring a search space and evaluating potential solutions.
Kansei systems are systems that are designed to capture and analyze human emotions
and preferences in order to improve products or services.
Chromosomes are structures that are found in the cells of living organisms, and that
contain genetic information in the form of DNA.
The search space of an algorithm is the set of all possible solutions to a problem, or the
set of all possible states that can be reached during the execution of an algorithm.
Operations that involve "movement" are actions that move a solution to a different part of the search space in order to explore other solutions.
9. Three basic challenges in AI – classification, prediction, sequential decision-making with reward and punishment
Classification: the process of assigning data to specific groups or classes.
Prediction: the process of using a model to predict an outcome based on input data.
Decision-making with reward and punishment: the process of training models to make decisions that maximize a reward and minimize a punishment.
Neural networks
19. Biological inspiration of neural networks
The biological inspiration of neural networks is a concept from neurophysiology: creating an artificial neural network similar to the network of neurons in the human brain.
There are several different forms that a feature space can take, depending on the type of data and the goals of the analysis.
Numeric feature space: This type of feature space represents data as a set of numerical
values.
Categorical feature space: This type of feature space represents data as a set of
categories or discrete values.
Binary feature space: This type of feature space represents data as a set of binary values,
such as 0 or 1, or true or false.
Multivariate feature space: This type of feature space represents data as a set of multiple
variables or features.
1. Classification: The output is a class label, indicating which category the input data belongs to.
2. Regression: The output is a numerical value.
3. Probability: The output is a probability value.
4. Activation value: The output of a neuron is often referred to as its activation value.
24. The perceptron as a simple neural network
A perceptron is a type of simple neural network that was developed in the 1950s as a model of how the brain works. It consists of a single layer of artificial neurons. Perceptrons are capable of learning simple linear classifiers.
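The classic perceptron learning rule can be shown on the linearly separable AND function; the learning rate and epoch count below are illustrative choices:

```python
# Perceptron learning rule on the AND function: weights change only
# when the prediction is wrong. Learning rate and epochs are illustrative.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    """Threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):   # a few epochs suffice for a linearly separable task
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

The same procedure fails on XOR, which is not linearly separable; that limitation is what multi-layer networks (and the universal approximation result below) address.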
The universal approximation theorem is a mathematical result stating that a feedforward neural network with a single hidden layer can approximate any continuous function to any desired degree of accuracy, given enough hidden units.
Image recognition: Neural networks can be trained to recognize and classify images.
Speech recognition: Neural networks can be used to transcribe and translate spoken
language.
Machine translation: Neural networks can be used to translate text from one language to
another.
Robotics: Neural networks can be used to control robots and other devices.
Evolutionary algorithms
10. What is the basic principle of evolutionary algorithms?
Selection: The fittest individuals are selected from the population based on their fitness,
and they are chosen to reproduce and create a new generation of solutions.
Reproduction: Genetic operators, such as crossover and mutation, are applied to the
selected individuals to generate new solutions by combining and modifying their traits.
Replacement: The new generation of solutions replaces the previous generation, and the
process is repeated iteratively until the algorithm converges on an optimal solution, or
until a predetermined number of generations has been reached.
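The selection/reproduction/replacement loop above can be sketched on the classic OneMax toy problem (maximize the number of 1-bits in a bit string); the population size, mutation rate, and generation count are illustrative choices:

```python
import random

# Evolutionary algorithm sketch on OneMax: fitness = number of 1-bits.
# All parameters (population size, mutation rate, generations) are
# illustrative; the best parent is kept unchanged (elitism).

random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(ind):
    return sum(ind)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    # selection: the fitter half of the population become parents
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    # replacement: elite survives, the rest are bred via crossover + mutation
    population = parents[:1] + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - 1)
    ]

print(max(fitness(ind) for ind in population))  # best fitness, near the optimum of 20
```

Keeping the single best individual (elitism) guarantees the best fitness never decreases between generations, which is a common safeguard in practice.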
12. What are the basic concepts in EA?
The goal of genetic programming (GP) is to find a program that satisfies the given problem specification, or a set of constraints and objectives that the program must satisfy. The result of GP is a program that solves the given problem or performs the given task.
17. What are interactive evolutionary computations – give an example
An example of IEC is IEM, which involves a human listener who assesses the quality of the generated music and provides feedback to guide the evolution process.
Analysis and prediction of financial markets, such as stock prices, exchange rates, and
risk management
Generation and optimization of art and music, such as paintings, sculptures, and
melodies