Since the Bin Packing Problem (BPP) is one of the central NP-hard problems, many approximation algorithms have been proposed for it. It has been proven that, unless P = NP, no polynomial-time algorithm for BPP can guarantee an approximation ratio better than 3/2. In the current paper, a linear-time approximation algorithm is presented. The suggested algorithm not only has the best possible theoretical factors (approximation ratio, space order, and time order), but also outperforms the other approximation algorithms in experimental results; we therefore conclude that it is the best approximation algorithm presented for the problem so far.
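For context, a classic baseline heuristic for BPP (not the paper's algorithm) is First-Fit Decreasing; a minimal sketch with illustrative item sizes and capacity:

```python
def first_fit_decreasing(items, capacity):
    """First-Fit Decreasing: a classic BPP approximation heuristic,
    shown only as a baseline sketch (not the paper's algorithm).
    Sort items by decreasing size, then place each into the first bin
    with enough remaining room, opening a new bin when none fits."""
    free = []     # remaining capacity of each open bin
    packing = []  # contents of each bin
    for item in sorted(items, reverse=True):
        for i, room in enumerate(free):
            if item <= room:
                free[i] -= item
                packing[i].append(item)
                break
        else:
            free.append(capacity - item)
            packing.append([item])
    return packing

bins = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1], capacity=10)
```

On this instance the total size is 31, so at least four bins of capacity 10 are needed, and FFD attains that bound.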
We consider problems of partitioning n points in a metric space into k clusters such that the maximum distance from a point to the associated cluster center is minimized. For the more general triangle inequality versions of these k-center problems, it is NP-hard to approximate the solution to within better than a factor of two, and polynomial-time 2-approximation algorithms are known. Using reductions from Planar 3SAT we show that the factor-of-two bound is also tight or nearly tight for the L1, L∞, and L2 metrics in R^d for d ≥ 2.
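One well-known polynomial-time 2-approximation of the kind referred to above is Gonzalez's farthest-first traversal; a minimal sketch (the sample points are illustrative):

```python
import math

def gonzalez_k_center(points, k):
    """Farthest-first traversal: a classic 2-approximation for metric
    k-center. Start from an arbitrary point, then repeatedly add the
    point farthest from the centers chosen so far."""
    centers = [points[0]]
    dist = [math.dist(p, centers[0]) for p in points]  # distance to nearest center
    while len(centers) < k:
        far = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[far])
        dist = [min(d, math.dist(p, points[far])) for d, p in zip(dist, points)]
    return centers, max(dist)  # chosen centers and achieved covering radius

centers, radius = gonzalez_k_center([(0, 0), (1, 0), (10, 0), (11, 0)], k=2)
```

On the two well-separated pairs above, the traversal picks one center in each pair and covers every point within radius 1.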
The traveling salesman problem (TSP) is to find a tour of a given number of cities (visiting each city exactly once) such that the length of the tour is minimized. Testing every path in an N-city tour requires examining N! tours. A 30-city tour would require measuring the total distance of 2.65 × 10^32 different tours. Assuming a trillion additions per second, this would take 252,333,390,232,297 years. Adding one more city would increase the time by a factor of 31. Obviously, exhaustive search is not a practical solution. Genetic algorithms (GA) are a relatively new optimization technique that can be applied to various problems, including those that are NP-hard. The technique does not guarantee an optimal solution, but it usually gives good approximations in a reasonable amount of time. This makes it a good algorithm to try on the traveling salesman problem, one of the most famous NP-hard problems.
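A minimal GA for the TSP along these lines might use order crossover and swap mutation; the population size, generation count, and mutation rate below are illustrative values, not tuned ones:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """Order crossover (OX): keep a slice of one parent, fill the rest
    with the remaining cities in the order they appear in the other."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [c for c in p2 if c not in middle]
    return rest[:a] + middle + rest[a:]

def ga_tsp(pts, pop_size=60, generations=300, mut_rate=0.2, seed=1):
    random.seed(seed)
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, pts))
        elite = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            child = order_crossover(*random.sample(elite, 2))
            if random.random() < mut_rate:   # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, pts))

# eight cities on a circle: the shortest tour follows the circle (length ~6.12)
cities = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
          for i in range(8)]
best = ga_tsp(cities)
```

Even this bare-bones version finds near-optimal tours on a tiny instance in a fraction of a second, in contrast to the factorial growth of exhaustive search.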
The flow shop problem is an NP-hard combinatorial optimization problem with applications in logistics, industry, and other fields. It aims to minimize the total execution time, called the makespan. This paper proposes a novel adaptation, not used before for this problem, of a computational intelligence technique based on cat behavior, called Cat Swarm Optimization. The technique rests on two sub-modes: the seeking mode, when the cat is at rest, and the tracing mode, when the cat is hunting. The two modes are combined through a mixture ratio. The operations and operators are defined and adapted to solve the problem. To assess the performance of this adaptation, real instances from the OR-Library are used, and the obtained results are compared with the best-known solutions found by other methods.
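The seeking/tracing split controlled by the mixture ratio can be sketched generically for real-valued minimization (a generic CSO skeleton, not the paper's permutation-based flow-shop adaptation; smp, srd, c1, and the mixture ratio are illustrative parameter values):

```python
import random

def cso_step(cats, velocities, fitness, mixture_ratio=0.2,
             smp=5, srd=0.1, c1=2.0, rng=random):
    """One generation of a generic Cat Swarm Optimization on real vectors.
    With probability mixture_ratio a cat is in tracing mode (it hunts:
    velocity is pulled toward the best cat); otherwise it is in seeking
    mode (it rests: smp perturbed copies are sampled, keeping the best)."""
    best = min(cats, key=fitness)
    for k in range(len(cats)):
        if rng.random() < mixture_ratio:
            velocities[k] = [v + c1 * rng.random() * (b - x)
                             for v, x, b in zip(velocities[k], cats[k], best)]
            cats[k] = [x + v for x, v in zip(cats[k], velocities[k])]
        else:
            copies = [[x * (1 + rng.uniform(-srd, srd)) for x in cats[k]]
                      for _ in range(smp)]
            cats[k] = min(copies + [cats[k]], key=fitness)
    return cats, velocities

# demo: minimize the sphere function from random starting cats
rng = random.Random(0)
sphere = lambda x: sum(v * v for v in x)
cats = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
vels = [[0.0] * 3 for _ in range(10)]
for _ in range(30):
    cats, vels = cso_step(cats, vels, sphere, rng=rng)

# with mixture_ratio=0 (pure seeking) a cat's fitness can never get worse
solo = [[3.0, -4.0, 1.0]]
f0 = sphere(solo[0])
for _ in range(20):
    solo, _v = cso_step(solo, [[0.0] * 3], sphere, mixture_ratio=0.0, rng=rng)
f1 = sphere(solo[0])
```

Adapting this skeleton to a flow shop requires replacing the vector moves with permutation operators, which is precisely the contribution the abstract describes.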
This report is the result of a study of a Monte Carlo algorithm applied to the Travelling Salesman Problem (TSP) using the Simulated Annealing (SA) meta-heuristic. Given a discrete space of cities, the algorithm finds the shortest route that starts at one of the towns, passes once through every one of the others, and returns to the first one. The main goal is to explore the possibility of obtaining a zero-cost solution with n cities and p processors running in parallel. To perform this analysis we use a TSP algorithm implemented in MATLAB. This is an academic work developed at the University of Minho.
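A minimal SA routine for the TSP in the spirit described (a plain Python sketch rather than the MATLAB implementation the report uses; the temperature schedule is illustrative):

```python
import math
import random

def simulated_annealing_tsp(pts, T0=10.0, cooling=0.995, steps=20000, seed=0):
    """Simulated Annealing for the TSP with a segment-reversal (2-opt)
    neighborhood. Worse tours are accepted with probability
    exp(-delta / T), and the temperature T decays geometrically."""
    random.seed(seed)
    n = len(pts)
    def length(t):
        return sum(math.dist(pts[t[i]], pts[t[(i + 1) % n]]) for i in range(n))
    tour = list(range(n))
    cost, T = length(tour), T0
    for _ in range(steps):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = length(cand) - cost
        if delta < 0 or random.random() < math.exp(-delta / T):
            tour, cost = cand, cost + delta
        T *= cooling
    return tour, cost

# six towns on a circle: the shortest round trip follows the circle (length 6)
towns = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
         for i in range(6)]
tour, cost = simulated_annealing_tsp(towns)
```

At high temperature the walk explores freely; as T decays it settles into a low-cost tour, which on points in convex position is the circle order.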
In this paper, the Gurobi Optimizer is used from Python to illustrate the computational complexity of the traveling salesman problem (TSP). We divide the TSP into two models of (1) 30 nodes and (2) 253 nodes. To define the nodes, we use GPS coordinates of the 30 active MLB ballparks and of the 253 locations around the globe that have ever held an MLB game. While the program applies the same brute-force approach to both problems, an optimal solution is attainable for the 30-node model, whereas we are left with an approximation for the 253-node model. The two models are proposed to demonstrate the rate at which complexity increases as the size of the TSP grows. This experiment provides further evidence that the TSP is NP-hard and that optimal solutions cannot, in general, be found in polynomial time (assuming P ≠ NP), even though a proposed tour can be checked quickly.
This paper addresses the issue of scheduling halls for university courses and laboratory work among the faculties and departments of a university. It uses the Izundu Hall Scheduling Algorithm to address this issue; the algorithm not only selects a hall at random, but applies an economy protocol to ensure that the hall assigned to a particular course depends on factors that promote student comfort and manage the university's limited hall resources.
The green supply chain is among the most active recent research subjects in supply chain management: it not only optimises costs and service levels across the whole chain over a time period, but also considers the effect of CO2 greenhouse-gas emissions on the overall value of the supply chain for sustainable development. In this paper, a bi-objective, multi-stage, multi-product quadratic optimisation model is therefore proposed that takes into account the quality level of the purchased materials, the extra reprocessing costs they impose on the system in a stochastic setting, and the environmental costs of CO2 emissions. We also consider both straight and step-by-step transportation in the supply chain network design. Since the problem is NP-hard, a priority-based genetic algorithm is proposed. Three methods, BOM, LP-metric, and elastic BOM, are applied to solve small instances, with elastic BOM offering a larger solution space and better-justified solutions. Hence, for large instances, three priority-based genetic algorithms corresponding to the MODM techniques are utilised. Then, using TOPSIS for small problems and ANOVA for medium and large ones, the optimum procedure for balancing the existing costs and CO2 emissions in the chain is selected.
In a pair of correlated quantum systems, a measurement on one corresponds to a change in the state of the other, and in the process information about the original state of the system is lost. Determining the set of projectors along which measurement incurs the minimum loss of information content is the optimization problem of quantum discord. It is an important aspect of the classical-to-quantum transition because it asks us to look for the most classical states. This optimization problem is known to be NP-complete, and since discord is defined through it, it is a major obstacle to every computation of discord. The standard zero-discord condition lets us move to a stronger measure that addresses the correlated observables instead; in this context we show that minimum discord occurs at the diagonal basis of the reduced density matrices and present an analytical expression for the measure. The work employs manipulations of information inequalities that lead to an exact optimization.
An emerging technique is presented, inspired by the natural and social tendency of individuals to learn from each other, referred to as Cohort Intelligence (CI). Learning here refers to a cohort candidate's effort to self-supervise its own behavior and further adapt to the behavior of the candidate it intends to follow. This makes every candidate improve/evolve its behavior and, eventually, the behavior of the entire cohort. This ability of the approach is tested by solving an NP-hard combinatorial problem, the Knapsack Problem (KP). Several cases of the 0-1 KP are solved. The effect of various parameters on the solution quality is discussed, along with the advantages and limitations of the CI methodology.
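For the small 0-1 KP cases mentioned, an exact pseudo-polynomial dynamic program gives the optimum against which a metaheuristic like CI can be checked (a standard DP, not the CI method itself; the instance data are illustrative):

```python
def knapsack_01(values, weights, capacity):
    """Exact DP for the 0-1 knapsack: dp[c] is the best value achievable
    with capacity c; iterating capacities downward ensures each item is
    used at most once. Runs in O(n * capacity) time."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

best = knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
```

Here the optimum takes the second and third items for a total value of 220.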
The aim of this work is to compare computational intelligence techniques for classifying pupils according to the principles of personalized instruction. Based on the results, applying a differential evolution algorithm and a genetic algorithm to data arising from the multi-perspective assessment of each pupil's characteristics and needs contributes to the effective formation of homogeneous pupil groups with common traits in learning ability, difficulties, and psychosocial and cognitive profile. The teacher can thus manage the pupils more easily, knowing the characteristics of each group. The methodology of this problem provides improved classification capabilities compared with conventional methods.
The aim of this paper is to present a method that uses computational intelligence techniques to classify pupils in mathematics according to the principles of personalized instruction. According to the results, the application of a differential evolution algorithm as well as a genetic algorithm to a set of data emerging from the multi-perspective assessment of each pupil's particular characteristics and needs contributes to the effective formation of homogeneous student groups with common skills, difficulties, and psychosocial and cognitive profiles. Thus, the teacher can easily manage students by knowing the characteristics of each group. This method provides improved categorization capabilities with respect to the existing ones.
ABSTRACT: Advancement in cognitive science depends, in part, on doing some occasional ‘theoretical housekeeping’. We highlight some conceptual confusions lurking in an important attempt at explaining the human capacity for rational or coherent thought: Thagard & Verbeurgt’s computational-level model of humans’ capacity for making reasonable and truth-conducive abductive inferences (1998; Thagard, 2000). Thagard & Verbeurgt’s model assumes that humans make such inferences by computing a coherence function (f_coh), which takes as input representation networks and their pair-wise constraints and gives as output a partition into accepted (A) and rejected (R) elements that maximizes the weight of satisfied constraints. We argue that their proposal gives rise to at least three difficult problems.
Hyper-heuristics are a class of high-level search techniques which operate on a search space of heuristics rather than directly on a search space of solutions. Early hyper-heuristics focussed on selecting and applying a low-level heuristic at each stage of a search. Recent trends in hyper-heuristic research have led to a number of approaches being developed to automatically generate new heuristics from a set of heuristic components. This work investigates the suitability of genetic programming as a hyper-heuristic methodology to generate constructive heuristics for the multidimensional 0-1 knapsack problem. A population of heuristics that rank knapsack items is trained on a subset of test problems and then applied to unseen instances. The results over a set of standard benchmarks show that genetic programming can be used to generate constructive heuristics which yield human-competitive results.
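The generate-and-apply idea can be sketched as a constructive loop parameterized by a ranking function; the ranking function below is a hand-written stand-in for what GP would evolve, and the instance data are illustrative:

```python
def construct(values, weights, capacities, score):
    """Greedy constructive procedure for the multidimensional 0-1 knapsack:
    repeatedly add the feasible item ranked best by `score`, which plays
    the role of a GP-evolved ranking heuristic."""
    n, m = len(values), len(capacities)
    remaining = list(capacities)
    chosen, candidates = [], set(range(n))
    while True:
        feasible = [i for i in candidates
                    if all(weights[d][i] <= remaining[d] for d in range(m))]
        if not feasible:
            break
        best = max(feasible, key=lambda i: score(
            values[i], [weights[d][i] for d in range(m)], remaining))
        chosen.append(best)
        candidates.remove(best)
        for d in range(m):
            remaining[d] -= weights[d][best]
    return chosen

# a hand-written value-over-tightness ratio standing in for an evolved heuristic
ratio = lambda v, w, rem: v / (1 + sum(wi / (ri + 1) for wi, ri in zip(w, rem)))

picked = construct(values=[10, 5, 8], weights=[[3, 2, 4]],
                   capacities=[5], score=ratio)
```

GP would search the space of such `score` expressions, evolving the arithmetic that combines item value, weights, and remaining capacities.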
We present shared-memory parallel methods for Maximal Clique Enumeration (MCE) from a graph. MCE is a fundamental and well-studied graph analytics task and a widely used primitive for identifying dense structures in a graph. Due to its computationally intensive nature, parallel methods are imperative for dealing with large graphs. Surprisingly, however, scalable parallel methods for MCE on a shared-memory machine have been lacking. In this work, we present efficient shared-memory parallel algorithms for MCE with the following properties: (1) the parallel algorithms are provably work-efficient relative to a state-of-the-art sequential algorithm; (2) the algorithms have a provably small parallel depth, showing that they can scale to a large number of processors; and (3) our implementations on a multicore machine show good speedup and scaling behavior with an increasing number of cores, and are substantially faster than prior shared-memory parallel algorithms for MCE.
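The classic sequential baseline for MCE is the Bron–Kerbosch algorithm with pivoting; a compact sequential sketch (the paper's contribution is parallelizing this family of algorithms, which is not reproduced here):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch with pivoting. `adj` maps each vertex to its set of
    neighbors. R is the growing clique, P the candidates that extend it,
    X the vertices already excluded; choosing a pivot with many neighbors
    in P prunes redundant branches."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(sorted(R))
            return
        pivot = max(P | X, key=lambda u: len(P & adj[u]))
        for v in list(P - adj[pivot]):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return cliques

# triangle 0-1-2 plus a pendant edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = maximal_cliques(adj)
```

On this graph the two maximal cliques are {0, 1, 2} and {2, 3}.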
Advancements in computing technologies make new platforms and large volumes of data available to businesses and governments to discover hidden underlying patterns in the data and create new knowledge. While businesses need to embrace these technologies in order to stay ahead of the competition, governments can reap great benefits in cost-effectively delivering social services and bringing about improvement in social development indices. However, before any new technology can become a powerful resource (for business or for government), there exists a fundamental need for extensive planning, such that one can chalk out a future trajectory, prepare for the changes to come, and invest prudently. Exploitation of Big Data platforms and technologies requires both corporate strategies and government policies to be in place well before the results start pouring in. In this paper, we investigate the potential of available Big Data platforms and technologies, their current use by various governments, and their potential for use by the central and state Governments in India.
Local search is a fundamental tool in the development of heuristic algorithms. A neighborhood operator takes a current solution and returns a set of similar solutions, denoted as neighbors. In best improvement local search, the best of the neighboring solutions replaces the current solution in each iteration. On the other hand, in first improvement local search, the neighborhood is only explored until any improving solution is found, which then replaces the current solution. In this work we propose a new strategy for local search that attempts to avoid low-quality local optima by selecting in each iteration the improving neighbor that has the fewest possible attributes in common with local optima. To this end, it uses inequalities previously used as optimality cuts in the context of integer linear programming. The novel method, referred to as delayed improvement local search, is implemented and evaluated using the travelling salesman problem with the 2-opt neighborhood and the max-c...
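The two baseline pivoting rules can be sketched for the TSP with the 2-opt neighborhood (the proposed delayed-improvement rule itself is not reproduced here; the square instance is illustrative):

```python
import math

def two_opt(pts, tour, strategy="best"):
    """2-opt local search with best-improvement or first-improvement
    pivoting: under "best" the most improving move in the neighborhood
    is applied each iteration; under "first" the scan stops at the
    first improving move found."""
    n = len(tour)
    def gain(i, j):
        # length change from replacing edges (i,i+1),(j,j+1) by (i,j),(i+1,j+1)
        a, b = pts[tour[i]], pts[tour[(i + 1) % n]]
        c, d = pts[tour[j]], pts[tour[(j + 1) % n]]
        return (math.dist(a, c) + math.dist(b, d)
                - math.dist(a, b) - math.dist(c, d))
    improved = True
    while improved:
        improved = False
        best_move, best_gain = None, -1e-9
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                g = gain(i, j)
                if g < best_gain:
                    best_move, best_gain = (i, j), g
                    if strategy == "first":
                        break
            if strategy == "first" and best_move:
                break
        if best_move:
            i, j = best_move
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
            improved = True
    return tour

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square; optimal tour length 4
def tour_len(t):
    return sum(math.dist(square[t[i]], square[t[(i + 1) % 4]]) for i in range(4))

best_tour = two_opt(square, [0, 2, 1, 3], strategy="best")    # starts self-crossing
first_tour = two_opt(square, [0, 2, 1, 3], strategy="first")
```

Both rules untangle the crossing tour to the optimal square; they differ in how much of the neighborhood is scanned per iteration and, on larger instances, in which local optimum they reach.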
The longest path problem on graphs is an NP-hard optimization problem, and as such, it is not known to have an efficient classical solution in the general case. This study develops two quadratic unconstrained binary optimization (QUBO) formulations of this well-known problem. The first formulation is based on an approach outlined by Bauckhage et al. (2018) for the shortest path problem and follows simply from the principle of assigning positions on the path to vertices; using k|V| binary variables, this formulation will find the longest path that visits exactly k of a graph's |V| vertices, if such a path exists. As a point of theoretical interest, we present a second formulation based on degree constraints that is more complicated, but reduces the dependence of the number of variables on k to logarithmic; specifically, it requires |V| + 2|E| log₂ k + 3|E| binary variables to encode the longest path problem. We adapt these basic formulations for several variants of the standard longest path problem. Scaling factors for penalty terms and preprocessing time required to construct the Q matrix representing the problem are made explicit in the paper.
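The first (position-assignment) formulation can be sketched as follows; this is a reconstruction of the general idea rather than the paper's exact Q matrix, and the penalty weight A and the tiny instance are illustrative:

```python
import itertools

def longest_path_qubo(edges, num_vertices, k, A=2.0):
    """Position-based QUBO for a k-vertex longest path: binary variable
    x[v,p] = 1 iff vertex v occupies position p. Rewards (-1) are given
    when adjacent vertices occupy consecutive positions; quadratic
    penalties (weight A, which must exceed the per-edge reward) enforce
    one vertex per position and at most one position per vertex.
    Returns Q as a dict over upper-triangular index pairs."""
    idx = lambda v, p: v * k + p
    Q = {}
    def add(i, j, w):
        key = (min(i, j), max(i, j))
        Q[key] = Q.get(key, 0.0) + w
    for u, v in edges:  # reward consecutive positions joined by an edge
        for p in range(k - 1):
            add(idx(u, p), idx(v, p + 1), -1.0)
            add(idx(v, p), idx(u, p + 1), -1.0)
    for p in range(k):  # A * (sum_v x[v,p] - 1)^2, constant term dropped
        for v in range(num_vertices):
            add(idx(v, p), idx(v, p), -A)
            for u in range(v + 1, num_vertices):
                add(idx(v, p), idx(u, p), 2 * A)
    for v in range(num_vertices):  # A * x[v,p] * x[v,q] for p < q
        for p in range(k):
            for q in range(p + 1, k):
                add(idx(v, p), idx(v, q), A)
    return Q

# brute-force check on the path graph 0-1-2 with k=3: the minimum energy
# is attained by a Hamiltonian path (reward -2, one-hot penalties -3A = -6)
Q = longest_path_qubo(edges=[(0, 1), (1, 2)], num_vertices=3, k=3)
energy = lambda x: sum(w * x[i] * x[j] for (i, j), w in Q.items())
best = min(itertools.product((0, 1), repeat=9), key=energy)
```

Exhaustively evaluating all 2^9 assignments confirms that the minimizer sets exactly one variable per position and traces the path 0-1-2 (or its reverse).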