Abstract
We present a fast heuristic approach for solving a binary multiple instance learning (MIL) problem, which consists in discriminating between two kinds of item sets: the sets are called bags and the items inside them are called instances. Assuming that only two classes of instances are allowed, the standard MIL assumption states that a bag is positive if it contains at least one positive instance and negative when all its instances are negative. Our approach constructs a MIL separating hyperplane by preliminarily fixing the normal and reducing the learning phase to a univariate nonsmooth optimization problem, which can be quickly solved by simply exploring the kink points. Numerical results are presented on a set of test problems drawn from the literature.
1 Introduction
Multiple instance learning (MIL) (Herrera et al. 2016) concerns the classification of sets of items: in the MIL terminology, such sets are called bags and the corresponding items are called instances. In the binary case, when the instances themselves can belong to only two alternative classes, a MIL problem is stated on the basis of the so-called standard MIL assumption, which defines a positive bag as a bag containing at least one positive instance and a negative bag as one whose instances are all negative. For this reason, MIL problems are often interpreted as a kind of weakly supervised classification problem.
The first MIL problem encountered in the literature is a drug design problem (Dietterich et al. 1997). It consists in determining whether a drug molecule is active or not. Each molecule can assume a finite number of three-dimensional conformations, and it is active if at least one of its conformations is able to bind a particular “binding site,” which generally coincides with a larger protein molecule. The key difficulty is that it is not known which conformation makes a molecule active. In this example, the drug molecule is represented by a bag, while the various conformations it can assume correspond to the instances inside the bag.
The MIL paradigm finds application in many fields: text categorization, image recognition (Astorino et al. 2017, 2018), video analysis, image-based diagnostics (Astorino et al. 2019b, 2020; Quellec et al. 2017) and so on. An example that fits the standard MIL assumption stated above very well is discriminating between healthy and nonhealthy patients on the basis of their medical scans (bags): if at least one region (instance) of the medical scan (bag) is abnormal, then the patient is classified as nonhealthy; conversely, when all the regions (instances) of the medical scan (bag) are normal, the patient is classified as healthy.
In recent years, many papers have been devoted to MIL problems. The approaches discussed in the literature fall into one of the three following classes: instance-space approaches, bag-space approaches and embedding-space approaches. In the instance-space approaches, the classifier is constructed in the instance space and the classification of the bags is inferred from the classification of the instances: as a consequence, this kind of approach is of the local type. Some instance-space approaches are found in Andrews et al. (2003); Astorino et al. (2019a, 2019c); Avolio and Fuduli (2021); Bergeron et al. (2012); Gaudioso et al. (2020b); Mangasarian and Wild (2008); Vocaturo et al. (2020). In particular, in Andrews et al. (2003) the first SVM (Support Vector Machine) type model for MIL was proposed, giving rise to a nonlinear mixed integer program solved by means of a BCD (Block Coordinate Descent) approach (Tseng 2001). The same SVM type model treated in Andrews et al. (2003) has been tackled in Astorino et al. (2019a) by means of a Lagrangian relaxation technique, while in Astorino et al. (2019c) and Bergeron et al. (2012) a MIL linear separation has been obtained by using ad hoc nonsmooth approaches. In Mangasarian and Wild (2008), the authors proposed an instance-space algorithm expressing each positive bag as a convex combination of its instances, whereas in Avolio and Fuduli (2021) a combination of the SVM and the PSVM (Proximal Support Vector Machine) approaches has been adopted. In Gaudioso et al. (2020b) and Vocaturo et al. (2020), a spherical separation model has been tackled by using DC (Difference of Convex) techniques. Other SVM type instance-space approaches for MIL are found in Li et al. (2009), Melki et al. (2018), Shan et al. (2018), and Zhang et al. (2013), while in Yuan et al. (2021) a spherical separation with margin is used.
In contrast to the above instance-space approaches, the bag-space techniques (see for example Gärtner et al. (2002), Wang and Zucker (2000), and Zhou et al. (2009)) are of the global type, since classification is performed by considering each bag as an entire entity. Finally, the embedding-space approaches, such as Zhang et al. (2017), are a compromise between the two previous ones, since the classifier is obtained in the instance space on the basis of a few instances per bag, namely those that are most representative of the bag. For more details on the MIL paradigm, we refer the reader to the comprehensive surveys Amores (2013) and Carbonneau et al. (2018).
In this work, stemming from a formulation similar to those adopted in Andrews et al. (2003) (MI-SVM formulation), Astorino et al. (2019c), and Bergeron et al. (2012), where both the normal and the bias of a separation hyperplane are computed, we present a fast instance-space algorithm, which generates a separation hyperplane by heuristically prefixing its normal and by successively computing the bias as an optimal solution to a univariate nonsmooth optimization problem. Solving this univariate nonsmooth problem efficiently (by simply exploring the kink points) constitutes the main novelty of our approach, which ensures quite low computational times while providing reasonable testing accuracy.
The paper is organized as follows. In Sect. 2, we introduce our approach, while some numerical results on a set of benchmark test problems are reported in Sect. 3. Finally, in Sect. 4 some conclusions are drawn.
2 The approach
Assume we are given the index sets \(I^-\) and \(I^+\) of k negative and m positive bags, respectively. We indicate by \(\{x_j\in \mathbb {R}^n\}\) the set of all the instances, each of them belonging to exactly one bag, either negative or positive. We denote by \(\{J^-_1,\ldots ,J^-_k\}\) and \(\{J^+_1,\ldots ,J^+_m\}\) the instance index sets of the negative and positive bags, respectively.
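Throughout this section, the short code sketches we interleave with the derivation adopt the following illustrative convention (ours, not part of the original MATLAB implementation): each bag is held as a NumPy array with one row per instance, and bags are collected in Python lists. The later sketches reuse this import and layout.

```python
import numpy as np

# Illustrative data layout: each bag is an array of shape (#instances, n).
negative_bags = [np.array([[0.0, 1.0], [1.0, 0.5]]),   # bag J^-_1
                 np.array([[0.2, 0.8]])]               # bag J^-_2
positive_bags = [np.array([[3.0, 2.0], [0.1, 0.4]]),   # bag J^+_1
                 np.array([[2.5, 2.5]])]               # bag J^+_2
```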
The objective is to find a hyperplane
\(H(w,\gamma ) = \{x \in \mathbb {R}^n : w^{\top } x = \gamma \},\)
with \(w \in \mathbb {R}^n\) and \(\gamma \in \mathbb {R}\), which (strictly) separates the two classes of bags on the basis of the standard MIL assumption, i.e.,
-
all the negative bags are entirely confined in the interior of one of the two halfspaces generated by H;
-
each positive bag has at least one of its instances falling into the interior of the other halfspace.
More formally, \(H(w,\gamma )\) is a separating hyperplane if and only if:
\(w^{\top } x_j \le \gamma - 1, \quad j \in J^-_i, \; i \in I^-, \qquad (1)\)
and
\(\max _{j \in J^+_i} w^{\top } x_j \ge \gamma + 1, \quad i \in I^+. \qquad (2)\)
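For illustration, conditions (1)-(2) can be checked directly on the instance data. The following minimal sketch (the helper name `is_separating` and the data layout are our own conventions from above) verifies whether a candidate pair \((w,\gamma )\) separates the bags:

```python
def is_separating(w, gamma, negative_bags, positive_bags):
    """Check conditions (1)-(2) for a candidate hyperplane H(w, gamma)."""
    # (1): every instance of every negative bag lies on the negative side.
    neg_ok = all((bag @ w <= gamma - 1).all() for bag in negative_bags)
    # (2): at least one instance of each positive bag lies on the positive side.
    pos_ok = all((bag @ w).max() >= gamma + 1 for bag in positive_bags)
    return neg_ok and pos_ok
```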
To state an optimization model able to provide a possibly separating hyperplane, we define the error \(e_i^-(w,\gamma )\) in classifying the negative bag \(i \in I^-\) as
\(e_i^-(w,\gamma ) = \sum _{j \in J^-_i} \max \{0,\, w^{\top } x_j - \gamma + 1\}\)
and the error \(e_i^+(w,\gamma )\) in classifying the positive bag \(i \in I^+\) as
\(e_i^+(w,\gamma ) = \max \{0,\, \gamma + 1 - \max _{j \in J^+_i} w^{\top } x_j\}.\)
Summing up, we obtain the following overall error function:
\(e(w,\gamma ) = \sum _{i \in I^-} e_i^-(w,\gamma ) + \sum _{i \in I^+} e_i^+(w,\gamma ) \qquad (3)\)
and the resulting optimization problem
\(\min _{w,\gamma } \; e(w,\gamma ). \qquad (4)\)
We will refer to the model above as Formulation 1. Note that \(e(w, \gamma ) \ge 0\), and \(e(w, \gamma ) = 0\) if and only if \(H(w, \gamma )\) is a separating hyperplane, according to (1) and (2).
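A direct transcription of the error function (3) may help fix ideas; the sketch below (hypothetical helper `mil_error`, under the same data layout as before) accumulates one hinge term per negative instance and one per positive bag:

```python
def mil_error(w, gamma, negative_bags, positive_bags):
    """Overall classification error (3) for the pair (w, gamma)."""
    e_neg = sum(np.maximum(0.0, bag @ w - gamma + 1).sum() for bag in negative_bags)
    e_pos = sum(max(0.0, gamma + 1 - (bag @ w).max()) for bag in positive_bags)
    return e_neg + e_pos
```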
Function \(e(w,\gamma )\) is nonsmooth and nonconvex, but it can be put in DC (Difference of Convex) form (Le Thi and Pham Dinh 2005). This formulation is similar to those adopted in Andrews et al. (2003) (MI-SVM formulation) and Bergeron et al. (2012), while the DC decomposition has been exploited in Astorino et al. (2019c). A recent survey on nonsmooth optimization methods can be found in Gaudioso et al. (2020a), and some specialized algorithms in Gaudioso and Monaco (1992) and Astorino et al. (2011).
Our heuristic approach consists first in a judicious selection of w, the normal to the separating hyperplane, and then in minimizing the error function with respect to the scalar variable \(\gamma \).
As for the choice of w, we calculate the barycenter a of all the instances of the negative bags and the barycenter b of the barycenters of the instances in each positive bag, and then we fix the normal to the hyperplane \(\bar{w}\) by setting
\(\bar{w} = M(b - a) \qquad (5)\)
for some \(M>0\). Note that, whenever \(M=1\), provided a and b do not coincide, by setting \(\gamma _-=a^{\top }b-\Vert a\Vert ^2\) and \(\gamma _+=\Vert b\Vert ^2 -a^{\top }b\), the hyperplanes \(H(\bar{w}, \gamma _-)\) and \(H(\bar{w}, \gamma _+)\) pass through points a and b, respectively.
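A minimal sketch of this choice of the normal (the helper name `fixed_normal` is ours) is:

```python
def fixed_normal(negative_bags, positive_bags, M=1.0):
    """Normal (5): a = barycenter of all negative instances,
    b = barycenter of the per-bag barycenters of the positive bags."""
    a = np.vstack(negative_bags).mean(axis=0)
    b = np.vstack([bag.mean(axis=0) for bag in positive_bags]).mean(axis=0)
    return M * (b - a)
```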
Once the normal \(\bar{w}\) has been fixed, defining
\(\alpha _{ij} = \bar{w}^{\top } x_j + 1, \quad j \in J^-_i, \; i \in I^-, \qquad (6)\)
and
\(\beta _i = 1 - \max _{j \in J^+_i} \bar{w}^{\top } x_j, \quad i \in I^+, \qquad (7)\)
we rewrite function (3) as follows:
\(e(\gamma ) = \sum _{i \in I^-} \sum _{j \in J^-_i} \max \{0,\, \alpha _{ij} - \gamma \} + \sum _{i \in I^+} \max \{0,\, \beta _i + \gamma \}. \qquad (8)\)
As a consequence, problem (4) becomes
\(\min _{\gamma } \; e(\gamma ), \qquad (9)\)
which consists of minimizing a convex and nonsmooth (piecewise affine) function of the scalar variable \(\gamma \).
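In code, the kinks (6)-(7) and the univariate error (8) take a few lines; the following sketch (our own illustrative helpers) precomputes them once, after which each evaluation of \(e(\gamma )\) no longer depends on the dimension n:

```python
def kinks(w_bar, negative_bags, positive_bags):
    """Kinks (6)-(7): one alpha per negative instance, one beta per positive bag."""
    alphas = np.concatenate([bag @ w_bar + 1.0 for bag in negative_bags])
    betas = np.array([1.0 - (bag @ w_bar).max() for bag in positive_bags])
    return alphas, betas

def e_of_gamma(gamma, alphas, betas):
    """Univariate piecewise affine error (8)."""
    return np.maximum(0.0, alphas - gamma).sum() + np.maximum(0.0, betas + gamma).sum()
```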
We note in passing that, by introducing the additional variables \(\xi _{ij}\), \( i \in I^-\), \(j \in J_i^-\), and \(\zeta _i\), \(i\in I^+\) (grouped into the vectors \(\xi ,\zeta \)), the problem can be equivalently rewritten as a linear program of the form
\(\min _{\gamma , \xi , \zeta } \; \sum _{i \in I^-} \sum _{j \in J^-_i} \xi _{ij} + \sum _{i \in I^+} \zeta _i\)
subject to
\(\xi _{ij} \ge \alpha _{ij} - \gamma , \;\; \xi _{ij} \ge 0, \quad j \in J^-_i, \; i \in I^-,\)
\(\zeta _i \ge \beta _i + \gamma , \;\; \zeta _i \ge 0, \quad i \in I^+.\)
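Although we do not solve the problem this way, the LP form offers a convenient correctness check against a general-purpose solver. A sketch with SciPy (an assumption of ours; any LP solver would do), stacking the variables as \([\gamma , \xi , \zeta ]\):

```python
from scipy.optimize import linprog

def mil_bias_lp(alphas, betas):
    """Solve the LP reformulation; returns (gamma*, optimal value)."""
    A, P = len(alphas), len(betas)
    c = np.concatenate([[0.0], np.ones(A + P)])        # minimize sum(xi) + sum(zeta)
    A1 = np.hstack([-np.ones((A, 1)), -np.eye(A), np.zeros((A, P))])  # -gamma - xi <= -alpha
    A2 = np.hstack([np.ones((P, 1)), np.zeros((P, A)), -np.eye(P)])   #  gamma - zeta <= -beta
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-alphas, -betas]),
                  bounds=[(None, None)] + [(0.0, None)] * (A + P))    # gamma is free
    return res.x[0], res.fun
```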
To find an optimal solution to the problem, we prefer, however, to consider formulation (9). Note that the nonnegative function \(e(\gamma )\) is continuous and coercive; consequently, it has a minimum. In particular, an optimal solution \(\gamma ^*\) with
\(e(\gamma ^*) = 0\)
corresponds to a correct classification of all the bags.
A brief discussion of the differential properties of function (8) is in order. Letting
\(f_{ij}(\gamma ) = \max \{0,\, \alpha _{ij} - \gamma \}, \quad j \in J^-_i, \; i \in I^-,\)
and
\(g_i(\gamma ) = \max \{0,\, \beta _i + \gamma \}, \quad i \in I^+,\)
we have the following expressions of the corresponding subdifferentials:
\(\partial f_{ij}(\gamma ) = \{-1\} \text{ if } \gamma < \alpha _{ij}, \quad [-1,0] \text{ if } \gamma = \alpha _{ij}, \quad \{0\} \text{ if } \gamma > \alpha _{ij}, \qquad (10)\)
and
\(\partial g_i(\gamma ) = \{0\} \text{ if } \gamma < -\beta _i, \quad [0,1] \text{ if } \gamma = -\beta _i, \quad \{1\} \text{ if } \gamma > -\beta _i. \qquad (11)\)
From (10) and (11), it is easy to see that the points \(\alpha _{ij}\), \(i \in I^-,\, j \in J^-_i\), and \(-\beta _i\), \(i \in I^+\), constitute the kinks, i.e., the points where function \(e(\gamma )\) is nonsmooth. Note also that \(e(\gamma )\) has constant negative slope
\(-\bar{k}, \quad \text{with } \bar{k} = \sum _{i \in I^-} |J^-_i|,\)
for
\(\gamma < \min \big \{\min _{i,j} \alpha _{ij},\, \min _{i \in I^+} (-\beta _i)\big \},\)
and it has constant positive slope \(m=|I^+|\) for
\(\gamma > \max \big \{\max _{i,j} \alpha _{ij},\, \max _{i \in I^+} (-\beta _i)\big \}.\)
Taking into account (10) and (11), the subdifferential \(\partial e(\gamma )\) is the Minkowski sum of four sets, i.e.,
\(\partial e(\gamma ) = \{-|IJ^-_-(\gamma )|\} + [-|IJ^-_0(\gamma )|,\, 0] + [0,\, |I^+_0(\gamma )|] + \{|I^+_+(\gamma )|\}, \qquad (12)\)
with
\(IJ^-_-(\gamma ) = \{(i,j) : i \in I^-,\, j \in J^-_i,\, \gamma < \alpha _{ij}\},\)
\(IJ^-_0(\gamma ) = \{(i,j) : i \in I^-,\, j \in J^-_i,\, \gamma = \alpha _{ij}\},\)
\(I^+_0(\gamma ) = \{i \in I^+ : \gamma = -\beta _i\},\)
\(I^+_+(\gamma ) = \{i \in I^+ : \gamma > -\beta _i\},\)
and, at the non-kink points where function \(e(\gamma )\) is differentiable, it is
\(e'(\gamma ) = |I^+_+(\gamma )| - |IJ^-_-(\gamma )|. \qquad (13)\)
Moreover, at each kink point, say \(\gamma \), the slope jumps up by s, the multiplicity of the kink, defined as \(s = |IJ^-_0(\gamma )|+|I^+_0(\gamma )|\).
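The left and right derivatives at any \(\gamma \) follow immediately from (12); the sketch below (our own helper `subdiff_bounds`) returns the endpoints of the interval \(\partial e(\gamma )\), so that optimality amounts to checking \(\text{left} \le 0 \le \text{right}\):

```python
def subdiff_bounds(gamma, alphas, betas):
    """Endpoints of the interval partial e(gamma), from (12)."""
    n_neg_act = int(np.sum(alphas > gamma))     # |IJ^-_-(gamma)|
    n_neg_kink = int(np.sum(alphas == gamma))   # |IJ^-_0(gamma)|
    n_pos_act = int(np.sum(-betas < gamma))     # |I^+_+(gamma)|
    n_pos_kink = int(np.sum(-betas == gamma))   # |I^+_0(gamma)|
    left = n_pos_act - n_neg_act - n_neg_kink   # left derivative
    right = n_pos_act + n_pos_kink - n_neg_act  # right derivative
    return left, right
```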
Letting
\(\gamma _\alpha = \max _{i \in I^-,\, j \in J^-_i} \alpha _{ij}\)
and
\(\gamma _\beta = \min _{i \in I^+} (-\beta _i),\)
the following property holds.
Proposition 1
The optimal objective function value \(e^*\) of problem (9) is equal to zero if and only if
\(\gamma _\alpha \le \gamma _\beta .\)
In such a case, every \(\gamma \in [\gamma _\alpha , \gamma _\beta ]\) is optimal.
Proof
Straightforward from (8). \(\square \)
We consider now the case \(\gamma _\alpha >\gamma _\beta \) and state the following theorem.
Theorem 1
If \(\gamma _\alpha >\gamma _\beta \), then there exists an optimal kink solution \(\gamma ^* \in [\gamma _\beta , \gamma _\alpha ]\).
Proof
We prove first that any optimal solution belongs to the interval \([\gamma _\beta ,\gamma _\alpha ]\). We observe in fact that, for every \(\bar{\gamma }< \gamma _\beta \) and for \(i \in I^+\), it is \(\max \{0,\beta _i+\bar{\gamma }\}=0\), while there exists at least one pair (i, j) such that \(\max \{0, \alpha _{ij}-\bar{\gamma }\}>0\). This implies that the directional derivative of \(e(\gamma )\) at \(\bar{\gamma }\) along the positive semi-axis is negative. A similar argument can be used to show that at any \(\bar{\gamma }>\gamma _\alpha \) the directional derivative along the positive semi-axis is positive. As a consequence, \(e(\gamma )\) necessarily attains its optimal solutions in the interval \([\gamma _\beta ,\gamma _\alpha ]\).
Now, observing that \(\gamma _\alpha \) and \(\gamma _\beta \) are both kinks, consider any optimal non-kink solution \(\gamma ^*\in (\gamma _\beta ,\gamma _\alpha )\). Since the function is differentiable at \(\gamma ^*\), i.e., \(|IJ^-_0(\gamma ^*)|=|I^+_0(\gamma ^*)|=0\), the derivative of \(e(\gamma )\) at \(\gamma ^*\) vanishes, that is, from (13),
\(|I^+_+(\gamma ^*)| - |IJ^-_-(\gamma ^*)| = 0.\)
Now consider the largest kink smaller than \(\gamma ^*\): the existence of such a kink is guaranteed recalling that \(\gamma _\beta \) is a kink and \(\gamma ^* > \gamma _\beta \). Assume for the time being that such a kink is \(\alpha _{sh}\) for some \(s \in I^-, h \in J_s^-\) and let \(\bar{\gamma }=\alpha _{sh}\). Since there are no kinks in the open interval \((\bar{\gamma }, \gamma ^*)\), it is
\(|IJ^-_-(\bar{\gamma })| = |IJ^-_-(\gamma ^*)|, \quad |I^+_+(\bar{\gamma })| = |I^+_+(\gamma ^*)|, \quad |IJ^-_0(\bar{\gamma })| \ge 1.\)
Summing up and taking into account (12), it follows \(0 \in \partial e(\bar{\gamma })\), i.e., \(\bar{\gamma }= \alpha _{sh} \) is an optimal kink solution. The case when the largest kink smaller than \(\gamma ^*\) is \(-\beta _s\) for some \(s \in I^+\) can be treated in a perfectly analogous way. \(\square \)
The properties of function \(e(\gamma )\) discussed above allow us to state the following kink-exploring algorithm, which computes an optimal solution \(\gamma ^*\) of problem (9).
Algorithm MIL-kink
Step 0 (Computing the kinks). Given \(\bar{w}\), compute the kinks \(\alpha _{ij}\), \(i \in I^-, \, j \in J_i^-\) and \(\beta _i\), \(i \in I^+\). Compute \(\gamma _\alpha \) and \(\gamma _\beta \). If \(\gamma _\alpha \le \gamma _\beta \), STOP: choose \(\gamma ^*\) as any value in the interval \([\gamma _\alpha ,\gamma _\beta ]\).
Step 1 (Sorting the kinks). Sort the kinks in the interval \([\gamma _\beta , \gamma _\alpha ]\), i.e., the \(\alpha _{ij}\)'s and the \(-\beta _i\)'s, in increasing order.
Step 2 (Exploring the kinks). Explore the kinks starting from \(\gamma _\beta \) until a value \(\gamma ^*=\alpha _{sh}\) for some \(s \in I^-, \, h \in J^-_s\), or \(\gamma ^*=-\beta _s\) for some \(s \in I^+\), is found such that \(0 \in \partial e(\gamma ^*)\).
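A compact NumPy rendition of the whole algorithm follows (again an illustrative sketch under our own naming, not the authors' MATLAB code). It exploits the fact that each kink, counted with multiplicity, increases the slope of \(e(\gamma )\) by exactly one; for simplicity the sketch scans all sorted kinks rather than restricting to \([\gamma _\beta , \gamma _\alpha ]\), since the first kink at which the slope becomes nonnegative necessarily lies in that interval:

```python
def mil_kink(alphas, betas):
    """Algorithm MIL-kink (Formulation 1): returns an optimal gamma for (9)."""
    gamma_a, gamma_b = alphas.max(), (-betas).min()
    if gamma_a <= gamma_b:                  # Step 0: zero-error case (Proposition 1)
        return gamma_a                      # any value in [gamma_a, gamma_b] is optimal
    kink_pts = np.sort(np.concatenate([alphas, -betas]))   # Step 1
    slope = -len(alphas)                    # slope of e(gamma) left of every kink
    for t in kink_pts:                      # Step 2
        slope += 1                          # right derivative just after kink t
        if slope >= 0:                      # then 0 is in partial e(t): optimal kink
            return t
```

Since the slope starts at \(-\bar{k}\) and increases by one per kink, the returned point is simply the \(\bar{k}\)-th smallest kink, in agreement with the complexity bound stated next.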
Proposition 2
Algorithm MIL-kink runs in time O(p), where
\(p = n(\bar{m} + \bar{k}) + (m + \bar{k})\log (m + \bar{k}),\)
with \(\bar{m}\) and \(\bar{k}\) being the total number of instances in the positive and negative bags, respectively.
Proof
The computation of the \(\alpha _{ij}\)'s and \(\beta _i\)'s at Step 0 is performed in time \(O(n\bar{k})\) and \(O(n\bar{m})\), respectively, while the computation of \(\gamma _{\alpha }\) and \(\gamma _{\beta }\) takes time \(O(\bar{k})\) and O(m), respectively. Sorting the kinks at Step 1 takes time \(O((m+\bar{k})\log (m+\bar{k}))\), while exploring the kinks at Step 2 is performed in time \(O(m+\bar{k})\). The claim follows. \(\square \)
We conclude this section by remarking that an alternative formulation of the error function is obtained by replacing function \(e_i^-(w, \gamma )\) in (3) by
\(e_i^-(w,\gamma ) = \max \{0,\, \max _{j \in J^-_i} w^{\top } x_j - \gamma + 1\}.\)
Such formulation will be referred to as Formulation 2, and its theoretical treatment is perfectly analogous to that of Formulation 1. Nevertheless, the two formulations differ markedly from the computational point of view, since in Formulation 2 the kinks \(\alpha _{ij}\), \(i \in I^-, \,j \in J_i^-\), characterizing Formulation 1 (see formula (6)) are replaced by
\(\alpha _{i} = \max _{j \in J^-_i} \bar{w}^{\top } x_j + 1, \quad i \in I^-,\)
which are far fewer (one per negative bag instead of one per negative instance). As a consequence, Algorithm MIL-kink runs in time O(q), where
\(q = n(\bar{m} + \bar{k}) + (m + k)\log (m + k).\)
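For completeness, under the same illustrative conventions as before, the Formulation 2 kinks would be computed per negative bag rather than per negative instance; the routine `mil_kink` above then applies unchanged, with only k negative kinks in place of \(\bar{k}\):

```python
def kinks_formulation2(w_bar, negative_bags):
    """Formulation 2: one alpha per negative bag (its highest-scoring instance)."""
    return np.array([1.0 + (bag @ w_bar).max() for bag in negative_bags])
```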
3 Numerical results
Algorithm MIL-kink, described in the previous section, has been implemented in MATLAB (version R2017b) on a Windows 10 system equipped with a 2.21 GHz processor and 16 GB of RAM. Both formulations (code MIL-kink\(^{1}\), corresponding to Formulation 1, and code MIL-kink\(^{2}\), corresponding to Formulation 2) have been tested on twelve data sets drawn from the literature (Andrews et al. 2003) and listed in Table 1. The first three data sets are image recognition problems, the last two consist in predicting whether a compound is a musk or not, while the TST data sets are large-scale text classification problems.
In all the experiments, we have set \(\bar{w}\) according to (5), taking \(M = 10^6\). Moreover, for each data set, we have adopted the classical tenfold cross-validation, obtaining the results reported in Table 2 in terms of average training and testing correctness.
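To make the pipeline concrete, an end-to-end use of the sketches from Sect. 2 on one training split might read as follows (illustrative only; data loading and the fold structure are omitted):

```python
w_bar = fixed_normal(neg_train, pos_train, M=1e6)   # normal (5) with M = 10^6
alphas, betas = kinks(w_bar, neg_train, pos_train)  # kinks (6)-(7)
gamma_star = mil_kink(alphas, betas)                # optimal bias via Algorithm MIL-kink
# A test bag B is labeled positive iff its best-scoring instance exceeds the bias.
predict = lambda bag: (bag @ w_bar).max() >= gamma_star
```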
In Table 3, we compare our results, in terms of average testing correctness and average CPU time, with those reported in Avolio and Fuduli (2021) and provided by the MATLAB implementations (launched on the same machine, with the same cross-validation fold structure) of the following algorithms taken from the literature:
-
mi-SPSVM (Avolio and Fuduli 2021): it is an instance-space approach, generating a separation hyperplane placed in the middle between a supporting hyperplane for the instances of the negative bags and a clustering hyperplane for the instances of the positive bags.
-
mi-SVM (Andrews et al. 2003): it is an instance-space approach, where a separating hyperplane is constructed by solving an SVM type optimization model by means of a BCD technique (Tseng 2001).
-
MIL-RL (Astorino et al. 2019a): it is an instance-space approach, which provides a separating hyperplane by solving, by means of a Lagrangian relaxation technique (Gaudioso 2020), the same SVM type optimization model adopted in mi-SVM.
All the algorithms listed above share with MIL-kink the characteristic of providing a linear separation classifier (i.e., a hyperplane); thus, the CPU time reported in Table 3 corresponds exactly to the execution time, averaged over the ten folds, needed to compute such a hyperplane.
In Table 3, for each data set, the best results have been underlined. Comparing all the algorithms, we observe that our approach is clearly very fast (with a CPU time always less than one second), especially when Formulation 2 is adopted (since the number of explored kinks is considerably smaller than in Formulation 1). In terms of average testing correctness, MIL-kink outperforms the other algorithms on four data sets (Elephant, Tiger, TST7, TST10), showing a comparable performance on the remaining test problems (especially on TST4 and TST9).
To gauge the performance of our method with respect to further approaches drawn from the literature, in Table 4 we report the comparison of our technique (only in terms of average testing correctness) against the following MIL algorithms, whose results have been taken from the corresponding papers:
-
MI-NPSVM (Zhang et al. 2013): it is an instance-space approach, generating two nonparallel hyperplanes by solving two respective SVM type problems.
-
MIRSVM (Melki et al. 2018): it is an embedding-space SVM type approach, based on identifying, at each iteration, the instances that most impact the classification process.
-
SSLM-MIL (Yuan et al. 2021): it is an instance-space approach, based on spherical separation with margin.
For each data set, the best results have been underlined, and the character “-” means that the corresponding datum is not available. Moreover, as for MI-NPSVM, in order to have a fair comparison, we have considered only the linear kernel version, for which there is no result on the musk data sets.
Looking at the results of Table 4, we observe that MIL-kink exhibits the best performance on three data sets (TST7, TST9 and TST10) and also provides quite reasonable results on Elephant, TST1 and TST4.
4 Conclusions
We have presented a fast heuristic algorithm for solving binary MIL problems characterized by two classes of instances. Our approach gives rise to a nonsmooth univariate optimization model that we solve exactly by simply exploring the kink points. The numerical results appear interesting mainly in terms of computational time, thus suggesting the use of the method either for dealing with very large data sets or as a first tool to check the viability of a MIL approach in a specific application.
References
Amores J (2013) Multiple instance classification: review, taxonomy and comparative study. Artif Intell 201:81–105
Andrews S, Tsochantaridis I, Hofmann T (2003) Support vector machines for multiple-instance learning. In: Becker S, Thrun S, Obermayer K (eds) Advances in neural information processing systems. MIT Press, Cambridge, pp 561–568
Astorino A, Frangioni A, Gaudioso M, Gorgone E (2011) Piecewise quadratic approximations in convex numerical optimization. SIAM J Optim 21(4):1418–1438
Astorino A, Fuduli A, Gaudioso M (2019a) A Lagrangian relaxation approach for binary multiple instance classification. IEEE Trans Neural Netw Learn Syst 30(9):2662–2671
Astorino A, Fuduli A, Gaudioso M, Vocaturo E (2019b) Multiple instance learning algorithm for medical image classification. In: CEUR workshop proceedings, vol. 2400
Astorino A, Fuduli A, Giallombardo G, Miglionico G (2019c) SVM-based multiple instance classification via DC optimization. Algorithms 12:249
Astorino A, Fuduli A, Veltri P, Vocaturo E (2017) On a recent algorithm for multiple instance learning. Preliminary applications in image classification. In: Proceedings - 2017 IEEE international conference on bioinformatics and biomedicine, BIBM 2017, vol 2017, pp 1615–1619
Astorino A, Fuduli A, Veltri P, Vocaturo E (2020) Melanoma detection by means of multiple instance learning. Interdiscip Sci Comput Life Sci 12(1):24–31
Astorino A, Gaudioso M, Fuduli A, Vocaturo E (2018) A multiple instance learning algorithm for color images classification. In: ACM international conference proceeding series, pp. 262–266
Avolio M, Fuduli A (2021) A semiproximal support vector machine approach for binary multiple instance learning. IEEE Trans Neural Netw Learn Syst 32(8):3566–3577
Bergeron C, Moore G, Zaretzki J, Breneman C, Bennett K (2012) Fast bundle algorithm for multiple instance learning. IEEE Trans Pattern Anal Mach Intell 34(6):1068–1079
Carbonneau M, Cheplygina V, Granger E, Gagnon G (2018) Multiple instance learning: a survey of problem characteristics and applications. Pattern Recogn 77:329–353
Dietterich T, Lathrop R, Lozano-Pérez T (1997) Solving the multiple instance problem with axis-parallel rectangles. Artif Intell 89(1–2):31–71
Gärtner T, Flach P, Kowalczyk A, Smola A (2002) Multi-instance kernels. In: Proceedings of the 19th international conference on machine learning. Morgan Kaufmann, pp 179–186
Gaudioso M (2020) A view of Lagrangian relaxation and its applications. Numerical nonsmooth optimization: state of the art algorithms. Springer, pp 579–617
Gaudioso M, Giallombardo G, Miglionico G (2020a) Essentials of numerical nonsmooth optimization. 4OR 18(1):1–47
Gaudioso M, Giallombardo G, Miglionico G, Vocaturo E (2020b) Classification in the multiple instance learning framework via spherical separation. Soft Comput 24(7):5071–5077
Gaudioso M, Monaco M (1992) Variants to the cutting plane approach for convex nondifferentiable optimization. Optimization 25(1):65–75
Herrera F, Ventura S, Bello R, Cornelis C, Zafra A, Sánchez-Tarragó D, Vluymans S (2016) Multiple instance learning: foundations and algorithms. Springer International Publishing, Berlin
Le Thi H, Pham Dinh T (2005) The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann Oper Res 133:23–46
Li Y, Kwok JT, Tsang IW, Zhou Z (2009) A convex method for locating regions of interest with multi-instance learning, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol 5782 LNAI, pp 15–30
Mangasarian O, Wild E (2008) Multiple instance classification via successive linear programming. J Optim Theory Appl 137(3):555–568
Melki G, Cano A, Ventura S (2018) MIRSVM: multi-instance support vector machine with bag representatives. Pattern Recogn 79:228–241
Quellec G, Cazuguel G, Cochener B, Lamard M (2017) Multiple-instance learning for medical image and video analysis. IEEE Rev Biomed Eng 10:213–234
Shan C, Liu L, Xue J, Sun Z, Ma T (2018) Multiple-instance support vector machine based on a new local feature of hierarchical weighted spatio-temporal interest points. J Internet Technol 19(3)
Tseng P (2001) Convergence of a block coordinate descent method for nondifferentiable minimization. J Optim Theory Appl 109(3):475–494
Vocaturo E, Zumpano E, Giallombardo G, Miglionico G (2020) DC-SMIL: a multiple instance learning solution via spherical separation for automated detection of dysplastic nevi. In: ACM international conference proceeding series
Wang J, Zucker JD (2000) Solving the multiple-instance problem: a lazy learning approach. In: Proceedings of the seventeenth international conference on machine learning, ICML ’00, pp 1119–1126
Yuan M, Xu Y, Feng R, Liu Z (2021) Instance elimination strategy for non-convex multiple-instance learning using sparse positive bags. Neural Netw 142:509–521
Zhang Q, Perra N, Perrotta D, Tizzoni M, Paolotti D, Vespignani A (2017) Forecasting seasonal influenza fusing digital indicators and a mechanistic disease model. In: Proceedings of the 26th international conference on world wide web, WWW ’17, pp 311–319
Zhang Q, Tian Y, Liu D (2013) Nonparallel support vector machines for multiple-instance learning. Procedia Comput Sci 17:1063–1072
Zhou ZH, Sun YY, Li YF (2009) Multi-instance learning by treating instances as non-IID samples. In: Proceedings of the 26th annual international conference on machine learning, pp 1249–1256. ACM
Funding
No funding was received.
Author information
Contributions
The authors contributed to each part of this paper equally.
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Fuduli, A., Gaudioso, M., Khalaf, W. et al. A heuristic approach for multiple instance learning by linear separation. Soft Comput 26, 3361–3368 (2022). https://doi.org/10.1007/s00500-021-06713-1