Bart Goossens is a professor at imec and Ghent University, Belgium, working in the domain of image/video processing and medical imaging. He also currently serves as an associate editor for IEEE Transactions on Image Processing. Address: Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium.
High-throughput plant phenotyping platforms produce immense volumes of image data. Here, a binary segmentation of maize colour images is required for 3D reconstruction of plant structure and measurement of growth traits. To this end, we employ a convolutional neural network (CNN) to perform this segmentation successfully.
Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks. In this study, we show that effective regional perturbations can be generated without resorting to complex methods. We develop a very simple regional adversarial perturbation attack method using cross-entropy sign, one of the most commonly used losses in adversarial machine learning. Our experiments on ImageNet with multiple models reveal that, on average, $76\%$ of the generated adversarial examples maintain model-to-model transferability when the perturbation is applied to local image regions. Depending on the selected region, these localized adversarial examples require significantly less $L_p$ norm distortion (for $p \in \{0, 2, \infty\}$) compared to their non-local counterparts. These localized attacks therefore have the potential to undermine defenses that claim robustness under the aforementioned norms.
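The core construction described above — a sign-of-gradient step restricted to an image region — can be sketched in a few lines. This is a minimal numpy sketch, not the paper's code: the gradient here is a random stand-in for the true cross-entropy gradient of a trained model, and the helper name, region, and step size are all illustrative.

```python
import numpy as np

def regional_sign_perturbation(grad, mask, eps):
    """Sign-of-gradient step restricted to a region: eps * sign(grad)
    inside the mask, zero elsewhere (illustrative helper)."""
    return eps * np.sign(grad) * mask

rng = np.random.default_rng(0)
x = rng.random((8, 8))               # toy "image" with values in [0, 1]
grad = rng.standard_normal((8, 8))   # stand-in for a cross-entropy gradient

mask = np.zeros_like(x)              # perturb only a 3x3 corner region
mask[:3, :3] = 1.0

eps = 0.05
x_adv = np.clip(x + regional_sign_perturbation(grad, mask, eps), 0.0, 1.0)
```

Restricting the perturbation with a mask bounds the $L_0$ distortion by the region size and keeps the $L_\infty$ distortion at most eps, which is why the localized examples need a smaller $L_p$ budget than their non-local counterparts.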
Inductive Venn-ABERS predictors (IVAPs) are a type of probabilistic predictor with the theoretical guarantee that their predictions are perfectly calibrated. We propose to exploit this calibration property for the detection of adversarial examples in binary classification tasks. By rejecting predictions if the uncertainty of the IVAP is too high, we obtain an algorithm that is both accurate on the original test set and significantly more robust to adversarial examples. The method appears to be competitive with the state of the art in adversarial defense, both in terms of robustness and scalability.
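An IVAP outputs a pair of probabilities (p0, p1) for the positive class, and the width of that interval measures uncertainty. The rejection rule described above can be sketched as follows; the function name and the threshold tau are illustrative and not taken from the paper, while the merged point prediction p1 / (1 - p0 + p1) is the standard Venn-ABERS merger that minimizes regret under log loss.

```python
def ivap_predict_or_reject(p0, p1, tau=0.1):
    """Abstain when the IVAP's probability interval [p0, p1] is wider
    than tau (a likely adversarial or otherwise uncertain input);
    otherwise merge the pair into a single calibrated probability."""
    if p1 - p0 > tau:
        return None                      # reject the prediction
    return p1 / (1.0 - p0 + p1)          # minimax merger for log loss

print(ivap_predict_or_reject(0.40, 0.45))   # narrow interval: merged probability
print(ivap_predict_or_reject(0.20, 0.80))   # wide interval: None (rejected)
```

Tuning tau trades off coverage on clean inputs against robustness: a smaller tau rejects more adversarial examples but also abstains on more legitimate ones.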
Advanced Concepts for Intelligent Vision Systems, 2015
Computationally and data-intensive applications, such as video processing algorithms, are traditionally developed in programming languages such as C/C++. In order to cope with increasingly demanding requirements (e.g., real-time processing of large datasets), hardware accelerators such as GPUs have emerged to aid multi-core CPUs with computationally intensive tasks. Because these accelerators offer performance improvements for many (but often not all) operations, the programmer needs to decide which parts of the code are best developed for the accelerator and which for the CPU. Development for heterogeneous devices comes at a cost: 1) the sophisticated programming and debugging techniques lead to a steep learning curve, 2) development and optimization often require substantial effort and time from the programmer, 3) different versions of the code often need to be written for different target platforms, and 4) the resulting code may not be future-proof: it is not guaranteed to work optimally on future devices. In this talk we present a new programming language, Quasar, which mitigates these common drawbacks. Quasar is an easy-to-learn, high-level programming language that is hardware-independent, ideal for both rapid prototyping and full deployment on heterogeneous hardware.
The use of graphical processing units (GPUs) for general-purpose calculations has gained a lot of attention, since speed-up factors of 10x-50x compared to single-threaded CPU execution are not uncommon. This makes the use of GPUs for scientific number-crunching applications very appealing. However, GPU programming is challenging, requiring significant programming expertise in order to obtain these accelerations. The low-level programming required to harvest the GPU's parallel power is a major drawback for research, both in industry and in academia. In a research environment, algorithms typically have to be rapidly tested and adjusted as a proof of concept, and little time can be spent on implementation optimization. In this tutorial we present Quasar, a new programming framework that takes care of many common challenges in GPU programming, e.g., memory management, load balancing and scheduling. Quasar consists of a high-level programming language with an abstraction level similar to that of Python or Matlab, making it well suited for rapid prototyping. We demonstrate the use of this programming language on a number of examples. We show how to start from a straightforward parallelization and further improve it based on feedback from the profiler and automated profiling analysis. Finally, attendees will be able to exercise with Quasar and the IDE tools in a hands-on part of the tutorial. Tutorial goals: The goal of the proposed tutorial is to introduce high-level programming of heterogeneous hardware to the participants. A second goal is to get attendees acquainted with relevant development tools, and to show how, based on the feedback from these tools, they can easily improve their algorithms without the need for low-level optimizations.
You will benefit from the tutorial by 1) having a low barrier of entry for GPU programming, and 2) having shorter development cycles than with classical low-level languages for heterogeneous hardware, thanks to the use of a high-level programming language.
We introduce a new approach to faster MRI acquisition. By reducing the number of data samples in combination with a new MRI reconstruction method, we are able to reduce the acquisition time by a factor of 20 without introducing disturbing artifacts. In order to reconstruct the image, we have to iteratively apply the non-uniform fast Fourier transform (NUFFT). This step turns out to be a major bottleneck. Therefore, we have accelerated the NUFFT on the GPU. The resulting speedup opens up new options for MRI research.
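The iterative reconstruction loop has roughly the following shape. This is a heavily simplified numpy sketch on a Cartesian grid with a plain FFT and a toy sparse image; the actual method works with non-uniformly sampled k-space via the NUFFT, and all parameter values here are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding: the proximal step of an L1 sparsity prior."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0.0)

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0  # toy sparse image

# Keep ~1/20 of the k-space samples, mimicking 20x undersampling.
mask = rng.random((64, 64)) < 0.05
y = mask * np.fft.fft2(img)

# Alternate a sparsity step with a data-consistency step. In the real
# method, the two Fourier transforms per iteration become (adjoint)
# NUFFTs, which is why they dominate the run time and were moved to the GPU.
x = np.fft.ifft2(y)                      # zero-filled starting point
for _ in range(50):
    x = soft_threshold(x, 0.01)          # promote sparsity
    k = np.fft.fft2(x)
    k[mask] = y[mask]                    # re-impose the measured samples
    x = np.fft.ifft2(k)
```

By construction, the final estimate agrees exactly with the measured k-space samples, while the thresholding step fills in the missing samples using the sparsity prior.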
PHENOVISION is a high-throughput plant phenotyping system for crop plants in greenhouse conditions. A conveyor belt transports plants between automated irrigation stations and imaging cabins. The aim is to phenotype maize varieties grown under different conditions. To this end, we model the plants in 3D and automate the measurement of the plants.
The recent advent of 3D in Electron Microscopy (EM) has allowed for the detection of detailed sub-cellular structures at nanometer resolution. While being a scientific breakthrough, this has also caused an explosion in dataset size, necessitating the development of automated workflows. Automated workflows typically benefit reproducibility and throughput compared to manual analysis. The risk of automation is that it ignores the expertise of the microscopy user that comes with manual analysis. To mitigate this risk, this paper presents a hybrid paradigm. We propose a ‘human-in-the-loop’ (HITL) approach that combines expert microscopy knowledge with the power of large-scale parallel computing to improve EM image quality through advanced image restoration algorithms. An interactive graphical user interface, publicly available as an ImageJ plugin, was developed to allow biologists to use our framework in an intuitive and user-friendly fashion. We show that this plugin improves visualization...