- Research article, October 2024
On the effectiveness of hybrid pooling in mixup-based graph learning for language processing
Journal of Systems and Software (JSSO), Volume 216, Issue C, https://doi.org/10.1016/j.jss.2024.112139
Abstract: Graph neural network (GNN)-based graph learning has been popular in natural language and programming language processing, particularly in text and source code classification. Typically, GNNs are constructed by incorporating alternating layers ...
Highlights
- Provide guidelines for choosing a suitable graph pooling operator for training graph neural networks (GNNs).
- Demonstrate that hybrid pooling is more effective for Mixup-based graph learning than standard pooling.
- Demonstrate a smaller ...
- Article, September 2024
Outside the Comfort Zone: Analysing LLM Capabilities in Software Vulnerability Detection
Abstract: The significant increase in software production driven by automation and faster development lifecycles has resulted in a corresponding surge in software vulnerabilities. In parallel, the evolving landscape of software vulnerability detection, ...
- Research article, July 2024
A comprehensive analysis on software vulnerability detection datasets: trends, challenges, and road ahead
International Journal of Information Security (IJOIS), Volume 23, Issue 5, Pages 3311–3327, https://doi.org/10.1007/s10207-024-00888-y
Abstract: As society’s dependence on information and communication systems (ICTs) grows, so does the necessity of guaranteeing the proper functioning and use of such systems. In this context, it is critical to enhance the security and robustness of the ...
- Research article, July 2024
Towards Exploring the Limitations of Test Selection Techniques on Graph Neural Networks: An Empirical Study
Empirical Software Engineering (KLU-EMSE), Volume 29, Issue 5, https://doi.org/10.1007/s10664-024-10515-y
Abstract: Graph Neural Networks (GNNs) have gained prominence in various domains, such as social network analysis, recommendation systems, and drug discovery, due to their ability to model complex relationships in graph-structured data. GNNs can exhibit ...
- Survey, April 2024
Test Optimization in DNN Testing: A Survey
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 33, Issue 4, Article No. 111, Pages 1–42, https://doi.org/10.1145/3643678
This article presents a comprehensive survey on test optimization in deep neural network (DNN) testing. Here, test optimization refers to testing with low data labeling effort. We analyzed 90 papers, including 43 from the software engineering (SE) ...
- Research article, March 2024
Active Code Learning: Benchmarking Sample-Efficient Training of Code Models
IEEE Transactions on Software Engineering (ISOF), Volume 50, Issue 5, Pages 1080–1095, https://doi.org/10.1109/TSE.2024.3376964
The costly human effort required to prepare the training data of machine learning (ML) models hinders their practical development and usage in software engineering (ML4Code), especially for those with limited budgets. Therefore, efficiently training ...
- Research article, December 2023
KAPE: kNN-based Performance Testing for Deep Code Search
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 33, Issue 2, Article No. 48, Pages 1–24, https://doi.org/10.1145/3624735
Code search is a common yet important activity of software developers. An efficient code search model can largely facilitate the development process and improve the programming quality. Given the superb performance of learning the contextual ...
- Research article, November 2023
LaF: Labeling-free Model Selection for Automated Deep Neural Network Reusing
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 33, Issue 1, Article No. 25, Pages 1–28, https://doi.org/10.1145/3611666
Applying deep learning (DL) to science is a new trend in recent years, which leads DL engineering to become an important problem. Although training data preparation, model architecture design, and model training are the normal processes to build DL models, ...
- Research article, September 2024
MUTEN: Mutant-Based Ensembles for Boosting Gradient-Based Adversarial Attack
ASE '23: Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering, Pages 1708–1712, https://doi.org/10.1109/ASE56229.2023.00042
Mutation testing (MT) for deep learning (DL) has gained huge attention in the past few years. However, how MT can really help DL is still unclear. In this paper, we introduce one promising direction for the usage of mutants. Specifically, since mutants ...
- Article, January 2024
An Empirical Study of the Imbalance Issue in Software Vulnerability Detection
Abstract: Vulnerability detection is crucial to protect software security. Nowadays, deep learning (DL) is the most promising technique to automate this detection task, leveraging its superior ability to extract patterns and representations within extensive ...
- Research article, September 2023
CodeS: Towards Code Model Generalization Under Distribution Shift
ICSE-NIER '23: Proceedings of the 45th International Conference on Software Engineering: New Ideas and Emerging Results, Pages 1–6, https://doi.org/10.1109/ICSE-NIER58687.2023.00007
Distribution shift has been a longstanding challenge for the reliable deployment of deep learning (DL) models due to unexpected accuracy degradation. Although DL has been becoming a driving force for large-scale source code analysis in the big code ...
- Research article, July 2023
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation
ICSE '23: Proceedings of the 45th International Conference on Software Engineering, Pages 1776–1787, https://doi.org/10.1109/ICSE48619.2023.00152
Deep learning (DL) plays a more and more important role in our daily life due to its competitive performance in industrial application domains. As the core of DL-enabled systems, deep neural networks (DNNs) need to be carefully evaluated to ensure the ...
- Research article, October 2022
DRE: density-based data selection with entropy for adversarial-robust deep learning models
Neural Computing and Applications (NCAA), Volume 35, Issue 5, Pages 4009–4026, https://doi.org/10.1007/s00521-022-07812-2
Abstract: Active learning helps software developers reduce the labeling cost when building high-quality machine learning models. A core component of active learning is the acquisition function that determines which data should be selected to annotate. State- ...
- Research article, July 2022
An Empirical Study on Data Distribution-Aware Test Selection for Deep Learning Enhancement
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 31, Issue 4, Article No. 78, Pages 1–30, https://doi.org/10.1145/3511598
Similar to traditional software that is constantly under evolution, deep neural networks need to evolve upon the rapid growth of test data for continuous enhancement (e.g., adapting to distribution shift in a new environment for deployment). However, it ...
- Short paper, October 2022
Robust active learning: sample-efficient training of robust deep learning models
CAIN '22: Proceedings of the 1st International Conference on AI Engineering: Software Engineering for AI, Pages 41–42, https://doi.org/10.1145/3522664.3528614
Active learning is an established technique to reduce the labeling cost for building high-quality machine learning models. However, state-of-the-art approaches focus on maximizing the clean performance (e.g., accuracy) while disregarding robustness. In ...
- Research article, June 2022
Towards exploring the limitations of active learning: an empirical study
ASE '21: Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering, Pages 917–929, https://doi.org/10.1109/ASE51524.2021.9678672
Deep neural networks (DNNs) are increasingly deployed as integral parts of software systems. However, due to the complex interconnections among hidden layers and massive hyperparameters, DNNs must be trained using a large number of labeled inputs, which ...
- Article, September 2019
Eye Movement-Based Analysis on Methodologies and Efficiency in the Process of Image Noise Evaluation
Artificial Neural Networks and Machine Learning – ICANN 2019: Image Processing, Pages 29–40, https://doi.org/10.1007/978-3-030-30508-6_3
Abstract: Noise level (image quality) evaluation is an important and popular topic in many applications. However, the knowledge of how people visually explore distorted images for making decisions on noise evaluation is rather limited. In this paper, we ...
- Article, May 2018
A group-based signal filtering approach for trajectory abstraction and restoration
Neural Computing and Applications (NCAA), Volume 29, Issue 9, Pages 371–387, https://doi.org/10.1007/s00521-017-3148-8
Trajectory abstraction is used to summarize the large amount of information delivered by the trajectory data, and trajectory restoration is used to reconstruct lost parts of trajectories. To cope with complex trajectory data, in this paper, we propose a ...
- Article, November 2015
XaIBO: An Extension of aIB for Trajectory Clustering with Outlier
ICONIP 2015: Proceedings, Part II, of the 22nd International Conference on Neural Information Processing - Volume 9490, Pages 423–431, https://doi.org/10.1007/978-3-319-26535-3_48
Clustering plays an important role for trajectory analysis. The agglomerative Information Bottleneck (aIB) approach is effective for successfully managing an optimum number of clusters without the need of an explicit measure of trajectory distance, which ...
- Article, July 2015
Visualization on Agglomerative Information Bottleneck Based Trajectory Clustering
IV '15: Proceedings of the 2015 19th International Conference on Information Visualisation, Pages 557–560, https://doi.org/10.1109/iV.2015.98
Undoubtedly, visualization of the trajectory clustering outputs is very important, and some research has been done on visualizing the clustering results. Still more important is research on visualizing the procedure of clustering, which is also of ...