In this paper, a novel method for automatic ground truth generation of camera-captured document images is proposed. Currently, no dataset is available for camera-captured documents, since building such datasets manually is very laborious and costly. The proposed method is fully automatic, allowing a very large-scale (i.e., millions of images) labeled camera-captured document dataset to be built without any human intervention. Evaluation of samples generated by the proposed approach shows that 99.98% of the images are correctly labeled. The novelty of the proposed approach lies in the use of document image retrieval for automatic labeling, especially for camera-captured documents, which contain camera-specific distortions such as blur, occlusion, and perspective distortion.
Traditional neural networks trained using point-based maximum likelihood estimation are deterministic models and have exhibited near-human performance in many image classification tasks. However, representing network parameters with point estimates renders them incapable of capturing all possible combinations of the weights, resulting in a predictor biased towards its initialisation. Most importantly, these deterministic networks are inherently unable to provide any uncertainty estimate for their predictions, which is highly sought after in many critical application areas. Bayesian neural networks, on the other hand, place a probability distribution on network weights and give a built-in regularisation effect, making these models able to learn well from small datasets without overfitting. These networks provide a way of generating a posterior distribution which can be used for estimating the model's uncertainty. However, Bayesian estimation is computationally very expensive since it greatly widens the parameter space. This paper proposes a hybrid convolutional neural network which combines the high accuracy of deterministic models with the posterior distribution approximation of Bayesian neural networks. The hybrid architecture is validated on 13 publicly available benchmark classification datasets from a wide range of domains and different modalities, such as natural scene images, medical images, and time series. Our results show that the proposed hybrid approach outperforms both deterministic and Bayesian methods in terms of classification accuracy while also providing an uncertainty estimate for every prediction. We further employ this uncertainty to filter out unconfident predictions and achieve a significant additional gain in accuracy for the remaining predictions.
INDEX TERMS Bayesian estimation, convolutional neural networks, hybrid neural networks, image classification, time-series classification, uncertainty estimation.
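The uncertainty-based filtering step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the network outputs mean class probabilities (e.g., averaged over Monte Carlo samples from the Bayesian head), scores each prediction by its predictive entropy, and reports accuracy before and after discarding unconfident predictions. The function name and threshold are hypothetical.

```python
import numpy as np

def filter_by_uncertainty(probs, labels, threshold=0.5):
    """Keep only predictions whose predictive entropy is below the threshold.

    probs: (n_samples, n_classes) array of mean softmax outputs.
    Returns accuracy on all predictions, accuracy on the retained subset,
    and the fraction of predictions retained (coverage).
    """
    preds = probs.argmax(axis=1)
    # Predictive entropy as the uncertainty score (higher = less confident).
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    keep = entropy < threshold
    acc_all = (preds == labels).mean()
    acc_kept = (preds[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return acc_all, acc_kept, keep.mean()

# Toy example: a confident correct, a confident wrong, and an uncertain
# prediction; the uncertain one is filtered out, raising subset accuracy.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
labels = np.array([0, 1, 2])
acc_all, acc_kept, coverage = filter_by_uncertainty(probs, labels)
```

In practice the threshold trades coverage against accuracy: lowering it keeps fewer predictions but those retained are more likely to be correct.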
Small non-coding RNAs (ncRNAs) are attracting increasing attention as they are now considered potentially valuable resources in the development of new drugs intended to cure several human diseases. A prerequisite for the development of drugs targeting ncRNAs or the related pathways is the identification and correct classification of such ncRNAs. State-of-the-art small ncRNA classification methodologies use secondary structural features as input. However, such feature extraction approaches only take global characteristics into account and completely ignore the correlative effects of local structures. Furthermore, secondary structure based approaches incorporate a high-dimensional feature space which is computationally expensive. The present paper proposes a novel Robust and Precise ConvNet (RPC-snRC) methodology which classifies small ncRNAs into relevant families by utilizing their primary sequence. The RPC-snRC methodology learns a hierarchical representation of features by utilizing the positions and occurrences of nucleotides. To avoid exploding and vanishing gradient problems, we use an approach similar to DenseNet, in which the gradient can flow straight from subsequent layers to previous layers. In order to assess the effectiveness of deeper architectures for small ncRNA classification, we also adapted two ResNet architectures having different numbers of layers. Experimental results on a benchmark small ncRNA dataset show that the proposed methodology not only outperforms existing small ncRNA classification approaches by a significant margin of 10%, but also gives better results than the adapted ResNet architectures. To reproduce the results, source code and dataset are available at https://github.com/muas16/small-noncoding-RNA-classification
INDEX TERMS RNA sequence analysis, small non-coding RNA classification, DenseNet, ResNet.
The present study characterised locally available whey samples of cheddar, mozzarella and paneer for physicochemical and nutritional attributes. The results revealed that the cheddar whey exhibited pH (5.41±0.16), crude protein (0.83±0.03%), fat (0.25±0.01%), lactose (4.95±0.21%) and total solids (6.55±0.27%) slightly higher than those of mozzarella and paneer whey. On the other hand, the paneer whey showed acidity (0.30±0.01) and ash content (0.56±0.02) slightly higher than those of cheddar and mozzarella whey. Furthermore, the mozzarella whey revealed total plate count values (3.17±0.09 × 10⁴ cfu/mL) slightly higher than those of cheddar and paneer whey samples. The paneer whey contained amounts of calcium (25.02 ± 1.34), magnesium (4.88 ± 0.23), sodium (32.11 ± 1.37) and potassium (97.55 ± 3.54) slightly higher than those of cheddar and mozzarella whey. The cheddar whey possessed the highest amounts of essential and non-essential amino acids, followed by mozzarella and paneer whey. Thus, cheddar whey exhibited the best physicochemical and nutritional profile among all the whey samples, so it can be used to prepare high-quality, novel and nutritious sports drinks for sportsmen.
Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions
Neural networks (NN) are considered black boxes due to the lack of explainability and transparency of their decisions. This significantly hampers their deployment in environments where explainability is essential along with the accuracy of the system. Recently, significant efforts have been made towards the interpretability of these deep networks with the aim of opening up the black box. However, most of these approaches are specifically developed for visual modalities. In addition, the interpretations provided by these systems require expert knowledge and understanding for intelligibility. This indicates a vital gap between the explainability provided by the systems and the novice user. To bridge this gap, we present a novel framework, the Time-Series eXplanation (TSXplain) system, which produces a natural language based explanation of the decision taken by a NN. It uses extracted statistical features to describe the decision of a NN, merging the deep learning world with that of statistics. The two-level explanation provides an ample description of the decision made by the network to aid expert and novice users alike. Our survey and reliability assessment test confirm that the generated explanations are meaningful and correct. We believe that generating natural language based descriptions of the network's decisions is a big step towards opening up the black box.
Proceedings of the 10th International Conference on Agents and Artificial Intelligence
We present a hierarchical framework for zero-shot human-activity recognition that recognizes unseen activities as combinations of preliminarily learned basic actions and involved objects. The presented framework consists of a gaze-guided object recognition module, a Myo-armband-based action recognition module, and an activity recognition module, which combines results from both the action and object modules to detect complex activities. Both the object and action recognition modules are based on deep neural networks. Unlike conventional models, the proposed framework does not need retraining to recognize an unseen activity, provided the activity can be represented by a combination of the predefined basic actions and objects. This framework brings a competitive advantage to industry in terms of service-deployment cost. The experimental results showed that the proposed model could recognize three types of activities with a precision of 77% and a recall of 82%, which is comparable to a baseline method based on supervised learning.
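The composition step described above can be sketched with a simple rule table. This is an illustrative sketch, not the paper's implementation: the rule table, action names, and object names below are invented for the example, and each rule is scored by the product of the independently predicted action and object probabilities, so supporting a new activity only requires adding a rule, not retraining.

```python
# Hypothetical mapping from (action, object) pairs to activity labels.
ACTIVITY_RULES = {
    ("pour", "kettle"): "make_tea",
    ("cut", "bread"): "prepare_sandwich",
    ("wipe", "table"): "clean_table",
}

def recognize_activity(action_probs, object_probs):
    """Combine independent action and object posteriors into an activity.

    Each rule is scored by the product of its action and object
    probabilities; the highest-scoring rule wins.
    """
    best, best_score = None, 0.0
    for (action, obj), activity in ACTIVITY_RULES.items():
        score = action_probs.get(action, 0.0) * object_probs.get(obj, 0.0)
        if score > best_score:
            best, best_score = activity, score
    return best, best_score

# Example: the action module favours "pour", the object module "kettle".
activity, score = recognize_activity(
    {"pour": 0.8, "cut": 0.1, "wipe": 0.1},
    {"kettle": 0.7, "bread": 0.2, "table": 0.1},
)
```

A real system would likely normalise the scores or fall back to a "unknown activity" label when no rule scores above a threshold.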
2020 International Joint Conference on Neural Networks (IJCNN), 2020
Deep neural networks are black boxes by construction. Explanation and interpretation methods are therefore pivotal for trustworthy application. Existing methods are mostly based on heatmapping and focus on locally determining the relevant input parts triggering the network prediction. However, these methods struggle to uncover global causes. While this is a rare case in the image or NLP modality, it is of high relevance in the time series domain. This paper presents a novel framework, Conceptual Explanation, designed to evaluate the effect of abstract (local or global) input features on the model behavior. The method is model-agnostic and allows utilizing expert knowledge. On three time series datasets, Conceptual Explanation demonstrates its ability to pinpoint the causes inherent to the data that trigger the correct model prediction.
Development of disease-resistant, high-yielding wheat genotypes is the prime objective of all wheat breeding programmes. To study genetic diversity for production traits, 229 F5:8 Recombinant Inbred Lines (RILs) of wheat were planted in one-meter rows during 2012-13 at the University of Agriculture, Peshawar, Pakistan. Cluster analysis based on squared Euclidean distance and the UPGMA method categorized the RILs into six groups. Analysis revealed a high inter-cluster difference between cluster III and cluster VI, followed by clusters IV and VI, and then by clusters V and VI. Cluster I contains genotypes having maximum mean values for days to heading, flag leaf area and grains spike⁻¹, whereas cluster IV contains genotypes having maximum mean values for plant height, number of spikes, 1000-grain weight and grain yield. The results of this study revealed that RILs in cluster I and cluster IV could yield potential segregants.
Early detection of skin cancers like melanoma is crucial to ensure high chances of survival for patients. Clinical application of Deep Learning (DL)-based Decision Support Systems (DSS) for skin cancer screening has the potential to improve the quality of patient care. The majority of work in the medical AI community focuses on a diagnosis setting that is mainly relevant for autonomous operation. Practical decision support should, however, go beyond plain diagnosis and provide explanations. This paper provides an overview of works towards explainable, DL-based decision support in medical applications, with the example of skin cancer diagnosis from clinical, dermoscopic and histopathologic images. Analysis reveals that comparably little attention is paid to the explanation of histopathologic skin images and that current work is dominated by visual relevance maps as well as dermoscopic feature identification. We conclude that future work should focus on meeting the stakeholder's cogni...
Liquidity has been researched increasingly, both globally and locally, in the 21st century. One of the innovations for this is the liquidity-adjusted CAP model (LCAPM). It has been found that local factors determine a considerable part of the liquidity premium and that the differences between markets are significant. However, research in the Finnish stock market is limited and the results differ as the methodologies vary. The aim of this research is to study the price of liquidity risk in the Finnish stock market and see how the methodology affects the results. The research period is from the beginning of 2002 until the end of 2018, and the research data consists of daily observations of 176 stocks. Liquidity is measured with the Closing Percent Quoted Spread, and the price of liquidity risk using the unconditional LCAPM. The results suggest that two of the three systematic components of liquidity risk are priced along with the expected illiquidity. This means that investors in the Finnish stock market want a pr...
Brand hate is an extreme negative emotion that develops in the consumers of a brand when they perceive the brand as inappropriate for various reasons. Brand hate has been known to cause great harm to companies and their brands. Companies face negative consequences from brand haters such as negative word of mouth, brand rejection, brand boycotts, and anti-branding activities. This study has investigated the concept of brand hate in the light of the theory of hate and the theory of consumer brand relationships. The objectives of this study are to investigate whether the direct personal antecedents (negative past experience, symbolic incongruity, and poor relationship quality) and indirect non-personal antecedents (ideological incompatibility and rumor) trigger brand hate among consumers or not, and, moreover, to investigate whether the elements of the brand recovery process (apology, compensation, and explanation) help in minimizing brand hate or not. For the purpose of testing these ...
A novel data augmentation method suitable for wearable sensor data is proposed. Although numerous studies have revealed the importance of data augmentation for improving accuracy and robustness in machine-learning tasks, data augmentation methods applicable to wearable sensor data have not been well studied. Unlike conventional data augmentation methods, which are mainly developed for image and video analysis tasks, this study proposes a data augmentation method that can take a physical constraint of wearable sensors into account. The effectiveness of the proposed method was evaluated on a human-action-recognition task. The experimental results showed that the proposed method achieved significantly better accuracy compared to the cases where no data augmentation is applied and where a couple of simple data augmentations are applied.
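One common transform of this kind can be sketched as follows. This is not the paper's exact method, but an illustrative sketch: a small random rotation is applied to a 3-axis accelerometer recording, with the angle bounded to model slight variation in how the sensor is worn (the physical constraint) rather than arbitrary re-orientation. The function name and angle limit are assumptions.

```python
import numpy as np

def rotate_accelerometer(signal, max_angle_deg=30.0, rng=None):
    """Augment a 3-axis accelerometer recording by a small random rotation.

    signal: (n_samples, 3) array of x/y/z readings.
    The rotation angle is bounded by max_angle_deg, so the augmented data
    stays physically plausible for a sensor worn at a roughly fixed position.
    """
    rng = np.random.default_rng() if rng is None else rng
    angle = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    # Rodrigues' rotation formula: build the 3x3 rotation matrix.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return signal @ R.T

# Example: augment a synthetic 100-sample recording.
rng = np.random.default_rng(42)
sig = rng.normal(size=(100, 3))
aug = rotate_accelerometer(sig, rng=rng)
```

Because the transform is a pure rotation, the magnitude of each sample (and hence the gravity component's norm) is preserved, which is exactly the kind of invariant a physically constrained augmentation should respect.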
The contribution of this paper is twofold. First, it presents a novel approach called DeepBiRD which is inspired by human visual perception and exploits layout features to identify individual references in a scientific publication. Second, we present a new dataset for image-based reference detection with 2401 scans containing 12244 references, all manually annotated for individual references. Our proposed approach consists of two stages: first, it identifies whether a given document image is single-column or multi-column, and using this information the document image is split into individual columns; second, it performs layout-driven reference detection using Mask R-CNN on the given scientific publication. DeepBiRD was evaluated on two different datasets to demonstrate the generalization of this approach. The proposed system achieved an F-measure of 0.96 on our dataset. DeepBiRD detected 2.5 times more references than the current state-of-the-art approach on their own dataset. Therefor...
The field population of Taragama siva Lefebvre, a polyphagous forest insect pest, was noticed to be severely infected with a polyhedrosis virus at Jodhpur and adjacent localities during August-September, 1995. In field studies, a high incidence of disease was present in the young larval population. A sample of late-instar larvae collected from the field showed 96.66 per cent infection. Although one species of a dipterous parasite is known to attack T. siva, control by this agent appeared negligible in the present study. The number of cocoons formed at the end of the outbreak was extremely low. There was good evidence to suggest that the virus infection was the main cause of the sudden collapse of the pest population.
The field of explainable AI (XAI) has quickly become a thriving and prolific community. However, a silent, recurrent and acknowledged issue in this area is the lack of consensus regarding its terminology. In particular, each new contribution seems to rely on its own (and often intuitive) version of terms like "explanation" and "interpretation". Such disarray encumbers the consolidation of advances in the field towards the fulfillment of scientific and regulatory demands, e.g., when comparing methods or establishing their compliance w.r.t. biases and fairness constraints. We propose a theoretical framework that not only provides concrete definitions for these terms, but also outlines all the steps necessary to produce explanations and interpretations. The framework also allows existing contributions to be recontextualized such that their scope can be measured, thus making them comparable to other methods. We show that this framework is compliant with desiderata on explanations, on ...
A database of camera-captured documents is useful for training OCRs to obtain better performance. However, no dataset exists for camera-captured documents because it is very laborious and costly to build these datasets manually. In this paper, a fully automatic approach is proposed that allows a very large-scale (i.e., millions of images) labeled camera-captured document dataset to be built. The proposed approach does not require any human intervention in labeling. Evaluation of samples generated by the proposed approach shows that more than 97% of the images are correctly labeled. The novelty of the proposed approach lies in the use of document image retrieval for automatic labeling, especially for camera-captured documents, which contain camera-specific distortions such as blur and perspective distortion.
2020 International Joint Conference on Neural Networks (IJCNN), 2020
Identification of the input data points relevant for the classifier (i.e., those that serve as support vectors) has recently spurred the interest of researchers, both for interpretability and for dataset debugging. This paper presents an in-depth analysis of methods which attempt to identify the influence of these data points on the resulting classifier. To quantify the quality of the influence estimates, we curated a set of experiments in which we debugged and pruned the dataset based on the influence information obtained from different methods. To do so, we provided the classifier with mislabeled examples that hampered the overall performance. Since the classifier is a combination of both the data and the model, it is essential to also analyze these influences for the interpretability of deep learning models. Analysis of the results shows that some interpretability methods can detect mislabels better than a random approach; however, contrary to the claim of these methods, the sample ...
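The mislabel-debugging setup described above can be illustrated with a deliberately simple proxy. This sketch is not one of the influence methods analyzed in the paper: it fits a small logistic regression by gradient descent and ranks training points by their own training loss, a crude stand-in for influence scores, then checks whether an injected label flip rises to the top of the ranking. All names and the toy data are invented for the example.

```python
import numpy as np

def rank_suspect_labels(X, y, epochs=500, lr=0.5):
    """Rank training points as mislabel suspects via their training loss.

    A lightweight proxy for influence-based debugging: fit a logistic
    regression, score each point by its cross-entropy loss, and return
    indices sorted so the most suspicious points come first.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y                      # gradient of the logistic loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return np.argsort(-losses)            # most suspicious first

# Two well-separated clusters with one deliberately flipped label.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[5] = 1  # inject a mislabel into the class-0 cluster
ranking = rank_suspect_labels(X, y)
```

On clean, well-separated data this proxy surfaces the flipped label immediately; the paper's point is precisely that more sophisticated influence methods do not always beat such simple baselines on deep models.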
Organizational effectiveness is critical to success in any economy. It is commonly referred to when discussing organizations that have achieved maximum performance. Organizational effectiveness, in general, is based on the integration of the goals of the organization and its employees; neither should be viewed in isolation. There are several factors that may affect organizational effectiveness, such as performance, motivation, organizational environment, managerial expertise, creative synergy, and multi-ethnic and racial background. This article focuses on explaining the concept of organizational effectiveness and elaborating the factors affecting it.
Artificial Intelligence (AI) can roughly be categorized into two streams, knowledge-driven and data-driven, both of which have their own advantages. Incorporating knowledge into Deep Neural Networks (DNN), which are purely data-driven, can potentially improve the overall performance of the system. This paper presents such a fusion scheme, DeepEX, that combines these seemingly parallel streams of AI for multi-step time-series forecasting problems. DeepEX achieves this in a way that merges the best of both worlds along with a reduction in the amount of data required to train these models. This direction has been explored in the past for single-step forecasting by opting for a residual learning scheme. We analyze the shortcomings of this simple residual learning scheme and enable DeepEX not only to avoid these shortcomings but also to scale to multi-step prediction problems. DeepEX is tested on two commonly used time series forecasting datasets, CIF2016 and NN5, where it achieves competitive r...
Abstract-In this paper a novel method for automatic ground truth generation of camera captured do... more Abstract-In this paper a novel method for automatic ground truth generation of camera captured document images is proposed. Currently, no dataset is available for camera captured documents. It is very difficult to build these datasets manually, as it is very laborious and costly. The proposed method is fully automatic, allowing building the very large scale (i.e., millions of images) labeled camera captured documents dataset, without any human intervention. Evaluation of samples generated by the proposed approach shows that 99.98% of the images are correctly labeled. Novelty of the proposed approach lies in the use of document image retrieval for automatic labeling, especially for camera captured documents, which contain different distortions specific to camera, e.g., blur, occlusion, perspective distortion, etc.
Traditional neural networks trained using point-based maximum likelihood estimation are determini... more Traditional neural networks trained using point-based maximum likelihood estimation are deterministic models and have exhibited near-human performance in many image classification tasks. However, their insistence on representing network parameters with point-estimates renders them incapable of capturing all possible combinations of the weights; consequently, resulting in a biased predictor towards their initialisation. Most importantly, these deterministic networks are inherently unable to provide any uncertainty estimate for their prediction which is highly sought after in many critical application areas. On the other hand, Bayesian neural networks place a probability distribution on network weights and give a built-in regularisation effect making these models able to learn well from small datasets without overfitting. These networks provide a way of generating posterior distribution which can be used for model's uncertainty estimation. However, Bayesian estimation is computationally very expensive since it greatly widens the parameter space. This paper proposes a hybrid convolutional neural network which combines high accuracy of deterministic models with posterior distribution approximation of Bayesian neural networks. This hybrid architecture is validated on 13 publicly available benchmark classification datasets from a wide range of domains and different modalities like natural scene images, medical images, and time-series. Our results show that the proposed hybrid approach performs better than both deterministic and Bayesian methods in terms of classification accuracy and also provides an estimate of uncertainty for every prediction. We further employ this uncertainty to filter out unconfident predictions and achieve significant additional gain in accuracy for the remaining predictions. 
INDEX TERMS Bayesian estimation, convolutional neural networks, hybrid neural networks, image classification, time-series classification, uncertainty estimation.
Small non-coding RNAs (ncRNAs) are attracting increasing attention as they are now considered pot... more Small non-coding RNAs (ncRNAs) are attracting increasing attention as they are now considered potentially valuable resources in the development of new drugs intended to cure several human diseases. A prerequisite for the development of drugs targeting ncRNAs or the related pathways is the identification and correct classification of such ncRNAs. State-of-the-art small ncRNA classification methodologies use secondary structural features as input. However, such feature extraction approaches only take global characteristics into account and completely ignore co-relative effects of local structures. Furthermore, secondary structure based approaches incorporate high dimensional feature space which is computationally expensive. The present paper proposes a novel Robust and Precise ConvNet (RPC-snRC) methodology which classifies small ncRNAs into relevant families by utilizing their primary sequence. RPC-snRC methodology learns hierarchical representation of features by utilizing positioning and information on the occurrence of nucleotides. To avoid exploding and vanishing gradient problems, we use an approach similar to DenseNet in which gradient can flow straight from subsequent layers to previous layers. In order to assess the effectiveness of deeper architectures for small ncRNA classification, we also adapted two ResNet architectures having a different number of layers. Experimental results on a benchmark small ncRNA dataset show that the proposed methodology does not only outperform existing small ncRNA classification approaches with a significant performance margin of 10% but it also gives better results than adapted ResNet architectures. To reproduce the results Source code and data set is available at https://github.com/muas16/small-noncoding-RNA-classification INDEX TERMS RNA sequence analysis, small non-coding RNA classification, DenseNet, ResNet.
The present study characterised locally available whey samples of cheddar, mozzarella and paneer ... more The present study characterised locally available whey samples of cheddar, mozzarella and paneer for physicochemical and nutritional attributes. The results revealed that the cheddar whey exhibited pH (5.41±0.16), crude protein (0.83±0.03%), fat (0.25±0.01%), lactose (4.95±0.21%) and total solids (6.55±0.27%), slightly higher than those of mozzarella and paneer whey. On the other hand, the paneer whey showed acidity (0.30±0.01) and ash content (0.56±0.02), slightly higher than those of cheddar and mozzarella whey. Furthermore, the mozzarella whey revealed the total plate count values (3.17±0.09 1 0 4 cfu/mL), slightly higher than those of cheddar and paneer whey samples. The paneer whey contained the amount of calcium (25.02 ± 1.34), magnesium (4.88 ± 0.23), sodium (32.11 ± 1.37) and potassium (97.55 ± 3.54) slightly higher, when compared to those of cheddar and mozzarella whey. The cheddar whey possessed the highest amount of essential and non-essential amino acid contents, followed by mozzarella and paneer whey. Thus, cheddar whey exhibited the best physicochemical and nutritional profile among all the whey samples, so it can be used to prepare high quality novel and nutritious sports drink for sportsman.
Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions
Neural networks (NN) are considered as black boxes due to the lack of explainability and transpar... more Neural networks (NN) are considered as black boxes due to the lack of explainability and transparency of their decisions. This significantly hampers their deployment in environments where explainability is essential along with the accuracy of the system. Recently, significant efforts have been made for the interpretability of these deep networks with the aim to open up the black box. However, most of these approaches are specifically developed for visual modalities. In addition, the interpretations provided by these systems require expert knowledge and understanding for intelligibility. This indicates a vital gap between the explainability provided by the systems and the novice user. To bridge this gap, we present a novel framework i.e. Time-Series eXplanation (TSXplain) system which produces a natural language based explanation of the decision taken by a NN. It uses the extracted statistical features to describe the decision of a NN, merging the deep learning world with that of statistics. The two-level explanation provides ample description of the decision made by the network to aid an expert as well as a novice user alike. Our survey and reliability assessment test confirm that the generated explanations are meaningful and correct. We believe that generating natural language based descriptions of the network's decisions is a big step towards opening up the black box.
Proceedings of the 10th International Conference on Agents and Artificial Intelligence
We present a hierarchical framework for zero-shot human-activity recognition that recognizes unse... more We present a hierarchical framework for zero-shot human-activity recognition that recognizes unseen activities by the combinations of preliminarily learned basic actions and involved objects. The presented framework consists of gaze-guided object recognition module, myo-armband based action recognition module, and the activity recognition module, which combines results from both action and object module to detect complex activities. Both object and action recognition modules are based on deep neural network. Unlike conventional models, the proposed framework does not need retraining for recognition of an unseen activity, if the activity can be represented by a combination of the predefined basic actions and objects. This framework brings competitive advantage to industry in terms of the service-deployment cost. The experimental results showed that the proposed model could recognize three types of activities with precision of 77% and recall rate of 82%, which is comparable to a baseline method based on supervised learning.
2020 International Joint Conference on Neural Networks (IJCNN), 2020
Deep neural networks are black boxes by construction. Explanation and interpretation methods ther... more Deep neural networks are black boxes by construction. Explanation and interpretation methods therefore are pivotal for a trustworthy application. Existing methods are mostly based on heatmapping and focus on locally determining the relevant input parts triggering the network prediction. However, these methods struggle to uncover global causes. While this is a rare case in the image or NLP modality, it is of high relevance in the time series domain.This paper presents a novel framework, i.e. Conceptual Explanation, designed to evaluate the effect of abstract (local or global) input features on the model behavior. The method is model-agnostic and allows utilizing expert knowledge. On three time series datasets Conceptual Explanation demonstrates its ability to pinpoint the causes inherent to the data to trigger the correct model prediction.
Development of disease-resistant, high-yielding wheat genotypes is the prime objective of all wheat breeding programmes. To study genetic diversity for production traits, 229 F5:8 Recombinant Inbred Lines (RILs) of wheat were planted in one-meter rows during 2012-13 at the University of Agriculture, Peshawar, Pakistan. Cluster analysis based on squared Euclidean distance and the UPGMA method categorized the RILs into six groups. The analysis revealed a high inter-cluster difference between cluster III and cluster VI, followed by clusters IV and VI, and then by clusters V and VI. Cluster I contains genotypes with the maximum mean values for days to heading, flag leaf area, and grains spike⁻¹, whereas cluster IV contains genotypes with the maximum mean values for plant height, number of spikes, 1000-grain weight, and grain yield. The results of this study revealed that RILs in cluster I and cluster IV could yield potential segregants.
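The clustering procedure named above (squared Euclidean distance with UPGMA, i.e. average linkage, cut into six groups) can be sketched with SciPy; this is not the authors' code, and the trait matrix below is randomly generated stand-in data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Stand-in trait matrix: rows = 229 RILs, columns = traits such as days to
# heading, plant height, 1000-grain weight, grain yield (values are fake).
traits = rng.normal(size=(229, 7))

dist = pdist(traits, metric="sqeuclidean")          # squared Euclidean distances
tree = linkage(dist, method="average")              # UPGMA = average linkage
groups = fcluster(tree, t=6, criterion="maxclust")  # cut the tree into six clusters
print(sorted(set(groups)))  # distinct cluster labels
```

Inter-cluster distances such as those reported between clusters III and VI would then be read off the same linkage tree.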
Early detection of skin cancers like melanoma is crucial to ensure high chances of survival for patients. Clinical application of Deep Learning (DL)-based Decision Support Systems (DSS) for skin cancer screening has the potential to improve the quality of patient care. The majority of work in the medical AI community focuses on a diagnosis setting that is mainly relevant for autonomous operation. Practical decision support should, however, go beyond plain diagnosis and provide explanations. This paper provides an overview of work towards explainable, DL-based decision support in medical applications, with the example of skin cancer diagnosis from clinical, dermoscopic, and histopathologic images. The analysis reveals that comparably little attention is paid to the explanation of histopathologic skin images and that current work is dominated by visual relevance maps as well as dermoscopic feature identification. We conclude that future work should focus on meeting the stakeholder's cogni…
Liquidity has been increasingly researched, globally and locally, in the 21st century. One of the innovations for this is the liquidity-adjusted CAPM (LCAPM). It has been found that local factors determine a considerable part of the liquidity premium and that the differences between markets are significant. However, research on the Finnish stock market is limited and the results differ as the methodologies vary. The aim of this research is to study the price of liquidity risk in the Finnish stock market and to see how the methodology affects the results. The research period runs from the beginning of 2002 until the end of 2018, and the research data consist of daily observations of 176 stocks. Liquidity is measured with the Closing Percent Quoted Spread and the price of liquidity risk with the unconditional LCAPM. The results suggest that two of the three systematic components of liquidity risk are priced along with the expected illiquidity. This means that investors in the Finnish stock market want a pr…
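As a hedged illustration of the liquidity measure named above, the Closing Percent Quoted Spread is commonly computed from daily closing bid/ask quotes as the quoted spread divided by the quote midpoint; the helper below is a sketch under that assumption, not the thesis's code.

```python
def closing_percent_quoted_spread(bid, ask):
    """Percent quoted spread at the close: (ask - bid) / midpoint.

    Returns None for invalid quotes, a common filtering step.
    """
    if bid <= 0 or ask <= 0 or ask < bid:
        return None
    mid = (ask + bid) / 2.0
    return (ask - bid) / mid

# A stock quoted 10.00 / 10.10 at the close has about a 1% quoted spread.
print(round(closing_percent_quoted_spread(10.00, 10.10), 4))  # -> 0.01
```

Averaging this measure over each month per stock yields the illiquidity series that the unconditional LCAPM then prices.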
Brand hate is an extreme negative emotion that develops in the consumers of a brand when they perceive the brand as inappropriate for various reasons. Brand hate has been known to cause great harm to companies and their brands. Companies face negative consequences from brand haters, such as negative word of mouth, brand rejection, brand boycotts, and anti-branding activities. This study investigates the concept of brand hate in the light of the theory of hate and the theory of consumer-brand relationships. The objectives of this study are to investigate whether the direct personal antecedents (negative past experience, symbolic incongruity, and poor relationship quality) and the indirect non-personal antecedents (ideological incompatibility and rumor) trigger brand hate among consumers or not, and to investigate whether the elements of the brand recovery process (apology, compensation, and explanation) help in minimizing brand hate or not. For the purpose of testing these …
A novel data augmentation method suitable for wearable sensor data is proposed. Although numerous studies have revealed the importance of data augmentation for improving accuracy and robustness in machine-learning tasks, data augmentation methods applicable to wearable sensor data have not been well studied. Unlike conventional data augmentation methods, which are mainly developed for image and video analysis tasks, this study proposes a data augmentation method that can take a physical constraint of wearable sensors into account. The effectiveness of the proposed method was evaluated on a human-action-recognition task. The experimental results showed that the proposed method achieved significantly better accuracy compared to the cases where no data augmentation is applied and where a couple of simple data augmentation methods are applied.
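The abstract does not spell out the augmentation, so the following shows two common transforms for 3-axis inertial data, additive jitter and a small rotation about the sensor's vertical axis, as a stand-in for a physically constrained augmentation (a body-worn sensor can plausibly rotate slightly around its mounting axis but not arbitrarily). All parameters are illustrative.

```python
import math
import random

def jitter(sample, sigma=0.05):
    """Add small Gaussian noise to each (x, y, z) reading."""
    return [(x + random.gauss(0, sigma),
             y + random.gauss(0, sigma),
             z + random.gauss(0, sigma)) for x, y, z in sample]

def rotate_about_z(sample, max_deg=15.0):
    """Rotate readings by a small random angle about the sensor's z axis."""
    a = math.radians(random.uniform(-max_deg, max_deg))
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in sample]

window = [(0.0, 0.0, 9.8)] * 4  # toy accelerometer window (gravity on z)
augmented = rotate_about_z(jitter(window))
print(len(augmented))  # -> 4
```

Constraining the rotation to a small angle about one axis is what distinguishes this from the unconstrained geometric augmentations used for images.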
The contribution of this paper is twofold. First, it presents a novel approach called DeepBiRD, which is inspired by human visual perception and exploits layout features to identify individual references in a scientific publication. Second, we present a new dataset for image-based reference detection with 2401 scans containing 12244 references, all manually annotated for individual references. Our proposed approach consists of two stages: first, it identifies whether a given document image is single-column or multi-column, and using this information the document image is then split into individual columns; second, it performs layout-driven reference detection using Mask R-CNN on a given scientific publication. DeepBiRD was evaluated on two different datasets to demonstrate the generalization of the approach. The proposed system achieved an F-measure of 0.96 on our dataset. DeepBiRD detected 2.5 times more references than the current state-of-the-art approach on their own dataset. Therefor…
The field population of Taragama siva Lefebvre, a polyphagous forest insect pest, was noticed to be severely infected with a polyhedrosis virus at Jodhpur and adjacent localities during August-September 1995. In field studies, a high incidence of the disease was present in the young larval population. A sample of late-instar larvae collected from the field showed 96.66 per cent infected material. Although one species of a dipterous parasite is known to attack T. siva, control by this agent appeared negligible in the present study. The number of cocoons formed at the end of the outbreak was extremely low. There was good evidence to suggest that the virus infection was the main cause of the sudden collapse of the pest population.
The field of explainable AI (XAI) has quickly become a thriving and prolific community. However, a silent, recurrent and acknowledged issue in this area is the lack of consensus regarding its terminology. In particular, each new contribution seems to rely on its own (and often intuitive) version of terms like “explanation” and “interpretation”. Such disarray encumbers the consolidation of advances in the field towards the fulfillment of scientific and regulatory demands, e.g., when comparing methods or establishing their compliance w.r.t. biases and fairness constraints. We propose a theoretical framework that not only provides concrete definitions for these terms but also outlines all the steps necessary to produce explanations and interpretations. The framework also allows existing contributions to be recontextualized so that their scope can be measured, thus making them comparable to other methods. We show that this framework is compliant with desiderata on explanations, on …
A database of camera-captured documents is useful for training OCRs to obtain better performance. However, no such dataset exists, because it is very laborious and costly to build these datasets manually. In this paper, a fully automatic approach is proposed that allows building a very large scale (i.e., millions of images) labeled camera-captured documents dataset. The proposed approach does not require any human intervention for labeling. Evaluation of samples generated by the proposed approach shows that more than 97% of the images are correctly labeled. The novelty of the proposed approach lies in the use of document image retrieval for automatic labeling, especially for camera-captured documents, which contain distortions specific to cameras, e.g., blur and perspective distortion.
2020 International Joint Conference on Neural Networks (IJCNN), 2020
Identification of the input data points relevant to the classifier (i.e., those that serve as support vectors) has recently spurred the interest of researchers in both interpretability and dataset debugging. This paper presents an in-depth analysis of methods that attempt to identify the influence of these data points on the resulting classifier. To quantify the quality of the influence, we curated a set of experiments in which we debugged and pruned the dataset based on the influence information obtained from different methods. To do so, we provided the classifier with mislabeled examples that hampered the overall performance. Since the classifier is a combination of both the data and the model, it is essential to also analyze these influences for the interpretability of deep learning models. Analysis of the results shows that some interpretability methods can detect mislabels better than a random approach; however, contrary to the claim of these methods, the sample …
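The debugging experiment described above can be sketched as follows. This is not the paper's code; it uses the simplest influence proxy, the model's loss on each training point's own label, to flag likely mislabels, with a toy fixed "model" standing in for a trained classifier.

```python
def flag_by_loss(features, labels, predict_proba, k):
    """Rank samples by the model's loss on their own label; flag the top k."""
    losses = [-predict_proba(x)[y] for x, y in zip(features, labels)]
    order = sorted(range(len(features)), key=lambda i: losses[i], reverse=True)
    return set(order[:k])

# Toy "model": confident class 0 for x < 0, class 1 otherwise.
def predict_proba(x):
    return (0.9, 0.1) if x < 0 else (0.1, 0.9)

features = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
labels   = [0, 0, 1, 1, 1, 0]  # indices 2 and 5 are mislabeled on purpose
flagged = flag_by_loss(features, labels, predict_proba, k=2)
print(sorted(flagged))  # -> [2, 5]
```

Comparing the flagged set against a uniformly random set of the same size gives the random baseline the abstract refers to.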
Organizational effectiveness is critical to success in any economy. It is commonly referred to when discussing organizations that have achieved maximum performance. Organizational effectiveness, in general, is based on the integration of the goals of the organization and those of its employees; neither should be viewed in isolation. Several factors may affect organizational effectiveness, such as performance, motivation, organizational environment, managerial expertise, creative synergy, and multi-ethnic and racial background. This article explains the concept of organizational effectiveness and elaborates on the factors affecting it.
Artificial Intelligence (AI) can roughly be categorized into two streams, knowledge-driven and data-driven, both of which have their own advantages. Incorporating knowledge into Deep Neural Networks (DNNs), which are purely data-driven, can potentially improve the overall performance of the system. This paper presents such a fusion scheme, DeepEX, that combines these seemingly parallel streams of AI for multi-step time-series forecasting problems. DeepEX achieves this in a way that merges the best of both worlds, along with a reduction in the amount of data required to train these models. This direction has been explored in the past for single-step forecasting by opting for a residual learning scheme. We analyze the shortcomings of this simple residual learning scheme and enable DeepEX not only to avoid these shortcomings but also to scale to multi-step prediction problems. DeepEX is tested on two commonly used time series forecasting datasets, CIF2016 and NN5, where it achieves competitive r…
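The residual learning scheme mentioned above can be sketched as follows; this is a generic illustration of the idea, not DeepEX itself. A knowledge-driven base forecaster produces a multi-step forecast, a data-driven model learns only the residuals the base model leaves behind, and the final forecast is their sum. Both component models below are toy stand-ins.

```python
def base_forecast(history, horizon):
    """Knowledge-driven stand-in: naive seasonal repeat of the last values."""
    return history[-horizon:]

def combined_forecast(history, horizon, residual_model):
    """Final forecast = knowledge-driven base + learned residual correction."""
    base = base_forecast(history, horizon)
    corrections = residual_model(history, horizon)
    return [b + c for b, c in zip(base, corrections)]

# Toy residual "model": assumes a constant upward drift of 1 per step.
drift_model = lambda history, horizon: [1.0] * horizon

history = [10, 12, 10, 12, 10, 12]
print(combined_forecast(history, 2, drift_model))  # -> [11.0, 13.0]
```

Because the base model already captures the known structure, the data-driven component only has to fit the residual signal, which is one way less training data can suffice.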
Papers by Sheraz Ahmed