Over the past few years, project management has gained a reputation as an effective management tool that can improve business performance. Managing a software project can be incredibly challenging, since it involves many organizational, team, and personal resources. As a result, the quality of the software product is determined by the methodology used to complete the project. The impact of time delays and low productivity on software development projects is often felt at the bottom line. With the rapid development of software and non-software project management tools, the number of available products has increased dramatically. The aim of this work is to study the importance of project management systems as well as the tools and techniques that help manage tasks effectively. After analyzing existing studies and surveys, we propose Projectify, a project management tool.
Cloud technology is widely used nowadays for storing huge amounts of data and for performing computations. Join queries in the cloud are computed by an execution server using several storage servers. In the existing system, when a client wants to execute a join query, the query is sent to the execution server, which fetches the data from the different storage servers, performs the join operation, and returns the results to the client. The client must be able to verify the results received from the execution server. This paper describes various techniques for verifying that the results returned for a join query are proper, i.e., correct and complete. The results provided by the servers must also be secure. Integrity indicates that all valid tuples are included in the result and no valid tuple is missing; result integrity supports further data processing.
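As a rough illustration of what "correct" and "complete" mean for an equi-join result, the toy sketch below assumes the client shares a MAC key with the storage servers; the key, table names, and verification strategy are illustrative assumptions, not the paper's actual protocol (a real system would avoid re-reading the base relations on the client).

```python
import hmac, hashlib

SECRET = b"shared-client-storage-key"   # hypothetical key shared with the storage servers

def tuple_mac(table, row):
    """MAC over a canonical encoding of one stored tuple (correctness evidence)."""
    msg = (table + "|" + "|".join(map(str, row))).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_join(result, left_rows, right_rows, key_idx=0):
    """result: list of (left_row, right_row, left_mac, right_mac) from the execution
    server. Correct = every returned pair carries valid MACs; complete = no
    matching pair is missing from the result."""
    left_macs = {tuple_mac("L", r) for r in left_rows}
    right_macs = {tuple_mac("R", r) for r in right_rows}
    if any(lm not in left_macs or rm not in right_macs for _, _, lm, rm in result):
        return False                      # a tampered or fabricated tuple was returned
    expected = sum(1 for l in left_rows for r in right_rows if l[key_idx] == r[key_idx])
    return expected == len(result)        # fewer rows would mean a dropped valid tuple
```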
Data mining techniques have been widely used to extract knowledge from large databases. Data mining searches for associations and global patterns that exist in large databases and are hidden among the high-dimensional data. Feature selection involves selecting the most useful features from a given data set and reduces dimensionality. A graph clustering method is used for feature selection: features that are most relevant to the target class and independent of one another are selected from each cluster. The selected features are given to classifiers to increase learning accuracy and obtain the best feature subset. Using a clustering approach, feature selection can be efficient with respect to time and effective with respect to quality of data. Keywords— Feature selection, minimum spanning tree, clustering, classification.
Data mining techniques have been widely applied to extract knowledge from large databases. Data mining searches for relationships and global patterns that exist in large databases and are hidden among the huge data. Feature selection involves selecting the most useful features from a given data set and reduces dimensionality. A graph clustering method is used for feature selection: features that are most relevant to the target class and independent of one another are selected from each cluster. The resulting feature subset is given to various supervised learning algorithms to increase learning accuracy and obtain the best feature subset. Using a clustering approach, feature selection can be efficient in terms of time complexity and effective in terms of quality of data, so that useful features can be selected from big data. Keywords— Feature selection, minimum spanning tree, clustering.
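The following sketch illustrates the minimum-spanning-tree feature clustering idea shared by the two abstracts above. It is only a minimal approximation: absolute Pearson correlation stands in for the symmetric-uncertainty relevance measure, and the split threshold is an arbitrary illustrative value.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_feature_select(X, y, split_threshold=0.3):
    """Cluster features via a minimum spanning tree over (1 - |corr|) distances,
    then keep, from each cluster, the feature most correlated with the target."""
    n_features = X.shape[1]
    corr = np.abs(np.corrcoef(X, rowvar=False))            # feature-feature relevance
    dist = np.clip(1.0 - corr, 1e-6, None)                 # distance; keep tiny edges non-zero
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > (1.0 - split_threshold)] = 0                  # cut weak edges -> feature clusters
    _, labels = connected_components(mst != 0, directed=False)
    target_rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    return sorted(int(np.argmax(np.where(labels == c, target_rel, -1.0)))
                  for c in np.unique(labels))
```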
Multimedia Information Retrieval is very useful for many applications in our daily work. Most applications involve multimedia data such as images, text, audio, and video, and a multimedia information retrieval system is used to search for an image. Different data can express the same meaning, a problem known as the semantic gap. This problem is addressed by fusing text-based image retrieval with content-based image retrieval. Weighted Mean, OWA, and WOWA are the aggregation operators used in this system to fuse the numeric scores obtained from text and image retrieval. Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are two algorithms for feature extraction; SURF is used to increase the speed of the system. Bag of Words and Bag of Visual Words approaches are used in this system for retrieving images. Keywords— Content-Based Image Retrieval, Fusion, Multimedia Information Retrieval, Text-Based Image Retrieval.
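A minimal sketch of the score-fusion step: OWA weights the sorted scores rather than fixed sources. The weights and candidate scores below are made-up values for illustration, not those used in the paper.

```python
import numpy as np

def owa(scores, weights):
    """Ordered Weighted Averaging: weights apply to scores sorted in descending
    order, so the operator can emphasise the strongest or weakest evidence."""
    return float(np.dot(np.sort(scores)[::-1], weights))

def fuse_text_image(text_score, image_score, weights=(0.6, 0.4)):
    """Fuse normalized text-based and content-based retrieval scores for one image."""
    return owa(np.array([text_score, image_score]), np.array(weights))

# toy usage: rank two candidate images for a query by their fused score
candidates = {"img_a": (0.9, 0.4), "img_b": (0.5, 0.8)}      # (text score, image score)
ranked = sorted(candidates, key=lambda k: fuse_text_image(*candidates[k]), reverse=True)
print(ranked)                                                # ['img_a', 'img_b'] here
```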
In recent years, many data-intensive and location-based applications have emerged that need to process stream data, for example in network monitoring, telecommunications data management, and sensor networks. Unlike regular queries, a continuous query exists for a certain period of time and needs to be processed continuously during that time. The data processing algorithms of traditional database systems are not suited to handling complex and varied continuous queries over dynamic streaming data. Indexing the finite set of queries is preferred to indexing the infinite data, as it avoids expensive index maintenance. Previous work focused on moving queries over static objects or static queries over moving objects, but nowadays both queries and objects are dynamic, so hybrid indexing of queries significantly reduces space costs and scales well with increasing data. To cope with the speed of unbounded data, it is necessary to use data parallelism...
In today’s digital environment, text databases are growing rapidly due to the use of the internet and communication media. Different text mining techniques are used for knowledge discovery and information retrieval. Text data often carries side information along with the text itself. Side information may be metadata associated with the text, such as author, co-author or citation networks, document provenance information, web links, or other kinds of data that provide more insight into the text documents. Such side information contains a tremendous amount of information useful for clustering, and using it in the categorization process yields more refined clusters. However, side information can be noisy and can cause wrong categorization, which decreases the quality of the clustering process. Therefore, a new approach for mining text data using side information is suggested, which combines a partitioning approach with a probabilistic estimation model for the mining...
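The sketch below captures the general idea of combining a text-only partitioning step with side-information evidence; the mixing weight, the re-scoring rule, and the use of k-means over TF-IDF are simplifying assumptions, not the paper's exact probabilistic model.

```python
import numpy as np
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_with_side_info(texts, side_attrs, k=3, alpha=0.5):
    """Partition documents on text, then re-score each document's cluster using
    P(cluster | side attribute); alpha trades text evidence against side evidence."""
    X = TfidfVectorizer().fit_transform(texts)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    labels = km.labels_
    # estimate how strongly each side attribute value indicates each cluster
    attr_cluster = Counter((a, c) for attrs, c in zip(side_attrs, labels) for a in attrs)
    attr_total = Counter(a for attrs in side_attrs for a in attrs)
    text_sim = 1.0 / (1.0 + km.transform(X))               # closeness to each centroid
    new_labels = []
    for i, attrs in enumerate(side_attrs):
        side_score = np.array([
            np.mean([attr_cluster[(a, c)] / attr_total[a] for a in attrs]) if attrs else 0.0
            for c in range(k)])
        combined = alpha * text_sim[i] + (1 - alpha) * side_score
        new_labels.append(int(np.argmax(combined)))
    return new_labels
```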
Abstract— The main goal of this paper is to demonstrate a multimedia information retrieval task using a combination of textual pre-filtering and image re-ranking. The combination of textual and visual techniques and retrieval processes is used to develop a multimedia information retrieval system that addresses the semantic gap of a given query. Five late semantic fusion approaches are discussed, which can be used for text-based and content-based image retrieval on any dataset. A logistic regression relevance feedback algorithm is used to determine the similarity between the images in the dataset and the query.
The internet is continuously being loaded with data. It has grown from a mere facility into a necessity and is becoming a universal medium for publishing and consuming information. Continuous queries are used to monitor changes to time-varying data and to provide results for online decision making. The client wants to obtain the value of an aggregation function over distributed data items. A continuous query logically runs continuously over time, in contrast to one-time queries, which run once over the current data set. Because time-varying data is used for important decision making, updates should be disseminated to the client continuously. The coherency requirement depends on the nature of the data items and the user's requirements. Examples include stock prices, currency exchange rates, real-time traffic and weather information, data from sensors, auctions, personal portfolio evaluation, route pl...
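As a small illustration of coherency-driven dissemination, the sketch below pushes an update to a client only when the source value has drifted beyond that client's coherency bound; the bound and the tick values are invented for the example.

```python
class CoherencyPush:
    """Push an update to the client only when the source value drifts from the
    last pushed value by more than the client's coherency requirement."""
    def __init__(self, coherency_bound):
        self.bound = coherency_bound     # e.g. 0.05 for a stock price
        self.last_pushed = None

    def on_source_update(self, value, push):
        if self.last_pushed is None or abs(value - self.last_pushed) > self.bound:
            push(value)                  # disseminate to the client
            self.last_pushed = value

# toy usage: only 3 of these 5 ticks reach the client
feed = CoherencyPush(coherency_bound=0.05)
for tick in [100.00, 100.02, 100.07, 100.08, 100.20]:
    feed.on_source_update(tick, push=lambda v: print("pushed", v))
```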
International Journal of Advance Research and Innovative Ideas in Education, 2017
Sentiment analysis and opinion mining are used to determine the sentiment expressed in text reviews. In sentiment analysis, Bag of Words (BoW) is a statistical machine learning approach that suffers from fundamental issues caused by the polarity-shifting problem. Therefore, Dual Sentiment Analysis (DSA) is proposed to overcome this problem in sentiment classification. It consists of two phases: Dual Training (DT), which learns sentiment classifiers from original as well as reversed training reviews, and Dual Prediction (DP), which classifies reviews by considering both the negative and positive sides of a single review. The dependencies of DSA are removed by a corpus-based method that constructs a pseudo-antonym dictionary, which is used for review reversal. Along with DSA, the proposed system contributes syntactic features for a polarity-inconsistency classifier, which helps to improve system performance.
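The sketch below shows the dual-training / dual-prediction idea in miniature, with a tiny hand-made antonym dictionary and toy reviews standing in for the corpus-built pseudo-antonym dictionary and real training data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

ANTONYMS = {"good": "bad", "bad": "good", "great": "poor", "poor": "great"}  # toy dictionary

def reverse_review(text):
    """Build the reversed view of a review by swapping antonyms."""
    return " ".join(ANTONYMS.get(w, w) for w in text.lower().split())

reviews = ["great phone good battery", "poor screen bad camera"]
labels = [1, 0]
# dual training: original reviews plus reversed reviews with flipped labels
X_texts = reviews + [reverse_review(r) for r in reviews]
y = labels + [1 - l for l in labels]
vec = CountVectorizer().fit(X_texts)
clf = LogisticRegression().fit(vec.transform(X_texts), y)

def dual_predict(text):
    """Dual prediction: combine the positive evidence of the original review
    with the negative evidence of its reversed counterpart."""
    p_orig = clf.predict_proba(vec.transform([text]))[0, 1]
    p_rev = clf.predict_proba(vec.transform([reverse_review(text)]))[0, 1]
    return (p_orig + (1 - p_rev)) / 2
```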
International Journal of Advance Research and Innovative Ideas in Education, 2017
The rapid expansion of the World Wide Web has produced a large number of databases, such as bibliographies and scientific databases. Users are often unable to express their needs explicitly, which results in queries that lead to unsatisfactory results. The FBR (Feature Based Retrieval) system allows users to pose imprecise queries that express their uncertainty. The traditional way of searching data requires queries to be specified precisely, and retrieving data with the traditional approach takes more time. The FBR system computes the sensitivity of the output when the user modifies certain conditions, and it also explores new conditions that improve the quality of the result. The FBR system is designed to handle probabilistic queries containing uncertainty. To support interactive response times, it allows the user to set a threshold value. In large databases, the search must be organized systematically to reduce search time, which will lead to faster in...
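A minimal sketch of threshold-based retrieval over imprecise conditions: each condition yields a soft match probability, the probabilities are multiplied, and only rows above the user's threshold are returned. The decay function, tolerances, and example data are assumptions for illustration, not the FBR system's actual model.

```python
import math

def match_probability(value, target, tolerance):
    """Soft match: probability decays with distance from the imprecise target."""
    return math.exp(-abs(value - target) / tolerance)

def feature_based_retrieval(rows, conditions, threshold=0.5):
    """Return rows whose combined probability of satisfying all imprecise
    conditions exceeds the user-chosen threshold."""
    results = []
    for row in rows:
        p = 1.0
        for field, (target, tol) in conditions.items():
            p *= match_probability(row[field], target, tol)
        if p >= threshold:
            results.append((row, round(p, 3)))
    return sorted(results, key=lambda x: x[1], reverse=True)

# toy usage: "price around 300, year around 2015"
cars = [{"price": 310, "year": 2015}, {"price": 480, "year": 2010}]
print(feature_based_retrieval(cars, {"price": (300, 100), "year": (2015, 3)}))
```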
A recommender system is implemented to help customers find content and to overcome information overload. It predicts customers' interests and makes recommendations according to each customer's interest model. Content-based filtering uses object features to recommend other objects similar to what the user likes and wants, based on previous actions or explicit feedback. Images with the same kind of features are likely to be similar, so extracting such features from images is very helpful for recommending the most similar products. Based on that data, a user profile is generated, which is then used to make suggestions to the user. Today, many companies use big data to make highly relevant recommendations and increase earnings. Among a variety of recommendation algorithms, data scientists need to choose the best one according to a business's limitations and requirements. When we want to recommend something to a user, the most logical thing to do is to find people wit...
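The content-based step described above can be sketched as cosine similarity between item feature vectors; the catalogue and its three-dimensional hand-made feature vectors below are purely illustrative, whereas a real system would use extracted image descriptors or embeddings.

```python
import numpy as np

def recommend_similar(item_features, query_id, top_k=3):
    """Content-based filtering: rank items by cosine similarity of their
    feature vectors to the feature vector of the query item."""
    ids = list(item_features)
    M = np.array([item_features[i] for i in ids], dtype=float)
    M = M / np.linalg.norm(M, axis=1, keepdims=True)        # unit-normalise rows
    sims = M @ M[ids.index(query_id)]                        # cosine similarities
    order = np.argsort(-sims)
    return [(ids[i], float(sims[i])) for i in order if ids[i] != query_id][:top_k]

# toy usage with hand-made feature vectors
catalog = {"shirt_red": [0.9, 0.1, 0.3],
           "shirt_blue": [0.2, 0.8, 0.3],
           "shirt_maroon": [0.8, 0.2, 0.3]}
print(recommend_similar(catalog, "shirt_red", top_k=2))
```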
International Journal on Recent and Innovation Trends in Computing and Communication
A smart city is an urbanized region that accumulates data from diverse digital and experiential sources. Cloud-connected Internet of Things (IoT) solutions can help smart cities collect data from inhabitants, devices, residences, and other origins. The monitoring and administration of transport systems, utility services, resource management, water supply systems, waste management, crime detection, security measures, energy, data collection, healthcare facilities, and other services all make use of the processing and analysis of this data. This study aims to improve the security of smart cities by detecting attacks using algorithms trained on the UNSW-NB15 and CICIDS2017 datasets, and to create advanced strategies for identifying and mitigating cyber threats in smart cities by leveraging real-world network traffic data from UNSW-NB15 and labelled attack actions from CICIDS2017. The research aims to contribute to the developm...
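A hedged sketch of the attack-detection step: train a supervised classifier on labelled flow records and report its performance. The file name and the single `label` column are placeholders, not the real UNSW-NB15/CICIDS2017 schemas, and the choice of a random forest is an illustrative assumption rather than the study's algorithm.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# hypothetical pre-merged flow file with numeric features and a binary label
df = pd.read_csv("network_flows.csv")
X = df.drop(columns=["label"])                   # flow features
y = df["label"]                                  # 0 = benign, 1 = attack
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```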
Indonesian Journal of Electrical Engineering and Computer Science, 2022
Due to the extensive use of social media and mobile devices, unbounded and massive data is generated continuously, and the need to process this big data increases day by day. Traditional data processing algorithms fail to cater to applications such as digital geo-based advertising and recommendation systems. Present location-based and social-network-based service applications create a high demand for processing continuous spatial fuzzy textual queries over a high-density stream of spatial-textual objects. For a spatial-keyword data stream, performance plays a vital role because geo-information and keyword-description matching is needed for every incoming streaming object. Most continuous geo-keyword query processing methods lack support for fuzzy keyword matching when processing objects from the geo-textual data stream. The edit-distance-based approach with the adaptive partitioning tree i...
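The core matching test for one streaming object against one registered query can be sketched as an edit-distance check inside a spatial radius; the adaptive partitioning tree that prunes these comparisons is omitted, and the radius, edit bound, and example object are assumptions for illustration.

```python
import math

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def matches_query(obj, query, radius, max_edits=1):
    """A streaming geo-textual object matches a continuous fuzzy query if it lies
    inside the query radius and some keyword is within the edit bound."""
    if math.hypot(obj["x"] - query["x"], obj["y"] - query["y"]) > radius:
        return False
    return any(edit_distance(kw, qkw) <= max_edits
               for kw in obj["keywords"] for qkw in query["keywords"])

# toy usage: a tweet-like object against one registered query
obj = {"x": 1.0, "y": 2.0, "keywords": ["coffe", "shop"]}
query = {"x": 0.0, "y": 0.0, "keywords": ["coffee"]}
print(matches_query(obj, query, radius=5.0))     # True: "coffe" is 1 edit away
```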
International Journal of Advance Research and Innovative Ideas in Education, 2017
Query optimization is important to crowdsourcing systems because it provides a declarative query interface, as in a relational database management system. The proposed approach uses a technique called crowd optimization, in which the user submits an SQL query and the system compiles the query, generates an execution plan, evaluates the query, and returns an optimized plan to the user. In relational database systems, query optimization underpins the declarative query interfaces that are important for crowdsourcing. The system considers both cost and latency in query optimization and generates query plans that strike a good balance between the two. Efficient algorithms are used to optimize four types of queries: selection queries, join queries, complex queries, and order-by queries.
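A toy sketch of picking a crowdsourced query plan by balancing monetary cost against latency; the weighting scheme and the two candidate plans are invented for illustration and do not reproduce the paper's optimizer.

```python
def pick_plan(plans, latency_weight=0.5):
    """Choose the plan with the best weighted combination of monetary cost
    (crowd payments) and latency (rounds of waiting on crowd answers)."""
    def score(plan):
        return (1 - latency_weight) * plan["cost"] + latency_weight * plan["latency"]
    return min(plans, key=score)

candidate_plans = [
    {"name": "filter-then-join", "cost": 12.0, "latency": 3},   # cheaper, more rounds
    {"name": "join-then-filter", "cost": 20.0, "latency": 1},   # costlier, fewer rounds
]
print(pick_plan(candidate_plans, latency_weight=0.7)["name"])
```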
International Journal of Advance Research and Innovative Ideas in Education, 2017
A spreadsheet, or Excel sheet, is an efficient way of analyzing and managing data. It offers diverse supplementary features such as a linear structure, visualization, statistics, reporting, and periodic web queries. Several computation parameters are required for sufficient analysis. Most users are not very familiar with the complicated functions of a spreadsheet, but they often have basic SQL knowledge. A spreadsheet can implement all the data transformations of SQL by using spreadsheet formulae. A query compiler is used to translate an SQL query into a spreadsheet with equivalent semantics, including NULL values. Users can define their queries in a high-level language, which is then executed on a plain spreadsheet. Our main aim is the migration of SQL data into spreadsheet format; result graph visualization is the contribution of this system.
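As a small example of the compilation idea, the sketch below turns one simple aggregate query into a SUMIF formula; the column-letter mapping and the restriction to this query shape are illustrative assumptions, not the full compiler described above.

```python
def compile_sum_where(sum_col, cond_col, op, value):
    """Compile `SELECT SUM(sum_col) FROM t WHERE cond_col <op> value` into a
    single SUMIF spreadsheet formula; column letters denote where the
    attributes are stored in the sheet."""
    return f'=SUMIF({cond_col}:{cond_col}, "{op}{value}", {sum_col}:{sum_col})'

# e.g. SELECT SUM(salary) FROM emp WHERE age > 30,
# with salary stored in column B and age in column C
print(compile_sum_where("B", "C", ">", 30))
# -> =SUMIF(C:C, ">30", B:B)
```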
Presentation-based learning is an effective way of learning in which students or employees of an organization receive continuous feedback from teammates or from their coaches. It is a pictorial way to represent work. Before presenting their work, presenters have to prepare slides, which are made from articles, academic papers, or material from the internet. As a result, more time is wasted creating slides than on preparing the presentation itself. In this paper, we analyse a way to automatically generate presentation slides from academic papers, so that presenters can prepare their formal slides more quickly. We therefore propose the PPSGen system to address this problem with the existing approach. PPSGen has many advantages over baseline methods.