Ubiquity of mobile devices with rich sensory capabilities has given rise to the mobile crowd-sensing (MCS) concept, in which a central authority (the platform) and its participants (mobile users) work collaboratively to acquire sensory data over a wide geographic area. Recent research in MCS highlights the following facts: 1) a utility metric can be defined for both the platform and the users, quantifying the value received by either side; 2) incentivizing the users to participate is a non-trivial challenge; 3) correctness and truthfulness of the acquired data must be verified, because the users might provide incorrect or inaccurate data, whether due to malicious intent or malfunctioning devices; and 4) an intricate relationship exists among platform utility, user utility, user reputation, and data trustworthiness, suggesting a co-quantification of these interrelated metrics. In this paper, we study two existing approaches that quantify crowd-sensed data trustworthiness, based on statistical and vote-based user reputation scores. We introduce a new metric, collaborative reputation scores, to expand this definition. Our simulation results show that collaborative reputation scores can provide an effective alternative to the previously proposed metrics and are able to extend crowd sensing to applications driven by centralized as well as decentralized control.
Widespread use of connected smart devices that are equipped with various built-in sensors has introduced the mobile crowdsensing concept to IoT-driven information and communication applications. Mobile crowdsensing requires implicit collaboration between the crowdsourcer/recruiter platforms and users. Additionally, users need to be incentivized by the crowdsensing platform because each party aims to maximize its utility. Due to the participatory nature of data collection, trustworthiness and truthfulness pose a grand challenge in crowdsensing systems in the presence of malicious users, who either aim to manipulate sensed data or collaborate unfaithfully with the motivation of maximizing their income. In this paper, we propose a game-theoretic approach for trustworthiness-driven user recruitment in mobile crowdsensing systems that consists of three phases: i) user recruitment, ii) collaborative decision making on trust scores, and iii) badge rewarding. Our proposed framework incentivizes the users through a sub-game perfect equilibrium (SPE) and gamification techniques. Through simulations, we show that the platform and user utilities, defined in terms of costs and revenues, can be improved by up to 50% and by at least 15%, respectively, when compared to fully-distributed and user-centric trustworthy crowdsensing.
Portable medical devices generate volumes of data that could be useful in identifying health risks. The proposed method filters patients’ electrocardiograms (ECGs) and applies machine-learning classifiers to identify cardiac health risks and estimate severity. The authors present the results of applying their method in a case study.
The past few decades have witnessed incredible advances in human health care, owing to the invention of devices such as MRI scanners, which allow physicians to monitor personal health in more detail than was ever previously possible. Such advances have drastically improved diagnostic quality and patient health care. Central to this incredible progress was the uncanny ability of technologists and academics to invent ever more useful tools to help physicians, be it the X-ray machine, CT, or MRI scanner. Whereas the aforementioned past-decades' tools aimed at acquiring personal data, the advent of the Internet-of-Things, vast computational power available in the cloud, and new data analytics algorithms will completely change the way we acquire and process medical data to improve health care going forward. In this paper, we conduct a quantitative feasibility study of a Digital Health (D-Health) system that is aimed at acquiring and processing health data using the emerging Internet-of-Everything paradigm. We specifically investigate the technological feasibility of communication, software, and data privacy aspects.
Face recognition is a sophisticated problem requiring a significant commitment of computer resources. A modern GPU architecture provides a practical platform for performing face recognition in real time. The majority of the calculations of an eigenpicture implementation of face recognition are matrix multiplications. For this type of computation, a conventional computer GPU is capable of computing in tens of milliseconds data that a CPU requires thousands of milliseconds to process. In this chapter, we outline and examine the different components and computational requirements of a face recognition scheme implementing the Viola-Jones Face Detection Framework and an eigenpicture face recognition model. Face recognition can be separated into three distinct parts: face detection, eigenvector projection, and database search. For each, we provide a detailed explanation of the exact process along with an analysis of the computational requirements and scalability of the operation.
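The eigenvector projection and database search steps described in this abstract reduce to a matrix-vector multiply followed by a nearest-neighbor lookup, which is why they map so well onto GPU GEMM kernels. The following is a minimal NumPy sketch of those two steps; the array sizes and the synthesized probe face are illustrative, not taken from the chapter.

```python
import numpy as np

def project_face(face, mean_face, eigenfaces):
    """Project a flattened face image onto the eigenface basis.

    eigenfaces has shape (k, d) with orthonormal rows; this is the
    matrix-multiply step that dominates the computation."""
    return eigenfaces @ (face - mean_face)

def nearest_match(weights, database):
    """Database search: index of the stored weight vector closest in eigenspace."""
    dists = np.linalg.norm(database - weights, axis=1)
    return int(np.argmin(dists))

# Toy data: 4 eigenfaces over 16-pixel images, and 3 enrolled identities.
rng = np.random.default_rng(0)
mean_face = rng.random(16)
q, _ = np.linalg.qr(rng.random((16, 4)))
eigenfaces = q.T                               # orthonormal eigenface rows
database = rng.random((3, 4))                  # stored weights for 3 known faces
probe = database[1] @ eigenfaces + mean_face   # synthesize a probe from entry 1

print(nearest_match(project_face(probe, mean_face, eigenfaces), database))  # → 1
```

On a GPU, batching many probe faces turns the projection into a single dense matrix-matrix product, the GEMM workload the chapter's timing comparison refers to.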
The Digital Health (D-Health) era is expected to be the "next big thing" since the invention of the internet, characterized by inexpensive and widespread medical data acquisition devices, widespread availability of identity-removed health data, and analytics algorithms that provide remote health monitoring feedback to doctors in real time. Recent years have brought incremental developments in three key technological areas towards the realization of the D-Health era: data acquisition, secure data transmission/storage, and data analytics. i) For data acquisition, the emerging Internet-of-Things (IoT) devices are becoming a viable technology to enable the acquisition of remote health monitoring data. ii) For data storage, emerging system-level and cryptographic mechanisms provide secure and privacy-preserving transmission, storage, and sharing of the acquired data. iii) For data analytics, emerging decision support algorithms provide a mechanism for healthcare professionals to base their clinical diagnoses partially on machine-suggested statistical inferences that rely on a wide corpus of accumulated data. The D-Health era will create new business opportunities in all of these areas. In this paper, we propose a generalized structure for a D-Health system that is capable of remote health monitoring and decision support. We formulate our proposed structure around potential business opportunities and conduct technical feasibility studies.
Smart city sensing calls for crowdsensing via mobile devices that are equipped with various built-in sensors. As incentivizing users to participate in distributed sensing is still an open research issue, the trustworthiness of crowdsensed data is expected to be a grand challenge if this cloud-inspired recruitment of sensing services is to be adopted. Recent research proposes reputation-based user recruitment models for crowdsensing; however, there is no standard way of identifying adversaries in smart city crowdsensing. This paper adopts previously proposed vote-based approaches, and presents a thorough performance study of vote-based trustworthiness with trusted entities that are basically a subset of the participating smartphone users. Those entities are called trustworthy anchors of the crowdsensing system. Thus, an anchor user is fully trustworthy and is fully capable of voting for the trustworthiness of other users, who participate in sensing of the same set of phenomena. Besides the anchors, the reputations of regular users are determined based on vote-based (distributed) reputation. We present a detailed performance study of the anchor-based trustworthiness assurance in smart city crowdsensing through simulations, and compare it with the purely vote-based trustworthiness approach without anchors, and a reputation-unaware crowdsensing approach, where user reputations are discarded. Through simulation findings, we aim at providing specifications regarding the impact of anchor and adversary populations on crowdsensing and user utilities under various environmental settings. We show that significant improvement can be achieved in terms of usefulness and trustworthiness of the crowdsensed data if the size of the anchor population is set properly.
Energy harvesting systems that couple solar panels with supercapacitor buffers offer an attractive option for powering computational systems deployed in field settings, where power infrastructure is inaccessible. Supercapacitors offer a particularly compelling advantage over electrochemical batteries for such settings because of their ability to survive many more charge–discharge cycles. We share UR-SolarCap, a versatile open source design for such a harvesting system that targets embedded system applications requiring power in the 1–10 W range. Our system is designed for high efficiency and controllability and, importantly, supports auto-wakeup from a state of complete energy depletion. This paper summarizes our design methodology, and the rationale behind our design and configuration decisions. Results from the operation and testing of a system realized with our design demonstrate: 1) an achievable harvester efficiency of 85%; 2) the ability to maintain sustained operation over a two-week period when the solar panel and buffer are sized appropriately; and 3) a robust auto-wakeup functionality that resumes system operation upon the availability of harvestable energy after a period in which the system has been forced into a dormant state because of a lack of usable energy. To facilitate the use of the system by researchers exploring embedded system applications in environments that lack a power infrastructure, our designs are available for download as an archive containing design schematics, Printed Circuit Board (PCB) files, firmware code, and a component list for assembly of the system. In addition, a limited number of pre-assembled kits are available upon request.
Continuous, high-data-rate visual sensing and processing in the field is important for intelligent transportation, environmental monitoring, and area/context awareness. Rechargeable batteries in self-sustainable systems suffer from adverse environmental impact and fast aging. Advancements in supercapacitor energy density and low-power processors have reached an inflection point, where a data-intensive visual sensing system can rely solely on supercapacitors for energy buffering. This paper demonstrates the first working prototype of such a system, consisting of eight 3,000-Farad supercapacitors, a custom-designed 70-mW controller/harvester board, and a Nexus 7 tablet computer. We leverage the voltage-to-stored-energy relationship in capacitors to enable precise energy buffer modeling (no more than 3% error in time-to-depletion prediction). We find the supercapacitor self-discharge (or leakage) to be a minor issue in practice where the operating power significantly exceeds the leakage power. However, accurate energy budgeting must account for the variation of effective capacitance—particularly lower capacitance at lower voltages nearing energy depletion. Precise energy budgeting allows model-driven system adaptation (such as dynamic CPU configurations on low-power chipsets) and consequently realizes high, stable operational quality-of-service. Our working prototype has been successfully deployed at a campus building rooftop where it analyzes nearby traffic patterns continuously. Figure 1: The deployed system on a seven-story building rooftop, including a camera, a block of solar panels, and a system box. The system box contains a Nexus 7 tablet computer (without its internal rechargeable battery) sustained by eight Maxwell 3,000-Farad supercapacitors and a custom-built controller board at the top of the box; the energy buffering capacity of the supercapacitor block is about 1.4× that of the original Nexus 7 battery.
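The voltage-to-stored-energy relationship this abstract leverages is the standard capacitor energy formula E = ½CV². A minimal time-to-depletion sketch under that ideal-capacitance assumption follows; the voltages, cutoff, and 5 W load are illustrative numbers, not the paper's measurements, and a real budget would also model the effective-capacitance drop at low voltage that the abstract warns about.

```python
def stored_energy(capacitance_f, voltage_v):
    """Energy in joules stored in an ideal capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

def time_to_depletion_s(capacitance_f, v_now, v_cutoff, load_w):
    """Seconds of runtime left, assuming a constant load and ideal capacitance.

    Usable energy is the difference between the energy stored now and the
    energy still trapped below the converter's cutoff voltage."""
    usable_j = (stored_energy(capacitance_f, v_now)
                - stored_energy(capacitance_f, v_cutoff))
    return usable_j / load_w

# Eight 3,000 F cells in series behave like a single 375 F capacitor.
bank_f = 3000 / 8
hours = time_to_depletion_s(bank_f, v_now=18.0, v_cutoff=9.0, load_w=5.0) / 3600
print(round(hours, 2))  # → 2.53
```

Because energy scales with V², most of the buffer's energy lives in the upper half of its voltage range, which is why the cutoff voltage matters so much to the runtime estimate.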
The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data are among the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.
This paper presents an overview of passive radio frequency (RF) energy reception and power harvesting circuits for isolated communications and computing systems lacking access to primary power sources. A unified understanding of the energy harvesting alternatives is provided, followed by an elaborate study of RF energy harvesting within the context of embedded systems. A detailed discussion of RF technologies ranging from directed communications signal reception to dispersed ambient power harvesting is provided. A comparative focus on design tradeoffs and process alterations is provided to represent the diversity in the applications requiring wireless RF harvesting units. Also included is an analysis of system combinations, and how wake-up units, active storage, and duty cycling play roles in the consumption and harvesting of RF energy.
The collection of long-term health data is accelerating with the advent of portable/wearable medical devices including electrocardiograms (ECGs). This large corpus of data presents great opportunities to improve the quality of cardiac care. However, analyzing the data from these sensors is a challenge; the relevant information from 120,000 heart beats per patient per day must be condensed into a human-readable form. Our goal is to facilitate the analysis of these unwieldy data sets. We have developed an open source tool for creating easy-to-interpret plots of cardiac information over long periods. We call these plots ECG clocks. The utility of our ECG clock library is demonstrated through multiple examples drawn from a database of 24-h Holter recordings. In these case studies, we focus on the visualization of heart rate and QT dynamics. The ECG clock concept is shown to be relevant for both physicians and researchers, for identifying healthy and abnormal values and patterns in ECG recordings. In this paper, we describe how to use the ECG clock library to analyze 24-h ECG recordings, and how to extend the source code for your own purposes. The tool is applicable to a wide range of cardiac monitoring tasks, such as heart rate variability or ST elevation. This library, which we have made freely available, can help provide new insights into circadian patterns of cardiac function in individuals and groups.
We propose a VM migration approach named Energy Saving Virtual Machine Migration with Minimum Disruption (ESVM3D) that reduces SLA violations by running VMs in a data center with more available physical hosts, as opposed to shutting down idle hosts to save energy. Compared to previously proposed power-minimizing VM migration algorithms, simulation results show a 40% reduction in VM migrations and a 10% energy savings as a result of this approach. In summary, ESVM3D achieves a 70% reduction in the number of host shutdowns with negligible SLA degradation (0.1%) relative to the previously proposed approaches, translating to similar SLA performance with no penalty in energy consumption.
As the capabilities of smartphones expand, so do consumers’ expectations for resource-intensive mobile applications. However, mobile devices are ill-suited to execute most of these applications due to their hardware limitations. Computational offloading offers a way to augment mobile computation power, but it introduces a communication latency, potentially weakening or negating its advantages. To meet the user demand for high performance, we propose a new service architecture called Acceleration as a Service (AXaaS). We formulate AXaaS based on the observation that most resource-intensive applications, such as real-time face recognition and augmented reality, have similar resource-demand characteristics: a vast majority of the program execution time is spent on a limited set of library calls, such as Generalized Matrix-Multiply operations (GEMM) or FFT. Our AXaaS model suggests accelerating only these operations by the Telecom Service Providers (TSPs). We envision the TSP offering this service through a monthly computational service charge, much like their existing monthly bandwidth charge. We demonstrate the technological and business feasibility of AXaaS on a proof-of-concept real-time face recognition application.
A mobile-cloud architecture provides a practical platform for performing face recognition on a mobile device. However, using a mobile-cloud architecture to perform real-time face recognition presents several challenges including resource limitations and long network delays. In this paper, we determine three approaches for accelerating the execution of the face recognition application by utilizing an intermediate device called a cloudlet. We study in detail one of these approaches, using the cloudlet to perform pre-processing, and quantify the maximum attainable acceleration. Our experimental results show up to a 128× improvement in response time when appropriate cloudlet hardware is used.
As global healthcare systems transition into the digital era, remote patient health monitoring will be widespread through the use of inexpensive monitoring devices, such as ECG patches, glucose monitors, etc. Once a sensor-concentrator-cloudlet-cloud infrastructure is in place, it is not unrealistic to imagine a scenario where a physician monitors 20–30 patients remotely. Such an infrastructure will revolutionize clinical diagnostics and preventative medicine by allowing doctors to access long-term and real-time information, which cannot be obtained from short-term in-hospital ECG recordings. While the large amount of sensor data available to a physician is incredibly valuable clinically, it is overwhelming in raw form. In this paper, the data handling aspect of such a long-term health monitoring system is studied. Novel ways to record, aggregate, and visualize this flood of sensory data in an intuitive manner are introduced that allow a doctor to review days' worth of data in a matter of seconds. This system is one of the first attempts to provide a tool that allows the visualization of long-term monitoring data acquired from multiple sensors.
The QT interval is a risk marker for cardiac events such as torsades de pointes. However, QT measurements obtained from a 12-lead ECG during clinic hours may not capture the full extent of a patient's daily QT range. The purpose of this study was to evaluate the utility of 24-hour Holter ECG recording in patients with long QT syndrome (LQTS) to identify dynamic changes in the heart rate-corrected QT interval and to investigate methods of visualizing the resulting datasets. Beat-to-beat QTc (Bazett) intervals were automatically measured across 24-hour Holter recordings from 202 LQTS type 1, 89 type 2, and 14 type 3 genotyped patients and a reference group of 200 healthy individuals. We measured the percentage of beats with QTc greater than the gender-specific threshold (QTc ≥470 ms in women and QTc ≥450 ms in men). The percentage of beats with QTc prolongation was determined across the 24-hour recordings. Based on the median percentage of heartbeats per patient with QTc prolongation, LQTS type 1 patients showed more frequent QTc prolongation during the day (~3 PM) than they did at night (~3 AM): 97% vs 48%, P ~ 10^-4 for men, and 68% vs 30%, P ~ 10^-5 for women. LQTS type 2 patients showed less frequent QTc prolongation during the day compared to nighttime: 87% vs 100%, P ~ 10^-4 for men, and 62% vs 100%, P ~ 10^-3 for women. In patients with genotype-positive LQTS, significant differences exist in the degree of daytime and nocturnal QTc prolongation. Holter monitoring using the "QT clock" concept may provide an easy, fast, and accurate method for assessing the true personalized burden of QTc prolongation.
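The Bazett correction and the gender-specific thresholds used in this study can be sketched in a few lines. This is a minimal illustration of the per-beat computation, not the study's measurement pipeline; the example QT/RR values are invented for demonstration.

```python
import math

def qtc_bazett_ms(qt_ms, rr_ms):
    """Bazett-corrected QT interval: QTc = QT / sqrt(RR), with RR in seconds.

    Both the input QT and the returned QTc are in milliseconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

def is_prolonged(qtc_ms, sex):
    """Gender-specific prolongation thresholds from the study:
    QTc >= 470 ms in women, QTc >= 450 ms in men."""
    return qtc_ms >= (470.0 if sex == "F" else 450.0)

# A 400 ms QT at 75 bpm (RR = 800 ms) corrects to ~447 ms.
qtc = qtc_bazett_ms(400.0, 800.0)
print(round(qtc, 1), is_prolonged(qtc, "M"))  # → 447.2 False
```

Applying this classification to every beat in a 24-hour recording, then bucketing the results by hour of day, yields the kind of personalized prolongation-burden profile the "QT clock" visualizes.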
Ubiquity of mobile devices with rich sensory capabilities has given rise to the mobile crowd-sens... more Ubiquity of mobile devices with rich sensory capabilities has given rise to the mobile crowd-sensing (MCS) concept, in which a central authority (the platform) and its participants (mobile users) work collaboratively to acquire sensory data over a wide geographic area. Recent research in MCS highlights the following facts: 1) a utility metric can be defined for both the platform and the users, quantifying the value received by either side; 2) incentivizing the users to participate is a non-trivial challenge; 3) correctness and truthfulness of the acquired data must be verified, because the users might provide incorrect or inaccurate data, whether due to malicious intent or malfunctioning devices; and 4) an intricate relationship exists among platform utility, user utility, user reputation, and data trustworthiness, suggesting a co-quantification of these interrelated metrics. In this paper, we study two existing approaches that quantify crowd-sensed data trustworthiness, based on statistical and vote-based user reputation scores. We introduce a new metric— collaborative reputation scores—to expand this definition. Our simulation results show that collaborative reputation scores can provide an effective alternative to the previously proposed metrics and are able to extend crowd sensing to applications that are driven by a centralized as well as decentralized control.
Widespread use of connected smart devices that are equipped with various built-in sensors has int... more Widespread use of connected smart devices that are equipped with various built-in sensors has introduced the mobile crowdsensing concept to the IoT-driven information and communication applications. Mobile crowdsensing requires implicit collaboration between the crowdsourcer/recruiter platforms and users. Additionally, users need to be incentivized by the crowdsensing platform because each party aims to maximize their utility. Due to the participatory nature of data collection, trustworthiness and truthfulness pose a grand challenge in crowdsensing systems in the presence of malicious users, who either aim to manipulate sensed data or collaborate unfaithfully with the motivation of maximizing their income. In this paper, we propose a game-theoretic approach for trustworthiness-driven user recruitment in mobile crowdsensing systems that consists of three phases: i) user recruitment, ii) collaborative decision making on trust scores, and iii) badge rewarding. Our proposed framework incentivizes the users through a sub-game perfect equilibrium (SPE) and gamification techniques. Through simulations , we show that the platform and user utilities, defined in terms of costs and revenues, can be improved respectively by up to the order of 50% and of at least 15% when compared to fully-distributed and user-centric trustworthy crowdsensing.
Portable medical devices generate volumes of data that could be useful in identifying health risk... more Portable medical devices generate volumes of data that could be useful in identifying health risks. The proposed method filters patients’ electrocardiograms (ECGs) and applies machine-learning classifiers to identify cardiac health risks and estimate severity. The authors present the results of applying their method in a case study.
The past few decades have witnessed incredible advances in human health care, owing to the invent... more The past few decades have witnessed incredible advances in human health care, owing to the invention of devices such as MRI scanners, which allow physicians to monitor personal health in more detail than was ever previously possible. Such advances have drastically improved diagnostic quality and patient health care. Central to this incredible progress was the uncanny ability of technologists and academics to invent ever more useful tools to help physicians, be it the X-ray machine, CT, or MRI scanner. Whereas the aforementioned past-decades' tools aimed at acquiring personal data, the advent of the Internet-of-Things, vast computational power available in the cloud, and new data analytics algorithms will completely change the way we acquire and process medical data to improve health care going forward. In this paper, we conduct a quantitative feasibility study of a Digital Health (D-Health) system that is aimed at acquiring and processing health data using the emerging Internet-of-Everything paradigm. We specifically investigate the technological feasibility of communication, software, and data privacy aspects.
Face recognition is a sophisticated problem requiring a significant commitment of computer resour... more Face recognition is a sophisticated problem requiring a significant commitment of computer resources. A modern GPU architecture provides a practical platform for performing face recognition in real time. The majority of the calculations of an eigenpicture implementation of face recognition are matrix multiplications. For this type of computation, a conventional computer GPU is capable of computing in tens of milliseconds data that a CPU requires thousands of milliseconds to process. In this chapter, we outline and examine the different components and computational requirements of a face recognition scheme implementing the Viola-Jones Face Detection Framework and an eigenpicture face recognition model. Face recognition can be separated into three distinct parts: face detection, eigenvector projection, and database search. For each, we provide a detailed explanation of the exact process along with an analysis of the computational requirements and scalability of the operation.
The Digital Health (D-Health) era is expected to be the " next big thing " since the invention of... more The Digital Health (D-Health) era is expected to be the " next big thing " since the invention of the internet, characterized by inexpensive and widespread medical data acquisition devices, widespread availability of identity-removed health data, and analytics algorithms that provide remote health monitoring feedback to doctors in realtime. Recent years have brought incremental developments in three key technological areas towards the realization of the D-Health era: data acquisition, secure data transmission/storage, and data analytics. i) For data acquisition, the emerging Internet-of-Things (IoT) devices are becoming a viable technology to enable the acquisition of remote health monitoring data. ii) For data storage, emerging system-level and cryptographic mechanisms provide secure and privacy-preserving transmission, storage, and sharing of the acquired data. iii) For data analytics, emerging decision support algorithms provide a mechanism for healthcare professionals to base their clinical diagnoses partially on machine-suggested statistical inferences that rely on a wide corpus of accumulated data. The D-Health era will create new business opportunities in all of these areas. In this paper, we propose a generalized structure for a D-Health system that is capable of remote health monitoring and decision support. We formulate our proposed structure around potential business opportunities and conduct technical feasibility studies.
Smart city sensing calls for crowdsensing via mobile devices that are equipped with various built... more Smart city sensing calls for crowdsensing via mobile devices that are equipped with various built-in sensors. As incentivizing users to participate in distributed sensing is still an open research issue, the trustworthiness of crowdsensed data is expected to be a grand challenge if this cloud-inspired recruitment of sensing services is to be adopted. Recent research proposes reputation-based user recruitment models for crowdsensing; however, there is no standard way of identifying adversaries in smart city crowdsensing. This paper adopts previously proposed vote-based approaches, and presents a thorough performance study of vote-based trustworthiness with trusted entities that are basically a subset of the participating smartphone users. Those entities are called trustworthy anchors of the crowdsensing system. Thus, an anchor user is fully trustworthy and is fully capable of voting for the trustworthiness of other users, who participate in sensing of the same set of phenomena. Besides the anchors, the reputations of regular users are determined based on vote-based (distributed) reputation. We present a detailed performance study of the anchor-based trustworthiness assurance in smart city crowdsensing through simulations, and compare it with the purely vote-based trustworthiness approach without anchors, and a reputation-unaware crowdsensing approach, where user reputations are discarded. Through simulation findings, we aim at providing specifications regarding the impact of anchor and adversary populations on crowdsensing and user utilities under various environmental settings. We show that significant improvement can be achieved in terms of usefulness and trustworthiness of the crowdsensed data if the size of the anchor population is set properly.
Energy harvesting systems that couple solar panels with supercapacitor buffers offer an attractive option for powering computational systems deployed in field settings, where power infrastructure is inaccessible. Supercapacitors offer a particularly compelling advantage over electrochemical batteries for such settings because of their ability to survive many more charge–discharge cycles. We share UR-SolarCap, a versatile open-source design for such a harvesting system that targets embedded system applications requiring power in the 1–10 W range. Our system is designed for high efficiency and controllability and, importantly, supports auto-wakeup from a state of complete energy depletion. This paper summarizes our design methodology and the rationale behind our design and configuration decisions. Results from the operation and testing of a system realized with our design demonstrate: 1) an achievable harvester efficiency of 85%; 2) the ability to maintain sustained operation over a two-week period when the solar panel and buffer are sized appropriately; and 3) a robust auto-wakeup functionality that resumes system operation upon the availability of harvestable energy after a period in which the system has been forced into a dormant state because of a lack of usable energy. To facilitate the use of the system by researchers exploring embedded system applications in environments that lack a power infrastructure, our designs are available for download as an archive containing design schematics, Printed Circuit Board (PCB) files, firmware code, and a component list for assembly of the system. In addition, a limited number of pre-assembled kits are available upon request.
Continuous, high-data-rate visual sensing and processing in the field is important for intelligent transportation, environmental monitoring, and area/context awareness. Rechargeable batteries in self-sustainable systems suffer from adverse environmental impact and fast aging. Advancements in supercapacitor energy density and low-power processors have reached an inflection point, where a data-intensive visual sensing system can rely solely on supercapacitors for energy buffering. This paper demonstrates the first working prototype of such a system, consisting of eight 3,000-Farad supercapacitors, a custom-designed 70-mW controller/harvester board, and a Nexus 7 tablet computer. We leverage the voltage-to-stored-energy relationship in capacitors to enable precise energy buffer modeling (no more than 3% error in time-to-depletion prediction). We find the supercapacitor self-discharge (or leakage) to be a minor issue in practice, where the operating power significantly exceeds the leakage power. However, accurate energy budgeting must account for the variation of effective capacitance, particularly lower capacitance at lower voltages nearing energy depletion. Precise energy budgeting allows model-driven system adaptation (such as dynamic CPU configurations on low-power chipsets) and consequently realizes high, stable operational quality-of-service. Our working prototype has been successfully deployed at a campus building rooftop, where it analyzes nearby traffic patterns continuously.
Figure 1: Picture of our deployed system on a seven-story building rooftop (center picture) that includes a camera, a block of solar panels, and a system box. The system box (left-side picture) contains a Nexus 7 tablet computer (without its internal rechargeable battery) sustained by eight Maxwell 3,000-Farad supercapacitors (wrapped in black tape), and a custom-built controller board (at the top of the box). The energy buffering capacity of the supercapacitor block is about 1.4× that of the original Nexus 7 battery.
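The voltage-to-stored-energy relationship this prototype leverages can be sketched in a few lines. The capacitance, voltage window, and load below are illustrative assumptions rather than the paper's measured parameters, and the model deliberately ignores the voltage-dependent effective capacitance the abstract warns about:

```python
def stored_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 0.5 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

def time_to_depletion_s(capacitance_f, v_now, v_cutoff, load_power_w):
    """Seconds until the buffer drains from v_now to the cutoff voltage,
    assuming a constant load and ideal (voltage-independent) capacitance."""
    usable_j = (stored_energy_j(capacitance_f, v_now)
                - stored_energy_j(capacitance_f, v_cutoff))
    return usable_j / load_power_w

# Eight 3,000 F cells in series act like a single 375 F bank.
bank_f = 3000 / 8
hours = time_to_depletion_s(bank_f, v_now=5.0, v_cutoff=3.0,
                            load_power_w=4.0) / 3600
print(f"{hours:.2f} h to depletion")  # 0.21 h under these toy numbers
```

A production-grade predictor would replace the constant capacitance with a voltage-dependent effective capacitance curve, which is how the prototype keeps its time-to-depletion error under 3%.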
The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data are among the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.
This paper presents an overview of passive radio frequency (RF) energy reception and power harvesting circuits for isolated communications and computing systems lacking access to primary power sources. A unified understanding of the energy harvesting alternatives is provided, followed by an elaborate study of RF energy harvesting within the context of embedded systems. A detailed discussion of RF technologies, ranging from directed communications signal reception to dispersed ambient power harvesting, is then given. A comparative focus on design tradeoffs and process alterations represents the diversity of applications requiring wireless RF harvesting units. Also included is an analysis of system combinations, and of how wake-up units, active storage, and duty cycling play roles in the consumption and harvesting of RF energy.
The collection of long-term health data is accelerating with the advent of portable/wearable medical devices including electrocardiograms (ECGs). This large corpus of data presents great opportunities to improve the quality of cardiac care. However, analyzing the data from these sensors is a challenge; the relevant information from 120,000 heart beats per patient per day must be condensed into a human-readable form. Our goal is to facilitate the analysis of these unwieldy data sets. We have developed an open-source tool for creating easy-to-interpret plots of cardiac information over long periods. We call these plots ECG clocks. The utility of our ECG clock library is demonstrated through multiple examples drawn from a database of 24-h Holter recordings. In these case studies, we focus on the visualization of heart rate and QT dynamics. The ECG clock concept is shown to be relevant for both physicians and researchers, for identifying healthy and abnormal values and patterns in ECG recordings. In this paper, we describe how to use the ECG clock library to analyze 24-h ECG recordings, and how to extend the source code for your own purposes. The tool is applicable to a wide range of cardiac monitoring tasks, such as heart rate variability or ST elevation. This library, which we have made freely available, can help provide new insights into circadian patterns of cardiac function in individuals and groups.
We propose a VM migration approach named Energy Saving Virtual Machine Migration with Minimum Disruption (ESVM3D) that reduces SLA violations by running VMs in a data center with more available physical hosts, as opposed to shutting down idle hosts to save energy. Compared to previously proposed power-minimizing VM migration algorithms, simulation results show a 40% reduction in VM migrations and a 10% energy saving as a result of this approach. In summary, ESVM3D achieves a 70% reduction in the number of host shutdowns with negligible SLA degradation (0.1%) relative to the previously proposed approaches, translating to similar SLA performance without degrading energy consumption.
As the capabilities of smartphones expand, so do consumers' expectations for resource-intensive mobile applications. However, mobile devices are ill-suited to execute most of these applications due to their hardware limitations. Computational offloading offers a way to augment mobile computation power, but it introduces a communication latency, potentially weakening or negating its advantages. To meet the user demand for high performance, we propose a new service architecture called Acceleration as a Service (AXaaS). We formulate AXaaS based on the observation that most resource-intensive applications, such as real-time face recognition and augmented reality, have similar resource-demand characteristics: a vast majority of the program execution time is spent on a limited set of library calls, such as Generalized Matrix-Multiply (GEMM) operations or FFT. Our AXaaS model suggests accelerating only these operations by the Telecom Service Providers (TSP). We envision the TSP offering this service through a monthly computational service charge, much like their existing monthly bandwidth charge. We demonstrate the technological and business feasibility of AXaaS on a proof-of-concept real-time face recognition application.
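The AXaaS premise, that a handful of dense kernels dominate runtime, can be checked with a toy profile. The matrix sizes, the two-stage pipeline, and the function name below are assumptions for illustration, not the paper's actual workload:

```python
import time

import numpy as np

def profile_pipeline(frame):
    """Time the two dense kernels AXaaS would offload: GEMM and FFT."""
    rng = np.random.default_rng(1)
    w = rng.random((frame.shape[1], 256))  # stand-in projection weights

    t0 = time.perf_counter()
    features = frame @ w                        # GEMM: first offload candidate
    t_gemm = time.perf_counter() - t0

    t0 = time.perf_counter()
    spectrum = np.fft.rfft(features, axis=1)    # FFT: second offload candidate
    t_fft = time.perf_counter() - t0
    return t_gemm, t_fft

frame = np.random.default_rng(0).random((512, 1024))  # stand-in camera frame
t_gemm, t_fft = profile_pipeline(frame)
print(f"GEMM: {t_gemm * 1e3:.2f} ms, FFT: {t_fft * 1e3:.2f} ms")
```

In a real deployment, per-kernel timings of this kind would identify which library calls are worth routing to the TSP-side accelerator.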
A mobile-cloud architecture provides a practical platform for performing face recognition on a mobile device. However, using a mobile-cloud architecture to perform real-time face recognition presents several challenges, including resource limitations and long network delays. In this paper, we identify three approaches for accelerating the execution of the face recognition application by utilizing an intermediate device called a cloudlet. We study in detail one of these approaches, using the cloudlet to perform pre-processing, and quantify the maximum attainable acceleration. Our experimental results show up to a 128× improvement in response time when appropriate cloudlet hardware is used.
As global healthcare systems transition into the digital era, remote patient health monitoring will be widespread through the use of inexpensive monitoring devices, such as ECG patches, glucose monitors, etc. Once a sensor-concentrator-cloudlet-cloud infrastructure is in place, it is not unrealistic to imagine a scenario in which a physician monitors 20–30 patients remotely. Such an infrastructure will revolutionize clinical diagnostics and preventative medicine by allowing doctors to access long-term and real-time information, which cannot be obtained from short-term in-hospital ECG recordings. While the large amount of sensor data available to a physician is incredibly valuable clinically, it is overwhelming in raw form. In this paper, the data handling aspect of such a long-term health monitoring system is studied. Novel ways to record, aggregate, and visualize this flood of sensory data in an intuitive manner are introduced, which allow a doctor to review days' worth of data in a matter of seconds. This system is one of the first attempts to provide a tool that allows the visualization of long-term monitoring data acquired from multiple sensors.
The QT interval is a risk marker for cardiac events such as torsades de pointes. However, QT measurements obtained from a 12-lead ECG during clinic hours may not capture the full extent of a patient's daily QT range. The purpose of this study was to evaluate the utility of 24-hour Holter ECG recording in patients with long QT syndrome (LQTS) to identify dynamic changes in the heart rate-corrected QT interval, and to investigate methods of visualizing the resulting datasets. Beat-to-beat QTc (Bazett) intervals were automatically measured across 24-hour Holter recordings from 202 LQTS type 1, 89 type 2, and 14 type 3 genotyped patients and a reference group of 200 healthy individuals. We measured the percentage of beats with QTc greater than the gender-specific threshold (QTc ≥470 ms in women and QTc ≥450 ms in men). The percentage of beats with QTc prolongation was determined across the 24-hour recordings. Based on the median percentage of heartbeats per patient with QTc prolongation, LQTS type 1 patients showed more frequent QTc prolongation during the day (~3 PM) than they did at night (~3 AM): 97% vs 48%, P ≈ 10^-4 for men, and 68% vs 30%, P ≈ 10^-5 for women. LQTS type 2 patients showed less frequent QTc prolongation during the day compared to nighttime: 87% vs 100%, P ≈ 10^-4 for men, and 62% vs 100%, P ≈ 10^-3 for women. In patients with genotype-positive LQTS, significant differences exist in the degree of daytime and nocturnal QTc prolongation. Holter monitoring using the "QT clock" concept may provide an easy, fast, and accurate method for assessing the true personalized burden of QTc prolongation.
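The beat-to-beat values in this study use the Bazett correction, QTc = QT / sqrt(RR) with RR in seconds, compared against the gender-specific thresholds above. A minimal sketch of that calculation (the function names are ours, not from the study's software):

```python
import math

def qtc_bazett_ms(qt_ms, rr_ms):
    """Bazett heart-rate correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

def is_prolonged(qtc_ms, sex):
    """Gender-specific cutoffs from the study: >=470 ms (women), >=450 ms (men)."""
    return qtc_ms >= (470.0 if sex == "female" else 450.0)

# Example: a 400 ms QT interval at 80 bpm (RR = 750 ms).
qtc = qtc_bazett_ms(400, 750)
print(f"QTc = {qtc:.0f} ms")        # QTc = 462 ms
print(is_prolonged(qtc, "male"))    # True
print(is_prolonged(qtc, "female"))  # False
```

Applying this per beat across a 24-hour Holter recording yields the percentage-of-prolonged-beats statistic the study reports.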
To meet the user demand for an ever-increasing mobile-cloud computing performance for resource-intensive mobile applications, we propose a new service architecture called Acceleration as a Service (AXaaS). We formulate AXaaS based on the observation that most resource-intensive applications, such as real-time face recognition and augmented reality, have similar resource-demand characteristics: a vast majority of the program execution time is spent on a limited set of library calls, such as Generalized Matrix-Multiply (GEMM) operations or FFT. Our AXaaS model suggests accelerating only these operations by the Telecom Service Providers (TSP). We envision the TSP offering this service through a monthly computational service charge, much like their existing monthly bandwidth charge. We demonstrate the technological and business feasibility of AXaaS on a proof-of-concept real-time face recognition application. We elaborate on the consumer, developer, and the TSP view of this model. Our results confirm AXaaS as a novel and viable business model.
Papers by Tolga Soyata
In this chapter, we outline and examine the different components and computational requirements of a face recognition scheme implementing the Viola-Jones Face Detection Framework and an eigenpicture face recognition model. Face recognition can be separated into three distinct parts: face detection, eigenvector projection, and database search. For each, we provide a detailed explanation of the exact process along with an analysis of the computational requirements and scalability of the operation.
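The latter two of the three parts named here (eigenvector projection and database search) can be sketched with a toy eigenpicture pipeline. The array sizes, noise level, and variable names below are illustrative assumptions, and the Viola-Jones detection stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_pixels, n_components = 20, 64 * 64, 8

# Training images as row vectors (stand-ins for detected, cropped faces).
faces = rng.random((n_train, n_pixels))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenpictures: principal eigenvectors of the covariance, obtained via SVD.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:n_components]                 # (k, n_pixels)

# Database of k-dimensional signatures for each enrolled face.
gallery = centered @ basis.T              # (n_train, k)

# Eigenvector projection of a slightly noisy probe image of face #3.
probe = faces[3] + 0.01 * rng.random(n_pixels)
weights = (probe - mean_face) @ basis.T

# Database search: nearest signature by Euclidean distance.
match = int(np.argmin(np.linalg.norm(gallery - weights, axis=1)))
print(match)  # recovers enrolled face 3
```

The scalability concern the chapter analyzes is visible in the shapes: projection costs O(k · n_pixels) per probe, while the database search grows linearly with the number of enrolled faces.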