
Seehwan Yoo

ABSTRACT A new I/O scheduling algorithm for SSDs, called proportional work-conserving scheduling, is proposed. The algorithm provides proportional fairness among tasks: it differentiates each task's probability of fully utilising its quantum, which is controlled by adjusting the opportunistic waiting time. Its proportional fairness is formally proven, and its validity is demonstrated through numerical evaluation and experimental results from a Linux implementation.
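The abstract does not give the scheduler's internals, but the core idea can be sketched as a toy model: each task's opportunistic waiting window is scaled by its weight, so higher-weight tasks are more likely to have a request arrive in time and fully use their quantum. All names and constants below are assumptions for illustration, not the paper's implementation.

```python
# Toy model of weight-proportional opportunistic waiting (hypothetical).
import random

def opportunistic_wait(weight, base_wait_us=100, max_weight=10):
    """Waiting window grows with task weight; constants are assumptions."""
    return base_wait_us * min(weight, max_weight)

def dispatch(tasks, rounds=1000):
    """tasks: {name: weight}. Returns fully-used quanta per task.
    A task keeps its quantum if a request arrives within its wait window,
    modeled here as a uniform random arrival over a 1000 us horizon."""
    random.seed(0)                            # deterministic toy run
    service = {name: 0 for name in tasks}
    names = list(tasks)
    for r in range(rounds):
        name = names[r % len(names)]          # round-robin turn
        wait = opportunistic_wait(tasks[name])
        if random.uniform(0, 1000) < wait:
            service[name] += 1                # quantum fully used
        # else: the quantum goes unused in this toy model; a real
        # work-conserving scheduler would donate it to another task.
    return service
```

In this model a task with weight 8 ends up with roughly four times the serviced quanta of a task with weight 2, which is the proportional-fairness effect the abstract describes.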
ABSTRACT Enterprise servers require customized solid-state drives (SSDs) to satisfy their specialized I/O performance and reliability requirements. For effective use of SSDs for enterprise purposes, SSDs must be designed considering requirements such as those related to performance, lifetime, and cost constraints. However, SSDs have numerous hardware and software design options, such as flash memory types and block allocation methods, which have not been well analyzed yet, but on which the SSD performance depends. Furthermore, there is no methodology for determining the optimal design for a particular I/O workload. This paper proposes SSD-Tailor, a customization tool for SSDs. SSD-Tailor determines a near-optimal set of design options for a given workload. SSD designers can use SSD-Tailor to customize SSDs in the early design stage to meet the customer requirements. We evaluate SSD-Tailor with nine I/O workload traces collected from real-world enterprise servers. We observe that SSD-Tailor finds near-optimal SSD designs for these workloads by exploring only about 1% of the entire set of design candidates. We also show that the near-optimal designs increase the average I/O operations per second by up to 17% and decrease the average response time by up to 163% as compared to an SSD with a general design.
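SSD-Tailor's search strategy is not detailed in the abstract, so the sketch below models design-space exploration with simple hill-climbing over discrete design options, evaluating only a small fraction of all candidates. The option names and the cost function are made up for illustration; a real tool would score each candidate with trace-driven simulation of the target workload.

```python
# Hypothetical design space; real SSD options are far richer.
DESIGN_SPACE = {
    "flash_type": ["SLC", "MLC", "TLC"],
    "block_alloc": ["static", "dynamic", "hybrid"],
    "over_provision": [0.07, 0.14, 0.28],
    "mapping": ["page", "block", "hybrid"],
}

def cost(design):
    """Stand-in for trace-driven workload simulation (entirely synthetic)."""
    score = {"SLC": 1, "MLC": 2, "TLC": 3}[design["flash_type"]]
    score += {"static": 3, "dynamic": 1, "hybrid": 2}[design["block_alloc"]]
    score += 1.0 / design["over_provision"]
    score += {"page": 1, "block": 3, "hybrid": 2}[design["mapping"]]
    return score

def hill_climb(start):
    """Greedy local search: evaluates far fewer designs than the full
    cross-product, analogous to exploring ~1% of all candidates."""
    evaluated = 0
    current = dict(start)
    best = cost(current); evaluated += 1
    improved = True
    while improved:
        improved = False
        for key, options in DESIGN_SPACE.items():
            for opt in options:
                if opt == current[key]:
                    continue
                cand = dict(current); cand[key] = opt
                c = cost(cand); evaluated += 1
                if c < best:
                    best, current, improved = c, cand, True
    return current, best, evaluated
```

Even on this tiny 81-point space, the search converges after evaluating a small subset of designs; the gap widens dramatically as the number of options grows.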
In this paper, we propose a mechanism to eliminate the performance anomaly of IEEE 802.11b. Performance anomaly happens when nodes that have different transmission rates are in the same wireless cell. All the nodes in the cell might experience the same throughput even though their transmission rates are different, because DCF of WLAN provides equal probability of channel access but does not guarantee equal utilization of the wireless channel among the nodes. To reduce such a performance anomaly, we adjust the frame size proportionally depending on the bit rate. Additionally, our scheme eliminates the performance anomaly in the multi-hop case. Simulation study shows that our scheme achieves an improvement in the aggregate throughput and the fairness.
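A minimal sketch of the proportional frame-sizing idea, under assumed constants: scale each node's frame size in proportion to its bit rate so that every frame occupies roughly the same airtime, instead of the same number of bytes. This removes the anomaly where a slow node's long transmissions drag all nodes down to equal throughput.

```python
# Assumed constants for illustration; real 802.11b framing adds PHY/MAC
# overheads (preamble, headers, ACKs) that this sketch ignores.
BASE_RATE_MBPS = 1.0       # slowest 802.11b rate
BASE_FRAME_BYTES = 250     # frame size granted at the base rate (assumption)

def scaled_frame_size(rate_mbps):
    """Frame size proportional to bit rate -> constant airtime per frame."""
    return int(BASE_FRAME_BYTES * rate_mbps / BASE_RATE_MBPS)

def airtime_us(rate_mbps):
    """Transmission time of one scaled frame (payload bits only)."""
    bits = scaled_frame_size(rate_mbps) * 8
    return bits / rate_mbps   # bits / Mbps gives microseconds
```

With these constants, a 1 Mbps node sends 250-byte frames and an 11 Mbps node sends 2750-byte frames, and both occupy the channel for the same 2000 microseconds per frame.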
ABSTRACT Virtualization has recently been applied to consumer electronic (CE) devices such as smart TVs and smartphones. In these virtualized CE devices, memory is a valuable resource, because the virtual machines (VMs) on the devices must share the same physical memory. However, physical memory is usually partitioned and allocated to each VM. This partitioning technique may result in memory shortages, which can seriously degrade application performance. This paper proposes a new swap mechanism for virtualized CE devices with flash memory. This proposed mechanism reduces memory consumption by compressing and sharing unused pages. This swap mechanism stores the unused page in memory of another VM, to increase the available memory of the original VM. The proposed swap mechanism is implemented on the Xen hypervisor and Linux. The mechanism improves the application performance by up to 38% by significantly reducing the number of swap-out requests. The swap-out requests are reduced by up to 88% compared to previous swap mechanisms. Moreover, the mechanism reduces memory consumption of the swap area by up to 79%.
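The compress-and-share swap idea can be sketched as follows; this is an illustration under assumed names, not the Xen implementation. Before a page is swapped out to flash, it is compressed and stored in a donor VM's spare memory pool, so swap-in becomes an in-memory decompression instead of a flash read.

```python
# Hedged sketch of compressed swap into another VM's spare memory.
import zlib

PAGE_SIZE = 4096

class SharedSwapPool:
    """Donor VM's spare memory holding compressed pages (hypothetical API)."""

    def __init__(self):
        self.store = {}                      # page id -> compressed bytes

    def swap_out(self, pid, page):
        """Compress the page and keep it in the pool; return bytes used."""
        assert len(page) == PAGE_SIZE
        self.store[pid] = zlib.compress(page)
        return len(self.store[pid])

    def swap_in(self, pid):
        """Decompress from memory -- no flash I/O in this path."""
        return zlib.decompress(self.store.pop(pid))
```

Low-entropy pages (zero pages, text segments) compress very well, which is what makes the reported reductions in swap-out traffic and swap-area consumption plausible.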
ABSTRACT Facing practical limits to increasing processor frequencies, manufacturers have resorted to multi-core designs in their commercial products. In multi-core implementations, cores in a physical package share the last-level caches to improve inter-core communication. To efficiently exploit this facility, operating systems must employ cache-aware schedulers. Unfortunately, virtualization software, which is a foundation technology of cloud computing, is not yet cache-aware or does not fully exploit the locality of the last-level caches. In this paper, we propose a cache-aware virtual machine scheduler for multi-core architectures. The proposed scheduler exploits the locality of the last-level caches to improve the performance of concurrent applications running on virtual machines. First, we provide a space-partitioning algorithm that migrates and clusters communicating virtual CPUs (VCPUs) into the same cache domain. Second, we provide a time-partitioning algorithm that co-schedules or sequentially schedules the clustered VCPUs. Finally, we present a theoretical analysis proving that our scheduling algorithm supports concurrent applications more efficiently than the default credit scheduler in Xen. We implemented our virtual machine scheduler in the recent Xen hypervisor with para-virtualized Linux-based operating systems. We show that our approach can improve the performance of concurrent virtual machines by up to 19% compared to the credit scheduler.
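The space-partitioning step can be illustrated with a greedy sketch: given pairwise communication counts between VCPUs, co-locate the heaviest-communicating pairs in the same last-level-cache domain, subject to per-domain capacity. This is only an illustration of the clustering idea; the paper's actual algorithm is not specified in the abstract.

```python
# Greedy co-location of communicating VCPUs into cache domains (hypothetical).
def cluster_vcpus(comm, vcpus, n_domains, capacity):
    """comm: {(a, b): message count}. Returns {vcpu: domain index}."""
    placement = {}
    load = [0] * n_domains
    # Place heaviest-communicating pairs first.
    for (a, b), _ in sorted(comm.items(), key=lambda kv: -kv[1]):
        for v in (a, b):
            if v in placement:
                continue
            # Prefer the partner's domain if it still has room.
            partner = b if v == a else a
            d = placement.get(partner)
            if d is None or load[d] >= capacity:
                d = min(range(n_domains), key=lambda i: load[i])
            placement[v] = d
            load[d] += 1
    # VCPUs with no communication edges go to the least-loaded domain.
    for v in vcpus:
        if v not in placement:
            d = min(range(n_domains), key=lambda i: load[i])
            placement[v] = d
            load[d] += 1
    return placement
```

The time-partitioning step would then co-schedule each resulting cluster, so communicating VCPUs run while their shared working set is still resident in the last-level cache.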
ABSTRACT This paper investigates the feasibility of real-time scheduling with the mobile hypervisor Xen-ARM. Real-time support is in high demand, particularly for mobile virtual machines. However, it is difficult to guarantee real-time scheduling with virtual machines because both inter-VM and intra-VM schedulability have to be determined in multi-OS environments. To address the schedulability, first, this paper presents a definition of a real-time virtual machine. Second, this paper analyzes intra-VM schedulability, taking quantization overhead into account. Quantization overhead comes from the tick-based scheduling of Xen-ARM, which requires integer representations of the scheduling period and execution slice. Third, to minimize quantization overhead, this paper provides a new algorithm, called SH-quantization, that provides accurate and efficient parameterization of a real-time virtual machine. Fourth, this paper presents an inter-VM schedulability test for incorporating multiple real-time virtual machines. To evaluate the approach, we implement the SH-quantization algorithm in Xen-ARM and paravirtualize a real-time OS, called xeno-μC/OS-II. We ran extensive experiments with various configurations of real-time tasks on a real hardware platform in order to characterize the scheduling behavior of real-time virtual machines with quantization. The results show that quantization overhead consumes up to 90% of additional CPU bandwidth and that the proposed algorithm guarantees intra/inter-VM schedulability with minimal CPU bandwidth.
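Quantization overhead can be illustrated with a small sketch (this is not SH-quantization itself, whose details the abstract does not give): a tick-based hypervisor can only grant integer (slice, period) pairs, so a fractional requirement C/T must be rounded, inflating the granted CPU bandwidth. Naively rounding the slice up and the period down can inflate bandwidth badly; searching over shorter integer periods often finds a much cheaper pair that still covers the requirement.

```python
# Illustration of quantization overhead in tick-based VM scheduling.
import math

def naive_quantize(c, t):
    """Round slice up and period down to whole ticks."""
    return math.ceil(c), math.floor(t)

def min_bandwidth_quantize(c, t):
    """Try every integer period p <= t; slice = ceil(p * c / t) keeps the
    granted bandwidth at least c/t. Return the pair with least bandwidth."""
    best = None
    for p in range(1, math.floor(t) + 1):
        s = math.ceil(p * c / t)
        if best is None or s / p < best[0] / best[1]:
            best = (s, p)
    return best

c, t = 1.2, 9.7                      # fractional requirement: ~12.4% CPU
s0, p0 = naive_quantize(c, t)        # (2, 9) -> grants 22.2% CPU
s1, p1 = min_bandwidth_quantize(c, t)
```

Here the naive pair nearly doubles the CPU bandwidth actually needed, which is the kind of overhead (up to 90% in the paper's measurements) that a careful parameterization algorithm is designed to minimize.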
Multicast is a multiuser communication scheme. Until now, multicast has been studied to handle one multicast group at a time. However, new applications such as online games need to handle multiple groups simultaneously. This paper proposes BO-multicast (Boolean-operation multicast). Through NS2 simulation, we show that Boolean operations can improve the performance by 610 percent in delay and 210 percent in routing table size.
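The abstract does not spell out BO-multicast's routing, so the following sketch only illustrates the addressing idea: expressing a destination as a Boolean combination of existing multicast groups instead of creating and maintaining a new group for every combination. The expression format is an assumption for illustration.

```python
# Speculative sketch: resolve a Boolean group expression to a member set.
def resolve(groups, expr):
    """groups: {name: set of members}. expr: a group name, or a nested
    ('and'|'or'|'not', ...) tuple. Returns the member set to deliver to."""
    if isinstance(expr, str):
        return groups[expr]
    op, *args = expr
    sets = [resolve(groups, a) for a in args]
    if op == "and":
        return set.intersection(*sets)
    if op == "or":
        return set.union(*sets)
    if op == "not":
        universe = set().union(*groups.values())
        return universe - sets[0]
    raise ValueError(op)
```

An online game could then address, say, members of both the "raid" and "guild" groups with a single expression, reusing the routing state of the two base groups.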
Recently, system virtualization has been applied to consumer electronics such as smart mobile phones. Although multi-core processors have become a viable solution for complex applications on consumer electronics, how to utilize multi-core resources in the virtualization layer has not been sufficiently researched. In this paper, we present the design and implementation of a hypervisor for multi-core CE devices; the hypervisor aims to improve network performance by fully utilizing the multi-core processor.
This paper addresses the I/O latency issue within Xen-ARM. Although Xen-ARM's split driver presents reliable driver isolation, it requires additional inter-VM scheduling. Consequently, the credit scheduler within Xen-ARM results in unsatisfactory I/O latency for real-time guest OSes. This paper analyzes the I/O latency in Xen-ARM's interrupt path and proposes a new scheduler to bound I/O latency. Our scheduler dynamically assigns