Article

A Combinational Buffer Management Scheme in Mobile Opportunistic Network

School of Computer and Information Engineering, Henan Normal University, Xinxiang 453000, China
* Author to whom correspondence should be addressed.
Future Internet 2017, 9(4), 82; https://doi.org/10.3390/fi9040082
Submission received: 19 September 2017 / Revised: 7 November 2017 / Accepted: 8 November 2017 / Published: 14 November 2017

Abstract

Nodes in a Mobile Opportunistic Network (MON) have to cache packets to cope with intermittent connections. The buffer management strategy has a clear impact on the performance of a MON and has attracted increasing attention in recent years. Because node storage capacity is limited, traditional buffer management strategies simply drop messages based on message properties and neglect the collaboration between neighbors, which limits the achievable performance improvement. Effective buffer management strategies are therefore necessary to ensure that each node retains enough buffer space to store messages when its buffer approaches congestion. In this paper, we propose a buffer management strategy that integrates the characteristics of messages and nodes, and migrates redundant messages to neighbors instead of deleting them, so as to optimize the total utility. Simulation results show that, compared with traditional strategies, it clearly improves the delivery ratio, overhead ratio, and average delay, and reduces the number of hops.

1. Introduction

Mobile Opportunistic Networks (MONs) [1] are a special kind of ad hoc network that uses node mobility to transfer packets, and they have become increasingly popular in recent years. With the rapid development of wireless communication technology and the widespread adoption of communication devices, the applications of MONs have extended significantly from disaster recovery networks [2], wildlife tracking networks [3], and interplanetary networks [4] to mobile vehicle networks [5].
However, due to intermittent connections and unstable links between nodes, a MON adopts a store-carry-forward paradigm and delivers messages hop by hop. Connections between nodes are sparse and contacts are intermittent. The unpredictability of network links requires nodes to cache messages until they meet the destination node. Nodes exchange information when they contact each other and forward messages in a flooding manner to increase the delivery ratio. The resulting excess of duplicate copies severely congests node buffers, which reduces the utilization of network resources. Buffer management strategies therefore play an important role in the transmission of an opportunistic network, with a direct impact on the message delivery ratio, overhead ratio, and average delivery delay. It is thus important to design an efficient buffer management strategy for MONs.
Existing research on buffer management in opportunistic networks mainly includes the following strategies: Drop Head (DH), Drop Tail (DT), Drop Random (DR), Remaining Lifetime (RL), Drop Most Forwarded (MOFO), Drop Least Recently Received (DLRR), and history-based drop [6,7]. These strategies usually sort messages according to message properties [8], such as lifetime or hop count, and delete the messages with the lowest priority when the buffer overflows. They do not consider how collaboration between neighbors affects buffer management performance. In addition, many works consider only message attributes and ignore the characteristics of the nodes in a MON [9,10,11]. Taking these facts into account, we propose a queuing strategy that considers both the message and the node, including the message size, hop count, time-to-live (TTL), and node utility; our strategy also includes an efficient message migration scheme that deletes and transfers messages effectively. We evaluate it through simulation with four metrics: delivery ratio, overhead ratio, average delivery delay, and hop count. Our goal is a comprehensive buffer management policy for mobile opportunistic networks. The main contributions are summarized as follows.
  • We integrate the attributes of nodes into the utility value of messages and decide whether to receive a new message based on the utility value of the message and that of the node.
  • We migrate messages to neighbor nodes rather than deleting them when a node's buffer is full.
The rest of this paper is organized as follows. We first review related works in Section 2. In Section 3, we describe the buffer management policy in detail. In Section 4, we conduct the simulation experiment. Finally, we conclude this paper and give a short discussion on future issues in Section 5.

2. Related Works

In recent years, buffer management policy has gradually become one of the hot topics in MON research, and effective buffer management policies are necessary for nodes [12]. This is mainly because nodes have to cache packets during data forwarding until they encounter the destination node; efficient management of the node buffer is therefore necessary in opportunistic networks. A reasonable buffer management policy allows a node to keep more messages and lets messages reach the destination node quickly. Most existing buffer management policies use the message utility value as the criterion for deciding which messages to delete, so we classify them into two categories: single-standard and multiple-standard. A single standard means that only one property of the message is used to calculate the message utility value, such as the number of copies, the message size, or the remaining lifetime. Multiple standards consider several message properties simultaneously when calculating the utility value. The two categories are described in detail below.

2.1. Single Standard

Message delay is unavoidable and delivery may take a considerable amount of time because connectivity in a mobile opportunistic network is intermittent; a node must therefore carry a message until it contacts another node that can act as a relay. For example, Krifa et al. [13] use message delay as the criterion for evaluating message utility values and propose an optimal buffer management strategy based on a distributed algorithm that statistically learns approximate global network knowledge; its purpose is to minimize the average delivery delay or maximize the average delivery ratio. Another important factor in assessing a message is its TTL. The delivery probability increases as the message survival time grows, and the common approach is to remove a packet when its TTL reaches zero. For instance, Scott et al. [14] define the TTL as a timeout value and delete messages whose lifetime exceeds the TTL to reduce congestion in the node buffer.
Analogously, the number of forwarding hops of a message directly affects network overhead in an opportunistic network. The goal of buffer management is for messages to reach the destination node in fewer hops, which reduces both message delay and network overhead. Erramilli and Crovella [9] propose a priority scheduling policy in which the priority of a message in a node is based on its distance from the source to the destination. Elwhishi et al. [15] propose a new message scheduling strategy for epidemic and two-hop forwarding routing in Delay Tolerant Networks (DTNs), so that each message is forwarded or dropped in the way that maximizes its delivery ratio. Ramanathan et al. [16] propose Prioritized Epidemic (PREP) routing, which uses a density gradient method to delete messages based on the source-to-destination time cost; that is, the density gradient decreases as the distance from the destination increases.
Note that the probability of a message arriving at the destination increases with the number of message copies. However, excessive copies in the network also lead to buffer congestion. How to delete message copies while still ensuring the delivery ratio is therefore a central issue. For example, Kim et al. [11] drop the message with the largest number of copies to minimize the influence of a message drop, which results in a high delivery ratio in a DTN. Similarly, the authors of [17] treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities determining how packets should be replicated and deleted in the system.
The size of a message also determines its importance in the node buffer. For example, the possibility of congestion is alleviated if larger and less useful messages are deleted [18]. Rashid et al. [19] propose dropping a buffered message if its size is larger than or equal to that of the incoming message. In the same year, Rashid et al. [10] also survey existing buffer management policies and performance evaluation indicators, and propose the Drop Largest (DLA) sorting strategy, which drops large messages when the node buffer becomes congested. Wang et al. [20] propose a knapsack-based message scheduling and drop strategy for DTNs, and calculate message utility values by quantifying the impact of replicating or dropping them. Building on [20], the work in [21] further decides which message should be deleted by solving a knapsack problem when the buffer overflows, so as to maximize the utility of network resources. Similarly, Li et al. [22] propose an optimal adaptive buffer management scheme for situations where bandwidth is limited and messages vary in size.

2.2. Multiple Standards

Most existing studies focus on only one property of the message in buffer management [8] and do not consider the buffer from multiple perspectives. In recent years, several researchers have tried to solve the buffer congestion problem more comprehensively. Compared with a single standard, the advantage of considering composite message properties is that each property carries a certain weight when calculating the utility value, thus reflecting the overall characteristics of the message. For example, Pan et al. [8] present a comprehensive-integrated buffer management (CIM) strategy that takes all information relevant to message delivery and network resources into consideration; it calculates the utility value of a packet from the lifetime of the message, the size of the message, and the number of forwarding hops. In an opportunistic network, the properties of the nodes should also be considered because the movement trajectories of nodes differ. Yao et al. [23] compute the utility value of cached data from node interest and data transfer probability, and then design an overall cache management strategy including passive and proactive dropping policies and a scheduling policy. The authors of [24] provide message priority information to mobile vehicles by using the node encounter rate and context information such as survival time, number of available copies, and maximum number of forwarded copies. However, this method is passive, and nodes in the communication range cannot actively choose whether to receive a message from other nodes. In contrast, our proposed buffer management strategy enables a node to proactively select and receive messages from other nodes.

2.3. Migration Strategy

While the above strategies can address the problem of buffer overflow, they decrease the message delivery ratio because messages are lost, and none of them consider migrating messages to other nodes. Seligman et al. [25] therefore propose a message migration replacement storage strategy, a Storage Routing (SR) algorithm that determines which messages are migrated to which nodes. The policy defines a migration cost function, composed of a storage cost and a transmission cost, as the migration criterion, and it avoids dropping messages by migrating them to neighbors to resolve node buffer congestion. Moetesum et al. [26] propose the size-aware drop (SAD) strategy, which aims to improve buffer utilization and avoid unnecessary loss of information. Similarly, Zhang et al. [27] apply the concept of revenue management and employ dynamic programming to develop a congestion management strategy for DTNs; they dynamically determine whether to change the state of a message in the node by calculating the maximum expected return over a period of time. However, this migration method does not take the available buffer size of the neighbor node into account and directly forwards the message to the neighbor. We therefore use the dynamic programming method and the revenue management concept to develop a reasonable transfer strategy, and consider the storage capacity of the neighbor node so that its buffer does not become congested after receiving a message.

3. Buffer Management

The design of a buffer management strategy should combine the attributes of the node with those of the message. Most studies consider only message properties and ignore node properties. In addition, since the motion trajectory of a node depends on the mobility model, node attributes differ significantly. We propose a combinational buffer management scheme for mobile opportunistic networks that collects and analyzes message state and takes node properties into account. We describe it from four aspects: preliminaries, queuing strategy, utility value calculation, and evaluation method, in the following sections.

3.1. Preliminaries

Mobile nodes change their locations over time in MONs. In general, the performance of a routing algorithm is evaluated on the movement trajectories and paths formed by node motion. In this paper, we use a real trace, Korea Advanced Institute of Science and Technology (KAIST) [28], and a mobility model, Self-Similar Least-Action Human Walk (SLAW) [29]. We use a dynamic graph G = (V, E) to model node movement in MONs, where V is the set of nodes and E is the set of connections between any two nodes. Nodes exchange information with each other when they are within communication range. We randomly designate the source node and the destination node in the MON. Figure 1 shows the movement trajectory from node A to node D. We assume that the contact time between nodes is exponentially distributed [21] (a small sketch of this contact process is given after the list of assumptions below). In addition, we assume the following properties in MONs:
  • Each node has a limited buffer.
  • Mobility of nodes is independent and nodes have different contact rates.
  • The links have the same bandwidth.
  • A contact of short duration or low data rate may fail to complete a message transmission.
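To make the contact assumption concrete, the following Python sketch samples pairwise meeting times with exponentially distributed gaps and collects the resulting contact events of the dynamic graph G = (V, E); the node count, contact rate, and time horizon are illustrative values, not taken from the paper.

```python
import random

def contact_events(num_nodes=10, rate=0.01, horizon=15_000, seed=1):
    """Sample pairwise contacts whose gaps are exponentially distributed;
    num_nodes, rate, and horizon are illustrative, not values from the paper."""
    rng = random.Random(seed)
    events = []                                  # (time, a, b): an edge of G appearing at `time`
    for a in range(num_nodes):
        for b in range(a + 1, num_nodes):
            t = 0.0
            while True:
                t += rng.expovariate(rate)       # next meeting of the pair (a, b)
                if t > horizon:
                    break
                events.append((t, a, b))
    events.sort()                                # process contacts in time order
    return events

if __name__ == "__main__":
    print(len(contact_events()), "contacts within the simulation horizon")
```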

3.2. Queuing Strategy

Transmission in a MON mainly depends on opportunistic communication between nodes, which are in general mobile devices carried by individuals. We therefore suppose each message carries a relevant information package P = {u_i, v_i, s_i, t_i, h_i} [17], containing the source node u_i, the destination node v_i, the message size s_i, the time t_i elapsed since the message was created, and the number of hops h_i from the source node to the current node. We set the TTL as the threshold of t_i and delete the message when t_i exceeds the TTL.
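As a minimal illustration of the information package, the fields of P = {u_i, v_i, s_i, t_i, h_i} and the TTL check can be written as follows; the class and function names are ours and are not part of the ONE simulator.

```python
from dataclasses import dataclass

@dataclass
class MessageInfo:
    """Per-message information package P = {u_i, v_i, s_i, t_i, h_i}."""
    source: int        # u_i: source node of the message
    destination: int   # v_i: destination node
    size: float        # s_i: message size (MB)
    age: float         # t_i: time elapsed since the message was created (s)
    hops: int          # h_i: hops traversed from the source to the current node

TTL = 300.0            # lifetime threshold for t_i (the value used in Table 1)

def expired(msg: MessageInfo) -> bool:
    """A message is deleted once t_i exceeds the TTL."""
    return msg.age > TTL
```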
We propose a comprehensive buffer management strategy for mobile opportunistic networks. Figure 2 shows the process of the Combinational Buffer Management (CBM) queuing strategy. Two nodes connect with each other and deliver messages when they enter each other's communication range. Before exchanging information, each node first sorts the messages in its buffer by priority. When node B forwards message i to node A, node A determines whether it is congested. If node A has sufficient buffer space, node B sends message i directly to node A, and node A receives message i and stores it in the appropriate position of its cache queue, as shown in Figure 2a. When the buffer space of node A is not enough to store message i, we calculate the utility value of message i and that of node A according to the CBM strategy, where the utility value of node A is determined by the average utility value of all messages in the node. If the utility value of message i is less than that of node A, node A does not store message i and the transfer fails, as shown in Figure 2b. If the utility value of message i is higher than that of node A, node A accepts the message. However, since node A does not have enough buffer space, the buffer replacement function of the CBM policy transfers the message q with the minimum utility value in node A to a neighbor node C that has spare available buffer, until the buffer of node A can hold the incoming message [8], as shown in Figure 2c. Node A then receives message i from node B. When the buffer occupancy of the neighbor node C reaches the threshold, we give up migration and directly delete the least useful messages until node A has enough buffer to store message i.
The novelty of our strategy is that we migrate messages that would otherwise be deleted to neighbor nodes with larger available buffers. The proposed migration strategy effectively ensures that the delivery ratio does not decline because of messages being deleted directly.
The number of message copies is uncontrolled in an opportunistic network. Excessive storage of the same message inevitably fills up the node buffer in a short time; meanwhile, it wastes extensive network resources and lowers buffer utilization [8]. To address this problem, we require that messages in a node are not redundant; in other words, when a new message arrives, the node checks whether it already holds the same message, and if two messages have the same content, we compare their utility values. The advantage of the queuing strategy is that it not only avoids the repeated storage of messages but also improves the node utility value. The transmission process terminates when the message is forwarded to the destination node. We update the information package of the messages in a node when nodes make contact and promptly remove messages that have been delivered successfully, so as to quickly clear redundant messages from the network. Nodes carry more messages by exchanging information, so that messages reach the destination node quickly. Our proposed strategy establishes a reasonable queuing strategy and a novel message migration method based on local information in the mobile opportunistic network. The combinational buffer management scheme evaluates the message and node utility values more comprehensively through multiple attributes of the message.
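The duplicate check described above can be sketched as follows, assuming the buffer is kept sorted by ascending utility value D (defined in Section 3.3); keeping the copy with the smaller D, that is, the copy less likely to be dropped, is our reading of the comparison step rather than an explicit rule from the paper.

```python
def insert_message(buffer, msg_id, d_value):
    """Insert a message (identified by msg_id, with utility value D from Eq. (1))
    into a buffer kept sorted by ascending D. If a copy with the same id is already
    buffered, keep only the copy with the smaller D (our reading of the comparison)."""
    for i, (mid, d) in enumerate(buffer):
        if mid == msg_id:                      # duplicate content already buffered
            if d_value < d:
                buffer[i] = (msg_id, d_value)
                buffer.sort(key=lambda item: item[1])
            return
    buffer.append((msg_id, d_value))
    buffer.sort(key=lambda item: item[1])      # ascending utility value D
```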

3.3. Utility Value Calculation

Since nodes are mobile devices carried by individuals, we assume the node buffer size is restricted in an opportunistic network. A node must choose which messages to delete when the buffer is full. Normally, we drop a message when its TTL elapses [8]. In this article, we define a utility value from the information of both the message and the node. Messages in a node are sorted by their utility values, and the utility value of a node is determined by all messages in the node. We quantify the utility value by the size of the message, the number of forwarding hops from the source node to the current node, and the TTL, and sort all messages in the node in ascending order. As shown in Equation (1), the utility value of message i is
$$D(S, H, T) = \alpha \times \frac{s_i}{S} + \beta \times \frac{h_i}{H_{av}} + \gamma \times \frac{t_i}{\mathrm{TTL}}, \qquad (1)$$
where α, β, and γ are weighting factors representing the impact of the message size, the number of forwarding hops, and the TTL of the message, respectively, with α + β + γ = 1. In addition, S is the buffer size of the node and H_av is the average number of hops over all messages in the node.
According to Equation (1), D(S, H, T) is determined by the attributes of the message itself. Existing studies [16] show that the message delivery ratio increases as the survival time and the number of forwarding hops of a message increase. Therefore, for a message with a long survival time and many forwarding hops, the likelihood that the destination node has already successfully received it is high, and we delete such messages first. Since the messages in the buffer are sorted by ascending utility value, we preferentially delete messages with large utility values. To minimize the utility value of the messages in the network, we define the negative correlation of the utility value of message i as
$$U_i = -D_i. \qquad (2)$$
We then sort the messages in the node buffer in descending order of this negative correlation. Messages with a larger negative-correlation utility value are kept, while messages with a smaller value are deleted or transferred.
The utility value of the node is determined by the average utility value of all messages in the node. Therefore, the utility value of node M is
$$U_M = \frac{\sum_{i=1}^{n} D_i(S, H, T)}{n}, \qquad (3)$$
where n is the number of messages in the node and D_i is the utility value of message i given by Equation (1).
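Equations (1)–(3) translate directly into the following sketch; the weight values are those reported in Section 4.1, and the function names are ours.

```python
ALPHA, BETA, GAMMA = 0.55, 0.3, 0.25   # weighting factors reported in Section 4.1

def message_utility(size, hops, age, buffer_size, avg_hops, ttl=300.0):
    """Eq. (1): D(S, H, T) = alpha*s_i/S + beta*h_i/H_av + gamma*t_i/TTL."""
    return ALPHA * size / buffer_size + BETA * hops / avg_hops + GAMMA * age / ttl

def negative_correlation(d):
    """Eq. (2): U_i = -D_i; larger U means keep, smaller U means drop or migrate."""
    return -d

def node_utility(d_values):
    """Eq. (3): utility of a node, the average D over all messages it buffers."""
    return sum(d_values) / len(d_values) if d_values else 0.0
```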

3.4. Evaluation Method

Algorithm 1 presents the steps performed by the CBM strategy. It starts whenever any two nodes make contact. A node can be a sender or a receiver, and nodes exchange message information during contact (line 2). We first determine whether the node buffer overflows before the two nodes send messages to each other. If node A has enough available buffer A_a, we calculate the utility value of the newly arrived message i and insert it into the buffer queue (lines 4–5). Otherwise, we calculate the utility value of node A and compare it with the utility value of the newly arrived message i (lines 6–7). We give up the transfer when the utility value of message i is less than the utility value of node A (lines 8–9). However, when the utility value of message i is greater than the utility value of node A, we forward the message q with the lowest utility value in node A to a neighbor node according to the migration policy, thereby increasing the utility value of node A and ensuring that node A has enough buffer space to receive message i (lines 10–11). We select one node from the neighbors as the migration target. Then, according to Equation (4) [25], where Av(C) is the available capacity of node C, we determine whether the ratio μ(C) of the available capacity of node C to its total buffer capacity is at least μ0 (in the experiments, we set μ0 = 0.3); meanwhile, we calculate the utility value of the neighbor node (lines 12–14). We specify the value of μ0 to ensure that the buffer of the neighbor node does not become congested after receiving the migrated message. If the utility value of message q is greater than the utility value of node C and μ(C) is not less than μ0, we forward the message to the neighbor node and insert it into its buffer queue (lines 15–17). Otherwise, we give up the transfer of message q (lines 18–19). When no neighbor node satisfies the migration condition, we directly delete the message with the minimum utility value in the buffer of node A until the available buffer space can accommodate message i (line 21). This is the whole process of the CBM policy for message transfer and message migration.
$$\mu(C) = \begin{cases} A_v(C) \div S, & \text{if } s_i < A_v(C), \\ 0, & \text{if } s_i \ge A_v(C). \end{cases} \qquad (4)$$
When a node is close to overflow, the migration method is also applicable for transferring part of the messages in the node to a neighbor with relatively ample available buffer, so that the node still has cache space to save other messages at the next contact. It is worth noting that the neighbor node chosen as the migration target must have adequate available buffer. If we did not consider the free space of the neighbor node and simply forwarded the message to it, we could not determine whether the neighbor has enough buffer to receive the message, and its buffer might overflow at a later moment; this would not only reduce the message delivery ratio and lead to unreasonable buffer utilization, but also waste a great deal of network resources.
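A minimal sketch of the migration eligibility test of Equation (4) with the threshold μ0 = 0.3 follows; treating a neighbor that cannot hold the message as having ratio 0 is our reading of the second case of the equation, and the function names are ours.

```python
MU_0 = 0.3   # threshold on the available-capacity ratio of a neighbor (Section 3.4)

def capacity_ratio(available_c, buffer_size_c, msg_size):
    """Eq. (4): ratio of the available capacity of neighbor C to its buffer size.
    A neighbor that cannot even hold the message gets ratio 0 (our reading of the
    second case), so it never passes the mu_0 test."""
    return available_c / buffer_size_c if msg_size < available_c else 0.0

def can_migrate_to(available_c, buffer_size_c, msg_size):
    return capacity_ratio(available_c, buffer_size_c, msg_size) >= MU_0
```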
Algorithm 1 Message forwarding and migration strategy of CBM.
1:  for each node A, B ∈ V do
2:      if A and B are connected and B is the sender then
3:          for each message i in B's BufferQueue to be sent to A do
4:              if s_i ≤ A_a then
5:                  Calculate the utility value of message i; A receives i and stores it
6:              else
7:                  Calculate the average utility value U(A) of node A
8:                  if U(i) ≤ U(A) then
9:                      Abandon the transfer of message i
10:                 else
11:                     Select the message q with the minimum utility value in A's BufferQueue
12:                     for each node C ∈ V do
13:                         if B ≠ C and μ(C) ≥ μ0 then
14:                             Calculate the average utility value U(C)
15:                             if U(q) ≥ U(C) then
16:                                 C receives q and stores it
17:                             end if
18:                         else
19:                             Abandon the transfer of message q
20:                     end for
21:                     Node A deletes the message q until node A can store message i in its BufferQueue
22:                 end if
23:             end if
24:         end for
25:     end if
26: end for
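For readers who prefer an executable form, the following Python sketch mirrors Algorithm 1 under our reading that all comparisons use the negative-correlation utility U = -D of Equation (2); the class layout and method names are ours, and the sketch omits the lower-level details handled by the ONE simulator.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Msg:
    mid: int            # message identifier
    size: float         # s_i, message size
    d: float            # utility value D from Eq. (1); U = -D per Eq. (2)

@dataclass
class Node:
    capacity: float
    buffer: List[Msg] = field(default_factory=list)

    @property
    def free_space(self) -> float:
        return self.capacity - sum(m.size for m in self.buffer)

    def node_u(self) -> float:
        """Negative-correlation counterpart of Eq. (3): average U over buffered messages."""
        return -sum(m.d for m in self.buffer) / len(self.buffer) if self.buffer else 0.0

    def store(self, m: Msg) -> None:
        self.buffer.append(m)
        self.buffer.sort(key=lambda x: x.d)        # ascending D, i.e., descending U

def cbm_receive(a: Node, msg: Msg, neighbors: List[Node], mu_0: float = 0.3) -> bool:
    """Receive/migration decision at node A for an incoming message (Algorithm 1, our reading)."""
    if msg.size <= a.free_space:                   # lines 4-5: enough room, just store
        a.store(msg)
        return True
    if -msg.d <= a.node_u():                       # lines 8-9: U(i) <= U(A), abandon transfer
        return False
    while msg.size > a.free_space and a.buffer:    # lines 10-21: make room for msg
        q = a.buffer[-1]                           # message with minimum U (maximum D)
        a.buffer.remove(q)
        for c in neighbors:                        # lines 12-19: try to migrate q
            ratio = c.free_space / c.capacity if q.size < c.free_space else 0.0   # Eq. (4)
            if ratio >= mu_0 and -q.d >= c.node_u():    # lines 13-16: migrate q to C
                c.store(q)
                break
        # if no neighbor was eligible, q has simply been deleted (line 21)
    if msg.size > a.free_space:
        return False                               # message larger than the whole buffer
    a.store(msg)
    return True
```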

4. Simulation

4.1. Network Model and Simulation Environment

We simulate our experiments with the Opportunistic Network Environment (ONE) simulator [30]. We use the following two datasets.
  • KAIST is a real dataset that records the daily activities of 32 students living in campus dormitories in Daejeon, Korea, who carried Garmin GPS 60CSx handheld receivers from 26 September 2006 to 3 October 2007. The GPS receivers read and record a position every 10 s with an accuracy of 3 m. The participants walked most of the time during the experiment, but occasionally traveled by bus, trolley, car, or subway. A total of 92 daily trajectories were collected.
  • SLAW is a mobility model based on GPS traces of human walks, including 226 daily traces collected from 101 volunteers over five hours at five different outdoor locations. These traces cover people of the same nature, such as students on the same university campus or visitors to a theme park. SLAW can represent the social contexts among volunteers through the common places they visit and their walking patterns.
We set the size of the original messages between 0.5 MB and 1 MB and assign them to nodes at random. The replication and forwarding of messages are achieved through node contacts. Before the formal experiment, we determined through a series of tests that the values of the weighting factors α, β, and γ are 0.55, 0.3, and 0.25, respectively. The detailed simulation parameters are shown in Table 1.
Meanwhile, we consider several classical buffer management policies, namely Random Early Detection (RED), Drop Head (DH), Drop Tail (DT), Drop Random (DR), and Drop Oldest (DOA) [10], and compare these strategies with the CBM strategy in the same scenario. We briefly review the storage and deletion rules of each strategy. The basic idea of the RED algorithm is to use the average queue length to measure the degree of network congestion: the algorithm detects congestion through the average queue length and then calculates a drop probability that grows linearly with the average queue length, thereby deciding which packets should be deleted. DH deletes the packet at the head of the queue and inserts a new message at the end of the queue, whereas DT both deletes and inserts packets at the end of the queue. In DR, the node removes or stores a packet chosen arbitrarily from the buffer queue. In DOA, the node drops the packet with the shortest remaining lifetime [10].
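For reference, the deletion rules of the baseline policies reduce to simple selections over the buffer queue, as in the sketch below; RED is not shown because it drops probabilistically based on the average queue length, and remaining_ttl is an assumed per-message attribute.

```python
import random

def drop_head(queue):
    """DH: delete the packet at the head of the queue (the earliest received)."""
    return queue.pop(0)

def drop_tail(queue):
    """DT: delete the packet at the tail of the queue (the latest received)."""
    return queue.pop()

def drop_random(queue):
    """DR: delete an arbitrarily chosen packet from the buffer queue."""
    return queue.pop(random.randrange(len(queue)))

def drop_oldest(queue):
    """DOA: delete the packet with the shortest remaining lifetime."""
    victim = min(queue, key=lambda m: m.remaining_ttl)   # remaining_ttl is assumed
    queue.remove(victim)
    return victim
```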
We use the following four indicators to evaluate the six buffer management methods (RED, DH, DT, DR, DOA, and our proposed CBM) in the simulation experiments; a small sketch of how they are computed follows the list.
  • Delivery ratio. The ratio of the number of messages successfully delivered to the destination to the total number of messages generated.
  • Overhead ratio. The number of all messages and their copies in the network divided by the number of the original messages.
  • Average delay. The average delay of all messages successfully delivered to the destination node.
  • Hops. The average forwarding hops for all messages from the source node to the destination node.
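The four indicators reduce to the following ratios, computed here over hypothetical delivery logs whose field names are ours, following the definitions in the list above.

```python
def evaluate(created, delivered, relayed):
    """created: ids of generated messages; delivered: dict id -> (delay_s, hops);
    relayed: total number of transmitted messages and copies in the network.
    Assumes at least one message was created and delivered."""
    n_created, n_delivered = len(created), len(delivered)
    return {
        "delivery_ratio": n_delivered / n_created,
        "overhead_ratio": relayed / n_created,
        "average_delay": sum(d for d, _ in delivered.values()) / n_delivered,
        "average_hops": sum(h for _, h in delivered.values()) / n_delivered,
    }
```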

4.2. Simulation Results and Discussion

4.2.1. Overall Performance

During the experiments, we calculate the utility value of the message and the node according to the characteristics of each of the six management methods. For the CBM strategy, we calculate the utility value of the message and its negative correlation according to Equations (1) and (2), and the utility value of the node according to Equation (3). When nodes contact other nodes at random and exchange messages, they update their message information tables and the utility values of the nodes change.
Figure 3 shows the delivery ratio of the six buffer management strategies, where Figure 3a presents the results for KAIST. The CBM strategy clearly improves the message delivery ratio, and it has the highest delivery ratio after 2000 s. Most notably, compared with RED, CBM raises the delivery ratio by almost 15%, from 0.82 for RED to 0.97 for CBM. Figure 3b presents the results for SLAW. Similarly, the CBM strategy has the highest message delivery ratio, leading the other strategies by about 4–28% once the simulation time exceeds 1500 s. One advantage of the CBM strategy is thus that messages reach the destination node with the highest delivery ratio.
Figure 4 presents the number of hops for the six methods. CBM achieves the fewest hops compared with the other five strategies on both KAIST and SLAW. In Figure 4a, a message arriving at the destination node experiences 11 hops on average under CBM, while the other five strategies (RED, DH, DR, DT, and DOA) require 33, 29, 32, 26, and 35 hops, respectively. Furthermore, in Figure 4b, the average number of hops for messages arriving at the destination node is only 25 under the CBM strategy, less than half that of the DR strategy; the numbers of hops for the other five strategies (RED, DH, DR, DT, and DOA) are 130, 74, 156, 116, and 61, respectively. Therefore, another advantage of the CBM strategy is that messages reach the destination node through a minimum number of relay nodes.
The high delivery ratio and low hop count also validate the effectiveness of our CBM strategy, which not only combines the attributes of the message with the properties of the node, but also introduces the concept of message migration to avoid node buffer congestion.
Table 2 shows the network overhead ratio and the average message delay of the six buffer management policies, where the overhead ratio is the number of message copies per original message. The CBM policy has the lowest network overhead ratio on both KAIST and SLAW; in this way, we effectively avoid the network congestion caused by an excessive number of copies. Similarly, the average delay of the CBM strategy is superior to that of the other strategies, because under CBM the node updates its queue based on real-time information from neighbor nodes and continuously receives messages with higher utility values. We use the migration method to transfer messages with lower utility values to neighbors, instead of deleting them directly, when the node buffer is congested and the neighbor nodes have larger available buffers. Message migration lengthens the survival time of a message in the network, but it preserves the message delivery ratio and does not affect the overall performance of the CBM strategy.

4.2.2. Analysis of Message Migration

Figure 5 clearly shows that the amount of message migration increases over time under the CBM policy. In the experiment, we count the number of migrated messages n_r(q); that is, we migrate the message q with the smallest U_q in the node whenever the migration condition is satisfied. The probability of a message q being migrated or deleted is high when its U_q is small. It is important to note that this policy applies to situations where the buffer size is limited; when the buffer is large enough, the node has sufficient space to store messages and no message migrates, that is, the amount of migration is zero.

4.2.3. Impact of Buffer Size on the Delivery Ratio

Figure 6 shows the impact of buffer size on the delivery ratio. As shown in Figure 6a, when the buffer size is greater than 11 MB, the delivery ratio of the CBM strategy is higher than that of the other five strategies. Similarly, in Figure 6b the delivery ratio of the CBM strategy is higher than that of the other five strategies when the buffer size is greater than 14 MB. The message delivery ratio also improves as the node buffer increases. This is because, when the buffer size is limited, the proposed CBM can forward messages with higher priority in the queue according to the utility values of the node and the message, and transfer the messages with low utility values to neighbor nodes when buffer congestion occurs. We can therefore expect the message delivery ratio to approach 100% within the simulation time when the buffer size is unlimited, mainly because a node with a sufficiently large buffer can store, carry, and forward enough messages and will encounter the destination node after a certain period of time.
In short, we propose a comprehensive buffer management strategy that reasonably handles the messages stored between nodes and effectively solves the buffer congestion problem, avoiding the waste of network resources caused by message flooding. Compared with the other five buffer management strategies (RED, DH, DR, DT, and DOA), our strategy not only significantly improves the delivery ratio and reduces the number of forwarding hops, but also achieves the best overhead ratio and average delay. The overall performance of the CBM strategy is therefore superior to the other buffer management strategies.

5. Conclusions

In MONs, information transfer between nodes adopts a store-carry-forward paradigm because of node mobility and the intermittent connectivity of the network. Since storage space is limited, node buffer management directly affects the message delivery ratio, and rational management strategies effectively avoid excessive message copies and the waste of network resources. Existing buffer management research for opportunistic networks is fragmented. To address this issue, the proposed CBM strategy designs the utility value from the perspectives of both messages and nodes. It mainly includes a reasonable queuing strategy and a novel message migration method. The novelty of the strategy is that we proactively forward messages to neighbor nodes in order to avoid the reduction of the message delivery ratio caused by deleting messages under node buffer congestion. Compared with other classical buffer management strategies, CBM maximizes the delivery ratio and minimizes the number of forwarding hops, the overhead ratio, and the average delay.
In the future, we will further study the popularity of messages when calculating message utility values, and use node position and node centrality as criteria for the node utility value. More refined scheduling methods will be designed and evaluated. We can further verify the correctness and advantages of the CBM strategy by collecting and using more real data sets. We should also address buffer management in terms of the average delay of messages in future work.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant No. U1404602, the Young Scholar Program of Henan Province under Grant No.2015GGJS-086, the Science and Technology Foundation of Henan Province under Grant No.172102210341, the Dr. Startup Project of Henan Normal University under Grant No.qd14136, and the Young Scholar Program of Henan Normal University with No. 15018.

Author Contributions

Peiyan Yuan conceived the idea for the study; Peiyan Yuan and Hai Yu did the analyses; Hai Yu performed the experiments and wrote the paper; Peiyan Yuan revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Trifunovic, S.; Kouyoumdjieva, S.T.; Distl, B.; Pajevic, L.; Karlsson, G.; Plattner, B. A Decade of Research in Opportunistic Networks: Challenges, Relevance, and Future Directions. IEEE Commun. Mag. 2017, 55, 168–173. [Google Scholar] [CrossRef]
  2. Ngo, T.; Nishiyama, H.; Kato, N.; Kotabe, S.; Tohjo, H. A Novel Graph-Based Topology Control Cooperative Algorithm for Maximizing Throughput of Disaster Recovery Networks. In Proceedings of the 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), Nanjing, China, 15–18 May 2016; pp. 1–5. [Google Scholar]
  3. Dressler, F.; Ripperger, S.; Hierold, M.; Nowak, T. From radio telemetry to ultra-low-power sensor networks: Tracking bats in the wild. IEEE Commun. Mag. 2016, 54, 129–135. [Google Scholar] [CrossRef]
  4. Zhang, L.; Zhou, X.; Guo, J. Noncooperative Dynamic Routing with Bandwidth Constraint in Intermittently Connected Deep Space Information Networks Under Scheduled Contacts. Wirel. Pers. Commun. 2012, 68, 1255–1285. [Google Scholar] [CrossRef]
  5. Qin, J.; Zhu, H.; Zhu, Y.; Lu, L.; Xue, G.; Li, M. POST: Exploiting Dynamic Sociality for Mobile Advertising in Vehicular Networks. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 1770–1782. [Google Scholar] [CrossRef]
  6. Santos, R.; Orozco, J.; Ochoa, S.F. A real-time analysis approach in opportunistic networks. ACM SIGBED Rev. 2011, 8, 40–43. [Google Scholar] [CrossRef]
  7. Boldrini, C. Design and analysis of context-aware forwarding protocols for opportunistic networks. In Proceedings of the Second International Workshop on Mobile Opportunistic Networking, Pisa, Italy, 22–23 February 2010; pp. 201–202. [Google Scholar]
  8. Pan, D.; Ruan, Z.; Zhou, N.; Liu, X.; Song, Z. A comprehensive-integrated buffer management strategy for opportunistic networks. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 103. [Google Scholar] [CrossRef]
  9. Erramilli, V.; Crovella, M. Forwarding in opportunistic networks with resource constraints. In Proceedings of the Third ACM Workshop on Challenged Networks, San Francisco, CA, USA, 15 September 2008; pp. 41–48. [Google Scholar]
  10. Rashid, S.; Ayub, Q.; Zahid, M.S.M.; Abdullah, A.H. Impact of Mobility Models on DLA (Drop Largest) Optimized DTN Epidemic Routing Protocol. Int. J. Comput. Appl. 2011, 18, 35–39. [Google Scholar] [CrossRef]
  11. Kim, D.; Park, H.; Yeom, I. Minimizing the impact of buffer overflow in DTN. In Proceedings of the 3rd International Conference on Future Internet Technologies (CFI), Seoul, Korea, 18–20 June 2008. [Google Scholar]
  12. Sati, S.; Probst, C.; Graffi, K. Analysis of Buffer Management Policies for Opportunistic Networks. In Proceedings of the IEEE 25th International Conference on Computer Communication and Networks, Waikoloa, HI, USA, 1–4 August 2016; pp. 1–8. [Google Scholar]
  13. Krifa, A.; Baraka, C.; Spyropoulos, T. Optimal Buffer Management Policies for Delay Tolerant Networks. In Proceedings of the 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks, San Francisco, CA, USA, 16–20 June 2008; pp. 260–268. [Google Scholar]
  14. Scott, K.; Burleigh, S. Bundle Protocol Specification. Internet RFC 5050. Available online: https://rfc-editor.org/rfc/rfc5050.txt (accessed on 10 November 2017).
  15. Elwhishi, A.; Ho, P.H.; Naik, K.; Shihada, B. A Novel Message Scheduling Framework for Delay Tolerant Networks Routing. IEEE Trans. Parallel Distrib. Syst. 2013, 24, 871–880. [Google Scholar] [CrossRef]
  16. Ramanathan, R.; Hansen, R.; Basu, P.; Rosales-Hain, R.; Krishnan, R. Prioritized epidemic routing for opportunistic networks. In Proceedings of the 1st International MobiSys Workshop on Mobile Opportunistic Networking, San Juan, Puerto Rico, 11 June 2007; pp. 62–66. [Google Scholar]
  17. Balasubramanian, A.; Levine, B.N.; Venkataramani, A. DTN routing as a resource allocation problem. ACM SIGCOMM Comput. Commun. Rev. 2007, 37, 373–384. [Google Scholar] [CrossRef]
  18. Ayub, Q.; Rashid, S.; Zahid, M.S.M. Buffer Scheduling Policy for Opportunitic Networks. Int. J. Sci. Eng. Res. 2013, 2, 1–7. [Google Scholar]
  19. Rashid, S.; Ayub, Q.; Zahid, S.M.M.; Abdullah, A.H. E-DROP: An Effective Drop Buffer Management Policy for DTN Routing Protocols. Int. J. Comput. Appl. 2011, 13, 8–13. [Google Scholar] [CrossRef]
  20. Wang, E.; Yang, Y.; Wu, J. A Knapsack-Based Message Scheduling and Drop Strategy for Delay-Tolerant Networks. In Proceedings of the European Conference on Wireless Sensor Networks, Porto, Portugal, 9–11 February 2015; pp. 120–134. [Google Scholar]
  21. Wang, E.; Yang, Y.; Wu, J. A Knapsack-based buffer management strategy for delay-tolerant networks. J. Parallel Distrib. Comput. 2015, 86, 1–15. [Google Scholar] [CrossRef]
  22. Li, Y.; Qian, M.; Jin, D.; Su, L.; Zeng, L. Adaptive Optimal Buffer Management Policies for Realistic DTN. In Proceedings of the IEEE Global Telecommunications Conference, Honolulu, HI, USA, 30 November–4 December 2009; pp. 1–5. [Google Scholar]
  23. Yao, J.; Ma, C.; Yu, H.; Liu, Y.; Yuan, Q. A Utility-Based Buffer Management Policy for Improving Data Dissemination in Opportunistic Networks. China Commun. 2017, 14, 118–126. [Google Scholar] [CrossRef]
  24. Iranmanesh, S. A novel queue management policy for delay-tolerant networks. EURASIP J. Wirel. Commun. Netw. 2016, 2016, 88. [Google Scholar] [CrossRef]
  25. Seligman, M.; Fall, K.; Mundur, P. Alternative custodians for congestion control in delay tolerant networks. In Proceedings of the 2006 SIGCOMM Workshop on Challenged Networks, Pisa, Italy, 11–15 September 2006; pp. 229–236. [Google Scholar]
  26. Moetesum, M.; Hadi, F.; Imran, M.; Minhas, A.A.; Vasilakos, A.V. An adaptive and efficient buffer management scheme for resource-constrained delay tolerant networks. Wirel. Netw. 2016, 22, 2189–2201. [Google Scholar] [CrossRef]
  27. Zhang, G.; Wang, J.; Liu, Y. Congestion management in delay tolerant networks. In Proceedings of the 4th Annual International Conference on Wireless Internet, Maui, HI, USA, 17–19 November 2008; p. 65. [Google Scholar]
  28. Rhee, I.; Shin, M.; Hong, S.; Lee, K.; Kim, S.J.; Chong, S. On the Levy-Walk Nature of Human Mobility. IEEE/ACM Trans. Netw. 2011, 19, 630–643. [Google Scholar] [CrossRef]
  29. Lee, K.; Hong, S.; Kim, S.J.; Rhee, I.; Chong, S. SLAW: Self-Similar Least-Action Human Walk. IEEE/ACM Trans. Netw. 2012, 20, 515–529. [Google Scholar] [CrossRef]
  30. Niu, J.; Wang, D.; Atiquzzaman, M. Copy limited flooding over opportunistic networks. In Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 7–10 April 2013. [Google Scholar]
Figure 1. An example of node movement in Mobile Opportunistic Network.
Figure 2. The process of the Combinational Buffer Management (CBM) queuing strategy.
Figure 3. Comparison of delivery ratio.
Figure 4. Comparison of hops.
Figure 5. The number of the migration messages in CBM policy.
Figure 6. Impact of buffer size on the delivery ratio.
Table 1. Simulation parameters.

Parameter | Value
Simulation field size | 600 × 600 m²
Simulation time (KAIST/SLAW) | 15 × 10³ s / 18 × 10³ s
Number of nodes (KAIST/SLAW) | 90 / 500
Transmission range | 25 m
Node storage size | 20 MB
Message size | [0.5, 1] MB
The TTL of the message | 300 s
Table 2. Comparisons of overhead ratio and average delay.

Strategy | KAIST Overhead Ratio | KAIST Average Delay (s) | SLAW Overhead Ratio | SLAW Average Delay (s)
CBM | 1340.55 | 1101.56 | 15,305.41 | 1292.61
RED | 4009.45 | 1442.41 | 40,731.24 | 1487.67
DH | 2372.80 | 1920.67 | 26,255.48 | 1316.42
DT | 3972.88 | 1190.45 | 40,674.63 | 1494.95
DOA | 3691.13 | 1324.83 | 42,903.37 | 1467.42
DR | 4075.63 | 1691.94 | 54,101.54 | 1445.23
