Java IEEE Abstracts


On the Performance Benefits of Multihoming Route Control

Abstract

Multihoming is increasingly being employed by large enterprises and data centers to extract good
performance and reliability from their ISP connections. Multihomed end networks today can employ a
variety of route control products to optimize their Internet access performance and reliability. However,
little is known about the tangible benefits that such products can offer, the mechanisms they employ
and their trade-offs. This paper makes two important contributions. First, we present a study of the
potential improvements in Internet round-trip times (RTTs) and transfer speeds from employing
multihoming route control. Our analysis shows that multihoming to three or more ISPs and cleverly
scheduling traffic across the ISPs can improve Internet RTTs and throughputs by up to 25% and 20%,
respectively. However, a careful selection of ISPs is important to realize the performance improvements.
Second, focusing on large enterprises, we propose and evaluate a wide-range of route control
mechanisms and evaluate their design trade-offs. We implement the proposed schemes on a Linux-
based Web proxy and perform a trace-based evaluation of their performance. We show that both
passive and active measurement-based techniques are equally effective and could improve the Web
response times of enterprise networks by up to 25% on average, compared to using a single ISP. We also
outline several "best common practices" for the design of route control products.
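
As a concrete illustration of the scheduling idea, the hedged sketch below (not from the paper; the class and ISP names are hypothetical) shows the simplest passive-measurement policy: track a smoothed RTT per ISP link and open each new connection over the currently best link.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of passive measurement-based route control:
// keep an exponentially weighted moving average (EWMA) of RTT per ISP
// link and send each new connection over the currently best link.
public class RouteController {
    private final Map<String, Double> smoothedRtt = new HashMap<>();
    private static final double ALPHA = 0.125; // EWMA gain, as in TCP's SRTT

    // Called whenever a transfer over 'isp' observes a round-trip sample.
    public void recordRtt(String isp, double rttMillis) {
        smoothedRtt.merge(isp, rttMillis,
                (old, sample) -> (1 - ALPHA) * old + ALPHA * sample);
    }

    // Pick the ISP with the lowest smoothed RTT for the next connection.
    public String selectIsp() {
        return smoothedRtt.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalStateException("no ISPs measured"));
    }

    public static void main(String[] args) {
        RouteController rc = new RouteController();
        rc.recordRtt("ispA", 42.0);
        rc.recordRtt("ispB", 30.0);
        rc.recordRtt("ispC", 55.0);
        System.out.println("route next connection via " + rc.selectIsp()); // ispB
    }
}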

Optimal State Allocation for Multicast Communications With Explicit Multicast Forwarding

Abstract

In this paper, we propose a scalable and adaptive multicast forwarding mechanism based on explicit
multicast (Xcast). This mechanism optimizes the allocation of forwarding states in routers and can be
used to improve the scalability of traditional IP multicast and source-specific multicast. Compared with
previous work, our mechanism needs fewer routers in a multicast tree to store forwarding states and
therefore leads to a more balanced distribution of forwarding states among routers. We focus on two
problems and formulate each of them as an optimization problem. The first problem, referred to as
minstate, minimizes the total number of routers that store forwarding states in a multicast tree. The
second problem, referred to as balancestate, minimizes the maximum number of forwarding states
stored in a router for all multicast groups, which is proved to be an NP-hard problem. We design a
distributed algorithm that obtains the optimal solution to the first problem and propose an
approximation algorithm for the second problem. We also prove that the approach adopted by most
existing works to allocate forwarding states in the branching routers of a multicast tree is a special case
of our mechanism. The simulation results show that the forwarding state allocation provided by
previous work is concentrated on the backbone routers in the Internet, which may cause the scalability
problem. In contrast, our mechanism can balance forwarding states stored among routers and reduce
the number of routers that store the forwarding states for a multicast tree.

Dual-Link Failure Resiliency Through Backup Link Mutual Exclusion

ABSTRACT

Networks employ link protection to achieve fast recovery from link failures. While the first link failure
can be protected using link protection, there are several alternatives for protecting against the
second failure. This paper formally classifies the approaches to dual-link failure resiliency. One of the
strategies to recover from dual-link failures is to employ link protection for the two failed links
independently, which requires that two links may not use each other in their backup paths if they may
fail simultaneously. Such a requirement is referred to as backup link mutual exclusion (BLME) constraint
and the problem of identifying a backup path for every link that satisfies the above requirement is
referred to as the BLME problem. This paper develops the necessary theory to establish the sufficient
conditions for the existence of a solution to the BLME problem. Solution methodologies for the BLME
problem are developed using two approaches: 1) formulating the backup path selection as an integer
linear program; 2) developing a polynomial-time heuristic based on minimum-cost path routing. The ILP
formulation and heuristic are applied to six networks and their performance is compared with
approaches that assume precise knowledge of dual-link failure. It is observed that a solution exists for all
of the six networks considered. The heuristic approach is shown to obtain feasible solutions that are
resilient to most dual-link failures, although the backup path lengths may be significantly higher than
optimal. In addition, the paper illustrates the significance of knowing the failure location by showing
that a network with higher connectivity may require less capacity than one with lower connectivity to
recover from dual-link failures.
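
To make the BLME constraint concrete, here is a hedged sketch (the link names and the path map are invented for illustration, not taken from the paper's ILP formulation) that checks a candidate set of backup paths: two links violate the constraint when each appears on the other's backup path.

import java.util.*;

// Hypothetical checker for the backup link mutual exclusion (BLME)
// constraint: two links that may fail simultaneously must not use
// each other in their backup paths.
public class BlmeCheck {
    // backup.get(L) = ordered list of links on L's backup path
    static boolean satisfiesBlme(Map<String, List<String>> backup) {
        for (String a : backup.keySet()) {
            for (String b : backup.keySet()) {
                if (a.equals(b)) continue;
                // if a's backup uses b AND b's backup uses a, a simultaneous
                // failure of {a, b} leaves no recovery path
                if (backup.get(a).contains(b) && backup.get(b).contains(a)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, List<String>> backup = new HashMap<>();
        backup.put("e1", List.of("e3", "e4")); // e1's backup avoids e2
        backup.put("e2", List.of("e1", "e5")); // e2's backup uses e1: still OK,
                                               // because e1's backup avoids e2
        System.out.println(satisfiesBlme(backup)); // true
        backup.put("e1", List.of("e2", "e4")); // now mutual use -> violation
        System.out.println(satisfiesBlme(backup)); // false
    }
}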

Enhancing Search Performance in Unstructured P2P Networks Based on Users’ Common Interest

ABSTRACT

Peer-to-peer (P2P) networks establish loosely coupled application-level overlays on top of the Internet
to facilitate efficient sharing of resources. They can be roughly classified as either structured or
unstructured networks. Without stringent constraints over the network topology, unstructured P2P
networks can be constructed very efficiently and are therefore considered suitable to the Internet
environment. However, the random search strategies adopted by these networks usually perform
poorly with a large network size. In this paper, we seek to enhance the search performance in
unstructured P2P networks through exploiting users' common interest patterns captured within a
probability-theoretic framework termed the user interest model (UIM). A search protocol and a routing
table updating protocol are further proposed in order to expedite the search process through
self-organizing the P2P network into a small world. Both theoretical and experimental analyses are
conducted and demonstrate the effectiveness and efficiency of our approach.

Two Techniques for Fast Computation of Constrained Shortest Paths

ABSTRACT

Computing constrained shortest paths is fundamental to some important network functions such as QoS
routing, MPLS path selection, ATM circuit routing, and traffic engineering. The problem is to find the
cheapest path that satisfies certain constraints. In particular, finding the cheapest delay-constrained
path is critical for real-time data flows such as voice/video calls. Because the problem is NP-complete,
much research has focused on designing heuristic algorithms that solve the ε-approximation of the problem
with adjustable accuracy. A common approach is to discretize (i.e., scale and round) the link delay or link
cost, which transforms the original problem to a simpler one solvable in polynomial time. The efficiency
of the algorithms directly relates to the magnitude of the errors introduced during discretization. In this
paper, we propose two techniques that reduce the discretization errors, which allows faster algorithms
to be designed. Reducing the overhead of computing constrained shortest paths is practically important
for the successful design of a high-throughput QoS router, which is limited in both processing power and
memory space. Our simulations show that the new algorithms reduce the execution time by an order of
magnitude on power-law topologies with 1000 nodes. The reduction in memory space is similar.
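
For readers unfamiliar with the discretization step, the following sketch illustrates the generic scale-and-round approach such algorithms build on; the constants and the rounding direction are illustrative assumptions, not the paper's improved techniques.

// Illustrative sketch of delay discretization: a link delay d in [0, D]
// is mapped to an integer in [0, S], so a delay-constrained dynamic
// program only has to track S+1 delay levels per node. Coarser grids
// (smaller S) run faster but introduce larger rounding error.
public class Discretize {
    // Round up so the discretized path never understates true delay;
    // this keeps solutions feasible at the cost of some optimality.
    static int discretizeDelay(double d, double delayBound, int scale) {
        return (int) Math.ceil(d * scale / delayBound);
    }

    public static void main(String[] args) {
        double delayBound = 100.0; // end-to-end bound D (ms)
        int scale = 10;            // S grid levels: error per link <= D/S = 10 ms
        double[] linkDelays = {12.3, 47.0, 3.9};
        int total = 0;
        for (double d : linkDelays) {
            int level = discretizeDelay(d, delayBound, scale);
            System.out.printf("delay %.1f ms -> level %d%n", d, level);
            total += level;
        }
        // Path feasible on the grid only if total levels <= S.
        System.out.println("path feasible on grid: " + (total <= scale));
    }
}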

The Server Reassignment Problem for Load Balancing in Structured P2P Systems

Abstract
Application-layer peer-to-peer (P2P) networks are considered to be the most important development for
next-generation Internet infrastructure, and load balancing among the peers is critical for their
effectiveness. Most structured P2P systems rely on ID-space partitioning schemes to address the load
imbalance problem, and these are known to result in an imbalance factor of Θ(log N) in the zone sizes.

Two important contributions toward minimizing this imbalance are proposed in [1]. First, the
virtual-server-based load balancing problem is formulated using an optimization-based approach, and the
proposal's advantages over previous strategies are derived. Second, the effect of heterogeneity on load
balancing algorithm performance is characterized, along with the conditions under which heterogeneity may
be easy or hard to deal with, based on an extensive study of a wide spectrum of load and capacity scenarios.

Securing User-Controlled Routing Infrastructures

ABSTRACT

Designing infrastructures that give untrusted third parties (such as end-hosts) control over routing is a
promising research direction for achieving flexible and efficient communication. However, serious
concerns remain over the deployment of such infrastructures, particularly the new security
vulnerabilities they introduce. The flexible control plane of these infrastructures can be exploited to
launch many types of powerful attacks with little effort. In this paper, we make several contributions
towards studying security issues in forwarding infrastructures (FIs). We present a general model for an
FI, analyze potential security vulnerabilities, and present techniques to address these vulnerabilities. The
main technique that we introduce in this paper is the use of simple lightweight cryptographic constraints
on forwarding entries. We show that it is possible to prevent a large class of attacks on end-hosts and
bound the flooding attacks that can be launched on the infrastructure nodes to a small constant value.
Our mechanisms are general and apply to a variety of earlier proposals such as i3, DataRouter, and
Network Pointers.
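
The sketch below illustrates the flavor of such a lightweight cryptographic constraint under one simple assumption (an entry's identifier must equal a hash prefix of its forwarding value); it is an illustration of the idea, not the paper's exact construction.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Hypothetical sketch of a cryptographic constraint on forwarding
// entries: an entry mapping identifier 'id' to value 'next' is only
// accepted if id equals a prefix of SHA-256(next). An attacker can then
// no longer install arbitrary (id, next) pairs to redirect traffic or
// build forwarding loops.
public class ConstrainedEntry {
    static final int ID_BYTES = 16;

    static byte[] idFor(String next) throws Exception {
        byte[] h = MessageDigest.getInstance("SHA-256")
                .digest(next.getBytes(StandardCharsets.UTF_8));
        return Arrays.copyOf(h, ID_BYTES); // identifier = hash prefix
    }

    static boolean acceptEntry(byte[] id, String next) throws Exception {
        return Arrays.equals(id, idFor(next)); // constraint check at insert time
    }

    public static void main(String[] args) throws Exception {
        byte[] goodId = idFor("hostB");
        System.out.println(acceptEntry(goodId, "hostB")); // true
        System.out.println(acceptEntry(goodId, "hostC")); // false: rejected
    }
}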

Strategyproof Mechanisms for Scheduling Divisible Loads in Bus-Networked Distributed Systems

Abstract—The scheduling of arbitrarily divisible loads on a distributed system is studied by Divisible Load
Theory (DLT). DLT has the underlying assumption that the processors will not cheat. In the real world, this
assumption is unrealistic as the processors are owned and operated by autonomous rational organizations
that have no a priori motivation for cooperation. Consequently, they will manipulate the algorithms if it
benefits them to do so. In this work, we propose strategyproof mechanisms for scheduling divisible loads on
three types of bus-connected distributed systems. These mechanisms provide incentives to the processors to
obey the prescribed algorithms and to truthfully report their parameters, leading to an efficient load
allocation and execution.

Multicast Routing with Delay and Delay Variation Constraints for Collaborative Applications on Overlay
Networks

Abstract—Computer supported collaborative applications on overlay networks are gaining popularity
among users who are geographically dispersed. Examples of these kinds of applications include
video-conferencing, distributed database replication, and online games. This type of application requires a
multicasting subnetwork, using which messages should arrive at the destinations within a specified delay
bound. These applications also require that destinations receive the message from the source at
approximately the same time. The problem of finding a multicasting subnetwork with delay and delay-
variation bound has been proved to be an NP-complete problem in the literature, and heuristics have
been proposed for this problem. In this paper, we provide an efficient heuristic to obtain a multicast
subnetwork on an overlay network, given a source and a set of destinations that is within a specified
maximum delay and a specified maximum variation in the delays from a source to the destinations. The
time-complexity of our algorithm is O(|E| + nk log(|E|/n) + m²k), where n and |E| are the number of
nodes and edges in the network, respectively, k is the number of shortest paths determined, and m is
the number of destinations. We have shown that our algorithm is significantly better in terms of time-
complexity than existing algorithms for the same problem. Our extensive empirical studies indicate that
our heuristic uses significantly less runtime in comparison with the best-known heuristics while
achieving the tightest delay variation for a given end-to-end delay bound.

Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems

ABSTRACT

High cohesion is a desirable property of software as it positively impacts understanding, reuse, and
maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect
particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely
based on using the structural information from the source code, such as attribute references, in
methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO
software systems based on the analysis of the unstructured information embedded in the source code,
such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is
inspired by the mechanisms used to measure textual coherence in cognitive psychology and
computational linguistics. This paper presents the principles and the technology that stand behind the
C3 measure. A large case study on three open source software systems is presented which compares the
new measure with an extensive set of existing metrics and uses them to construct models that predict
software faults. The case study shows that the novel measure captures different aspects of class
cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing
structural cohesion metrics proves to be a better predictor of faulty classes when compared to different
combinations of structural cohesion metrics.
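
The sketch below conveys the underlying intuition with invented method data: represent each method as a bag of words from its identifiers and comments, and take class cohesion as the average pairwise cosine similarity. The published C3 measure derives the vectors with latent semantic indexing; raw term counts are used here only to keep the example self-contained.

import java.util.*;

// Minimal sketch of a conceptual-cohesion style measure: each method is
// a bag of words from its identifiers/comments, and class cohesion is
// the average pairwise cosine similarity between method vectors.
public class ConceptualCohesion {
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static Map<String, Integer> bag(String... words) {
        Map<String, Integer> m = new HashMap<>();
        for (String w : words) m.merge(w, 1, Integer::sum);
        return m;
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> methods = List.of(
                bag("account", "balance", "deposit"),
                bag("account", "balance", "withdraw"),
                bag("log", "format", "timestamp")); // conceptually unrelated
        double sum = 0;
        int pairs = 0;
        for (int i = 0; i < methods.size(); i++)
            for (int j = i + 1; j < methods.size(); j++) {
                sum += cosine(methods.get(i), methods.get(j));
                pairs++;
            }
        System.out.printf("class cohesion ~ %.2f%n", sum / pairs);
    }
}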

WASP: Protecting Web Applications Using Positive Tainting and Syntax-Aware Evaluation

Many software systems have evolved to include a web-based component that makes them available to
the public via the Internet and can expose them to a variety of web-based attacks. One of these attacks
is SQL injection, which can give attackers unrestricted access to the databases underlying web
applications and has become increasingly frequent and serious. This paper presents a new, highly
automated approach for protecting web applications against SQL injection that has both conceptual and
practical advantages over most existing techniques. From a conceptual standpoint, the approach is
based on the novel idea of positive tainting and on the concept of syntax-aware evaluation. From a
practical standpoint, our technique is precise and efficient and has minimal deployment requirements.
We also present an extensive empirical evaluation of our approach performed using WASP, a tool that
implements our technique. In the evaluation, we used WASP to protect a wide range of web applications
while subjecting them to a large and varied set of attacks and legitimate accesses. WASP was able to
stop all attacks and did not generate any false positives. Our studies also show that the overhead
imposed by WASP was negligible in most cases.
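
A toy sketch of the two ideas, under simplifying assumptions (trust is tracked per character and string literals are detected naively), is shown below; real WASP instruments the string library to propagate trust markings.

// Simplified sketch of positive tainting + syntax-aware evaluation: the
// application marks trusted (developer-written) query fragments, and
// before execution every character that forms SQL structure (anything
// outside a string literal) must be trusted. Untrusted input may then
// only appear inside literals, which blocks classic SQL injection.
public class PositiveTainting {
    final StringBuilder sql = new StringBuilder();
    final StringBuilder trust = new StringBuilder(); // 'T' trusted, 'U' untrusted

    PositiveTainting trusted(String s)  { sql.append(s); trust.append("T".repeat(s.length())); return this; }
    PositiveTainting fromUser(String s) { sql.append(s); trust.append("U".repeat(s.length())); return this; }

    boolean safeToExecute() {
        boolean inLiteral = false;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '\'') inLiteral = !inLiteral; // naive literal tracking
            else if (!inLiteral && trust.charAt(i) == 'U') return false;
        }
        return true;
    }

    public static void main(String[] args) {
        PositiveTainting ok = new PositiveTainting()
                .trusted("SELECT * FROM users WHERE name='").fromUser("alice").trusted("'");
        PositiveTainting attack = new PositiveTainting()
                .trusted("SELECT * FROM users WHERE name='").fromUser("x' OR '1'='1").trusted("'");
        System.out.println(ok.safeToExecute());     // true
        System.out.println(attack.safeToExecute()); // false: OR clause is untrusted structure
    }
}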

A Bidirectional Routing Abstraction for Asymmetric Mobile Ad Hoc Networks


  Efficient Routing in Intermittently Connected Mobile Networks: The Multiple-Copy Case

Intermittently connected mobile networks are wireless networks where most of the time there does not
exist a complete path from the source to the destination. There are many real networks that follow this
model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc.
In this context, conventional routing schemes fail, because they try to establish complete end-to-end
paths, before any data is sent.

To deal with such networks, researchers have suggested the use of flooding-based routing schemes. While
flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from
severe contention which can significantly degrade their performance. Furthermore, proposed efforts to
reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in
mind, we introduce a new family of routing schemes that "spray" a few message copies into the network,
and then route each copy independently towards the destination. We show that, if carefully designed,
spray routing not only performs significantly fewer transmissions per message, but also has lower
average delivery delays than existing schemes; furthermore, it is highly scalable and retains good
performance under a large range of scenarios.

Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of
spray routing. We also use this theory to show how to choose the number of copies to be sprayed and
how to optimally distribute these copies to relays.
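
As an illustration of one member of the spray family, the sketch below implements binary-spray-style copy handoff under invented names; the copy budget L and the meeting sequence are arbitrary.

// Illustrative sketch of binary "spray and wait" style forwarding: a
// node carrying n > 1 copy tokens gives floor(n/2) to each new relay it
// meets; a node left with a single token only delivers directly to the
// destination (the "wait" phase).
public class SprayAndWait {
    int copies;
    final String name;

    SprayAndWait(String name, int copies) { this.name = name; this.copies = copies; }

    // Called on meeting a relay that has no copy of the message yet.
    SprayAndWait meet(String relayName) {
        if (copies <= 1) return null;       // wait phase: no more spraying
        int given = copies / 2;
        copies -= given;
        System.out.println(name + " hands " + given + " copies to " + relayName);
        return new SprayAndWait(relayName, given);
    }

    public static void main(String[] args) {
        SprayAndWait src = new SprayAndWait("source", 8); // L = 8 copies total
        SprayAndWait a = src.meet("relayA"); // source: 4, relayA: 4
        SprayAndWait b = src.meet("relayB"); // source: 2, relayB: 2
        SprayAndWait c = a.meet("relayC");   // relayA: 2, relayC: 2
        System.out.println("source left with " + src.copies + " copies");
    }
}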

  

Efficient Resource Allocation for Wireless Multicast

In this paper, we propose a bandwidth-efficient multicast mechanism for heterogeneous wireless
networks. We reduce the bandwidth cost of an IP multicast tree by adaptively selecting the cell and the
wireless technology for each mobile host to join the multicast group. Our mechanism enables more
mobile hosts to cluster together and leads to the use of fewer cells to save the scarce wireless
bandwidth. Besides, the paths in the multicast tree connecting to the selected cells share more common
links to save the wireline bandwidth. Our mechanism supports the dynamic group membership and
offers mobility of group members. Moreover, our mechanism requires no modification on the current IP
multicast routing protocols. We formulate the selection of the cell and the wireless technology for each
mobile host in the heterogeneous wireless networks as an optimization problem. We use Integer Linear
Programming to model the problem and show that the problem is NP-hard. To solve the problem, we
propose a distributed algorithm based on Lagrangean relaxation and a network protocol based on the
algorithm. The simulation results show that our mechanism can effectively save the wireless and
wireline bandwidth as compared to the traditional IP multicast.

  A Precise Termination Condition of the Probabilistic Packet Marking Algorithm

The probabilistic packet marking (PPM in short) algorithm is a promising way to discover the Internet
map, or an attack graph, that the attack packets traversed during a distributed denial-of-service attack.
Yet, the PPM algorithm is not perfect, as its termination condition is not well-defined in the literature.
More importantly, without a proper termination condition, the attack graph constructed by the PPM
algorithm would be wrong with a very high probability. In this work, we provide a precise termination
condition for the PPM algorithm and name the new algorithm the rectified probabilistic packet marking
(RPPM in short) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm
terminates, the algorithm guarantees that the constructed attack graph is correct with a specified level
of confidence. We carry out simulations on the RPPM algorithm and show that the RPPM algorithm can
guarantee the correctness of the constructed attack graph under 1) different probabilities that a router
marks the attack packets, and 2) different structures of the network graph. The RPPM algorithm
provides an autonomous way for the original PPM algorithm to determine its termination, and it is a
promising means to enhance the reliability of the PPM algorithm.
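
The toy simulation below illustrates the marking process that PPM and RPPM share: each router on the attack path re-marks a packet with probability p, and the victim keeps collecting packets until it has observed every router. The path, probability, and naive stopping rule here are illustrative; RPPM's contribution is a principled, confidence-based version of this stopping decision.

import java.util.*;

// Toy sketch of probabilistic packet marking: each router on the path
// overwrites the packet's mark with probability p, so marks from
// routers far from the victim survive less often but all routers are
// eventually observed.
public class PpmSketch {
    public static void main(String[] args) {
        String[] path = {"r1", "r2", "r3", "r4"};
        double p = 0.2;                 // marking probability per router
        Random rnd = new Random(7);
        Set<String> observed = new HashSet<>();
        int packets = 0;
        while (observed.size() < path.length) {
            String mark = null;         // mark field carried by the packet
            for (String router : path)  // routers nearer the victim can re-mark
                if (rnd.nextDouble() < p) mark = router;
            packets++;
            if (mark != null) observed.add(mark);
        }
        System.out.println("all " + path.length + " routers seen after "
                + packets + " packets");
    }
}
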
  Computation-Efficient Multicast Key Distribution

ABSTRACT

Efficient key distribution is an important problem for secure group communications. The communication
and storage complexity of multicast key distribution problem has been studied extensively. In this paper,
we propose a new multicast key distribution scheme whose computation complexity is significantly
reduced. Instead of using conventional encryption algorithms, the scheme employs MDS codes, a class
of error control codes, to distribute multicast keys dynamically. This scheme drastically reduces the
computation load of each group member compared to existing schemes employing traditional
encryption algorithms. Such a scheme is desirable for many wireless applications where portable devices
or sensors need to reduce their computation as much as possible due to battery power limitations.
Easily combined with any key-tree-based schemes, this scheme provides much lower computation
complexity while maintaining low and balanced communication complexity and storage complexity for
secure dynamic multicast key distribution.

  Controlling IP Spoofing through Interdomain Packet Filters

ABSTRACT

The distributed denial-of-service (DDoS) attack is a serious threat to the legitimate use of the Internet.
Prevention mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses
in IP packets. By employing IP spoofing, attackers can evade detection and put a substantial burden on
the destination network for policing attack packets. In this paper, we propose an interdomain packet
filter (IDPF) architecture that can mitigate the level of IP spoofing on the Internet. A key feature of our
scheme is that it does not require global routing information. IDPFs are constructed from the
information implicit in border gateway protocol (BGP) route updates and are deployed in network
border routers. We establish the conditions under which the IDPF framework correctly works in that it
does not discard packets with valid source addresses. Based on extensive simulation studies, we show
that, even with partial deployment on the Internet, IDPFs can proactively limit the spoofing capability of
attackers. In addition, they can help localize the origin of an attack packet to a small number of
candidate networks.
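
A hedged sketch of the filtering rule follows, with an invented feasibility table: a packet claiming source AS S and arriving from neighbor N is accepted only if BGP-derived information says packets from S could feasibly arrive via N.

import java.util.*;

// Simplified sketch of an interdomain packet filter lookup: from BGP
// updates, a border router learns the set of neighbors through which
// packets from a given source AS could feasibly arrive; a packet whose
// (source AS, arriving neighbor) pair is outside that set is spoofed
// and can be dropped. The table below is illustrative.
public class IdpfSketch {
    // feasibleNeighbors.get(sourceAs) = neighbors a genuine packet may use
    static final Map<String, Set<String>> feasibleNeighbors = Map.of(
            "AS100", Set.of("AS7", "AS12"),
            "AS200", Set.of("AS12"));

    static boolean accept(String sourceAs, String arrivingNeighbor) {
        Set<String> ok = feasibleNeighbors.get(sourceAs);
        return ok != null && ok.contains(arrivingNeighbor);
    }

    public static void main(String[] args) {
        System.out.println(accept("AS100", "AS7"));  // true: feasible route
        System.out.println(accept("AS200", "AS7"));  // false: likely spoofed
    }
}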

  Online Index Recommendations for High-Dimensional Databases Using Query Workloads

High-dimensional databases pose a challenge with respect to efficient access. High-dimensional indexes
do not work because of the oft-cited "curse of dimensionality". However, users are usually interested in
querying data over a relatively small subset of the entire attribute set at a time. A potential solution is to
use lower dimensional indexes that accurately represent the user access patterns. Query response using
physical database design developed based on a static snapshot of the query workload may significantly
degrade if the query patterns change. To address these issues, we introduce a parameterizable
technique to recommend indexes based on index types frequently used for high-dimensional data sets
and to dynamically adjust indexes as the underlying query workload changes. We incorporate a query
pattern change detection mechanism to determine when the access patterns have changed enough to
warrant a change in the physical database design. By adjusting analysis parameters, we trade off analysis
speed against analysis resolution. We perform experiments with a number of data sets, query sets, and
parameters to show the effect that varying these characteristics has on analysis results.

  Ranked Reverse Nearest Neighbor Search

ABSTRACT

Given a set of data points P and a query point q in a multidimensional space, a reverse nearest neighbor
(RNN) query finds data points in P whose nearest neighbors are q. A reverse k-nearest neighbor (RkNN)
query (where k ≥ 1) generalizes the RNN query to find data points whose kNNs include q. Under RkNN query
semantics, q is said to have influence on all those answer data points. The degree of q's influence on a
data point p (∈ P) is denoted by κ_p, where q is the κ_p-th NN of p. We introduce a new variant
of the RNN query, namely, the ranked reverse nearest neighbor (RRNN) query, that retrieves the t data
points most influenced by q, i.e., the t data points having the smallest κ's with respect to q. To answer this
RRNN query efficiently, we propose two novel algorithms, κ-counting and κ-browsing, that are
applicable to both monochromatic and bichromatic scenarios and are able to deliver results
progressively. Through an extensive performance evaluation, we validate that the two proposed RRNN
algorithms are superior to solutions derived from algorithms designed for the RkNN query.
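
The brute-force sketch below (invented points; not the paper's κ-counting or κ-browsing algorithms, which avoid the linear scan) shows what κ means operationally and how an RRNN result is ranked.

import java.util.*;

// Brute-force sketch of the kappa ranking behind RRNN queries:
// kappa_p = 1 + number of points of P closer to p than q, i.e. the rank
// of q among p's nearest neighbors. The query returns the t points with
// the smallest kappa.
public class RrnnSketch {
    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        double[][] points = {{0, 0}, {1, 0}, {5, 5}, {2, 1}};
        double[] q = {1, 1};
        int t = 2;
        // compute kappa for every p in P
        double[][] kappa = new double[points.length][]; // {index, kappa}
        for (int i = 0; i < points.length; i++) {
            int closer = 0;
            for (int j = 0; j < points.length; j++)
                if (j != i && dist(points[j], points[i]) < dist(q, points[i]))
                    closer++;
            kappa[i] = new double[]{i, closer + 1};
        }
        Arrays.sort(kappa, Comparator.comparingDouble((double[] r) -> r[1]));
        for (int r = 0; r < t; r++)
            System.out.printf("point %d with kappa=%d%n",
                    (int) kappa[r][0], (int) kappa[r][1]);
    }
}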

  An Efficient Clustering Scheme to Exploit Hierarchical Data in Network Traffic Analysis

There is significant interest in the data mining and network management communities about the need
to improve existing techniques for clustering multi-variate network traffic flow records so that we can
quickly infer underlying traffic patterns. In this paper we investigate the use of clustering techniques to
identify interesting traffic patterns from network traffic data in an efficient manner. We develop a
framework to deal with mixed type attributes including numerical, categorical and hierarchical attributes
for a one-pass hierarchical clustering algorithm. We demonstrate the improved accuracy and efficiency
of our approach in comparison to previous work on clustering network traffic.

  Probabilistic Group Nearest Neighbor Queries in Uncertain Databases

The importance of query processing over uncertain data has recently arisen due to its wide usage in
many real-world applications. In the context of uncertain databases, previous work has studied many
query types such as nearest neighbor query, range query, top-k query, skyline query, and similarity
join. In this paper, we focus on another important query, namely probabilistic group nearest neighbor
query (PGNN), in the uncertain database, which also has many applications. Specifically, given a set, Q,
of query points, a PGNN query retrieves data objects that minimize the aggregate distance (e.g. sum,
min, and max) to query set Q. Due to the inherent uncertainty of data objects, previous techniques to
answer group nearest neighbor query (GNN) cannot be directly applied to our PGNN problem.
Motivated by this, we propose effective pruning methods, namely spatial pruning and probabilistic
pruning, to reduce the PGNN search space, which can be seamlessly integrated into our PGNN query
procedure. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed
approach, in terms of the wall clock time and the speed-up ratio against linear scan.

  Dynamic Load Balancing

ABSTRACT

In a Web cache cluster, because task resource demand characteristics shift and task resource demand
information is hard to obtain, existing dynamic load balancing strategies are inefficient or even
inapplicable. To solve this problem, a novel self-adaptive dynamic load balancing strategy dedicated to
Web cache clusters is presented. The load model of the self-adaptive dynamic load balancing strategy can
adapt itself dynamically according to changes in task resource demand characteristics and reflect the
system load state precisely. Compared with other load balancing strategies in Web cache clusters, the
self-adaptive dynamic load balancing strategy shows outstanding performance in experiments.

  

A New TCP for Persistent Packet Reordering

ABSTRACT

Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we
propose a new version of TCP that maintains high throughput when reordering occurs and yet, when
packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, or TCP-
PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained
to keep track of how long ago a packet was transmitted. In case the corresponding acknowledgment has
not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the
packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering
(including out-of-order acknowledgments) has no effect on TCP-PR's performance. Through extensive
simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to
make TCP more robust to packet reordering. In the case that packets are not reordered, we verify that
TCP-PR maintains the same throughput as typical implementations of TCP (specifically, TCP-SACK) and
shares network resources fairly. Furthermore, TCP-PR only requires changes to the TCP sender side
making it easier to deploy.
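
A minimal sketch of the timer-based rule follows; the constants (β = 2, EWMA gains) and the structure are illustrative assumptions rather than TCP-PR's exact parameters.

import java.util.*;

// Minimal sketch of TCP-PR style loss detection: the sender remembers
// when each segment left, and a segment is declared lost only when no
// ACK has arrived within beta * srtt. Duplicate ACKs are simply never
// consulted, so reordering cannot trigger a spurious retransmission.
public class TcpPrSketch {
    static final double BETA = 2.0;       // loss threshold multiplier
    double srtt = 100.0;                  // smoothed RTT estimate (ms)
    final Map<Integer, Double> sendTime = new HashMap<>(); // seq -> time sent

    void onSend(int seq, double now) { sendTime.put(seq, now); }

    void onAck(int seq, double now) {
        Double sent = sendTime.remove(seq);
        if (sent != null)                 // update the RTT estimate as usual
            srtt = 0.875 * srtt + 0.125 * (now - sent);
    }

    // Called periodically; returns segments presumed lost.
    List<Integer> checkLosses(double now) {
        List<Integer> lost = new ArrayList<>();
        for (Map.Entry<Integer, Double> e : sendTime.entrySet())
            if (now - e.getValue() > BETA * srtt) lost.add(e.getKey());
        return lost;
    }

    public static void main(String[] args) {
        TcpPrSketch tcp = new TcpPrSketch();
        tcp.onSend(1, 0); tcp.onSend(2, 0);
        tcp.onAck(2, 120);                        // seq 2 ACKed out of order: fine
        System.out.println(tcp.checkLosses(150)); // [] - timer not expired
        System.out.println(tcp.checkLosses(300)); // [1] - presumed lost
    }
}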

  Randomized Protocols for Duplicate Elimination in Peer-to-Peer Storage Systems

Distributed peer-to-peer systems rely on voluntary participation of peers to effectively manage a storage
pool. In such systems, data is generally replicated for performance and availability. If the storage
associated with replication is not monitored and provisioned, the underlying benefits may not be
realized. Resource constraints, performance scalability, and availability present diverse considerations.
Availability and performance scalability, in terms of response time, are improved by aggressive
replication, whereas resource constraints limit total storage in the network. Identification and
elimination of redundant data pose fundamental problems for such systems. In this paper, we present a
novel and efficient solution that addresses availability and scalability with respect to management of
redundant data. Specifically, we address the problem of duplicate elimination in the context of systems
connected over an unstructured peer-to-peer network in which there is no a priori binding between an
object and its location. We propose two randomized protocols to solve this problem in a scalable and
decentralized fashion that does not compromise the availability requirements of the application.
Performance results using both large-scale simulations and a prototype built on PlanetLab demonstrate
that our protocols provide high probabilistic guarantees while incurring minimal administrative
overheads.


  Efficient Approximate Query Processing in Peer-to-Peer Networks

ABSTRACT

Peer-to-peer (P2P) databases are becoming prevalent on the Internet for distribution and sharing of
documents, applications, and other digital media. The problem of answering large-scale ad hoc analysis
queries, for example, aggregation queries, on these databases poses unique challenges. Exact solutions
can be time consuming and difficult to implement, given the distributed and dynamic nature of P2P
databases. In this paper, we present novel sampling-based techniques for approximate answering of ad
hoc aggregation queries in such databases. Computing a high-quality random sample of the database
efficiently in the P2P environment is complicated by several factors: the data is distributed (usually
in uneven quantities) across many peers; within each peer, the data is often highly correlated; and,
moreover, even collecting a random sample of the peers is difficult to accomplish. To counter these
problems, we have developed an adaptive two-phase sampling approach based on random walks of the
P2P graph, as well as block-level sampling techniques. We present extensive experimental evaluations to
demonstrate the feasibility of our proposed solution.

  Voice-to-Phoneme Conversion Algorithms for Speaker-Independent Voice-Tag Applications in Embedded
Platforms

ABSTRACT

In this paper we present two voice-to-phoneme conversion algorithms that extract voice-tag
abstractions for speaker-independent voice-tag applications in embedded platforms, which are very
sensitive to memory and CPU consumption. In the first approach, a voice-to-phoneme conversion in
batch mode manages this task by preserving the commonality of input feature vectors of multiple voice-
tag example utterances. Given multiple example utterances, a developed feature combination strategy
produces an "average" utterance, which is converted to phonetic strings as a voice-tag representation
via a speaker-independent phonetic decoder. In the second approach, a sequential voice-to-phoneme
conversion algorithm uncovers the hierarchy of phonetic consensus embedded among multiple phonetic
hypotheses generated by a speaker-independent phonetic decoder from multiple example utterances of
a voice-tag. The most relevant phonetic hypotheses are then chosen to represent the voice-tag. The
voice-tag representations obtained by these two voice-to-phoneme conversion algorithms are
compared in speech recognition experiments to phonetic transcriptions of voice-tag reference prepared
by an expert phonetician. Both algorithms either perform comparably to or significantly better than the
manual transcription approach. We conclude from this that both algorithms are very effective for the
targeted purposes.

  A Fully Distributed Proactively Secure Threshold-Multisignature Scheme

Threshold-multisignature schemes combine the properties of threshold group-oriented signature
schemes and multisignature schemes to yield a signature scheme that allows a threshold (t) or more
group members to collaboratively sign an arbitrary message. In contrast to threshold group signatures,
the individual signers do not remain anonymous, but are publicly identifiable from the information
contained in the valid threshold-multisignature. The main objective of this paper is to propose such a
secure and efficient threshold-multisignature scheme. The paper uniquely defines the fundamental
properties of threshold-multisignature schemes and shows that the proposed scheme satisfies these
properties and eliminates the latest attacks to which other similar schemes are subject. The efficiency of
the proposed scheme is analyzed and shown to be superior to its counterparts. The paper also proposes
a discrete logarithm based distributed-key management infrastructure (DKMI), which consists of a round
optimal, publicly verifiable, distributed-key generation (DKG) protocol and a one round, publicly
verifiable, distributed-key redistribution/updating (DKRU) protocol. The round optimal DKRU protocol
solves a major problem with existing secret redistribution/updating schemes by giving group members a
mechanism to identify malicious or faulty share holders in the first round, thus avoiding multiple
protocol executions.

  Coupling-Based Metrics

The main aim of this project is to introduce a new set of metrics that measure the quality of
modularization of an object-oriented software system. These metrics characterize the software from a
variety of perspectives such as structural, architectural, and notions like similarity of purpose. Structural
refers to inter-module coupling-based notions. Architectural refers to the horizontal layering of modules
in large software systems. The notion of API (Application Programming Interface) is employed as the
basis for the structural metrics. Some of the important support metrics include those that characterize
each module on the basis of the similarity of purpose of the services offered by the module. Here,
coupling-based structural metrics are used that provide various measures of the function-call traffic
through the APIs of the modules in relation to the overall function-call traffic, where function-call
traffic refers to inter-modular interaction. The existing system measures software quality using code
complexity and maintainability; its performance analysis takes more time and is less accurate. These
drawbacks are eliminated by the coupling-based structural metrics.
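
The sketch below illustrates one metric of this flavor with an invented call log: the fraction of inbound inter-module calls that enter a module through its declared API.

import java.util.*;

// Illustrative sketch of one coupling-based structural metric: the
// fraction of inter-module calls into a module that go through its
// declared API functions. A ratio near 1 means other modules respect
// the API; lower values expose API leakage. The call log and module
// layout are made up for the example.
public class ApiCouplingMetric {
    record Call(String fromModule, String toModule, String toFunction) {}

    static double apiUsageRatio(String module, Set<String> apiFunctions,
                                List<Call> calls) {
        long incoming = 0, viaApi = 0;
        for (Call c : calls) {
            if (!c.toModule().equals(module) || c.fromModule().equals(module))
                continue;                       // only inter-module, inbound calls
            incoming++;
            if (apiFunctions.contains(c.toFunction())) viaApi++;
        }
        return incoming == 0 ? 1.0 : (double) viaApi / incoming;
    }

    public static void main(String[] args) {
        List<Call> calls = List.of(
                new Call("ui", "storage", "open"),      // API call
                new Call("ui", "storage", "flushPage"), // internal: bypasses API
                new Call("net", "storage", "read"));    // API call
        System.out.printf("API usage ratio: %.2f%n",
                apiUsageRatio("storage", Set.of("open", "read", "write"), calls));
        // 2 of 3 inbound calls use the API -> 0.67
    }
}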

  Face Recognition for Smart Interaction

ABSTRACT

In this paper an overview of face recognition research activities at the interACT Research Center is given.
The face recognition efforts at the interACT Research Center consist of the development of a fast and
robust face recognition algorithm and fully automatic face recognition systems that can be deployed for
real-life smart interaction applications. The face recognition algorithm is based on appearances of local
facial regions that are represented with discrete cosine transform coefficients. Three fully
automatic face recognition systems have been developed that are based on this algorithm. The first one
is the "door monitoring system" that observes the entrance of a room and identifies the subjects while
they are entering the room. The second one is the "portable face recognition system" that aims at
environment-free face recognition and recognizes the user of a machine. The third system, the "3D face
recognition system", performs fully automatic face recognition on 3D range data.

  OCGRR: A New Scheduling Algorithm for Differentiated Services Networks

We propose a new fair scheduling technique, called OCGRR (Output Controlled Grant-based Round
Robin), for the support of DiffServ traffic in a core router. We define a stream to be the same-class
packets from a given immediate upstream router destined to an output port of the core router. At each
output port, streams may be isolated in separate buffers before being scheduled in a frame. The
sequence of traffic transmission in a frame starts from higher-priority traffic and goes down to lower-
priority traffic. A frame may have a number of small rounds for each class. Each stream within a class can
transmit a number of packets in the frame based on its available grant, but only one packet per small
round, thus reducing the intertransmission time from the same stream and achieving a smaller jitter and
startup latency. The grant can be adjusted in a way to prevent the starvation of lower priority classes.
We also verify and demonstrate the good performance of our scheduler by simulation and comparison
with other algorithms in terms of queuing delay, jitter, and start-up latency.
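
The toy scheduler below illustrates the small-round idea for a single class, with invented grants and packet sizes: each backlogged stream sends at most one packet per small round while its grant lasts, so transmissions from different streams interleave.

import java.util.*;

// Toy sketch of OCGRR-style small rounds within one class: each stream
// has a grant (bytes it may send this frame); in every small round a
// backlogged stream transmits at most one packet, which interleaves
// streams and keeps per-stream inter-transmission gaps small.
public class SmallRoundScheduler {
    static class Stream {
        final String name; final Deque<Integer> queue; int grant;
        Stream(String name, int grant, Integer... pktSizes) {
            this.name = name; this.grant = grant;
            this.queue = new ArrayDeque<>(List.of(pktSizes));
        }
    }

    public static void main(String[] args) {
        List<Stream> streams = List.of(
                new Stream("s1", 300, 100, 100, 100),
                new Stream("s2", 150, 100, 100));
        boolean sentSomething = true;
        for (int round = 1; sentSomething; round++) {
            sentSomething = false;
            for (Stream s : streams) {
                if (s.queue.isEmpty() || s.queue.peek() > s.grant) continue;
                int size = s.queue.poll();     // one packet per small round
                s.grant -= size;
                System.out.println("round " + round + ": " + s.name
                        + " sends " + size + " bytes");
                sentSomething = true;
            }
        }
    }
}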

  Provably Secure Three-Party Authenticated Quantum Key Distribution Protocols

This work presents quantum key distribution protocols (QKDPs) to safeguard security in large networks,
ushering in new directions in classical cryptography and quantum cryptography. Two three-party QKDPs,
one with implicit user authentication and the other with explicit mutual authentication, are proposed to
demonstrate the merits of the new combination, which include the following: 1) security against such
attacks as man-in-the-middle, eavesdropping and replay, 2) efficiency is improved as the proposed
protocols contain the fewest number of communication rounds among existing QKDPs, and 3) two
parties can share and use a long-term secret (repeatedly). To prove the security of the proposed
schemes, this work also presents a new primitive called the Unbiased-Chosen Basis (UCB) assumption.

  Secure Sockets Layer (SSL) is the Predominant

State-of-the-art cluster-based data centers consisting of three tiers (Web server, application server, and
database server) are being used to host complex Web services such as e-commerce applications. The
application server handles dynamic and sensitive Web contents that need protection from
eavesdropping, tampering, and forgery. Although the Secure Sockets Layer (SSL) is the most popular
protocol to provide a secure channel between a client and a cluster-based network server, its high
overhead degrades the server performance considerably and, thus, affects the server scalability.
Therefore, improving the performance of SSL-enabled network servers is critical for designing scalable
and high-performance data centers. In this paper, we examine the impact of SSL offering and SSL-
session-aware distribution in cluster-based network servers. We propose a back-end forwarding
scheme, called ssl_with_bf, that employs a low-overhead user-level communication mechanism like
Virtual Interface Architecture (VIA) to achieve a good load balance among server nodes. We compare
three distribution models for network servers, Round Robin (RR), ssl_with_session, and ssl_with_bf,
through simulation. The experimental results with 16-node and 32-node cluster configurations show
that, although the session reuse of ssl_with_session is critical to improve the performance of application
servers, the proposed back-end forwarding scheme can further enhance the performance due to better
load balancing. The ssl_with_bf scheme can minimize the average latency by about 40 percent and
improve throughput across a variety of workloads.

  Malicious Packet Losses

In this paper, we consider the problem of detecting whether a compromised router is maliciously
manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in
which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging
to attribute a missing packet to a malicious action because normal network congestion can produce the
same effect. Modern networks routinely drop packets when the load temporarily exceeds their buffering
capacities. Previous detection protocols have tried to address this problem with a user-defined
threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally
unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or
mask highly focused attacks. We have designed, developed, and implemented a compromised router
detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number
of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent
packet losses can be attributed to malicious actions. We have tested our protocol in Emulab and have
studied its effectiveness in differentiating attacks from legitimate network behavior.

 Distributional Features for Text Categorization

Text categorization is the task of assigning predefined categories to natural language text. With the
widely used 'bag of words' representation, previous research usually assigns a word values indicating
whether the word appears in the document concerned or how frequently it appears. Although these
values are useful for text categorization, they do not fully express the abundant information contained
in the document. This paper explores the effect of other types of values, which express the distribution
of a word in the document. These novel values assigned to a word are called distributional features,
which include the compactness of the appearances of the word and the position of the first appearance
of the word. The proposed distributional features are exploited by a tf-idf style equation, and different
features are combined using ensemble learning techniques. Experiments show that the distributional
features are useful for text categorization. In contrast to using the traditional term frequency values
solely, including the distributional features requires only a little additional cost, while the categorization
performance can be significantly improved. Further analysis shows that the distributional features are
especially useful when documents are long and the writing style is casual.
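
The sketch below computes the two features named above from token positions in an invented document; the paper's exact formulas may differ, this only shows the kind of signal involved.

import java.util.*;

// Sketch of two distributional features computed from token positions:
// the (normalized) position of a word's first appearance, and a simple
// compactness value - the spread between its first and last occurrence.
public class DistributionalFeatures {
    public static void main(String[] args) {
        String[] doc = ("the model uses features the features are simple "
                + "and the features work").split(" ");
        String word = "features";
        List<Integer> positions = new ArrayList<>();
        for (int i = 0; i < doc.length; i++)
            if (doc[i].equals(word)) positions.add(i);

        double firstAppearance = positions.get(0) / (double) doc.length;
        double compactness = (positions.get(positions.size() - 1)
                - positions.get(0)) / (double) doc.length;
        System.out.printf("first appearance: %.2f, compactness: %.2f%n",
                firstAppearance, compactness);
        // a word appearing early and throughout the document is a better
        // category indicator than one clustered in a single passage
    }
}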

  Progressive Parametric Query Optimization

Commercial applications usually rely on pre-compiled parameterized procedures to interact with a
database. Unfortunately, executing a procedure with a set of parameters different from those used at
compilation time may be arbitrarily sub-optimal. Parametric query optimization (PQO) attempts to solve
this problem by exhaustively determining the optimal plans at each point of the parameter space at
compile time. However, PQO is likely not cost-effective if the query is executed infrequently or if it is
executed with values only within a subset of the parameter space. In this paper we propose instead to
progressively explore the parameter space and build a parametric plan during several executions of the
same query. We introduce algorithms that, as parametric plans are populated, are able to frequently
bypass the optimizer but still execute optimal or near-optimal plans.

  GLIP: A Concurrency Control Protocol for Clipping Indexing

Multidimensional databases are beginning to be used in a wide range of applications. To meet this fast-
growing demand, the R-tree family is being applied to support fast access to multidimensional data, for
which the R+-tree exhibits outstanding search performance. In order to support efficient concurrent
access in multiuser environments, concurrency control mechanisms for multidimensional indexing have
been proposed. However, these mechanisms cannot be directly applied to the R+-tree because an
object in the R+-tree may be indexed in multiple leaves. This paper proposes a concurrency control
protocol for R-tree variants with object clipping, namely, Granular Locking for clipping indexing (GLIP).
GLIP is the first concurrency control approach specifically designed for the R+-tree and its variants, and it
supports efficient concurrent operations with serializable isolation, consistency, and deadlock freedom.
Experimental tests on both real and synthetic data sets validated the effectiveness and efficiency of the
proposed concurrent access framework.

  Flexible Rollback Recovery in Dynamic Heterogeneous Grid Computing

Large applications executing on Grid or cluster architectures consisting of hundreds or thousands of
computational nodes create problems with respect to reliability. The sources of the problems are node
failures and the need for dynamic configuration over extensive run-time. This paper presents two fault-
tolerance mechanisms called Theft Induced Checkpointing and Systematic Event Logging. These are
transparent protocols capable of overcoming problems associated with both benign faults, i.e., crash
faults, and node or subnet volatility. Specifically, the protocols base the state of the execution on a
dataflow graph, allowing for efficient recovery in dynamic heterogeneous systems as well as multi-
threaded applications. By allowing recovery even under different numbers of processors, the
approaches are especially suitable for applications with need for adaptive or reactionary configuration
control. The low-cost protocols offer the capability of controlling or bounding the overhead. A formal
cost model is presented, followed by an experimental evaluation. It is shown that the overhead of the
protocol is very small and the maximum work lost by a crashed process is small and bounded.

  SAR Image Regularization With Fast Approximate Discrete Minimization

Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise.
The presence of this noise makes the automatic interpretation of images a challenging task and noise
reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous
approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization
provides a convenient way to express both data fidelity constraints and desirable properties of the
filtered image. In this context, total variation minimization has been extensively used to constrain the
oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed
distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-
likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on
weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not
achievable on large images required by remote sensing applications. The computational burden of the
state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially
when considering joint regularization of several images. We show that a satisfying solution can be
reached, in few iterations, by performing a graph-cut-based combinatorial exploration of large trial
moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in
urban area SAR images.

  pDCS: Security and Privacy Support for Data-Centric Sensor Networks

The demand for efficient data dissemination/access techniques to find the relevant data from within a
sensor network has led to the development of data-centric sensor networks (DCS), where the sensor
data, in contrast to the sensor nodes, are named based on attributes such as event type or geographic
location. However, saving data inside a network also creates security problems due to the lack of
tamper-resistance of the sensor nodes and the unattended nature of the sensor network. For example,
an attacker may simply locate and compromise the node storing the event of his interest.

To address these security problems, we present pDCS, a privacy-enhanced DCS network which offers
different levels of data privacy based on different cryptographic keys. In addition, we propose several
query optimization techniques based on the Euclidean Steiner Tree and Keyed Bloom Filter to minimize the
query overhead while providing certain query privacy. Finally, detailed analysis and simulations show
that the Keyed Bloom Filter scheme can significantly reduce the message overhead with the same level
of query delay and maintain a very high level of query privacy.

  Team Multicasting in Mobile Ad Hoc Networks

In this paper, a novel multicast routing protocol, namely the Hypercube-based Team Multicast Routing
Protocol (HTMRP), is proposed to address scalability in mobile ad hoc networks. In HTMRP, team
multicasting is proposed, where the multicast group consists not of individuals but of member teams.
This mechanism is common in ad hoc networks for accomplishing collective tasks such as emergency
recovery and battlefield operations, where a team affinity model exists when the member teams have a
common interest. In MANETs, link failures due to mobility are a major concern and are addressed in
HTMRP by incorporating a logical hypercube model. HTMRP also has a mesh layer on top of the
hypercube for effective fault tolerance. In addition to scalability, HTMRP also guarantees the new QoS
requirements, namely high availability and good load balancing, by incorporating team, hypercube, and
mesh tiers. HTMRP has been simulated and extensively analyzed for scalability, delivery ratio, and
control overhead. The results show that HTMRP provides better performance for the above evaluation
parameters than existing multicast routing protocols.

  Image Transmissions with Security Enhancement Based on Region and Path Diversity in Wireless
Sensor Networks

Transmissions of large sized images can be a bottleneck for a wireless sensor network (WSN) due to its
limited resources. Security can be another concern. This paper proposes a collaborative transmission
scheme for image sensors to utilize inter-sensor correlations to decide the transmission and security
sharing patterns based on the path diversities. Our proposed approach for secret image sharing on
multiple node-disjoint paths for image delivery is to achieve high security without any key distribution
and management, and thus the key management related problems do not exist. The energy efficiency is
another major contribution made in this paper. This scheme not only allows each image sensor to
transmit optimal fractions of overlapped images through appropriate transmission paths in an energy-
efficient way, but also provides unequal protection to overlapped image regions by path selections and
adaptive bit error rate (BER) requirement. The simulation results show that the proposed scheme can
achieve considerable gains in terms of network lifetime extension, image transmission security
enhancement, image quality improvement, and energy efficiency for wireless sensor networks.


  Lord of the Links: A Framework for Discovering Missing Links in the Internet Topology

The topology of the Internet at the autonomous system (AS) level is not yet fully discovered despite
significant research activity. The community still does not know how many links are missing, where
these links are and finally, whether the missing links will change our conceptual model of the Internet
topology. An accurate and complete model of the topology would be important for protocol design,
performance evaluation and analyses. The goal of our work is to develop methodologies and tools to
identify and validate such missing links between ASes. In this work, we develop several methods and
identify a significant number of missing links, particularly of the peer-to-peer type. Interestingly, most of
the missing AS links that we find exist as peer-to-peer links at Internet exchange points (IXPs). In more
detail, we first provide a large-scale, comprehensive synthesis of the available sources of
information: we cross-validate and compare BGP routing tables, Internet routing registries, and
traceroute data, while extracting significant new information from the less-studied IXPs. We
identify 40% more edges and approximately 300% more peer-to-peer edges compared
to commonly used data sets. All of these edges have been verified by either BGP tables or traceroute.
Second, we identify properties of the new edges and quantify their effects on important topological
properties. Given the new peer-to-peer edges, we find that for some ASes more than 50% of their paths
stop going through their ISPs assuming policy-aware routing. A surprising observation is that the degree
of an AS may be a poor indicator of which ASes it will peer with.
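
The cross-validation step can be pictured as straightforward set arithmetic over undirected AS adjacencies. The sketch below merges edge sets and counts edges contributed only by a new source; the ASNs are illustrative placeholders, and the real pipeline would parse BGP tables, registry dumps, and traceroute output to populate the sets.

```java
import java.util.*;

// Sketch of the cross-validation step: merge AS-level edges seen in BGP
// tables, routing registries, and traceroute, then count how many extra
// edges a new source (e.g., IXP-derived data) contributes. Edges are
// undirected, encoded as "lowASN-highASN" strings for set membership.
public class EdgeMerge {
    static String edge(int as1, int as2) {
        return Math.min(as1, as2) + "-" + Math.max(as1, as2);
    }

    public static void main(String[] args) {
        Set<String> known = new HashSet<>(List.of(edge(7018, 3356), edge(3356, 1299)));
        Set<String> ixpDerived = new HashSet<>(List.of(edge(3356, 1299), edge(6939, 13335)));

        Set<String> newEdges = new HashSet<>(ixpDerived);
        newEdges.removeAll(known);                       // edges missing from the known sources
        System.out.printf("new edges: %d (%.0f%% more)%n",
                newEdges.size(), 100.0 * newEdges.size() / known.size());
    }
}
```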

  Efficient and Robust Local Mutual Exclusion in Mobile Ad Hoc Networks

In a mobile ad hoc network, nodes that are geographically close may need to compete for exclusive
access to a shared resource. This paper proposes an abstraction of this problem, called local mutual
exclusion; it is an extension to mobile networks of the dining philosophers problem, which has been well
studied in static networks. The desirable feature of an algorithm for this problem is having response
time and failure locality independent of the total number of nodes, thus providing a scalable and robust
solution. The paper presents two algorithms, exhibiting trade-offs between simplicity, failure locality and
response time. The first algorithm has two variations, one of which has response time that depends very
weakly on the number of nodes in the entire system and is polynomial in the maximum number of
neighboring nodes; the failure locality, although not optimal, is small and grows very slowly with system
size. The second algorithm has optimal failure locality and response time that is quadratic in the number
of nodes. A pleasing aspect of the latter algorithm is that when nodes do not move, it has linear
response time, improving on previous results for static algorithms with optimal failure locality.

  Energy-Aware Tag Anticollision Protocols for RFID Systems

Energy consumption of mobile readers is becoming an important issue as applications of RFID systems
pervade different aspects of our lives. Surprisingly, however, these systems are not energy-aware; the
focus to date has been on reducing the time taken by the reader to read all tags. The problem of tag
arbitration in RFID systems is considered with the aim of trading off time for energy savings at the
reader. The approach of using multiple time slots per node of a binary search tree is explored through
three anti-collision protocols that aim to reduce the number of colliding responses from tags. This
results in fewer reader queries and tag responses and, hence, energy savings at both the reader and tags
(if they are active tags). An analytical framework is developed to predict the performance of our
protocols, with the numerical evaluation of this framework validated through simulation. It is shown
that all three protocols provide significant energy savings when compared to the existing query tree
protocol while sharing the deterministic and memoryless properties of the latter.
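
For reference, the baseline query tree protocol that these energy-aware variants extend can be simulated in a few lines: the reader broadcasts a binary ID prefix, every tag whose ID matches responds, and a collision makes the reader recurse on both one-bit extensions of the prefix. This is a hedged sketch of the baseline only; the paper's protocols add multiple time slots per tree node to reduce collisions.

```java
import java.util.*;

// Baseline query tree arbitration: the reader queries a binary prefix and
// every tag whose ID starts with it responds. 0 replies = prune the branch,
// 1 reply = tag identified, >1 replies = collision, so the reader pushes
// the two one-bit extensions of the prefix.
public class QueryTree {
    public static List<String> readAll(Set<String> tagIds) {
        List<String> identified = new ArrayList<>();
        Deque<String> queries = new ArrayDeque<>(List.of(""));
        while (!queries.isEmpty()) {
            String prefix = queries.pop();
            List<String> replies = tagIds.stream()
                    .filter(id -> id.startsWith(prefix)).toList();
            if (replies.size() == 1) identified.add(replies.get(0));
            else if (replies.size() > 1) {               // collision: split the prefix
                queries.push(prefix + "0");
                queries.push(prefix + "1");
            }
        }
        return identified;
    }

    public static void main(String[] args) {
        System.out.println(readAll(Set.of("0011", "0101", "1100")));
    }
}
```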

  A Cooperative Diversity Based Handoff Management Scheme

Cooperative diversity has emerged as a promising technique to facilitate fast handoff mechanisms in
mobile ad-hoc environments. The key concept behind a prominent cooperative diversity based protocol,
namely, Partner-based Hierarchical Mobile IPv6 (PHMIPv6), is to enable mobile nodes to anticipate
handover events by selecting suitable partners to communicate on their behalf with Mobility Anchor
Points (MAPs). In the original design of
PHMIPv6, mobile hosts choose partners based on their signal strength. Such a naive selection procedure
may lead to scenarios where mobile hosts lose communication with the selected partners before the
completion of the handoff operations. In addition, PHMIPv6 overlooks security considerations, which
can easily lead to vulnerable mobile hosts and/or partner entities. As a solution to these two
shortcomings of PHMIPv6, this paper first proposes an extended version of PHMIPv6 called Connection
Stability Aware PHMIPv6 (CSA-PHMIPv6). In CSA-PHMIPv6, mobile hosts select partners with whom
communication can last for a sufficiently long time by employing the Link Expiration Time (LET)
parameter. To tackle the security issues, the simple yet effective use of two distinct authentication keys
is envisioned. Furthermore, to shorten the communication time between mobile hosts and their
corresponding partners, a second handoff management approach called Partner Less Dependable
PHMIPv6 (PLD-PHMIPv6) is proposed.
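
The LET parameter is typically computed with the standard mobility-prediction formula (due to Su and Gerla): given two nodes' positions, speeds, and headings, plus a common radio range r, it yields the time until the inter-node distance exceeds r, assuming straight-line motion. A sketch:

```java
// Link Expiration Time between two mobile hosts, using the standard
// mobility-prediction formula: given positions (xi, yi) and (xj, yj),
// speeds vi and vj, headings thetaI and thetaJ (radians), and radio
// range r, return the time until the two nodes drift out of range.
public class LinkExpirationTime {
    public static double let(double xi, double yi, double vi, double thetaI,
                             double xj, double yj, double vj, double thetaJ,
                             double r) {
        double a = vi * Math.cos(thetaI) - vj * Math.cos(thetaJ);
        double b = xi - xj;
        double c = vi * Math.sin(thetaI) - vj * Math.sin(thetaJ);
        double d = yi - yj;
        if (a == 0 && c == 0) return Double.POSITIVE_INFINITY; // identical velocities: link never expires
        double disc = (a * a + c * c) * r * r - Math.pow(a * d - b * c, 2);
        if (disc < 0) return 0;                                 // relative motion never within range
        return (-(a * b + c * d) + Math.sqrt(disc)) / (a * a + c * c);
    }
}
```

A mobile host would then prefer the candidate partner with the largest LET, so the partnership is likely to outlive the handoff.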

  Stateless Multicasting in Mobile Ad Hoc Networks

There is increasing interest in, and there are big challenges in, designing a scalable and robust multicast routing
protocol in a mobile ad hoc network (MANET) due to the difficulty in group membership management,
multicast packet forwarding, and the maintenance of multicast structure over the dynamic network
topology for a large group size or network size. In this paper, we propose a novel Robust and Scalable
Geographic Multicast Protocol (RSGM). Several virtual architectures are used in the protocol, without
the need to maintain state information, for more robust and scalable membership management and
packet forwarding in the presence of high network dynamics due to unstable wireless channels and
node movements. Specifically, scalable and efficient group membership management is performed
through a virtual-zone-based structure, and the location service for group members is integrated with
the membership management. Both the control messages and data packets are forwarded along
efficient tree-like paths, but there is no need to explicitly create and actively maintain a tree structure.
The stateless virtual-tree-based structures significantly reduce the tree management overhead, support
more efficient transmissions, and make the transmissions much more robust to dynamics. Geographic
forwarding is used to achieve further scalability and robustness. To avoid periodic flooding of the source
information throughout the network, an efficient source tracking mechanism is designed. Furthermore,
we handle the empty-zone problem faced by most zone-based routing protocols. We have studied the
protocol performance by performing both quantitative analysis and extensive simulations. Our results
demonstrate that RSGM can scale to a large group size and a large network size, and can more efficiently
support multiple multicast groups in the network. Compared to existing protocols ODMRP and SPBM,
RSGM achieves a significantly higher delivery ratio under all circumstances, with different moving
speeds, node densities, group sizes, numbers of groups, and network sizes. RSGM also has the
minimum control overhead and joining delay.
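
The virtual-zone idea can be illustrated with a simple grid mapping: each node derives its zone identifier purely from its own coordinates, so no control traffic is needed to build or maintain the structure. The encoding below is illustrative, not RSGM's exact format.

```java
// Illustrative virtual-zone mapping: the deployment area is divided into a
// grid of square zones, and a node computes its zone ID directly from its
// own position, requiring no state exchange to build the structure.
public class ZoneId {
    public static long zoneId(double x, double y, double zoneSize, int zonesPerRow) {
        long col = (long) Math.floor(x / zoneSize);
        long row = (long) Math.floor(y / zoneSize);
        return row * zonesPerRow + col;                  // row-major zone index
    }

    public static void main(String[] args) {
        // a node at (250 m, 430 m) with 100 m zones in a 10-zone-wide field
        System.out.println(zoneId(250, 430, 100, 10));   // -> zone 42
    }
}
```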

  Rules of Designing Routing Metrics for Greedy, Face, and Combined Greedy-Face Routing

Different geographic routing protocols have different requirements on routing metric designs to ensure
proper operation. Combining a wrong type of routing metrics with a geographic routing protocol may
produce unexpected results, such as geographic routing loops and unreachable nodes. In this paper, we
propose a novel routing algebra system to investigate the compatibilities between routing metrics and
three geographic routing protocols including greedy, face, and combined greedy-face routing. Five
important algebraic properties, respectively named odd symmetry, transitivity, strict order, source
independence, and local minimum freeness, are defined in this algebra system. Based on these algebraic
properties, the necessary and sufficient conditions for loop-free, delivery-guaranteed, and consistent
routing are derived when greedy, face, and combined greedy-face routing serve as packet forwarding
schemes or as path discovery algorithms, respectively. Our work provides essential criteria for
evaluating and designing geographic routing protocols.

  Decentralized QoS-Aware Checkpointing Arrangement in Mobile Grid Computing

This paper deals with decentralized, QoS-aware middleware for checkpointing arrangement in Mobile
Grid (MoG) computing systems. Checkpointing is more crucial in MoG systems than in their conventional
wired counterparts due to host mobility, dynamicity, less reliable wireless links, frequent
disconnections, and variations in mobile systems. We've determined that finding the globally optimal
checkpoint arrangement is NP-complete, and so we consider Reliability Driven (ReD) middleware, employing
decentralized QoS-aware heuristics, to construct superior checkpointing arrangements efficiently. With
ReD, an MH (mobile host) simply sends its checkpointed data to one selected neighboring MH, and also
serves as a stable point of storage for checkpointed data received from a single approved neighboring
MH. ReD works to maximize the probability of checkpointed data recovery during job execution,
increasing the likelihood that a distributed application, executed on the MoG, completes without
sustaining an unrecoverable failure. It allows collaborative services to be offered practically and
autonomously by the MoG. Simulations and actual testbed implementation show ReD's favorable
recovery probabilities with respect to Random Checkpointing Arrangement (RCA) middleware, a QoS-
blind comparison protocol that produces random checkpointing arrangements.
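
The flavor of ReD's decentralized heuristic can be sketched as a local scoring decision: each mobile host rates its neighbors by an estimated probability of later recovering the checkpoint from them and transmits to the best-scoring one. The score below (link reliability times host stability) is an illustrative stand-in for the paper's QoS metrics, not its exact formulation.

```java
import java.util.*;

// Sketch of a QoS-aware checkpointing choice in the spirit of ReD: a mobile
// host scores each neighbor by the chance that checkpointed data could later
// be recovered from it (here: link reliability x host stability, both in
// [0,1], illustrative metrics) and sends its checkpoint to the best one.
public class CheckpointPartner {
    record Neighbor(String id, double linkReliability, double hostStability) {
        double recoveryScore() { return linkReliability * hostStability; }
    }

    public static Optional<Neighbor> choose(List<Neighbor> neighbors) {
        return neighbors.stream()
                .max(Comparator.comparingDouble(Neighbor::recoveryScore));
    }

    public static void main(String[] args) {
        List<Neighbor> nbrs = List.of(
                new Neighbor("MH-2", 0.9, 0.6),
                new Neighbor("MH-5", 0.7, 0.95));
        choose(nbrs).ifPresent(n -> System.out.println("checkpoint to " + n.id()));
    }
}
```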

  Independently Verifiable Decentralized Role-Based Delegation

In open systems such as cloud computing platforms, delegation transfers privileges among users across
different administrative domains and facilitates information sharing. We present an independently
verifiable delegation mechanism, where a delegation credential can be verified without the participation
of domain administrators. Our protocol, called role-based cascaded delegation (RBCD), supports simple
and efficient cross-domain delegation of authority. RBCD enables a role member to create delegations
based on the dynamic needs of collaboration; in the meantime, a delegation chain can be verified by
anyone without the participation of role administrators. We also describe an efficient realization of
RBCD by using aggregate signatures, where the authentication information for an arbitrarily long role-
based delegation chain is captured by one short signature of constant size.

  A Visual Backchannel for Large-Scale Events

We introduce the concept of a Visual Backchannel as a novel way of following and exploring online
conversations about large-scale events. Microblogging communities, such as Twitter, are increasingly
used as digital backchannels for timely exchange of brief comments and impressions during political
speeches, sport competitions, natural disasters, and other large events. Currently, shared updates are
typically displayed in the form of a simple list, making it difficult to get an overview of the fast-paced
discussions as they happen in the moment and how they evolve over time. In contrast, our Visual
Backchannel design provides an evolving, interactive, and multi-faceted visual overview of large-scale
ongoing conversations on Twitter. To visualize a continuously updating information stream, we include
visual saliency for what is happening now and what has just happened, set in the context of the evolving
conversation. As part of a fully web-based coordinated-view system we introduce Topic Streams, a
temporally adjustable stacked graph visualizing topics over time, a People Spiral representing
participants and their activity, and an Image Cloud encoding the popularity of event photos by size.
Together with a post listing, these mutually linked views support cross-filtering along topics,
participants, and time ranges. We discuss our design considerations, in particular with respect to
evolving visualizations of dynamically changing data. Initial feedback indicates significant interest and
suggests several unanticipated uses.

  SparkClouds: Visualizing Trends in Tag Clouds

Tag clouds have proliferated over the web over the last decade. They provide a visual summary of a
collection of texts by depicting tag frequency through font size. In use, tag clouds can evolve as
the associated data source changes over time. Interesting discussions around tag clouds often include a
series of tag clouds and consider how they evolve over time. However, since tag clouds do not explicitly
represent trends or support comparisons, the cognitive demands placed on the person for perceiving
trends in multiple tag clouds are high. In this paper, we introduce SparkClouds, which integrate
sparklines [23] into a tag cloud to convey trends between multiple tag clouds. We present results from a
controlled study that compares SparkClouds with two traditional trend visualizations—multiple line
graphs and stacked bar charts—as well as Parallel Tag Clouds [4]. Results show that SparkClouds' ability
to show trends compares favourably to the alternative visualizations.
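
The underlying tag-cloud encoding is a simple mapping from frequency to font size; a sparkline of per-interval counts is then drawn with each tag to expose its trend. A minimal linear-scaling sketch follows (many deployed clouds use square-root or logarithmic scaling instead):

```java
// Minimal tag-cloud scaling: map a tag's frequency to a font size by linear
// interpolation between minSize and maxSize; a sparkline of per-interval
// counts would then be rendered under each tag to convey its trend.
public class TagScale {
    public static double fontSize(int count, int minCount, int maxCount,
                                  double minSize, double maxSize) {
        if (maxCount == minCount) return minSize;        // degenerate cloud: one frequency
        return minSize + (maxSize - minSize) * (count - minCount) / (double) (maxCount - minCount);
    }

    public static void main(String[] args) {
        System.out.println(fontSize(42, 1, 100, 10, 36)); // ~20.8 pt
    }
}
```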

  Transmission Network Planning Under Security and Environmental Constraints

This paper explores the impact of CO2 emission trading on capacity planning of electric power
transmission systems. Two different models for annual emission costs are assumed. The CO2 emission
price is modeled as a probability density function in the transmission network planning problem. The
Monte Carlo technique is deployed to simulate the CO2 emission price volatility. The transmission
network planning problem is formulated as a mixed-integer optimization whose objective is to minimize
the sum of annual generator operating costs and annuitized transmission investment costs over
different demand levels subject to N-1 network security constraints as well as operating limits on system
components. The overall problem is formulated within the framework of a linear dc optimal power flow
incorporating binary decision variables to model the lumpy nature of transmission investment. A linear
model of losses is also proposed and included in the dc power flow model. The proposed approach can
be used to determine the most probable optimal transmission capacity. The methodology is
demonstrated through case studies simulated on the IEEE 24-bus network.
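
The Monte Carlo treatment of price volatility can be sketched as follows: sample the CO2 price from an assumed probability density (a truncated normal here, purely illustrative), evaluate each candidate expansion plan under every sample, and report how often each plan is optimal; the most frequent winner is the most probable optimal plan. All costs and emission figures below are made-up placeholders.

```java
import java.util.Random;

// Illustrative Monte Carlo over the CO2 emission price: draw prices from an
// assumed pdf, compute each candidate expansion plan's total annual cost,
// and count how often each plan wins across samples.
public class EmissionPriceMC {
    public static void main(String[] args) {
        Random rng = new Random(7);
        double meanPrice = 25.0, sdPrice = 8.0;          // EUR/tCO2, assumed distribution
        // per-plan fixed investment cost and annual emitted tons (illustrative numbers)
        double[] investCost = {120e6, 180e6};
        double[] tonsCO2    = {3.0e6, 1.2e6};
        int[] wins = new int[investCost.length];

        for (int trial = 0; trial < 100_000; trial++) {
            double price = Math.max(0, meanPrice + sdPrice * rng.nextGaussian());
            int best = 0;
            double bestCost = Double.MAX_VALUE;
            for (int p = 0; p < investCost.length; p++) {
                double cost = investCost[p] + price * tonsCO2[p];
                if (cost < bestCost) { bestCost = cost; best = p; }
            }
            wins[best]++;
        }
        for (int p = 0; p < wins.length; p++)
            System.out.printf("plan %d optimal in %.1f%% of samples%n", p, wins[p] / 1000.0);
    }
}
```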

  A Survey of Payment Card Industry Data Security Standard

Usage of payment cards, such as credit cards, debit cards, and prepaid cards, continues to grow. Security
breaches related to payment cards have led to billion dollar losses annually. In order to offset this trend,
major payment card networks have founded the Payment Card Industry (PCI) Security Standards Council
(SSC), which has designed and released the PCI Data Security Standard (DSS). This standard guides
service providers and merchants to implement stronger security infrastructures that reduce the risks of
security breaches. This article mainly discusses the need for the PCI DSS and the data security
requirements defined in the standard to address the ongoing security issues, especially those pertaining
to payment card data handling. It also surveys various technical solutions, offered by a few security
vendors, for merchant companies and organizations involved in payment card transaction processing to
comply with the standard. The compliance of merchants or service providers with the PCI DSS is assessed
by PCI Qualified Security Assessors (QSAs). This article thus discusses the requirements to become PCI
QSAs. In addition, it introduces the PCI security scanning procedures that guide the scanning of security
policies of a merchant or service provider and prepare relevant reports. We believe that this survey
sheds light on potential technical research problems pertinent to the PCI DSS and its compliance.

  Detection of Selfish Nodes in Networks Using CoopMAC Protocol with ARQ

CoopMAC has been recently proposed as a possible implementation of cooperation protocols in the
medium access control (MAC) layer of a wireless network. However, some nodes may refrain from
cooperation for selfish purposes, e.g. in order to save energy, in what is called selfish behavior or
misbehavior. This protocol violation worsens other nodes' performance and can be avoided if other
nodes detect and punish (e.g., by banning from the network) misbehaving nodes. However, fading and
interference may prevent nodes from cooperating even if they are willing; therefore, it is not trivial to
identify misbehaving nodes. In a fading scenario where an automatic repeat request (ARQ) protocol is
used, we propose a mechanism that allows misbehaving nodes to be detected. Two approaches are
considered, based either on the uniformly most powerful (UMP) test or on the sequential probability
ratio test (SPRT). The two techniques are characterized and compared in terms of their average detection
delay and resulting network performance.
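
The SPRT side of such a detector is concrete enough to sketch: model each cooperation opportunity as a Bernoulli trial, with cooperation probability p0 for an honest node (failures caused only by fading) and p1 < p0 for a selfish one, and accumulate the log-likelihood ratio until it crosses a decision threshold derived from the target false-alarm rate alpha and missed-detection rate beta. A hedged Java sketch, not the paper's exact test statistics:

```java
// Sequential probability ratio test for selfishness: each cooperation
// opportunity is a Bernoulli trial (true = node relayed). H0: cooperation
// probability p0 (honest); H1: p1 < p0 (selfish). Wald's thresholds bound
// the false-alarm rate (alpha) and missed-detection rate (beta).
public class SprtDetector {
    private final double p0, p1, upper, lower;
    private double llr = 0;                              // accumulated log-likelihood ratio

    public SprtDetector(double p0, double p1, double alpha, double beta) {
        this.p0 = p0; this.p1 = p1;
        this.upper = Math.log((1 - beta) / alpha);       // crossing -> declare selfish
        this.lower = Math.log(beta / (1 - alpha));       // crossing -> declare honest
    }

    /** @return +1 selfish, -1 honest, 0 keep observing */
    public int observe(boolean cooperated) {
        llr += cooperated ? Math.log(p1 / p0)
                          : Math.log((1 - p1) / (1 - p0));
        if (llr >= upper) return 1;
        if (llr <= lower) return -1;
        return 0;
    }
}
```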

  XLP: A Cross-Layer Protocol for Efficient Communication in Wireless Sensor Networks

Severe energy constraints of battery-powered sensor nodes necessitate energy-efficient communication
in Wireless Sensor Networks (WSNs). However, the vast majority of the existing solutions are based on
the classical layered protocol approach, which leads to significant overhead. It is much more efficient to
have a unified scheme, which blends common protocol layer functionalities into a cross-layer module. In
this paper, a cross-layer protocol (XLP) is introduced, which achieves congestion control, routing, and
medium access control in a cross-layer fashion. The design principle of XLP is based on the cross-layer
concept of initiative determination, which enables receiver-based contention, initiative-based
forwarding, local congestion control, and distributed duty cycle operation to realize efficient and reliable
communication in WSNs. The initiative determination requires simple comparisons against thresholds,
and thus, is very simple to implement, even on computationally constrained devices. To the best of our
knowledge, XLP is the first protocol that integrates functionalities of all layers from PHY to transport into
a cross-layer protocol. A cross-layer analytical framework is developed to investigate the performance of
the XLP. Moreover, in a cross-layer simulation platform, the state-of-the-art layered and cross-layer
protocols have been implemented along with XLP for performance evaluations. XLP significantly
improves the communication performance and outperforms the traditional layered protocol
architectures in terms of both network performance and implementation complexity.
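
The abstract's "simple comparisons against thresholds" can be made tangible: a node volunteers as a relay in the receiver-based contention only if every local condition passes. The condition set below paraphrases the kind of checks involved (link quality, congestion, buffer, energy); the exact conditions and thresholds in XLP differ.

```java
// Initiative determination in the spirit of XLP: a candidate relay takes
// the initiative to join receiver-based contention only if all threshold
// comparisons pass (the condition set is paraphrased from the abstract,
// not the paper's precise definitions).
public class Initiative {
    public static boolean takeInitiative(double rtsSnrDb, double minSnrDb,
                                         double relayLoad, double maxLoad,
                                         double bufferOccupancy, double maxBuffer,
                                         double residualEnergy, double minEnergy) {
        return rtsSnrDb >= minSnrDb          // link good enough to decode reliably
            && relayLoad <= maxLoad          // local congestion control
            && bufferOccupancy <= maxBuffer  // room to queue the packet
            && residualEnergy >= minEnergy;  // enough energy left to serve as relay
    }
}
```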

  Radio Sleep Mode Optimization in Wireless Sensor Networks

Energy efficiency is a central challenge in sensor networks, and the radio is a major contributor to
overall node energy consumption. Current energy-efficient MAC protocols for sensor networks use a fixed low-
power radio mode for putting the radio to sleep. Fixed low-power modes involve an inherent trade-off:
deep sleep modes have low current draw and high energy cost and latency for switching the radio to
active mode, while light sleep modes have quick and inexpensive switching to active mode with a higher
current draw. This paper proposes adaptive radio low-power sleep modes based on current traffic
conditions in the network. It first introduces a comprehensive node energy model, which includes
energy components for radio switching, transmission, reception, listening, and sleeping, as well as the
often disregarded microcontroller energy component for determining the optimal sleep mode and MAC
protocol to use for given traffic scenarios. The model is then used for evaluating the energy-related
performance of our recently proposed RFID impulse protocol enhanced with adaptive low-power
modes, and comparing it against BMAC and IEEE 802.15.4, for both MicaZ and TelosB platforms under
varying data rates. The comparative analysis confirms that RFID impulse with adaptive low-power modes
provides up to 20 times lower energy consumption than IEEE 802.15.4 in low-traffic scenarios. The
evaluation also yields the optimal settings of low-power modes on the basis of data rates for each node
platform, and provides guidelines and a simple algorithm for the selection of appropriate MAC protocol,
low-power mode, and node platform for a given set of traffic requirements of a sensor network
application

  On the Benefits of Cooperative Proxy Caching for Peer-to-Peer Traffic

This paper analyzes the potential of cooperative proxy caching for peer-to-peer (P2P) traffic as a means
to ease the burden imposed by P2P traffic on Internet Service Providers (ISPs). In particular, we propose
two models for cooperative caching of P2P traffic. The first model enables cooperation among caches
that belong to different autonomous systems (ASs), while the second considers cooperation among
caches deployed within the same AS. We analyze the potential gain of cooperative caching in these two
models. To perform this analysis, we conduct an eight-month measurement study on a popular P2P
system to collect traffic traces for multiple caches. Then, we perform extensive trace-based simulations
to analyze different angles of cooperative caching schemes. Our results demonstrate that: 1) significant
improvement in byte hit rate can be achieved using cooperative caching, 2) simple object replacement
policies are sufficient to achieve that gain, and 3) the overhead imposed by cooperative caching is
negligible. In addition, we develop an analytic model to assess the gain from cooperative caching in
different settings. The model accounts for number of caches, salient P2P traffic features, and network
characteristics. Our model confirms that substantial gains from cooperative caching are attainable under
wide ranges of traffic and network characteristics.
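
The headline metric of such a study, byte hit rate, is easy to reproduce in a trace-driven simulation. The sketch below pairs it with plain LRU replacement, echoing the finding that simple policies suffice; requests are (objectId, sizeBytes) pairs, e.g. built with Map.entry("obj1", 4096L).

```java
import java.util.*;

// Trace-driven byte hit rate with a plain LRU cache:
// byteHitRate = bytes served from cache / total bytes requested.
public class ByteHitRate {
    public static double simulate(List<Map.Entry<String, Long>> trace, long capacity) {
        LinkedHashMap<String, Long> cache = new LinkedHashMap<>(16, 0.75f, true); // access order
        long used = 0, hitBytes = 0, totalBytes = 0;
        for (var req : trace) {
            totalBytes += req.getValue();
            if (cache.containsKey(req.getKey())) {
                hitBytes += req.getValue();
                cache.get(req.getKey());                 // touch to refresh LRU order
            } else {
                while (used + req.getValue() > capacity && !cache.isEmpty()) {
                    var eldest = cache.entrySet().iterator().next();   // evict least recently used
                    used -= eldest.getValue();
                    cache.remove(eldest.getKey());
                }
                if (req.getValue() <= capacity) {        // admit if the object fits at all
                    cache.put(req.getKey(), req.getValue());
                    used += req.getValue();
                }
            }
        }
        return totalBytes == 0 ? 0 : (double) hitBytes / totalBytes;
    }
}
```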

  The Design and Evaluation of a Self-Organizing   Superpeer Network

Superpeer architectures exploit the heterogeneity of nodes in a peer-to-peer (P2P) network by assigning
additional responsibilities to higher capacity nodes. In the design of a superpeer network for file sharing,
several issues have to be addressed: how client peers are related to superpeers, how superpeers locate
files, how the load is balanced among the superpeers, and how the system deals with node failures. In
this paper, we introduce a self-organizing superpeer network architecture (SOSPNet) that solves these
issues in a fully decentralized manner. SOSPNet maintains a superpeer network topology that reflects
the semantic similarity of peers sharing content interests. Superpeers maintain semantic caches of
pointers to files, which are requested by peers with similar interests. Client peers, on the other hand,
dynamically select superpeers offering the best search performance. We show how this simple approach
can be employed not only to optimize searching, but also to solve generally difficult problems
encountered in P2P architectures such as load balancing and fault tolerance. We evaluate SOSPNet
using a model of the semantic structure derived from eight-month traces of two large file-sharing
communities. The obtained results indicate that SOSPNet achieves close-to-optimal file search
performance, quickly adjusts to changes in the environment (node joins and leaves), survives even
catastrophic node failures, and efficiently distributes the system load taking into account superpeer
capacities.

    A Chat Application in Lift

The article discusses how to build a multiuser, realtime chat application in Lift and discusses Scala's
language features that make Lift possible. The application provides a single chat server that takes chat
messages and redistributes the messages out to all listeners. Lift's Comet implementation uses a single
HTTP connection to poll for changes to an arbitrary number of components on the page.

  Research on Constructing an E-Commerce Site

An e-commerce site with interactive features is a business information system that establishes a virtual
shopping mall on the network and provides fast, personalized shopping services, so that people can buy
satisfactory goods from their own homes. It also aims to provide a more economical sales channel for
merchants. Traditional shopping no longer suits the pace of modern life; e-commerce sites that are
systematic, economical, standardized, fast, and convenient will become the first choice for modern
shopping. The paper uses ASP technology to implement a practical system for personal shopping on the
Internet.

  E-School: A Web-Service Oriented Resource Based E-Learning System

The education systems of the 21st century are characterized by increasing dependence on various modes of
electronic facilities. E-learning is one such facility that can ensure education for everyone without
considering their geographical locations. However, due to very high initial cost for infrastructural
development, developing countries like Bangladesh are unable to get the total benefits of e-learning.
Therefore, the education facilities and standards in rural and remote areas are noticeably
unsatisfactory compared to the urban areas of those countries. Reusing existing resources and
infrastructures to implement an e-learning system can reduce the overall operational cost of the system
and hence can be ideal for developing countries to exploit various e-learning facilities. Considering these
issues, in this paper we propose to design a web service oriented resource based system named “E-
School” for the primary, secondary, and higher secondary education system of Bangladesh. This e-
learning system will provide identical course materials, useful multimedia tools, integrated databases,
and a help desk for students. We design E-School as a platform-independent system so that it can be
accessed using any cell phone, PDA, or computer from anywhere with mobile network coverage.

  Managing Health Care Through Social Networks

Surveys show an increased reliance on physician and patient social networks, which promise to
transform healthcare management. But challenges such as privacy and data accuracy remain.

  Museum Gallery Review

Living in London has definite advantages. The tube (London's subway) may be overcrowded in the
mornings, and the cost of a three-course meal at even a mediocre restaurant would scare anyone into
cooking at home (indefinitely); however, in London, there is always something to do, see, or visit. For
example, I can name seven museums that can be visited seven days a week, with the odd late-night
opening once a month… FREE of charge! Yes, free of charge!

  Social Connect Services

Social-networking websites let users build social connections with family, friends, and coworkers. Users
can also build profiles for storing and sharing various types of content with others, including photos,
videos, and messages. Updating user profiles with interesting content is a form of self-expression that
increases interaction in such sites. To encourage this interaction and provide richer content, social-
networking sites expose their networks to Web services in the form of online application programming
interfaces. These APIs allow third-party developers to interface with the social-networking site, access
information and media posted with user profiles, and build social applications that aggregate, process,
and create content based on users’ interests.

Social-networking sites provide numerous application services that can mash up user-profile data with
third-party data. In addition, third-party sites can rapidly distribute their services via social-networking
sites to keep in touch with users while they’re on these sites. Moreover, users can enjoy various
applications with content from numerous third-party sites: users access social-networking sites, where
they maintain their profiles; third-party sites retrieve these profiles, enrich the content, and return them
to the social-networking sites for consumption by the user and, possibly, friends. For example, Facebook
users can share music with friends, create playlists, and get concert alerts on their profile page by
installing the third-party music application iLike (www.ilike.com). Major social-networking sites have
begun launching social-network connect services (SNCSs), such as Facebook Platform, Google Friend Connect,
and MySpaceID, that further break down the garden walls of social-networking sites.
These SNCSs let third-party sites develop social applications and extend their services without having to
either host or build their own social network. This extension allows third-party sites to leverage the
social-networking site’s features.

For example, third-party sites can exploit the authentication services provided by a social-networking
site so that users need not create another username and password to access the third-party site;
instead, users can draw on their social-network credentials and established profile.

Users can also access third-party sites that leverage social-network user-profile content. The third-party
sites retrieve users’ profiles from the social-networking site to create an enhanced experience. In this
way, they can increase membership by providing more interesting content from a variety of sources in a
seamless manner.
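
The retrieval pattern described above boils down to an authenticated HTTP call: the third-party site presents the delegated credential to the social network's web API and receives profile content in return. A minimal Java sketch with a hypothetical endpoint and token; no real site's API is implied.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the third-party profile-retrieval pattern: after the user
// authenticates with the social-networking site, the third-party site uses
// the delegated token to fetch profile data over the site's web API.
public class ProfileFetch {
    public static String fetchProfile(String accessToken) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://social.example.com/api/me"))  // hypothetical endpoint
                .header("Authorization", "Bearer " + accessToken)      // delegated credential
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();                                        // e.g., a JSON profile document
    }
}
```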
