
2020 International Conference on Computing and Information Technology, University of Tabuk, Kingdom of Saudi Arabia.

Volume: 02, Issue: ICCIT-1441, Page No.: 79-83, 9th & 10th Sep. 2020

Real-Time Semantic Web Data Stream Processing Using Storm

Mouad Banane
Dept. of Computer Science
Hassan II University
Casablanca, Morocco
mouad.banane-etu@etu.univh2c.ma

Abstract—Semantic web technologies are increasingly used for the management of data flows, and several RDF flow processing systems have been proposed. The data entering such a system is large and generated continuously at a fast and variable rate. As a result, storing and processing the entire flow becomes costly and reasoning almost impossible. Consequently, techniques that reduce the load while preserving the semantics of the data make it possible to optimize processing and even reasoning. However, none of the SPARQL extensions includes this functionality. Thus, in this paper we present a system for managing RDF data flows in real time. The system contains two parts: the first manages the storage of RDF data, and the second processes the data that arrives in real time, combining this new data with stored data to respond to requests from users, programs, and software agents. For validation, this approach makes it possible to detect events and automatically extract relations from them in RDF format. To do this, the system analyzes Twitter messages in real time, simultaneously with the processing of RDF data stored in a triplestore.

Keywords—Big Data, Real-time Processing, Semantic Web, RDF Data Stream Processing.

978-1-7281-2680-7/20/$31.00 ©2020 IEEE
Authorized licensed use limited to: R V College of Engineering. Downloaded on January 23, 2024 at 18:42:02 UTC from IEEE Xplore. Restrictions apply.

I. Introduction

The Semantic Web is the evolution of Web 1.0. Its main innovation is to enable data reuse, making data easier to find, combine, and use. For this, the available data are organized in a semantic network, a structure organized using metadata. Metadata are data describing other data: we can thus obtain information about each annotated datum, which facilitates its retrieval (for example, we can specify that the "Jason Statham" datum associated with a film corresponds to an actor's name). The semantization of data facilitates its use both by the user and by the machine. The RDF (Resource Description Framework) format is the basic language of the semantic web. As a data model, it makes it possible to build a graph representing web resources and their metadata. An RDF document represents information in the form of triples consisting of a subject, a predicate, and an object. In the literature, the management of real-time semantic web data is generally seen as a sub-task of managing massive data flows. The likelihood of false positives is also discussed in this paper. Finally, we implement the entire system on the Apache Storm platform using Twitter data. We propose an approach to detect events and automatically extract relations in RDF format to enrich a knowledge base. To do this, the system analyzes messages from Twitter, a dynamic source of information that can be captured in real time. In the next section, we discuss previous work in the literature; we then specify our objective and the tools used in Section 3. In Section 4, we describe our system, which is evaluated in Section 5.

II. Related Work

All the RDF flow processing systems proposed, as well as real-time RDF data processing approaches, share the same problems of heterogeneity (multi-source data) and the absence of explicit semantics that would make it possible to satisfy complex queries and reasoning. In this section, we provide a brief overview of research dealing with the management of RDF data flows in a real-time distributed system. We also present the main systems offering a serialization adapted to RDF flows. RDSZ [1] (RDF Differential Stream compressor based on Zlib) is an RDF compression method. The differential encoder used by the algorithm assigns an identifier to each subject and each object of the RDF triples and stores them in a table of key-value pairs. RDF triples can thus be serialized by replacing subjects and objects with their identifiers, or by leaving them empty when a subject repeats across several triples. RDSZ analyzes which compression method is most effective: applying the Zlib algorithm [2] to the base triples, or applying Zlib to the serialized form. Ztreamy [3] is a scalable middleware platform for the distribution of semantic data streams. It makes it possible to publish data flows so that they can be consumed by other applications; the platform supports operations such as mirroring (duplication of streams for parallel processing), joining, partitioning (separation of stream elements for specific processing), and filtering. The scalable and portable approach of this project makes it adaptable to a wide range of use cases, such as the smart city. ERI [4] is a compressed RDF data format aimed at reducing the amount of data transmitted during flow processing. Based on RDSZ, the algorithm exploits the fact that the structure of the data in a stream is well known to the producer and does not vary greatly. ERI considers a stream as a continuous sequence of blocks of RDF triples. Each block is divided into channels: structural channels, which encode the subjects of the triples and the properties associated with each subject using a dynamic dictionary of structures, and value channels, which encode the concrete data values of the triples. The structure dictionary brings together the different groups of triples having the same subject; these groups are called molecules. Various operations are carried out to optimize the size of the molecules, for example by avoiding repetitions of discrete predicates (identical predicate-object pairs on several subjects). Information about discrete predicates, as well as information about molecules (metadata, compression, configuration, ...), is stored in presets, provided by the data source or inferred at runtime. An ERI stream is therefore a sequence of blocks of molecules, each multiplexed into several channels, the whole forming a set suitable for standard compression algorithms. The principle of the structure dictionary allows compression optimized for RDF flows. But for our use case, the problem of multiple treatments, requiring several compressions and decompressions, persists.

Several approaches to processing big data in distributed systems have been proposed. Recently, many applications have emerged that use data streams from distributed and heterogeneous sources. The realization of such a system remains a scientific challenge that must take into account the volume of data, their speed, and their variety. Some prototypes have been proposed to define a system architecture that ensures the management of massive data flows in real time. On the other hand, the domain of the semantic web offers, through a common format (RDF), a way to combine several heterogeneous systems and thus compensate for the variety of data. In what follows, we describe some existing systems adapted to the processing of raw data flows on a distributed platform. Apache Hadoop [5] is one of these distributed systems, widely used to analyze big data. It provides a distributed file system, HDFS (Hadoop Distributed File System), which supports storage on a very large number of machines. The advantage of HDFS is to limit transfer time by assigning to each entity of the cluster the task of processing the data it contains. Hadoop is based on the MapReduce parallel computation algorithm [6], where the computation time is normally divided by the number of entities performing the task. This parallel processing is based on batch mode, where each computation lasts a certain time. It is very efficient for analyzing large volumes of data. However, it was not designed to meet the needs of analyses with tight time constraints, for example real-time detection of anomalies or bank fraud. To get around the nature of batch mode, other solutions are appearing in the Big Data ecosystem, the most popular of which are Apache Storm and Spark Streaming. Apache Storm [7] is a real-time oriented solution based on the concept of complex event processing (CEP) and uses the concept of a topology. Concretely, it is a fault-tolerant distributed computing system that guarantees data processing at least once. Storm revolves around four concepts. Tuple: a message in the Storm sense, namely a list of dynamically typed named values. Stream: a collection of tuples with the same pattern. Spout: a component that consumes data from a source and transmits one or more streams to the bolts. Bolt: a tuple-processing node that can generate streams to be transmitted to other bolts; it can also write output data to external storage platforms. Storm also supports an additional level of abstraction through the Trident API [8]. This API integrates functions over a data set such as join, aggregation, and grouping, and allows processing ordered in mini-batches of N tuples. On the other hand, Storm does not provide any Big Data storage medium as Hadoop does [13]. Spark Streaming [9] is another real-time processing system based on the MapReduce programming paradigm. It is the extension of Apache Spark, analysis software that accelerates the processing of data on a Hadoop platform. Spark is 10 to 100 times faster than Hadoop due to the reduced number of writes and reads on disk. For this, it uses an abstraction called the RDD (Resilient Distributed Dataset), which makes it possible, transparently, to load into memory data distributed on HDFS and to persist it on disk if necessary. RDDs have the advantage of providing fault tolerance without resorting to often costly replication mechanisms. They make it possible to explicitly persist intermediate data in memory, to control their partitioning in order to optimize their placement, and to manipulate them using a set of operations (map, filter, join). Spark Streaming extends Spark with micro-batch operation. It accumulates data over a certain period to produce a micro-RDD on which it performs the desired computation. Because of this, unlike Storm [15], which processes items one by one, Spark Streaming adds a delay between the arrival of a message and its processing. Its API is identical to the classic Spark API; it is thus possible to process data streams in the same way as static data. Systems have also been proposed to define hybrid architectures that manage both batch and real-time processing, such as the Lambda and Storm-on-YARN architectures. The idea of the Lambda architecture [8] is to simultaneously use batch processing on all data to provide complete views, and real-time processing of data flows to provide dynamic views. The outputs from the two treatments can be combined at the presentation level. This architecture attempts to balance throughput, latency, and fault tolerance. It is made up of three layers. The batch layer manages the storage of the data set, as well as the calculation of complete views on all or part of the data; these views are updated infrequently, since the calculation time can be long (a few hours). The real-time layer processes recent data (not yet taken into account by the batch layer) in order to compensate for the high latency of the batch layer; it continuously and incrementally calculates real-time views based on a flow processing system (e.g. Storm) and random read/write databases, with processing latency on the order of a few milliseconds. The service layer manages the merging of results from the batch and real-time layers; the merging logic is the responsibility of the developer, who must define how the data will be exploited. The advantage of the Lambda architecture is its ability to process and maintain data flows while large historical data is also processed by a batch pipeline. However, the duality of the batch and real-time layers requires producing the same result via two different paths. This means maintaining code in two complex distributed systems, designed differently, while ensuring that each event is processed only once. Storm-on-YARN [10] is another solution, developed by Yahoo!, to co-locate real-time processing with batch processing. The idea is to make it possible to run Hadoop and CEP technologies in the same cluster instead of two separate clusters. The load used by Storm often varies depending on the speed and volume of the data to be processed. Storm-on-YARN makes it possible to manage peak loads by dynamically allocating resources normally used by Hadoop to Storm when necessary. Besides, Yahoo! added mechanisms that allow Storm applications to access data stored in HDFS and HBase [18].

The originality of our work is the management of RDF data in real time via the use of a real-time big data processing tool called Storm.

III. Semantic Web, Data Flow, and Storm

A. Semantic Web

The Semantic Web aims to organize and structure the enormous amount of information present on the Net. It is a semi-structured language based on XML. Figure 1 shows one of the versions of the layered organization offered by the W3C. Each layer is built on the layers below; thus, all of the layers use XML syntax. This allows you to take advantage of all the technologies developed around XML: XML Schema,


XML resource exploitation tools (Java libraries, etc.), and XML databases. XML comes from the SGML language, but unlike HTML, the structure and presentation of XML documents are conceptually separate. XML is a language that uses tags as a format for the universal representation of data. At the same time, an XML document contains both the data and indications of the role that this data plays. XML is the cornerstone of information exchange on the web. Unfortunately, XML is insufficient to describe all the semantics needed on the Web.

Fig. 1. Stack of languages for the Semantic Web.

RDF is a language developed by the W3C to put a semantic layer on the Web [11]. It allows the connection of web resources using directed and labeled arcs. The structure of RDF documents is complex. An RDF document is a set of triples <subject, predicate, object>, as shown in Figure 2. The predicate (also called property) links the subject (resource) to the object (value). Thus, the subject and the object are nodes of the graph, linked by an edge directed from the subject to the object. The nodes and the arcs are typed as "resources". A resource is identified by a URI [11].

Fig. 2. An RDF triple.

B. Data Flow

We can define a data flow as a continuous, ordered sequence of items (ordered implicitly by time of arrival in the Data Flow Management System, or explicitly by a production timestamp at the source), arriving in real time. The adoption of semantic web technologies in the world of dynamic data and sensors gave rise to the concept of the RDF data flow. Thus, RDF flows were introduced as a natural extension of the RDF model to the flow environment.

Fig. 3. Example of RDF graph flow.

C. Storm

Apache Storm is a real-time distributed processing system for data flows. Storm uses the concept of a topology through which data tuples travel. This architecture is made up of Spouts and Bolts. A Spout is a source of data flow, while a Bolt contains the computation logic. A network of Spouts and Bolts is represented by a directed acyclic graph called a "topology". Storm is an open-source distributed real-time processing system produced by the Apache community. It enables large data streams to be processed quickly and reliably. It can be used for real-time analysis, learning, and continuous computation. It is characterized by speed, scalability, fault tolerance, reliability, ease of use (multi-language support), and maturity.

IV. Approach Description

In the architecture of our system, event data is processed and managed by distributed systems like Redis [12] and Storm [13]; Redis is used as the in-memory processing component.

Fig. 4. System architecture.

Fig. 5. Translation of data formats and models into RDF (heterogeneous formats and data models are homogenized, yielding more knowledge).

A distributed data flow processing system is an essential element to ensure high scalability and fast response times. This system must support scalable reasoning on data flows using continuous SPARQL queries. Several real-time distributed computing platforms exist, such as Apache Storm, Apache S4, or Spark Streaming [16]. They offer different strategies for data partitioning and task allocation. The idea is to ensure modularity through an API that allows the flexible introduction of an existing or future computation system. When developing such a system, constraints will


have to be considered, such as the dynamic distribution of data and tasks, the scheduling and parallelization of processing, and the optimization of network traffic and workload.

A. Continuous SPARQL

A continuous query engine should be able to reason not only on data flows but also on static data, and even on the Linked Open Data (LOD) cloud datasets. Queries must adapt to the incoming speed of the data flows and be evaluated continuously in order to take account of the evolving nature of the flow. The semantics of SPARQL queries must allow processing based on time or on the order of arrival of data. Standard SPARQL will be extended by introducing the concept of an adaptable sliding window (a defined portion of a flow).

Some prototypes have been proposed recently in the literature, drawing inspiration from work done by the conventional database community. For example, C-SPARQL [14] is one of the first extensions of SPARQL intended to support continuous queries. Other projects extending SPARQL have been launched. SPARQL-Stream [15] extends SPARQL so that it can manage window operators without worrying about query performance. CQELS [16], the most recent language, allows you to act natively on RDF flows and continuous queries without going through intermediary tools. These projects take into account the temporal aspect of flows and implement windowing operators. However, none of these examples is suited to large volumes of distributed data flows. Queries on this data must be able to run in a dynamic environment under tight time constraints. The distribution of these queries, as well as of the data, plays an important role in ensuring a certain level of scalability and latency. This distribution should take into account the optimization of network traffic and the workload. Also, the distribution of data across several RDF storage platforms requires the establishment of a SPARQL federation [17]. This raises the question of the best strategy for optimally executing a federation of continuous SPARQL queries. To our knowledge, there are two works [11,12] that propose to execute continuous SPARQL queries in a distributed way. However, their performance has not been evaluated in a context of complex reasoning, and there is no consideration of the federation aspect.

B. Data storage

Two approaches to data storage will have to be used in our system. The first approach stores the intermediate processing data in memory, to have fast and inexpensive input-output access. The second approach persists static data and relevant summaries of the data flow to disk. There are several in-memory NoSQL storage solutions, such as Memcached [9] and Redis [10]. The data is stored in RAM in a key-value format and can be represented in several structures, such as strings, lists, hashes, and sets. A comparative study [11] shows almost similar performance between Memcached and Redis in terms of execution time. As part of our system, we decided to use Redis because it supports more functionality for manipulating data. Unlike Memcached, Redis can periodically persist data to disk, which helps prevent data loss in the event of a failure. It also supports a Lua-based scripting language for writing stored procedures, whose atomicity is guaranteed by Redis' single-threaded architecture. Besides, Redis' Sorted Set structure provides a practical implementation of sliding windows. It allows sampling to be managed automatically by operating aggregations over a time interval, but eviction must be programmed manually.

V. Validation

This section assesses the quality and relevance of our extension. To do this, we looked at the performance obtained in terms of execution time and the preservation of the semantics of the data. We consider the processing of a set of tweets.

Twitter allows free retrieval of streaming data; taking advantage of this with a streaming tool like Storm is essential for processing this data in real time. In this paper, we read and analyze Twitter messages in real time with our Storm-based system. We create our application, which retrieves tweets from the Twitter API, using Java in Eclipse.

After adding the necessary twitter4j libraries, we first create a Java class, CreateSpout.java. We know that the processing of tweets can be done using only one Bolt, but we created two Bolts, BoltExtractor and RetweetBoltExtractor, to demonstrate the join capability of our system. To join the tuples of these two Bolts, we need another Bolt, BoltRDFWriter, which will store this data in RDF format. Now let's create a topology that will allow us to perform the processing in real time.
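As noted in the data-storage discussion above, Redis' Sorted Set structure provides a practical sliding window, but eviction must be programmed manually. The following minimal sketch illustrates that eviction logic in plain Java: the SlidingWindow class and its method names are illustrative assumptions, and the TreeMap merely stands in for the corresponding Redis calls (ZADD with the arrival timestamp as score, ZREMRANGEBYSCORE to evict), not the paper's actual implementation.

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Sketch of a time-based sliding window over stream items, mimicking the
 * Redis Sorted Set pattern: insert with the arrival timestamp as score,
 * then periodically drop everything older than the window.
 * Note: a real Sorted Set keys by member, so two items with the same
 * timestamp would not collide as they do in this simplified TreeMap.
 */
public class SlidingWindow {
    private final long windowMillis;
    // score (timestamp) -> item, kept ordered as a Sorted Set would be
    private final TreeMap<Long, String> items = new TreeMap<>();

    public SlidingWindow(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** ZADD equivalent: insert an item with its arrival timestamp as score. */
    public void add(long timestampMillis, String item) {
        items.put(timestampMillis, item);
    }

    /** ZREMRANGEBYSCORE equivalent: evict items older than (now - window). */
    public int evict(long nowMillis) {
        Map<Long, String> expired = items.headMap(nowMillis - windowMillis, false);
        int evicted = expired.size();
        expired.clear(); // headMap is a live view, so this removes from the window
        return evicted;
    }

    public int size() {
        return items.size();
    }
}
```

With a 10-second window, for example, a call to evict drops every item time-stamped more than 10 s before "now" — exactly the manual eviction step the text mentions.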
45156 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN org.apache.zookeeper.server.NIOServerCnxnFactory - Ignoring exception
java.nio.channels.ClosedChannelException: null
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:137) ~[na:1.6.0_29]
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:188) ~[zookeeper-3.4.5.jar:3.4.5-1392090]
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_29]
45156 [main] INFO backtype.storm.testing - Done shutting down in process zookeeper

Fig. 6. Result of running the system on Tweets.
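The per-tuple work that an extraction Bolt such as BoltExtractor might run on each tweet can be sketched as follows. This is a self-contained illustration, not the paper's actual implementation: the choice of hashtags as the extracted relation and the <ex:hasHashtag> predicate are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the tuple-level logic an extraction Bolt could apply to a tweet
 * before the results are joined and written out as RDF by a BoltRDFWriter-style
 * component. Hypothetical helper, for illustration only.
 */
public class HashtagExtractor {
    /** Returns the hashtags found in a tweet's text, without the '#'. */
    public static List<String> extract(String tweetText) {
        List<String> tags = new ArrayList<>();
        for (String token : tweetText.split("\\s+")) {
            if (token.length() > 1 && token.startsWith("#")) {
                // strip trailing punctuation so "#storm," yields "storm"
                tags.add(token.substring(1).replaceAll("\\W+$", ""));
            }
        }
        return tags;
    }

    /** Serializes one extracted relation as an N-Triples-style line. */
    public static String toTriple(String tweetId, String tag) {
        return "<tweet:" + tweetId + "> <ex:hasHashtag> \"" + tag + "\" .";
    }
}
```

Inside a real Storm Bolt, logic of this kind would be invoked from execute(Tuple) and the resulting lines emitted downstream for joining and storage.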


import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

import com.raidentrance.bolt.TwitterAnalyzerBolt;
import com.raidentrance.spout.TweetStreamSpout;

public class TwitterTopology {

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("twitterSpout", new TweetStreamSpout());
        builder.setBolt("twitterAnalyzerBolt", new TwitterAnalyzerBolt(), 1)
               .shuffleGrouping("twitterSpout");

        Config conf = new Config();
        conf.setDebug(false);

        final LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("twitterTopology", conf, builder.createTopology());
    }
}

Fig. 7. Topology code.

As input, the system processes dynamic data based on continuously arriving atomic events, but it also supports enrichment with static data. Unlike many RSP engines [19], our system does not consider events as a set of independent time-stamped RDF triples, but as a graph of atomic events which cannot be divided. Therefore, the system evaluates the continuous query against all of the events in a given window. This strategy makes it possible in particular to deal with the throughput problems encountered by many RSP engines [37]. Dynamic data flows have generated considerable interest within the semantic web community. The processing of these flows has recently been the subject of RSP systems based mainly on centralized execution. Recognizing the scalability limitations of single-machine systems, efforts have relied on generic flow processing frameworks to distribute queries over a cluster of machines.

Conclusion

Data flow processing is a research field dedicated to finding solutions to efficiently manage large flows of data over very short periods of time. In this paper we presented a new Big Data solution for real-time analysis of RDF data flows based on Storm. The principle consists in combining the data flows with the stored data. For this, the system analyzes tweets from Twitter in real time, simultaneously with the processing of RDF data stored in a triplestore.

References

[1] G. Klyne and J. J. Carroll: Resource Description Framework (RDF): Concepts and abstract syntax. Tech. rep., W3C (2004).
[2] Gerber D., Hellmann S., Bühmann L., Soru T., Usbeck R., Ngonga Ngomo A.-C. (2013) Real-Time RDF Extraction from Unstructured Data Streams. In: Alani H. et al. (eds) The Semantic Web - ISWC 2013. Lecture Notes in Computer Science, vol 8218. Springer, Berlin, Heidelberg.
[3] Querying RDF streams with C-SPARQL.
[4] Cuesta C.E., Martínez-Prieto M.A., Fernández J.D. (2013) Towards an Architecture for Managing Big Semantic Data in Real-Time. In: Drira K. (eds) Software Architecture. ECSA 2013. Lecture Notes in Computer Science, vol 7957. Springer, Berlin, Heidelberg.
[5] Mauri A. et al. (2016) TripleWave: Spreading RDF Streams on the Web. In: Groth P. et al. (eds) The Semantic Web - ISWC 2016. Lecture Notes in Computer Science, vol 9982. Springer, Cham.
[6] Towards Efficient Processing of RDF Data Streams.
[7] Strider: A Hybrid Adaptive Distributed RDF Stream Processing Engine.
[8] WAVES: Big Data Platform for Real-time RDF Stream Processing. Norberto Fernández, Jesús Arias, Luis Sánchez, Damaris Fuentes-Lorenzo, and Óscar Corcho. RDSZ: an approach for lossless RDF stream compression. In European Semantic Web Conference, pages 52-67. Springer, 2014.
[9] Peter Deutsch and Jean-Loup Gailly. Zlib compressed data format specification version 3.3. Technical report, 1996.
[10] Jesús Arias Fisteus, Norberto Fernández García, Luis Sánchez Fernández, and Damaris Fuentes-Lorenzo. Ztreamy: A middleware for publishing semantic streams on the web. Web Semantics: Science, Services and Agents on the World Wide Web, 25:16-23, 2014.
[11] Javier D. Fernández, Alejandro Llaves, and Óscar Corcho. Efficient RDF interchange (ERI) format for RDF data streams. In International Semantic Web Conference, pages 244-259. Springer, 2014.
[12] Apache Hadoop. [Online]. Available: http://hadoop.apache.org/.
[13] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," Commun. ACM, vol. 51(1), pp. 107-113, January 2008.
[14] "Apache Storm," [Online]. Available: http://storm.apache.org/.
[15] "Apache Spark Streaming," [Online]. Available: https://spark.apache.org/streaming/.
[16] N. Marz and J. Warren, Big Data: Principles and best practices of scalable realtime data systems, Manning Publications, 2013.
[17] Yahoo!, "Storm-on-YARN: Convergence of Low-Latency and Big Data," Annual Hadoop Summit, North America, 2013.
[18] Kolchin, Maxim, Peter Wetz, Elmar Kiesling, and A Min Tjoa. "YABench: A comprehensive framework for RDF stream processor correctness and performance assessment." In International Conference on Web Engineering, pp. 280-298. Springer, Cham, 2016.
