Djelloul Bouchiha received his Engineer degree in computer science from Sidi Bel Abbes University, Algeria, in 2002.
Ontologies have greatly contributed to solving the challenges of knowledge modeling, artificial intelligence, clustering, and classification. In the field of web services, ontologies are mainly used to describe the interfaces of web services and to allow automation of the functional processes of this technology (in particular, discovery). Supporting this model with a shared ontology (knowledge base) can optimize the technology's performance. The construction of an ontology by manual methods is a time-consuming and costly task that requires the presence of a domain expert. Approaches for the automatic construction of ontologies are therefore interesting and less expensive solutions in terms of time and financial cost. In this article, we propose to use a hierarchical clustering algorithm to create an ontological network (Service Ontology (SO)) from a corpus of semantic web services. Experiments have been carried out to show the efficiency of the proposed approach.
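The clustering idea behind the Service Ontology can be sketched in a few lines. The following is a minimal single-linkage agglomerative clustering over toy web-service descriptions using Jaccard distance on their tokens; the service names and descriptions are invented for illustration, and the real approach operates on a corpus of semantic web service descriptions.

```python
# Minimal sketch: agglomerative (single-linkage) clustering of toy service
# descriptions, one way to seed a concept hierarchy for a Service Ontology.
# All services and descriptions below are hypothetical examples.

def jaccard_distance(a, b):
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

services = {
    "BookFlight":  "book flight ticket airline travel",
    "RentCar":     "rent car vehicle travel booking",
    "WeatherInfo": "weather forecast temperature city",
}
tokens = {name: desc.split() for name, desc in services.items()}

# Start with singleton clusters; repeatedly merge the closest pair.
clusters = [[name] for name in services]
merges = []
while len(clusters) > 1:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(jaccard_distance(tokens[x], tokens[y])
                    for x in clusters[i] for y in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    d, i, j = best
    merges.append((clusters[i], clusters[j], round(d, 2)))
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

for left, right, dist in merges:
    print(left, "+", right, "at distance", dist)
```

The merge order (travel services first, weather last) mirrors how semantically related services end up under a common concept in the resulting hierarchy.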
Text classification consists in attributing a text (document) to its corresponding class (category). It can be performed using an artificial intelligence technique called machine learning. However, before training the machine learning model that classifies texts, three main steps are mandatory: (1) preprocessing, which cleans the text; (2) feature selection, which chooses the features that significantly represent the text; and (3) feature weighting, which represents the text numerically as a feature vector. In this paper, we propose two algorithms, one for feature selection and one for feature weighting. Unlike most existing works, our algorithms are sense-based: they use an ontology to represent not the syntax but the sense of a text as a feature vector. Experiments show that our approach gives encouraging results compared to existing works, and some additional suggested improvements could strengthen these results further.
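The sense-based idea can be illustrated with a toy word-to-concept map standing in for a real ontology (the words, senses, and mapping below are invented; the actual algorithms use a proper ontology). Two syntactically different documents that use synonyms get similar feature vectors because weighting is done over senses, not surface forms.

```python
# Sketch of sense-based feature weighting: a toy word-to-concept map stands
# in for an ontology. Synonyms map to the same sense, so the feature vector
# captures meaning rather than surface syntax. All entries are hypothetical.

TOY_ONTOLOGY = {
    "car": "Vehicle", "automobile": "Vehicle", "truck": "Vehicle",
    "cat": "Animal", "dog": "Animal",
}

def sense_vector(text, senses=("Vehicle", "Animal")):
    counts = {s: 0 for s in senses}
    for word in text.lower().split():
        concept = TOY_ONTOLOGY.get(word)
        if concept:
            counts[concept] += 1
    total = sum(counts.values()) or 1
    return [counts[s] / total for s in senses]  # normalized sense weights

# Different surface words, same underlying senses:
print(sense_vector("the car hit a truck"))
print(sense_vector("an automobile and a dog"))
```

A syntax-based bag-of-words vector would treat "car" and "automobile" as unrelated features; the sense vector does not.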
In this paper, we describe a new representative attribute derived from the description logics formalism. This attribute, constructed using the subsumption hierarchy, is very meaningful and helps perform a simple form of reasoning based on vector comparison, which yields an efficient description logics system. For experimentation, we use the well-known description logics "human beings" knowledge base. The aim is to prove that the same expansions can be achieved using this simpler reasoning algorithm.
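One way such a vector-comparison attribute can work is to encode each concept's set of subsumers from the taxonomy as a bit vector, so subsumption checking reduces to a bitwise test. The tiny "human beings"-style taxonomy below is an invented illustration, not the actual benchmark knowledge base.

```python
# Sketch: each concept's subsumers (transitive closure over the taxonomy)
# become a bit vector; C is subsumed by D exactly when D's bits all appear
# in C's vector. The taxonomy below is a hypothetical toy example.

PARENTS = {
    "Person": [],
    "Male":   ["Person"],
    "Female": ["Person"],
    "Father": ["Male"],
    "Mother": ["Female"],
}

def subsumers(concept):
    """All concepts above `concept`, itself included."""
    seen = {concept}
    stack = list(PARENTS[concept])
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(PARENTS[c])
    return seen

NAMES = sorted(PARENTS)

def bitvector(concept):
    return sum(1 << NAMES.index(c) for c in subsumers(concept))

def subsumed_by(c, d):
    """C is subsumed by D iff D's subsumer bits are a subset of C's."""
    return bitvector(c) & bitvector(d) == bitvector(d)

print(subsumed_by("Father", "Person"))
print(subsumed_by("Mother", "Male"))
```

Once the vectors are precomputed, each subsumption query costs a single AND and comparison, which is what makes this attribute attractive for a lightweight reasoner.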
Abstract—Web applications are subject to continuous changes and rapid evolution triggered by increasing competition, especially in commercial domains such as electronic commerce. Unfortunately, they are usually implemented without producing any useful ...
The evolution of the traditional Web towards the semantic Web allows the machine to be a first-order citizen on the Web and increases the discoverability of, and accessibility to, unstructured data on the Web. This evolution enables Linked Data technology to be used as a background knowledge base for the unstructured data, notably texts, available on today's Web. For the Arabic language, the current situation is less bright: the Arabic content on the Web does not reflect the importance of this language. Given that Arabic is one of the most important languages on the Web, yet is unfortunately under-resourced, creating linguistic resources for it is now a necessity. We therefore developed a linguistic approach for annotating an Arabic textual corpus with Linked Data, especially DBpedia, the Linked Open Data (LOD) extracted from Wikipedia. This approach uses natural language processing techniques to link Arabic text with Linked Open Data. The evaluation results of this approach are encouraging, despite the high complexity of our domain-independent knowledge base and the limited resources in Arabic natural language processing.
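At its core, annotating a corpus with Linked Data means linking surface mentions to resource URIs. The sketch below uses a tiny local gazetteer of DBpedia URIs as a stand-in for the real knowledge base; a full system would query DBpedia (e.g. over SPARQL) and disambiguate among candidate resources, and the gazetteer entries here are illustrative assumptions.

```python
# Sketch: annotate text by looking mentions up in a small local gazetteer of
# DBpedia URIs. The entries are hypothetical stand-ins for real LOD lookups.

GAZETTEER = {
    "Algiers": "http://dbpedia.org/resource/Algiers",
    "DBpedia": "http://dbpedia.org/resource/DBpedia",
}

def annotate(text):
    """Return (surface_form, URI) pairs for known mentions in `text`."""
    annotations = []
    for token in text.split():
        uri = GAZETTEER.get(token.strip(".,"))
        if uri:
            annotations.append((token, uri))
    return annotations

print(annotate("Algiers is the capital"))
```

For Arabic text the same lookup applies to Arabic surface forms against the Arabic DBpedia chapter, with the added NLP work (segmentation, normalization) the abstract alludes to.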
This chapter proposes the use of ontology alignment to contribute to the interoperability of a business federation based on data interoperability. We propose a system with a linguistic and syntactic matcher, called ABCMap. The ABCMap tool relies on an optimization method based on Artificial Bee Colonies (ABC). Experiments carried out with the implemented tool show good results in terms of Recall and Precision.
Abstract. A Web application is a software system which provides its functionalities through the Web. Understanding, maintaining and re-engineering legacy Web applications requires a reverse-engineering process. In a previous work, an ontology based Web application ...
WiHArD (Wikipedia based Hierarchical Arabic Dataset) is a hierarchical Arabic dataset of 6027 texts extracted from the Wikipedia Web site. WiHArD is structured into three "level 1" classes and nine "level 2" classes:
• "Level 1" classes are Culture (ثقافة), History (تاريخ) and Math (رياضيات). Texts in this level describe general notions related to these domains.
• "Level 2" classes are Clothes (ملابس), Food_drinks (طعام و شراب), Tourism (سياحة), Events (أحداث), Inventions (اختراعات), Monuments (أثار), Algebra (جبر), Analysis (تحليل) and Geometry (هندسة). Texts in this level describe specific notions related to these sub-domains.

see more: https://data.mendeley.com/datasets/kdkryh5rs2
The ultimate aim of Machine Learning (ML) is to make machines act like humans. In particular, ML algorithms have been widely used to classify texts. Text classification is the process of classifying texts into a predefined set of categories based on the texts' content. It contributes to improving information retrieval on the Web. In this paper, we focus on Arabic text classification, since a large community in the world uses this language. The Arabic text classification process consists of three main steps: preprocessing, feature extraction and the ML algorithm. In this paper, a comparative empirical study has been carried out to see which combination (feature extraction - ML algorithm) performs well when dealing with Arabic documents. One hundred sixty classifiers have been implemented by combining 5 feature extraction techniques with 32 machine learning algorithms. We made these classifiers open access for the benefit of the AI and NLP communities. Experiments have been carried out using a huge open dataset. The comparative study reveals that TFIDF-Perceptron is the best-performing combination.
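The winning TFIDF-Perceptron combination can be sketched end to end in pure Python. The four-document English corpus and its labels below are invented toy data (the actual study used Arabic documents and library implementations); the sketch only shows how TF-IDF features feed a classic perceptron.

```python
# Sketch of the TFIDF-Perceptron pipeline on an invented two-class toy corpus.
import math

docs = ["goal match team player", "match team score goal",
        "election vote president law", "president law vote court"]
labels = [1, 1, -1, -1]  # 1 = sports, -1 = politics (hypothetical classes)

vocab = sorted({w for d in docs for w in d.split()})
df = {w: sum(w in d.split() for d in docs) for w in vocab}

def tfidf(doc):
    words = doc.split()
    return [words.count(w) / len(words) * math.log(len(docs) / df[w] + 1)
            for w in vocab]

X = [tfidf(d) for d in docs]

# Classic perceptron: adjust weights on every misclassified example.
w = [0.0] * len(vocab)
b = 0.0
for _ in range(20):
    for x, y in zip(X, labels):
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

def predict(doc):
    return 1 if sum(wi * xi for wi, xi in zip(w, tfidf(doc))) + b > 0 else -1

print(predict("team goal"), predict("vote law"))
```

The same structure applies to Arabic documents once preprocessing (normalization, tokenization) produces the token streams.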
Recently, many methods have appeared to address the problem of alignment evolution under ontology change. Their main challenge is to maintain the consistency of the alignment after applying the change. An alignment is consistent if and only if the ontologies remain consistent even when used in conjunction with the alignment. The objective of this work is to take a step forward by considering alignment evolution according to the conservativity principle under ontology change. In this context, an alignment is conservative if the ontological change does not introduce new semantic relationships between concepts from one of the input ontologies. The authors give methods for conservativity violation detection and repair under ontology change, and they carry out an experiment on a dataset adapted from the Ontology Alignment Evaluation Initiative. The experiment demonstrates the practical applicability of the proposed approach as an add-on component for alignment evolution methods dealing with conservativity violations under ontology change.
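The detection side of conservativity checking can be sketched concretely: merge two taxonomies through an alignment, take the transitive closure, and flag any subsumption between concepts of the same ontology that only holds in the merge. The two toy ontologies and the alignment below are invented for illustration.

```python
# Sketch of conservativity-violation detection over toy taxonomies.
from itertools import product

O1 = {("Dog", "Animal")}     # ontology 1: Dog is-a Animal
O2 = {("Canine", "Pet")}     # ontology 2: Canine is-a Pet
# Hypothetical cross-ontology equivalences: Animal=Canine, Dog=Pet.
alignment = {("Animal", "Canine"), ("Dog", "Pet")}

def closure(edges):
    """Transitive closure of a subsumption relation."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(edges), repeat=2):
            if b == c and (a, d) not in edges:
                edges.add((a, d))
                changed = True
    return edges

# Equivalences contribute subsumption in both directions.
merged = closure(O1 | O2 | alignment | {(b, a) for a, b in alignment})

concepts1 = {c for e in O1 for c in e}
violations = {(a, b) for a, b in merged
              if a in concepts1 and b in concepts1
              and a != b and (a, b) not in closure(O1)}
print(violations)
```

Here the merge entails Animal is-a Dog inside ontology 1, a relationship the original taxonomy never stated, so the alignment is flagged as non-conservative; repair would then weaken or remove offending correspondences.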
The web plays a crucial role in our daily life. Its openness allows users to access data around the clock. Recently, data has become more exploitable by machines thanks to the newly introduced mechanism of linked data, which dramatically improves the quality of data published on the web. We have therefore attempted to benefit from the data that already exist on the web, particularly in web applications, to generate linked data. To achieve this, we suggest a set of transformation rules to extract data from HTML tables and convert them into RDF (Resource Description Framework) triples. Our approach builds on the direct mapping of relational data into RDF triples proposed by the W3C Consortium. The suggested extraction process of RDF triples is automatic; however, it remains manual when it comes to primary and foreign key detection. We have also developed a tool, called HTML2RDF, which implements the extraction process. Results obtained with HTML2RDF were promising; however, their quality remains dependent on the proper determination of primary and foreign keys.
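The table-to-triples idea can be sketched with the standard-library HTML parser: one triple per non-key cell, in the spirit of the W3C direct mapping. The base URI and the choice of the first column as primary key are illustrative assumptions, not what HTML2RDF necessarily does.

```python
# Sketch: parse an HTML table and emit RDF triples (N-Triples style),
# assuming the first column is the primary key and a hypothetical base URI.
from html.parser import HTMLParser

HTML = """<table>
<tr><th>id</th><th>name</th><th>country</th></tr>
<tr><td>1</td><td>Alice</td><td>Algeria</td></tr>
</table>"""

class TableParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.cell = [], None, None
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.cell = ""
    def handle_endtag(self, tag):
        if tag == "tr":
            self.rows.append(self.row)
        elif tag in ("td", "th"):
            self.row.append(self.cell)
            self.cell = None
    def handle_data(self, data):
        if self.cell is not None:
            self.cell += data.strip()

p = TableParser()
p.feed(HTML)
header, *records = p.rows

BASE = "http://example.org/"  # hypothetical base URI
triples = []
for rec in records:
    subject = f"<{BASE}row/{rec[0]}>"  # first column assumed primary key
    for col, value in zip(header[1:], rec[1:]):
        triples.append(f'{subject} <{BASE}{col}> "{value}" .')
print("\n".join(triples))
```

Foreign keys would additionally become object properties linking row URIs, which is exactly the step that still needs manual key detection.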
Ontology authoring is a fundamental task in the Semantic Web. This process enables the domain expert to develop ontologies with the help of dedicated tools. This article presents an approach for building OWL ontologies from relational databases based on Model Driven Engineering (MDE). The proposed approach consists of two phases: (1) a preprocessing phase and (2) a transformation phase. The first creates an input model from the database, which must conform to its meta-model. The second takes this model as input and transforms it into an OWL file by executing a set of mapping rules written in the Atlas Transformation Language (ATL). The transformation process operates at a higher level of abstraction; it matches the source meta-model elements (database) to the target meta-model elements (OWL). We have concretized our approach as the DB2OWLOntology tool and evaluated it with a set of databases. The obtained results are encouraging and show the efficiency of the proposed approach.
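The flavor of such mapping rules can be shown with a toy database-to-OWL transformation (the real approach expresses these rules in ATL over EMF models; the schema, rules, and namespace below are simplified illustrative assumptions).

```python
# Sketch of two common mapping rules: table -> OWL class, column -> datatype
# property. Schema and namespace are hypothetical.

SCHEMA = {  # toy relational schema: table -> columns
    "Student": ["name", "age"],
}

def to_owl(schema, ns="http://example.org/onto#"):
    lines = []
    for table, columns in schema.items():
        # Rule 1: each table becomes an OWL class.
        lines.append(f'<owl:Class rdf:about="{ns}{table}"/>')
        for col in columns:
            # Rule 2: each column becomes a datatype property of that class.
            lines.append(f'<owl:DatatypeProperty rdf:about="{ns}{col}">')
            lines.append(f'  <rdfs:domain rdf:resource="{ns}{table}"/>')
            lines.append('</owl:DatatypeProperty>')
    return "\n".join(lines)

owl = to_owl(SCHEMA)
print(owl)
```

Foreign keys would map to object properties between classes; expressing the rules declaratively (as in ATL) keeps them readable and reusable across source databases.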
Ontology has marked its presence in several research fields in order to address issues in knowledge modeling, artificial intelligence, classification, clustering and, more specifically, knowledge engineering. In the Web Service field, ontology is used principally to describe the Web Service interface for discovering, storing and composing Web services. This is well known as semantic Web services, with languages such as OWL-S, WSMO and WSDL-S. Unfortunately, work in this field remains insufficient so far; supporting this description with a shared ontology (knowledge base) can improve the performance of the Web Service technology. Manual construction of an ontology is a tedious and expensive task which requires domain expert intervention; an automatic ontology construction process would be an interesting and less costly solution in terms of time and money. In this paper, we propose to use clustering algorithms to create an ontological network (Service Ontology (SO)) from a corpus of semantic Web Services. The created SO can be used for the storage and discovery of Web Services in a distributed and intelligent environment.
Web services are the latest attempt to revolutionize large-scale distributed computing. They are based on standards which operate at the syntactic level and lack semantic representation capabilities. Semantics provide better qualitative and scalable solutions in the areas of service interoperation, service discovery, service composition, and process orchestration. WSDL-S defines a mechanism to associate semantic annotations with Web services that are described using the Web Service Description Language (WSDL). In this paper, we propose an approach for semi-automatically annotating WSDL Web service descriptions, enabling WSDL-S Semantic Web Service Engineering. The annotation approach consists of two main processes: categorization and matching. The categorization process classifies a WSDL service description into its corresponding domain. The matching process maps WSDL entities to a pre-existing domain ontology. Both categorization and matching rely on ontology matching techniques. A tool has been developed and experiments have been carried out to evaluate the proposed approach.
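The matching step can be illustrated with plain string similarity: each WSDL entity name is mapped to its most similar ontology concept. `difflib` here is a simple stand-in for the real ontology-matching techniques, and the entity and concept names are invented.

```python
# Sketch of the matching process: map WSDL entity names to ontology concepts
# by string similarity. Names are hypothetical; real matchers also use
# linguistic and structural evidence.
from difflib import SequenceMatcher

wsdl_entities = ["getFlightPrice", "bookHotelRoom"]
ontology_concepts = ["FlightPrice", "HotelBooking", "CarRental"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

mapping = {e: max(ontology_concepts, key=lambda c: similarity(e, c))
           for e in wsdl_entities}
print(mapping)
```

In the full approach this syntactic score would be combined with linguistic (synonym) and structural evidence before committing an annotation.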
Nowadays, the World Wide Web is a wide network of information resources found in documents such as HTML and PHP pages. Most are intended for consumption by humans or machines (i.e., programs). The Web thus grows with the emergence of new technologies, such as Web services, mobile applications and Web applications. These technologies manipulate data in multiple formats, in ways hidden from or unusable by users. In parallel, there is a growing desire to access data on the Web directly. However, the current Web cannot meet this need. So we have, for instance, to switch to the semantic Web by reengineering our classic Web applications into RDF Linked Data. This is justified by the fact that, in the semantic Web, data are represented in RDF format, which makes them directly available to users. In this paper, we propose a model-based approach to transform Web applications into semantic ones. This is done by extracting data from the Web applications, and transforming t...
Alignment revision has been identified as an important problem since the early years of the semantic web project. However, this problem has not been investigated in its proper framework. Encoding ontologies, and hence alignments, as knowledge bases leads us to tackle the problem in the light of base revision theory. For that purpose, we adapt the partial meet contraction framework to design a rational operator for alignment contraction and to formulate the set of postulates that characterize it. We derive from it another operator to deal with alignment inconsistency, and give the set of postulates that characterize this class of operators. We compare our framework with related works and outline some directions for future investigation.
1 Dept. Mathematics and Computer Science, EEDIS Lab., Inst. Sciences and Technologies, Ctr Univ Naama, UDL-SBA, Naama 45000, Algeria
2 Department of Mechanical Engineering, University of Medea, Faculty of Technology, Medea 26000, Algeria
3 LERM Renewable Energy and Materials Laboratory, University of Medea, Medea 26000, Algeria
4 Faculty of Exact Sciences and Informatics, Ziane Achour University of Djelfa, P.O. Box 3117, Djelfa, Algeria
5 Faculty of Sciences and Technology, Ziane Achour University of Djelfa, P.O. Box 3117, Djelfa, Algeria
6 Department of Engineering and Architecture, University of Parma, Parco Area delle Scienze, 181/A, Parma 43124, Italy
7 Department of Basic Science, University of Engineering and Technology, Peshawar 25000, Pakistan
8 Unit of Research on Materials and Renewable Energies, Faculty of Sciences, Department of Physics, Abou Bakr Belkaid University, P.O. Box 119, Tlemcen 13000, Algeria
Web services are an emerging paradigm which aims at implementing software components on the Web. They are based on syntactic standards, notably WSDL. Semantic annotation of Web services provides better qualitative and scalable solutions in the areas of service interoperation, service discovery, service composition and process orchestration. Manual annotation is a time-consuming process which requires deep domain knowledge and consistency of interpretation within annotation teams. Therefore, we propose an approach for semi-automatically annotating WSDL Web service descriptions. This is enabled by Semantic Web Service Engineering. The annotation approach consists of two main processes: categorization and matching. The categorization process classifies a WSDL service description into its corresponding domain. The matching process maps WSDL entities to a pre-existing domain ontology. Both categorization and matching rely on ontology matching techniques. A tool has been de...
This chapter presents the concepts, techniques, and analyses related to the problem of reengineering existing (legacy) systems towards new technologies. Reengineering is a sub-problem of software engineering. It is the study and analysis of an existing system for purposes of understanding, maintenance, or migration towards new technologies that arise from day to day, without rewriting the software from scratch; this saves time and money in the software development process. The author's objective is not to create new terms, but to introduce the terms already in use from new perspectives. So in this chapter, definitions and techniques are introduced, taxonomies and models are proposed, relevant questions are answered, and some specialized conferences and journals are listed and compared; all this to highlight the way for authors interested in writing research papers or surveys in the software reengineering field.
The ontology alignment process aims at generating a set of correspondences between the entities of two ontologies. It is an important task, notably in semantic web research, because it allows the joint consideration of resources defined in different ontologies. In this article, the authors develop an ontology alignment system called ABCMap+. It uses an optimization method based on artificial bee colonies (ABC) to aggregate the similarity measures of three different matchers (syntactic, linguistic and structural) into a single similarity measure. To evaluate the ABCMap+ ontology alignment system, the authors considered the OAEI 2012 alignment system evaluation campaign. Experiments have been carried out to obtain the best ABCMap+ alignment. A comparative study then showed that the ABCMap+ system outperforms the OAEI 2012 participants in terms of Recall and Precision.
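The aggregation problem ABCMap+ optimizes can be sketched directly: find weights combining three matcher scores so that the thresholded alignment best matches a reference. A plain random search stands in for the artificial bee colony below, and the matcher scores and reference alignment are invented toy data.

```python
# Sketch: optimize weights aggregating (syntactic, linguistic, structural)
# similarities into one score, maximizing F-measure against a reference.
# Random search is a simplified stand-in for ABC; all data is hypothetical.
import random

scores = {("Car", "Auto"):   (0.2, 0.9, 0.6),
          ("Car", "Person"): (0.3, 0.1, 0.2),
          ("Dog", "Canine"): (0.1, 0.8, 0.7)}
reference = {("Car", "Auto"), ("Dog", "Canine")}

def f_measure(weights, threshold=0.5):
    found = {p for p, s in scores.items()
             if sum(w * x for w, x in zip(weights, s)) >= threshold}
    if not found:
        return 0.0
    precision = len(found & reference) / len(found)
    recall = len(found & reference) / len(reference)
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

random.seed(0)
best_w, best_f = None, -1.0
for _ in range(500):                       # simplified "bee" exploration
    w = [random.random() for _ in range(3)]
    total = sum(w)
    w = [x / total for x in w]             # weights sum to 1
    f = f_measure(w)
    if f > best_f:
        best_w, best_f = w, f
print(best_w, round(best_f, 2))
```

The search quickly learns to down-weight the syntactic matcher on this toy data; ABC replaces the blind sampling with employed/onlooker/scout bees exploring around good weight vectors.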
One of the fundamental problems in the development of the semantic web is what is known as ontology authoring. This process allows the domain expert to create ontologies and their instances, using dedicated tools, from relational databases and/or web applications. This article presents an approach for building OWL ontologies and RDF instances from web applications. The proposed approach starts with a reverse engineering process that recovers the original design from the web application source code using program understanding techniques. Then, a forward engineering process creates an OWL ontology from the recovered diagrams, based on a set of mapping rules. The proposed approach is concretized in the PHP2OWLGen tool and evaluated with a set of web applications. The obtained results were encouraging and showed the efficiency of the proposed approach.
Peer-reviewed international journal article. OntoWer: An Ontology based Web Application Reverse-Engineering approach. Sidi Mohamed Benslimane, Mimoun Malki, Djelloul Bouchiha, Djamal Benslimane.
Peer-reviewed international journal article. Ontology Based Web Application Reverse-Engineering Approach. Djelloul Bouchiha, Mimoun Malki, Sidi Mohamed Benslimane. 3/2007.
This article proposes the use of ontology alignment to improve the interoperability of a business federation based on data interoperability. We propose two ontology alignment systems: a system with a linguistic and syntactic matcher (called ABCMap), and another with linguistic, syntactic and structural matchers (called ABCMap+). Both systems rely on an optimization method based on artificial bee colonies (ABC). The analysis of the experimental results shows that adding a structural matcher gives better results in terms of precision.
Ontology is an important aspect of the semantic Web, which is why semantic Web developers are interested in constructing ontologies for various applications with the help of domain experts. By transforming an existing application database into an ontology, we may construct ontologies without having to hire a domain expert. The suggested strategy is founded on model-driven engineering (MDE). In a nutshell, the technique is divided into two phases. The first prepares the data needed for the transformation, in the form of a model of the database; a conformance relationship between this model and its meta-model is required. The second applies a set of rules written in the Atlas Transformation Language (ATL) to transform the model produced in the first phase into another model, which is an OWL ontology. We built our solution in an Eclipse environment using EMF and the ATL transformation language, and tested it on a set of databases created specifically for this purpose. The results demonstrate the strength and efficacy of the recommended strategy.
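The transformation rules themselves are written in ATL in the actual approach; the following Python sketch only illustrates the kind of mapping such rules typically encode (tables to OWL classes, columns to datatype properties, foreign keys to object properties). The schema, namespace, and naming scheme are all invented examples:

```python
# Sketch of a table-to-ontology mapping; everything here is a toy illustration,
# not the paper's ATL rules.

schema = {
    "Student": {"columns": {"id": "integer", "name": "string"},
                "foreign_keys": {}},
    "Enrollment": {"columns": {"grade": "float"},
                   "foreign_keys": {"student_id": "Student"}},
}

def to_turtle(schema, ns="http://example.org/onto#"):
    lines = [
        f"@prefix : <{ns}> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .",
    ]
    for table, spec in schema.items():
        lines.append(f":{table} a owl:Class .")  # rule 1: table -> class
        for col, typ in spec["columns"].items():  # rule 2: column -> datatype property
            lines.append(f":{table}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range xsd:{typ} .")
        for fk, target in spec["foreign_keys"].items():  # rule 3: FK -> object property
            lines.append(f":{table}_{fk} a owl:ObjectProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range :{target} .")
    return "\n".join(lines)

print(to_turtle(schema))
```

In the MDE pipeline these rules operate on EMF models conforming to database and OWL meta-models rather than on strings, but the correspondence they establish is the same.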
The evolution of the traditional Web into the semantic Web makes the machine a first-class citizen on the Web and increases the discoverability and accessibility of unstructured Web-based data. This development makes it possible to use Linked Data technology as the background knowledge base for unstructured data, especially texts, now available in massive quantities on the Web. Given any text, the main challenge is determining DBpedia's most relevant information with minimal effort and time. However, DBpedia annotation tools, such as DBpedia Spotlight, have mainly targeted the English and Latin-script DBpedia versions. The current situation of the Arabic language is less bright; the Web content in Arabic does not reflect the importance of this language. Thus, we have developed an approach to annotate Arabic texts with Linked Open Data, particularly DBpedia. This approach uses natural language processing and machine learning techniques for interlinking Arabic text with Linked Open Data. ...
Software reengineering is an important area of software engineering. The quest to maintain and understand operational legacy systems has always been a challenge for software practitioners. This chapter presents a compilation of notions and techniques covering the major areas of reverse engineering, program understanding, software maintenance, migration, and evolution. Our objective is not to create new terms, but to introduce the terms already in use from different perspectives.
The ontology alignment process aims at generating a set of correspondences between the entities of two ontologies. In this paper, we describe two ontology alignment systems: a system with a linguistic and syntactic matcher (called ABCMap), and another with linguistic, syntactic, and structural matchers (called ABCMap+). Both systems rely on an optimization method based on artificial bee colonies (ABC). The analysis of the experimental results shows that adding a structural matcher gives better results in terms of precision.
The evolution of the traditional Web towards the semantic Web allows the machine to be a first-class citizen on the Web and increases the discoverability of, and accessibility to, the unstructured data on the Web. This evolution enables Linked Data technology to be used as a background knowledge base for the unstructured data, notably texts, available nowadays on the Web. For the Arabic language, the current situation is less bright; the Arabic-language content on the Web does not reflect the importance of this language. Given that Arabic is one of the most important languages on the Web, yet unfortunately under-resourced, creating linguistic resources for it is now a necessity. Thus, we developed a linguistic approach for annotating an Arabic textual corpus with Linked Data, especially DBpedia, which is Linked Open Data (LOD) extracted from Wikipedia. This approach uses natural language techniques to annotate Arabic text with Linked Open Data. The evaluation results of this approach are encouraging, despite the high complexity of our domain-independent knowledge base and the limited resources in Arabic natural language processing.
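A minimal illustration of the annotation step is a dictionary (gazetteer) lookup of surface forms against DBpedia resources; the real approach layers natural language processing on top of this. The gazetteer entries and URIs below are invented examples, not the actual knowledge base:

```python
# Dictionary-based annotation sketch: match known surface forms in the text
# against DBpedia resources. The gazetteer here is an invented toy example.

gazetteer = {
    "الجزائر": "http://ar.dbpedia.org/resource/الجزائر",
    "الويب الدلالي": "http://ar.dbpedia.org/resource/الويب_الدلالي",
}

def annotate(text):
    """Return (surface form, DBpedia URI) pairs whose surface form occurs in the text."""
    return [(surface, uri) for surface, uri in gazetteer.items() if surface in text]

text = "تقع الجزائر في شمال أفريقيا"
for surface, uri in annotate(text):
    print(surface, "->", uri)
```

A production annotator must also handle Arabic morphology (clitics, diacritics) and disambiguate among candidate resources, which is where the NLP techniques of the approach come in.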


This book is intended for first-year students in the Mathematics and Computer Science Bachelor's program, as well as anyone seeking a solid foundation in algorithms and data structures. The algorithms presented in this book are translated into the C programming language.
Offering a progressive learning approach, this book draws extensively from our experience teaching Algorithms and Data Structures over several years. At the end of each chapter, there is a set of solved exercises. After reading and understanding the material, students are encouraged to attempt solving the exercises on their own before checking the solutions. Students need to remember that the same problem can be solved using different algorithms.
Practical work, in this book, is an integral part of programming education, providing students with the hands-on experience and skills needed to succeed in the field. It bridges the gap between theory and application, preparing students for the dynamic and evolving world of software development.

The author of this book would greatly appreciate any feedback or suggestions.
This book is intended for first-year students in the Mathematics and Computer Science Bachelor's program, and for anyone wishing to acquire a solid foundation in algorithms and data structures. The algorithms in this book are translated into the C language.
Offering a progressive learning approach, this book draws extensively on our experience teaching the course "Algorithmique et structures de données" over several years. At the end of each chapter there is a set of solved exercises. After reading and understanding the course material, students are advised to try solving the exercises on their own before consulting the solutions. Students should not forget that the same problem can be solved by different algorithms.
The author of this book would be very grateful to receive any remarks or suggestions.
This book is intended for anyone who wants to acquire a solid foundation in algorithms and data structures. The algorithms in this book have been translated into the Pascal language.
This book allows for self-paced learning. The exercises in each chapter increase progressively in difficulty. After reading and understanding the course material, students are advised to try solving the exercises on their own before consulting the solutions. Students should not forget that the same problem can be solved by different algorithms.
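The closing remark, that one problem admits several algorithms, can be shown with searching in a sorted array. The book's own code is in Pascal; this Python sketch is purely illustrative:

```python
# The same problem -- finding a value in a sorted array -- solved by two
# different algorithms with different costs.

def linear_search(arr, target):
    """O(n): examine each element in turn."""
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

def binary_search(arr, target):
    """O(log n): halve the search interval at each step (arr must be sorted)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13]
print(linear_search(data, 7), binary_search(data, 7))  # 3 3
```

Both return the same answer, but on a sorted array of a million elements the binary search needs about twenty comparisons where the linear one may need a million.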
The author of this book would be very grateful to receive any remarks, suggestions, or corrections.
The course Programmation Orientée Objet en Java (Object-Oriented Programming in Java) is the fruit of years of teaching. It is a simple course with solved exercises. Indeed, our objective was to simplify object-oriented notions by relating them to the real world. This course is dedicated to second-year Computer Science Bachelor's students, option Systèmes Informatiques (SI), semester 3, in the subject Object-Oriented Programming.
This book serves as course material for various courses on algorithms and programming in Pascal given to university students with a background in mathematics, notably those in the "Sciences and Technologies" and "Exact Sciences" tracks. It is a first volume introducing the basic notions of algorithmics and static data structures, and providing an initiation to programming in Pascal. It therefore consists of simple lessons with solved exercises. The next volume will be devoted to dynamic data structures, also known as recursive structures, and to advanced notions of algorithmics.
The objective of this course is to teach students how to solve a problem with a program: starting with the analysis of the problem, determining the most efficient method to solve it, expressing this method in algorithmic language, and finally translating the algorithm into Pascal.
This book is addressed to university students who have already acquired basic notions of algorithmics and programming in Pascal. It is the continuation of a previous volume entitled "Initiation à l'algorithmique et à la programmation en Pascal".
In the previous volume, we introduced the basic notions of algorithmics and static data structures, and provided an initiation to programming in Pascal. We described the method for solving a problem, from analysis to writing the program. We also showed how to write a sequential algorithm with basic operations and simple data structures (types); an important section at this level was devoted to presenting the Pascal language. We then moved on to control structures, notably conditional structures (simple, compound, and multiple-choice) and loops (while, repeat, and for), before returning to structured types, namely arrays and character strings. Subprograms (functions and procedures), which aim to organize and reduce a program, were also introduced. The last two chapters were devoted, respectively, to user-defined types (showing the various possibilities the Pascal language offers users for defining their own types) and to the file type, which allows permanent storage of data.
In this volume, entitled "Notions avancées sur l'algorithmique et la programmation en Pascal", we begin with sorting algorithms and algorithmic complexity. We then present dynamic data structures, notably linked lists, trees, and graphs. The course is enriched with solved exercises.
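As a taste of the dynamic structures the volume covers, here is a minimal singly linked list. The book's code is in Pascal; this Python sketch is only illustrative:

```python
# Minimal singly linked list: each node holds a value and a reference to the
# next node, so insertion at the front costs O(1) regardless of length.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def prepend(head, value):
    """Insert a value at the front of the list and return the new head."""
    return Node(value, head)

def to_list(head):
    """Walk the chain and collect the values, for inspection."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in (3, 2, 1):
    head = prepend(head, v)
print(to_list(head))  # [1, 2, 3]
```

In Pascal the same structure is built with a record type containing a pointer to itself, allocated with `new`, which is exactly the kind of recursive type the volume develops.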
Re-engineering Web Applications into Semantic Web Services