Adrian Groza
The Ontology Building Competition (BOC) is a competition for developing ontologies, where the evaluation and the final ranking are done automatically along five dimensions: structural, semantic, design patterns, worst practices, and the ability to answer a set of competency questions. The rules of the competition are described directly in a formal language. The reference ontology for the competition was developed in RacerPro, and the RacerPro LISP API was used to define the evaluation metrics and to compute the ranking. For checking domain coverage, the ontologies should i) provide answers to some predefined competency questions (CQs) and ii) cover specific pre-defined terms. CQs come from two sources: 1) questions pre-defined by the organizers, and 2) questions proposed directly by the competitors. The first set of CQs aims to ensure convergence of the ontologies developed by the participants: all the ontologies cover a common kernel of concepts and relations, albeit in different modelling approaches, and this common kernel grows with the size of the CQ set. With the second set of CQs, the competitors have an incentive to formulate questions that their own ontology can easily answer but which may cause problems for the other competitors. The participants had to map their ontology concepts to the terms appearing in the competency questions. BOC 2013 was the first edition of the competition, with 25 participating teams. This paper presents the formal specification, the evaluation metrics used, and an analysis of the results of BOC 2013. Both the evaluation framework and the participating ontologies are publicly available on the competition page.
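As a rough illustration of how a final ranking can be assembled from the five evaluation dimensions named above, the sketch below aggregates per-dimension scores with a weighted sum. The actual metrics were defined via the RacerPro LISP API and are not reproduced here; the weights, score values, and team names are all assumptions.

```python
# Illustrative sketch (not the competition's RacerPro LISP code): ranking
# ontologies from per-dimension scores. Dimension names follow the abstract;
# the weights and score values are hypothetical.
DIMENSIONS = ["structural", "semantic", "design_patterns",
              "worst_practices", "competency_questions"]

def rank(ontologies: dict, weights: dict) -> list:
    """Return (ontology, total score) pairs, best first."""
    totals = {
        name: sum(weights[d] * scores[d] for d in DIMENSIONS)
        for name, scores in ontologies.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Example with made-up scores for two teams and uniform weights:
scores = {
    "team_a": {"structural": 0.8, "semantic": 0.7, "design_patterns": 0.9,
               "worst_practices": 0.6, "competency_questions": 0.75},
    "team_b": {"structural": 0.6, "semantic": 0.9, "design_patterns": 0.7,
               "worst_practices": 0.8, "competency_questions": 0.80},
}
print(rank(scores, {d: 1.0 for d in DIMENSIONS}))
```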
While climate experts agree that global warming is real, this consensus has not reached all levels of society. Our aim is to develop a conversational agent able to explain issues related to global warming. The developed chatbot relies on textual entailment to identify the best answer for a statement conveyed by a human agent. To enhance the conversational capabilities, we employed the technical instrumentation provided by the API.AI framework. To exploit domain knowledge, the agent uses climate change ontologies converted into a format adequate for the API.AI model. Hence, we developed Climebot, an argumentative agent for climate change based on ontologies and textual entailment.
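A minimal sketch of the answer-selection step described above, assuming a stand-in entailment scorer `entails(premise, hypothesis)` that returns a probability; the actual entailment engine, its integration with API.AI, and the ontology-derived candidate answers are not reproduced here.

```python
# Hypothetical answer selection via textual entailment: the candidate that
# most strongly entails the user's statement wins. `entails` is assumed to
# return a value in [0, 1]; the real system used its own entailment engine.
from typing import Callable

def best_answer(statement: str,
                candidates: list,
                entails: Callable[[str, str], float]) -> str:
    """Pick the candidate answer with the highest entailment score
    against the user statement."""
    return max(candidates, key=lambda c: entails(c, statement))
```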
There is an increasing demand for explainable models in various applications. We interleave here explanations generated from both black-box and white-box models. To achieve this, we employ (i) a local model-agnostic technique (LIME), (ii) a game-theoretic approach (SHAP), (iii) an example-based explanation technique (ExMatchina), and (iv) a model-specific method (KTrain). The running scenario is to identify persons from images and to explain the algorithmic decision. First, we extract features from images in order to train a white-box model and deploy explanation methods that work with tabular data. Second, we take advantage of transfer learning and fine-tuning to train black-box models and deploy image-based explanation methods. We aim to increase the interpretability of the system by providing explanations. We use the results to determine whether decisions are valid and whether explanations are viable. Although many explanation techniques have been developed, there are no appropriate performance metrics to evaluate them; we therefore propose two metrics to evaluate explanation quality. Based on these metrics, we demonstrate that LIME is unstable, as it generates different explanations for the same instance across multiple runs. Advantages and disadvantages of the explanation techniques are also discussed, and a user-grounded evaluation is performed. The evaluation study reveals that several explanation techniques were preferred by participants, followed by feature listing, example-based explanations, and pixel-based explanations.
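To make the instability observation concrete, the sketch below runs LIME twice on the same tabular instance and measures the overlap of the top-k features. The Jaccard overlap is one plausible instantiation of a stability metric, not necessarily one of the two metrics proposed in the paper.

```python
# Sketch of a LIME stability check: two independent runs on the same
# instance, compared by the Jaccard overlap of their top-k features.
# Construct the explainer from training data, e.g.:
#   explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
#                                    class_names=class_names)
from lime.lime_tabular import LimeTabularExplainer

def top_k_features(explainer, instance, predict_fn, k=5):
    exp = explainer.explain_instance(instance, predict_fn, num_features=k)
    return {feat for feat, _ in exp.as_list()}

def stability(explainer, instance, predict_fn, k=5):
    """Jaccard overlap of top-k features across two independent runs;
    1.0 means identical explanations, lower values indicate instability."""
    a = top_k_features(explainer, instance, predict_fn, k)
    b = top_k_features(explainer, instance, predict_fn, k)
    return len(a & b) / len(a | b)
```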
The role of games in computing science has been given some consideration in the literature. We aim to design games that can be used for teaching logic. Specifically, we build on the Minefield game used in the Stanford Intro to Logic course and present a version, which we call MineFOL, that incorporates time constraints. We also move beyond the Stanford examples in that we provide a formalisation of the game, with a view to eventually generating game instances automatically as well as from user input, and we show how MineFOL can be used as a learning tool. We also developed an online platform supporting learners in at least three ways. First, learners can practice reasoning in First Order Logic (FOL) and proving strategies such as resolution through various MineFOL games. Second, learners can practice formalisation in FOL by building their own games. Third, learners become aware of the need to interleave various technologies to solve a logic-based task. In this line, MineFOL is formalised in such a way as to allow augmenting reasoning in FOL with search strategies.
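As an example of the kind of proof step learners practice on such a platform, here is the resolution rule in propositional miniature; MineFOL itself works in full FOL with unification and search strategies, which this sketch deliberately omits.

```python
# Propositional resolution in miniature. Clauses are frozensets of literals;
# a literal is a string, with negation marked by a leading '~'. The atom
# names below are hypothetical, chosen to evoke a Minefield-style game.
def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1: frozenset, c2: frozenset) -> list:
    """All resolvents of two clauses: for each complementary pair of
    literals, drop the pair and union the remainders."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

# Example: from {mine_a, ~safe_b} and {safe_b} derive {mine_a}.
print(resolve(frozenset({"mine_a", "~safe_b"}), frozenset({"safe_b"})))
```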
The era of artificial intelligence (AI) has revolutionized our daily lives, and AI has become a powerful force that is gradually transforming the field of medicine. Ophthalmology sits at the forefront of this transformation thanks to the effortless acquisition of an abundance of imaging modalities. There has been tremendous work in the field of AI for retinal diseases, with age-related macular degeneration (AMD) being among the most studied conditions. The purpose of the current systematic review was to identify and evaluate, in terms of strengths and limitations, the articles that apply AI to optical coherence tomography (OCT) images in order to predict the future evolution of AMD during its natural history and after treatment, in terms of OCT morphological structure and visual function. After a thorough search through seven databases up to 1 January 2022 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines...
Residency training in medicine lays the foundation for future medical doctors. In real-world settings, training centers face challenges in trying to create balanced residency programs, with cases encountered by residents not always being fairly distributed among them. In recent years, there has been a tremendous advancement in developing artificial intelligence (AI)-based algorithms with human expert guidance for medical imaging segmentation, classification, and prediction. In this paper, we turned our attention from training machines to letting them train us and developed an AI framework for personalised case-based ophthalmology residency training. The framework is built on two components: (1) a deep learning (DL) model and (2) an expert-system-powered case allocation algorithm. The DL model is trained on publicly available datasets by means of contrastive learning and can classify retinal diseases from color fundus photographs (CFPs). Patients visiting the retina clinic will have ...
We focus here on designing agents for games with incomplete information, such as the Stratego game. We develop two playing agents that use probabilities and multiple-ply forward reasoning. We also propose various evaluation functions for a given position, and we analyse the importance of the starting configuration.
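A sketch of probability-weighted multiple-ply forward reasoning of the kind described above, in the style of expectiminimax; the move generator, the belief model over hidden pieces, and the evaluation function are placeholders rather than the paper's actual components.

```python
# Expectiminimax-style search for incomplete-information play: each move's
# value is the expectation over the possible identities of hidden pieces.
# `moves`, `outcomes`, and `evaluate` are caller-supplied placeholders.
def expecti_search(state, depth, maximizing, moves, outcomes, evaluate):
    """moves(state, maximizing) -> legal moves;
    outcomes(state, move) -> [(probability, next_state)] reflecting the
    unknown identity of opponent pieces; evaluate(state) -> float."""
    if depth == 0:
        return evaluate(state)
    values = []
    for move in moves(state, maximizing):
        # Expected value over the hidden-piece hypotheses for this move.
        values.append(sum(p * expecti_search(s, depth - 1, not maximizing,
                                             moves, outcomes, evaluate)
                          for p, s in outcomes(state, move)))
    if not values:           # no legal moves: fall back to static evaluation
        return evaluate(state)
    return max(values) if maximizing else min(values)
```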
We consider the problem of finding the minimal sequence of questions needed to identify an unknown element from a set of cardinality M. This task is commonly met in games such as Guess Who, 20 Questions, or Akinator. Our scenario is to identify a person based on features extracted from an image. The assumption is that the user thinks of a person described in DBpedia. We do not store previous expert knowledge or user profiles. The sequence of questions is built based on heuristics that favour the most relevant features: information gain, gain ratio, and probabilistic entropy. As the features are automatically extracted from images, the data is noisy. We test the performance of the method using simulated dialogues between a software agent and a human agent.
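The greedy core of such a heuristic can be sketched as follows: ask the question whose yes/no split over the remaining candidates has maximal entropy, i.e. the most balanced split; since every candidate is distinct, this coincides with maximal information gain. The noiseless binary features assumed here simplify the paper's noisy, image-extracted setting.

```python
# Greedy question selection by information gain over the candidate set.
# Candidates are dicts of boolean features; feature names are hypothetical.
from math import log2

def entropy(n_yes: int, n_no: int) -> float:
    total = n_yes + n_no
    h = 0.0
    for n in (n_yes, n_no):
        if n:
            p = n / total
            h -= p * log2(p)
    return h

def best_question(candidates: list, features: list) -> str:
    """Pick the feature whose yes/no split is most balanced, i.e. whose
    answer is expected to eliminate the most candidates."""
    def gain(f):
        yes = sum(1 for c in candidates if c[f])
        return entropy(yes, len(candidates) - yes)
    return max(features, key=gain)

people = [{"glasses": True, "beard": False},
          {"glasses": True, "beard": True},
          {"glasses": False, "beard": True}]
print(best_question(people, ["glasses", "beard"]))
```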
Debate sites in social media provide a unified platform for citizens to discuss controversial questions and to put forward their ideas and arguments on issues of common interest. Opinions of citizens may provide useful knowledge to stakeholders, but manual analysis of arguments on debate sites is tedious, while computational support to this end has been rather scarce. We focus here on developing technical instrumentation for making sense of a set of online arguments and aggregating them into usable results for policy making and climate science communication. Our objectives are: (i) to aggregate arguments posted for a certain debate topic, (ii) to consolidate opinions posted under several related topics, either in the same or in different debate sites, and (iii) to identify possible linguistic characteristics of the argumentative texts. For the first objective, we propose a voting method based on subjective logic [13]. For the second objective, we assess the semantic similarity b...
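For context, the cumulative-fusion operator of subjective logic is a plausible building block for such a voting method; the paper's exact aggregation rule is not reproduced here. An opinion is a triple (belief, disbelief, uncertainty) whose components sum to 1.

```python
# Standard cumulative fusion of two binomial subjective-logic opinions.
# Shown as background for the voting method the abstract mentions, not as
# the paper's actual aggregation rule.
def cumulative_fusion(a, b):
    b_a, d_a, u_a = a
    b_b, d_b, u_b = b
    k = u_a + u_b - u_a * u_b
    if k == 0:  # both opinions dogmatic (u = 0): simple average here
        return ((b_a + b_b) / 2, (d_a + d_b) / 2, 0.0)
    return ((b_a * u_b + b_b * u_a) / k,
            (d_a * u_b + d_b * u_a) / k,
            (u_a * u_b) / k)

# Example: fuse a confident pro opinion with a more uncertain one.
print(cumulative_fusion((0.7, 0.1, 0.2), (0.3, 0.1, 0.6)))
```

The fused components again sum to 1, and the fused uncertainty shrinks as evidence accumulates, which is what makes the operator attractive for aggregating many individually weak opinions.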
In this paper we discuss the numerical solution of one of the well-known problems related to production systems, namely the facility location problem, which is reformulated in terms of non-convex minimization problems with quadratic constraints. After a relaxation of this problem, a model based on second-order cone programming is obtained.
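As an illustration of the reformulation pattern (not the paper's exact model), the classical Fermat–Weber facility location objective, placing a facility $x$ to minimize weighted distances to sites $a_i$, becomes a second-order cone program once each norm is pushed into an epigraph constraint:

```latex
% Illustrative epigraph reformulation of the Fermat--Weber problem:
% each norm term is bounded by a fresh variable t_i, turning the
% objective linear and every constraint into a second-order cone.
\min_{x,\, t} \; \sum_{i=1}^{m} w_i t_i
\quad \text{s.t.} \quad \lVert x - a_i \rVert_2 \le t_i,
\qquad i = 1, \dots, m
```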
The leading diagnostic tool in modern ophthalmology, Optical Coherence Tomography (OCT), is not yet able to establish the evolution of retinal diseases. Our task is to forecast the progression of retinal diseases by means of machine learning technologies. The aim is to help the ophthalmologist determine when early treatment is needed in order to prevent severe vision impairment or even blindness. The acquired data consist of sequences of visits from multiple patients with age-related macular degeneration (AMD), which, if not treated at the appropriate time, may result in irreversible blindness. The dataset contains 94 patients with AMD, with 161 included eyes that have more than one medical examination. We used various techniques from machine learning (linear regression, gradient boosting, random forest and extremely randomised trees, bidirectional recurrent neural networks, LSTM networks, GRU networks) to handle technical challenges such as how to learn from small-sized time...
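A minimal sketch of one of the listed techniques (extremely randomised trees) applied to lagged per-visit features; the measurement values, the lag length, and the hyper-parameters below are assumptions for illustration, not the paper's.

```python
# Forecasting the next visit from the previous n_lags visits for one eye,
# using extremely randomised trees. The visit values are made up.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def make_lagged(series: np.ndarray, n_lags: int):
    """Turn one eye's measurement sequence into
    (previous n_lags visits -> next visit) training pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

visits = np.array([0.8, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45])  # hypothetical
X, y = make_lagged(visits, n_lags=3)
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(visits[-3:].reshape(1, -1)))  # forecast the next visit
```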

And 95 more