Nuno Fachada
Universidade Lusófona, COPELABS, Faculty Member
- Researcher
Modeling and simulation (M&S) serve as essential tools in various scientific and engineering domains, enabling the representation of complex systems and processes without the constraints of physical experimentation. These tools have evolved significantly with the integration of artificial intelligence (AI), which offers advanced capabilities in essential aspects of M&S such as optimization, data analysis, and verification and validation. AI’s capacity to enhance M&S is demonstrated in applications ranging from engineering and physics to social sciences and biology, providing novel approaches to problem-solving and system understanding.
The majority of currently available commercial Unmanned Aerial Vehicles (UAVs) depend mainly on satellite signals to estimate their position. Although this is not a problem in outdoor environments, in scenarios where these signals are obstructed, the UAV may be unable to estimate its position, rendering navigation infeasible. Thus, in satellite-less environments, alternative methods are required. The AutoNAV package allows the simulation of UAVs in environments with limited-to-no satellite signal (such as indoor environments), taking advantage of terrestrial radio measurements between stationary reference points (anchors) and a UAV to estimate the UAV's position. The package provides Python implementations of two algorithms for this purpose: Generalized Trust Region Sub-Problem (GTRS) and Weighted Least Squares (WLS). To provide a user-friendly experience and a straightforward comparison process for other researchers, the package also includes functions that extract the most relevant metrics and plot the ground truth trajectory alongside the positions estimated by the algorithms. The provided examples design and depict a scenario of UAV navigation inside a warehouse, e.g., for stock inventory counting. Moreover, the package is designed with modularity in mind, enabling researchers to easily implement, compare, and integrate their own work, fostering a centralized and straightforward approach for analyzing the performance of existing methods.
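The WLS approach mentioned above can be illustrated with a short, self-contained sketch: given range measurements from known anchors, the quadratic range equations are linearized against a reference anchor and solved as a weighted least-squares system. This is a generic illustration under assumed names (`wls_position` is hypothetical), not the AutoNAV API.

```python
import numpy as np

def wls_position(anchors, distances, weights=None):
    """Estimate a 2D/3D position from anchor-to-target range measurements
    by linearizing the range equations against the first anchor and
    solving a (weighted) least-squares system. Illustrative sketch only,
    not the AutoNAV package's actual API."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first range equation from the others removes the
    # quadratic unknown ||x||^2, leaving a linear system in x.
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    if weights is None:
        weights = np.ones(len(b))
    sw = np.sqrt(weights)[:, None]
    # Weighted least-squares solution of A x = b.
    x, *_ = np.linalg.lstsq(sw * A, sw.ravel() * b, rcond=None)
    return x
```

With exact (noise-free) distances the estimate recovers the true position; with noisy measurements, the weights let more reliable anchors dominate the fit.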
Collective organization in multi-party conversations emerges through the exchange of utterances between participants. While most research has focused on content-centred mechanisms that lead to emergent conversational coordination, less attention has been given to explaining conversational order based on who is addressed and who responds, especially when dealing with large conversational datasets. In this paper, we introduce a Python library, ParShift, which implements a state-of-the-art quantitative theoretical framework known as Participation Shifts. This framework enables researchers to study participant-centred order and differentiation in multi-party conversations. With ParShift, researchers can characterize conversations by quantifying the probabilities of events related to how people address each other during conversations. This library is particularly useful for studying, on a large scale, conversation threads in social networks, parliamentary debates, team meetings, or student debates.
Synthetic data is essential for assessing clustering techniques, complementing and extending real data, and allowing for more complete coverage of a given problem's space. In turn, synthetic data generators have the potential of creating vast amounts of data, a crucial activity when real-world data is at a premium, while providing a well-understood generation procedure and an interpretable instrument for methodically investigating cluster analysis algorithms. Here, we present Clugen, a modular procedure for synthetic data generation, capable of creating multidimensional clusters supported by line segments using arbitrary distributions. Clugen is open source, comprehensively unit tested and documented, and is available for the Python, R, Julia, and MATLAB/Octave ecosystems. We demonstrate that our proposal can produce rich and varied results in various dimensions, is fit for use in the assessment of clustering algorithms, and has the potential to be a widely used framework in diverse clustering-related research tasks.
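The core idea of clusters "supported by line segments" can be sketched in a few lines: draw positions along a segment, then add lateral Gaussian noise orthogonal to it. This is a minimal 2D illustration with a made-up function name (`line_cluster`), not Clugen's actual API, which handles arbitrary dimensions and user-supplied distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

def line_cluster(center, direction, length, n, lateral_std):
    """Generate n 2D points supported by a line segment: positions are
    drawn uniformly along the segment, then offset orthogonally with
    Gaussian noise. Toy sketch of the idea behind Clugen."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    t = rng.uniform(-length / 2, length / 2, size=n)   # position along segment
    ortho = np.array([-direction[1], direction[0]])    # 2D unit normal
    offsets = rng.normal(0.0, lateral_std, size=n)     # lateral dispersion
    return np.asarray(center) + np.outer(t, direction) + np.outer(offsets, ortho)

points = line_cluster(center=[5, 5], direction=[1, 1], length=10, n=200, lateral_std=0.3)
```

Varying the segment lengths, directions, and lateral spreads per cluster is what gives generators of this kind their coverage of a problem's space.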
This article presents a dataset of 10,917 news articles with hierarchical news categories collected between 1 January 2019 and 31 December 2019. We manually labeled the articles based on a hierarchical taxonomy with 17 first-level and 109 second-level categories. This dataset can be used to train machine learning models for automatically classifying news articles by topic, and can be helpful for researchers working on news structuring, classification, and predicting future events based on released news.
Technology plays a crucial role in the management of natural resources in agricultural production. Free and open-source software and sensor technology solutions have the potential to promote more sustainable agricultural production. The goal of this rapid review is to find exclusively free and open-source software for precision agriculture, available in different electronic databases, with emphasis on their characteristics and application formats, aiming at promoting sustainable agricultural production. A thorough search of the Google Scholar, GitHub, and GitLab electronic databases was performed for this purpose. Studies reporting and/or repositories containing up-to-date software were considered for this review. The various software packages were evaluated based on their characteristics and application formats. The search identified a total of 21 free and open-source software packages designed specifically for precision agriculture. Most of the identified software was shown to be extensible and customizable, while taking into account factors such as transparency, speed, and security, although some limitations were observed in terms of repository management and source control. This rapid review suggests that free and open-source software and sensor technology solutions play an important role in the management of natural resources in sustainable agricultural production, and highlights the main technological approaches towards this goal. Finally, while this review performs a preliminary assessment of existing free and open source solutions, additional research is needed to evaluate their effectiveness and usability in different scenarios, as well as their relevance in terms of environmental and economic impact on agricultural production.
Uavnoma is a set of Python functions and a front-end script for modeling, studying, and analyzing a communication system composed of an unmanned aerial vehicle (UAV) and two ground users. We assume that the UAV acts as an aerial base station to serve the users according to non-orthogonal multiple access (NOMA) principles. For more practical insights, residual hardware impairments (RHI) and imperfect successive interference cancellation (ipSIC) are considered. More specifically, uavnoma allows modelers to study and visualize the system performance in terms of achievable rate and outage probability. Additionally, the package produces figures and tables showcasing results from the specified performance metrics.
Preprocessing text data sets for use in Natural Language Processing tasks is usually a time-consuming and expensive effort. Text data, normally obtained from sources such as, but not limited to, web scraping, scanned documents or PDF files, is typically unstructured and prone to artifacts and other types of noise. The goal of the TextCL package is to simplify this process by providing multiple methods suited for text data preprocessing. It includes functionality for splitting texts into sentences, filtering sentences by language, perplexity filtering, and removing duplicate sentences. Another functionality offered by the TextCL package is the outlier detection module, which allows users to identify and filter out texts that differ from the main topic distribution of the data set. This module lets users select one of several unsupervised outlier detection algorithms, such as TONMF (block coordinate descent framework), RPCA (robust principal component analysis), or SVD (singular value decomposition), and apply it to the text data.
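The SVD-based variant of topic-outlier detection can be sketched generically: approximate the document-term matrix with a low-rank SVD (the dominant topics) and score each text by its reconstruction error, so texts far from the main topic subspace stand out. This is an illustration of the general technique, with an assumed function name, not TextCL's actual interface.

```python
import numpy as np

def svd_outlier_scores(X, rank=2):
    """Score rows of a document-term matrix by reconstruction error after
    a rank-k SVD approximation: texts far from the dominant topic
    subspace get high scores. Generic sketch, not the TextCL API."""
    X = np.asarray(X, float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-k reconstruction
    return np.linalg.norm(X - X_low, axis=1)       # per-row residual error

# toy example: three documents on one "topic" plus one unrelated document
docs = np.array([[3.0, 0, 0], [4.0, 0, 0], [5.0, 0, 0], [0, 0, 2.0]])
scores = svd_outlier_scores(docs, rank=1)
```

Thresholding the scores (e.g., at a high percentile) then yields the filtered subset.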
A retail business is a network of similar-format grocery stores with a sole proprietor and a well-established logistical infrastructure. The retail business is a stable market, with low growth, limited customer revenues, and intense competition. On the system level, the retail industry is a dynamic system that is challenging to represent due to uncertainty, nonlinearity, and imprecision. Due to the heterogeneous character of retail systems, direct scenario modeling is arduous. In this article, we propose a framework for retail system scenario planning that allows managers to analyze the effect of different quantitative and qualitative factors using fuzzy cognitive maps. Previously published fuzzy retail models were extended by adding external factors and combining expert knowledge with domain research results. We determined the most suitable composition of fuzzy operators for the retail system, highlighted the system's most influential concepts, and showed how the system responds to changes in external factors. The proposed framework aims to support senior management in conducting flexible long-term planning of a company's strategic development and in reaching its desired business goals.
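For readers unfamiliar with fuzzy cognitive maps, the basic inference mechanism is simple: each concept's next activation is a squashed weighted sum of the activations of the concepts influencing it, iterated until a fixed point. The sketch below uses a made-up three-concept weight matrix for illustration; it is not the retail model from the article.

```python
import numpy as np

def fcm_step(state, W, f=np.tanh):
    """One inference step of a fuzzy cognitive map: next activation of
    concept j is f(sum_i state[i] * W[i, j]), with W[i, j] the causal
    influence of concept i on concept j."""
    return f(np.asarray(state) @ np.asarray(W))

def fcm_run(state, W, steps=50, tol=1e-6):
    """Iterate the map until the activation vector stops changing."""
    for _ in range(steps):
        nxt = fcm_step(state, W)
        if np.max(np.abs(nxt - state)) < tol:
            break
        state = nxt
    return state

# illustrative weights (hypothetical, not from the paper)
W = np.array([
    [0.0, 0.6, 0.0],   # concept 0 reinforces concept 1
    [0.0, 0.0, 0.4],   # concept 1 drives concept 2
    [-0.3, 0.0, 0.0],  # concept 2 dampens concept 0
])
steady = fcm_run(np.array([1.0, 0.5, 0.0]), W)
```

Scenario planning then amounts to clamping some concepts (e.g., external factors) to fixed values and observing how the remaining activations settle.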
In this paper we present a technique for procedurally generating 3D maps using a set of premade meshes which snap together based on designer-specified visual constraints. The proposed approach avoids size and layout limitations, offering the designer control over the look and feel of the generated maps, as well as immediate feedback on a given map's navigability. A prototype implementation of the method, developed in the Unity game engine, is discussed, and a number of case studies are analyzed. These include a multiplayer game where the method was used, together with a number of illustrative examples which highlight various parameterizations and piece selection methods. The technique can be used as a designer-centric map composition method and/or as a prototyping system in 3D level design, opening the door for quality map and level creation in a fraction of the time of a fully human-based approach.
Index Terms: 3D maps, computer games, designer-centric methods, layout, procedural content generation (PCG).
Automated assessment tools (AATs) are software systems used in teaching environments to automate the evaluation of computer programs implemented by students. These tools can be used to stimulate the interest of computer science students in programming courses by providing quick feedback on their work and highlighting their mistakes. Despite the abundance of such tools, most of them are developed for a specific course and are not production-ready. Others lack advanced features that are required for certain pedagogical goals (e.g. Git integration) and/or are not flexible enough to be used with students having different computer literacy levels, such as first-year and second-year students. In this paper we present Drop Project (DP), an automated assessment tool built on top of the Maven build automation software. We have been using DP in our teaching activity since 2018, having received more than fifty thousand submissions across projects, classroom exercises, tests, and homework assignments. The tool's automated feedback has allowed us to raise the difficulty level of the course's projects, while the grading process has become more efficient and consistent between different teachers. DP is an extensively tested, production-ready tool. The software's code and documentation are available on GitHub under an open-source software license.
Skeletal muscle physiology remains of paramount importance in understanding insulin resistance. Due to its high lipid turnover rates, regulation of intramyocellular lipid droplets (LDs) is a key factor. Perilipin 5 (PLIN5) is one of the most critical agents in such regulation, being often referred to as a protector against lipotoxicity and consequent skeletal muscle insulin resistance. We examined area fraction, size, subcellular localization and PLIN5 association of LDs in two fiber types of type 2 diabetic (T2D), obese (OB) and healthy (HC) individuals by means of fluorescence microscopy and image analysis. We found that T2D type II fibers have a significant subpopulation of large and internalized LDs, uncoated by PLIN5. Based on this novel result, additional hypotheses for the pathophysiology of skeletal muscle insulin resistance are formulated, together with future research directions.
We present SimpAI, an AI agent created for the ColorShapeLinks competition, based on an arbitrarily sized version of the Simplexity board game. The agent uses a highly efficient parallelized Minimax-type search, with a heuristic function composed of several partial heuristics, the balance of which was optimized with an evolutionary algorithm. SimpAI was the runner-up in the competition's most challenging session, which required an AI agent with good adaptation capabilities.
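As background, the Minimax principle behind agents like SimpAI can be shown on a trivially small game. The sketch below solves Nim (take 1-3 stones; whoever takes the last stone wins) by exhaustive minimax; the actual agent is parallelized, depth-limited, and guided by an evolved weighted-sum heuristic rather than searching to terminal states.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax_nim(pile, maximizing):
    """Exhaustive minimax on Nim: returns +1 if the maximizing player
    wins with perfect play from this state, -1 otherwise. A toy
    stand-in for the Minimax-type search described above."""
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax_nim(pile - take, not maximizing)
              for take in (1, 2, 3) if take <= pile]
    return max(scores) if maximizing else min(scores)
```

Nim theory predicts that piles whose size is a multiple of 4 are losses for the player to move, which the search reproduces.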
This paper investigates the performance and scalability of a new update strategy for the particle swarm optimization (PSO) algorithm. The strategy is inspired by the Bak–Sneppen model of co-evolution between interacting species, which is basically a network of fitness values (representing species) that change over time according to a simple rule: the least fit species and its neighbors are iteratively replaced with random values. Following these guidelines, a steady state and dynamic update strategy for PSO algorithms is proposed: only the least fit particle and its neighbors are updated and evaluated in each time-step; the remaining particles maintain the same position and fitness, unless they meet the update criterion. The steady state PSO was tested on a set of unimodal, multimodal, noisy and rotated benchmark functions, significantly improving the quality of results and convergence speed of standard PSOs and of more sophisticated PSOs with dynamic parameters and neighborhoods.
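The update rule described above can be sketched in a few lines: a ring-topology PSO where, in each step, only the worst particle and its two neighbors move while everyone else stays put. This is a simplified illustration of the strategy under assumed parameter values, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

def steady_state_pso(f, dim=5, n=20, iters=2000, w=0.73, c1=1.5, c2=1.5):
    """Bak-Sneppen-inspired steady-state PSO sketch (ring neighborhood):
    each step, only the least fit particle and its two ring neighbors
    are updated and re-evaluated; all other particles keep their
    position and fitness."""
    X = rng.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    P = X.copy()                                  # personal best positions
    Pf = np.array([f(x) for x in X])              # personal best fitnesses
    fit = Pf.copy()                               # current fitnesses
    for _ in range(iters):
        worst = int(np.argmax(fit))
        for i in ((worst - 1) % n, worst, (worst + 1) % n):
            nb = [(i - 1) % n, i, (i + 1) % n]    # particle's own ring nbhd
            l = nb[int(np.argmin(Pf[nb]))]        # local best in nbhd
            r1, r2 = rng.random(dim), rng.random(dim)
            V[i] = w * V[i] + c1 * r1 * (P[i] - X[i]) + c2 * r2 * (P[l] - X[i])
            X[i] = X[i] + V[i]
            fit[i] = f(X[i])
            if fit[i] < Pf[i]:
                P[i], Pf[i] = X[i].copy(), fit[i]
    return P[int(np.argmin(Pf))], float(np.min(Pf))

best_x, best_f = steady_state_pso(sphere)
```

Note that each iteration costs only three fitness evaluations, which is the source of the efficiency argument: effort concentrates on the region of the swarm that most needs it.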
In this paper we present a technique for procedurally generating 3D maps using a set of premade meshes which snap together based on designer-specified visual constraints. The proposed approach avoids size and layout limitations, offering the designer control over the look and feel of the generated maps, as well as immediate feedback on a given map’s navigability. A prototype implementation of the method, developed in the Unity game engine, is discussed, and a number of case studies are analyzed. These include a multiplayer game where the method was used, together with a number of illustrative examples which highlight various parameterizations and generation methods. We argue that the technique is designer-friendly and can be used as a map composition method and/or as a prototyping system in 3D level design, opening the door for quality map and level creation in a fraction of the time of a fully human-based approach.
ColorShapeLinks is an AI board game competition framework specially designed for students and educators in videogame development, with openness and accessibility in mind. The competition is based on an arbitrarily-sized version of the Simplexity board game, the motto of which, "simple to learn, complex to master", is curiously also applicable to AI agents. ColorShapeLinks offers graphical and text-based frontends and a completely open and documented development framework built using industry-standard tools and following software engineering best practices. ColorShapeLinks is not only a competition, but also a game and a framework which educators and students can extend and use to host their own competitions. It has been successfully used for running internal competitions in AI classes, as well as for hosting an international AI competition at the IEEE Conference on Games.
Cellular evolutionary algorithms (cEAs) are a particular type of EAs in which a communication structure is imposed on the population and mating is restricted to topologically nearby individuals. In general, these algorithms have longer takeover times than panmictic EAs, and previous investigations argue that they are more efficient at escaping local optima of multimodal and deceptive functions. However, most of those studies are not primarily concerned with population size, despite it being one of the design decisions with the greatest impact on the accuracy and convergence speed of population-based metaheuristics. In this paper, the optimal population size for cEAs structured by regular and random graphs with different degrees is estimated. Selecto-recombinative cEAs and standard cEAs with mutation and different types of crossover were tested on a class of functions with tunable degrees of difficulty. Results and statistical tests demonstrate the importance of setting an appropriate population size. Event Takeover Values (ETVs) were also studied, and previous assumptions on their distribution were not confirmed: although the ETV distributions of panmictic EAs are heavy-tailed, log-log plots of their complementary cumulative distribution functions display no linearity. Furthermore, statistical tests on ETVs generated by several instances of the problems conclude that power law models cannot be favored over log-normal ones. On the other hand, results confirm that cEAs impose deviations on distribution tails and that large ETVs are less probable when the population is structured by graphs with a low connectivity degree. Finally, results suggest that for panmictic EAs the ETVs' upper bounds are approximately equal to the optimal population size.
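To make the mating restriction concrete, here is a minimal cellular EA on a ring topology solving OneMax: each individual recombines only with its best immediate ring neighbor, and a child replaces its cell only if it is not worse. This is a toy illustration of the cEA concept with assumed parameter values, not the paper's experimental setup.

```python
import random

random.seed(1)

def cellular_ga_onemax(n=50, bits=32, gens=60, pmut=0.02):
    """Minimal synchronous cellular EA on a ring: each cell mates only
    with its better immediate neighbor (local selection), uses one-point
    crossover plus bit-flip mutation, and keeps the child only if it is
    at least as fit (elitist cell replacement). Fitness is OneMax."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(n)]
    fit = lambda ind: sum(ind)                    # OneMax: count of 1-bits
    for _ in range(gens):
        new = []
        for i in range(n):
            nbrs = [pop[(i - 1) % n], pop[(i + 1) % n]]
            mate = max(nbrs, key=fit)             # local selection
            cut = random.randrange(1, bits)       # one-point crossover
            child = pop[i][:cut] + mate[cut:]
            child = [b ^ (random.random() < pmut) for b in child]
            new.append(child if fit(child) >= fit(pop[i]) else pop[i])
        pop = new
    return max(fit(ind) for ind in pop)

best = cellular_ga_onemax()
```

Replacing the ring with a denser graph (or the whole population, i.e., the panmictic case) changes the takeover dynamics that the paper studies.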
generateData is a MATLAB/Octave function for generating 2D data clusters. Data is created along straight lines, which can be more or less parallel depending on the selected input parameters. The function also allows fine-tuning the generated data with respect to the number of clusters, total data points, average cluster separation, and several other distributional properties.
A video game design and development degree is a very specific choice for students, who are mainly interested in making games or taking part in the game development process. Databases are not an apparent requirement in order to pursue these goals, often leading to a lack of motivation and interest in the subject. Nonetheless, a number of Bachelor degrees in video game design and development acknowledge the importance of databases, and offer it as a mandatory course in their curricula. However, the subject exposition is often done in the context of typical database areas of application, such as business settings, websites or library and university management. In this paper we describe and propose four classroom database problems specifically designed for video game design and development students. It is our belief that using a context-aware approach yields not only more motivated students, but students with a better understanding of the database concepts so often necessary to design and develop computer games.
micompm is a MATLAB / GNU Octave port of the original micompr R package for comparing multivariate samples associated with different groups. Its purpose is to determine if the compared samples are significantly different from a statistical point of view. This method uses principal component analysis to convert multivariate observations into a set of linearly uncorrelated statistical measures, which are then compared using statistical tests and score plots. This technique is independent of the distributional properties of samples and automatically selects features that best explain their differences, avoiding manual selection of specific points or summary statistics. The procedure is appropriate for comparing samples of time series, images, spectrometric measures or similar multivariate observations. It is aimed at researchers from all fields of science, although it requires some knowledge on design of experiments, statistical testing and multidimensional data analysis.
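The gist of the procedure can be sketched with NumPy alone: project all multivariate observations onto the first principal component and compare the two groups' scores with a simple statistic. This toy uses a Welch t statistic and one component; the actual micompr/micompm packages consider several components and apply formal univariate and multivariate tests, so treat the names and simplifications below as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def compare_groups(X, groups):
    """Project observations onto the first principal component (via SVD
    of the centered data) and return a Welch t statistic between the two
    groups' PC scores. Sketch of the micompr/micompm idea only."""
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    scores = Xc @ Vt[0]                                 # first PC scores
    g = np.asarray(groups)
    s0, s1 = scores[g == 0], scores[g == 1]
    return (s0.mean() - s1.mean()) / np.sqrt(
        s0.var(ddof=1) / len(s0) + s1.var(ddof=1) / len(s1))

# two groups of synthetic "time series" outputs with a level shift
A = rng.normal(0.0, 1.0, (30, 100))
B = rng.normal(1.0, 1.0, (30, 100))
t = compare_groups(np.vstack([A, B]), [0] * 30 + [1] * 30)
```

Because the comparison happens in PC-score space, no point-by-point feature selection on the 100-dimensional observations is needed, which is the method's main appeal.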
Verification and validation are two important aspects of model building. Verification and validation compare models with observations and descriptions of the problem modelled, which may include other models that have been verified and validated to some level. However, the use of simulation for modelling social complexity is very diverse. Often, verification and validation do not refer to an explicit stage in the simulation development process, but to the modelling process itself, according to good practices and in a way that grants credibility to using the simulation for a specific purpose. One cannot consider verification and validation without considering the purpose of the simulation. This chapter deals with a comprehensive outline of methodological perspectives and practical uses of verification and validation. The problem of evaluating simulations is addressed in four main topics: (1) the meaning of the terms verification and validation in the context of simulating social complexity; (2) types of validation, as well as techniques for validating simulations; (3) model replication and comparison as cornerstones of verification and validation; and (4) the relationship of various validation types and techniques with different modelling strategies.
PerfAndPubTools consists of a set of MATLAB/Octave functions for post-processing and analyzing software performance benchmark data and for producing associated publication-quality materials.
We present five project assignments for teaching database concepts to undergraduate students in interdisciplinary degree programs, where teaching databases as a standalone course is not practical. The projects were developed for a CS2-level programming course, where LINQ and basic database concepts are lectured, and have been shown to effectively engage students and improve their understanding of queries and data manipulation.
Patient feedback is a crucial component in identifying areas of improvement and enhancing service quality in healthcare. However, manual analysis of a large volume of reviews is challenging. This paper proposes a novel framework and software for sentiment analysis and topic modeling with the goal of automating this process, providing a more efficient method for data extraction and facilitating informed decision-making. The Google reviews dataset, a rich source of patient feedback, is used for this purpose. We present findings from related research to identify potential strategies for implementing this framework. The ultimate goal is to understand patient sentiments, identify common complaint topics, and highlight positive aspects of healthcare centers. This approach will provide valuable insights that are essential for the continuous improvement and success of healthcare centers.
This position paper proposes the hypothesis that physiological noise artefacts can be classified based on the type of movements performed by participants in Virtual Reality contexts. To assess this hypothesis, a detailed research plan is proposed to study the influence of movement on the quality of the captured physiological signals. This paper argues that the proposed plan can produce a valid model for classifying noisy physiological signal features, providing insights into the influence of movement on artefacts, while contributing to the development of movement-based filters and the implementation of best practices for using various associated technologies.
Snappable Meshes is an algorithm that procedurally generates 3D environments by iteratively selecting and linking pre-built map pieces. These pieces are triangular meshes annotated by designers with connectors marking potential links, and bounding volumes indicating where overlaps should be avoided. In this article, we present a method for automatically generating connectors and bounding volumes from generic non-manifold triangular meshes for use with the Snappable Meshes algorithm, minimizing artist/designer work, while encouraging iteration of map piece design, an essential part of successful environment generation.
Wildfires constitute a major socioeconomic burden. While a number of scientific and technological methods have been used for predicting and mitigating wildfires, this is still an open problem. Agent-based modeling is a modeling approach where each entity of the system being modeled is represented as an independent decision-making agent. It is a useful technique for studying systems that can be modeled in terms of interactions between individual components, and is consequently an interesting methodology for modeling wildfire behavior. In this position paper, we propose a complete computational pipeline for modeling and predicting wildfire behavior by leveraging agent-based modeling, among other techniques. This project is to be developed in collaboration with scientific and civil stakeholders, and should produce an open decision support system that is easily extendable by stakeholders and other interested parties.
We present a method for procedural generation of 3D levels based on a system of connectors with pins and human-made pieces of geometry. The method avoids size limitations and layout constraints, while offering the level designer a high degree of control over the generated maps. The proposed approach was originally developed for and tested on a multiplayer shooter game, but is sufficiently general to be useful for other types of games, as demonstrated by a number of additional experiments. The method can be used as both a level design and a level creation tool, opening the door to quality map creation in a fraction of the time of a fully human-based approach.
We aim to build an open-source bioimage informatics tool capable of analyzing the 2D and 3D topography (absolute and relative localization, shape, size and intensity) of distinct types of intracellular particles (such as organelles, proteins, lipids and other detectable biomolecules) in different types of cells in a high-throughput fashion. To achieve this, TopoCell is primarily designed to process fluorescence microscope images, where all imaged channels are used to semi-automatically detect different particles, define cell type and segment cells. To each detected particle, a coordinate, intensity, volume, and sphericity value are attributed. Cells are detected through a plasmalemma marker, with volume and coordinate values generated after the segmentation step. Numerous cell types can be defined by the user according to the density of given intracellular particles. By controlling a dedicated filter, the user can select which cells and particles to consider in the analysis according to their detected values, including the relative distance between two or more types of particles. Visualization of results and statistical analysis are also available. TopoCell allows the comprehensive measurement, in still microscope images, of the distribution of intracellular phenomena which would otherwise be imperceptible to the naked eye, such as translocation or particle association.
In this paper we introduce PyXYZ, a 3D wireframe software rendering framework for educational purposes. The main goal of this framework is to provide a simple-to-understand tool that students can use to build a more sophisticated engine, while learning mathematics and acquiring a deeper knowledge of the complexity of a modern 3D engine. PyXYZ can be used as a teaching aid in course work and/or as a template for multi-goal project assignments, allowing students with diverse capabilities and interests to have different levels of commitment. The engine has been used with positive results in a mathematics course unit of a computer games BA and can be easily adapted to various teaching scenarios.
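As an illustration of the kind of mathematics such an engine teaches, below is a minimal sketch of pinhole perspective projection, the core operation of any wireframe renderer. The convention shown (camera at the origin looking down +z, projection plane at distance d) is a textbook assumption, not necessarily PyXYZ's exact API:

```python
import numpy as np

def project(point, d=1.0):
    """Perspective-project a 3D camera-space point onto the z = d plane.

    Assumes the camera sits at the origin looking down +z; `d` is the
    distance to the projection plane.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    # Similar triangles: screen coordinates shrink with depth.
    return np.array([d * x / z, d * y / z])

p = project((2.0, 4.0, 4.0), d=1.0)
print(p)  # [0.5 1. ]
```

Projecting each vertex this way and drawing lines between projected endpoints is essentially all a wireframe renderer does, which is what makes it a tractable student project.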
Computer games are complex products incorporating software, design and art. Consequently, their development is a multidisciplinary effort, requiring professionals from several fields, who should nonetheless be knowledgeable across disciplines. Due to the variety of skills required, and in order to train these professionals, computer game development (GD) degrees have been appearing in North America and Europe since the late 1990s. Following this trend, several GD degrees have emerged in Portugal. Given the lack of specialized academic staff, not uncommon in younger scientific areas, some of these degrees "borrowed" computer science (CS) programs and faculty within the same institution, leading in some cases to a disconnect between CS theory and practice and the requirements of GD classes. In this paper, we discuss our experience in adapting the CS curriculum of a three-year computer games BA in accordance with GD requirements. We used a top-down approach, where the game engine used for GD informs the choice of CS topics and programming languages lectured in the CS curriculum. The discussion is centered around the choices taken and the theoretical and empirical rationale behind our decisions. Preliminary empirical results indicate a substantial increase in GD project quality and a clear improvement in the students' technical skills, as well as in their experimentation and adaptation capabilities.
Programming course units (CUs) in a video games degree should have two objectives: 1) a more general objective, which consists of providing students with the foundations that allow them to assimilate general concepts of programming, mathematics and physics, as well as to develop their logical and algorithmic thinking; and 2) a more specific objective, which consists of learning the concrete tools and concepts that allow students to work fluently in the degree's game engine of choice. This latter objective frames the programming CUs within a top-down logic, since the selection of the game engine guides how the respective syllabi are prepared. From this perspective, the programming CUs should support, and be designed with a view to, possible collaborations with the remaining CUs, especially those dedicated to pure game development. Across the board, class examples and assessment projects should be properly adapted to the students in question, so as to make the presentation of the material as appealing as possible. In this document, we discuss how these challenges are being addressed in the BA in Video Games (Licenciatura em Videojogos) at Universidade Lusófona de Humanidades e Tecnologias.
We investigate the convergence speed, accuracy, robustness and scalability of particle swarm optimizers (PSOs) structured by regular and random graphs with 3 ≤ k ≤ n. The main conclusion is that regular and random graphs with the same average connectivity k may result in significantly different performance, namely when k is low.
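The comparison hinges on building regular and random topologies with the same average connectivity k. A minimal sketch of two such constructions is given below (a ring lattice with degree exactly k, and an Erdős–Rényi graph with expected degree k); these are standard graph models, not necessarily the paper's exact generators:

```python
import random

def ring_lattice(n, k):
    """Regular ring lattice: node i connects to its k/2 nearest
    neighbours on each side, so every node has degree exactly k
    (k must be even)."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    return edges

def random_graph(n, k, rng):
    """Erdos-Renyi graph whose *expected* average degree is k."""
    p = k / (n - 1)
    return {frozenset((i, j))
            for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def avg_degree(n, edges):
    return 2 * len(edges) / n

rng = random.Random(42)
n, k = 50, 4
reg = ring_lattice(n, k)
rnd = random_graph(n, k, rng)
print(avg_degree(n, reg))  # exactly 4.0
print(avg_degree(n, rnd))  # close to 4 on average
```

Although both graphs have (average) connectivity k, the lattice's neighbourhoods are highly clustered while the random graph's are not, which is precisely the structural difference whose effect on PSO performance the paper studies.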
Agent-based modeling (ABM) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as a self-determining agent. Large scale emergent behavior in ABMs is population sensitive. As such, it is advisable that the number of agents in a simulation is able to reflect the reality of the system being modeled. This means that in domains such as social modeling, ecology, and biology, systems can contain millions or billions of individuals. Such large scale simulations are only feasible in non-distributed scenarios when the computational power of commodity processors, such as GPUs and multi-core CPUs, is fully exploited. In this paper we evaluate the feasibility of using CPU-oriented OpenCL for high-performance simulations of agent-based models. We compare a CPU-oriented OpenCL implementation of a reference ABM against a parallel Java version of the same model. We show that there are considerable gains in using CPU-based OpenCL for developing and implementing ABMs, with speedups up to 10x over the parallel Java version on a 10-core hyper-threaded CPU.
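The speedups reported come from exploiting data parallelism across agents; the OpenCL kernels themselves are the paper's. As a language-neutral illustration of the underlying idea, the hedged NumPy sketch below contrasts a per-agent loop with a single vectorized update over a toy one-dimensional agent state:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy agent state: position on a 1D ring and a per-agent speed.
pos = rng.uniform(0, 100, n)
vel = rng.uniform(-1, 1, n)

def step_loop(pos, vel, world=100.0):
    """Update one agent at a time (serial baseline)."""
    out = pos.copy()
    for i in range(len(pos)):
        out[i] = (pos[i] + vel[i]) % world
    return out

def step_vectorized(pos, vel, world=100.0):
    """Update all agents in one SIMD-friendly bulk operation."""
    return (pos + vel) % world

# Both formulations compute the same simulation step.
assert np.allclose(step_loop(pos, vel), step_vectorized(pos, vel))
```

Independent per-agent updates like this map directly onto OpenCL work-items, one per agent, which is why CPU-oriented OpenCL can exploit all cores and vector units at once.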
The highly multivariate nature of EEG data often limits the search for statistically significant differences in data collected from two or more groups of subjects. We have recently developed a new technique for assessing whether two or more multidimensional samples are drawn from the same distribution. Here, we apply this to EEG data collected from schizophrenia patients and healthy controls while performing a Visual Backward Masking (VBM) task.
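The exact technique is the authors'; as one common way to test whether two multidimensional samples share a distribution, the sketch below implements a permutation test based on the Székely–Rizzo energy statistic (the synthetic `a`/`b` groups stand in for per-subject EEG feature vectors and are purely illustrative):

```python
import numpy as np

def energy_stat(x, y):
    """Szekely-Rizzo energy statistic between two multivariate samples.

    Zero when the samples are identical; larger values indicate that
    the underlying distributions differ.
    """
    def mean_dist(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return d.mean()
    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

def perm_test(x, y, n_perm=200, rng=None):
    """Permutation p-value for the energy statistic."""
    if rng is None:
        rng = np.random.default_rng(0)
    obs = energy_stat(x, y)
    pooled = np.vstack([x, y])
    n, hits = len(x), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))  # reshuffle group labels
        if energy_stat(pooled[idx[:n]], pooled[idx[n:]]) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(0, 1, (30, 8))  # e.g. 8 EEG-derived features per subject
b = rng.normal(2, 1, (30, 8))  # a clearly shifted group
obs, p = perm_test(a, b)
print(p)  # small: the two groups plainly differ
```

Tests of this family work directly on the multivariate samples, sidestepping the per-channel multiple-comparison problem that the abstract alludes to.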
Immune system (IS) simulations have several applications, such as biological theory testing or as a complement in the development of improved drugs. This paper presents an agent-based approach to simulating the IS response to a bacterial infection challenge. The agent simulator is implemented in discrete time and two-dimensional space, and is composed of two layers: a) a specialized cellular automaton responsible for substance diffusion and reactions; and b) the layer where agents move, act and interact. The IS model focuses on low-level cellular receptor interactions, receptor diversity and genetically ruled agents, aiming to observe and study the resulting emergent behavior. The model reproduces the following IS behavioral characteristics: specificity and specialization, immune memory and vaccine immunization.
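The substance-diffusion layer can be illustrated with a minimal discrete diffusion step on a 2D grid. This is a generic sketch, not the paper's cellular automaton rules; the periodic boundaries and the diffusion rate are assumptions:

```python
import numpy as np

def diffuse(grid, rate=0.1):
    """One discrete diffusion step on a 2D grid with periodic boundaries.

    Each cell sends `rate` of its substance to each of its four von
    Neumann neighbours; the total amount of substance is conserved.
    """
    neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                  np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return grid + rate * (neighbours - 4 * grid)

grid = np.zeros((32, 32))
grid[16, 16] = 100.0           # a burst of, say, chemokine at one site
for _ in range(10):
    grid = diffuse(grid)

print(round(grid.sum(), 6))    # 100.0 -- substance is conserved
```

Running a cheap grid rule like this underneath a population of mobile agents is what lets the agents sense and react to local substance concentrations.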
Computer simulations play an important role as a tool for predicting and understanding the behaviour of complex systems. The immune system is one such system, and it is feasible to expect that a well-tuned model can provide qualitatively similar behaviour. Such models can have several applications, among which is their use as a virtual test bench in the first stages of drug testing. Most existing models are developed using a formal approach based on differential equations, which yields average behaviour and results for the total cellular population. This approach is not very intuitive and, though well formalized, can stray from biological significance. In this paper we focus on cellular automata (CA) and agent-based models, which, although less formalized, are more flexible and offer modelling possibilities closer to biological reality. We review and compare several models, discuss their methodologies and assumptions, and how these map onto plausible simulations. Finally, we suggest a model based on what we consider to be the aspects of greatest potential among the discussed solutions.
Computational models of the immune system (IS) and pathogenic agents have several applications, such as theory testing and validation, or as a complement to the first stages of drug trials. One possible application is the prediction of the lethality of new Influenza A strains, which are constantly created due to antigenic drift and shift. Here, we present several simulations of antigenic variability in Influenza A using an agent-based approach, where low-level molecular antigen-antibody interactions are explicitly described. Antigenic ...
In spatial agent-based models (SABMs), each entity of the system being modeled is uniquely represented as an independent agent. Large-scale emergent behavior in SABMs is population sensitive. Thus, the number of agents should reflect the system being modeled, which can be on the order of billions. Models can be decomposed such that each component can be concurrently processed by a different thread. In this thesis, a conceptual model for investigating parallelization strategies for SABMs is presented. The model, PPHPC, captures important characteristics of SABMs. NetLogo, Java and OpenCL (CPU and GPU) implementations are proposed. To confirm that all implementations yield the same behavior, their outputs are compared using two methodologies. The first is based on common model comparison techniques found in the literature. The second is a novel approach which uses principal component analysis to convert simulation output into a set of linearly uncorrelated measures that can be analyzed in a model-independent fashion. In both cases, statistical tests are applied to determine whether the implementations are properly aligned. Results show that most implementations are statistically equivalent, with lower-level parallel implementations offering substantial speedups. The PPHPC model was shown to be a valid template model for comparing SABM implementations.
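The PCA-based comparison methodology can be sketched in a few lines: summary outputs from two implementations are stacked, projected onto principal components, and the component scores of the two groups compared statistically. The sketch below uses synthetic data and a plain two-sample t statistic; the thesis's actual procedure and hypothesis tests may differ:

```python
import numpy as np

def pca_scores(data, n_pc=2):
    """Project rows of `data` onto their first `n_pc` principal components."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_pc].T

rng = np.random.default_rng(0)
# Hypothetical summary outputs: 20 replications x 6 output measures
# from two implementations of the same model.
impl_a = rng.normal(5, 1, (20, 6))
impl_b = rng.normal(5, 1, (20, 6))   # statistically equivalent by design

scores = pca_scores(np.vstack([impl_a, impl_b]))
pc1_a, pc1_b = scores[:20, 0], scores[20:, 0]

# A simple two-sample t statistic on PC1 scores; in practice a proper
# hypothesis test would be applied to each retained component.
t = ((pc1_a.mean() - pc1_b.mean()) /
     np.sqrt(pc1_a.var(ddof=1) / 20 + pc1_b.var(ddof=1) / 20))
print(abs(t))  # small when the implementations are aligned
```

The appeal of the approach is that the PC scores are model-independent: the same comparison machinery works regardless of which output measures the model produces.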
These synthetic data sets were produced for benchmarking clustering algorithms. They share common characteristics with spectrometric data after PCA, i.e., low-volume and similarly oriented clusters, but also present key differences regarding number of points, number of clusters, scale, and inter- and intra-cluster point proximity. The data sets are labeled S1 to S6, and are composed of a variable number of clusters and cluster elements. Of these, only S1 contains non-overlapping clusters; the remaining data sets have at least one overlapping cluster. The S5 data set contains multiple mixed groups, making it impossible to differentiate them with 100% accuracy.
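Data sets of this kind can be produced with Gaussian blobs whose center separation (relative to spread) controls the degree of overlap. The sketch below is a generic generator, not the procedure actually used to create S1–S6:

```python
import numpy as np

def make_clusters(sizes, centers, spread=1.0, rng=None):
    """Gaussian blobs: one cluster per (size, center) pair.

    Returns (points, labels). Moving centers closer together relative
    to `spread` produces overlapping clusters, as in S2-S6.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pts, labels = [], []
    for lab, (n, c) in enumerate(zip(sizes, centers)):
        pts.append(rng.normal(c, spread, (n, len(c))))
        labels.append(np.full(n, lab))
    return np.vstack(pts), np.concatenate(labels)

# Two well-separated 2D clusters; bring the centers closer for overlap.
X, y = make_clusters([100, 80], centers=[(0, 0), (10, 10)], spread=1.0)
print(X.shape, np.bincount(y))  # (180, 2) [100  80]
```

Keeping the ground-truth labels alongside the points is what makes such data usable for benchmarking: an algorithm's output partition can be scored against the known assignment.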
PPHPC is a conceptual model which captures important characteristics of spatial agent-based models (SABMs), such as agent movement and local agent interactions. It was designed with several goals in mind: 1) provide a basis for a tutorial on complete model specification and thorough simulation output analysis; 2) investigate statistical comparison strategies for model replication; 3) compare different implementations from a performance point of view, using different frameworks, programming languages, hardware and/or parallelization strategies, while maintaining statistical equivalence among implementations; and 4) test the influence of different pseudo-random number generators (PRNGs) on the statistical accuracy of simulation output. The model can be implemented using substantially different approaches that ensure statistically equivalent qualitative results. Implementations may differ in aspects such as the selected system architecture, choice of programming language and/or agent-based modeling framework.
A multithreaded Java implementation of the PPHPC agent-based model, developed with two goals in mind: 1) compare the performance of this implementation with an existing NetLogo implementation; and 2) study how different parallelization strategies impact simulation performance on a shared-memory architecture.
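The Java implementation's parallelization strategies are its own; as a language-neutral sketch of the simplest strategy, the code below partitions the agent array into disjoint slices and updates each slice in a separate worker. (In CPython, real speedups on CPU-bound work would typically require processes rather than threads; the partitioning logic, however, is the same.)

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def step_partition(pos, vel, lo, hi, world=100.0):
    """Update the slice of agents [lo, hi) -- one partition per worker."""
    pos[lo:hi] = (pos[lo:hi] + vel[lo:hi]) % world

rng = np.random.default_rng(0)
n, workers = 100_000, 4
pos = rng.uniform(0, 100, n)
vel = rng.uniform(-1, 1, n)
expected = (pos + vel) % 100.0   # sequential reference result

# Split the agent indices into `workers` contiguous, disjoint slices.
bounds = np.linspace(0, n, workers + 1).astype(int)
with ThreadPoolExecutor(workers) as ex:
    futures = [ex.submit(step_partition, pos, vel, bounds[i], bounds[i + 1])
               for i in range(workers)]
    for f in futures:
        f.result()   # propagate any worker exception

assert np.allclose(pos, expected)  # every agent updated exactly once
```

Because the slices are disjoint, no synchronization is needed within a step; strategies differ mainly in how they handle agents that interact across partition boundaries, which this toy state avoids.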
Evolutionary algorithms (EAs) are a class of optimization algorithms that use a population of candidate solutions (individuals) and Darwinian strategies of selection and recombination to search for an optimal or satisfactory solution to a given computational problem.
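As a minimal, self-contained illustration of such an algorithm (OneMax, maximizing the number of 1s in a bitstring, is a standard toy problem and not tied to this text):

```python
import random

def onemax_ea(length=20, pop_size=30, generations=60, p_mut=0.05, seed=42):
    """Minimal generational EA for OneMax (maximise the number of 1s).

    Uses tournament selection, one-point crossover, bit-flip mutation,
    and elitism (the best individual always survives).
    """
    rng = random.Random(seed)
    fitness = sum  # fitness of a bitstring = number of 1s
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        nxt = [best[:]]                        # elitism
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation.
            child = [g ^ (rng.random() < p_mut) for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ea()
print(sum(best))  # best fitness found (the optimum is 20)
```

Swapping in a different representation and fitness function turns this same selection/recombination loop into an optimizer for other problems, which is the generality the definition above describes.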