Key Performance Indicators (KPIs) are a higher-level characterization of the performance of a network slice, meant to be assessable at any time. Bodies such as the GSM Association have proposed the use of KPIs including, but not limited to, latency, throughput, power consumption, and security. However, while latency, throughput, and power consumption are universally measurable, security is much harder to measure. In this article, we propose using a Moving Target Defense (MTD) approach and measurable network properties to establish a new, straightforward network security metric for underlying resilience against network-centric attacks. We call it DynSec, a comprehensive model for basic network security within the network slice. Monte Carlo experimentation showed that DynSec is accurate and suitable as a KPI.
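The Monte Carlo evaluation mentioned above can be illustrated with a minimal sketch: it estimates an attacker's probability of locating a service whose address is re-randomized by MTD before every probe, and compares it with a static deployment scanned without replacement. The address-space size, probe budget, and uniform re-randomization are illustrative assumptions, not the actual DynSec formulation.

```python
import random

def estimate_exposure(address_space, probes, trials=50000, seed=1):
    """Monte Carlo estimate of the chance an attacker finds the target
    within `probes` guesses while MTD re-randomizes the target address
    before every probe (each guess is an independent 1/N shot)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = rng.randrange(address_space)
        for _ in range(probes):
            if rng.randrange(address_space) == target:
                hits += 1
                break
            target = rng.randrange(address_space)  # MTD shuffle
    return hits / trials

def static_exposure(address_space, probes):
    # Without MTD, a sequential scan of distinct addresses succeeds
    # with probability probes / address_space.
    return probes / address_space
```

For a 64-address space and 16 probes, the estimate converges to the analytic value 1 - (1 - 1/64)^16 and stays below the static exposure of 16/64, which is the kind of measurable gap a dynamic-security metric can report.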
The advent of future 5th Generation (5G) use cases, such as ultra-dense networking and ultra-low latency propelled by Smart City and IoT projects, will demand revolutionary network infrastructures. Low latency, high bandwidth, scalability, ubiquitous access, and support for resource-constrained IoT devices are some of the prominent requirements that networks must meet to support future 5G use cases, and which current wireless and mobile infrastructures are unable to fulfill. In particular, the pervasiveness and high density of Wireless Local Area Networks (WLANs) in urban centers, together with their growing capacity and evolving standards, can be leveraged to support such demand. We argue that the integration of key 5G cornerstone technologies, such as Network Function Virtualization (NFV) and softwarization, fills some of the abovementioned gaps with regard to proper WLAN management and service orchestration. In this paper, we present a solution for slicing WLAN infrastructures, aiming to provide differentiated services on top of the same substrate through customized, isolated, and independent digital building blocks. Through this proposal, we aim to handle ultra-dense networking 5G use cases efficiently and achieve benefits at unprecedented levels. Towards this goal, we present a proof of concept realised on a real testbed and assess its feasibility.
Heating appliances such as HVAC systems consume around 48% of the energy spent on household appliances every year. With this in mind, it is relevant to increase the efficiency of those solutions. Moreover, a malfunctioning device can increase this value even further. Thus, there is a need to develop methods that allow the identification of eventual failures before they occur. This is only achievable when services capable of analyzing data, interpreting it, and obtaining knowledge from it are created. This paper presents an infrastructure that supports failure detection in boilers, making it viable to forecast faults and errors. A major part of the work is data analysis and the creation of procedures that can process it. The main goal is to create an efficient system able to identify, predict, and notify the occurrence of failure events. Our fundamental contribution is the possibility of scaling the system to other datasets, making it able to address different Big Data issues.
Monitoring road traffic is extremely important given the possibilities it opens up for studying the behavior of road users and road design and planning problems, as well as for predicting future traffic. Especially on highways that connect beaches and larger urban areas, traffic is characterized by peaks that are highly dependent on weather conditions and rest periods. This paper describes a dataset of mobility patterns of a coastal area in the Aveiro region, Portugal, fully covered with traffic classification radars, over a two-year period. The sensing infrastructure was deployed in the scope of the PASMO project, an open living lab for co-operative intelligent transportation systems. The data gathered includes the speed of the detected objects, their position, and their type (heavy vehicle, light vehicle, two-wheeler, and pedestrian). The dataset includes 74,305 records, corresponding to the aggregation of road information at 10 min intervals.
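The 10 min aggregation described above can be sketched as follows. The record layout (unix_seconds, object_type, speed_kmh) is an assumption about the raw radar feed, not the published dataset schema.

```python
from collections import defaultdict

def aggregate_10min(records):
    """Aggregate raw radar detections into 10-minute bins keyed by
    (bin start, object type), reporting a count and mean speed per bin."""
    bins = defaultdict(lambda: {"count": 0, "speed_sum": 0.0})
    for ts, obj_type, speed in records:
        key = (ts // 600 * 600, obj_type)  # 600 s = one 10-minute bin
        bins[key]["count"] += 1
        bins[key]["speed_sum"] += speed
    return {k: {"count": v["count"], "mean_speed": v["speed_sum"] / v["count"]}
            for k, v in bins.items()}
```

Two detections inside the same 10-minute window collapse into one aggregated record, which is how raw detections reduce to the 74,305 rows reported.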
Handing over highly demanding tasks to remote or nearby computing units helps accommodate services' Quality of Service (QoS) requirements and compensates for the limited computational capabilities of User Equipment (UE) such as smartphones and tablets. Task offloading is a promising technique proposed for Virtualized Edge (VE) environments to solve a wide range of issues, frequently with the aim of enabling resource-intensive low-latency services. However, the volatile nature of 5G and B5G networks, which continuously change due to dynamic policies, optimization processes, and users' mobility, poses a major obstacle to offloading and overall resource orchestration. To cope with this challenge, under the scope of Multi-access Edge Computing (MEC), a three-tier fuzzy-based orchestration strategy is proposed with the aim of offloading users' workloads to the optimum computing units, to support stricter QoS requirements and reduce the perceived service delay. To evaluate our solution, we compare the proposed workload orchestrator with different employed algorithms. The evaluation shows that our orchestrator achieves nearly ideal performance and outperforms the state-of-the-art approaches considered.
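The idea of fuzzy-based tier selection can be sketched with a toy scoring function. The membership shapes, the 50 ms latency knee, and the example tier values are illustrative assumptions, not the paper's actual rule base.

```python
def suitability(latency_ms, load):
    """Fuzzy score of a computing tier: the min-aggregation of a
    'latency is low' membership and a 'load is low' membership,
    both clamped to [0, 1]."""
    low_latency = max(0.0, min(1.0, (50 - latency_ms) / 50))
    low_load = max(0.0, 1.0 - load)
    return min(low_latency, low_load)

def best_tier(tiers):
    """Pick the tier with the highest fuzzy suitability.
    `tiers` maps a tier name to (latency_ms, load in [0, 1])."""
    return max(tiers, key=lambda t: suitability(*tiers[t]))
```

With a busy local UE (1 ms, load 0.9), a moderately loaded edge (8 ms, load 0.4), and a distant cloud (40 ms, load 0.1), the edge wins: its fuzzy score balances both criteria instead of optimizing only one.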
This paper provides a first assessment of a framework that allows network operators to use advanced slicing mechanisms to establish bandwidth restrictions on the different flows of a multi-interfaced User Equipment (UE), even when it moves between different access technologies. The objective is to prevent individual flows from overwhelming the resources available to a slice, due to the often unpredictable traffic usage (both upstream and downstream) of UEs. To realize this, OVSDB bandwidth configuration and UE-OpenFlow support were integrated into a previously existing architecture, which used Network Function Virtualization (NFV) and Software Defined Networking (SDN) to create a virtualised representation of the UE (vUE) in the operator's cloud and handle slice flow-based mobility in a way transparent to the endpoints. We elaborate the framework in a scenario where telecommunication providers (i.e., mobile network operators (MNOs) and internet service providers (ISPs)) are able to instantiate network slices when requested by an over-the-top (OTT) provider. Our enhanced framework allows the network to implement end-to-end quality of service (QoS), allowing the slice mobility capability to preserve flow-based uplink airtime resources. The framework was implemented in an experimental testbed featuring 3GPP and non-3GPP links, with results showcasing the feasibility of the proposal.
Deliverable D2.1: System conceptual design of VIDAS
We explore a 'Smart-BnB scenario' whereby someone (an Owner) advertises a smart property on a web platform. Renters use the platform to rent the property for short periods, and may fully enjoy it, including its smart features such as sensors. This scenario should further ensure the Renter's privacy, so we use consent receipts and selective sharing. This paper describes a demonstrator of how smart environments can operate in a privacy-respecting manner.
The fact that most IoT solutions are provided by third parties, along with the pervasiveness of the collected data, raises privacy and security concerns. There is a need to verify which data is being sent to the third party, as well as to prevent those channels from becoming an exploitation avenue. We propose using existing API definition languages to create contracts which define the data that can be transmitted, in what format, and with which constraints. To verify compliance with these contracts, we propose a converging Multi-access Edge Computing architecture which validates REST-like API requests/responses against a Swagger schema. We deal with encrypted traffic using an SFC-enabled man-in-the-middle, allowing us to perform verifications in real time. We devised a proof of concept and showed that we were able to detect (and stop) contract violations.
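The contract check described above can be sketched with a toy validator: it enforces required fields, types, simple bounds, and rejects any field the contract does not declare (the data-minimization case). This is a minimal stand-in for a real Swagger/OpenAPI validator, and the schema keywords shown are only a small subset of the spec.

```python
def validate(payload, schema):
    """Check a decoded JSON payload against a minimal Swagger-style
    schema. Returns (ok, reason)."""
    types = {"string": str, "integer": int,
             "number": (int, float), "boolean": bool}
    for field in schema.get("required", []):
        if field not in payload:
            return False, f"missing required field: {field}"
    for field, rules in schema.get("properties", {}).items():
        if field not in payload:
            continue
        value = payload[field]
        if not isinstance(value, types[rules["type"]]):
            return False, f"{field}: wrong type"
        if "maximum" in rules and value > rules["maximum"]:
            return False, f"{field}: exceeds maximum"
    # Reject fields the contract does not declare (data minimization).
    extra = set(payload) - set(schema.get("properties", {}))
    if extra:
        return False, f"undeclared fields: {sorted(extra)}"
    return True, "ok"
```

A payload smuggling an undeclared `gps` field would be flagged even though every declared field is valid, which is exactly the leakage case the contracts are meant to catch.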
Internet of Things (IoT) solutions are becoming very popular, since everything can now be processed through a technological service. Currently, data is digital information, creating the need to design platforms and services that fill the gap between data sensors and processing frameworks. IoT platforms are responsible for attaching data sources to the remaining processing architecture. This paper presents a Machine to Machine (M2M) platform able to monitor data acquisition, processing, and visualization. SCoTv2 allows users to integrate several sources and obtain relevant information only by connecting their sensors to the platform. As our preeminent goal is creating a large-scale platform useful for several scenarios, a significant part of the study is related to software challenges and the connection between technologies. Therefore, our principal contribution is the definition of an effective architecture able to respond to different use cases.
The rise of 5th Generation (5G) based network systems provides the prospect of an unprecedented technological revolution in different aspects of current network infrastructures, to fully satisfy the high demands of smart spaces. This work addresses the challenges that arise in exploiting the potential of WiFi WLAN-sharing technology in 5G Ultra-Dense Networking (UDN) use cases. We investigate new complementary aspects of emerging 5G technologies, such as Network Function Virtualization (NFV) and Fog computing, to design a unique WiFi WLAN-sharing ecosystem that complies with critical 5G UDN requirements. In the resulting approach, we empower WiFi WLAN-sharing infrastructures with Fog computing capabilities and follow a slice-defined approach, aiming to provide differentiated services at unprecedented levels, on top of the same infrastructure, through customized, isolated, and independent building blocks. The solution also enables slices to accommodate applications besides networking functions, seeking to provide ultra-low latency by leveraging direct linkage to data-producer entities. A proof of concept was conducted by carrying out experiments in a real laboratory testbed, allowing insights into the feasibility and suitability of slicing WiFi WLAN-sharing systems.
Heating appliances consume approximately 48% of the energy spent on household appliances every year. Furthermore, a malfunctioning device can increase this cost even further. Thus, there is a need to create methods that can identify equipment malfunctions and eventual failures before they occur. This is only possible with a combination of data acquisition, analysis, and prediction/forecasting. This paper presents an infrastructure that supports the previously mentioned capabilities and was deployed for failure detection in boilers, making it possible to forecast faults and errors. We also present our initial predictive maintenance models based on the collected data.
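A rolling z-score detector is a common baseline for this kind of sensor-based failure detection; the sketch below is a generic stand-in for the paper's predictive-maintenance models, with a hypothetical window size and threshold.

```python
def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading deviates strongly (in z-score terms)
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = sum(recent) / window
        var = sum((x - mean) ** 2 for x in recent) / window
        std = var ** 0.5
        if std > 0 and abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged
```

On a boiler temperature series that hovers near 10 and suddenly jumps to 50, only the jump is flagged; in a deployed system such flags would feed the notification stage described above.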
The concept of cooperative communication appears as a beneficial method that can address key challenges faced by wireless networks. Cooperative techniques in IEEE 802.11 MAC protocols have thus received significant attention in both theoretical and practical respects. In this survey article, we provide an overview of existing research on cooperative MAC protocols for the IEEE 802.11 standard. We especially focus on protocol behavior and propose a novel architectural model for cooperation. We present a classification of cooperative relay-based MAC protocols along the model's categories, and review representative cooperative protocols for 802.11. We further evaluate the operational issues of cooperative protocols in terms of architecture, compatibility, and complexity.
The Future Internet approach requires new solutions to support novel usage scenarios driven by technological evolution and new service demands. However, this paradigm shift requires deep changes to existing systems, which makes Internet providers reluctant to deploy the full transformation the Future Internet requires. The Entity Title Architecture (ETArch) is a holistic clean-slate Future Internet system embedding new services for these scenarios, leveraging the Software Defined Networking (SDN) concept materialized by OpenFlow. However, legacy ETArch deploys a fully per-flow approach that provisions the same transport model for all sessions (equivalent to the Internet's best effort), suffering from performance drawbacks and lacking Quality of Service (QoS) control. To that end, we evolved ETArch with the SMART (Support of Mobile Sessions with High Transport Network Resource Demand) QoS control approach, which coordinates admission control and dynamic control of super-dimensioned resources to accommodate multimedia sessions with guaranteed QoS over time, while preserving scalability/performance and providing users with full Quality of Experience (QoE). The SMART-enabled ETArch system was evaluated on a real testbed of the OFELIA Brazilian Island, confirming its benefits over legacy ETArch in both the data and control planes.
The new communication paradigm established by social media, along with its growing popularity in recent years, has attracted increasing interest from several research fields. One such field is event detection in social media. The contribution of this article is a system to detect newsworthy events on Twitter. The proposed pipeline first splits the tweets into segments. These segments are then ranked. The top-k segments in this ranking are then grouped together. Finally, the resulting candidate events are filtered in order to retain only those related to real-world newsworthy events. The implemented system was tested with three months of data, representing a total of 4,770,636 tweets written in Portuguese. In terms of performance, the proposed approach achieved an overall precision of 88% and a recall of 38%.
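The segment -> rank -> group steps of the pipeline can be sketched as follows. Here segments are word bigrams ranked by raw frequency, and top segments are grouped when they co-occur in a tweet; the real system's burstiness ranking and newsworthiness filtering are omitted, so this is a structural illustration only.

```python
from collections import Counter
from itertools import combinations

def detect_events(tweets, k=3):
    """Toy event-detection pipeline: split tweets into bigram segments,
    rank segments by frequency, keep the top k, and merge top segments
    that co-occur in at least one tweet into candidate events."""
    seg_freq = Counter()
    seg_tweets = {}
    for i, text in enumerate(tweets):
        words = text.lower().split()
        for seg in zip(words, words[1:]):
            seg_freq[seg] += 1
            seg_tweets.setdefault(seg, set()).add(i)
    top = [seg for seg, _ in seg_freq.most_common(k)]
    groups = [{s} for s in top]
    merged = True
    while merged:          # merge groups sharing a supporting tweet
        merged = False
        for a, b in combinations(range(len(groups)), 2):
            if any(seg_tweets[x] & seg_tweets[y]
                   for x in groups[a] for y in groups[b]):
                groups[a] |= groups.pop(b)
                merged = True
                break
    return groups
```

On four toy tweets about a fire and a football match, the fire bigrams collapse into one candidate event and the football bigram forms another.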
Database applications are increasingly under pressure to respond effectively to ever more demanding performance requirements. Software architects can resort to several well-known architectural tactics to minimize the possibility of encountering performance bottlenecks. The usage of call-level interfaces (CLIs) is a strategy aimed at reducing the overhead of business components. CLIs are low-level APIs that provide a high-performance environment to execute standard SQL statements on relational and also on some NoSQL database (DB) servers. In spite of these valuable features, CLIs are not thread-safe when distinct threads need to share datasets retrieved from databases through Select statements. Thus, even in situations where two or more threads could share a dataset, there is no possibility other than providing each thread with its own dataset, leading to an increased need for computational resources. To overcome this drawback, in this paper we propose a new na...
Database applications are very pervasive tools that enable businesses to make the most of the data they collect and generate. Furthermore, they can also be used to provide services on top of such data that can access, process, modify, and explore it. It was argued in the work this paper extends that when client applications accessing a database directly run in public or semi-public locations that are not highly secured (such as a reception desk), the database credentials used could be stolen by a malicious user. To prevent such an occurrence, solutions such as virtual private networks (VPNs) can be used to secure access to the database. However, among other problems, VPNs can be bypassed by accessing the database from within the business network in an internal attack. A methodology called Secure Proxied Database Connectivity (SPDC) is presented, which aims to push the database credentials out of the client applications and divides the information required to access them between a proxy and an authentication server, while supporting existing tools and protocols that provide access to databases, such as JDBC. This approach is shown and further detailed in this paper in terms of attack scenarios, implementation, and discussion.
Agnostic fault-tolerant systems cannot recover to a consistent state if a failure/crash occurs during a transaction. By their nature, inconsistent states are very difficult to handle and to recover into the previous consistent state. One of the most common fault-tolerance mechanisms consists of logging the system state whenever a modification takes place, and recovering the system to its previous consistent state in the event of a failure. This principle was used to design a general log-based recovery model capable of providing data consistency on agnostic fault-tolerant systems. Our proposal describes how a logging mechanism can recover a system to a consistent state, even if a set of actions of a transaction was interrupted mid-way due to a server crash. Two approaches to implementing the logging system are presented: on local files, and in memory on a remote fault-tolerant cluster. The implementation of a proof of concept resorted to a previously proposed framework, which provides common relational features to NoSQL database management systems; among its missing features, that framework was not fault-tolerant.
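The log-then-apply discipline behind this recovery model can be sketched in a few lines: every mutation is journaled before it touches the store, so after a crash any transaction without a commit record is undone in reverse order. The in-memory list below stands in for either backend (local file or remote cluster); names are illustrative.

```python
class WriteAheadLog:
    """Minimal undo-log sketch: journal each write with its old value,
    then roll back uncommitted transactions on recovery."""
    def __init__(self):
        self.log = []    # stands in for the local-file / cluster log
        self.store = {}

    def begin(self, txid):
        self.log.append({"tx": txid, "op": "begin"})

    def write(self, txid, key, value):
        # Journal the old value BEFORE mutating the store.
        self.log.append({"tx": txid, "op": "set",
                         "key": key, "old": self.store.get(key)})
        self.store[key] = value

    def commit(self, txid):
        self.log.append({"tx": txid, "op": "commit"})

    def recover(self):
        """Undo, newest first, every write of a transaction that has
        no commit record in the log."""
        committed = {e["tx"] for e in self.log if e["op"] == "commit"}
        for entry in reversed(self.log):
            if entry["op"] == "set" and entry["tx"] not in committed:
                if entry["old"] is None:
                    self.store.pop(entry["key"], None)
                else:
                    self.store[entry["key"]] = entry["old"]
```

If a crash interrupts a transaction after some of its writes, `recover()` restores the last consistent state, which is precisely the guarantee the abstract claims for agnostic systems.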
IoT platforms have become quite complex from a technical viewpoint, becoming the cornerstone for information sharing, storing, and indexing, given the unprecedented scale of smart services made available by massive deployments of large sets of data-enabled devices. These platforms rely on structured formats that exploit standard technologies to deal with the gathered data, thus creating the need for carefully designed, customised systems that can handle thousands of heterogeneous data sensors/actuators, multiple processing frameworks, and storage solutions. We present the SCoT2.0 platform, a general-purpose IoT platform that can acquire, process, and visualise data using methods adequate for both real-time processing and long-term Machine Learning (ML)-based analysis. Our goal is to develop a large-scale system that can be applied to multiple real-world scenarios and is potentially deployable on private clouds for multiple verticals. Our approach relies on extensive service containe...
5G systems are putting increasing pressure on telecom operators to enhance users' experience, leading to the development of more techniques aimed at improving service quality. However, it is essential to take into consideration not only users' demands but also service providers' interests. In this work, we explore policies that satisfy both views. We first formulate a mathematical model to compute the End-to-End (E2E) delay experienced by mobile users in Multi-access Edge Computing (MEC) environments. Then, dynamic Virtual Machine (VM) allocation policies are presented, with the objective of satisfying mobile users' Quality of Service (QoS) requirements while optimally using cloud resources by exploiting VM resource reuse. Thus, maximizing the service providers' profit should be ensured while providing the service required by users. We further demonstrate the benefits of these policies in comparison with previous works.
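A back-of-the-envelope version of such an E2E delay model sums radio transmission time, queueing/processing time at the edge VM (modeled here as an M/M/1 sojourn time), and any backhaul latency. This decomposition and its symbols are generic placeholders, not the paper's actual model.

```python
def e2e_delay(data_bits, rate_bps, arrival_rate, service_rate, backhaul_ms=0.0):
    """E2E delay (seconds) for a MEC request: uplink transmission time
    + M/M/1 mean time in system at the serving VM + backhaul latency.
    arrival_rate and service_rate are in requests per second."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    transmission = data_bits / rate_bps
    sojourn = 1.0 / (service_rate - arrival_rate)  # M/M/1: 1 / (mu - lambda)
    return transmission + sojourn + backhaul_ms / 1000.0
```

An allocation policy can then compare this delay across candidate VMs and place the user on the cheapest VM that still meets the QoS bound, which is the trade-off between user QoS and provider cost described above.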
The state-of-the-art solutions for detecting Network Neutrality (NN) violations assume that all detectable Traffic Differentiations (TDs) are in fact NN violations. However, legislators and regulatory agencies issue instructions that establish which TDs may be considered violations (or are allowed), and under which conditions. We advocate that these instructions should be considered before signaling a detected TD as an NN violation. In this paper, we are concerned with quantifying how much these instructions influence the results achieved by state-of-the-art solutions. We analyzed the public dataset of TDs detected by Glasnost from the regulatory perspective. We found that in specific circumstances, up to 48% of detected TDs cannot be conclusively signaled as NN violations. Our findings point towards the need for additional considerations when designing solutions focusing on NN, and to weaker conclusions drawn by solutions that ignore the regulatory perspective of the Internet.
Network slicing emerges as a key technology in next-generation networks, boosted by the integration of software-defined networking and network functions virtualization. However, while allowing resource sharing among multiple tenants, such networks must also ensure the security requirements of the scenarios in which they are employed. This letter presents the leading security challenges in the use of network slices at the packet core, surveys the solutions that academia and industry are proposing to address them, and points out some directions that should be considered.


Programmers of relational database applications use software solutions (Hibernate, JDBC, LINQ, ADO.NET) to ease the development process of business tiers. These software solutions were not devised to address access control policies, much less evolving access control policies, in spite of their unavoidable relevance. Currently, access control policies, whenever implemented, are enforced by independent components, leading to a separation between policies and their enforcement. This paper proposes a new approach based on an architectural model referred to here as the Access Control driven Architecture with Dynamic Adaptation (ACADA). Solutions based on ACADA are automatically built to statically enforce access control policies based on schemas of Create, Read, Update and Delete (CRUD) expressions. CRUD expressions are then dynamically deployed at runtime, driven by the established access control policies. Any update to the policies is followed by an adaptation process that keeps the access control mechanisms aligned with the policies to be enforced. A proof of concept based on Java and Java Database Connectivity (JDBC) is also presented.
Most security threats in relational database applications have their source in client-side systems, when they issue requests formalized by Create, Read, Update and Delete (CRUD) expressions. If tools such as ODBC and JDBC are used to develop business logic, there is another source of threats: in some situations, the content of datasets retrieved by Select expressions can be modified and then committed into the host databases. These tools are agnostic not only of database schemas but also of the established access control policies. This situation can hardly be mastered by programmers of business logic in database applications with many and complex access control policies. To close this gap, we extend the basic role-based access policy to support and supervise the two sources of security threats. This extension is then used to design the corresponding RBAC model. Finally, we present a software architectural model from which static RBAC mechanisms are automatically built, thereby relieving programmers from mastering any schema. We demonstrate empirical evidence of the effectiveness
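The core of such a role-based gate on CRUD expressions can be sketched as a lookup that the generated business tier consults before executing any expression. The roles, tables, and actions below are hypothetical examples, not the paper's model.

```python
# Hypothetical policy: role -> table -> permitted CRUD actions.
POLICY = {
    "clerk":   {"orders": {"read"}, "customers": {"read", "update"}},
    "manager": {"orders": {"create", "read", "update", "delete"}},
}

def authorized(role, table, action):
    """Static per-expression check: is this CRUD action on this table
    allowed by the role's policy? Deny by default."""
    return action in POLICY.get(role, {}).get(table, set())
```

Automatically building the business tier from such a policy schema means a CRUD expression the role cannot execute is simply never exposed to the programmer, rather than being rejected at runtime.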
This research proposes an architecture for reusable components aimed at bridging the object-oriented and relational paradigms. The component, referred to here as the Business Tier Component, provides a single wide-range static interface able to manage a set of Create, Read, Update and Delete (CRUD) expressions, deployed at runtime and of any complexity, on behalf of application tiers. The only constraint is that the interface required to manage each CRUD expression must be a super-interface of the provided wide-range interface. The main research challenge of this paper is the definition of an architecture for reusable components aimed at dynamically managing a set of CRUD expressions, deployed at runtime, on behalf of application tiers.
Currently, business tiers for relational database applications are mostly built from software artifacts, among which the Java Persistence API, Java Database Connectivity, and LINQ are three representatives. Those software artifacts were mostly devised to address the impedance mismatch between the object-oriented and relational paradigms. Key aspects such as reusable business tier components and access control to data residing inside relational databases have not been addressed. To tackle these two aspects, this research proposes an architecture, referred to here as the Business Tier Architecture (BTA), to develop reusable business tier components which enforce access control policies on data residing inside relational database management systems. Besides BTA, this paper also presents a proof of concept based on Java and Java Database Connectivity (JDBC).
Call Level Interfaces (CLI) play a key role in database applications whenever performance is a key requirement. SQL statements are encoded inside strings, this way keeping the power and expressiveness of the SQL language. Unfortunately, despite this significant advantage, CLI do not promote the development of business tier components, much less business tier components driven by dynamic adaptation. To tackle this CLI drawback, while simultaneously keeping their advantages, this paper proposes an architecture, herein referred to as the Object-to-Relational Component Architecture (ORCA), relying on CLI for building adaptable business tier components. ORCA has the capacity to be dynamically adapted to manage any set of SQL statements deployed at runtime. The focus of this paper is threefold: 1) present ORCA, 2) present a proof of concept based on Java and, finally, 3) assess its performance against a standard CLI.
Access control is a key challenge in software engineering, especially in relational database applications. Current access control techniques are based on additional security layers designed by security experts. These additional security layers do not take into account the necessary business logic, leading to a separation between business tiers and access control mechanisms. Moreover, business tiers are built with commercial tools (e.g., Hibernate, JDBC, ODBC, LINQ) that are not tailored to deal with security aspects. To overcome this situation, several proposals have been presented. In spite of their relevance, they do not support the enforcement of access control policies at the level of the runtime values used to interact with protected data. Runtime values are critical entities because they play a key role in defining which data is accessed. In this paper, we present a general technique for statically checking, at the business tier level, the runtime values used to interact with databases, in accordance with the established access control policies. The technique is applicable to CRUD (create, read, update and delete) expressions and also to actions (update and insert) that are executed on data retrieved by Select expressions. A proof of concept is also presented. It uses a previously developed access control platform, which lacked the key feature addressed in this paper. The collected results show that the presented approach is an effective solution for enforcing access control policies at the level of the runtime values used to interact with data residing in relational databases.
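The core of value-level checking can be illustrated with a small sketch (the names, the role/parameter policy model and the branch example are assumptions, not the paper's actual platform): before a value is bound to a CRUD expression, it is validated against a per-role rule, e.g. a clerk may only touch rows of its own branch:

```java
import java.util.*;
import java.util.function.Predicate;

// Illustrative sketch of checking the runtime values bound to a CRUD
// expression against an access control policy before the statement
// reaches the database. Unknown (role, parameter) pairs are denied.
public class ValueChecker {
    // policy: for a given role and parameter, a predicate the value must satisfy
    private final Map<String, Predicate<Object>> rules = new HashMap<>();

    public void allow(String role, String param, Predicate<Object> rule) {
        rules.put(role + ":" + param, rule);
    }

    public boolean check(String role, String param, Object value) {
        Predicate<Object> rule = rules.get(role + ":" + param);
        return rule != null && rule.test(value);  // deny by default
    }
}
```

In the paper's setting the check happens at the business tier, so a disallowed value never leaves the client side.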
Call Level Interfaces (CLI) provide services aimed at easing the integration of database components and components from client applications. CLI support native SQL statements, this way keeping the expressiveness and performance of SQL. Thus, they cannot be discarded as a valid option whenever SQL expressiveness and SQL performance are considered key requirements. Despite the aforementioned performance advantage, CLI do not comprise other important performance features, such as concurrency over in-memory data. In this paper we present and assess a component that is a concurrent version of the ResultSet interface from the JDBC API. Several threads may interact concurrently with the same ResultSet instance while being simultaneously connected to the underlying database. The main contributions of this paper are twofold: i) the design of an Enhanced ResultSet component providing concurrent access to relational databases; ii) the evaluation of its performance. The Enhanced ResultSet performance is assessed in a real scenario. The outcome shows that performance gains may reach 80%.
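The key idea can be sketched as follows (names are illustrative, and plain in-memory rows stand in for the database-connected structure the paper wraps): the tuple set is shared, but each thread scrolls it through its own cursor, so one thread's protocol never preempts another's:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a concurrent tuple set in the spirit of the Enhanced ResultSet:
// shared rows, per-thread cursors instead of the single shared JDBC cursor.
public class ConcurrentTupleSet {
    private final List<String> rows = new CopyOnWriteArrayList<>();
    // each thread keeps its own position in the shared tuple set
    private final ThreadLocal<Integer> cursor = ThreadLocal.withInitial(() -> -1);

    public void insert(String row) { rows.add(row); }

    // advance this thread's cursor; returns false when past the last tuple
    public boolean next() {
        int i = cursor.get() + 1;
        cursor.set(i);
        return i < rows.size();
    }

    public String current() { return rows.get(cursor.get()); }
}
```

With the plain JDBC ResultSet, the single cursor forces threads to serialize whole row protocols; per-thread cursors remove that restriction.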
To store, update and retrieve data from database management systems (DBMS), software architects use tools such as Call Level Interfaces (CLI), which provide standard functionalities to interact with DBMS. However, the emergence of the NoSQL paradigm, and particularly of new NoSQL DBMS providers, leads to situations where some of the standard functionalities provided by CLI are not supported, very often due to their distance from the relational model or due to design constraints. As such, when a system needs to evolve, namely from a relational DBMS to a NoSQL DBMS, the architect must overcome the difficulties conveyed by the features not provided by the NoSQL DBMS. Choosing the wrong NoSQL DBMS risks major issues with components requesting non-supported features. This paper focuses on how to deploy features that are not commonly supported by NoSQL DBMS (such as stored procedures, transactions, save points and interactions with local memory structures) by implementing them in a standard CLI.
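One of these features, transactions with save points, can be emulated in the CLI layer roughly as follows (a sketch under assumed names, with a plain map standing in for a NoSQL store that offers only put/get): writes are buffered client-side and applied to the store only on commit:

```java
import java.util.*;

// Illustrative client-side emulation of transactions and save points on top
// of a key-value NoSQL store with no native transaction support.
public class ClientTxn {
    private final Map<String, String> store;               // the "NoSQL DBMS"
    private final List<String[]> log = new ArrayList<>();  // buffered writes
    private final Deque<Integer> savePoints = new ArrayDeque<>();

    public ClientTxn(Map<String, String> store) { this.store = store; }

    // writes go to the log, not to the store, until commit
    public void put(String key, String value) { log.add(new String[]{key, value}); }

    public void savePoint() { savePoints.push(log.size()); }

    // discard every write buffered since the last save point
    public void rollbackToSavePoint() {
        int mark = savePoints.pop();
        log.subList(mark, log.size()).clear();
    }

    // apply the surviving buffered writes to the store in one pass
    public void commit() {
        for (String[] w : log) store.put(w[0], w[1]);
        log.clear();
    }
}
```

This gives atomicity from the client's point of view only; a production design would also need concurrency control, which is outside this sketch.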
In database applications, access control security layers are mostly developed with tools provided by vendors of database management systems and deployed in the same servers that contain the data to be protected. This solution conveys several drawbacks. Among them we emphasize: 1) if policies are complex, their enforcement can lead to performance decay of database servers; 2) when modifications to the established policies imply modifications to the business logic (usually deployed at the client side), there is no alternative but to modify the business logic accordingly and, finally, 3) malicious users can systematically issue CRUD expressions against the DBMS hoping to identify a security gap. To overcome these drawbacks, in this paper we propose an access control stack characterized as follows: most of the mechanisms are deployed at the client side; whenever security policies evolve, the security mechanisms are automatically updated at runtime and, finally, client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This paper presents a new approach to enforcing access control in database applications, thereby expecting to contribute positively to the state of the art in the field.
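The stack's two distinctive properties, applications never handling CRUD expressions directly and mechanisms being replaceable at runtime, can be sketched like this (names and the operation-to-expression mapping are assumptions for illustration): the application asks for a business operation, and a policy push swaps the authorized mapping without touching application code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a client-side access control stack: the application names a
// business operation; the stack maps it to an authorized CRUD expression.
public class AccessControlStack {
    private volatile Map<String, String> authorized = new HashMap<>();

    // invoked when security policies evolve, replacing mechanisms at runtime
    public void updatePolicy(Map<String, String> newMapping) {
        authorized = new HashMap<>(newMapping);
    }

    // applications never build CRUD expressions themselves
    public String expressionFor(String operation) {
        String crud = authorized.get(operation);
        if (crud == null)
            throw new SecurityException("operation not authorized: " + operation);
        return crud;  // the real stack would execute it via JDBC, not return it
    }
}
```

Since the only CRUD expressions that can reach the DBMS are those in the current mapping, systematic probing with arbitrary expressions is blocked at the client side.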
Object-oriented programming is the most successful programming paradigm. Relational database management systems are the most successful data storage components. Despite their individual successes and their desirable tight binding, they rely on different points of view about data, entailing difficulties in their integration. Several solutions have been proposed to overcome these difficulties, such as Embedded SQL, object/relational mappings (O/RM), language extensions and even Call Level Interfaces (CLI), such as JDBC and ADO.NET. In this paper we present a new model aimed at integrating object-oriented languages and relational databases, named the CRUD Data Object Model (CRUD-DOM). CRUD-DOM relies on CLI (JDBC) and aims not only to explore CLI advantages, such as performance and SQL expressiveness, but also to provide a typestate approach to the implementation of the ResultSet interface. The model is designed to facilitate the development of automatic code generation tools. We also present such a tool, called CRUD Manager (CRUD-M), which provides automatic code generation with complementary support for software maintenance. This paper shows that CRUD-DOM is an effective model to address the aforementioned objectives.
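A typestate discipline for the ResultSet protocol can be sketched as follows (a simplified illustration, not CRUD-DOM's actual interfaces): each protocol state is a distinct type, so a call that is invalid in the current state fails at compile time rather than at run time, as it would with the plain JDBC ResultSet:

```java
// Typestate sketch of the ResultSet scrolling protocol: get() exists only
// on the OnRow type, so it cannot be called before next() has landed on a row.
public class TypestateRs {
    private final String[] rows;
    private TypestateRs(String[] rows) { this.rows = rows; }

    public static Scrolling open(String... rows) {   // initial state
        return new TypestateRs(rows).new Scrolling(-1);
    }

    public class Scrolling {                         // no row available yet
        private final int index;
        private Scrolling(int index) { this.index = index; }
        // advancing either lands on a row (OnRow) or runs off the end (null)
        public OnRow next() {
            int i = index + 1;
            return i < rows.length ? new OnRow(i) : null;
        }
    }

    public class OnRow {                             // a row is available
        private final int index;
        private OnRow(int index) { this.index = index; }
        public String get() { return rows[index]; }  // only legal in this state
        public OnRow next() {
            int i = index + 1;
            return i < rows.length ? new OnRow(i) : null;
        }
    }
}
```

Encoding the protocol in types is also what makes the model amenable to code generation: a tool can emit one state type per protocol stage.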
The development of database applications comprises three different tiers: the application tier, the database tier and, finally, the middle tier, also known as the data access layer. The development of each tier per se entails many challenges. Very often the most difficult challenges derive from non-functional requirements, such as productivity, usability, performance, reliability, high availability and transparency. This paper focuses on defining and presenting a model for the data access layer aimed at integrating object-oriented application tiers and relational database tiers. The model addresses situations in which users need to explicitly write down complex static Create, Read, Update and Delete (CRUD) expressions and simultaneously get advantages regarding some non-functional requirements. The model, known as the CRUD Data Object Model (CRUD-DOM), tackles the following non-functional requirements: performance, usability and productivity. The main contributions of this paper are threefold: 1) to present the CRUD-DOM model; 2) to carry out an enhanced performance assessment based on a case study; 3) to present a tool, called CRUD Manager (CRUD-M), which provides automatic code generation with complementary support for software testing and maintenance. The main outcome of this paper is the evidence that the pair CRUD-DOM and CRUD-M effectively addresses productivity, performance and usability requirements in the aforementioned context.
Call Level Interfaces (CLI) are software APIs used for building business tiers of relational database applications whenever performance is a key requirement. Nevertheless, their use is cumbersome, mainly in large database applications with many complex Create, Read, Update and Delete (CRUD) expressions. CLI are low-level APIs, conveying several difficulties during the development process of relational business tiers. Four of them are herein emphasized: 1) programmers need to master the schemas of the underlying databases; 2) the same CRUD expression is frequently rewritten to address different business needs; 3) CLI are not suited to cope with evolving business tiers and, finally, 4) CLI do not provide any feature to decouple the development process of relational business tiers from that of application tiers. To tackle these difficulties, this paper proposes an architecture for building reusable relational business tier components based on CLI, herein referred to as the Reusable Business Tier Architecture (RBTA). It relies on a customizable wide typed service to address a business area, such as accountability. The typed service is able to manage CRUD expressions, deployed at runtime, on behalf of application tiers and in accordance with users' needs. The only constraint is that the service required to manage each CRUD expression must be a subset of the implemented wide typed service. A proof of concept is also presented.
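The subset constraint can be sketched in a few lines (class and operation names are assumptions for illustration): a CRUD expression deployed at runtime declares the service it requires, and the component accepts it only if that requirement is contained in the wide typed service it implements:

```java
import java.util.*;

// Sketch of the RBTA conformance rule: a runtime-deployed CRUD expression is
// accepted only if its required service is a subset of the wide typed service.
public class WideTypedService {
    private final Set<String> implemented;               // the wide typed service
    private final Map<String, String> managed = new HashMap<>();

    public WideTypedService(Set<String> implemented) {
        this.implemented = implemented;
    }

    // deploy a CRUD expression at runtime, declaring the operations it needs
    public boolean deploy(String key, String sql, Set<String> required) {
        if (!implemented.containsAll(required)) return false;  // not a subset: reject
        managed.put(key, sql);                                 // accepted
        return true;
    }

    public boolean manages(String key) { return managed.containsKey(key); }
}
```

Because every accepted expression fits inside one wide service, a single component can be reused across many CRUD expressions of the same business area.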
Currently, programmers of database applications use standard APIs and frameworks as artifacts to develop business tiers aimed at integrating the object-oriented and relational paradigms. These artifacts provide programmers with the necessary services to develop business tiers. In this paper we propose a new architecture, based on general Call Level Interfaces, from which reusable and Adaptable Business tier Components (ABC) may be developed. Each individual ABC component is able to manage SQL statements of any complexity, deployed at runtime, and also to provide tailored services to each SQL statement. To accomplish this goal, the only requirement is that the schema of each deployed SQL statement must conform to one of the pre-defined static schemas (interfaces) of the recipient ABC component. The main contributions of this paper are threefold: 1) to present the new architecture, based on general Call Level Interfaces, on which ABC components are based; 2) to show that the source code of ABC components may be automatically built by a tool and 3) to present a concrete example of an ABC based on JDBC. The main outcome of this paper is the evidence that the presented architecture is an effective approach to building reusable and adaptable business tier components to bridge the object-oriented and relational paradigms.
Call Level Interfaces (CLI) are low-level APIs aimed at providing services to connect two main components of database applications: client applications and relational databases. Among their functionalities, the ability to manage data retrieved from databases stands out. The retrieved data is kept in local memory structures that may be permanently connected to the host database. Client applications, beyond the ability to read their contents, may also execute Insert, Update and Delete actions over the local memory structures, following specific protocols. These protocols are row (tuple) oriented and, while being executed, cannot be preempted to start another protocol. This restriction leads to several difficulties when applications need to deal with several tuples at a time. The most paradigmatic case is the impossibility of coping with concurrent environments, where several threads need to access the same local memory structure instance, each one pointing to a different tuple and executing its particular protocol. To overcome the aforementioned fragility, a Concurrent Tuple Set Architecture (CTSA) is proposed to manage local memory structures. A performance assessment of a Java component based on JDBC (CLI) is also carried out and compared with a common approach. The main outcome of this research is the evidence that, in concurrent environments, components relying on the CTSA may significantly improve overall performance when compared with solutions based on the standard JDBC API.