2018 International Conference on High Performance Computing & Simulation (HPCS), 2018
It is estimated that about 2.5 exabytes of data are produced daily. This large volume of data has opened up new application possibilities; however, managing it required new technologies. One of the most prominent is the Hadoop framework, which implements a parallel task-processing paradigm. The aim of this paper is to present the results of our group's research, which analyzed the performance of the Hadoop framework for Big Data processing. The performance evaluation focused on finding the saturation point of Hadoop performance by varying the number of nodes in the cluster and applying two benchmarks, TeraSort and Pi. The analysis was performed on a real infrastructure, implementing the system in a physical cluster, providing a general approach to performance analysis of the Hadoop framework for developers and researchers.
With Industry 4.0, data-based approaches are in vogue. However, extracting the essential features is not a trivial task and greatly influences the final result. There is also a need for specialized system knowledge to monitor the environment and diagnose faults. In this context, fault diagnosis is significant, for example, in a vehicle fleet monitoring system, since faults can be diagnosed even before the customer is aware of them, minimizing the maintenance costs of the modules. In this paper, several models using machine learning (ML) techniques were applied and analyzed during the fault diagnosis process in vehicle fleet tracking modules. Two approaches were proposed, 'With Knowledge' and 'Without Knowledge', to explore the dataset using ML techniques to generate classifiers that can assist in the fault diagnosis process. The 'With Knowledge' approach performs feature extraction manually, using the ML techniques: random forest, naive Bayes, support vect...
2018 International Conference on High Performance Computing & Simulation (HPCS), 2018
Today, the Internet is an essential component in the lives of the vast majority of people and has been contributing to unprecedented technological advances. The Internet of Things (IoT) is considered the first real evolution of the Internet since Web 2.0. Through it, revolutionary new applications with the power to permeate people's lives are expected to emerge. IoT is a term used to refer to the integration of physical and virtual objects, with different purposes and in different areas. This technology is enabled by several others, such as the popularization of sensors, wireless networks, and the explosion of cloud storage and big data. However, considering that any device equipped with computing resources and Internet access can collect, analyze, and make decisions based on pre-established criteria, an architecture for managing this environment is necessary and requires further study. Thus, the aim of this paper is to propose an architecture for the efficient management of all the infrastructure that composes an Internet of Things environment while ensuring Quality of Service (QoS). To this end, the proposed dynamic and self-manageable architecture combines mechanisms for prediction and load balancing, as well as access control, resource management, and security.
Proceedings of the 23rd Brazillian Symposium on Multimedia and the Web, 2017
The growth of video surveillance devices increases the rate of streaming data. However, even when working in a Fog Computing environment, these smart devices may fail to collect information, producing missing or invalid data. This issue can affect the user's quality of experience, because the PTZ controller may lose track of the target object. Therefore, this paper presents Singular Spectrum Analysis (SSA) as a method to replace missing values in this complex environment of intelligent surveillance cameras. Within the time-series field, SSA is characterized by performing a non-parametric spectral estimation with spatio-temporal correlations. Values that were not correctly monitored were accurately estimated by SSA, allowing the tracking of a suspect object.
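Since SSA-based gap filling recurs in this line of work, a minimal sketch may help. The papers do not publish their implementation; the code below is an illustrative numpy-only version of the usual iterative scheme, with assumed window length, rank, and iteration count: embed the series in a Hankel trajectory matrix, keep the leading SVD components, diagonally average back to a series, and overwrite only the missing positions until the fill stabilizes.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Rank-r SSA reconstruction: embed, truncated SVD, diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging maps the low-rank matrix back to a series.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return out / cnt

def ssa_impute(x, window=24, rank=4, iters=50):
    """Iteratively fill NaN gaps with the SSA reconstruction."""
    x = np.asarray(x, dtype=float)
    gaps = np.isnan(x)
    filled = np.where(gaps, np.nanmean(x), x)  # crude initial guess
    for _ in range(iters):
        filled[gaps] = ssa_reconstruct(filled, window, rank)[gaps]
    return filled
```

On a noise-free periodic series, two components (one harmonic) already recover short gaps closely; real camera or sensor streams would need the window and rank tuned to the data.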
Anais do XXXIX Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2021), 2021
In Microgrids, energy production is carried out by combining renewable and non-renewable generation sources. Controlling non-renewable generation is therefore essential to avoid waste. This type of problem has been investigated by several studies, which employ variations of Proportional-Integral-Derivative (PID) controller tuning to avoid energy losses. However, none of these works employed a strategy to reduce the time needed to reach equilibrium between the power generators. In this context, this work presents KaspaFog, an approach that employs a data prediction strategy using the SARIMA model and a neural network with reinforcement learning to adjust the control of power generation. KaspaFog is a fog infrastructure supported by the cloud, owing to the need for processing power and fast response times. Using KaspaFog, an 18% reduction in non-renewable energy production was achieved compared to the Zieg...
2018 International Conference on High Performance Computing & Simulation (HPCS), 2018
The number of surveillance cameras distributed over Smart Cities, and the streaming workload they generate, have increased. Although Fog computing has been used to reduce latency and jitter, IoT gateways may fail to collect information, producing missing or invalid data and affecting the quality of service. Therefore, this paper presents an analysis of gap-filling algorithms for the missing-data problem in a smart surveillance environment. A performance evaluation study has shown that it is possible to maximize the accuracy of data imputation using Singular Spectrum Analysis (SSA). Within the time-series field, SSA is characterized by performing a non-parametric spectral estimation with spatio-temporal correlations. Statistical outcomes have confirmed the need for accurate data imputation techniques in the Smart City environment, specifically in the surveillance scenario. In addition, the performance evaluation technique has made it possible to emphasize the contribution of data imputation approaches. These approaches can estimate values that were not correctly monitored, increasing the accuracy of the video-stream estimation and thus improving the quality of service.
Cloud computing introduces a new level of flexibility and scalability for providers and clients, as it addresses challenges such as rapidly changing Information Technology (IT) scenarios and the need to reduce the cost and time of infrastructure management. However, to offer quality of service (QoS) guarantees without limiting the number of accepted requests, providers must be able to dynamically and efficiently schedule service requests onto the computational resources available in the data centers. Load balancing is not a trivial task, involving challenges related to service demand, which can change instantly, as well as performance modeling, deployment, and monitoring of applications on virtualized IT resources. Thus, the goal of this work is to develop and evaluate the performance of different load balancing heuristics for a cloud environment, in order to establish a mapp...
Proceedings of the XIV Brazilian Symposium on Information Systems - SBSI'18, 2018
Cloud computing introduces a new level of flexibility and scalability for providers and clients, because it addresses challenges such as rapid change in Information Technology (IT) scenarios and the need to reduce costs and time in infrastructure management. However, to offer quality of service (QoS) guarantees without limiting the number of accepted requests, providers must be able to dynamically and efficiently schedule service requests onto the computational resources available in the data centers. Load balancing is not a trivial task, involving challenges related to service demand, which can shift instantly, as well as performance modeling, deployment, and monitoring of applications on virtualized IT resources. Accordingly, the aim of this paper is to develop and evaluate the performance of different load balancing heuristics for a cloud environment, in order to establish a more efficient mapping between service requests and the virtual machines that will execute them, and to ensure the quality of service defined in the service level agreement. Experiments verified that the proposed heuristics produced better results than traditional and artificial-intelligence heuristics.
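The specific heuristics proposed in the paper are not reproduced in the abstract; as an illustrative baseline for the request-to-VM mapping problem it describes, the sketch below implements a generic greedy least-relative-load heuristic. The capacity weights and request costs are assumptions for the example, not the paper's workload.

```python
import heapq

def least_loaded(requests, vm_capacities):
    """Greedy heuristic: route each request to the VM with the lowest
    load relative to its capacity. `requests` are abstract costs."""
    # Heap entries: (relative_load, vm_index, absolute_load).
    heap = [(0.0, i, 0.0) for i in range(len(vm_capacities))]
    heapq.heapify(heap)
    assignment = []
    for cost in requests:
        rel, i, load = heapq.heappop(heap)
        load += cost
        heapq.heappush(heap, (load / vm_capacities[i], i, load))
        assignment.append(i)
    return assignment

# Five requests over two VMs, the second with twice the capacity:
# the heavier VM absorbs proportionally more of the work.
plan = least_loaded([5, 3, 8, 2, 7], [1.0, 2.0])
```

Real schedulers would also react to measured load rather than declared cost, which is part of what makes the problem non-trivial.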
Virtualization tools play a fundamental role in the growing adoption of Cloud Computing. Through virtualization, it is possible to migrate virtual machines within a cloud service provider, enabling efficient use of resources. However, it has not been established which techniques are most suitable for the distinct load scenarios under which the system operates. Unlike the purely comparative performance evaluations found in the literature, this work proposes the use of a consistent statistical model to evaluate the performance of two migration techniques: (i) live migration and (ii) non-live migration. The goal of the statistical model is to identify the behavior of virtual machine migration techniques under different workload situations. The proposed statistical model is composed of the performance evaluation approach for the migration techniques of virtual machi...
2018 IEEE Symposium on Computers and Communications (ISCC), 2018
In the last 20 years, the amount of energy consumed has grown by more than 50%, and due to a future shortage of energy resources, it will not be possible to meet all this demand. The current distribution model transports energy from stations to consumers but does not consider the use of alternative sources. Smart grids have emerged to allow the inclusion of alternative forms of energy generation in the grid. Yet, to avoid overloading the system, it is necessary to calculate the power flow in real time. In this paper, we use Fog Computing as a means to reduce the logical distance between the central distribution point and the consumption spot. IoT devices at the network edge handle power-flow information more effectively and at lower cost. We evaluate the performance of the Newton-Raphson and Gauss-Seidel algorithms with the objective of performing real-time calculations of the load-flow problem with the help of the fog. Our results have shown that it is possible to make a smart grid ba...
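The abstract compares Newton-Raphson and Gauss-Seidel for load-flow calculation. The full AC power-flow equations are beyond the scope of this listing, but the trade-off between the two solver families, cheap linearly convergent fixed-point sweeps versus costlier quadratically convergent Newton steps, can be sketched on a scalar root-finding problem. The function and starting point below are illustrative assumptions, not the paper's test system.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration x <- g(x), the scheme behind Gauss-Seidel
    style solvers: cheap per step, linear convergence."""
    x = x0
    for it in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson: a derivative (Jacobian) per step, but quadratic
    convergence near the solution."""
    x = x0
    for it in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, it
    return x, max_iter

# Solve x = cos(x), i.e. f(x) = x - cos(x) = 0, from the same start.
root_gs, it_gs = fixed_point(math.cos, 1.0)
root_nr, it_nr = newton(lambda x: x - math.cos(x),
                        lambda x: 1 + math.sin(x), 1.0)
```

Both reach the same root, but Newton needs far fewer iterations, which mirrors why Newton-Raphson is usually preferred for real-time load flow despite the per-step Jacobian cost.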
Smart Grids are networks responsible for distributing energy securely and for providing fair consumption metering. Because they have a large number of sensors monitoring and recording different amounts of data throughout the day, they may fail to collect information, producing missing or invalid data and affecting the quality of service. This paper proposes an adaptive algorithm, built from the performance evaluation of two algorithms used for missing-data imputation: Spline and Singular Spectrum Analysis (SSA). The performance evaluation shows significant improvements in missing-data imputation with the constructed algorithm, allowing more accurate consumption metering even with missing data.
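The adaptive algorithm itself is not published in the abstract; the sketch below illustrates only the selection idea, pick whichever imputer performs better on artificially masked probe points, using two simple stand-in imputers (linear interpolation in place of Spline, mean fill in place of SSA). All names and parameters here are assumptions for the example.

```python
import numpy as np

def interp_impute(x):
    """Linear interpolation over NaN gaps (stand-in for Spline)."""
    x = x.copy()
    idx = np.arange(len(x))
    ok = ~np.isnan(x)
    x[~ok] = np.interp(idx[~ok], idx[ok], x[ok])
    return x

def mean_impute(x):
    """Fill NaN gaps with the series mean (crude baseline)."""
    x = x.copy()
    x[np.isnan(x)] = np.nanmean(x)
    return x

def adaptive_impute(x, rng=None):
    """Mask some known points, score each imputer on them, and apply
    the winner to the real gaps."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    ok = np.flatnonzero(~np.isnan(x))
    probe = rng.choice(ok, size=max(1, len(ok) // 10), replace=False)
    trial = x.copy()
    trial[probe] = np.nan
    best = min((interp_impute, mean_impute),
               key=lambda f: np.mean((f(trial)[probe] - x[probe]) ** 2))
    return best(x)
```

The same selection loop extends to any pair of imputers; with SSA and a cubic spline plugged in, it approximates the adaptive behavior the paper describes.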
Proceedings of the Symposium on Applied Computing, 2017
This paper carries out experimental evaluations involving the allocation of virtual machines in a cloud environment. Virtual machine allocation is an open research field in cloud computing, and getting it right can lead to the best performance for clients. However, allocations are made by estimating the number of resources that need to be allocated to the virtual machines in the host, without taking into account the possible workload required of these virtual machines. To carry this out, we set up a cloud prototype, together with virtual machines with the same configuration as those of the Amazon and Microsoft providers, so that our prototype could be validated. After this, we allocated as many virtual machines as possible on a single host based on our own infrastructure, involving homogeneous and heterogeneous workloads. The results showed that the benefits obtained from heterogeneous sets of virtual machines were better than those from the homogeneous sets.
Anais do Simpósio Brasileiro de Sistemas de Informação (SBSI), 2017
Cloud computing is a style of computing in which resource providers can transparently offer on-demand services and clients generally pay according to use. The cloud introduces a new level of flexibility and scalability for users, addressing challenges such as rapidly changing Information Technology (IT) scenarios and the need to reduce the cost and time of infrastructure management. However, to offer quality of service (QoS) guarantees without limiting the number of accepted requests, providers must be able to dynamically and efficiently schedule service requests to run on the available resources. Load balancing is not a trivial task, involving challenges related to service demand, which can change instantly, as well as performance modeling, and deployment and monitoring of applications on virtualized IT resources. Thus, the aim of this article is to dev...
Energy advancement and innovation have generated several challenges for large modernized cities, such as the increase in energy demand, prompting the appearance of small power grids with a local source of supply, called Microgrids. A Microgrid operates either connected to the national centralized power grid or on its own, in power-island mode. Microgrids address these challenges using sensing technologies and Fog-Cloud computing infrastructures for building smart electrical grids. A smart Microgrid can be used to minimize the power demand problem, but this solution needs to be implemented correctly so as not to increase the amount of data being generated. Thus, this paper proposes the use of Fog computing to help control power demand and manage power production by eliminating the high volume of data passed to the Cloud and decreasing the requests' response time. The GridLab-d simulator was used to create a Microgrid where it is possible to exchange information between consum...
As information about clients and businesses migrates to the cloud, there are growing concerns about how safe this environment is. Furthermore, it is known that as more stringent levels of security are required, the countermeasures necessary to maintain the security of the system increasingly interfere with performance. In the case of cloud computing, it is possible to compensate for the overhead generated by the security mechanisms by changing the number of available resources on the fly. The aim of this paper is to perform a performance evaluation of a service involving the application of security mechanisms. We considered changing computing resources during execution time by means of a dynamic and self-managed module, responsible for load balancing, efficient utilization of resources, and assurance of the Quality of Service level. According to the experimental results, we verified that the approach in the Vertical environment fulfilled the requirements defined in the Service Level Agreement, even with the security overhead, with only slight changes in service costs.