International Conference on Electronic Circuits and Signalling Technologies IOP Publishing
Journal of Physics: Conference Series 2325 (2022) 012052 doi:10.1088/1742-6596/2325/1/012052

Dynamic Task Scheduling and Resource Allocation for Microservices in Cloud

Sanath Mugeraya, Kailas Devadkar

Department of Computer Engineering, Sardar Patel Institute of Technology, Mumbai, India.

sanath.mugeraya@spit.ac.in, kailas_devadkar@spit.ac.in

Abstract- With the emergence of new companies and the expansion of the information technology sector, the need for cloud computing has become apparent, and enterprises are rapidly transitioning from monolithic architectures to microservice-driven architectures. This study found that existing task scheduling algorithms were designed for a fixed number of virtual machines, which created a bottleneck: when multiple tasks were assigned to the microservice scheduler, the time taken to process them increased significantly. To address this issue, a novel model is proposed in which virtual machines are generated dynamically according to the number of incoming tasks and the tasks are forwarded to the microservice scheduler one by one, thereby reducing the execution time. The study also found that, because of the multiple workloads placed on microservices, resource allocation becomes extremely difficult; this is addressed by deploying the microservices in containers. To demonstrate the dynamic task scheduling technique, a cloud microservice translator is developed in which a user can upload a text file and have it translated dynamically. The main aim of this research work is to improve task scheduling and resource allocation for microservices.

Keywords- Task Scheduling, Microservices, Resource Allocation, Cloud Computing

I. INTRODUCTION

With the increasing demand for data storage and web services, organisations traditionally ran their applications on physical servers. However, as data volumes and application functionality grew, traditional servers could no longer handle the load. This pushed companies and many other organisations to migrate their data and resources to a cloud environment. The need for cloud microservices has increased in recent times because they are more productive, flexible and cost-effective. Cloud microservices were introduced to tackle the problems of the traditional service-oriented cloud architecture, which puts scalability and fault tolerance at risk. Cloud microservices are a boon because they can execute independent instructions, unlike monoliths, which are difficult to maintain and evaluate owing to their complexity.

The main advantage of moving to cloud technology is that it is based on the concept of virtualization, the essential technology that powers cloud computing. In simple terms, virtualization separates the software layer of a server from its hardware layer.


In recent years, the general trend has been that most multinational companies are quickly shifting from monolithic architecture to microservices, owing to the increasing complexity of the monolithic style. A microservice architecture is an approach in which an application is developed as a set of small services, each running in its own process. Monolithic APIs (Application Programming Interfaces), by contrast, are made up of modules that are tightly coupled together, so changing one part of the API requires changes throughout the whole code base, which in turn increases complexity.

Traditional-style architectures cannot run in a cloud environment without large modifications. A microservice architecture is therefore an attempt to arrange an application as a set of loosely coupled modules. To make the microservices scalable, each microservice is packaged as a container before being deployed on any platform that supports containerized deployment.

II. RELATED WORK

[1] This paper discusses why deploying microservices is a demanding task. It highlights the importance of dynamic multi-objective scheduling to deal with poor resource allocation for microservices, which otherwise results in delayed response times and wasted resource capacity. It also recommends adopting orchestration frameworks such as Kubernetes, which enable proper scheduling of microservices and can improve the overall throughput of the system. Microservice scheduling was formulated as a knapsack problem, with object values and their respective weights as the two variables, and the resulting algorithm was called Least Waste Fast First. The proposed Least Waste Fast First algorithm was compared against First Come First Serve (FCFS), the Spread algorithm and the Binpack algorithm, and showed substantial improvements in resource utilization, CPU utilization, scheduling latency and execution time.

[2] This paper proposes an elastic scheduling algorithm that aims at better task scheduling and addresses the problem of application scaling, while scheduling tasks so as to meet deadline constraints. The authors found that existing algorithms ignore the importance of streaming workloads. Elastic scheduling also ensures that the cost of virtual machines is reduced. Having identified these problems, the authors integrate task scheduling with scaling configuration, formulating the streaming-workload problem as an optimization problem.

The proposed elastic scheduling has three components:

1. Configuration Solving

2. Urgency-based Scheduling

3. Auto-Scaling Configuration

To test this algorithm, a simulation platform was built in which elastic scheduling was compared with several other algorithms, such as ProCis, SCS and IC-OCPD2. The proposed elastic scheduling outperformed these algorithms in terms of success ratio, auto-scaling and response time.

[3] This paper proposes INTMA, an Interaction-Aware Resource Allocation algorithm, for deploying microservices. To deploy the microservices, a new model called a Binary Quadratic Programming problem was formulated. The INTMA algorithm was compared against INTRR, an interaction-aware Round Robin algorithm. The paper also addresses microservice-specific performance objectives, observing that when microservices are designed the focus is usually on resource capacity while the nature of the workload is underestimated; it therefore stresses the importance of microservice interaction. The Google Cloud Platform was used to compare the two algorithms and evaluate their performance metrics. In these tests, INTMA showed better response time and throughput, as well as an improvement in scheduling duration, compared with INTRR.

[4] This paper discusses the issues encountered when microservices are scheduled and highlights the problem of resource management. Even though there have been many advances in virtualization technologies, including container-based and hypervisor-based technologies, there is no framework or technology stack that manages a cluster of microservices. The paper also notes that, to achieve effective task scheduling when designing microservices, the number of requests each microservice will handle must be taken into consideration.

III. SYSTEM DESIGN AND ANALYSIS

Figure 1. System design of the proposed microservice application

To build the Lambda function, we first create an IAM (Identity and Access Management) role in the Amazon Web Services console, granting it access to the S3 object trigger, the AWS Translate service and AWS CloudWatch events.
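As a rough illustration of this step (not the exact console workflow used in the paper), the same IAM role could be created programmatically with boto3; the role name is hypothetical and the attached AWS-managed policies are broader than a production setup would normally use:

```python
# Illustrative sketch: create an IAM role that AWS Lambda can assume and attach
# managed policies for S3, Translate and CloudWatch Logs access.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="translate-microservice-role",              # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/TranslateFullAccess",
    "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
):
    iam.attach_role_policy(RoleName="translate-microservice-role", PolicyArn=policy_arn)

print(role["Role"]["Arn"])  # this ARN is later passed to the Lambda function
```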

To build a containerized microservice we use AWS ECR, i.e. Elastic Container Registry. Access to the Elastic Container Registry is obtained through the IAM role, from which we get the required AWS credentials. After obtaining access to ECR, we create a repository into which the Docker image, once built and run, is pushed.

After the Docker image is pushed to the newly created ECR repository, an image URI is generated, which is then used to create a Lambda function with the option of creating a Lambda function from a container image.
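A minimal sketch of this step, assuming the image has already been pushed to ECR; the function name, image URI and role ARN below are placeholders rather than values from the paper:

```python
# Illustrative sketch: create a container-image-based Lambda function from an ECR image URI.
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.create_function(
    FunctionName="dynamic-translator",                                        # hypothetical
    PackageType="Image",                                                      # image-based Lambda
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/translator:latest"},
    Role="arn:aws:iam::123456789012:role/translate-microservice-role",        # placeholder ARN
    Timeout=300,
    MemorySize=512,
)
print(response["FunctionArn"])
```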


In the AWS Lambda console, we create an S3 trigger, and Python code is written so that any file uploaded to the input S3 bucket triggers the Lambda function, which in turn invokes the AWS Translate resources.

AWS Translate dynamically provides all the resources needed for the content translation. The file is then translated dynamically and the translated content is uploaded to the output S3 bucket.
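The paper does not list the handler code, but a minimal sketch of such a function could look as follows; the output bucket name, key prefix and target language are assumptions, and Amazon Translate's TranslateText API has a per-request size limit, so very large files would need to be split into chunks:

```python
# Illustrative Lambda handler: an object created in the input S3 bucket triggers the
# function, the text is sent to Amazon Translate, and the result is written to an
# output bucket.
import boto3

s3 = boto3.client("s3")
translate = boto3.client("translate")

OUTPUT_BUCKET = "translated-output-bucket"   # hypothetical output bucket name


def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the uploaded text file from the input bucket.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Ask Amazon Translate for a translation (source language auto-detected).
        result = translate.translate_text(
            Text=body,
            SourceLanguageCode="auto",
            TargetLanguageCode="es",      # illustrative target language
        )

        # Upload the translated content to the output bucket.
        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=f"translated-{key}",
            Body=result["TranslatedText"].encode("utf-8"),
        )

    return {"statusCode": 200}
```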

Figure 2. Detailed architecture of the designed microservice application

In the proposed dynamic scheduling model, a virtual machine is created dynamically for each file uploaded to the S3 bucket, in contrast to previous models in which the files were assigned to a static number of virtual machines. Each file handled by a virtual machine is passed to a data broker, which combines and integrates the multiple existing services into a new service. From the data broker the file is passed to cloudlets, and from the cloudlets to the proposed algorithm, which in turn hands it to the microservice scheduler. The microservice scheduler converts the file into a microservice. Since the microservice is containerized using Docker, Docker acts as the orchestrator that schedules and runs the microservice in containers. The AWS Translate trigger that was created allocates the translation resources and writes the output to the output S3 bucket.
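The dispatch policy can be summarised as "one worker per incoming file" rather than a fixed pool. A minimal, simulation-style sketch of that idea is shown below; process_file is a stand-in for the broker, cloudlet and microservice-scheduler stages and is not part of the actual system:

```python
# Minimal sketch of the dynamic dispatch policy: one worker per submitted task,
# versus a fixed-size pool as in the static baseline.
from concurrent.futures import ThreadPoolExecutor
import time


def process_file(name: str) -> str:
    time.sleep(1)                    # placeholder for the translation pipeline
    return f"{name} translated"


def dynamic_schedule(files):
    # The number of workers grows with the number of tasks (one "VM" per file).
    with ThreadPoolExecutor(max_workers=max(1, len(files))) as pool:
        return list(pool.map(process_file, files))


def static_schedule(files, pool_size=2):
    # Baseline: all tasks share a fixed number of workers.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(process_file, files))


if __name__ == "__main__":
    uploads = [f"file_{i}.txt" for i in range(10)]
    print(dynamic_schedule(uploads))
```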


IV. PERFORMANCE EVALUATION

This section presents the experimental evaluation of the proposed dynamic scheduling and of elastic scheduling, including the experimental environment and the results.

1. EXPERIMENTAL ENVIRONMENT

To demonstrate the advantages of the proposed dynamic scheduling over elastic scheduling, a containerized microservice application was designed in which any text file uploaded to the input S3 bucket triggers the Lambda function, which in turn invokes the Amazon Web Services (AWS) translator, translates the text file and uploads the result to the output S3 bucket. As soon as files are uploaded to the input S3 bucket, a corresponding number of virtual machines is created dynamically. The virtual machines forward the files to the data broker, which submits them to the cloudlets. To show the difference between the two algorithms, batches of 5, 10, 15 and 20 files ranging from 5 kB to 100 kB were uploaded simultaneously to the input S3 bucket, and the timestamps on the output S3 bucket were recorded to demonstrate the advantages of the proposed dynamic scheduling model.

2. EXPERIMENTAL RESULTS

By noting the timestamps of the translated text files in the output S3 bucket, we obtain the time taken for both parallel and serial execution. Based on these times, a comparative study of the proposed dynamic scheduling and elastic scheduling was carried out using four performance parameters: execution time, speedup, efficiency and throughput.
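As a small worked sketch of how these four metrics follow from the recorded timestamps (the sample numbers below are illustrative, not the measured data):

```python
# Derive the four evaluation metrics from serial and parallel execution times.
def metrics(serial_time: float, parallel_time: float, num_tasks: int, num_vms: int):
    speedup = serial_time / parallel_time      # ratio defined in Section IV.2
    efficiency = speedup / num_vms             # ratio defined in Section IV.3
    throughput = num_tasks / parallel_time     # tasks completed per second (Section IV.4)
    return {
        "execution_time": parallel_time,
        "speedup": speedup,
        "efficiency": efficiency,
        "throughput": throughput,
    }


print(metrics(serial_time=60.0, parallel_time=14.0, num_tasks=5, num_vms=5))
```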

1. Execution Time

Number of Tasks | Dynamic Scheduling | Elastic Scheduling | Number of Virtual Machines | Improvement
5               | 14                 | 17                 | 5                          | 21.42%
10              | 17                 | 21                 | 10                         | 23.59%
15              | 20                 | 25                 | 15                         | 25%
20              | 23                 | 29                 | 20                         | 26.08%

Table 1. Execution time of the proposed dynamic scheduling and elastic scheduling, where the number of virtual machines equals the number of tasks.


Figure 3. Comparison of execution time of proposed Dynamic Scheduling and Elastic Scheduling.

According to the results in Table 1, the execution time of the proposed dynamic scheduling is reduced by 24.0225% on average in comparison with elastic scheduling.

2. Speedup

Speedup is defined as the ratio of the time taken for serial execution to the time taken for parallel execution:

Speedup = Time taken for Serial Execution / Time taken for Parallel Execution

Number of Tasks | Elastic Scheduling | Dynamic Scheduling | Number of Virtual Machines | Improvement
5               | 0.68               | 0.73               | 5                          | 7.35%
10              | 1.809              | 2                  | 10                         | 10.55%
15              | 3.4                | 3.933              | 15                         | 15.67%
20              | 4.79               | 5.91               | 20                         | 23.382%

Table 2. Speedup of the proposed dynamic scheduling and elastic scheduling, where the number of virtual machines equals the number of tasks.


Figure 4. Comparison of the speedup of the proposed dynamic scheduling and elastic scheduling.

According to the results in Table 2, the speedup of the proposed dynamic scheduling is increased by 14.238% on average in comparison with elastic scheduling.

3. Efficiency

Efficiency = Speedup / Number of Virtual Machines

Number of Tasks | Elastic Scheduling | Dynamic Scheduling | Number of Virtual Machines | Improvement
5               | 0.136              | 0.146              | 5                          | 7.37%
10              | 0.1809             | 0.2                | 10                         | 10.558%
15              | 0.2266             | 0.2622             | 15                         | 15.71%
20              | 0.2395             | 0.2955             | 20                         | 23.382%

Table 3. Efficiency of the proposed dynamic scheduling and elastic scheduling, where the number of virtual machines equals the number of tasks.


Figure 5. Comparison of Efficiency of proposed Dynamic Scheduling and Elastic Scheduling.

According to the results in Table 3, the efficiency of the proposed dynamic scheduling is increased by 14.255% on average in comparison with elastic scheduling.

4. Throughput

In this experiment, throughput is measured as the number of tasks completed per second.

Number of Tasks | Elastic Scheduling | Dynamic Scheduling | Number of Virtual Machines | Improvement
5               | 0.294              | 0.3571             | 5                          | 21.46%
10              | 0.476              | 0.588              | 10                         | 23.52%
15              | 0.6                | 0.75               | 15                         | 25%
20              | 0.69               | 0.869              | 20                         | 25.94%

Table 4. Throughput of the proposed dynamic scheduling and elastic scheduling, where the number of virtual machines equals the number of tasks.

Figure 6. Comparison of Throughput of proposed Dynamic Scheduling and Elastic Scheduling.


According to the results in Table 4, the throughput of the proposed dynamic scheduling is increased by 23.98% on average in comparison with elastic scheduling.

V. CONCLUSION AND FUTURE WORK

This paper proposes a dynamic task scheduling algorithm for microservices in a cloud computing environment. The proposed algorithm aims to minimize execution time and to improve the speedup, efficiency and throughput of the designed application. Compared with elastic scheduling, the execution time of the proposed dynamic scheduling is reduced by 24.0225%, the speedup is increased by 14.238%, the efficiency is increased by 14.255% and the throughput is increased by 23.98%.

In the near future there is scope for improvement, since microservices is a large and diverse topic. The task scheduling algorithm can be improved further and evaluated against additional parameters.

REFERENCES

[1] Hamid Mohammadi Fard (Technical University of Darmstadt), "Dynamic multi-objective task scheduling of microservices in cloud," in 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC).

[2] Sheng Wang, Zhijun Ding and Changjun Jiang, "Elastic scheduling for microservice applications in cloud," 2020.

[3] Christina Terese Joseph and K. Chandrasekaran, "Dynamic interaction-aware resource allocation for microservices in cloud," Cloud Computing Lab, Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, 2020, art. no. 101785.

[4] Maria Fazio and Antonio Celesti, "Issues in scheduling microservices in cloud," University of Messina, IEEE, 2016.

[5] Aniello Castiglione, "Challenges in delivering microservices in cloud," University of Salerno, IEEE, 2016.

[6] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier and L. Peterson, "Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors," in ACM SIGOPS, vol. 41, ACM, 2019, pp. 275-287.

[7] Y. Niu, F. Liu and Z. Li, "Load balancing across microservices," in Proc. IEEE INFOCOM, 2018, pp. 199-207.

[8] C. Qu, R. N. Calheiros and R. Buyya, "Auto-scaling web applications in clouds: a taxonomy and survey," ACM Computing Surveys, vol. 51, no. 6, pp. 173, Sep. 2019.

[9] M. Fazio, A. Celesti, R. Ranjan, C. Liu, L. Chen and M. Villari, "Open issues in scheduling microservices in the cloud," IEEE Cloud Computing, vol. 3, no. 5, pp. 81-88, 2018.


[10] E. Casalicchio, "A study on performance measures for auto-scaling CPU-intensive containerized applications," Cluster Computing, 2019.

[11] D. Bernstein, "Containers and cloud: from LXC to Docker to Kubernetes," IEEE Cloud Computing, pp. 81-84, 2019.

[12] J. P. Martin, A. Kandasamy and K. Chandrasekaran, "Exploring the support for high performance applications in the container runtime environment," Human-centric Computing and Information Sciences, vol. 7, no. 1, 2019.

