
Abstract— Cloud computing is a parallel and distributed framework for provisioning, using and delivering IT services over the Internet. With the growth of the cloud, scheduling has become one of its major challenges. Scheduling refers to a set of strategies that control the order in which work is performed by resources. A good scheduler adapts its scheduling strategy to the changing environment and the nature of the task. Manual scheduling consumes a great deal of time along with many resources. To avoid this, resources can be assigned to a task and managed until its completion through a workflow. In this paper we study different scheduling algorithms and enumerate the parameters they affect.

Keywords- cloud computing; scheduling; workflow management; workflow scheduling; resource utilization



Introduction

In recent years, the area of cloud computing has witnessed significant development and its applications have grown manifold. Cloud computing can be seen as the convergence of parallel computing, distributed computing, pervasive computing, utility computing and grid computing. The cloud computing model enables convenient, on-demand network access to shared, configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The cloud model promotes availability of resources. Chen et al. [1] report that "the cloud model comprises five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service), three service models (SaaS, PaaS and IaaS) that offer flexible payment services, and four deployment models (Private, Community, Public and Hybrid)". Huge amounts of Internet-based computing power and data storage, together with security, convenience and efficiency, can be obtained through the cloud. A range of issues has been widely studied in cloud computing; the most prominent is how to reduce cost while maintaining or improving the Quality of Service (QoS), and ultimately maximizing revenue.

                Workflow technology can support business process management and business process automation. It can improve the adaptability of a business process system and its capacity to accommodate change. Using workflow technology, a business process can be decomposed into a set of small, manageable activities. The procedural semantics that describe the constraint relationships between activities can be explicitly modelled and controlled by workflow technology. A workflow application is described by a directed acyclic graph: the nodes of the graph denote tasks and the edges denote the dependencies between them. A single workflow typically contains a series of tasks, and because of these dependencies each task may pass data to other tasks. The definition and execution control of business processes can be effectively managed by a workflow management system (WfMS). The key component of a WfMS is the workflow engine, which manages the workflow's execution and in turn performs workflow scheduling, data transmission and fault-tolerance management. The primary aim of workflow scheduling is to allocate each task to an appropriate resource at a specific time. Data transmission handles the communication between data resources. When the execution of a task goes wrong, fault-tolerance management comes into play. For a WfMS, workflow scheduling is the most critical component, because a proper scheduling policy significantly affects the performance of the WfMS.

Process of Scheduling

This section presents a brief, step-wise description of the scheduling procedure in a cloud environment. The cloud scheduling process is divided into the following stages [2]:

i.        Identification of resources and status gathering – The data centre brokers identify the resources available in the cloud's network and then gather information about their current status.

ii.       Resource selection on the basis of parameters – In this stage resources are selected and decisions are made. The selection of a resource for task execution is guided by parameters such as time, cost, etc.

iii.      Task execution – In this last stage the task is submitted to the selected resource and processing begins. User interaction is limited to submitting the task and specifying its requirements; everything else is handled by the cloud provider's broker. The task is thus assigned to a virtual machine resource and finally executed.
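The three stages above can be sketched in code. This is a minimal illustration, not a real broker: the resource attributes, the cost-based selection rule, and all names are assumptions made for the example.

```python
# Hedged sketch of the three-stage cloud scheduling process described above.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    available: bool       # status gathered by the broker in stage (i)
    cost_per_hour: float  # parameter used for selection in stage (ii)
    speed: float          # relative processing speed (work units per hour)

def schedule_task(task_length: float, resources: list) -> Resource:
    # Stage (i): keep only resources the broker found to be available.
    candidates = [r for r in resources if r.available]
    if not candidates:
        raise RuntimeError("no available resource in the cloud network")
    # Stage (ii): select a resource by a parameter, here cost of execution.
    return min(candidates, key=lambda r: (task_length / r.speed) * r.cost_per_hour)

# Stage (iii): the chosen resource (VM) would then execute the submitted task.
pool = [Resource("vm-small", True, 0.05, 1.0),
        Resource("vm-large", True, 0.15, 4.0),
        Resource("vm-down", False, 0.01, 1.0)]
best = schedule_task(100.0, pool)
print(best.name)   # the fast VM is cheaper overall for this task length
```

Here the unavailable VM is filtered out in stage (i), and the fast VM wins in stage (ii) because its shorter runtime outweighs its higher hourly price.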


scheduling is the assignment of the application tasks to the different Virtual
Machines (VMs) for execution 3. The inter-dependent tasks execution on
distributed resources is mapped and managed by the scheduling process. The
allocation of relevant resources to the workflow tasks is done by the
scheduling process such that execution is carried out completely in accordance
with the Service Level Agreements (SLAs) with the users. Workflow scheduling is
a prominent 4 NP-complete problem. For distributed systems like grids,
several heuristic as well as meta-heuristic methods have already been proposed.
After the workflow is defined, all such tasks for which the parent tasks have been
completed can be executed. Hence a workflow scheduling can harness the parallel
hardware potential in a process intensive way by intelligent assignment and
distribution of the tasks among the available resources as and when those
resources become available. However there is also and apprehension that it could
be a naïve strategy to place a task on the next available resource. It is generally
seen in data intensive workflows that the scheduling of child operations on
same resources on which their parents are also scheduled dramatically reduces
data transfer operations. Additionally during the execution process there is
every possibility of a schedule, that was initially valid, becoming invalid at
a later stage due to the allocated resources going offline or also may be due
to increase in network latencies/ congestion. QoS is the collective effort of
services performance, which determines the degree of the satisfaction of a user
for the services and can be expressed in terms of qualitative measures like
makespan (completion time), cost, availability, reliability, priority, load
balance and deadline 5. A Compromised-Time-Cost (CTC) algorithm to achieve
lower cost while meeting the user-designated deadline has been proposed 6.
However, this was found to increase the overall job completion time. FIFO
scheduling was proposed by evaluating the entire group of tasks in the job
queue 7. It, however, failed to optimize the scheduling to adapt to changing
loads. A heuristic based scheduling on Particle Swarm Optimization (PSO), 8
lead to minimization of total cost of execution but did not cater to optimizing
the completion time. A priority based job scheduling 9 model was proposed for
assigning priority to workflow tasks and makespan, but, cost was not
considered. In Ant Colony Optimization, a single VM which has better
performance will be used for executing all tasks. This leads to load imbalance
as there would be heavy load on one VM while others would be idle. The
algorithm was found to be not scalable and the makespan of the tasks was also
large. Bi-Objective Priority based Particle Swarm Optimization schedule
workflows to minimize execution cost while meeting a user defined deadline10.
However, minimization of completion time was not considered. Meta heuristic
techniques discussed above, have considered only one or two QoS parameter 
11. However, SLAs require multiple QoS criteria to be met and efficient
workflow methodology are required to meet these requirements.

In summary, workflow scheduling remains a key challenge for cloud computing applications. A directed acyclic graph (DAG) can be used to represent a workflow application, and scheduling it is a Non-deterministic Polynomial (NP)-complete problem. Minimization of the makespan of a parallel application, by properly allocating tasks to processors/resources and properly arranging the task-execution sequences, is the main objective of workflow scheduling techniques.
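The objective just stated can be written formally. The notation below is assumed for illustration: ST and FT denote a task's start and finish time, and E is the set of precedence edges.

```latex
M = \max_{t_i \in T} FT(t_i), \qquad
\min_{f : T \to R} M
\quad \text{subject to} \quad
FT(t_p) \le ST(t_c) \;\; \forall \, (t_p, t_c) \in E
```

That is, the scheduler seeks an assignment f of tasks to resources that minimizes the latest finish time while respecting every parent-before-child constraint.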

It is also pertinent to note that cloud services are, to a great extent, paid services for processing, bandwidth, storage and so forth. The pay-as-you-go business model can further reduce the cost of workflow execution. Consequently, optimization of workflow execution in the cloud has recently become one of the hotspots in workflow and cloud computing research.

As a rule, tasks must be directed to appropriate execution resources by workflow scheduling algorithms in the cloud, which directly influences the success rate of cloud workflow scheduling and its execution efficiency. Moreover, unlike traditional workflow scheduling, cloud workflow scheduling must consider not only the optimal combination and utilization of resources but also the timing and causal constraints of each task in order to obtain the final result. As a consequence, the cloud workflow scheduling problem is typically NP-hard. Research in cloud workflow scheduling has the following implications:

It can greatly improve the satisfaction of the user's QoS requests. It not only improves the user's satisfaction with workflow execution cost but also attracts users to cloud services, thereby helping providers achieve maximum profit.

It can improve the utilization of the cloud resources offered by the cloud service provider. By taking the characteristics of workflow instances into account, the utilization of the resources involved in workflow scheduling can be improved substantially.

It can greatly promote the development and adoption of cloud computing as well as workflow technology. It has tremendous potential, especially in areas such as astronomy, biomedicine, gene-expression data analysis, and also in instance-intensive applications such as e-commerce.

Workflow Scheduling Model

In a workflow scheduling model, the overall distribution of resources among the various tasks is represented by a model. The workflow scheduling model can be described as follows:


A workflow can be modeled by a directed acyclic graph G(T, E), where:

T is a set of n tasks {t1, t2, ..., tn}, and R denotes the resources;

E is a set of arcs between tasks;

each arc ei,j = (ti, tj) represents a precedence constraint;

two dummy tasks, tentry and texit, mark the start and end of the workflow.

Scheduling problem

The workflow scheduling problem is the mapping of workflow tasks to Grid or cloud resources, T -> R (i.e. each task is assigned to a resource), so that the makespan M is minimized.

A workflow task is a set of instructions that can be executed on a single processing element of a computing resource. In a workflow, an entry task has no parent task and an exit task has no child task. Furthermore, a child task cannot be executed until all of its parent tasks have finished. A task whose parent tasks have all completed at a given point in the scheduling is known as a ready task.
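The DAG model and the ready-task rule can be illustrated directly. The task and edge names below are assumptions made for the example; a task becomes ready exactly when all of its parents are in the completed set.

```python
# Illustrative sketch of the workflow DAG model G(T, E).
workflow_edges = {                      # E: precedence constraints e_ij = (t_i, t_j)
    ("t_entry", "t1"), ("t_entry", "t2"),
    ("t1", "t3"), ("t2", "t3"),
    ("t3", "t_exit"),
}
tasks = {t for edge in workflow_edges for t in edge}   # T: the task set

def parents(task):
    """All tasks that must finish before `task` may start."""
    return {p for (p, c) in workflow_edges if c == task}

def ready_tasks(completed):
    """Tasks whose parents have all finished and which have not yet run."""
    return {t for t in tasks
            if t not in completed and parents(t) <= completed}

print(ready_tasks(set()))                       # only the entry task is ready
print(ready_tasks({"t_entry", "t1", "t2"}))     # t3 becomes ready
```

A scheduler repeatedly picks from this ready set, which is how the parallelism of a workflow is exposed to the available resources.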

The scheduling methodology also has to take the interdependency of the tasks into consideration while mapping them to distributed resources. The user's task requests should also be provisioned with minimal management effort and resource-provider interaction.

Quality of Service (QoS) Constraints

Quality of service refers to traffic prioritization and resource-reservation control mechanisms rather than to the achieved service quality. It is the capability to give different priority to different applications, users or data flows, or to guarantee a certain level of performance to a data flow. The main QoS constraints are:

Makespan: the total length of the schedule, i.e. the time at which all jobs have finished processing.

Cost: the overall cost of setting up the pool of resources (Virtual Machines) on which execution is performed.

Scheduling time: the scheduling overhead, i.e. the time used by the scheduler to produce the schedule.

Deadline: deadline constraints suspend or end running jobs at a specific time. There are two sorts of deadline constraints:

A run window, specified at the queue level, suspends a running job.

A termination time, specified at the job level, terminates a running job.

Load balance: cloud load balancing is the process of distributing workloads and computing resources in a cloud computing environment. Load balancing enables an enterprise to manage application or workload requests by allocating resources among different computers (PCs), networks or servers.

Energy efficiency: efficient energy use, sometimes simply called energy efficiency, is the goal of reducing the amount of energy required to provide products and services.
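Several of these measures can be computed mechanically from a finished schedule. The sketch below is a hedged example: the schedule layout (task -> (vm, start, finish)), the VM prices, and the deadline value are all assumptions made for illustration.

```python
# Computing the QoS measures defined above for a completed schedule.
schedule = {                    # task -> (vm, start time, finish time)
    "t1": ("vm1", 0.0, 4.0),
    "t2": ("vm2", 0.0, 3.0),
    "t3": ("vm1", 4.0, 9.0),
}
vm_price_per_unit = {"vm1": 0.2, "vm2": 0.1}   # illustrative prices

# Makespan: when all jobs have finished processing.
makespan = max(finish for (_, _, finish) in schedule.values())

# Cost: pay-as-you-go charge for the busy time on each VM.
cost = sum((finish - start) * vm_price_per_unit[vm]
           for (vm, start, finish) in schedule.values())

# Deadline: a run-level constraint the schedule either meets or violates.
deadline = 10.0
meets_deadline = makespan <= deadline

# Load balance: busy time per VM; a smaller spread means a better balance.
busy = {}
for vm, start, finish in schedule.values():
    busy[vm] = busy.get(vm, 0.0) + (finish - start)

print(makespan, round(cost, 2), meets_deadline, busy)
```

In this toy schedule the makespan is 9.0, the cost is 2.1, the deadline of 10.0 is met, and vm1 carries 9.0 units of work against vm2's 3.0, exposing the load imbalance a balancing scheduler would try to reduce.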

Recent Research Trends

This section describes some recent research approaches for allocating resources to tasks on each virtual machine, and analyses which workflow scheduling algorithm works best under given quality of service (QoS) constraints.

                Wu, X. et al. [3] proposed a QoS-driven task scheduling algorithm for cloud computing. The algorithm first computes the priority of tasks, reflecting their precedence relations according to their special attributes, and then sorts the tasks by priority. Secondly, it evaluates the completion time of each task on the different services and schedules each task as early as possible according to the sorted task queue.

The authors of [8] experimented with a workflow application by varying its computation and communication costs. They compared the cost savings of Particle Swarm Optimization (PSO) with the existing 'Best Resource Selection' (BRS) algorithm. Results showed that PSO can achieve up to three times the cost savings of BRS, along with good distribution of workload onto resources.
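A minimal PSO sketch for mapping tasks onto VMs conveys the idea behind such cost-driven approaches. This is not the authors' exact algorithm: the particle encoding (one continuous dimension per task, rounded to a VM index), the cost model, and all parameters are assumptions made for illustration.

```python
# Hedged PSO sketch: map n tasks onto m VMs to reduce execution cost.
import random
random.seed(1)                            # reproducible toy run

task_length = [10, 20, 15, 30]            # work units per task (illustrative)
vm_cost = [0.1, 0.4]                      # price per work unit on each VM

def cost(mapping):                        # mapping[i] = VM index for task i
    return sum(task_length[i] * vm_cost[vm] for i, vm in enumerate(mapping))

n_tasks, n_vms, n_particles = len(task_length), len(vm_cost), 10
pos = [[random.uniform(0, n_vms - 1) for _ in range(n_tasks)]
       for _ in range(n_particles)]
vel = [[0.0] * n_tasks for _ in range(n_particles)]
pbest = [p[:] for p in pos]               # each particle's best position
gbest = min(pbest, key=lambda p: cost([round(x) for x in p]))[:]

for _ in range(50):
    for k in range(n_particles):
        for d in range(n_tasks):
            r1, r2 = random.random(), random.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[k][d] = (0.7 * vel[k][d]
                         + 1.5 * r1 * (pbest[k][d] - pos[k][d])
                         + 1.5 * r2 * (gbest[d] - pos[k][d]))
            pos[k][d] = min(max(pos[k][d] + vel[k][d], 0), n_vms - 1)
        if cost([round(x) for x in pos[k]]) < cost([round(x) for x in pbest[k]]):
            pbest[k] = pos[k][:]
        if cost([round(x) for x in pbest[k]]) < cost([round(x) for x in gbest]):
            gbest = pbest[k][:]

best_mapping = [round(x) for x in gbest]  # discrete task-to-VM assignment
print(best_mapping, cost(best_mapping))
```

Each particle encodes a full task-to-VM assignment, and the swarm is pulled toward cheaper assignments; a real scheduler would use a richer fitness function covering data-transfer costs and deadlines.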

                Sornapandy, S. & Sadhasivam, G. [12] addressed an approach for optimizing scheduling using the nature-inspired Intelligent Water Drops (IWD) algorithm. The performance of the Ant Colony Optimization (ACO) algorithm for task scheduling was compared with the proposed IWD approach, and it was shown that task scheduling using IWD can efficiently and effectively allocate tasks to suitable resources in the grid.

                Du, Y. & Li, X. [13] set up a workflow model fitted to the existing framework and resources. The framework adopts an adaptable workflow process system to improve versatility and openness and to handle exceptions and abnormalities. The results showed that it not only provides the dispatching-order framework with drafting and release functions, but also frees the dispatcher from fussy day-to-day business, simplifies the task flow and increases production efficiency.

The authors of [14] presented a novel approach for load balancing aimed at task scheduling with efficient provisioning of cloud resources. The paper balances the task load among the different VMs, trying to distribute the load equally across all computing nodes while maximizing their utilization and minimizing the total task execution time and cost.

                Prakash, V. & Bala, A. [15] concentrated on workflow scheduling that decreases the overall execution time of the jobs in the workflow. The proposed scheme was assessed through simulation-based investigation in WorkflowSim and is powerful enough to use the resources optimally. The results showed that the execution time of the proposed approach is lower than that of existing approaches.

                Elayaraja, J. & Dhanasekar, S. [16] presented the important heuristics such as cost, makespan and number of cores (multicore). They introduced Ant Colony Optimization for the scheduling problem in hybrid clouds and also considered the available bandwidth when scheduling workflows. They concluded that Ant Colony Optimization is one of the best optimization techniques for scheduling workflows using heuristics.

                Rodriguez, A. & Buyya, R. [17] proposed a resource provisioning and scheduling strategy for scientific workflows on Infrastructure as a Service (IaaS) clouds. Their algorithm, based on the meta-heuristic optimization technique Particle Swarm Optimization (PSO), aims to minimize the overall workflow execution cost while meeting deadline constraints. The heuristic was evaluated using CloudSim on several well-known scientific workflows of various sizes, and the results showed that their approach performs better than the current state-of-the-art algorithms.

                Gupta, I., Kumar, M. S. & Jana, P. K. [18] proposed a task-duplication-based workflow scheduling algorithm for heterogeneous cloud environments. The algorithm has two phases: the first computes the priority of all tasks, and the second performs scheduling with task duplication by calculating the data arrival time from one task to another. The algorithm aims to minimize workflow execution time and maximize resource utilization, and its performance was evaluated on benchmark scientific workflow applications with different task-cloud heterogeneity.

                Adhikari, M. & Amgoth, T. [19] presented an Efficient Workflow Scheduling Algorithm (EWSA) that can deal with a large number of applications at the same time. The goal of the algorithm is to estimate the execution time of all tasks dynamically. It also creates a suitable Virtual Machine (VM) with minimal resources such that the whole application can be executed within its deadline. Through simulation they established that the proposed algorithm outperforms existing algorithms on various performance metrics.

                Zhou, C. & Garg, S. K. [20] assessed several of the most commonly used workflow scheduling algorithms to understand which performs best for the efficient execution of dynamic workflows. To improve the effectiveness of dynamic, Big-Data-driven workflow scheduling, they examined the performance of a wide variety of commonly used scheduling algorithms, including FCFS, MinMin, MaxMin, Round-Robin, MCT, Data-Aware, HEFT and DHEFT, on several popular scientific workflows: CyberShake, Montage, Epigenomics, Inspiral and Sipht. The experiments were conducted under four kinds of realistic scenarios reflecting how different dynamic aspects of Big Data can affect the execution of the workflows.

                Sooezi, N., Abrishami, S. & Lotfian, M. [21] proposed a communication-based algorithm for workflows with enormous volumes of data in a multi-cloud environment. The algorithm adapts the notion of Partial Critical Paths (PCP) to limit the cost of workflow execution while meeting a user-defined deadline.

Conclusion and Future Work

                Owing to their fundamental properties, workflow technologies are well suited to the development of industrial and scientific applications, and thanks to high-end provisioning technology, cloud computing allows elasticity in the acquisition of computing and storage resources for executing workflow applications. The design of workflow systems allows applications to be decomposed into tasks and executed in an organised manner. In cloud computing, task scheduling and resource management are important because they affect both economy and application execution time. Various state-of-the-art tools are employed for analysing scheduling algorithms before the algorithms are put into action in an actual cloud environment. In this paper we have reviewed the work already done by various researchers on workflow scheduling algorithms, and the findings have been tabulated on the basis of scheduling parameters and cloud environment. The parameters discussed in the literature, on which researchers have based their work in workflow scheduling, include execution time, resource utilisation, cost optimisation and makespan minimisation. Our observations reveal that factors such as the reliability of real-time applications, balanced resource usage, backup and fault tolerance should also be included in the optimisation.

The impact that different pricing-interval lengths have on workflow scheduling is also worth studying in future research. Another noteworthy area for future research is instance-intensive workflows. Much more work can also be done to improve intelligent workflow scheduling algorithms by taking various QoS constraints into consideration. The literature review has revealed that, although many QoS factors have been taken into consideration, there is still room for improvement in load balancing, operation cost and energy consumption.

Shiv Kumar, Vinus Sharma, Rahul Sharma

Department of Computer Science

Govt. Gandhi Memorial Science College,

Jammu, J&K


A Study of Workflow Scheduling Algorithms in Cloud Computing Under QoS Constraints
