Workload in Cloud Computing

Here we use the term workload to refer to the utilization of the IT resources on which an application is hosted. Workload is the consequence of users accessing the application or of jobs that need to be handled automatically. Workload manifests in different forms, depending on the type of IT resource for which it is measured: servers may experience processing load, storage offerings may be assigned larger or smaller amounts of data to store or may have to handle queries on that data, and communication IT resources, such as networking hardware or messaging systems, may experience different data or message traffic. Within the scope of the abstract workload patterns, we merely assume that this utilization is measurable in some form [14].
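The idea that utilization is measurable in some form can be illustrated with a short sketch. The snippet below samples CPU, memory, and network counters on a single host; it uses the third-party psutil library, and the sampling interval and choice of metrics are illustrative assumptions rather than part of the pattern definitions in [14].

```python
import time
import psutil  # third-party library for host-level resource metrics

def sample_workload(interval_s=1.0):
    """Take one utilization sample for several types of IT resources."""
    cpu = psutil.cpu_percent(interval=interval_s)  # processing load (%)
    mem = psutil.virtual_memory().percent          # memory utilization (%)
    net = psutil.net_io_counters()                 # cumulative traffic counters
    return {
        "timestamp": time.time(),
        "cpu_percent": cpu,
        "memory_percent": mem,
        "bytes_sent": net.bytes_sent,
        "bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    # A series of such samples over time forms the utilization profile,
    # i.e. the workload experienced by the host.
    for sample in (sample_workload() for _ in range(5)):
        print(sample)
```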

Static Workload: IT resources with an equal utilization over time experience static workload. Static workloads are characterized by a more-or-less flat utilization profile over time within certain boundaries. This means that there is normally no explicit need to add or remove processing power, memory, or bandwidth because of changes in workload. When provisioning for such a workload, the necessary IT resources can be provisioned for this static load plus a certain overprovisioning rate to deal with the minimal variances in the workload. There is a relatively low cost overhead for this minimal overprovisioning.
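A minimal sketch of this provisioning rule, assuming the static load is expressed as an average utilization figure and the overprovisioning rate as a fraction (the numbers below are invented for illustration):

```python
def provision_static(static_load, overprovisioning_rate=0.1):
    """Capacity to provision for a static workload:
    the flat load plus a small safety margin for minor variance."""
    return static_load * (1.0 + overprovisioning_rate)

# Example: a service that steadily needs about 40 request-handling units.
capacity = provision_static(40, overprovisioning_rate=0.1)
print(capacity)  # 44.0 units provisioned; the 4 extra units are the cost overhead
```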

Periodic Workload: IT resources with a peaking utilization at reoccurring time intervals experience periodic workload. In real life, periodic tasks and routines are very common. For example, monthly paychecks, monthly telephone bills, yearly car checkups, weekly status reports, or the daily use of public transport during rush hour all occur in well-defined intervals. They are also characterized by the fact that many people perform them at the same intervals. Since many of the business processes behind these tasks and routines are supported by IT systems today, a great deal of periodic utilization occurs on these supporting IT systems.
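The recurring nature of such peaks can be sketched with a simple synthetic model. The daily, sine-shaped request-rate profile below is purely illustrative; it only shows how far a static provisioning sized for the peak would exceed the average demand between peaks.

```python
import math

def daily_request_rate(hour, base=100, peak=400):
    """Synthetic periodic workload: one demand peak per 24-hour cycle."""
    phase = math.sin(2 * math.pi * hour / 24.0)    # repeats every 24 hours
    return base + (peak - base) * max(phase, 0.0)  # flat base plus recurring peak

rates = [daily_request_rate(h) for h in range(24)]
print(f"peak demand:    {max(rates):.0f} requests/s")
print(f"average demand: {sum(rates) / len(rates):.0f} requests/s")
# Provisioning statically for the peak leaves most capacity idle between peaks,
# which is why periodic workloads benefit from elastic provisioning.
```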

Once-in-a-Lifetime Workload: IT resources with an equal utilization over time disturbed by a strong peak occurring only once experience once-in-a-lifetime workload. As a special case of periodic workload, the peak of periodic utilization can occur only once in a very long timeframe. Often, this peak is known in advance, as it correlates to a certain event or task. Even though this means that the challenge of acquiring the needed resources does not arise frequently, it can be even more severe. The discrepancy between the regularly required number of IT resources and those required during the rare peak is commonly greater than for periodic workloads. This discrepancy makes long-term investments in IT resources to handle this one-time peak very inefficient. However, due to the severe difference between the regularly required IT resources and those required for the one-time peak, the demand often cannot be handled at all without additional IT resources.
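The inefficiency of a long-term investment for a one-time peak can be made concrete with a rough cost comparison. All quantities and prices below are invented purely for illustration.

```python
# Hypothetical figures, chosen only to illustrate the argument.
regular_servers   = 10       # servers needed for the everyday workload
peak_servers      = 100      # servers needed during the one-time peak
peak_hours        = 72       # duration of the one-time event
purchase_per_year = 2000.0   # owning one server for a year (hardware, power, admin)
rent_per_hour     = 0.50     # renting one comparable server for an hour

# Option A: buy enough servers to cover the peak and keep them all year.
buy_cost = peak_servers * purchase_per_year

# Option B: own only the regular servers and rent the difference for the peak.
rent_cost = (regular_servers * purchase_per_year
             + (peak_servers - regular_servers) * rent_per_hour * peak_hours)

print(f"buy for the peak : {buy_cost:,.0f}")   # 200,000
print(f"rent for the peak: {rent_cost:,.0f}")  # 23,240
```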

Unpredictable Workload: IT resources with a random and unforeseeable utilization over time experience unpredictable workload. Random workloads are a generalization of periodic workloads: they require elasticity but are not predictable. Such workloads occur quite often in the real world, for example, sudden increases in website accesses due to weather phenomena, or shopping sprees when new products gain unforeseen attention and public interest. Under these conditions, the occurrence of peaks, or at least their height and duration, often cannot be foreseen in advance [14].
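Because neither the timing nor the height of such peaks is known in advance, the usual answer is reactive elasticity. The sketch below shows a very simple threshold-based scaling rule; the thresholds, capacity per instance, and instance limits are illustrative assumptions, not a recommendation.

```python
def scale_decision(current_instances, observed_load, capacity_per_instance=100,
                   upper=0.8, lower=0.3, min_instances=1, max_instances=50):
    """Reactive scaling rule for an unpredictable workload:
    add an instance when utilization is high, remove one when it is low."""
    utilization = observed_load / (current_instances * capacity_per_instance)
    if utilization > upper and current_instances < max_instances:
        return current_instances + 1   # scale out
    if utilization < lower and current_instances > min_instances:
        return current_instances - 1   # scale in
    return current_instances           # keep the current size

# Example: a sudden, unforeseen traffic spike followed by a drop.
instances = 2
for load in [120, 450, 900, 900, 300, 80]:
    instances = scale_decision(instances, load)
    print(f"load={load:4d} -> {instances} instance(s)")
```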

Open Source Resources in Cloud Computing

Eucalyptus: Released as open-source infrastructure (under a FreeBSD-style license) for cloud computing on clusters that duplicates the functionality of Amazon’s EC2, Eucalyptus directly uses the Amazon command-line tools. Startup Eucalyptus Systems was launched this year with venture funding, and its staff includes original architects from the Eucalyptus project. The company recently released its first major update to the software framework, which also powers the cloud computing features in the new version of Ubuntu Linux [13].

Red Hat’s Cloud: Linux-focused open-source player Red Hat has been rapidly expanding its focus on cloud computing. At the end of July, Red Hat held its Open Source Cloud Computing Forum, which included a large number of presentations from movers and shakers focused on open-source cloud initiatives. Free webcasts of all the presentations are available, and Stevens’ webcast can bring you up to speed on Red Hat’s cloud strategy. Novell is also an open source-focused company that is increasingly focused on cloud computing, and its strategy is documented as well [13].

Cloudera: The open-source Hadoop software framework is increasingly used in cloud computing deployments due to its flexibility with cluster-based, data-intensive queries and other tasks. It’s overseen by the Apache Software Foundation, and Yahoo has its own time-tested Hadoop distribution. Cloudera is a promising startup focused on providing commercial support for Hadoop [13].

Traffic Server: Yahoo this week moved its open-source cloud computing initiatives up a notch with the donation of its Traffic Server product to the Apache Software Foundation. Traffic Server is used in-house at Yahoo to manage its own traffic, and it enables session management, authentication, configuration management, load balancing, and routing for entire cloud computing software stacks. Acting as an overlay to raw cloud computing services, Traffic Server allows IT administrators to allocate resources, including handling thousands of virtualized services concurrently [13].

What is Cloud Computing? A Complete Overview

Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT [10].
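The cost equivalence claimed above (1000 servers for one hour costing the same as one server for 1000 hours) follows directly from hourly pay-per-use pricing; the hourly rate in this small worked example is an invented illustrative figure.

```python
price_per_server_hour = 0.10  # hypothetical hourly rate

# Both configurations consume exactly 1000 server-hours.
cost_wide = 1000 * 1 * price_per_server_hour   # 1000 servers for 1 hour
cost_long = 1 * 1000 * price_per_server_hour   # 1 server for 1000 hours

print(cost_wide == cost_long)  # True: 100.0 in both cases
print("the wide run finishes in 1 hour, the long run in 1000 hours")
```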

Cloud computing is one of the latest technologies and is very popular nowadays in the IT industry as well as in R&D. It is a development model that follows the introduction of distributed computing. Compared with distributed computing, cloud computing adds multilevel virtualization: all work related to cloud computing takes place in a virtual environment. To gain the advantages of the cloud, a user only needs to connect to the Internet, after which they can easily use powerful computing and storage capacity. Cloud computing services are provided by a cloud service provider (CSP) according to user requirements; to fulfill the demands of different users, providers offer different qualities of service. In conclusion, the cloud is an execution environment with dynamically behaving resources and users, providing multiple services [11]. From a hardware point of view, three aspects are new in Cloud Computing [11]:

  1. The illusion of infinite computing resources available on demand, thereby eliminating the need for Cloud Computing users to plan far ahead for provisioning.
  2. The elimination of an up-front commitment by Cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs.
  3. The ability to pay for use of computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful.

Figure 2.1 Overview of Cloud Computing [12]

Cloud computing is essentially the use of and access to applications over the Internet. In addition to configuring and manipulating applications, we can also store data online. Usually, in cloud computing, you do not need to install any software on your PC for an application to run; this is what avoids platform dependency issues and makes cloud applications mobile and collaborative. The basic cloud computing architecture is divided into two main parts [12] (a rough sketch of their interaction follows the list below):

  1. Front End: The front end is the client part. It consists of the interfaces and applications that are required to access the cloud computing platform. The front end is connected to the back end via the Internet; web browsers, for example, are front ends.
  2. Back End: This is the cloud itself, containing large-scale data storage, security mechanisms, deployment models, services, servers, cloud infrastructure, management, and so on.
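As a rough sketch of this two-part architecture, the snippet below plays the role of a front end that calls a back-end HTTP endpoint over the Internet using Python's standard library; the URL is a placeholder for illustration, not a real service.

```python
import json
import urllib.request

# Placeholder back-end endpoint; any JSON-returning API would serve the same role.
BACKEND_URL = "https://backend.example.com/api/status"

def front_end_request(url=BACKEND_URL, timeout=5):
    """Front end: a client-side call to the back end over the Internet."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    try:
        print(front_end_request())
    except OSError as error:
        # The placeholder host does not exist, so a real run will land here.
        print(f"back end not reachable: {error}")
```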
