Friday 24 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 3: Cost & Deployment

The final instalment of this trio of articles looks at the features of the two cloud server deployment models, public and private, as well as discussing how they can deliver real cost savings to their customers.

Cost Efficiencies
As mentioned previously, the responsive scalability of pooled cloud servers means that cloud services can offer significant cost efficiencies for the end user - the most salient of which is that the client need only pay for what they use. Without being bound by the fixed physical capacity of a single server, clients are not required to pay up front for capacity they may never use, whether in their initial outlay or in subsequent steps up to cater for increases in demand. In addition, they avoid the set-up costs which would otherwise be incurred by bringing individual servers online. Instead, any set-up costs generated when the underlying cloud servers were brought online are overheads for the cloud provider, diluted by economies of scale before they have any impact on the pricing model. This is particularly the case as many cloud services minimise the effort and expense of specific cloud server and platform configurations by offering standardised services into which the client taps.
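As a rough illustration of the pay-for-what-you-use principle, the sketch below compares a fixed-capacity outlay against usage-based billing. All prices, rates and usage figures are invented for the example; real provider pricing is far more granular.

```python
# Hypothetical comparison: paying up front for fixed capacity versus
# paying only for the resource actually consumed each month.

FIXED_SERVER_COST = 200.0   # monthly cost of a dedicated server (invented)
UNIT_COST = 0.05            # cost per unit-hour of pooled resource (invented)
HOURS_PER_MONTH = 730
UNITS_PER_SERVER = 10       # one server's worth of resource, in this toy model

# Six months of demand, as a fraction of one server's capacity.
monthly_demand = [0.3, 0.4, 0.35, 0.9, 0.5, 0.25]

fixed_total = FIXED_SERVER_COST * len(monthly_demand)
usage_total = sum(
    d * UNITS_PER_SERVER * UNIT_COST * HOURS_PER_MONTH for d in monthly_demand
)

print("Fixed capacity: ", fixed_total)
print("Pay-as-you-go:  ", round(usage_total, 2))
```

With demand mostly well below capacity, the usage-based total comes out lower; the gap closes as average utilisation approaches 100%.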

Lastly, cloud models allow providers to do away with long-term lock-ins. Because there are no longer-term overheads from bringing individual servers online for individual clients and maintaining them, the provider is not dependent on those clients for a return on that investment.

Deployment
There are two common deployment models for cloud services which span the service level models (IaaS, PaaS, SaaS) described in part one: Public Cloud and Private Cloud.

Perhaps the most familiar to the general public, and also the most likely to deliver the features and benefits mentioned previously, is the typical public cloud model. This model uses large numbers of pooled cloud servers located in data centres to provide a service over the internet which members of the public can sign up for and access. However, the exact level of resource - and therefore capacity, scalability and redundancy - underpinning each public cloud service will vary from provider to provider. The underlying infrastructure, including servers, is shared across all of the service's end users, whilst the points at which the service can be accessed are open to anyone, anywhere, on any device, as long as they have an internet connection. Consequently, one of the model's key strengths, its accessibility, leads to its most prominent weakness, security.

Services which need to implement higher levels of security can instead use private cloud models. The architecture of private clouds can vary, but they are defined by the fact that the cloud is ring-fenced for the use of one client. Servers can either be located in a data centre, and accessed via leased lines or trusted provider networks, or on the client's premises, and accessed by secure local network connections. They can be provisioned as either physical or virtual servers, but they will never be shared across multiple clients. Access to the servers and the cloud service will always sit behind the client's firewall to ensure that only trusted users can even attempt to use it.

Private clouds therefore offer greater levels of security (depending on the exact set-up), but because they use smaller pools of servers they cannot always match the economies of scale, high capacity, redundancy and responsive scalability of public cloud models. Even so, these qualities can still be achieved more readily than with traditional fixed-capacity server configurations on local or trusted networks.

For more information and insight on cloud computing, cloud servers and other related services you can visit this cloud infrastructure provider’s site.

Monday 20 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 2: Scalability & Reliability

Having, in the first part of this article, described what cloud servers are and how they work within the context of cloud computing, the following instalments go on to discuss how they deliver some of the key features that drive the adoption of the cloud at both a personal and enterprise level. This instalment covers the two performance-related benefits of scalability and reliability.

Scalability
By combining the computing power of a significant number of cloud servers, cloud providers can offer services which are massively scalable, with no practical limit on capacity. With hypervisors pulling resource from the plethora of underlying servers as and when needed, cloud services can respond to demand, so that increased requests on a client's particular cloud service are met almost instantaneously with the computing power they need. Functions are no longer limited by the capacity of one server, so clients do not have to acquire and configure additional servers when demand rises. What's more, with cloud services, where the product has already been provisioned, the client can simply tap into the service without the costs and delays of the initial server set-up that would otherwise be incurred.

For those clients whose IT functions are susceptible to large fluctuations in use - for example, websites with varying traffic levels - pooled cloud server resource removes the risk of service failure when there are spikes in demand. On the flip side, it also removes the need to invest in high-capacity set-ups - as contingency for those spikes - which would sit unused for a large proportion of the time. Indeed, if the client's demands fall, the resource they use (and pay for) can reduce accordingly.
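The contrast between demand-responsive allocation and a fixed set-up sized for the peak can be sketched in a few lines. The demand figures and the headroom factor below are invented for illustration; real providers expose scaling through their own APIs and policies.

```python
# Toy model of responsive scaling: the allocation tracks demand each
# period instead of being fixed at peak capacity the whole time.

def scale_allocation(demand_history, headroom=1.2):
    """Return the resource allocated (and paid for) per period:
    current demand plus a little headroom, not a fixed peak."""
    return [d * headroom for d in demand_history]

# Website traffic in arbitrary resource units, with a spike mid-series.
demand = [10, 12, 11, 45, 14, 9]

allocated = scale_allocation(demand)
fixed_peak = max(demand) * 1.2  # a fixed set-up sized for the spike, paid every period

print("pooled allocation:", [round(a, 1) for a in allocated])
print("fixed-capacity alternative:", [fixed_peak] * len(demand))
```

The pooled allocation rises for the spike and falls back afterwards, while the fixed alternative pays for peak capacity in every period.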

Reliability - Redundancy & Uptime
As mentioned, the high number of cloud servers used to form a cloud service offering means that services are less likely to suffer performance issues or downtime due to spikes in demand. However, the model also protects against single points of failure. If one server goes offline, it won't disrupt the service to which it was contributing resource, because there are plenty of other servers to provide that resource seamlessly in its place. In some cases the physical servers are located across different data centres, and even different countries, so that an extreme failure could conceivably take a whole data centre offline without the cloud service being disrupted. In some models, backups are specifically created in different data centres to combat this risk.
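The protection against single points of failure amounts to routing around unhealthy servers. Here is a deliberately simplified failover loop; the server names and health flags are invented for the example.

```python
# Toy failover: requests can be served by any healthy server in the
# pool, so one server dropping offline does not interrupt the service.

pool = {"server-a": True, "server-b": True, "server-c": True}  # name -> healthy?

def serve(request, pool):
    """Route a request to the first healthy server, skipping failed ones."""
    for name, healthy in pool.items():
        if healthy:
            return f"{request} handled by {name}"
    raise RuntimeError("no healthy servers in pool")

print(serve("req-1", pool))   # handled by server-a
pool["server-a"] = False      # simulate a single server failing
print(serve("req-2", pool))   # seamlessly picked up by server-b
```

Real platforms do this with health checks and load balancers rather than a linear scan, but the principle - no one server is indispensable - is the same.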

In addition to unforeseen failures, pooled server resource can also allow maintenance - for example, patching of operating systems - to be carried out on the servers and networks without any disruption or downtime for the cloud service. What’s more, that maintenance, as well as any other supporting activities optimising the performance, security and stability of the cloud servers will be performed by staff with the relevant expertise working for either the cloud service provider or the hosting provider. In other words, the end user has no need to invest in acquiring that expertise themselves and can instead focus on the performance of the end product.

For more information and insight on cloud computing, cloud servers and other related services you can check out this blog from a cloud industry insider.


Tuesday 14 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 1: Definitions

The concept of cloud computing appears omnipresent in our modern world as we rely on on-demand computing to manage our digital lives across multiple devices - mobiles, tablets, laptops - whilst at home, in the office or on the move. This trio of articles introduces the key component in cloud computing, the servers that underpin each service and provide the computing resource, as well as describing how they provide some of cloud computing's most notable benefits.

Definitions
Cloud Servers: As mentioned above, these can be defined as the servers used to provide computing resource for cloud computing. In essence, they are servers which are networked together to provide a single pool of computing power from which cloud-based services can draw resource.

Cloud Computing: Describes any computing service whereby computing power is provided as an on-demand service via a public network - usually the internet. Broadly, cloud services can be categorised using the following three models:
  • IaaS – Infrastructure as a Service:
    • Pooled physical cloud server and networking resource (without any software platforms). Instead of the user being provided with a single distinct physical server, multiples thereof or shares therein, they are provided with the equivalent resources - disk space, RAM, processing power, bandwidth - drawn from the underlying collective cloud servers. These IaaS platforms can then be configured and used to install the software, frameworks, firmware etc (e.g., solution stacks) needed to provide IT services and build software applications.
  • PaaS – Platform as a Service:
    • Virtualised software platforms using pooled cloud servers and network resource. These services offer the collective physical resources of IaaS together with the above-mentioned software bundles so that the user has a preconfigured platform on which they can build their IT applications.
  • SaaS – Software as a Service:
    • Cloud based applications provided using pooled computing resource. This is the most familiar incarnation of cloud computing for most members of the public as it includes any application - such as web based email, cloud storage, online gaming - provided as a service. The applications are built and run in the cloud with end users accessing them via the internet, often without any software downloads necessary.
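The three models above differ mainly in how much of the stack the provider manages on the user's behalf. The sketch below summarises that split; the layer names and the exact division shown are a common simplification for illustration, not a formal standard.

```python
# Illustrative summary of who manages each layer under IaaS, PaaS and SaaS.
# Layer names and the split are a simplification, not a standard.

LAYERS = ["hardware", "virtualisation", "operating system", "runtime", "application"]

PROVIDER_MANAGES = {
    "IaaS": {"hardware", "virtualisation"},
    "PaaS": {"hardware", "virtualisation", "operating system", "runtime"},
    "SaaS": set(LAYERS),  # provider runs everything; users just consume the app
}

for model, managed in PROVIDER_MANAGES.items():
    client_side = [layer for layer in LAYERS if layer not in managed]
    print(f"{model}: client manages {client_side if client_side else 'nothing'}")
```

Each model is a strict superset of the one before it - which matches the article's framing of PaaS as IaaS plus software bundles, and SaaS as a complete hosted application.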

How Cloud Servers Work
Traditional computing infrastructure models tend to revolve around the idea of a single server being used for a particular IT function (e.g., hosting, software applications), whether that server is a dedicated server - i.e., for the sole use of that client - or shared across multiple clients. Shared servers may use one software/platform installation for all of their IT functions/clients, or they may deliver Virtual Private Servers (VPS), where each client has a distinct operating environment which they can configure.

Cloud computing can deliver similar virtualised server environments, but these draw resource not from one, but from a multitude of individual physical cloud servers which are networked together to provide a combined pool of server resource. In a sense, it could be considered a form of clustered hosting, whereby the resource demands of an individual client's IT functions are spread across numerous distinct servers. However, with cloud hosting the resource pool has enough capacity, with sufficient servers, for multiple clients to tap into it as they need to.

Within the infrastructure of cloud services, cloud servers are networked with what are called hypervisors, which are responsible for managing the resource allocation of each cloud server. In other words, they control how much resource is pulled from each underlying cloud server when demands are made of the pool, as well as managing the virtualised operating environments which utilise this resource.
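A highly simplified sketch of that pooled allocation follows: a request for resource is satisfied by drawing from whichever underlying servers have spare capacity, rather than being limited to any single machine. The host names, capacities and the greedy strategy are all invented for illustration; real hypervisors and schedulers are far more sophisticated.

```python
# Toy model of drawing resource from a pool of physical servers.
# A request is satisfied by pulling from several hosts, so it is not
# limited by the free capacity of any single one.

def allocate(pool, requested):
    """Take `requested` units from the pool, spread across hosts.
    Returns {host: units_taken}; raises if the pool cannot cover it."""
    taken = {}
    remaining = requested
    for host, free in pool.items():
        if remaining <= 0:
            break
        grab = min(free, remaining)
        if grab > 0:
            taken[host] = grab
            pool[host] -= grab   # that capacity is now in use
            remaining -= grab
    if remaining > 0:
        raise RuntimeError("pool exhausted")
    return taken

# Free capacity (arbitrary units) on three underlying cloud servers.
pool = {"host-1": 4, "host-2": 8, "host-3": 6}

# A client environment needs 10 units: no single host could supply it
# alone, but the pool as a whole can.
print(allocate(pool, 10))   # drawn from host-1 and host-2 combined
```

The key property the article describes is visible here: the caller sees one pool of resource, while the spread across physical servers is handled behind the scenes.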

For more information and insight on cloud computing, cloud servers and other related services you can check out this blog all about cloud servers and hosting.