Thursday 15 August 2013

Using Technology to Achieve a Work-Life Balance - Communications

Voice over Internet Protocol, how it works (Photo credit: Wikipedia)
The previous instalment of this article looked at how VPN and BYOD can be used to improve the work-life balance of employees by providing connected computing; this second instalment continues the theme, focusing instead on internet-based communication channels.

VoIP
VoIP stands for Voice over Internet Protocol: the set of technologies by which voice data, typically telephone calls, is carried over the internet. With recent advances, VoIP can almost seamlessly take the place of traditional fixed phone lines while offering many additional benefits. As it replaces more and more PSTN (public switched telephone network - the traditional, mostly analogue network) lines in both our homes and workplaces, often the only way to tell that you are using VoIP rather than PSTN is the extra functionality that comes with it. Much of this functionality concerns integration into unified communications packages (see below) and flexibility, both of which allow workers to make more efficient use of their working hours and reduce the creep of work into their personal lives.
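At its core the mechanism is simple: voice is digitised, chopped into small frames and sent as packets across an IP network. The Python sketch below illustrates that idea with plain UDP; it is a toy only - real VoIP systems use RTP for the media stream, codecs for compression and signalling protocols such as SIP, and the receiver address here is hypothetical.

```python
import socket
import time

# Toy illustration of the core VoIP idea: digitised voice is chopped into
# small frames and sent as packets over an IP network. (Real systems use
# RTP over UDP, plus codecs and signalling such as SIP.)
DEST = ("127.0.0.1", 5004)   # hypothetical receiver address
FRAME_MS = 20                # a typical frame length for voice codecs

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_voice(frames: list[bytes]) -> None:
    for seq, frame in enumerate(frames):
        # Prepend a sequence number so the receiver can reorder packets.
        packet = seq.to_bytes(2, "big") + frame
        sock.sendto(packet, DEST)
        time.sleep(FRAME_MS / 1000)  # pace packets like a live audio stream

# 160 bytes of silence stands in for a real 20 ms audio frame here.
send_voice([bytes(160) for _ in range(3)])
```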

VoIP packages vary, but the functionality on offer typically includes the ability to make and receive phone calls free of any fixed location or device, as long as an internet connection is available. The user may use differing interfaces to handle calls depending on the device in question - for example, a web interface and a headset on a desktop computer vs. a mobile phone vs. a digital handset - but the end result will be unaffected as far as the person on the other end of the line is concerned. The consequence for employees is that they can work outside of the office (at home, for example) to make better use of their time and still be as accessible on the end of the phone as they would normally be on site, often on the same number. Commutes can be avoided when needs be and flexible working arrangements can be embraced far more easily.

Video Conferencing
As with VoIP, the benefits of video conferencing, in terms of achieving a better work-life balance, are all about providing effective but flexible communication free of location dependencies. The term, sometimes referred to as video chat or video calling, describes scenarios and technologies in which users can talk face to face using video streams over the internet - essentially VoIP with added video. Many organisations have meeting rooms with video conferencing facilities to allow communication between office locations without the need for travel, but the technology is also used in portable devices and desktop computers to provide the means for visual communication on the move. It is possible, for example, to join a meeting in the office from a smartphone on the train.

The technology, therefore, allows workers to communicate using all of the visual cues that make face-to-face contact so effective without travelling long distances, or, when more beneficial, from alternative locations such as home. Combined in a unified communications package (see below), it can even facilitate collaboration with colleagues as though they were in the same office despite being stationed in disparate locations. All of this reduced travel and location independence means, of course, more time at home and fewer demands on time spilling outside of work hours.

Unified Communications
Bringing all of these together to offer joined-up communications channels is unified communications (UC). This is a very dynamic area of tech, but the ultimate goal is that individuals are able to communicate seamlessly, switching between different methods/channels such as voice calls, video calls, email, instant messaging (IM) and SMS, across varying devices and platforms, without the conversation dropping at any stage. UC can use the concept of an integrated inbox, where traditional emails sit alongside IMs and even voicemail messages, managed with unified contact lists. The concept is being adopted at enterprise level with solutions such as Microsoft Lync, but it is also creeping into our day-to-day lives with developments such as Google Hangouts. The latter brings together our personal conversations across mobile and desktop devices using instant messaging, email and video calling, and provides another example of how high technology is permeating our personal lives and in turn raising the expectation and perceived possibilities of its use in the workplace.
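To make the integrated inbox idea concrete, the sketch below normalises messages from different channels into one record type and interleaves them by time. It is a minimal illustration only - the channel names and sample data are invented, and a real UC product would pull these records from live email, IM and voicemail systems.

```python
from dataclasses import dataclass
from datetime import datetime

# One record type for every channel, so the inbox can present a single,
# time-ordered view - the heart of the "integrated inbox" concept.
@dataclass
class Message:
    channel: str      # "email", "im", "voicemail", "sms"
    sender: str
    received: datetime
    summary: str

inbox = [
    Message("email", "alice@example.com", datetime(2013, 8, 15, 9, 5), "Q3 report attached"),
    Message("voicemail", "+44 20 7946 0000", datetime(2013, 8, 15, 9, 12), "32-second recording"),
    Message("im", "bob", datetime(2013, 8, 15, 9, 1), "running 5 mins late"),
]

# Interleave every channel into one unified view, oldest first.
for msg in sorted(inbox, key=lambda m: m.received):
    print(f"{msg.received:%H:%M} [{msg.channel}] {msg.sender}: {msg.summary}")
```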

UC is attractive to enterprise because it offers businesses efficiencies in terms of the speed and effectiveness of communications, alongside reduced travel costs. However, UC adoption in the workplace can also improve both the motivation - due to the effectiveness of the work employees carry out using the technology - and the working hours of employees, so that their jobs take less of a toll on their personal lives and ultimately their happiness.

To find out more about the uses of unified communications including VoIP and video conferencing in the workplace you can check out what’s on offer from an enterprise level provider of unified communications.


Tuesday 16 July 2013

Using Technology to Achieve a Work-Life Balance - VPN & BYOD

The following article introduces some of the technologies that can be used to help us improve our work-life balance by being more efficient and flexible in what work we do and where we do it from. Many of the technologies help us to work from dynamic locations and make communications and the sharing of information speedier and more versatile - benefiting both employers and their employees through higher staff morale and higher productivity.

VPN
The term VPN, or Virtual Private Network, describes scenarios and technologies that allow two disparate local area networks (LANs) to be securely connected across public networks such as the internet. The exact technologies and protocols can vary, with some VPNs using software programs and/or network configurations, but the basic principle is that the data transmitted between the two endpoints on each network or device is contained within encrypted packets, whilst each endpoint requires authentication to restrict access to authorised users. As the encrypted packets can only be decoded at these endpoints, the data cannot be read if intercepted as it travels across the public networks.
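The principle can be sketched in a few lines of Python using the third-party cryptography package: both endpoints hold a key, data is encrypted before it crosses the public network, and only the far endpoint can decrypt it. This is a toy illustration only - real VPN protocols such as IPsec or TLS handle key exchange, authentication and tunnelling properly rather than relying on a pre-shared key as assumed here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Both endpoints hold the same key, exchanged securely out of band.
shared_key = Fernet.generate_key()
home_endpoint = Fernet(shared_key)
office_endpoint = Fernet(shared_key)

# The home worker's request is encrypted before leaving their machine...
packet = home_endpoint.encrypt(b"GET /shared/reports/q3.xlsx")

# ...anyone intercepting `packet` on the public network sees only ciphertext,
# but the endpoint on the office LAN can recover the original data.
print(office_endpoint.decrypt(packet))
```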

This secure system can be used not only to connect two distinct LANs regardless of geographical location, but also to connect individual machines/devices to LANs. For businesses it can therefore be a valuable technology for connecting separate office sites or allowing employees to connect with centralised LANs when travelling or working from other locations. For individual employees looking to improve their work-life balance, however, it can be a vital tool, allowing them to work from home and still access all of the secure files and data stored on their office’s servers, as well as running programs such as email clients, as if they were sat at their usual desk. The flexibility that this offers can, where the employer is obliging, massively ease time and travel pressures, preventing work from encroaching excessively on personal time. Moreover, employers using VPN are more likely to be obliging when it comes to changing working locations, thanks to the security VPN offers as well as the continuity, with employees able to contribute and work at the same capacity as if they were on-site.

BYOD
BYOD, which stands for Bring Your Own Device, represents a fast-growing trend in the workplace whereby employees are permitted, and sometimes encouraged, to use their own personal devices in place of those provided by their company. A BYOD policy has to tackle security concerns, as ‘untrusted’ devices (with varying malware vulnerabilities and the potential to take private data off the network) are introduced to otherwise restricted workplace LANs. On the other hand, adoption can reduce a business’s IT spend, introduce more IT functionality to the workplace and make individual employees more productive, as they work on devices with which they are more familiar. The adoption of BYOD and its benefits therefore rely on technologies such as VPN (above) to provide secure connections between devices and LANs (without necessarily bringing the device directly onto the LAN behind the firewall).

BYOD helps to improve the work-life balance because it further blurs the boundary between working from the office, on the road or at home, creating a seamless transition between each and reducing the need to travel or commute in many cases. It can also extend the period, and increase the efficiency, of output, which in turn can mean that the working day eats less into personal time. All of which means more personal time spent at home. Furthermore, it can’t be overstated how a sense of morale at work affects the work-life dynamic, and so using devices with which one is familiar, experienced and comfortable can be important.

To find out more about the uses of unified communications including VoIP and video conferencing in the workplace you can check out what’s on offer from an enterprise level provider of unified communications.

Monday 15 July 2013

The Need for Standardisation in Cloud Computing - The Issues

The second instalment in this pair of articles looks at some of the issues that the standardisation of cloud computing, including the European Commission's potential initiative, will aim to tackle and clarify in order to drive the adoption of cloud computing forward, particularly within its ripest markets, such as amongst SMEs.

Control Over Data & Security
Arguably the primary area of concern for prospective cloud computing clients is that of data handling: in other words, knowing and controlling where your data - personal details if you are a private consumer, or client data etc. for businesses - is stored (geographically and technologically) and how secure it is in that location. For on-site private clouds, for example, this is less of a concern, but where a client is signing up to a public cloud service, based upon shared server resources and public network connections, they may not fully understand, never mind influence, where their data lives.

The cloud computing model means that a client’s data can ultimately be stored across national boundaries, even across continents, and that raises many issues around the varying jurisdictions under which that data exists. It can lead to conflicts between the differing jurisdictions under which cloud providers operate and under which the data they control is stored. As an illustration, data held by US cloud providers must be supplied to the US government on request under the US Patriot Act, even when that data is physically hosted in another country. Consequently, a UK resident signing up to a Microsoft Live account, for example, may not realise that their personal details, and who has access to them, are governed by US laws, despite understanding that Microsoft is a US-based company. In many other scenarios, private and commercial users of cloud services may not be confident of the nationality of the provider in the first place, never mind the jurisdictions and legislation that govern their data.

Not only may this lack of transparency lead to concerns over who has access to and jurisdiction over data but it may also raise questions about what security measures are applied to safeguard that data against those who shouldn’t have access to it in any case, and against any kind of intentional corruption of that data. Security vulnerabilities can of course sit at many points in the cloud model but client perception and awareness of those at source - at server/data centre level - can be far less clear than of those at the access points with which the client interacts.

Interoperability
Presently, the primary driver for standardisation across the industry, whether amongst clients who feel they have a grasp of their data handling processes or not, is that of interoperability - that is, being able to switch IT functions from one provider’s platform to another compatible platform. The current cloud market raises many questions in this area for an organisation: if it moves some of its functions to one cloud provider, will it then be locked into that provider for its associated functions; can it integrate functions hosted with other providers; will this carry prohibitive costs; can it switch workloads between different cloud services seamlessly; will it be able to migrate away from its chosen provider if a more preferable solution comes along? From a commercial point of view these are some of the most pressing questions when choosing a cloud provider, and so there is gathering momentum across the industry, not just in Europe, to establish open standards which free consumers to treat their cloud computing services more like the utility computing that the cloud has long promised to deliver - in other words, allowing clients to tap into different providers’ services as and when they need them, without lock-ins and without barriers.

There are many other aspects of a cloud proposition, relating to performance, uptime, storage space etc., that clients can have difficulty understanding and comparing like for like. All of these issues demand clear and standardised SLAs in order to define the language and metrics in which information is presented to clients. From a client’s point of view, however, the key is to seek out reputable cloud providers, enquire about interoperability and physical hosting locations, use recommendations from clients with established relationships where possible, and steer well clear of ambiguities.

To find out more about accessing secure and transparent cloud-based services across the EU you can check out this pan-European operator of virtual data centres.

Thursday 11 July 2013

Careers in IT Beyond Development and System Admin - Project Management

Project Management Lifecycle (Photo credit: IvanWalsh.com)
There is no doubt that, for those looking to switch career paths, the world of IT, or information technology, can provide an attractive option, for the basic reason that the world’s demand for IT solutions is only going to increase. Whether it's the production of computing devices and software for the personal retail market or the creation and maintenance of IT infrastructures within the enterprise space, the demand keeps rising as we become more and more dependent on technology to underpin every aspect of our personal and professional lives.

When many people think of IT careers they may well think of writing code for software programs, or maintaining the hardware on which those programs run. However, there are a myriad of essential jobs which don’t require you to touch a line of code or tinker with a server. The following piece highlights some of the primary alternative skill areas that anyone looking to kickstart a career in IT should investigate.

Project Management
Perhaps the most salient example of an IT career beyond development is project management. Every IT project, for example the deployment of a new in-house administration system, the building of a new website or the release of a software package, requires the guiding hand of a Project Manager to:

  • bring all of the project members (developers, network administrators, testers, marketers) and other stakeholders (clients, company boards) together in collaboration
  • keep communication flowing
  • scope and plan the project - including project objectives, scheduling and defined completion criteria
  • track and report the progress of the project
  • manage risk
  • determine the methodologies and tools that the project will employ to meet its stated aims


Project Managers (PMs) are often required to become familiar with a variety of different technologies between projects; however, they rarely need to be hands-on with creating IT elements themselves. It is therefore beneficial to have experience in IT, but any suitable individual with the ability to learn quickly and build a good team around them - i.e., good interpersonal and other soft skills - will be able to succeed. It is often a lone role within a project team and therefore requires that the individual is capable of working effectively on their own as well as within a team, with the skills to organise (and lead) both themselves and their team. In short, it requires a multi-faceted skill set combining soft skills with a strong technical awareness.

One attractive benefit of a project management role is that the skills employed are largely transferable, especially between different areas of IT such as software development and systems deployment, but also beyond the IT sphere into other areas of business. Often the only limiting factor will be the individual’s ability to pick up the specifics of their project’s subject matter, as mentioned above.

There are a number of other similar roles, which can take on the title of project management, but where some of the responsibilities and levels of control are diluted, including project administrators and facilitators. Sometimes these may work alongside a project manager and can present an opportunity to gain valuable experience about what the role can entail.

The next instalment in this series of articles will look at testing/quality assurance roles in relation to both software and hardware.

To find out more about training and qualifying to become a project manager you can check out this organisation offering ISEB certification in London.

Wednesday 10 July 2013

The Need for Standardisation in Cloud Computing - Introduction

To many observers, cloud computing may appear to be spreading like wildfire with both enterprise and personal users jumping at the chance to take advantage of the cost effectiveness, scalability and flexibility that it offers. However, there is a strong debate amongst industry experts, and beyond, as to whether this uptake, however rapid, has been severely tempered by a lack of trust and understanding around cloud services from prospective clients.

The debate stems from the thought that there is a perceived lack of transparency caused by the multi-server approach to the creation of cloud platforms and the differing propositions put forward by individual cloud providers; and that this in turn is obscuring client understanding of what it is exactly that they are choosing to sign up to.

Moreover, it is argued that some of the markets that would benefit most from cloud adoption are made up of the same clients that are more predisposed to be reluctant to take on the perceived risk of signing up to cloud services. As a generalisation, large-scale enterprises are perhaps more likely to possess the budgets to either host private cloud services internally or engage with third-party providers to define exactly what they are getting from their service, whilst private users perhaps only engage with the cloud at a lower level, where they don’t have the inclination to analyse the performance and security issues at play. In between, however, SMEs (small to medium sized enterprises) are likely to be better informed as to both the benefits and risks of cloud computing platforms, but won’t necessarily have the budgets to bring IT functions in-house or to employ dedicated professionals with the expertise to procure the most suitable services. In other words, the likelihood is that SMEs are more reliant on third-party suppliers, aware of the non-specific risks of each proposition but unable to control the specific vulnerabilities of the cloud.

Standardisation

Many propose that, as has been the case in many markets that have preceded cloud computing, the answer to client wariness is standardisation, with the aim of delivering transparency. In other words, create a market where a client can shop between multiple providers and judge their security levels, data handling, performance and service stability on comparable metrics.

One of the main driving forces behind standardisation in the European cloud computing market is the European Commission (the executive arm of the European Union), which is keen to implement a new set of standards across the 27 constituent members of the EU. The intention of these standards, alongside updated data protection rules, is to allow the consistent delivery of cloud services spanning national borders within the union - much as the commission has delivered with economic migration. Together they aim to build client trust that their data is handled and stored within the same legal frameworks whether it is physically hosted in Holland or the UK, for example. The policies would see the introduction of a certification scheme whereby cloud providers across the EU would be certified if they were shown to conform to the commission's standards on data handling, interoperability and security.

Perhaps at the crux of what the commission's policy would seek to clean up, however, is the topic of SLAs, or Service Level Agreements. These are the documents that outline exactly what the client is signing up for and what services they can expect from their cloud provider when they hand over their money. Standardising SLAs is key to building trust in the cloud computing market because it introduces transparency for cloud clients when they are analysing their options. The subjects covered by SLAs are explored further in the second part of the article.

To find out more about accessing secure and transparent cloud-based services across the EU you can check out this operator of pan-European virtual data centres.

Monday 8 July 2013

The Features of Mobile Tech and their Applications for Good Causes - Connectivity

The following two-part article examines the key features of mobile technology and how these features can be vital to those looking to use mobile devices for good causes, whether it be aid or health agencies employing them ‘in the field’ or charities rolling them out to disadvantaged individuals to improve their quality of life.

Wireless Connectivity
Perhaps the primary advantage of any portable device, in terms of its use for charities and aid agencies for example, is the factor that makes it portable in the first place - that is its wireless connectivity. There are a myriad of ways in which devices can connect with the rest of the world. When it comes to delivering basic communications, vast swathes of the developed and developing world are covered by mobile phone networks and, as the more basic mobile phones are becoming cheaper and cheaper, it has never been easier to make phone contact across disparate locations. Where mobile phone signals don’t exist yet (due to a lack of mobile phone base stations) there is still the option of using satellite phones which are potentially able to relay voice and simple data communications, using satellites, between any two locations across the planet.

However, the real advances in the last decade have gone beyond one-to-one conversations, opening up the internet - and therefore the ability to share vast amounts of information - to portable devices such as notebooks, tablets and smartphones. Such devices, when connected to wireless (WiFi) hotspots or 3G (and increasingly 4G) cellular data networks, can give users access to the internet when they are away from fixed (home or office) locations, in transit or in the field, to allow data sharing and collaboration. For any aid worker or health professional working in the field, for example, this can be a vital means of immediately sharing information about patients or environments with colleagues - perhaps back at a base - for further analysis and diagnosis.

Using the internet (over 3G, for example) to carry data, mobile VPNs (virtual private networks) can also provide a secure link between disparate locations so that any of this data which is private and sensitive can be shared with, and restricted to, trusted individuals and networks. Following on from the above example, but more specifically, health workers visiting patients in their own homes can instantaneously update health records on secure centralised servers using mobile VPN technology, with no need to wait until they are back at base.

GPS
Most portable devices such as smartphones and tablets also come with GPS integrated within their connectivity options. GPS, or Global Positioning System, uses timing signals received from multiple satellites to pinpoint where a device is located anywhere in the world. For individual consumers, it helps us to navigate maps and see what services and attractions are nearby, but for aid workers it can allow vital contextual information to be logged alongside other data they are sharing. If water samples or photos are being captured, for example, GPS functionality can add an automatic and accurate time (incorporating time zone) and place stamp, making the information far more complete, accurate and useful.
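A minimal sketch of that stamping step might look like the following. The record structure and coordinate values are invented for illustration; a real app would read the fix from the device’s location API rather than hard-coding it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A field worker's capture (a water sample reading, say) is stamped with
# an accurate UTC time and the device's GPS fix before being shared.
@dataclass
class FieldRecord:
    payload: str
    latitude: float
    longitude: float
    captured_at: str

def stamp(payload: str, lat: float, lon: float) -> FieldRecord:
    return FieldRecord(
        payload=payload,
        latitude=lat,
        longitude=lon,
        captured_at=datetime.now(timezone.utc).isoformat(),  # zone-aware, unambiguous
    )

# Illustrative reading and coordinates only.
record = stamp("water sample: turbidity 4.1 NTU", -1.2921, 36.8219)
print(record)
```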

What’s more, for those actually carrying a portable device, the GPS functionality can prove life-saving, as it allows their position to be tracked and picked up when they are in need of assistance. Whether it's a lost walker or an elderly individual who’s had a fall, their support and rescue services can locate them without delay where GPS is utilised.

To find out more about how mobile devices are being used for good causes you can check out these videos from Vodafone's Mobile for Good Foundation, or get more information on the uses of VPN.

Friday 28 June 2013

The Features of Mobile Tech and their Applications for Good Causes - Usability & Portability

Having looked into how the connectivity of mobile devices can be useful in providing flexible and diverse communication channels for those carrying out charity work, the following part of this article highlights the more interactive features that increase the functionality of the devices for both aid workers and disadvantaged individuals.

Cameras
A digital camera is now standard in the vast majority of portable or mobile devices and an increasing proportion can capture both photo and video. Coupled with the aforementioned connectivity of such devices, this provides many charity or health organisations with the ability to capture evidence and information on what is happening ‘in the field’ and share it almost immediately with their target audiences. For example, the visual aspects of symptoms can be recorded and reported by health professionals to their colleagues for further analysis; or evidence on the extent of disasters can be collated, shared and assessed to inform what actions should be taken next. Moreover, this evidence can be shared with the wider world to effectively communicate situations where aid action is required to the public, across the globe, in order to raise the profile and ultimately the funding for a cause. Even where charity or aid workers are not on the scene, the prevalence of portable devices with in-built cameras and connectivity means that the public’s attention can be drawn to those in need by witnesses with such devices documenting events.

Size
Arguably the second decisive factor in defining a device as portable is its size. Devices such as tablets, and particularly smartphones, can be carried and transported to any part of the world that people can get themselves to. As these devices are packed with computing power that would, only a few years previously, have been restricted to desktop PCs, plus connectivity that crosses boundaries, they are not only able to collect information but also to process it, analyse it, share it and report it wherever they are. Whether it’s individuals who require more assistance themselves, or people and organisations helping others, the fact that they are not tethered to a location means that they are able to deal with demands when and where they occur.

Touch Screen Displays
The development of the touch screen is fundamental to the existence of portable devices, as it has allowed them to do away with both physical keyboards and mice to become handheld and pocket-sized.

What’s more, touch screens provide a more intuitive way for some less able users to make the most of their device. By generating possibilities for specific accessibility applications, they open up functionality to individuals with specific needs. Large, increasingly high-resolution displays, for example, allow touch screen controls to be made as large and well defined as possible for those who have difficulty perceiving the usual small controls. Using software (see below), controls no longer need to be fixed and can be dynamically configured to suit the user’s needs. Button colours, keyboard layouts and languages, for example, can all be changed in software to benefit the device’s user.

Software
Finally, the element that brings many of the hardware capabilities together, integrates them and leads to ingenious solutions to niche but demanding problems is the software. On each platform, solid and useful pre-packaged software is complemented by stores of third-party applications, developed by people ranging from global software companies to keen individuals building apps in their spare time. It is hard to find a problem too big or too small where someone hasn’t attempted to develop an app to deal with it. With mobile device processing power growing at an exponential rate, and high connectivity, GPS and camera functions all on offer, the real limit on the potential of mobile devices to aid good causes is the creativity of those developing solutions for them.

To find out more about how mobile devices are being used for good causes you can check out these videos from the Vodafone Foundation, or get more information on communications and networking from this provider.

Monday 10 June 2013

The Powerhouses of Global Steel Production

A view of the former Bethlehem Steel from the Fahy Bridge in Bethlehem, Pennsylvania, taken shortly before demolition began to make way for the Sands BethWorks casino project. (Photo credit: Wikipedia)
This article takes a look at some of the principal producers of steel across the world and how the profile of the global industry is evolving, with focus moving to the East, both in terms of production and consumption.

In the latter stages of the 20th century, through to the present day, there has been a shift in emphasis within the steel producing industry, from the old powerhouses of Europe, where production has dropped significantly since the 1970s, to the new manufacturing hubs of Asia, with their vast natural resources - including of course the iron ore and fossil fuels required to produce steel. As the industry has become more efficient and reliant on mechanical processes the number of individuals employed by producers has dropped but this effect has been more pronounced in these old powerhouses, demonstrating the relative scaling back of operations. For example, in the EU, employment dropped by 72% between 1974 and 2000, from 996,000 employees to 278,000; whilst in the US it was down by 71% from 521,000.

The following lists the top 3 steel producers in the world using 2012 output figures from the World Steel Association and highlights the changing landscape of the industry.

China

China has emerged as, by a distance, the largest producer of steel across the globe, accounting for a whopping 46.3% of the world’s annual production in 2012, according to the World Steel Association. Most countries witnessed a dip in 2009 due to the economic difficulties, but China’s output marched on regardless, rising by 221.6 million tonnes between 2007 and 2012 - a margin in itself double the total output of the next main producer, Japan. China’s output of 716.5 million tonnes in 2012 was over 4 times that of the entire European Union put together, and over 8 times that of the US. The UK, which sits 18th on the list of global suppliers, meanwhile has an annual output of a mere 9.8 million tonnes, just 1/73 of China’s.

The eastern superpower also claimed to be the top steel exporter in 2011, although not by the same margin, due to the size of its internal market and consumption. In total, 47.9 million tonnes left the country in 2011; however, due to China’s aforementioned demand for the raw material and its consequent imports of foreign steel, the country slips to 2nd when it comes to the top net exporters, behind Japan.

Japan
In contrast to China, output from Japan has actually dropped a little in the last 5 years (-11%), but it remains the second highest producing country in its own right (the EU collectively has a higher output), producing around 107 million tonnes. However, Japan almost catches China when it comes to the amount of that steel that is exported rather than earmarked for internal consumption (40.7 million tonnes), and indeed takes top spot as the highest net exporter in the world, above its neighbour. As well as having slightly lower levels of steel imports than China, Japan exports a far higher proportion of its output: around 38%, in comparison with China’s 7%.

United States
The US has slipped to 3rd in the pecking order and, similar to Japan, has been hit by the recession, with production levels almost halving between 2007 and 2009 before rallying in 2012 to sit just 10% below 2007’s levels. With output of 88.6 million tonnes in the last year, it seems inevitable that the US will be caught and overtaken by India in the next few years, as India’s steel output sits just 12 million tonnes behind, having risen by 43% in the last five years.

Due to the scale of manufacture in the US, the superpower consumes most of its steel internally and thus, as well as sitting far down the table of exporters, holds the position of the world’s primary importer of steel, both in sheer numbers and when offset against its own exports. Only the EU (thanks primarily to Germany) as a collective can claim to import more of the world’s steel than the US.

In summary, the global steel markets are witnessing the US and EU moving away from production and instead relying on imports, particularly from Asia, to meet their high levels of demand. China, meanwhile, has taken on the role of the industry's behemoth, with almost unparalleled natural resources, driven by the need to meet global demand alongside the manufacturing and building demands of its own mammoth population and internal markets.

To find out more about the state of the global steel industry you can visit this organisation that trades in steel.

Thursday 6 June 2013

A Glossary of Housing Related Terms - Part 1

An icon from the Crystal icon theme. (Photo credit: Wikipedia)
Legal rights and regulations concerning housing related matters are most commonly associated with laws surrounding the ownership of property (in its broad legal definition) and more specifically, fixed property (buildings, land, fixtures and fittings etc), known as real estate in some jurisdictions. The following article aims to provide an introduction into some of the key terms that are involved in property and therefore housing law.

Landlord
The owner of any real estate or property (including land) that is rented (i.e., leased – see below) by another party. In some scenarios the landlord can be the party who rents the property from the party who has personal ownership of it, and in turn subleases it – in which case they will still have superior title to that property over the underlying tenant.

Tenant
A tenant is someone who has hold over something – defined as a tenement – but does not own it. The term is most prominent in housing law where a tenant is therefore someone who rents the use of a property from a landlord. Tenancy comes with rights of occupation over the property concerned, despite the fact that the property is never considered to be under the tenant’s personal ownership.

Lease
A more general term describing a contract requiring payment by a user of something to the owner of that thing, for a certain amount of time. In the context of fixed property or real estate, a lease will commonly be referred to as a rental agreement and will be arranged between a landlord (lessor) and tenant (lessee).

Eviction
The process of removing someone from a property; the term ”eviction” doesn’t describe one specific scenario. Lawful evictions occur where the inhabitant has no legal right to live in the property because they have broken the terms of their lease, their lease has expired or someone else has a superior claim to the ownership of the property (including lenders following a default on a mortgage).

Unlawful eviction can occur when these conditions are not met, most commonly when a landlord forcibly removes a tenant without following legal processes, particularly when they have failed to serve the required notice.

Repossession
The process by which an owner of a property who has superior title/ownership rights over that property claims it back into their possession, without going through court. The process can be carried out by a lender where a loan has been secured against property, or by an owner in the case of property being leased out. In either case, the legal right to repossess will usually be triggered only by a failure to pay monies due.

In the UK the term is most commonly associated with the reclaiming of a property stake by a mortgage lender in the event of the borrower defaulting on the mortgage (i.e., failing to make repayments).

Squatter
An individual who occupies land or property (usually abandoned or unoccupied before they take up residence) over which they have no legal rights. Depending on the jurisdiction a squatter could be committing either a civil or criminal offence, however, in England and Wales squatting has been classified a criminal offence as of 2012.

To find out more about legal matters surrounding housing issues you can visit a specialist housing law firm.

Friday 24 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 3: Cost & Deployment

The final instalment of this trio of articles looks at the features of the two cloud server deployment models, public and private, as well as discussing how they can deliver real cost savings to their customers.

Cost Efficiencies
As mentioned previously, the responsive scalability of pooled cloud servers means that cloud services can offer significant cost efficiencies for the end user - the most salient of which is that the client need only pay for what they use. Without being bound by the fixed physical capacities of single servers, clients are not required to pay up front for capacity which they may not make use of, whether it be their initial outlay or subsequent steps up to cater for increases in demand. In addition, they avoid the set-up costs which would otherwise be incurred by bringing individual servers online. Instead, any set-up costs generated when the underlying cloud servers were brought online are overheads for the cloud provider and are diluted by economies of scale before having any impact on their pricing model. This is particularly the case as many cloud services minimise the effort and expense of specific cloud server and platform configurations by offering standardised services into which the client taps.

Lastly, cloud models allow providers to do away with long term lock-ins. Without the longer term overheads of bringing individual servers online for individual clients and maintaining them there isn’t the dependency on those clients for a return on that investment from the provider’s point of view.

Deployment
There are two common deployment models for cloud services which span the service level models (IaaS, PaaS, SaaS) described in part one: Public Cloud and Private Cloud.

Perhaps the most familiar to the general public, and also the most likely to deliver the features and benefits mentioned previously, is the typical public cloud model. This model utilises the large number of pooled cloud servers located in data centres to provide a service over the internet which members of the public can sign up for and access. However, the exact level of resource - and therefore capacity, scalability and redundancy - underpinning each public cloud service will depend on the provider. The underlying infrastructure, including servers, will be shared across all of the service’s end users, whilst the points at which the service can be accessed are open to anyone, anywhere, on any device, as long as they have an internet connection. Consequently, one of the model’s key strengths, its accessibility, leads to its most prominent weakness, security.

Services which need to implement higher levels of security can instead use private cloud models. The architecture of private clouds can vary, but they are defined by the fact that the cloud is ring-fenced for the use of one client. Servers can either be located in a data centre, and accessed via leased lines or trusted provider networks, or on the client’s premises, and accessed by secure local network connections. They can be provisioned as either physical or virtual servers, but they’ll never be shared across multiple clients. Access to the servers and the cloud service will always sit behind the client’s firewall to ensure that only trusted users can even attempt to use it.

Private clouds, therefore, offer greater levels of security (depending on the exact set-up), but utilising smaller pools of servers means that they cannot always match the economies of scale, high capacities, redundancy and responsive scalability of public cloud models. These qualities can still, however, be achieved more readily than with more traditional fixed-capacity server configurations on local or trusted networks.

For more information and insight on cloud computing, cloud servers and other related services you can visit this cloud infrastructure provider’s site.

Monday 20 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 2: Scalability & Reliability

Having, in the first part of this article, described what cloud servers are and how they work within the context of cloud computing, the following instalments go on to discuss how they generate some of the key features that drive the adoption of the cloud at both a personal and enterprise level. This instalment covers the two performance-related benefits of scalability and reliability.

Scalability
By combining the computing power of a significant number of cloud servers, cloud providers can offer services which are massively scalable, with no practical limits on capacity. With hypervisors pulling resource from the plethora of underlying servers as and when needed, cloud services can be responsive to demand, so that increased requests from a client’s particular cloud service can be met instantaneously with the computing power that it needs. There is no issue with functions being limited by the capacity of one server, and therefore no need for clients to acquire and configure additional servers when there are rises in demand. What’s more, with cloud services where the product has already been provisioned, the client can simply tap into the service without the costs and delays of the initial server set-up that would otherwise be incurred.

For those clients whose IT functions are susceptible to large fluctuations in use, for example websites with varying traffic levels, pooled cloud server resource removes the chance of service failure when there are spikes in demand. On the flip side, it also removes the need to invest in high-capacity setups - as contingency for these spikes - which would go unused for a large proportion of the time. Indeed, if the client’s demands fall, the resource they use (and pay for) can also reduce accordingly.

Reliability - Redundancy & Uptime
As mentioned, the high number of cloud servers used to form a cloud service offering means that services are less likely to be disrupted by performance issues or downtime due to spikes in demand. However, the model also protects against single points of failure. If one server goes offline it won’t disrupt the service to which it was contributing resource, because there are plenty of other servers to seamlessly provide that resource in its place. In some cases, the physical servers are located across different data centres and even different countries, so that there could conceivably be an extreme failure causing a data centre to go offline without the cloud service being disrupted. In some models, backups are specifically created in different data centres to combat this risk.
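The toy sketch below illustrates both points: demand being spread across a pool of servers, and the pool absorbing a server failure. The server names and demand figures are invented, and real cloud schedulers are of course vastly more sophisticated than this.

```python
# A pool of interchangeable servers backing one cloud service; True means online.
pool = {"server-a": True, "server-b": True, "server-c": True}

def serve(demand_units: int) -> dict[str, int]:
    """Spread demand across whichever servers are still online."""
    online = [name for name, up in pool.items() if up]
    if not online:
        raise RuntimeError("total outage - no servers left in the pool")
    share, extra = divmod(demand_units, len(online))
    # The first `extra` servers each take one additional unit of demand.
    return {name: share + (1 if i < extra else 0) for i, name in enumerate(online)}

print(serve(90))           # {'server-a': 30, 'server-b': 30, 'server-c': 30}
pool["server-b"] = False   # one server fails...
print(serve(90))           # ...the survivors absorb its load: 45 units each
```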

In addition to unforeseen failures, pooled server resource can also allow maintenance - for example, patching of operating systems - to be carried out on the servers and networks without any disruption or downtime for the cloud service. What’s more, that maintenance, as well as any other supporting activities optimising the performance, security and stability of the cloud servers, will be performed by staff with the relevant expertise working for either the cloud service provider or the hosting provider. In other words, the end user has no need to invest in acquiring that expertise themselves and can instead focus on the performance of the end product.

For more information and insight on cloud computing, cloud servers and other related services you can check out this blog from a cloud industry insider.


Tuesday 14 May 2013

An Introduction to Cloud Servers & Their Benefits - Part 1: Definitions

The concept of cloud computing appears omnipresent in our modern world as we rely on on-demand computing to manage our digital lives across multiple devices - mobiles, tablets, laptops - whilst at home, in the office or on the move. This trio of articles introduces the key component in cloud computing, the servers that underpin each service and provide the computing resource, as well as describing how they provide some of cloud computing's most notable benefits.

Definitions
Cloud Servers: As mentioned above, these can be defined as the servers that are used to provide computing resource for cloud computing. In essence, they are servers which are networked together to provide a single pool of computing power which cloud-based services can draw resource from.

Cloud Computing: Describes any computing service whereby computing power is provided as an on-demand service via a public network - usually the internet. Broadly, cloud services can be categorised using the following three models:
  • IaaS – Infrastructure as a Service:
    • Pooled physical cloud server and networking resource (without any software platforms). Instead of the user being provided with a single distinct physical server, multiples thereof or shares therein, they are provided with the equivalent resources - disk space, RAM, processing power, bandwidth - drawn from the underlying collective cloud servers. These IaaS platforms can then be configured and used to install the software, frameworks, firmware etc (e.g., solution stacks) needed to provide IT services and build software applications.
  • PaaS – Platform as a Service:
    • Virtualised software platforms using pooled cloud servers and network resource. These services offer the collective physical resources of IaaS together with the above-mentioned software bundles so that the user has a preconfigured platform on which they can build their IT applications.
  • SaaS – Software as a Service:
    • Cloud based applications provided using pooled computing resource. This is the most familiar incarnation of cloud computing for most members of the public as it includes any application - such as web based email, cloud storage, online gaming - provided as a service. The applications are built and run in the cloud with end users accessing them via the internet, often without any software downloads necessary.

How Cloud Servers Work
Traditional computing infrastructure models tend to revolve around the idea of a single server being used for a particular IT function (e.g., hosting, software applications etc.), whether that server is a dedicated server - i.e., for the sole use of that client - or shared across multiple clients. Shared servers may have used a single software/platform installation for all of their IT functions/clients, or they may have delivered Virtual Private Servers (VPS), where each client has a distinct operating environment which they can configure.

Cloud computing can deliver similar virtualised server environments, but they use resource drawn not from one, but from a multitude of individual physical cloud servers which are networked together to provide a combined pool of server resource. In a sense, it uses a platform that could be considered a form of clustered hosting, whereby the resource demands of an individual client’s IT functions are spread across numerous distinct servers. However, with cloud hosting the resource pool has enough capacity, with sufficient servers, to provide resource which multiple clients can tap into as they need to.

Within the infrastructure of cloud services, cloud servers are networked with what are called hypervisors, which are responsible for managing the resource allocation of each cloud server. In other words, they control how much resource is pulled from each underlying cloud server when demands are made of the pool of servers, as well as managing the virtualised operating environments which utilise this resource.
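As a deliberately simplified sketch of that allocation role, the Python below places virtual server requests onto whichever pooled host has the most headroom. The host names and capacities are invented, and a real hypervisor layer also virtualises CPU, storage and networking rather than just tracking RAM.

```python
# Free RAM (in GB) on each physical cloud server in the pool; illustrative figures.
hosts = {"host-1": 64, "host-2": 64, "host-3": 64}

def place_vm(vm_name: str, ram_gb: int) -> str:
    """Place a virtual server on the pooled host with the most free capacity."""
    best = max(hosts, key=lambda h: hosts[h])
    if hosts[best] < ram_gb:
        raise MemoryError(f"no host has {ram_gb} GB free for {vm_name}")
    hosts[best] -= ram_gb
    print(f"{vm_name} -> {best} ({hosts[best]} GB still free there)")
    return best

place_vm("client-vm-1", 48)  # lands on host-1, leaving 16 GB there
place_vm("client-vm-2", 32)  # lands on host-2 (now the roomiest host)
place_vm("client-vm-3", 60)  # lands on host-3
```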

For more information and insight on cloud computing, cloud servers and other related services you can check out this blog all about cloud servers and hosting.

Friday 12 April 2013

Cambridge - A Few Interesting Facts

Front of the college Peterhouse on Trumpington Street. (Photo credit: Wikipedia)
Cambridge is a world-famous city, largely for one reason: its university. The institution does indeed dominate the town’s history and continues to shape its profile today, and as a result the town has had a notable impact on the wider culture and wealth of the country for the last 800 years. The following article provides a handful of interesting facts about the city that you may or may not have known already, and that give an idea of its stature.

The Old University
As mentioned, Cambridge and its university are essentially synonymous - the reason that the city has such a global profile. The university is not only one of the top five in the world but can claim an almost unrivalled heritage, being as it is the second oldest in the English-speaking world, and the third oldest still in existence - behind only Oxford and Bologna. In fact, the institution owes its very existence to a decamping from Oxford in the first place, following disputes there between the scholars and the locals. This first group of incoming scholars can be dated back to 1209, although the university didn't receive its royal charter until 1231. The first of its colleges that we still know today can be traced back to the 13th century, with the founding of Peterhouse by the Bishop of Ely in 1284.

Scientific Soccer
The modern game of football may have been given its moniker by the other university, in Oxford, but Cambridge can be considered to have been instrumental in its development. Arguably the first ever game of what we would recognise as football or soccer was played in the centre of the city on Parker’s Piece - a park still popular with locals and students alike. The game in 1848 was the first to use the Cambridge Rules, which went on to be a prime influence behind the first ever set of standardised association football rules 15 years later. What’s more, many of the fundamental tactics that shape the way the game is played to this day can be attributed to the university’s team. The Combination Game, as it came to be known, promoted the idea of each player having a position on the pitch, and therefore a role in the team, as well as a reliance on passing the ball in place of dribbling and charging. These revolutionary changes are taken for granted now but were labelled ‘scientific’ in the 19th century, and many have credited their development to the Cambridge University side of 1882.

Granting of City Status
Cambridge had been granted a town charter as far back as the 12th century. However, due in part perhaps to a number of episodes - like the one in which it found itself on the wrong side of the Peasants’ Revolt in the 14th century, leading in turn to a revised charter and more control placed in the hands of the university - as well as the lack of a cathedral, it took until the mid-20th century for the town to gain city status. To the surprise of many who assume that Cambridge is a typical cathedral city, it still doesn't have a cathedral and instead falls within the diocese of Ely.

Grant being the Operative Word
It may be well known that the town’s name can be ascribed to its position at the bridge over the famous River Cam - the iconic scene of punting students on a sunny afternoon - but what is perhaps not so well known is that the river actually owes its current name to the town, and not vice versa. The Anglo-Saxon name for the river was Granta, and the name for the town was therefore Grantabrycge, meaning ‘bridge over the River Granta’. Indeed, the Anglo-Saxon abbreviation for the town, as seen on coins minted there, used to be Grant. This name has subsequently been corrupted down the centuries to arrive at the modern ‘Cambridge’, whilst the river has since borrowed the ‘Cam’ back. The name Granta is still used to refer to the river in some contexts, including a couple of its tributaries, and traces can be seen in modern place names such as Grantchester - a village on Cambridge’s outskirts (which is allegedly home to the highest concentration of Nobel Prize winners in the world).

Security Challenges Faced by Cloud Hosting - Handling Data

The final part of this article looks at how and where data is stored or handled and the issues that arise in cloud computing through the process of creating multiple instances of data across multiple server platforms. Cloud computing relies on this mechanism for many of its key benefits but, by doing so, invites further challenges for data security.

Data Protection
Data collection and storage is usually bound by legislation or regulation which varies depending on the jurisdiction under which a service falls. Most prominent regulations, however (e.g., those in the US and Europe), share certain principles in common which demand, for example, that data is collected with the subject’s permission, with their full understanding of what the data will be used for, only if the data is relevant to the stated purpose, only for that stated purpose, with transparency and with accountability. For the subject of the data this should mean that they consent to the service provider collecting data relating to them, and that they know what data that is, who has access to it and why, as well as how to access it themselves if they want to.

It is therefore paramount for IT service providers, who have stewardship of any data, that they are able to identify where data is stored within those services that they provide, how to access it and whether it is secure. However, the abstraction of cloud services in particular can cause challenges for those who utilise them to store or process data because they cannot necessarily guarantee where this data is at any given time. The physical location and guardianship can be obscured, with data hosting sometimes crossing different sites, geographical boundaries and even jurisdictions.

In such cases where private information is involved, the answer often lies with private clouds employing on-site hosting, as mentioned in earlier parts of this article, but there is often a trade-off with some of the other benefits of cloud which are discussed below.

Multiple Data Instances
Two of cloud computing's biggest selling points are redundancy and scalability. These are often achieved by utilising multiple servers to provide the underlying computing resource, with the data within a cloud service therefore ultimately stored across numerous servers. Moreover, cloud structures will also create multiple instances of data across these servers to provide a further layer of redundancy protection. However, the more servers that data is shared across, the greater the risk that the data may be susceptible to security vulnerabilities on one of those servers (e.g., malware, hacks); whilst the more instances there are of a piece of data, the greater the risk (by definition) that it may be accessed and used by unauthorised users. Essentially, data stored in one place needs to be protected once; data stored in 100 places needs to be protected 100 times.
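
The 'protected 100 times' point can be made concrete with a toy probability model: if each server carries some small, independent chance of harbouring an exploitable vulnerability, the chance that at least one copy of the data is exposed grows quickly with the replica count. The Python sketch below uses an invented 1% per-server figure purely for illustration.

```python
# A toy model of how replica count multiplies exposure: each server is
# assumed (purely for illustration) to have an independent 1% chance of
# carrying an exploitable vulnerability.
def p_any_copy_exposed(p_server: float, replicas: int) -> float:
    """Probability that at least one replica sits on a vulnerable server."""
    return 1 - (1 - p_server) ** replicas

for n in (1, 3, 10, 100):
    print(f"{n:>3} replicas -> {p_any_copy_exposed(0.01, n):.1%}")
# prints roughly: 1.0%, 3.0%, 9.6%, 63.4%
```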

What’s more, as each server and platform is likely to be shared, particularly in the public cloud model, each data instance may be subject to further security threats introduced, inadvertently or otherwise, by the third-party users who share the resources. In a private cloud, however, this threat is reduced, as the cloud resource sits behind a single organisation's firewall and fewer instances of the data are created in the first place (there are fewer servers to pool). Consequently there is always a degree of trade-off between introducing security risk and the level of redundancy and scalability built into a system (although of course redundancy can itself prevent data loss). Private clouds may be more secure, but with a smaller pool of resources they cannot match the levels of redundancy and scalability offered by the vast capacities of public clouds.

Tuesday 9 April 2013

Security Challenges Faced by Cloud Hosting - Building in Security

A technology architecture for a private cloud (Photo credit: Wikipedia)
As mentioned in part one of this article, there are multiple stages at which information stored on cloud hosting platforms must be protected against data loss and unauthorised access. The first step is to secure the physical elements of a cloud hosting platform, as described there; the additional steps involve architectural and software-based security measures to protect not only the platforms on which the data is stored, but also the data in transit and the points of access that allow valid users to interact with the data.

Public Cloud Models
Cloud offerings, including cloud hosting, can be broadly categorised in terms of the way they are deployed (regardless of whether they are Infrastructure, Platform or Software as a Service) as Public Cloud, Private Cloud or Hybrid Cloud (a combination of the two). Much of the distinction between public and private clouds revolves around levels of security and privacy rather than technical specifications. As the name suggests, public clouds use points of access which are accessible on public networks (e.g., the internet), public networks to transfer information, and shared clustered cloud servers to store information. Essentially, anyone can ‘knock on the door’ of the cloud service, attempt to intercept its information in transit and potentially share its server resources. The services should, of course, be protected by endpoint authentication, data encryption and anti-virus/firewall measures on the server platform to keep data secure, but they are exposed to ‘attack’ at almost every point in their architecture. It is therefore important that consumers of such services are aware of what risks each service carries and what the provider puts in place to safeguard their customers’ data.
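
Of those measures, data encryption most directly protects information crossing a public network: even intercepted traffic is useless without the key. Below is a minimal Python sketch using the third-party cryptography package's Fernet recipe; it is illustrative only, and the genuinely hard part in practice - distributing and storing the key securely - is deliberately left out.

```python
# A minimal sketch of symmetric encryption for data that must cross a
# public network (requires the third-party 'cryptography' package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, provisioned via a secure key store
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: ...")  # ciphertext, safe to transmit
print(token[:40], "...")

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == b"customer record: ..."
```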

Private Cloud
Organisations dealing with highly sensitive data, however, may demand more restrictions on who can attempt to access the cloud service, on the networks it utilises and on the sharing of cloud servers. In particular, some organisations will be governed by regulation which demands that they retain control of data for which they are ultimately responsible.

Private clouds may employ differing architectures, but they are defined by the provision of such restrictions. Servers can be located on an organisation’s own premises or within a data centre facility, but they will be ring-fenced for the use of that sole client; whether through physical hardware separation or virtualised separation between server clusters, an organisation’s cloud platform will sit behind its own firewall. What’s more, to protect data in transit and to prevent untrusted users from accessing the cloud, private clouds can again use either physical or virtualised separation from public shared networks. For example, an organisation can utilise local area network (LAN) connections to access a cloud hosted on internal on-site servers, or a physically distinct leased line when connecting to servers in a remote location. Alternatively, technologies such as MPLS (Multiprotocol Label Switching) can be used to provide organisations with trusted network connections, controlled by individual providers, across public network infrastructure. The latter can provide more flexibility and allow the organisation to benefit to a greater extent from the scalability that cloud hosting providers can offer.
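
Whichever separation technique is used, the net effect is that the service only answers traffic originating from trusted networks. The Python sketch below expresses that rule using the standard library's ipaddress module; the address ranges are invented examples, and in a real deployment the check would be enforced at the firewall or network layer rather than in application code.

```python
# A minimal sketch of restricting access to addresses from trusted
# networks (e.g. an office LAN or an MPLS VPN range).
import ipaddress

TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal LAN (illustrative)
    ipaddress.ip_network("192.168.50.0/24"),  # hypothetical MPLS VPN range
]

def is_trusted(source_ip: str) -> bool:
    """True if the source address falls inside any trusted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)

print(is_trusted("10.4.2.17"))    # True  - on the LAN
print(is_trusted("203.0.113.9"))  # False - arbitrary public address
```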

Hybrid Cloud
A hybrid cloud combines elements of public and private clouds, and so can provide the security that organisations require for their sensitive and private data whilst allowing them to access cost-efficient scalability in the public cloud for their non-sensitive operations. For example, an organisation may store all of its protected client data in systems and databases hosted on site in a private cloud, as required by regulation, but pull computing resource from a public cloud for its brochureware website’s hosting platform.
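
The decision at the heart of a hybrid deployment is simply which workloads may leave the private environment. The short Python sketch below makes that routing rule explicit; the classification labels and backend names are invented stand-ins rather than any real provider's API.

```python
# A minimal sketch of the hybrid-cloud routing decision: sensitive
# records stay on the private cloud, everything else may use cheaper
# public capacity. All labels are invented for illustration.
SENSITIVE_KINDS = {"client_record", "payment_details", "medical"}

def choose_backend(kind: str) -> str:
    """Route a record type to the appropriate hosting environment."""
    return "private-cloud" if kind in SENSITIVE_KINDS else "public-cloud"

for kind in ("client_record", "brochure_page", "payment_details"):
    print(f"{kind:>16} -> {choose_backend(kind)}")
```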

Data Centre Expertise
The previous part of this article mentioned the benefits of a data centre location in terms of the physical maintenance of servers preventing data loss. Similarly, it is worth noting that both public clouds and private clouds which utilise a third-party data centre location for their server hosting (whilst introducing vulnerabilities in data transfer) can benefit from on-site expertise in the maintenance of software and anti-virus measures, including, for example, patching, to optimise both the preservation and security of data.

Security Challenges Faced by Cloud Hosting - Physical Security

Data Center (Photo credit: bandarji)
The following two posts explore the topic of cloud hosting and the challenges it faces in providing secure data environments for enterprise consumers. They also discuss the measures taken to combat these challenges, whether they be physical risks to hosting platforms or cybercrime.

The Need for Secure Data
The concept of security in all aspects of computing can be said to fall into two areas: the preservation of data and the control of data. The first of these concerns the ability to ensure that data is not lost or corrupted, whether it be sensitive (i.e., private) or not. Data preservation may be essential for the effective operation of a business, for example to be able to contact suppliers and clients or to monitor and analyse business performance (business intelligence). In many cases firms are required by regulatory bodies to preserve data for set periods of time in order to provide audit trails of their activities. Where data is deemed personal, sensitive or private in relation to customers, suppliers or employees, firms will also be required by data protection laws to maintain that data.
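
As a small illustration of the retention requirement, a purge routine should refuse to delete a record until its mandated retention period has elapsed. The Python sketch below uses a seven-year period as a commonly cited example; the figure is illustrative, not a statement of any particular regulation.

```python
# A minimal sketch of a retention check: records may only be purged once
# the mandated retention period has elapsed. The seven-year figure is a
# common example, not a statement of any specific regulation.
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)  # illustrative seven-year period

def may_purge(created: date, today: date) -> bool:
    """True once the mandated retention period has elapsed."""
    return (today - created) >= RETENTION

print(may_purge(date(2005, 3, 1), date(2013, 4, 9)))  # True:  held > 7 years
print(may_purge(date(2012, 3, 1), date(2013, 4, 9)))  # False: must be kept
```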

The second issue pertains to the risk of sensitive data being seen by those who should not have access to it. Again, data protection laws govern firms here: personal data should only be obtained with an individual’s permission, and firms must then control who has access to it, restricting any unwarranted access. In addition, firms will invariably want to keep their own business operations private as well, to prevent competitors gaining an advantage over them.
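
In code, the simplest form of that control is an explicit access list consulted on every read, with each attempt available to feed the audit trails mentioned above. The Python sketch below is deliberately minimal; the dataset names and roles are invented for the example.

```python
# A minimal sketch of access control over personal data: each dataset
# lists the roles that may read it, and every attempt is checked.
# Dataset names and roles are invented for illustration.
ACCESS_LISTS = {
    "employee-salaries": {"hr", "payroll"},
    "supplier-contacts": {"procurement", "hr"},
}

def can_read(role: str, dataset: str) -> bool:
    """True only if the role appears on the dataset's access list."""
    return role in ACCESS_LISTS.get(dataset, set())

print(can_read("payroll", "employee-salaries"))  # True
print(can_read("sales", "employee-salaries"))    # False - unwarranted access
```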

All IT infrastructure needs to confront these security issues, whether it be personal or enterprise-level computing, and this has been a particular challenge for cloud computing in general, including cloud-based hosting.

The Vulnerabilities
Cloud computing services ultimately require networks of physical servers to create the pool of computing resource from which clients can access their computing as a service, which means that all cloud resources have some form of physical location. In addition, cloud services rely on a point at which end users can access them, often publicly available on the internet, as well as, of course, a public network such as the internet to transfer the data used by the service. These three elements of a typical public cloud service each have their own vulnerabilities in terms of the protection and preservation of data.

Physical Security
In terms of the physical infrastructure used to build a cloud service, many of the security challenges are the same as those faced by any other hosting platform. To keep data secure, providers first need to keep the infrastructure secure and running, and the data centres where cloud servers are housed take great measures to these ends. In terms of access, they ensure that the facilities themselves are secured against unauthorised personnel by using tools such as biometrics, security cameras, guards and limited access to individual server suites. This not only controls the risk of intentional sabotage or physical hacks but also the risk of accidental damage caused by one engineer affecting another organisation’s servers, for example.

Furthermore, servers and network infrastructures are protected against physical damage using advanced fire protection systems and environmental controls such as temperature management. Controlling the temperature inside data centres is one of the primary expenses of a data centre provider due to the vast amount of heat generated by working servers. The aim of the exercise is to ensure that servers can run at their optimal temperatures; left unchecked, heat damage could take cloud servers offline completely. Data centres employ techniques such as chiller units, ventilation and water cooling to keep temperatures regulated and servers running smoothly.

Cloud servers and their networks also benefit from the general expertise of data centre providers to keep the hardware maintained and up to date, ensuring that the chances of other hardware failures are reduced. As with alternative hosting solutions which locate servers in data centres, such as colocation, dedicated hosting and VPS (virtual private servers), this expertise may be accessed at a fraction of the cost it would take for a business to deploy in-house.

However, these physical security measures are only the first step. The second part of this post explores the efforts taken to keep cloud hosting software operating smoothly and prevent data from falling into the wrong hands.

Friday 22 March 2013

Cloud vs Dedicated Hosting - Part 4: Security

Having compared cloud with traditional dedicated hosting solutions on their respective costs and performance issues in the preceding posts in this series, this final instalment provides further analysis of the two in regard to security issues.

Security

For many private and enterprise customers, security is the primary area of concern when making the switch from traditional localised computing to cloud computing solutions, particularly when it comes to the topic of hosting. Businesses that require high levels of security on their hosting platforms have traditionally flocked to dedicated hosting solutions to avoid the vulnerabilities introduced by sharing servers with other companies or business functions, and these enterprise customers have since been somewhat reluctant to make the switch to cloud (despite the efficiencies mentioned previously).

Dedicated Server Security

Dedicated servers have, by design, features which are conducive to high levels of security, in that they are individual platforms on discrete servers operated for a single purpose - i.e., they do not share disk space or computing power with other services or businesses. This distinction leads to a number of security benefits in terms of both protecting access to hosted data and preserving that data. With no other functions or companies sharing the hosting platform, the number of possible points of entry - and therefore the number of security vulnerabilities on the server - is reduced, minimising the risk of hackers or malware accessing or corrupting the data. What’s more, a business sharing a host server with third parties would have no control over the effectiveness of the measures those parties take to secure these vulnerabilities. The dedicated model also removes the competing demands placed on the physical computing capabilities of the server by other hosting platforms, solution stacks or businesses’ IT projects, meaning that there is less risk of server or network failures leading to the unavailability or loss of data.

Cloud Hosting Security

Cloud hosting platforms therefore need to address these issues afresh, as they fundamentally rely on the concept of shared or pooled computing resource. Public cloud models will struggle to offer the same protection as a dedicated platform because they not only share physical hosting infrastructure across multiple virtualised hosting platforms for disparate customers, but also carry further vulnerabilities in that the access points to such services are across public networks - in other words, anyone can ‘knock on the door’, and any information being transferred between access point and server is at risk of being intercepted. Furthermore, an organisation consuming the service has no influence or control over the trustworthiness of the others who may have signed up to share the pooled resources.


If you want to find out more about the respective benefits of cloud and dedicated hosting platforms then you can check out this blog from Interoute’s Matt Finnie.

Thursday 21 March 2013

Cloud vs Dedicated Hosting - Part 3: Enterprise Focus

SOMF Cloud Computing Model (Photo credit: Wikipedia)
The third post in this series looks at some of the pros and cons of dedicated and cloud hosting solutions when it comes to providing the services that enterprise customers actually demand. Much focus in the industry has in the past been concentrated on the technical capabilities of the respective platforms, but the key to adoption across the enterprise is how that technology satisfies business requirements.

Customer Experience
A traditional weakness of cloud computing, and perhaps a consequence of its on-demand access model, is the area of SLAs and targeting enterprise consumers’ needs effectively. The utility style of the service means that consumers have, to some extent, fitted their computing needs to the cloud services available, rather than vice versa, in order to benefit from the economies of scale and reduced costs. After all, the service is to a large extent defined and packaged up by the provider, with the consumer tapping into it as and when they need it.

Dedicated platforms have, in the past, outperformed cloud in this area, with the ability to provide customisation and control over individual servers and the use of more suitable SLAs on better defined services. Businesses have been able to take their IT requirements to a provider of dedicated hosting and have the platform built around them (cost permitting), leading to a more bespoke set-up.

However, there is now a concerted effort within the cloud sector to provide better targeted enterprise applications with in-built flexibility, scalability and security, as well as SLAs which accurately reflect the performance of these services and the needs of enterprise. An example of this move away from a one-size-fits-all model is the development of cloud application stores, where organisations can purchase the components they need individually to construct a cloud package which is tailored to their business needs. In other words, providers create and define individual components, but customers configure their overall bespoke service using these elements.

Choice
The benefits mentioned above, and in the preceding posts, result in arguably the key long-term driver for enterprise adoption of cloud hosting and cloud computing in general: choice. Ultimately, the flexibility of the model means that anything is theoretically possible for an enterprise customer if they have the budget and their provider has the resources.

The same can be said of traditional dedicated platforms (at a greater cost), but the scalability issues encountered by businesses using dedicated servers are, as stated, negated with cloud hosting by the effective removal of the concept of fixed capacity. As mentioned previously, dedicated platforms can be used to provide a bespoke hosting solution for enterprise customers at any level, but once the platform is established any further changes to it may require time and expense. With cloud hosting, if a business wishes to try a particular project or campaign as a short-term venture, it can do so with minimal lead time, pay-as-you-go costing and responsive scaling, thus reducing the costs and the resulting risks of the venture.
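
As a rough illustration of why that matters, the toy Python sketch below compares pay-as-you-go capacity, which follows demand interval by interval, with fixed provisioning sized for the peak. Every number in it - the traffic figures and the per-server capacity - is invented for the example.

```python
# A toy comparison of pay-as-you-go scaling vs fixed provisioning for a
# short-lived campaign. All figures are invented for illustration.
def servers_needed(requests_per_min: int, per_server: int = 1000) -> int:
    """Smallest number of servers covering the demand (ceiling division)."""
    return max(1, -(-requests_per_min // per_server))

demand = [200, 900, 4500, 12000, 3000]        # requests/min per interval
scaled = [servers_needed(d) for d in demand]  # cloud: capacity tracks demand
fixed = max(scaled) * len(demand)             # dedicated: sized for the peak

print("per-interval servers:", scaled)                  # [1, 1, 5, 12, 3]
print("pay-as-you-go server-intervals:", sum(scaled))   # 22
print("fixed-provisioning server-intervals:", fixed)    # 60
```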

Part 4 of this series of posts goes on to focus in more detail on the topic of security and how the two hosting solutions compare.

If you want to find out more about the respective benefits of cloud and dedicated hosting platforms then you can check out this blog on IaaS.