Wednesday 21 November 2012

The History of Family Offices - Through the Ages

Having briefly described the forms that family offices can take in the present day, this second installment of the article looks at how we arrived at these business models through centuries of evolution.

Ancient History
The concepts behind family offices have developed independently across the world, especially in Asia, where there was a strong tradition of family dynasties in countries such as China and Japan. These dynasties required the means to preserve their legacies and power, and a significant facet of that would have revolved around their wealth. Some of the earliest incarnations of family office-type teams can be traced to these roots, with evidence of people being employed to preserve the wealth and welfare of, for example, the Shang dynasty in China as long ago as the 17th century BC.

Early European History
In Europe, the idea of a family office can trace its financial history back to the inception of the concept of banking for the sake of sustaining and preserving wealth; in contrast to money lending. One of the earliest examples of this practice can be attributed to the age of the crusades when a need arose to place wealth in trust whilst noblemen were away fighting in foreign lands. The demand for trusts to conserve wealth remained throughout the many conflicts that affected the continent down the subsequent centuries.

Generally speaking, family office-type and trust services were provided by traditional banking operators such as the Jewish communities and the Protestant Swiss and Scottish bankers. However, prominent, sometimes ruling, European families from the Middle Ages onwards - those that we now think of as dynasties, like the Medicis of Florence and, later, the Rothschilds, who spread across western Europe - also created their own banks and would have provided similarly styled services. They would have managed wealth creation on behalf of their own families but, in turn, would also have offered services to the other prominent families of the time, effectively acting as forerunners of what we would now think of as private banks or specialist banking for HNWs. For such families, the lines between entities that we would class as family offices and their banking services would no doubt have been very blurred, because the family wealth was inextricably tied up with their banking activities. The families who were creating these early banks, and providing financing for others, were also likely to be those with the wealth to warrant family office-style services.

Away from financial management, landowners from the Middle Ages onwards also employed people to run their estates and to oversee the workers who toiled on their lands and served the family in their day-to-day lives. Through the feudal systems of medieval Europe (for example, after the Norman invasion of England), ownership of land became concentrated within the minority nobility, who in turn could grant access to that land to vassals in exchange for their loyalty and labour. These labourers evolved through time, beyond the breakdown of feudalism, into teams of land workers and servants. These workers were free citizens, simply employed by the wealthy landowners to maintain the estate - its lands and properties. The concept of these servant classes arguably reached its zenith in the Victorian era, but as relative wealth declined the larger teams became the preserve of the ultra-wealthy only, creating the footprint for family offices.

Modern US History
The modern European incarnations of family offices in particular can therefore trace their lineage back to the medieval estate managers and family banks, but the modern concept is also heavily influenced by the resurgence of family offices as financial management organisations in the US. In fact, the term “Family Office” itself originates from modern US usage.

The American resurgence occurred at the end of the 19th century and the start of the 20th, when private offices were established by wealthy US families in response to a lack of third-party financial services, such as private banks, targeting ultra-HNW individuals. The banks were prohibited by US legislation from offering joined-up services, and so it was left to teams of advisors and other financial firms, such as accountants and legal partnerships, to provide the services associated with family offices, often creating family trusts in the process.

The specialist companies that started to appear focused on managing the financial elements of multiple wealthy families in contrast to the in-house offices that had existed before and that were previously popular across the pond. Latterly, relaxation in the US laws has allowed for the integration of family office services back into private banks.
© Stuart Mitchell 2012

Tuesday 20 November 2012

A Glossary of Networking Terms - Part 3

wireless access point (Photo credit: Wikipedia)
In the final instalment of this trilogy of articles highlighting some of the more common computer networking terms, the three terms covered are all integral to small home networks as well as enterprise-level networks. As such, they are terms with which it is useful for many people outside of the IT industry to be familiar.

Firewall
The concept of a firewall in computing takes its name from the physical construct used to prevent fire spreading between buildings, rooms or compartments. In computing, therefore, firewalls perform an analogous function, preventing packets of data that may harm a computer or network from passing through - an obvious example of such harmful data would be viruses. In practice, firewalls are software applications or hardware-based systems which sit at the connection points between LANs or individual devices and public WANs like the internet. They monitor all traffic attempting to enter or leave the LAN/device and grant or deny access to it based upon rules that the user is able to predefine and control.
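As a rough illustration of that rule-based filtering, the Python sketch below checks each packet against an ordered allow/deny list. The rules and field names here are invented for illustration - real firewalls (iptables/netfilter, hardware appliances) inspect live network traffic, but the first-match logic is analogous.

```python
# A minimal sketch of firewall-style rule matching (illustrative only).
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # e.g. "tcp" or "udp"

# User-defined rules, checked in order; the first match wins.
# Tuples are (action, protocol, port); None acts as a wildcard.
RULES = [
    ("allow", "tcp", 443),   # permit HTTPS
    ("allow", "tcp", 80),    # permit HTTP
    ("deny",  None, None),   # default rule: block everything else
]

def filter_packet(packet: Packet) -> str:
    """Return 'allow' or 'deny' based on the first matching rule."""
    for action, proto, port in RULES:
        if proto is not None and proto != packet.protocol:
            continue
        if port is not None and port != packet.dst_port:
            continue
        return action
    return "deny"  # fail closed if no rule matches at all

print(filter_packet(Packet("203.0.113.5", 443, "tcp")))  # allow
print(filter_packet(Packet("203.0.113.5", 23, "tcp")))   # deny
```

The "fail closed" default reflects common firewall practice: anything not explicitly permitted is blocked.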

Gateway/Router
What is now referred to as a router was originally, and in some contexts still is, termed a gateway. It can be thought of as the hub at the heart of a network through which all communications (data packet transfers) pass. The routing element describes the process of receiving data packets from one device, determining their destination address, comparing that to a list of known devices - whether they be games consoles or dedicated hosting servers for websites - and then forwarding the packets to that destination. Whereas the term ‘router’ has become more commonplace, the term ‘gateway’ is now used more specifically to describe devices which allow communication between networks/computers/programs that use differing protocols.
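The destination lookup described above can be sketched with Python's standard ipaddress module. This is a simplified illustration with a hypothetical three-entry routing table (the interface names are invented); real routers use highly optimised longest-prefix-match structures, but the principle is the same:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> outgoing interface.
ROUTING_TABLE = {
    ipaddress.ip_network("192.168.1.0/24"): "eth0 (local LAN)",
    ipaddress.ip_network("10.0.0.0/8"): "eth1 (corporate WAN)",
    ipaddress.ip_network("0.0.0.0/0"): "ppp0 (default: internet)",
}

def route(dst: str) -> str:
    """Pick the most specific (longest) matching prefix for an address."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(route("192.168.1.42"))  # eth0 (local LAN)
print(route("8.8.8.8"))       # ppp0 (default: internet)
```

The 0.0.0.0/0 entry matches everything, which is how a home router's "default gateway" to the internet works: any packet not destined for a known local network is forwarded upstream.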

Routers can take the form of physical devices or software applications and in practice many, like those used in home networks, can also provide the functions of a number of network elements such as network switches, modems (to connect to the rest of the internet), firewalls and wireless access points. They therefore also act in the more generic sense as ‘gateways’ to the local networks that sit behind them for any traffic being transferred to or from wider networks such as WANs and the internet.

Wi-Fi/Wireless
Wi-Fi is often said to be short for “wireless fidelity”, although the name was coined as a brand rather than a true abbreviation; in general it describes technologies that deliver digital communications between devices using radio waves. Networks that utilise Wi-Fi can interchangeably be referred to as wireless networks or wireless local area networks (WLANs). As with wired Ethernet, the technologies are standardised by the Institute of Electrical and Electronics Engineers (IEEE); however, the term “Wi-Fi” itself is a trademark of the Wi-Fi Alliance, the trade association that certifies that products adhere to those standards.

WLANs are created using a Wireless Access Point (WAP), which takes wireless data and forwards it, through a wired connection, to or from a router - often WAPs are integrated into routers themselves. The area in range of a WAP is called a wireless hotspot, and hotspots are now commonplace in people’s homes as well as throughout business premises and public spaces. In the latter two settings, WAPs can often be positioned so that hotspots overlap to provide extensive networks. The Wi-Fi standards ensure that any enabled device will be able to connect to a WAP and, although transfer speeds across wireless networks will generally be slower than their wired counterparts (and can be disrupted further by interference where competing signals use the same frequency channels), newer standards are still capable of delivering data-hungry services, such as streaming for video conferencing, without the restrictions of fixed wiring.

One of the major concerns with WLANs is security, because any device within range of a WAP can see the network and pick up the signal. Consequently, a number of measures are used to restrict access to private WLANs and to keep transmissions on those networks secure. Initially WEP (Wired Equivalent Privacy), and latterly WPA (Wi-Fi Protected Access) and WPA2, have been implemented to this end, ensuring that transmissions are encrypted and that passwords are required in order to connect to the network. In addition, networks can be configured to only allow connections from predefined devices by using their unique MAC (Media Access Control) addresses.
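As a minimal sketch, that MAC-based access control amounts to checking a device's address against a predefined allow list - the addresses below are invented for illustration, and note that on its own this is a weak control, since MAC addresses can be spoofed:

```python
# Sketch of a MAC-address allow list as used by WAP access control.
ALLOWED_MACS = {
    "aa:bb:cc:dd:ee:01",  # e.g. the family laptop
    "aa:bb:cc:dd:ee:02",  # e.g. a phone
}

def may_connect(mac: str) -> bool:
    # Normalise case so "AA:BB:..." and "aa:bb:..." compare equal.
    return mac.lower() in ALLOWED_MACS

print(may_connect("AA:BB:CC:DD:EE:01"))  # True
print(may_connect("aa:bb:cc:dd:ee:99"))  # False
```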

A Glossary of Networking Terms - Part 2

The second instalment in this trilogy of articles explaining some of the more common terms found in networking jargon introduces two terms which may already be familiar to many computer users as technologies they make use of, but without necessarily a full awareness of their definitions: Ethernet and VPNs.

Ethernet
The term Ethernet is most frequently used to describe a type of network cable, common in our homes and workplaces, but actually applies to the protocol which defines the cable and its associated technologies (e.g., computer ports) that can be used to connect computer devices to each other within a network. As a result the term can even be used in some contexts to refer to the networks (predominantly LANs - Local Area Networks) themselves that they form.

The Ethernet protocol is standardised by the Institute of Electrical and Electronics Engineers (IEEE) and is the most common such network connection used in the IT industry. The original Ethernet standard delivered a data transfer rate of 10 megabits per second and utilised copper coaxial cables (i.e., an inner cable which carries the signal, insulated by a secondary sheath) to do so. With the advent of successive standards such as Fast Ethernet (100 Mbit/s), transfer rates have risen significantly, currently reaching up to 100 gigabits per second. The Ethernet cables used to achieve these speeds have developed to include fibre optic and twisted pair (usually copper) cables alongside the original coaxial forms, with the most common and familiar cables in the home and workplace being twisted pair.

VPN
The abbreviation VPN is short for the term Virtual Private Network. This concept covers a broad array of technologies, including EVPN (Ethernet VPN), but essentially describes a secure connection between computers or LANs made across a public network such as the internet. VPNs allow communications between separate networks to be kept private and secure, with no-one able to intercept and/or view them in transit. Whereas secure WANs may have previously relied (and in some cases still do rely) on dedicated, physically distinct leased lines to ensure that information is transferred outside of the public domain, VPNs create what is called a tunnel to provide the same effect. A VPN tunnel is a virtualised equivalent ‘through’ which encrypted data packets are transferred - essentially, the data packets are cloaked to appear as though they are normal public network transmissions without providing any visibility of the data they contain. The packets containing the core data are encrypted and encapsulated within outer packets, which simply display information about the network gateway for which they are intended. The encrypted packets can only be decrypted to view their contents once they reach the predesignated destination computer.
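The encapsulation idea can be illustrated with a toy Python sketch. Note that the XOR "cipher" here is a deliberately trivial stand-in for real encryption (such as AES) and must never be used for actual security; the gateway name and addresses are likewise invented:

```python
# Toy illustration of VPN-style encapsulation - NOT real cryptography.
def toy_encrypt(data: bytes, key: int) -> bytes:
    # XOR stands in for a real cipher; it is its own inverse.
    return bytes(b ^ key for b in data)

def encapsulate(payload: bytes, inner_dst: str, gateway: str, key: int) -> dict:
    # Inner packet: the real destination plus payload, encrypted together.
    inner = f"{inner_dst}|".encode() + payload
    return {
        "outer_dst": gateway,              # only the gateway is visible in transit
        "body": toy_encrypt(inner, key),   # contents are opaque to observers
    }

def decapsulate(packet: dict, key: int) -> tuple[str, bytes]:
    inner = toy_encrypt(packet["body"], key)
    dst, payload = inner.split(b"|", 1)
    return dst.decode(), payload

pkt = encapsulate(b"hello HQ", "10.1.0.7", "vpn-gw.example.com", key=0x5A)
print(pkt["outer_dst"])        # vpn-gw.example.com
print(decapsulate(pkt, 0x5A))  # ('10.1.0.7', b'hello HQ')
```

An observer on the public network sees only traffic addressed to the gateway; the true destination and payload remain opaque until decapsulation at the far end, which is the essence of tunnelling.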

VPNs are a vital tool in business to allow workers to work securely off-site (as if they were in the office), to connect disparate branch offices, reduce physical business travel and to allow networks to incorporate and utilise varying types of computing device (e.g., tablet vs desktop). In turn, these opportunities all allow businesses to increase efficiency - reducing costs and boosting productivity - as well as improving flexibility and employee morale.

A Glossary of Networking Terms - Part 1

local area network (Photo credit: benschke)
Nearly every part of our lives these days is affected, influenced and often facilitated by the use of computers and, more to the point, networks of computers - whether it be the wireless networks of PCs, games consoles and phones sharing internet connections in our homes, or the vast business infrastructures that provide so many of the services we take for granted, even the internet itself. As with all areas of technology, though, the world of networking is thick with jargon and so can appear very esoteric to those who don’t work within the field. This article therefore aims to summarise a few of the terms that people might stumble across in their everyday encounters with networks.

Packets
Data packets, sometimes simply referred to as packets in a computing context, are to some extent self-explanatory in that they are pieces/units of digital data which are formatted into ‘packets’ in order to be transferred across networks. There are two elements to these packets: the information that is being transferred (sometimes called the payload) and the control data, which contains information to help the packet reach its destination. The control data can include the destination and source addresses, error-checking information, details of the size and type of packet (which protocols it follows) and information to help reconstruct bigger units of data where they have been fragmented into smaller packets. A helpful analogy is that of a letter, which carries all of the information it needs to reach its destination and be interpreted correctly by the reader, recorded before and after the content itself.
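The payload/control-data split, and the reassembly of fragmented data, can be sketched as follows. The field names here are illustrative rather than those of any real protocol (real IP headers carry many more fields, checksums included):

```python
# Sketch of splitting a message into packets (control data + payload)
# and reassembling it at the destination.
def fragment(message: bytes, src: str, dst: str, size: int) -> list[dict]:
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [
        {
            "src": src, "dst": dst,          # addressing
            "seq": i, "total": len(chunks),  # reassembly information
            "payload": chunk,                # the data being carried
        }
        for i, chunk in enumerate(chunks)
    ]

def reassemble(packets: list[dict]) -> bytes:
    # Packets may arrive out of order; the sequence numbers restore it.
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = fragment(b"a longer piece of data", "192.168.1.2", "10.0.0.9", size=8)
print(len(packets))                          # 3
print(reassemble(list(reversed(packets))))   # b'a longer piece of data'
```

Reversing the list before reassembly mimics out-of-order arrival: the sequence numbers in the control data are what let the receiver reconstruct the original message.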

LAN vs WAN
LAN is the abbreviation of Local Area Network; WAN is the abbreviation of Wide Area Network. LANs and WANs are to some extent defined in contrast to each other.

In simple terms, LANs are small networks which connect devices in one location. They are usually constructed using Ethernet cables (and their compatible technologies) and/or wireless protocols to connect the localised devices, all behind one firewall. As a result of the proximity of the devices, and therefore the technologies that can be used to connect them, LANs are able to offer high-speed data transfer between the interconnected devices. They are commonly deployed in single work locations such as offices but, over the last decade or so, have also become common in people’s homes, enabling computers and entertainment devices to talk to each other and share an internet connection. They are, by their nature, private networks (with communication possible between local devices behind a firewall) and usually have a single gateway to public networks such as the internet.

WANs, on the other hand, are networks which span multiple locations to interconnect individual devices or separate local networks. The term can therefore be applied to the entire network they form (including the LANs) or just the connections made between the separate localised networks. WANs can employ a variety of technologies to this effect. They can take the form of private networks, where dedicated leased lines or virtual private networks are used to securely connect disparate private LANs, or they can simply form open communication networks which the public can use to share information; in the latter sense, the internet can be considered a variation of a WAN. Private WANs are integral for businesses working across multiple locations, ensuring that each location can interact and communicate securely and offering the economies of scale that arise from the centralisation of infrastructure, such as the use of central servers for business hosting.

Thursday 15 November 2012

The Data Centre Arms Race - US & Asia

Image representing IBM (via CrunchBase)
The final part of this trilogy looking into the growth of data centre facilities around the world focuses on the existing power house of data centre construction, North America, in particular the US, and the emerging contenders within Asia.

US
There are currently thought to be five data centres that are as large or larger than that in Newport, Wales and four of them are to be found in the US. The smallest of these is the NAP of the Americas data centre, located within the urban sprawl of Miami in Florida. Matching the Newport facility for floor space at 750,000 square feet, it is not only a key installation for both the US military and global DNS infrastructure, but is a vital hub for IT Operations in the south east of the US and Latin America beyond.

Next up the chain is the home of Twitter’s servers, amongst others - the QTS Metro Data Centre in Atlanta, Georgia - with a square footage of 990,000. The building, which originated as a Sears distribution centre and, like many of the other contenders, was repurposed into a data centre, now houses its own substation to support its vast power consumption.

The single largest data centre building in the world is the Lakeside Technology Center. The only centre over one million square feet, at 1.1 million, it can be found in another re-purposed ex-Sears building, this time in Chicago (a city also home to Microsoft's largest facility, at 700k sq. ft). The size of this facility can be illustrated by the fact that, fed with 100 MW of energy, the data centre consumes more power than any other facility in the Chicago area excluding the city’s airport. This consumption is tempered, however, by the innovative use of a shared 8.5 million gallon reservoir of chilled brine. In common with most of those reported here, the facility is multi-tenant, offering colocation and business hosting to an array of clients. It is housed within a building, complete with gothic architecture, that Sears once used for its printing presses.

Although not a single building, the US’s largest data centre complex can be found in Las Vegas, Nevada with the expansion of the SuperNap 7 facility. The project currently has over 2m square feet of floor space and continues to grow. In total, this behemoth requires an energy capacity of a whopping 500 MVA.

Asia
If you were to believe the headline figures, Hebei Province in China is home to a data centre which would blow all of the world’s other largest facilities out of the water. The data centre, being built as a cloud computing city by American IT firm IBM in conjunction with Chinese company Range, claims a gigantic 6.2 million square feet of floor space - nearly six times the Lakeside facility in the US. In actuality, however, a large proportion of this space will be used for other purposes, such as offices, and the true footage of the data centre itself is considered to be in the region of 300,000 to 650,000 square feet.

Instead, the largest data centre on the continent is considered to be the Tulip Data City facility in Bangalore, India, with a floor space of around 900,000 square feet. In common with the Hebei Province centre above, it has been built with IBM and also offers 80,000 square feet of office space for customers. Although, due to its scale, it consumes 40 MVA of electricity, it can still classify itself as a green facility as, at a PUE of 1.9, it sneaks under the industry standard of 2.0.

As mentioned previously the challenge facing the data centre industry is to build big and to build green. Cloud computing and other new technologies are creating the demand for extra data centre capacity but this capacity can only be achieved in a sustainable and cost effective manner by finding smarter ways to manage the environments, primarily the temperatures, in which servers run to reduce their vast power consumption. However, with innovation such as the use of hot and cold aisles, thermal energy stores and the use of environmental resources, including external air and water, companies are opening up the possibilities for larger scale green data centre development.
© Stuart Mitchell 2012


The Data Centre Arms Race - UK & Europe

The following installments of this article look at the progress of this data centre arms race and a few of the new generation of large data centres that have been built across the world. There are, no doubt, some facilities which would warrant a mention that don’t appear below because of a lack of available data, particularly regarding those run by online application giants such as Google and Facebook for purely their own use. Information is more plentiful for data centres run as leased hosting and/or colocation facilities as they are looking to actively advertise their specifications.

When looking at the world’s largest data centres, there are a number of ways in which their size can be measured: the scale of their operation, based on power consumption, the number of servers in use or the total digital storage capacity; or their potential capacity, taking into account the number of units and total rack space available in each facility, including un-utilised capacity and potential space for colocation purposes.

This article focuses on the latter idea of capacity by comparing the floor space of each data centre’s buildings to get a scale of the actual facilities themselves rather than their technical capacities.

UK
Using this metric, the UK’s capital, London, is the largest data centre market in Europe; in other words, it collectively has more data centre floor space than any other European city or single location. This statistic, however, reflects the high number of individual data centre facilities to be found within the confines of the M25 and not necessarily the size of any individual building. The legacy of early data centre adoption has left London with an advanced data centre ecosystem despite the cost of real estate in the area - a cost which has, though, inhibited the growth of the largest facilities.

Consequently, the UK’s largest individual data centre can actually be found nearly 150 miles away in Newport, South Wales. Not only is it the UK’s largest, but it is also the largest in Europe, with 750,000 square feet of floor space. The scale of the centre means that it consumes 90 MVA of electricity and therefore incorporates its own electrical substation, which has the capacity to power a city of 400,000 inhabitants. It is also notable for its advanced security measures, including ex-SAS guards, 3-skin walls, perimeter fences and biometric identification. The data centre offers enterprise customers of varying sizes the ability to rent isolated suites, units and/or colocation space to fulfil their hosting requirements, from business hosting to managed hosting resellers for example. The first tenants of the data centre were BT and Logica.

Europe
Beyond the UK, the next largest data centre in Europe is still to be found within the British Isles. The home of Microsoft’s cloud operations in Dublin, Ireland is not only a behemoth of a data centre, with 550,000 square feet of floor space, but also a pioneer in green computing, with a PUE of a meagre 1.25. This energy efficiency is achieved without chiller units by combining the circulation of cool external air with smart containment of heat-producing equipment.

© Stuart Mitchell 2012

Wednesday 14 November 2012

How Data Centers Are Becoming Greener

Data Centre (Photo credit: Route79)
It’s all too easy when you’re surfing the net to completely forget the impact that doing so may have on the environment. We instinctively know that it’s greener to look up some information online than to drive down to the library, for example, but that is partly because we tend to think of the internet as somehow ethereal, with no physical base and therefore no tangible effect on the environment. However, all of the data that we view on the web must be stored somewhere, and the vast majority lives on servers in large data centers which, unfortunately, do have a significant environmental footprint.


Reports in 2007 found that Information and Communication Technologies (ICT) accounted for 2% of the world’s harmful gas emissions, with data centers in turn responsible for 14% of that figure. As our use of the internet and the trade in digital information grows - and in particular as cloud computing continues to take off, with our data stored remotely ‘in the cloud’ (i.e., on providers’ vast server networks) for us to access anytime, anywhere - the demand for data centers continues to grow. Providers are therefore increasingly looking for solutions and innovations to become more efficient, meeting the twin objectives of cutting their own costs whilst reducing their environmental impact.


All data centers comprise two key elements, each of which can provide a number of opportunities for financial and environmental efficiencies. The first is the actual IT equipment - the stuff that provides the core function and purpose of a center, such as the servers themselves and the network switches serving them. The second is all of the infrastructure that is required to house the IT equipment and keep it running efficiently and securely: cooling equipment, security devices, lighting and so on.

The ratio of the total energy used by a data center (IT equipment plus supporting infrastructure) to the energy used by the IT equipment alone is known as Power Usage Effectiveness (PUE) and is the industry standard for measuring efficiency. A PUE score of 2, for example, signifies that for every unit of power being consumed by the IT equipment a further unit is being consumed by the infrastructure.
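As a quick worked example of the calculation (the figures below are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 3,000 kW in total, 1,500 kW of which powers the servers:
print(pue(3000, 1500))  # 2.0 - one unit of overhead per unit of IT power

# A highly efficient facility with only 500 kW of overhead on 2,000 kW of IT load:
print(pue(2500, 2000))  # 1.25
```

A perfect (and unattainable) score would be 1.0, meaning every watt entering the building reaches the IT equipment; the lower the score above 1.0, the less energy is spent on cooling, lighting and other overheads.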


Renewable Energy
The first step to becoming a greener data center can be to ensure that the source of the energy or electricity being used is renewable. This can be achieved either by partnering exclusively with a supplier of renewable energy or by sourcing energy directly using sustainable methods. Some providers are going as far as locating solar farms on site to generate the energy they need.


Energy Monitoring
It is also important to have accurate and in depth monitoring of the energy that is consumed at each point within the data center so that further efficiencies can be spotted. Most providers will have monitoring in place to calculate the PUE score but the accuracy of this monitoring and the assignment of energy consumption between the IT equipment and infrastructure can potentially vary slightly from one center to another.


Energy Efficiency
It is unavoidable that the largest proportion of energy used by a data center will be consumed by its IT equipment, such as the servers, which are the fundamental purpose of the center. However, savings can still be made here, and throughout the supporting infrastructure, by employing the most energy-efficient hardware that can be sourced. A significant amount of energy is, for example, lost in inefficient power supplies before it even reaches the servers. Although this might mean a higher initial outlay, the power savings will, over the long term, translate into financial as well as environmental savings.


Temperature Management
Often the biggest consumer of power, aside from the IT equipment, is the equipment used for cooling the data center.


It is something of a myth that conditions inside a data center need to be kept at a low temperature. In reality, data centers can operate efficiently at temperatures up to 80°F (around 27°C), so providers can make immediate power savings by simply turning up the thermostat. In addition, the main chunk of the cost of keeping the interior of a building cool is usually spent on chiller units. As a result, providers are increasingly looking to other solutions to make both energy and cost savings. Amongst these alternatives is locating the data center in naturally cool environments, such as Alaska or Scandinavia, and allowing the cool air from outside to circulate through the building. There are also so-called free cooling mechanisms which (although not strictly free) use pumps to circulate cool air within the data center rather than chillers.


As well as circulating cool air, many data centers use cold water to reduce their temperatures. Again, this can be a cost-effective and sustainable option when, for example, data centers are located by their own source of water and purify the water themselves. Cooling water does need to be purified for this purpose, but not to the same extent as mains water, so a data center can carry out the process with less wasted energy if it does so itself using a local source.


Materials
As with any manufacturing process, key savings can be made in the production of all of the equipment used in a data center, from the servers to the cooling systems. By sourcing materials locally, for example, the initial carbon footprint of those materials can be cut. Once equipment has served its original purpose, it may also still have a life beyond that; servers which have been superseded in a particular role or function should be re-purposed within the center in another role where they are still adequately powerful. Those elements which cannot be reused within the data center may still be of use to others, and so reselling them may be a further option.


Finally, units which are completely redundant can still be broken down into their components and then repurposed or resold; failing that, the core materials in the components should be recycled as appropriate. All repurposing and reselling reduces the demand for new equipment to be built and acquired, and therefore, as well as saving the data center money on new purchases, it also reduces the consumption of raw materials and the energy used in their construction.


Most large data center providers are constantly exploring new and innovative ways to reduce their PUE scores and therefore their energy consumption, keeping costs down and increasing their green credentials. However, the booming demand for services such as cloud hosting and colocation means that this challenge will never go away.


© Stuart Mitchell 2011

Web Hosting - An Overview

If you have ever set up either a personal or business website you will no doubt have encountered the concept of website hosting and, depending on the level of service you needed, you may have investigated a few different hosting solutions. The following article aims to give a useful overview of the options that are available to you and how they meet the differing aims of security, availability, cost, technical guidance and performance.

A hosting provider will either provide clients with space on servers to store and make their website accessible or a location in which they can house their own web server.

Colocation Hosting:
When a hosting provider offers the use of a data centre to house the client’s own server, it is known as colocation. Although the client will need to supply the server they can still benefit from the physical safety and security of the data centre and the facility’s high bandwidth to boost performance and availability. According to the package they sign up for they may also benefit from expert maintenance supplied by the provider.

Shared Hosting:
For less critical, lower-performance websites, particularly where security is not such an issue, clients can opt to host their website on a shared server provided by the hosting company. In this scenario the site is stored on the same physical server as the websites of other clients (usually many others) and consequently, if performance problems, security breaches or data corruption occur on any of the sharing sites, they may have knock-on effects on the others.

Websites hosted on shared servers may also have to share resources such as server configurations and software, so shared hosting will not necessarily suit clients wanting a bespoke or tailored solution.

The big advantage of shared hosting, however, is that it is more affordable for smaller-scale operations; you can even find some resellers who offer free web hosting on shared servers, funded by advertising.

Virtual Dedicated Servers:
Otherwise referred to as a Virtual Private Server (VPS), this uses a shared physical server but employs a separate operating system installation or partition for each site, so that less resource is shared between sites and software can be configured more independently. Virtual dedicated servers therefore carry a lower risk of security issues, such as malware propagating from one site to another, and offer higher performance levels, as each virtual space can be set up independently.

Dedicated Hosting:
In this scenario a website is hosted on its own physical server, usually within a data centre, and therefore will not be affected by knock-on issues from other sites sharing the server. In addition, the client can fully configure the server to their needs, either with their own access or by handing over control to the hosting provider (see Managed Hosting).

Clustered Hosting:
The term clustered hosting applies to any solution in which the website is hosted on multiple servers, whether dedicated or shared. It suits clients looking for high levels of uptime - if one server fails, the website can still run off the remaining servers - and performance, as load balancing can be employed to distribute web traffic evenly so that no individual server is overloaded.
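
As an illustration of the load-balancing and failover behaviour described above, here is a minimal round-robin dispatcher sketch; the server names and request counts are hypothetical:

```python
class RoundRobinBalancer:
    """Minimal round-robin load balancer: spreads requests evenly across
    the cluster and drops failed servers so the site keeps running on
    the remaining ones."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._index = 0

    def next_server(self):
        # Pick the next server in turn, wrapping back to the start
        server = self.servers[self._index % len(self.servers)]
        self._index += 1
        return server

    def mark_failed(self, server):
        # A failed server is removed; traffic shifts to the survivors
        self.servers.remove(server)

# Hypothetical three-server cluster
cluster = RoundRobinBalancer(["web-1", "web-2", "web-3"])
first_six = [cluster.next_server() for _ in range(6)]   # two even passes
cluster.mark_failed("web-2")                            # simulate a failure
after_failure = [cluster.next_server() for _ in range(4)]
```

Real load balancers use more sophisticated strategies (weighting, health checks, least-connections), but the principle is the same: no single server is overloaded and no single failure takes the site down.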

Cloud Hosting:
This is essentially a form of clustered hosting whereby websites are stored on multiple servers via the cloud and can therefore benefit from the high levels of capacity, scalability and load balancing that the clustered solution offers whilst bringing it to a wider consumer market. With cloud hosting the client has access to a vast resource of servers so that they can use and pay for as much or as little as they need at any given time without the physical hardware restrictions of traditional hosting.
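
The pay-for-what-you-use principle can be sketched as a simple sizing calculation; the per-server capacity figure below is an assumption chosen purely for illustration:

```python
import math

# Assumed for illustration: one virtual server handles ~100 requests/second
CAPACITY_PER_SERVER = 100

def servers_needed(requests_per_second):
    """Pay-as-you-go sizing: rent only as many virtual servers as current
    traffic requires, with a floor of one, rather than provisioning
    fixed physical hardware up front."""
    return max(1, math.ceil(requests_per_second / CAPACITY_PER_SERVER))

quiet = servers_needed(50)    # a quiet period: one server suffices
spike = servers_needed(950)   # a traffic spike: scale out to ten servers
```

When the spike passes, the extra capacity is released and the client stops paying for it, which is exactly the flexibility that fixed hardware cannot offer.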

Managed Hosting:
This type of hosting package is ideal for a client whose website requires more complex configuration or needs to adapt to changes in the client’s requirements, e.g., business domains with changing propositions and marketing strategies.

Managed services can incorporate any of the other hosting packages but are most often associated with dedicated hosting plans. Here the provider will offer the management and maintenance of the website’s server configuration using a level of knowledge that the client is not likely to have themselves to ensure that performance, availability and security are maintained to a high level.

Importantly, providers can ensure that the server set up is fully responsive to changes in the website’s proposition or usage, such as the introduction of an ecommerce element or spikes in the level of traffic hitting the site.

Whatever level of online presence you are looking for, with a basic understanding of the different options on offer you will be well placed to pick a hosting solution to suit your needs; whether you want an affordable mass-market cloud solution for a personal website or a robust, fully managed dedicated server solution for a critical, high-visibility business domain.