Network Communications News (NCN) November 2016 | Page 29
cloud computing & virtualisation
SPECIAL FEATURE
Watch this space
Carrie Higbie of Siemon examines the role of cloud data centres and how
a well thought out design can ensure maximum physical layer uptime.
Cloud solutions are growing at a
tremendous pace and we are
now seeing immense computing
power available to all. According
to the results of 451 Research’s
Voice of the Enterprise survey,
the majority of organisations
questioned are expecting to increase
their storage spending over the next
year. Notably, the research revealed that
the proportion of spending on public
cloud storage services is set to at least
double over the next two years.
In contrast to traditional colocation
data centres, cloud-based variants put
the facility owner in the driving seat in
terms of the network infrastructure and
how it is provisioned. One of the key
differentiators between
the two types of
facility is
the speed at which this type of activity
can be carried out, as cloud variants
tend to be less siloed, with no distinct
server, storage and network teams,
something that engenders a more
collaborative approach.
Similarly, there is no 'one size fits
all' solution to configuring a cloud data
centre; the right configuration depends on
whether it will be a public, private or hybrid facility.
As the trend away from purely in-house
data centres continues, a hybrid model is
increasingly common: steady-state workloads
run on an organisation's own servers in a
data centre, while demand spikes are
offloaded to public cloud providers.
When selecting a cloud provider,
consideration has to be given to
security. If an organisation’s critical
data is on its own servers, it is under
the company’s own control. However,
once it enters the public cloud, it is not.
Although many organisations are happy
to put forms and other non-sensitive
information in the public cloud,
when it comes to sensitive data
such as employee details, it's very
much a case of 'buyer beware'.
The way in which resilience is
being addressed within the cloud
data centre sector is also undergoing
change. The default option used to be
N+1 redundancy – in other words, the
duplication of critical components or
functions of a system with the intention of
increasing the reliability of the system,
usually in the form of a backup or fail-safe.
However, this approach incurs
costs and forward thinking data centre
managers are now more circumspect
about employing it. They perform risk
profiles on what is being supported by any
given piece of hardware, before deciding
whether N+1 redundancy is needed.
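The reliability gain that N+1 duplication buys can be illustrated with a simple availability calculation. The sketch below is not from the article; it assumes identical, independently failing components, and the 99 per cent figure is an invented example.

```python
# Illustrative sketch: why duplicating a critical component raises
# availability. Assumes identical components that fail independently;
# the 99% figure is an invented example, not a quoted specification.

def parallel_availability(unit_availability: float, units: int) -> float:
    """The system is up if at least one of `units` identical,
    independent components is up: 1 - (1 - A)**units."""
    return 1 - (1 - unit_availability) ** units

# A single component at 99% availability:
single = parallel_availability(0.99, 1)    # 0.99
# The same component with one redundant spare (the simplest N+1 case):
duplicated = parallel_availability(0.99, 2)  # 0.9999
```

The jump from two-nines to four-nines is why duplication is attractive, and the cost of the extra unit is why a risk profile of the supported load should come first.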
The current trend is undoubtedly
moving towards server virtualisation.
This uses a software application to divide
one physical server into multiple isolated
virtual environments, consolidating workloads
to transform a data centre into a flexible
cloud infrastructure. The benefits of this
approach include reduced heat, lower
hardware costs, faster redeployment and
backups, more accurate testing, no vendor
lock-in, improved disaster recovery and
improved environmental friendliness.
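The consolidation idea behind those benefits can be sketched as a packing problem: place many lightly loaded virtual servers onto as few physical hosts as possible. The following is a hypothetical illustration using a simple first-fit heuristic; the load figures are invented, not drawn from the article.

```python
# Hypothetical sketch of server consolidation via virtualisation:
# pack VM CPU demands onto as few physical hosts as possible.
# First-fit heuristic; all load figures are invented examples.

def first_fit(vm_loads, host_capacity):
    """Assign each VM load to the first host with room for it;
    open a new host when none fits. Returns one list per host."""
    hosts = []
    for load in vm_loads:
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])
    return hosts

# Ten lightly loaded servers (CPU demand as a percentage of one host):
loads = [20, 30, 10, 40, 25, 15, 30, 20, 10, 35]
hosts = first_fit(loads, host_capacity=100)
print(len(hosts))  # the ten workloads fit on 3 physical hosts
```

Real hypervisor placement weighs memory, I/O and failover headroom as well as CPU, but the core economics of "fewer boxes, less heat, lower hardware cost" follow from exactly this kind of packing.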
Cause and effect
Detailed and careful
planning is absolutely
the key to an efficient and
within-budget end result.
Regardless of the type of resilience
employed, the use of service level
agreements (SLAs) within the public
cloud sector means that it must be
effective. SLAs are often used for data
and network services such as hosting,
servers and leased lines, and common
factors include percentage of network
uptime, power uptime and the number
of scheduled maintenance windows.
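Uptime percentages in an SLA translate directly into a downtime budget, which is worth working out before signing. A minimal sketch, with the 99.9 and 99.99 per cent figures as invented examples rather than quoted SLA terms:

```python
# Illustrative sketch: convert an SLA uptime percentage into the
# downtime it actually permits. The percentages are invented examples.

def allowed_downtime_minutes(uptime_pct: float,
                             period_hours: float = 24 * 365) -> float:
    """Maximum downtime, in minutes, an SLA permits over a period
    (default: one 365-day year)."""
    return period_hours * 60 * (1 - uptime_pct / 100)

# A 'three nines' (99.9%) SLA still allows roughly 8.8 hours of
# downtime per year; 'four nines' (99.99%) allows under an hour:
three_nines = allowed_downtime_minutes(99.9)    # ~525.6 minutes
four_nines = allowed_downtime_minutes(99.99)    # ~52.6 minutes
```

Note that scheduled maintenance windows are often excluded from the uptime figure, which is why the article suggests checking how many are allowed.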
Due diligence is vital and end
users should carry out a thorough
technical check on the specification
of the facility. One piece of advice is
to look at the server rooms and see if
they are highly populated – if they are,
ask how much of the available power
has been consumed and how much is
still available. Furthermore, ask for a
copy of the planned preventative
maintenance (PPM) schedule and ask to see
maintenance records or reports over