CSA UK Chapter Blog

Guest Post - Resilience in the cloud

posted 7 May 2019, 09:23 by Lee Newcombe   [ updated 7 May 2019, 10:22 ]

Here's the latest guest blog post - another one courtesy of Leron Zinatullin (@le_rond), this time on considerations when it comes to operating resiliently if using cloud services.  Don't forget, feel free to contact us if you feel that you have a burning issue that you'd like to get off your chest via a guest blog post. Practical lessons learned that you'd like to share would be most gratefully received by ourselves (as a Chapter) and also by our readers.  And now, over to Leron...

Resilience in the Cloud

Modern digital technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way.

Historically, organisations would invest in their own IT infrastructure to support their business objectives and the IT department's role would be focused on keeping the ‘lights on’.

To minimise the chance of failure of the equipment, engineers traditionally introduced an element of redundancy in the architecture. That redundancy could manifest itself on many levels. For example, it could be a redundant datacentre, which is kept as a ‘hot’ or ‘warm’ site with a complete set of hardware and software ready to take the workload in case of the failure of a primary datacentre. Components of the datacentre, like power and cooling, can also be redundant to increase the resiliency.

On a lesser scale, within a single datacentre, networking infrastructure elements can be redundant. It is not uncommon to procure two firewalls instead of one, configuring them to balance the load or keeping the second purely as a backup. Power and utility companies still stock spares of critical industrial control equipment so that they can react quickly to a failed component.

The majority of effort, however, went into protecting data storage. Magnetic disks were assembled into RAID arrays to reduce the chance of data loss on failure, while less time-sensitive data was backed up to magnetic tape and stored in separate physical locations.

Depending on specific business objectives or compliance requirements, organisations had to invest heavily in these architectures. One-off investments were, however, only one side of the story. On-going maintenance, regular tests and periodic upgrades were also required to keep these components operational. Labour, electricity, insurance and other costs added to the final bill. Moreover, if a company operated in a regulated space, for example processing payments and cardholder data, then external audits, certification and attestation were also required.

With the advent of cloud computing, companies were able to abstract away a lot of this complexity and let someone else handle the building and operation of datacentres and dealing with compliance issues relating to physical security.

The need for business resilience, however, did not go away.

Cloud providers can offer resilience options that far exceed (at comparable cost) those of traditional infrastructure, but only if configured appropriately.

One example of this is the use of 'zones' of availability, where your resources can be deployed across physically separate datacentres. In this scenario, your service can be balanced across these availability zones and can remain running even if one of the zones goes down. The capital investment required to achieve equivalent functionality is much greater if you were to build your own infrastructure: in essence, you would have to build two or more datacentres. You had better have a solid business case for that.
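As a sketch of the idea (illustrative Python, not a real provider API), balancing a service across zones means no single zone holds all of its instances:

```python
# Illustrative sketch: round-robin placement of service instances across
# availability zones, so the loss of any one zone leaves the service
# running in the others. Names are hypothetical.
def spread_across_zones(instances, zones):
    """Assign each instance to a zone in round-robin order."""
    if not zones:
        raise ValueError("at least one availability zone is required")
    placement = {zone: [] for zone in zones}
    for index, instance in enumerate(instances):
        placement[zones[index % len(zones)]].append(instance)
    return placement

# Five web servers spread across three zones: no zone holds them all.
layout = spread_across_zones(
    ["web-1", "web-2", "web-3", "web-4", "web-5"],
    ["zone-a", "zone-b", "zone-c"],
)
```

The same principle applies whether the "instances" are virtual machines, containers or database replicas: the placement decision is yours to make, not the provider's default.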

Additional resiliency in the cloud, however, is only achieved if you architect your solutions well: running your service in a single zone or, worse still, on a single virtual server can prove less resilient than running it on a physical machine.

It is important to keep this in mind when deciding to move to the cloud from traditional infrastructure. Simply lifting and shifting your applications to the cloud may, in fact, reduce resiliency: these applications are unlikely to have been developed to work in the cloud and to take advantage of these additional resiliency options. I therefore advise re-architecting over such a migration.

Cloud Service Provider SLAs should also be considered. Compensation might be offered for failure to meet them, but it’s your job to check how they compare to the “5 nines” of availability promised by a traditional datacentre – alongside the financial difference between service credits as recompense and business losses from lack of availability.
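The arithmetic behind that comparison is worth doing explicitly; a small illustrative sketch (the credit cap and loss-per-minute figures are made-up placeholders, not real SLA terms):

```python
# Back-of-the-envelope availability maths: how much downtime a given
# "number of nines" permits per year, and how far a capped service
# credit goes against estimated business losses.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960

def allowed_downtime_minutes(availability_pct):
    """Downtime per year permitted by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

def credit_shortfall(outage_minutes, loss_per_minute, credit_cap):
    """Business loss left uncovered once the service credit is applied."""
    return max(0.0, outage_minutes * loss_per_minute - credit_cap)
```

"Five nines" allows roughly five minutes of downtime a year, while "three nines" allows nearly nine hours; service credits rarely close the gap between the two.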

You should also be aware of the many differences between cloud service models.

When procuring a SaaS, for example, your ability to manage resilience is significantly reduced. In this case you are relying completely on your provider to keep the service up and running, leaving you exposed if the provider suffers an outage. In this scenario, archiving and regular data extraction might be your only options beyond reviewing the SLAs and accepting the residual risk. Even with the data, however, your options are limited unless you have a second application on hand to process that data, which may also require data transformation. Study historical performance and pick your SaaS provider carefully.

IaaS gives you more options to design an architecture for your application, but with this great freedom comes great responsibility. The provider is responsible for fewer layers of the overall stack when it comes to IaaS, so you must design and maintain a lot of it yourself. When doing so, assume failure rather than thinking of it as a (remote) possibility.  Availability zones are helpful, but not always sufficient.  What scenarios require consideration of the use of a separate geographical region? Do any scenarios or requirements justify a need for a second cloud services provider? The European Banking Authority recommendations on Exit and Continuity can be an interesting example to look at from a testing and deliverability perspective.
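The "assume failure" mindset can be as simple as always having a prioritised fallback; a hedged sketch (region names and the health check are hypothetical):

```python
# "Assume failure" sketch: choose the highest-priority region that passes
# a health check, falling back down the list rather than treating failure
# as a remote possibility.
def pick_region(regions_in_priority_order, is_healthy):
    """Return the first healthy region; raise if none remain."""
    for region in regions_in_priority_order:
        if is_healthy(region):
            return region
    raise RuntimeError("no healthy region available - invoke the DR plan")

# If the primary fails its health check, traffic shifts to the secondary.
active = pick_region(["eu-west-1", "eu-central-1"],
                     lambda region: region == "eu-central-1")
```

The same pattern extends to a second provider as the final entry in the priority list, if your requirements justify the cost of maintaining it.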

Finally, PaaS, as always, sits somewhere between SaaS and IaaS. I find that a lot of the time it depends on the particular platform: some will give you resiliency options to play with, while others retain full control. Be mindful that the SaaS characteristics above also affect PaaS from a redundancy perspective; for example, if you’re using a proprietary PaaS then you can’t just lift and shift your data and code.

Above all, when designing for resiliency, take a risk-based approach. Not all your assets have the same criticality. Understand the priorities, know your RPO and RTO. Remember that SaaS can be built on top of AWS or Azure, exposing you to supply chain risks.
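A risk-based approach might start with something as simple as a tier table mapping asset criticality to recovery objectives; the figures below are illustrative placeholders, not recommendations:

```python
# Illustrative criticality tiers: not every asset needs the same RPO
# (how much data you can afford to lose) and RTO (how long you can
# afford to be down). Numbers here are placeholders for discussion
# with business stakeholders.
RECOVERY_TIERS = {
    "critical":  {"rpo_minutes": 15,   "rto_minutes": 60},
    "important": {"rpo_minutes": 240,  "rto_minutes": 480},
    "standard":  {"rpo_minutes": 1440, "rto_minutes": 2880},
}

def recovery_targets(criticality):
    """Look up the RPO/RTO targets for an asset's criticality tier."""
    return RECOVERY_TIERS[criticality]
```

Agreeing on a table like this before an incident forces the prioritisation conversation that is otherwise held, badly, mid-outage.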

Even when assuming the worst, you may not have to keep every single service running should the worst actually happen. For one thing, it's too expensive - just ask your business stakeholders. The very worst time to be defining your approach to resilience is in the middle of an incident, closely followed by shortly after an incident.  As with other elements of security in the cloud, resilience should “shift left” and be addressed as early in the delivery cycle as possible.  As the Scout movement is fond of saying – “be prepared”.

About the author

Leron Zinatullin is an experienced risk consultant, specialising in cyber-security strategy, management and delivery. He has led large-scale, global, high-value security transformation projects with a view to improving cost performance and supporting business strategy. He has extensive knowledge and practical experience in solving information security, privacy and architectural issues across multiple industry sectors. Leron is the author of The Psychology of Information Security.

Twitter: @le_rond

www.zinatullin.com

CSA UK Research Topics

posted 16 Nov 2018, 07:22 by Lee Newcombe   [ updated 6 Dec 2018, 09:59 ]

Lewis Troke was elected into the position of Director of Research for the UK Chapter at our recent Annual General Meeting.  We are using this as a great opportunity to start afresh with our approach to research, now under the guidance of Lewis.  What security guidance are UK organisations adopting cloud computing looking for?  What can we, as your local chapter of the Cloud Security Alliance, do to meet those needs?  Please take some time out of your busy day to complete the questionnaire over at SurveyMonkey and let us know your thoughts. We're keen to provide the guidance that UK organisations need, so please don't be shy in letting us know where those needs lie.  We have our own views of course, but we would much rather have the members drive our priorities.  
Many thanks in advance!


Cloud Security Governance Approaches

posted 5 Oct 2018, 05:14 by Lee Newcombe   [ updated 5 Oct 2018, 06:00 ]

Here's the latest in our series of guest blog posts on the Chapter blog.  This one comes from Leron Zinatullin (@le_rond) and describes the pros and cons of different governance approaches in relation to securing cloud implementations.  Don't forget, feel free to contact us if you feel that you have a burning issue that you'd like to get off your chest via a guest blog post. We can help.

Governance Models - Cloud

Your company has decided to adopt Cloud. Or maybe it was among the ones that relied on virtualised environments before it was even a thing? In either case, cloud security has to be managed. How do you go about that?


Before checking out vendor marketing materials in search of the perfect technology solution, let’s step back and think of it from a governance perspective. In an enterprise like yours, there are a number of business functions and departments with various levels of autonomy. Do you trust them to manage business process-specific risk, or choose to relieve them of this burden by setting security control objectives and standards centrally? Or maybe something in-between?


Centralised model

Managing security centrally allows you to uniformly project your security strategy and guiding policy across all departments. This is especially useful when aiming to achieve alignment across business functions. It helps when your customers, products or services are similar across the company, but even if not, centralised governance and clear accountability may reduce duplication of work through streamlining the processes and cost-effective use of people and technology (if organised in a central pool).


If one of the departments is struggling financially or is less profitable, the centralised approach ensures that overall risk is still managed appropriately and security is not neglected.  This point is especially important when considering a security incident (e.g. due to misconfigured access permissions) that may affect the whole company.

Responding to incidents in general may be simplified not only from the reporting perspective, but also by making sure due process is followed with appropriate oversight.

There are, of course, some drawbacks. In the effort to come up with a uniform policy, you may end up with one that loses its appeal: it is perceived as too high-level and out of touch with real business unit needs. Buy-in from business stakeholders may therefore be challenging to achieve.


Let’s explore the alternative; the decentralised model.


Decentralised model

This approach is best applied when your company’s departments have different customers, varied needs and business models. This situation naturally calls for more granular security requirements preferably set at the business unit level. 


In this scenario, every department is empowered to develop their own set of policies and controls. These policies should be aligned with the specific business need relevant to that team. This allows for local adjustments and increased levels of autonomy. For example, upstream and downstream operations of an oil company have vastly different needs due to the nature of activities they are involved in. Drilling and extracting raw materials from the ground is not the same as operating a petrol station, which can feel more like a retail business rather than one dominated by industrial control systems.


Another example might be a company that grew through a series of mergers and acquisitions where acquired companies retained a level of individuality and operate as an enterprise under the umbrella of a parent corporation.


With this degree of decentralisation, resource allocation is no longer managed centrally, which, combined with increased buy-in, allows for greater ownership of the security programme.


This model naturally has limitations, the mirror image of the benefits of the centralised approach: potential duplication of effort, an inconsistent policy framework, challenges when responding to an enterprise-wide incident, etc. But is there a way to combine the best of both worlds? Let’s explore what a hybrid model might look like.


Hybrid model

The middle ground can be achieved by establishing a governance body that sets goals and objectives for the company overall, while allowing departments to choose how to achieve those targets. What are examples of such centrally defined security outcomes? Maintaining compliance with relevant laws and regulations is an obvious one, but there is more subtlety to it than that.


The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model in turn decides how their security efforts contribute to the overall risk reduction and better security posture.  This means setting a baseline of security controls and communicating it to all business units and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption.


When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by a gap assessment to understand potential inconsistencies with the baseline framework.
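Such a gap assessment can be sketched as a simple comparison of control sets (the control identifiers here are hypothetical):

```python
# Gap assessment sketch: compare a business unit's implemented controls
# against the centrally defined baseline framework.
def control_gaps(baseline, implemented):
    """Baseline controls the unit has not yet implemented."""
    return sorted(set(baseline) - set(implemented))

def coverage(baseline, implemented):
    """Fraction of the baseline the unit currently covers."""
    baseline = set(baseline)
    return len(baseline & set(implemented)) / len(baseline)
```

Reporting gaps and coverage consistently across units is exactly the kind of metric a central function can aggregate without dictating how each unit closes its gaps.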


In the context of the Cloud, decentralised and hybrid models might allow different business units to choose different cloud providers based on individual needs and cost-benefit analysis.  They can go further and focus on different solution types such as SaaS over IaaS.


As mentioned above, business units are free to decide on implementation methods of security controls providing they align with the overall policy. Compliance monitoring responsibilities, however, are best shared. Business units can manage the implemented controls but link in with the central function for reporting to agree consistent metrics and remove potential bias. This approach is similar to the Three Lines of Defence employed in many organisations to effectively manage risk. This model suggests that departments themselves own and manage risk in the first instance with security and audit and assurance functions forming second and third lines of defence respectively.


What next?


We’ve looked at three different governance models and discussed their pros and cons in relation to Cloud. Depending on the organisation, the choice can be fairly obvious; it might emerge naturally from the way the company runs its operations. All you need to do is fit in with the organisational culture and adapt your approach to cloud governance accordingly.


The point of this article, however, is to encourage you to consider security in the business context. Don’t just select a governance model based on what “sounds good” or what you’ve done in the past. Instead, analyse the company, talk to people, see what works and be ready to adjust the course of action.


If the governance structure chosen is wrong or, worse still, undefined, this can stifle the business instead of enabling it. And believe me, that’s the last thing you want to do.

Be prepared to listen: the decision to choose one of the above models doesn’t have to be final. It can be adjusted as part of the continuous improvement and feedback cycle. It always, however, has to be aligned with business needs.



Summary


Centralised model

A single function responsible for all aspects of Cloud security: people, process, technology, governance, operations, etc.

Advantages:
  • Central insight and visibility across the entire cloud security initiative
  • High degree of consistency in process execution
  • More streamlined with a single body for accountability
  • Quick results due to reduced dependencies on other teams

Disadvantages:
  • Requires dedicated and additional financial support from leadership
  • Makes customisation more time consuming
  • Getting buy-in from all departments is problematic
  • Might be perceived as not relevant and slow in adoption

Decentralised model

Strategic direction is set centrally, while all other capabilities are left up to existing teams to define.

Advantages:
  • High level of independence amongst departments for decision-making and implementation
  • Easier to obtain stakeholder buy-in
  • Less impact on existing organisation structures and teams
  • Increased adoption due to incremental change

Disadvantages:
  • Less control to enforce Cloud security requirements
  • Potential duplicate solutions, higher cost, and less effective control operations
  • Delayed results due to conflicting priorities
  • Potential for slower, less coordinated development of required capabilities
  • Lack of insight across non-integrated cloud infrastructure and services

Hybrid model

Strategy, policy, governance and vendors are managed by the Cloud security team; other capabilities remain outside the Cloud security initiative.

Advantages:
  • High degree of alignment to existing functions
  • High-priority Cloud security capabilities addressed first
  • Maintains centralised management for core Cloud security requirements
  • Allows decentralised decision-making and flexibility for some capabilities

Disadvantages:
  • Gives up some control of Cloud security capability implementation and operations to existing functions
  • Some organisation change is still required (impacting existing functions)

About the author

Leron Zinatullin is an experienced risk consultant, specialising in cyber-security strategy, management and delivery. He has led large-scale, global, high-value security transformation projects with a view to improving cost performance and supporting business strategy. He has extensive knowledge and practical experience in solving information security, privacy and architectural issues across multiple industry sectors. Leron is the author of The Psychology of Information Security.

Twitter: @le_rond

www.zinatullin.com

 

CSA UK AGM Updates

posted 24 Sep 2018, 03:54 by Lee Newcombe   [ updated 10 Oct 2018, 01:26 ]

Our Chapter Annual General Meeting (AGM) was held last week, kindly hosted by Trend Micro. With respect to Chapter business, the following outcomes were noted:
  • Lee Newcombe - elected to Vice-Chair
  • Lewis Troke - elected to Director of Research
  • Paul Simmonds - elected as General Board Member.
Each appointment lasts for two years. It was encouraging how many of the attendees expressed an interest in two of our other positions - Director of Events and Director of Communications - and so we hope to be able to fill those positions shortly.  The more observant amongst you will note that we currently have only a Vice-Chair as no-one was nominated for the role of Chair - Lee Newcombe (your author :)) will deputise whilst we await a candidate willing to put themselves up for election as Chair.

It was a positive interactive session with some great content being presented by knowledgeable practitioners. Many thanks to Dave Walker of AWS for a fantastic technical session on using Lambda to automate DevSecOps, Craig Savage from VMware for walking through the cultural and governance issues associated with adoption of hybrid cloud, Francesco Cipollone for sharing practical lessons learned from a number of public cloud deployments and to Bharat Mistry for showing how API-enabled security tooling is the way forward.  The slides from the day will be added to this post as they become available.

As ever, we are here to support the UK cloud community.  Feel free to contact us with ideas for what you want from your chapter in terms of outputs and activities!

Guest Post - Francesco Cipollone

posted 14 Dec 2017, 01:55 by Lee Newcombe   [ updated 14 Dec 2017, 02:02 ]

One of the things we're keen to do here is to share lessons learned by those who are actively implementing cloud services.  As such, I'm pleased to offer the opportunity to contribute guest articles sharing cloud security war stories to this blog.  Our first guest author is Francesco Cipollone of NSC42 who has kindly taken the time to write up a number of thoughts relating to identity on the Office 365 platform.  His article can be found below - thanks Francesco!

O365 Identity Article


Let me start by saying that I am by no means a pure authentication expert, nor a Microsoft expert. Like many of you, I'm on the journey to the cloud and learning as I go. Please provide any feedback or contributions so as to make the article as accurate as possible.


Identity and Access management with O365/Azure


A few weeks ago, I had a conversation with a colleague about identities in Office 365, and the discussion led to the various nuances of where those identities are located.

I have to admit, with a bit of shame, that in previous transformation projects I hadn't given this topic much consideration; nonetheless, with GDPR around the corner (May 2018) it is quite important.

I did a bit of research but couldn't find a comprehensive article on the identities – where they are stored, how they are used, etc. – so I decided to put something together myself.

In this small article, I will use the words identity and account interchangeably.


Acronyms used:

We all hate them, but we can’t live without them. For the sake of clarity, here are the terms I’m going to use in the article:

  • AAD – Azure Active Directory

  • AD – Microsoft Active Directory

  • AD Connect – Active Directory Connection services

  • ADFS – Active Directory Federation Services

  • B2B/B2C – Azure Directory Service – Business to Business and Business to Consumer

  • EU – European Union

  • EU GDPR – European Union General Data Protection Regulation (enforced from May 2018)

  • DPA/EU-DPD – Data Protection Act 1998 (following EU Data Protection Directive 1995)

  • GP/GPO – AD Group Policy/Group Policy Object

  • IAM – Identity and Access Management

  • IDaaS – IDentity as a Service

  • IdP – Identity Provider

  • MS – Microsoft

  • MFA – Multi Factor Authentication

  • O365 – Office 365

  • SSO – Single Sign On

  • SME – Small and Medium Enterprise

  • SAR – Subject Access Request (GDPR)

  • WAP – Web Application Proxy (for AD)



Accounts in the Microsoft Cloud world

Readers who are not familiar with identities in Azure/Office 365 should refer to the MS article understanding O365 Identity and the more generic choosing O365 sign-in method (a bit outdated, but still a good overview of the identity models available when using the Microsoft cloud platform).


The Azure and Office365 cloud services rely on a backend version of Azure Active Directory service (commonly referred to as AAD). Using AAD implies the creation of additional accounts inside the Microsoft cloud, however there are different methodologies with different implications. Let's start with the basic types of identities in O365:


  • Federated Identities (AD+AAD+ADFS) - These kinds of identities are effectively located inside the on-premises identity store (e.g. Microsoft Active Directory). This technology enables the synchronization of selected attributes of the on-premises directory object (AD accounts and others) with O365 but authentication decisions are made on-premises with the cloud environment trusting the on-premises environment. This kind of identity strategy is often integrated with some kind of Single Sign On (SSO) technology (like ADFS or another third-party tool).  This approach keeps password hashes on premises, enables the centralized management of identities (as they are effectively in AD), and facilitates the re-use of existing strong authentication methods as well as traditional security controls (e.g. AD GP/GPO password policy).

  • Synchronised Identities (AD+AAD+AD Connect) - The identities (accounts) are separate but synchronised, i.e. copied from on-premises into the cloud. The identity used in Office 365/Azure is stored in AAD; the identity used on-prem resides in the on-prem identity store (usually Active Directory). The identity's password is one of the attributes synchronised with the cloud platform via AD Connect. Cloud users are required to enter their credentials to access cloud services.

  • Isolated Identities (AD & AAD) - In this case there is no link between the identity used on-prem and the identity used in the Microsoft cloud. I've not seen many instances of this approach; it is usually a corner case. Nonetheless, this option does not require an on-premises server and could be ideal for SMEs or start-ups.

  • Special Cases B2B and B2C – The Microsoft AAD service can also provide applications developed in Azure (e.g. using Azure Web Service PaaS) with an underlying identity database containing company identities as well as external customer identities. These special instances of AAD allow the creation/federation of external accounts in AAD without the need to create the account in the underlying AD. This isolates customer accounts in AAD and helps reduce exposure to customer-oriented regulation (like GDPR or the DPA). The main difference is that B2B allows federation of the AAD with external parties, while B2C allows the creation of accounts without federation and with more freedom over the username (for a comparison refer to: B2B compared to B2C). B2C – Business to Consumer – is oriented towards consumer applications, where a user simply wants to create an account with their e-mail address as the username and does not require any federation (for more information refer to: B2C Overview). B2B – Business to Business – as the name implies, is oriented towards business-to-business interaction and allows federation between the AAD and another directory (for more information refer to: B2B Overview). B2B and B2C are outside the scope of this article; I’ve included this overview for completeness.
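As a quick reference, the three main identity models above can be summarised as data; this is a simplification of the descriptions in this article, not an exhaustive comparison:

```python
# Where each identity model keeps the password hash and who makes the
# authentication decision, summarised from the descriptions above.
IDENTITY_MODELS = {
    "federated": {
        "password_hash_location": "on-premises only",
        "authentication_decision": "on-premises IdP (e.g. ADFS)",
    },
    "synchronised": {
        "password_hash_location": "on-premises and AAD (via AD Connect)",
        "authentication_decision": "cloud (AAD)",
    },
    "isolated": {
        "password_hash_location": "separate stores, no link",
        "authentication_decision": "each environment independently",
    },
}
```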


Identity Location:

In each of the cases above, the identity information is stored in different locations. With geographical regulation (e.g. GDPR) the actual location and control of an account is important.


With the use of cloud services, identity information can end up spread across multiple locations (on-premises and Azure AD). For this reason, it is important to choose your preferred identity option in conjunction with a review of the key regulatory factors linked to data protection, including GDPR. One factor to keep in mind is geo-restriction on where identity information is stored and processed, as it may be classed as personal data; an example of a breach could be an identity store for European identities located outside the EEA region (for example, in America). The chosen identity model determines how much of the on-premises identity information is replicated into the cloud (and into which cloud region), and this should inform the wider decision-making process with respect to your identity model.



Figure 1 - Synchronised Identities Architecture

  • In the Synchronised Identity case the controlling account resides in the cloud identity store (AAD in this specific case). The password hash of the two accounts (on-prem and AAD) is synchronised, along with other account attributes, but they remain two separate accounts with shared attributes. The link between the accounts is subject to the settings of AD Connect and AAD; it is possible to refine items like password reset and other similar settings.




Figure 2 - Federation Architecture Sketch


  • In the Federated Identities case the controlling account resides in the controlling identity store (usually Active Directory), referred to as an Identity Provider or IdP in federated scenarios. Once a user authenticates against one of the cloud portals, the authentication request is forwarded to the IdP; hence AAD and the authentication portal act only as a front-end facing the user. This method also facilitates the re-use of on-premises security measures such as strong authentication, password policy driven by AD GPO, and auditing. Moreover, the password hashes and the identities are not stored with the cloud provider – the cloud provider trusts the on-premises IdP.





  • In the Isolated identity case the authentication process for On-Prem and cloud (Azure/Office 365) is completely separate.


Deployment Components:


  • AD - Active Directory

  • ADDC - Active Directory Domain Controllers

  • ADFS - Active Directory Federation Services

  • WAP - Web Application Proxy (optional component for frontend Sign On – for more info refer to Hybrid Identity Requirements)


Decide where you want to deploy your components:

  • Azure/Other cloud provider Deployment

  • On-Prem Deployment


Note: Microsoft recommends deploying ADFS servers as close as possible to the domain controllers.

Note 2: the number of TCP ports that need to be opened between the ADFS servers and the AD controllers is quite substantial. If your architecture patterns and security policy allow, consider deploying AD and ADFS in the same zone (with minimal filtering between the two systems) so as not to punch a lot of holes in your firewalls.


Application Proxy Location:


The communication between user web requests and the backend authentication is normally handled by the WAP, while "internal" requests (coming from trusted networks, if you still rely on that concept) go directly to the ADFS servers.

Below is a deployment example followed by the authentication flow. For a full list of ports and components, refer to Hybrid Identity Requirements.



Figure 3 – Federation Detailed Architecture


Figure 4 - Federation Authentication Flow



Additional Option - Multi Factor Authentication




Figure 5 - MFA Architecture

In addition to the methods described in the earlier section there is an additional security component that could be added to the picture – Multifactor Authentication (MFA).

The idea behind multifactor authentication is to have a physical item required as part of the authentication (for more information on multi factor authentication refer to Multi-factor authentication Wikipedia article).  

The multifactor authentication token comes in different shapes and forms:

  • As an SMS to the selected phone (note they tend to be a bit delayed)

  • As a call to the selected phone

  • As a one-time password/token generator application installed on a device  

Personal Note - I’ve found the token generator application to be the most reliable, as it works even without a phone signal.

Multifactor authentication creates an additional challenge to a potential attacker as it requires additional effort to get hold of the physical device (or the token value for the particular moment) providing the second factor.
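As an illustration of why the token generator applications keep working without any signal: most implement the TOTP algorithm from RFC 6238, which computes an HMAC over a shared secret and the current 30-second time window, entirely on the device. A minimal sketch (not any particular vendor's implementation) looks like this:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because both the server and the app derive the code from the same secret and clock, no network round trip is needed; this is also why a stolen code is only useful for one time window.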

Recent attacks, such as the reported compromise of the Deloitte e-mail service, have shown that single-factor authentication can be “easily” compromised. I’ve deliberately put “easily” in quotation marks because it all comes down to how well an organisation protects privileged identities and the configurations it chooses to deploy. In general, top-level accounts should be used for initial configuration purposes only and then locked away, with day-to-day administration performed using less privileged accounts.

MFA can be deployed for various applications.

Please note that Azure Active Directory MFA (also referred to as full MFA) comes with the Azure Active Directory Premium plans. To identify the various versions of MFA, and select the one most applicable to your specific situation, refer to MFA plans.


Pitfalls

Below is a list of a few key points that I've noted in cloud migration projects and that, hopefully, might help you avoid the same issues:


  • Simplicity vs adoption: Usually synchronisation (AD-Connect) is easier to implement than Federation/SSO (ADFS), but it requires the user to authenticate twice: once on the laptop and again on the Office 365 portals. This usually makes cloud adoption harder for an enterprise due to a sub-optimal user experience.

  • Additional Component: The organisation will need an additional infrastructure component (e.g. AD-Connect, ADFS or equivalent), whichever of the synchronisation or federation methods described above it decides to use.

  • Identity Resilience and Federation/Synchronisation readiness: Both ADFS and AD-Connect come as software components. If not planned carefully, these components might fail (causing an outage) or receive an overwhelming number of unplanned requests (legitimate or DDoS).

  • Identification of Identity Stores: In an enterprise, identity can span several different systems. If identity is not sufficiently consolidated, the process of integrating the cloud Identity and Access Management (IAM) systems with the on-prem environment can turn into a nightmare of delays. It is definitely better to have a single authoritative source of identity to build from.

  • Use of Multi-Factor Authentication (in short, MFA): Depending on the enterprise security policy, strong authentication might be required. If not considered carefully, this might turn into a painful step. Despite the fact that the Microsoft MFA service works quite well in the basic scenario, it is harder to implement in traditional enterprise scenarios.

MFA requires additional application integration that does not always work with legacy software.

One example is the configuration of Office clients (specifically Outlook) in versions prior to 2016. MFA does not play nicely with Office client versions prior to 2016, as they do not support Modern Authentication (the name of the component that links the MFA PaaS service with the Office suite).

Other MFA challenges relate to the end user and changes in the user experience (an additional step is required to access resources). For example, with Bring Your Own Device (BYOD), mail synchronisation usually relies on ActiveSync, and this component tends to conflict with MFA.

  • ADFS only works with AD: if the organisation utilises an identity store other than AD, this might add another layer of complexity due to the need to integrate different technology components, e.g. implementing a specific federation tool such as Ping to act as an intermediary.

  • Adapt to Microsoft changes: AAD is a PaaS service and new features are introduced regularly. Failure to plan for them might result in being forced to adapt later (e.g. the move from the classic portal to the new Azure portal, i.e. ASM vs ARM).

  • AAD is not AD: AAD has a lot of features, and Microsoft is constantly adding new ones, but fundamentally AAD is not a full-blown Active Directory. To summarise the key difference: AD is a directory service (with structures and capabilities like OUs, GPOs, domain join, etc.) while AAD is an identity solution (it stores and authenticates users). A full-blown comparison between the two services is outside the scope of this article (for a quick overview refer to this article), but just to cite a few major points:

    • AAD still lacks a modern and flexible way to manage Group Policies.

    • AAD has a flat OU Structure.

    • AAD lives in the cloud and uses different authentication methods; as such, it does not support protocols such as Kerberos or NTLM.

  • Plan ahead with respect to which security features to use: Azure offers some security features that could be used in conjunction with, or to enhance, the existing security controls applied to IAM systems such as:

    • MFA: authentication of users by multiple methods

    • AAD Identity Protection: helps to identify vulnerable accounts (for more information refer to https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection)

    • OMS/Security Centre: allows you to monitor and log incidents as well as identify potential tampering (the Security centre correlation feature is a premium service)

    • Windows Hello

    • Windows OS Base: please note that certain features (like Windows Hello) only work from Windows 10 onward

  • Using Cloud extends the IAM perimeter: with the introduction of O365 and the AAD component the IAM perimeter is extended to the cloud and is partially outside of the company's control as AAD is a PaaS service.



Take Aways

A good planning phase is always required before moving into any kind of project (IT or other discipline).

Together with a plan it is good to have a short and long-term strategy. Some elements to consider are:

  • Understand the business context and how to align the identity strategy with the overall business strategy

  • Understand the key requirements from internal policy and align the AD/AAD services to the security strategy

  • Identify the geographical and regulatory restrictions that apply to your business

  • How will GDPR impact the AAD (not a comprehensive list):

      • Geographical constraint

      • Response to SAR (subject access request)

  • How to track and tackle the sprawl of identity repositories

  • What flavour of AD/AAD will be used in 1/2/5 years

  • What operating system base is going to be used in 1/2/5 years

  • What federation/SSO system is going to be used in 1/2/5 years?

  • Will you want to re-use your identities on other, non-Microsoft, cloud platforms?



Cloud integration and portability

posted 29 Aug 2017, 08:55 by Lee Newcombe

Integration and portability – either working across multiple cloud providers or else shifting workloads from one provider to another – remain amongst the trickier areas of cloud strategy and security.  Different business strategies and priorities will drive different approaches.  For example, if you take the view that service resilience is your primary concern then the idea of placing all your eggs in one basket, even one as well made as AWS or Azure, may be anathema.  This can then drive architectures that must either split components across multiple cloud providers so as to reduce impact of compromise (including outages) or to use a secondary cloud provider to provide contingency in the event of a failure of your primary supplier.  If you’re going to support portability (the ability to shift workloads between cloud providers) then you need to avoid lock-in which can drive you towards containerisation such that you can take your encapsulated infrastructure from one provider to another – subject to tooling and skills.   This does mean that you end up abstracting away from provider-specific APIs and capabilities where you can (e.g. containerisation, deployment of “cloud management platforms”), which is counter to the idea of going truly cloud-native in this author’s opinion.

What about integration?  This is a more interesting proposition.   Why not use Azure AD (or another cloud-based identity provider) to manage the identities and entitlements used across your cloud supply chain?  Why shouldn’t you send audit logs from a variety of cloud providers to AWS S3 buckets and then Glacier for long-term storage (or to a cloud-based SIEM service for analysis)?  Why not go for a distributed microservices architecture with consumable services hosted across cloud providers? You do introduce additional complexity, however. For example, how will you:

  • secure network connectivity?

  • encrypt data in transit and at rest and perform the necessary key management?

  • authenticate and authorise interactions?

  • consolidate security monitoring and incident response?

  • consolidate billing to maintain a view on costs?

  • maintain an understanding of where data is flowing and why?

  • track operational responsibility for service delivery?

  • secure, monitor and track usage of the exposed APIs enabling integration?

  • secure your automated deployment pipeline across the diverse supply chain?

  • prevent latency-sensitive services from becoming reliant upon multiple traversals over the internet?

  • set and manage service levels for in-house applications built on a multi-cloud platform?

But if you either make use of provider APIs or front your own services with your own APIs then this kind of integration of “best of breed” services can support a move towards truly cloud-native approaches without being utterly reliant on a single provider. That said, failure of a cloud provider hosting a critical component could still take down a multi-cloud hosted microservices-based application if it’s not built with resilience in mind.  It’s also worth noting that adoption of PaaS or FaaS services will abstract away some of these issues for you!

The simplest approach however may well be to pick a cloud provider you are happy with and go all-in (albeit with minimal but necessary integration such as federated identity).   Complexity is, and will remain, the enemy of security.  If you are prepared to accept the low risk of multi-region cloud provider outage then perhaps you would be best to avoid the complexity of full-on integration or portability and concentrate instead on account-based segmentation of services within a single provider.  

In summary, there is no one-size fits all approach.  Portability may overly constrain your virtualised infrastructure, negating many of the perceived benefits of cloud, whilst some level of integration is likely to be necessary (e.g. federated identity management).  The big question, as is often the case with cloud, is one of trust.  Do you trust your cloud providers to be there when you need them or do you need to engineer in contingency cloud-provider arrangements?  The choice is yours.

Small Business Guidance

posted 2 Aug 2017, 06:34 by Lee Newcombe   [ updated 2 Aug 2017, 06:39 ]

One of the projects we currently have underway in the UK chapter relates to the provision of guidance tailored towards small businesses. Cloud offers start-ups and small businesses the IT capabilities they need to compete with more established organisations but it is unlikely that such firms will have dedicated security teams tasked to secure such capabilities.  This project aims to provide pragmatic insight to help those asked to secure cloud services in small businesses to close some of that gap. 

Here's an update from Andy Camp who is running with this project...


Update

Version 4.0 of the CSA Security Guidance for Critical Areas of Focus in Cloud Computing is a 152-page document full of extremely useful information. This document is, however, difficult to interpret and onerous to implement for the majority of Small and Medium Enterprises (SMEs), who constitute over 99% of businesses in the UK and whose turnover (2014 figures) represents 47% of private sector turnover in the UK.

The document is difficult to interpret either because SMEs do not directly employ specialist security resources or because, even if they do, there are other, more pressing, operational security issues to be addressed. This is further complicated by the fact that cloud suppliers are third parties, so procurement and legal expertise may also be required to navigate the third-party security assurance activities conducted alongside procurement and legal issue resolution.

The UK government has previously stated its intention to use more SMEs, with the caveat that they must be appropriately secure, demonstrated through participation in the Cyber Essentials scheme. The requirement to obtain Cyber Essentials (if selling to the UK public sector), together with changes to legislation (e.g. GDPR) and regulatory requirements, means that SMEs cannot afford the luxury of simply assuming the security of their data and services in the cloud.

This SME guidance is essentially a business-based third-party security assurance approach for SMEs to use to assess prospective cloud suppliers. It is based upon a Business Impact Assessment, a simple method of supplier assurance that leads to a risk statement and options to manage any identified risks. The criticality of the cloud supply chain to the SME can then be used to prioritise implementation of the risk management activity.

SMEs largely focus on their core revenue-earning activities. The mantra for them, therefore, is that any work on non-core activity must meet three quality criteria: it must be Appropriate, Affordable and Achievable.

The final cloud security report for SMEs will include guidance on:

  • Context – assessing what legislation, regulation, contracts and business strategy affects Cloud Service adoption

  • Business Impact – if your cloud supplier fails, what impact is your business likely to suffer?

  • Cloud Supplier Assurance – assessing each supplier to see if the strength of their controls meets the needs of the organisation.

  • Risk Assessment - using both the Business Impact and Supplier Assurance Activities to see if the risk of using a particular cloud supplier is acceptable to your business.

  • Working out your options – the steps you can take to mitigate the risk of cloud to your business

  • Implementing your option(s) – taking account of the resources you have, suggested approaches to prioritising and addressing the risk mitigation options, from the most to the least critical cloud service you use.
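To make the Business Impact and Supplier Assurance steps above concrete, they could feed a simple scoring model along the following lines. This is purely an illustration, not part of the CSA guidance itself; the scales and thresholds are hypothetical and each SME would calibrate its own:

```python
# Toy business-impact x supplier-assurance scoring. All scales and thresholds
# below are illustrative assumptions, not prescribed by the CSA guidance.
IMPACT = {"low": 1, "medium": 2, "high": 3}          # impact if the supplier fails
ASSURANCE = {"strong": 1, "partial": 2, "weak": 3}   # weakness of supplier controls

def risk_score(impact, assurance):
    """Combine business impact and control weakness into a simple 1-9 score."""
    return IMPACT[impact] * ASSURANCE[assurance]

def risk_rating(score):
    """Map a score onto an illustrative Appropriate/Affordable/Achievable decision."""
    if score >= 6:
        return "unacceptable - mitigate before use"
    if score >= 3:
        return "tolerable - mitigate on a prioritised basis"
    return "acceptable"
```

For example, a high-impact cloud service backed by a supplier with weak assurance scores 9 and would be flagged for mitigation before adoption, while a low-impact service with strong assurance scores 1 and can simply be accepted.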

Cloud Integration Project Update

posted 1 Nov 2016, 07:07 by Lee Newcombe

As I’ve blogged previously, we have a number of research projects currently underway under the auspices of the UK Chapter.    One of those projects relates to cloud integration.    The cloud integration project is being led by John Arnold and John has kindly produced the below text as an example of where that project is heading.

 Issues involved in cloud integration

 1.      Identity.  Users (both privileged and end users) need to access cloud services as easily as on-premises services.  Ideally, we need to achieve the following:

-        Single administration – users don’t need to be administered separately for each cloud service.  Privileges in cloud-based services can be granted by mapping to a common identity store.

-        Single credential – users don’t need to manage their credentials separately for each cloud service

-        Single session – users don’t need to log on separately for each cloud service.

2.      Security monitoring.  The enterprise SOC needs to receive feeds from cloud services just as it does from on-premises services.  The enterprise will need to be able to adjust and define the feeds it gets, where they are sent, and how any transfer is scheduled and protected.

3.      Infrastructure and application monitoring and control.  The enterprise needs to be able to manage its cloud-based applications and infrastructure in the same way as its on-premises resources.

4.      Provisioning.  Spinning up and down instances, allocating VM images, containers and filesystems, needs to work seamlessly across cloud-based and on-premises services.

5.      Inter-application communications.  On premises and cloud based services need to be able to communicate seamlessly and securely.

6.      Security policies.  Where security policies can be virtualised, for instance using the XACML standard, these should be uniform across cloud and on-premises services.
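To illustrate the "single credential/single session" goals in point 1: federation protocols work by having an identity provider issue a signed token that each cloud service can verify without ever holding the user's credentials. The sketch below signs and verifies a JWT-style token with a symmetric key purely to stay dependency-free; real federation deployments (SAML, WS-Federation, OpenID Connect) use asymmetric signatures and hardened standard libraries:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data):
    """Base64url-encode without padding, as used in compact JWS serialisation."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims, key):
    """Issue a compact JWT-style token signed with HMAC-SHA256 (HS256)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token, key):
    """Check the signature and return the claims, or raise on tampering."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Because every relying service only needs the verification key (or, with asymmetric signatures, just the public key), the user authenticates once to the identity provider and presents the token everywhere else, which is what delivers single credential and single session.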

If you have expertise in any of the above areas or have an opinion that you’d like to contribute to the project then please don’t hesitate to get in touch with me at my cloudsecurityalliance.org.uk email address (lee.newcombe) and I’d be delighted to put you in touch with John and his workstream colleagues.

Our aim for all of our research projects is to provide pragmatic, proven guidance tailored for UK cloud consumers: the more input we get from experienced cloud adopters, the more effective our guidance will be.  

UK Chapter Research Update

posted 23 Sep 2016, 00:45 by Lee Newcombe   [ updated 3 Oct 2016, 03:11 ]

Cloud is no longer new.   It’s not been new for a few years now.  I spent the first few years of my time working in cloud security saying “next year will be the year of cloud”.   I’m not saying that any more.   If my client interactions are anywhere near indicative of the wider environment (and working for one of the Big 4, I’d like to think that they are!) then 2015 was the year of cloud.    I saw many clients, including previously reluctant multi-nationals and financial services organisations, moving live workloads to the cloud.   They’d been dabbling with test and development for a while but last year seemed to represent a step change in acceptance of the use of cloud for the hosting of live services.    So what does this have to do with research?   Well, lots of the guidance I see out there still seems to be fairly high level and theoretical rather than pragmatic and based on the harsh lessons that can come from building cloud services for real.   My aim as the current Director of Research for the UK Chapter is to shepherd the development of a set of documents that contain proven, useable guidance to help UK businesses adopt cloud safely.   Think genuine lessons learned and achievable solutions rather than exhortations to negotiate a Right of Audit to the data-centres of Amazon, Microsoft and Google…

We now have four different projects up and running.   The four active projects are:

·        Cloud Migration – led by Anish Mohammed

·        Cloud Integration – led by John Arnold

·        Guidance for Small Business – led by Andy Camp; and

·        Guidance for Public Sector Users - led by Owen Sayers.

We are actively seeking volunteers to help the project leaders develop the content for their specific projects, particularly those who have experience of delivering secure cloud services at the shop-floor level.  Please, step forward and share your lessons learned; let’s move the guidance from theory to practice whilst raising your industry profile at the same time.   Everyone’s a winner!

 

If this is of interest then please drop me a line at my @cloudsecurityalliance.org.uk email address (lee.newcombe) and I’ll put you in touch with the relevant project leads.

UK-specific Cloud Guidance

posted 17 May 2016, 03:46 by Lee Newcombe

One of the things that we, as a UK chapter, are keen to deliver to our members is some local flavour to the comprehensive but generic guidance produced by the wider global Cloud Security Alliance.    To that aim, I thought it worthwhile to highlight some of the local, UK-specific, sources of security guidance to help out those of you either already working in the cloud or else exploring the potential for future cloud-based delivery of services.     Whilst aimed very much at the Public Sector, the Cloud Security Principles and associated cloud security guidance published on Gov.uk offers a considerable amount of pragmatic guidance to those looking to adopt cloud services.      The landing page for a variety of guidance documents can be found here:

 

https://www.gov.uk/government/collections/cloud-security-guidance

 

One, more hard to find, Government source of guidance relates to the security assertions that cloud providers looking to deliver to HMG clients are required to make as part of entry on to the G-Cloud procurement framework.   Those requested assertions may be a useful baseline for organisations wondering what information they should be seeking from cloud providers as part of their own due diligence exercises.    Cloud providers may wish to cover these topics in the security, risk and assurance documentation that they offer their prospective clients (perhaps under non-disclosure agreement) to assist in their decision making.   The UK G-Cloud Supplier Security Assertions can be found on Github at:

 

https://github.com/alphagov/supplier-submission-portal/tree/master/conf

 

There are long-standing concerns around cloud computing relating to data protection and privacy issues such as data sovereignty.    These concerns were recognised by the Information Commissioner (ICO) here in the UK and the ICO released guidance on the usage of cloud computing back in 2012.    The cloud world has moved on since, e.g. many cloud providers are now signed up to the model data protection clauses published by the European Commission, however the guidance is still a worthwhile read and can be found at:

 

https://ico.org.uk/media/for-organisations/documents/1540/cloud_computing_guidance_for_organisations.pdf

 

The final piece of UK-specific guidance that I’ll point you towards in this post is that offered by the Financial Conduct Authority (FCA).   Regulatory compliance is another factor commonly touted as inhibiting the adoption of cloud services.    As such, the pragmatic nature of the draft guidance produced by the FCA should be welcomed.   It will be interesting to see what the final version of the guidance contains once it has been approved following the consultation round; but for now the draft guidance can be found here:

 

https://www.fca.org.uk/static/documents/guidance-consultations/gc15-06.pdf

 

I hope you found the more UK-focussed flavour of this post valuable – please let us know either way!

 

Lee Newcombe

Lee is a member of the Board of the UK Chapter of the CSA, a named contributor to the CSA’s “Security Guidance for Critical Areas of Focus in Cloud Computing” document and author of the book “Securing Cloud Services” published in 2012.
