CSA UK Chapter Blog

The road to the cloud: the story of public versus private

posted 29 Nov 2019, 08:07 by Francesco Cipollone   [ updated 29 Nov 2019, 08:20 ]


By Dr. Wendy Ng - DevSecOps Security Advisor for Experian

A collaboration between Experian and CSA UK&I

Cloud Security Alliance - UK Chapter


The road to the cloud: the story of public versus private

We are approaching a quarter of the way through the 21st century, and you need to decide: public or private cloud? But what do these terms actually mean? Let me walk you through them; hopefully, by the end of this article, you will have a better idea (or at least feel welcomed into the 21st-century cloud era).

Gartner predicts rapid growth in cloud services, with the market reaching $370 billion by 2020.

Source: Gigabit magazine

Early concerns about the security implications of multi-tenanted systems have largely dissipated, thanks to an improved understanding of responsibility boundaries and of the controls needed to meet company- and industry-specific regulatory compliance requirements.

Just about every organisation worth its salt, across the public, private and non-profit sectors, has undergone, or is undergoing, a large transformation programme that includes a public cloud strategy for corporate assets.

As a side note, the UK government is pushing a "cloud first" policy internally and has published guidance online; you can find more here: https://www.gov.uk/guidance/government-cloud-first-policy

To give you an idea of the scale and growth, take a look at the following diagram:

Source: Statista - Licence NSC42 Ltd

Whilst no control can be perfect, the public cloud has matured, and organisations are increasingly willing to accept the residual risks of public cloud platforms with enforced access and security controls. This, combined with their ease of use, has contributed to an increasing rate of public cloud adoption in service of the needs and objectives of the organisation and the business. There is no shortage of success stories of partnerships with public cloud vendors and of their ability to provide value for the organisation.

This is particularly true for retailers, whose resource requirements change significantly and can be well served by the inherent elasticity of the public cloud. Other early adopters include start-ups, as public cloud platforms eliminate the need for significant upfront investment in infrastructure. Even amongst the more established players in traditional industries, public cloud is becoming entrenched, often through a hybrid model.

Despite the clear speed of adoption in the retail space, there is still some scepticism about the level of security on offer; scepticism that a well-placed cloud security programme can quickly dispel. Nonetheless, a clear understanding of the division of responsibilities is required.

One of the early drivers of public cloud adoption is the platform's capability to deliver operational efficiencies: you only pay for the services you use, so there are no idle servers, storage, networking equipment or technical staff unable to contribute towards productivity despite the capital invested in them. Of course, these operational efficiencies only emerge if the business is willing to transform its ways of working so that it can operate in this cloud-native manner. Whilst not a scientific study, a review of recent results from the technology giants suggests at least a correlation between significant cloud services and overachievement.


Securing Cloud Services, Lee Newcombe

Public cloud comes in a variety of 'flavours', depending on who holds system management responsibility for the assets. All carry the suffix 'as-a-Service', and, broadly, the more the provider manages on your behalf, the higher the unit price.

So, it should come as no surprise that, for certain workloads, public cloud platforms are likely to be more expensive than on-premises private clouds, where the organisation is responsible for managing the entire infrastructure and systems itself. Nevertheless, the central concept of the public cloud is the ability to take advantage of scale and the pooling of resources. This allows service providers to invest in technologies; the bigger user group also means that providers can act as a focal point for ideas and feedback from the user community, giving them greater visibility of industry trends and the ability to make strategic contributions to advances in the industry.

One clear example is the pipeline of tools for DevOps, a collaborative practice, supported by toolsets, which aids software development by breaking down silos between teams in order to cater for and respond to changing consumer expectations.

We could discuss the integration of tools in the pipeline here, but that would take us off on a tangent. We will come back to the subject, though, as it is a hot topic right now.

Public clouds are enablers, designed to be responsive to changes in an organisation's workload requirements; it is no accident that industries which experience significant fluctuations in workload, such as retail, are among the most enthusiastic adopters of public clouds. They can also be easy to adopt – too easy, in fact, for holders of corporate credit cards: a subscription to a cloud-based service, taken out to test its capability, can all too quickly become a critical IT service for a section of the business without a proper procurement process or vendor fiscal and security due diligence. Thus, the ease of adoption of the public cloud can deepen the frown lines on a CFO – as well as those on the CIO and CISO! Cloud adoption therefore requires careful planning: to leverage the power of the cloud and the full suite of tools it offers, some re-thinking of the applications being migrated is required, yet cloud migrations are all too often interpreted as simple lift-and-shift exercises.

Another concern, especially for larger organisations, which have the advantage of scale, is over-reliance on third-party vendors. Strategically, it is advisable to maintain internal capabilities, which may include developing toolsets, especially for organisations with a large operational footprint. For smaller organisations, decisions will come down to balancing investment in growth against safeguarding supporting functions from possible operational disruption.

Whilst some workloads will be better suited to the inherent elasticity of a public cloud, which may also offer a more diverse geographic presence than an on-premises private cloud, the relatively high operational costs of public clouds need to be taken into consideration. At some point, especially for large workloads with predictable (and probably consistent) resource requirements, the initial capital investment in hardware becomes more efficient for the organisation once the lower operational costs of a private cloud are taken into account. Thus, especially for large organisations, a hybrid public-private cloud strategy could provide the best balance, hedging against technical, operational and financial risks.
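To make that trade-off concrete, a back-of-the-envelope break-even calculation shows when a steady, predictable workload becomes cheaper to run on owned hardware. All figures below are hypothetical, purely for illustration:

```python
# Hypothetical cost model for a steady workload; all figures illustrative only.
# Private cloud: large upfront capital cost plus modest monthly running costs.
# Public cloud: no upfront cost, but higher monthly pay-as-you-go charges.

def months_to_break_even(capex, private_monthly, public_monthly):
    """Return the month after which private cloud becomes cheaper overall."""
    if public_monthly <= private_monthly:
        return None  # public cloud never costs more cumulatively
    return capex / (public_monthly - private_monthly)

# Example: a £500k hardware investment and £10k/month to run on-premises,
# versus £30k/month for an equivalent public cloud footprint.
months = months_to_break_even(capex=500_000,
                              private_monthly=10_000,
                              public_monthly=30_000)
print(f"Break-even after {months:.0f} months")  # 25 months
```

In this invented case the upfront investment pays for itself after roughly two years; a real comparison would also need to account for staff, power, hardware refresh cycles and the opportunity cost of the capital.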

Why Cloud Migrations Fail – some practical, key factors from the Trenches

posted 29 Nov 2019, 07:27 by Francesco Cipollone

Why Cloud Migrations Fail – some practical, key factors from the Trenches

By Dimitri Yates

It is widely accepted that cloud computing offers benefits such as agility, flexibility and scale.

There is also a shift in the financial model from capex to opex.

In order to maximise the benefits offered by the cloud, as well as leverage and properly manage the financial model, changes are required across the organisation, particularly in larger organisations.

Changes and plans for changes must be made for the organisational structures and processes (people and processes) as well as the technology which the organisation wishes to migrate into the cloud.

Practical experience and the challenges observed in large cloud migration projects show that many organisations did not anticipate, and were not ready for, challenges such as:

  1. the impact of the changes to the political climate and organisational structure;

  2. preparing for the necessary changes in organisational behaviour, culture and ways of working;

  3. upskilling, training and employee communication.

These are non-technical issues which can be, and often are, overlooked because cloud migration is seen by senior management as a purely technical domain. Certainly, there is a purely technical side to the cloud migration coin; however, the non-technical aspects are often overlooked, and they have the potential to derail otherwise well-intentioned, valid cloud migration initiatives.

Let's briefly touch on each point above in the context of what actually happens during cloud migrations.

People don't like change – especially if they feel threatened by a potential loss of influence (turf, staff and so on) and by the uncertainty which follows cloud-driven changes. There are always numerous touch points (departments, teams, etc.) in an organisation impacted by any decision to migrate to the cloud, and some of them may have been unforeseen by the decision makers.

An example of this is procurement. Traditionally, organisations have employed staff in procurement to purchase IT resources, often with long-standing relationships, not just internally but with various vendors, built over many years. Cloud computing does not need a procurement department – or at least not one of the size traditionally maintained. This realisation, when it comes, leads to resistance as people fear for their jobs; people use their networks and, in many cases, go to extreme lengths to sabotage the cloud migration – and succeed in doing so.

This example can also be seen in the 'traditional' IT organisation, where the entire org structure resists the cloud migration for the same reasons: protection of 'turf' from senior management all the way down to junior staff members, fear of job losses, and so on. Another factor in this equation is mindset – the 'we have always done it this way' mindset, where staff are resistant to change and to fast, agile ways of working for any number of reasons. This is also one of the reasons why many 'agile' projects and methodologies fail in 'traditional' environments.

So, what can be done? First, the groundwork and lines of communication with staff at all levels must be thoroughly planned and invested in. Time and funding must be allocated to ensure the message is right from the outset, and channels of communication – forums, regular meetings and so on – must be in place to allay people's fears and put minds at rest as best, and as soon as, possible. There will inevitably be redundancies or reallocations as a result of a cloud migration, and it is best to get these out of the way as soon as possible, so that the rest of the workforce can be trained, educated and prepared for the new structures and ways of working.

Training and upskilling play a huge part in the success of the cloud migration process. For instance, procurement team members tend to have good accounting skills and existing relationships with finance. These skills and relationships can be immediately leveraged by retraining procurement staff in how cloud billing works: chargebacks to various departments, setting budgets per department or division, and so on. Most people settle in after the initial shock and resistance and enjoy the new skills and challenges brought by the new ways of working, particularly once the threat to their jobs and livelihoods has been addressed.
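As a sketch of what that retraining covers: a basic chargeback report is little more than grouping billing records by a department tag. The records and department names below are invented for illustration; real data would come from the cloud provider's billing export, keyed by resource tags:

```python
from collections import defaultdict

# Hypothetical billing records: (department tag, cost in £).
billing_records = [
    ("finance", 120.50),
    ("engineering", 840.00),
    ("finance", 59.99),
    ("marketing", 310.25),
    ("engineering", 415.75),
]

def chargeback_by_department(records):
    """Aggregate spend per department so each can be billed internally."""
    totals = defaultdict(float)
    for department, cost in records:
        totals[department] += cost
    return dict(totals)

for dept, total in sorted(chargeback_by_department(billing_records).items()):
    print(f"{dept}: £{total:.2f}")
```

A real report would layer budgets and alerts on top of these totals, but the accounting skills involved are exactly the ones procurement and finance staff already have.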

The same applies to other departments more directly involved in cloud migrations, such as engineering. By investing in communication with existing staff, and in retraining them, organisations make staff feel valued and no longer under threat of losing their jobs.

Staff tend to buy into agile and CI/CD methodologies significantly more once they realise they will spend more time thinking about, and coming up with, more creative (and interesting) IT solutions rather than doing mundane, repetitive tasks which can now be handled by code and automated.

What is hybrid cloud computing?

posted 29 Nov 2019, 07:24 by Francesco Cipollone

What is hybrid cloud computing?

(Hybrid) cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

National Institute of Standards and Technology (NIST)

While a hybrid cloud can take many forms, in essence it means managing two (or more) disparate cloud environments (say, private and public) as one, or having management tools in place that let the environments be managed, and appear, as a single environment.

A hybrid cloud uses a mix of on-premises private cloud and third-party public cloud services, with orchestration between the platforms allowing workloads to move between the environments as computing needs change – giving businesses greater flexibility and more deployment options.

A cloud is hybrid when:

  • You extend an internal web server (using 'burst capacity') into a cloud service during the Christmas sales period, managed as a single instance by the company, with customers seeing no difference.

  • A company with a global presence, hosting a major global sporting event, manages the master service internally but replicates it to multiple instances globally, managed as one, so that consumers of the event get fast local access and low latency.

  • An internal business process is designed 'as-a-service', enabling it to connect with multiple cloud environments as though they were a single environment.

A cloud is not hybrid when:

  • You utilise only multiple public cloud services via orchestration (this is multi-cloud).

  • Developers in a company use a public cloud service to prototype a service that is disconnected from the company's private cloud or its data centre.

  • You use a SaaS application for a project but its data never moves into the company's data centre.
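The 'burst capacity' example above comes down to a simple placement decision: run workloads on the private cloud until its capacity is exhausted, then overflow into the public cloud. A toy sketch, with invented capacities and demand figures:

```python
# Toy placement logic for a hybrid "cloud burst": prefer the private cloud,
# overflow to the public cloud once private capacity is exhausted.
# All numbers are invented for illustration.

PRIVATE_CAPACITY = 100  # units of compute available on-premises

def place_workloads(demands, private_capacity=PRIVATE_CAPACITY):
    """Assign each workload demand to 'private' or 'public'."""
    placements = []
    used = 0
    for demand in demands:
        if used + demand <= private_capacity:
            placements.append("private")
            used += demand
        else:
            placements.append("public")  # burst into the public cloud
    return placements

# A Christmas sales spike: total demand exceeds on-premises capacity.
print(place_workloads([40, 40, 30, 20]))
# ['private', 'private', 'public', 'private']
```

Real orchestrators make this decision continuously and factor in cost, latency and data gravity, but the essential idea, the private estate as the preferred home with the public cloud as overflow, is the same.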

Orchestration of the hybrid cloud 

Cloud orchestration provides a means for a cloud service customer to manage the deployment and operation of applications and associated datasets across a hybrid cloud (most orchestration services also support multi-cloud environments). Cloud orchestration solutions offer greater flexibility and simplified operations to securely create, deploy and operate applications and services across hybrid clouds, and increase the speed of delivery, providing smooth, process-driven, reliable delivery together with continuous compliance.

Hybrid cloud challenges

Despite its benefits, hybrid cloud computing can present technical, business and management challenges. 

  • It requires API compatibility, especially across disparate cloud providers.

  • It needs solid network connectivity, and has the potential for connectivity issues.

  • You can suffer from service-level agreement (SLA) breaches.

  • You may have security concerns if sensitive data lands on public cloud servers.

  • There can be budget concerns around overuse of storage or bandwidth and proliferation of mismanaged images.

  • There is a need for good management of the information flow, especially with rapidly changing data requiring synchronisation.

  • There can be a complex mix of policies, permissions and limits that must be managed (and may vary) across disparate cloud providers.

Guest Post - Resilience in the cloud

posted 7 May 2019, 09:23 by Lee Newcombe   [ updated 7 May 2019, 10:22 ]

Here's the latest guest blog post - another one courtesy of Leron Zinatullin (@le_rond), this time on considerations when it comes to operating resiliently if using cloud services.  Don't forget, feel free to contact us if you feel that you have a burning issue that you'd like to get off your chest via a guest blog post. Practical lessons learned that you'd like to share would be most gratefully received by ourselves (as a Chapter) and also by our readers.  And now, over to Leron...

Resilience in the Cloud

Modern digital technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way.

Historically, organisations would invest in their own IT infrastructure to support their business objectives and the IT department's role would be focused on keeping the ‘lights on’.

To minimise the chance of failure of the equipment, engineers traditionally introduced an element of redundancy in the architecture. That redundancy could manifest itself on many levels. For example, it could be a redundant datacentre, which is kept as a ‘hot’ or ‘warm’ site with a complete set of hardware and software ready to take the workload in case of the failure of a primary datacentre. Components of the datacentre, like power and cooling, can also be redundant to increase the resiliency.

On a lesser scale, within a single datacentre, networking infrastructure elements can be redundant. It is not uncommon to procure two firewalls instead of just one to configure them to balance the load or just to have a second one as a backup. Power and utilities companies still stock up on critical industrial control equipment to be able to quickly react to a failed component.
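The benefit of this kind of redundancy can be quantified: if the two devices fail independently, the pair is unavailable only when both are down at once. A quick calculation, using an illustrative 99% availability per device:

```python
# Availability of N redundant, independent components, each with the same
# individual availability. The system fails only if all N fail at once.

def redundant_availability(single, n):
    return 1 - (1 - single) ** n

single = 0.99  # one firewall, disk or power feed, available 99% of the time
print(f"1 component:  {redundant_availability(single, 1):.4%}")
print(f"2 components: {redundant_availability(single, 2):.4%}")  # 99.99%
```

Two 99%-available devices yield 99.99% availability for the pair, a hundred-fold reduction in downtime, which is why engineers doubled up on firewalls, power feeds and disks despite the cost. The independence assumption matters, of course: a shared power supply or a common software fault can take both out together.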

The majority of the effort, however, went into protecting data storage. Magnetic disks were assembled in RAID arrays to reduce the chance of data loss in case of failure, while backups of less time-sensitive data were written to magnetic tapes and stored in separate physical locations.

Depending on specific business objectives or compliance requirements, organisations had to invest heavily in these architectures. One-off investments were, however, only one side of the story. On-going maintenance, regular tests and periodic upgrades were also required to keep these components operational. Labour, electricity, insurance and other costs added to the final bill. Moreover, if a company operated in a regulated space – for example, if it processed payments and cardholder data – then external audits, certification and attestation were also required.

With the advent of cloud computing, companies were able to abstract away a lot of this complexity and let someone else handle the building and operation of datacentres and dealing with compliance issues relating to physical security.

The need for business resilience, however, did not go away.

Cloud providers can offer options that far exceed (at comparable costs) the traditional infrastructure; but only if configured appropriately.

One example of this is the use of availability 'zones', where your resources can be deployed across physically separate datacentres. In this scenario, your service can be balanced across these availability zones and can remain running even if one of the zones goes down. The capital investment required to achieve equivalent functionality yourself is much greater: in essence, you would have to build two or more datacentres, and you had better have a solid business case for that.

Additional resiliency in the cloud, however, is only achieved if you architect your solutions well: running your service in a single zone or, worse still, on a single virtual server can prove less resilient than running it on a physical machine.

It is important to keep this in mind when deciding to move to the cloud from the traditional infrastructure. Simply lifting and shifting your applications to the cloud may, in fact, reduce the resiliency. These applications are unlikely to have been developed to work in the cloud and take advantage of these additional resiliency options. Therefore, I advise against such migration in favour of re-architecting.

Cloud Service Provider SLAs should also be considered. Compensation might be offered for failure to meet these, but it’s your job to check how this compares to the traditional “5 nines” of availability in a traditional datacentre – alongside the financial differences between service credits as recompense and business losses from lack of availability.
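That comparison is simple arithmetic: an availability percentage translates directly into a permitted amount of downtime per year, and "5 nines" allows barely five minutes of it:

```python
# Annual downtime permitted by a given availability figure.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    return (1 - availability) * MINUTES_PER_YEAR

for availability in (0.99, 0.999, 0.9999, 0.99999):
    minutes = downtime_minutes_per_year(availability)
    print(f"{availability:.3%} availability -> {minutes:.1f} min/year")
# "5 nines" (99.999%) works out to roughly 5.3 minutes per year.
```

Comparing a provider's published SLA figure against this scale, and against the service credits on offer when it is missed, makes the gap between "compensation" and actual business loss very visible.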

You should also be aware of the many differences between cloud service models.

When procuring a SaaS, for example, your ability to manage resilience is significantly reduced. In this case you are relying completely on your provider to keep the service up and running, potentially raising the provider outage concern. In this scenario, archiving and regular data extraction might be your only options apart from reviewing the SLAs and accepting the residual risk. Even with the data, however, your options are limited without a second application on-hand to process that data, which may also require data transformation. Study the historical performance and pick your SaaS provider carefully.

IaaS gives you more options to design an architecture for your application, but with this great freedom comes great responsibility. The provider is responsible for fewer layers of the overall stack when it comes to IaaS, so you must design and maintain a lot of it yourself. When doing so, assume failure rather than thinking of it as a (remote) possibility.  Availability zones are helpful, but not always sufficient.  What scenarios require consideration of the use of a separate geographical region? Do any scenarios or requirements justify a need for a second cloud services provider? The European Banking Authority recommendations on Exit and Continuity can be an interesting example to look at from a testing and deliverability perspective.

Finally, PaaS, as always, sits somewhere between SaaS and IaaS. I find that much of the time it depends on the particular platform: some will give you resiliency options to play with, while others retain full control. Be mindful of the characteristics of SaaS that also affect PaaS from a redundancy perspective; for example, if you're using a proprietary PaaS then you can't just lift and shift your data and code.

Above all, when designing for resiliency, take a risk-based approach. Not all your assets have the same criticality. Understand the priorities, know your RPO and RTO. Remember that SaaS can be built on top of AWS or Azure, exposing you to supply chain risks.

Even when assuming the worst, you may not have to keep every single service running should the worst actually happen. For one thing, it's too expensive - just ask your business stakeholders. The very worst time to be defining your approach to resilience is in the middle of an incident, closely followed by shortly after an incident.  As with other elements of security in the cloud, resilience should “shift left” and be addressed as early in the delivery cycle as possible.  As the Scout movement is fond of saying – “be prepared”.

About the author

Leron Zinatullin is an experienced risk consultant, specialising in cyber-security strategy, management and delivery. He has led large-scale, global, high-value security transformation projects with a view to improving cost performance and supporting business strategy. He has extensive knowledge and practical experience in solving information security, privacy and architectural issues across multiple industry sectors. Leron is the author of The Psychology of Information Security.

Twitter: @le_rond


CSA UK Research Topics

posted 16 Nov 2018, 07:22 by Lee Newcombe   [ updated 6 Dec 2018, 09:59 ]

Lewis Troke was elected into the position of Director of Research for the UK Chapter at our recent Annual General Meeting.  We are using this as a great opportunity to start afresh with our approach to research, now under the guidance of Lewis.  What security guidance are UK organisations adopting cloud computing looking for?  What can we, as your local chapter of the Cloud Security Alliance, do to meet those needs?  Please take some time out of your busy day to complete the questionnaire over at SurveyMonkey and let us know your thoughts. We're keen to provide the guidance that UK organisations need, so please don't be shy in letting us know where those needs lie.  We have our own views of course, but we would much rather have the members drive our priorities.  
Many thanks in advance!

Cloud Security Governance Approaches

posted 5 Oct 2018, 05:14 by Lee Newcombe   [ updated 5 Oct 2018, 06:00 ]

Here's the latest in our series of guest blog posts on the Chapter blog.  This one comes from Leron Zinatullin (@le_rond) and describes the pros and cons of different governance approaches in relation to securing cloud implementations.  Don't forget, feel free to contact us if you feel that you have a burning issue that you'd like to get off your chest via a guest blog post. We can help.

Governance Models - Cloud

Your company has decided to adopt Cloud. Or maybe it was among the ones that relied on virtualised environments before it was even a thing? In either case, cloud security has to be managed. How do you go about that?

Before checking out vendor marketing materials in search of the perfect technology solution, let's step back and think about it from a governance perspective. In an enterprise like yours, there are a number of business functions and departments with varying levels of autonomy. Do you trust them to manage business process-specific risk, or do you relieve them of this burden by setting security control objectives and standards centrally? Or maybe something in-between?

Centralised model

Managing security centrally allows you to uniformly project your security strategy and guiding policy across all departments. This is especially useful when aiming to achieve alignment across business functions. It helps when your customers, products or services are similar across the company, but even if not, centralised governance and clear accountability may reduce duplication of work through streamlining the processes and cost-effective use of people and technology (if organised in a central pool).

If one of the departments is struggling financially or is less profitable, the centralised approach ensures that overall risk is still managed appropriately and security is not neglected.  This point is especially important when considering a security incident (e.g. due to misconfigured access permissions) that may affect the whole company.

Responding to incidents in general may be simplified not only from the reporting perspective, but also by making sure due process is followed with appropriate oversight.

There are, of course, some drawbacks. In the effort to come up with a uniform policy, you may end up in a situation where it loses its appeal. It’s now perceived as too high-level and out of touch with real business unit needs. The buy-in from the business stakeholders, therefore, might be challenging to achieve.

Let’s explore the alternative; the decentralised model.

Decentralised model

This approach is best applied when your company’s departments have different customers, varied needs and business models. This situation naturally calls for more granular security requirements preferably set at the business unit level. 

In this scenario, every department is empowered to develop their own set of policies and controls. These policies should be aligned with the specific business need relevant to that team. This allows for local adjustments and increased levels of autonomy. For example, upstream and downstream operations of an oil company have vastly different needs due to the nature of activities they are involved in. Drilling and extracting raw materials from the ground is not the same as operating a petrol station, which can feel more like a retail business rather than one dominated by industrial control systems.

Another example might be a company that grew through a series of mergers and acquisitions where acquired companies retained a level of individuality and operate as an enterprise under the umbrella of a parent corporation.

With this degree of decentralisation, resource allocation is no longer managed centrally and, combined with increased buy-in, allows for greater ownership of the security programme.

This model naturally has limitations. These have been highlighted when identifying the benefits of the centralised approach: potential duplication of effort, inconsistent policy framework, challenges while responding to the enterprise-wide incident, etc. But is there a way to combine the best of both worlds? Let’s explore what a hybrid model might look like.

Hybrid model

The middle ground can be achieved by establishing a governance body that sets goals and objectives for the company overall, while allowing departments to choose how to achieve those targets. What are examples of such centrally defined security outcomes? Maintaining compliance with relevant laws and regulations is an obvious one, but the point is more subtle than that.

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model in turn decides how their security efforts contribute to the overall risk reduction and better security posture.  This means setting a baseline of security controls and communicating it to all business units and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption.

When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by gap assessment, understanding potential inconsistencies with the baseline framework.
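The gap assessment described above is, at its core, a set difference between the baseline control framework and what each business unit has implemented. A minimal sketch, with invented control identifiers (a real framework would use, for example, CSA Cloud Controls Matrix IDs):

```python
# Hypothetical baseline control framework; identifiers invented for illustration.
baseline_controls = {"IAM-01", "IAM-02", "LOG-01", "ENC-01", "NET-01"}

# What each (invented) business unit has actually implemented, including
# unit-specific extras that are not part of the baseline.
business_unit_controls = {
    "retail":   {"IAM-01", "LOG-01", "ENC-01", "NET-01", "RET-09"},
    "upstream": {"IAM-01", "IAM-02", "NET-01"},
}

def gap_assessment(baseline, implemented):
    """Controls required by the baseline but missing from a business unit."""
    return sorted(baseline - implemented)

for unit, controls in business_unit_controls.items():
    print(f"{unit}: missing {gap_assessment(baseline_controls, controls)}")
```

The output of such an assessment then drives the follow-up conversation: is the gap a genuine inconsistency to fix, or a justified deviation that the central governance body should formally accept?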

In the context of the Cloud, decentralised and hybrid models might allow different business units to choose different cloud providers based on individual needs and cost-benefit analysis.  They can go further and focus on different solution types such as SaaS over IaaS.

As mentioned above, business units are free to decide on implementation methods of security controls providing they align with the overall policy. Compliance monitoring responsibilities, however, are best shared. Business units can manage the implemented controls but link in with the central function for reporting to agree consistent metrics and remove potential bias. This approach is similar to the Three Lines of Defence employed in many organisations to effectively manage risk. This model suggests that departments themselves own and manage risk in the first instance with security and audit and assurance functions forming second and third lines of defence respectively.

What next?

We've looked at three different governance models and discussed their pros and cons in relation to the Cloud. Depending on the organisation, the choice can be fairly obvious; it may emerge naturally from the way the company runs its operations. All you need to do is fit in with the organisational culture and adapt your approach to cloud governance accordingly.

The point of this article, however, is to encourage you to consider security in the business context. Don’t just select a governance model based on what “sounds good” or what you’ve done in the past. Instead, analyse the company, talk to people, see what works and be ready to adjust the course of action.

If the governance structure chosen is wrong or, worse still, undefined, this can stifle the business instead of enabling it. And believe me, that’s the last thing you want to do.

Be prepared to listen: the decision to choose one of the above models doesn’t have to be final. It can be adjusted as part of the continuous improvement and feedback cycle. It always, however, has to be aligned with business needs.


Centralised model

A single function responsible for all aspects of Cloud security: people, process, technology, governance, operations, etc.

Pros:
  • Central insight and visibility across the entire cloud security initiative
  • High degree of consistency in process execution
  • More streamlined, with a single body for accountability
  • Quick results due to reduced dependencies on other teams

Cons:
  • Requires dedicated and additional financial support from leadership
  • Makes customisation more time-consuming
  • Getting buy-in from all departments is problematic
  • Might be perceived as not relevant and slow in adoption

Decentralised model

Strategic direction is set centrally, while all other capabilities are left up to existing teams to define.

Pros:
  • High level of independence amongst departments for decision-making and implementation
  • Easier to obtain stakeholder buy-in
  • Less impact on existing organisation structures and teams
  • Increased adoption due to incremental change
  • High degree of alignment to existing functions

Cons:
  • Less control to enforce Cloud security requirements
  • Potential duplicate solutions, higher cost, and less effective control operations
  • Delayed results due to conflicting priorities
  • Potential for slower, less coordinated development of required capabilities
  • Lack of insight across non-integrated cloud infrastructure and services

Hybrid model

Strategy, policy, governance and vendors are managed by the Cloud security team; other capabilities remain outside the Cloud security initiative.

Pros:
  • High-priority Cloud security capabilities addressed first
  • Maintains centralised management for core Cloud security requirements
  • Allows decentralised decision-making and flexibility for some capabilities

Cons:
  • Gives up some control of Cloud security capability implementation and operations to existing functions
  • Some organisation change is still required (impacting existing functions)
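The trade-offs above can be made concrete by sketching who owns each capability under each model. The capability names and the ownership mapping below are an illustrative simplification drawn from the descriptions above, not a formal taxonomy:

```python
# Illustrative mapping of capability ownership under the three governance models.
# "central" = Cloud security team; "business unit" = existing teams/departments.
OWNERSHIP = {
    "centralised": {"strategy": "central", "policy": "central",
                    "implementation": "central", "operations": "central"},
    "decentralised": {"strategy": "central", "policy": "business unit",
                      "implementation": "business unit", "operations": "business unit"},
    "hybrid": {"strategy": "central", "policy": "central",
               "implementation": "business unit", "operations": "business unit"},
}

def owner(model: str, capability: str) -> str:
    """Look up who owns a given capability under a given governance model."""
    return OWNERSHIP[model][capability]

print(owner("hybrid", "policy"))          # central
print(owner("hybrid", "implementation"))  # business unit
```

Encoding the split explicitly, even in a throwaway table like this, is a useful exercise when agreeing responsibilities with stakeholders: any capability you cannot place has an undefined owner, which is exactly the situation to avoid.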

About the author

Leron Zinatullin is an experienced risk consultant, specialising in cyber-security strategy, management and delivery. He has led large-scale, global, high-value security transformation projects with a view to improving cost performance and supporting business strategy. He has extensive knowledge and practical experience in solving information security, privacy and architectural issues across multiple industry sectors. Leron is the author of The Psychology of Information Security.

Twitter: @le_rond



CSA UK AGM Updates

posted 24 Sep 2018, 03:54 by Lee Newcombe   [ updated 10 Oct 2018, 01:26 ]

Our Chapter Annual General Meeting (AGM) was held last week, kindly hosted by Trend Micro. With respect to Chapter business, the following outcomes were noted:
  • Lee Newcombe - elected to Vice-Chair
  • Lewis Troke - elected to Director of Research
  • Paul Simmonds - elected as General Board Member.
Each appointment lasts for two years. It was encouraging how many of the attendees expressed an interest in two of our other positions - Director of Events and Director of Communications - and so we hope to be able to fill those positions shortly.  The more observant amongst you will note that we currently have only a Vice-Chair as no-one was nominated for the role of Chair - Lee Newcombe (your author :)) will deputise whilst we await a candidate willing to put themselves up for election as Chair.

It was a positive interactive session with some great content being presented by knowledgeable practitioners. Many thanks to Dave Walker of AWS for a fantastic technical session on using Lambda to automate DevSecOps, Craig Savage from VMware for walking through the cultural and governance issues associated with adoption of hybrid cloud, Francesco Cipollone for sharing practical lessons learned from a number of public cloud deployments and to Bharat Mistry for showing how API-enabled security tooling is the way forward.  The slides from the day will be added to this post as they become available.

As ever, we are here to support the UK cloud community.  Feel free to contact us with ideas for what you want from your chapter in terms of outputs and activities!

Guest Post - Francesco Cipollone

posted 14 Dec 2017, 01:55 by Lee Newcombe   [ updated 14 Dec 2017, 02:02 ]

One of the things we're keen to do here is to share lessons learned by those who are actively implementing cloud services.  As such, I'm pleased to offer the opportunity to contribute guest articles sharing cloud security war stories to this blog.  Our first guest author is Francesco Cipollone of NSC42 who has kindly taken the time to write up a number of thoughts relating to identity on the Office 365 platform.  His article can be found below - thanks Francesco!

O365 Identity Article

Let me start by saying that by no means am I a pure authentication expert nor a Microsoft expert. As many of you, I'm on the journey to the cloud and learning as I go. Please provide any feedback or any contribution to the article so as to make it as accurate as possible.

Identity and Access management with O365/Azure

A few weeks ago, I had a conversation with a colleague about identities in Office 365 and the discussion led to the various nuances of where the identities are located.

I have to admit, with a bit of shame, that in previous transformation projects I hadn't given this topic much consideration; nonetheless, with GDPR around the corner (May 2018), it is quite important.

I've done a bit of research but I haven't found a comprehensive article on the identities, where they are stored, how they are used etc. and so I've decided to put something together myself.  

In this small article, I will use the words identity and account interchangeably.

Acronyms used:

We all hate them but we can’t live without them, for the sake of clarity I’ll list the meaning of the terms that I’m going to use in the article:

  • AAD – Azure Active Directory

  • AD – Microsoft Active Directory

  • AD Connect – Active Directory Connection services

  • ADFS – Active Directory Federation Services

  • B2B/B2C – Azure Directory Service – Business to Business and Business to Consumer

  • EU – European Union

  • EU GDPR – European Union General Data Protection Regulation (enforced from May 2018)

  • DPA/EU-DPD – Data Protection Act 1998 (following EU Data Protection Directive 1995)

  • GP/GPO – AD Group Policy/Group Policy Object

  • IAM – Identity and Access Management

  • IDaaS – IDentity as a Service

  • IdP – Identity Provider

  • MS – Microsoft

  • MFA – Multi Factor Authentication

  • O365 – Office 365

  • SSO – Single Sign On

  • SME – Small and Medium Enterprise

  • SAR – Subject Access Request (GDPR)

  • WAP – Web Application Proxy (for AD)

Accounts in the Microsoft Cloud world

Readers who are not familiar with identities in Azure/Office 365 should refer to the MS article understanding O365 Identity and the more generic choosing O365 sign-in method (a bit outdated, but still a good overview of the identity models available when using the Microsoft cloud platform).

The Azure and Office365 cloud services rely on a backend version of Azure Active Directory service (commonly referred to as AAD). Using AAD implies the creation of additional accounts inside the Microsoft cloud, however there are different methodologies with different implications. Let's start with the basic types of identities in O365:

  • Federated Identities (AD+AAD+ADFS) - These kinds of identities are effectively located inside the on-premises identity store (e.g. Microsoft Active Directory). This technology enables the synchronization of selected attributes of the on-premises directory object (AD accounts and others) with O365 but authentication decisions are made on-premises with the cloud environment trusting the on-premises environment. This kind of identity strategy is often integrated with some kind of Single Sign On (SSO) technology (like ADFS or another third-party tool).  This approach keeps password hashes on premises, enables the centralized management of identities (as they are effectively in AD), and facilitates the re-use of existing strong authentication methods as well as traditional security controls (e.g. AD GP/GPO password policy).

  • Synchronised Identities (AD+AAD+AD-Connect) - The identities (accounts) are separate but synchronised, i.e. copied from on-premises into the cloud. The identity used in Office365/Azure is stored in AAD. The identity used on-prem resides in the identity store used on-prem (usually Active Directory). The identity's password is one of the attributes synchronised with the cloud platform via AD-Connect.  Cloud users are required to enter their credentials to access cloud services.

  • Isolated Identities (AD & AAD) - In this specific case there is no link between the identity used on-prem and the identity used in the Microsoft Cloud. I've not seen many instances of this approach and it is usually a corner case. Nonetheless, this option does not require an on-premises server and could be ideal for an SME or a start-up.

  • Special Cases B2B and B2C – The AAD service can also provide applications developed in Azure (e.g. using Azure Web Service PaaS) with an underlying identity database that can hold company identities as well as external customer identities. These special instances of AAD allow external accounts to be created in, or federated with, AAD without creating accounts in the underlying AD. This isolates customer accounts in AAD and helps reduce the scope of customer-oriented regulation (like GDPR or the DPA). The main difference between the two is that B2B (Business to Business) allows federation between the AAD and another directory and is oriented, as the name implies, to business-to-business interactions (for more information refer to: B2B Overview), while B2C (Business to Consumer) allows accounts to be created without federation and with more freedom in the choice of username, making it better suited to consumer applications where a user simply wants to sign up with an e-mail address (for more information refer to: B2C Overview; for a comparison between the two refer to: B2B compared to B2C). B2B and B2C are outside the scope of this article; I've included this overview for completeness.
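To summarise the first three models, the key questions are where the authoritative account lives, where the authentication decision is made, and whether a password hash reaches the cloud. A small illustrative sketch (the field names and values are mine, not a Microsoft API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentityModel:
    """Illustrative summary of an O365/Azure identity model."""
    name: str
    authoritative_store: str     # where the controlling account lives
    auth_decision: str           # where the authentication decision is made
    password_hash_in_cloud: bool # does a password hash reach the cloud platform?

FEDERATED = IdentityModel("Federated (AD+AAD+ADFS)", "on-prem AD", "on-prem (IdP)", False)
SYNCHRONISED = IdentityModel("Synchronised (AD+AAD+AD Connect)", "on-prem AD", "cloud (AAD)", True)
ISOLATED = IdentityModel("Isolated (AD & AAD)", "cloud AAD", "cloud (AAD)", True)

for model in (FEDERATED, SYNCHRONISED, ISOLATED):
    print(f"{model.name}: auth decided {model.auth_decision}, "
          f"hash in cloud: {model.password_hash_in_cloud}")
```

Laying the models out this way makes the GDPR discussion in the next section easier to follow: only the federated model keeps both the authoritative account and the authentication decision entirely on-premises.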

Identity Location:

In each of the cases above, the identity information is stored in different locations. With geographical regulation (e.g. GDPR) the actual location and control of an account is important.

With the use of cloud services, identity information can end up spread across multiple locations (on-premises and Azure AD). For this reason, it is important to choose your preferred identity option in conjunction with a review of the key regulatory factors linked to data protection, including GDPR. One factor to keep in mind when choosing your approach is the geo-restriction on where identity information is stored and processed, as it may be classed as personal data: an identity store for European identities located outside the EEA region (for example in America) could constitute a breach. The chosen identity model determines how much of the on-premises identity information is replicated in the cloud (and to which cloud region), and this should inform the wider decision-making process with respect to your identity model.
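As a toy illustration of the geo-restriction point, the check could be encoded as a simple allow-list. The region names and the allow-list below are assumptions for the example, not an official compliance list:

```python
# Naive geo check: is an identity store's region acceptable for EU personal data?
# The region names and allow-list are illustrative assumptions only.
EEA_APPROVED_REGIONS = {"westeurope", "northeurope"}

def identity_store_compliant(store_region: str, data_subjects_region: str = "EU") -> bool:
    """Return True if storing this population's identity data in the region passes the check."""
    if data_subjects_region != "EU":
        return True  # this sketch only models the EU restriction discussed above
    return store_region.lower() in EEA_APPROVED_REGIONS

print(identity_store_compliant("westeurope"))  # True
print(identity_store_compliant("eastus"))      # False
```

A real assessment involves far more than region names (data transfer mechanisms, sub-processors, contractual clauses), but even a crude check like this makes the point that the identity model decides which side of the line your data falls on.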

Figure 1 - Synchronised Identities Architecture

  • In the Synchronised Identity case the controlling account resides in the cloud identity store (AAD in this specific case). The password hash of the two accounts (on-prem and AAD) is synchronised, along with other account attributes, but there are two separate accounts with shared attributes. The link between the accounts is governed by the settings of AD Connect and AAD; it is possible to refine items like password reset and other similar settings.

Figure 2 - Federation Architecture Sketch

  • In the Federated Identities case the controlling account resides in the controlling identity store (usually Active Directory), referred to as an Identity Provider (IdP) in federated scenarios. Once a user authenticates against one of the cloud portals, the authentication request is forwarded to the IdP; hence AAD and the authentication portal act only as a user-facing front end. This method also facilitates the re-use of on-premises security mechanisms such as strong authentication, password policy driven by AD GPO, and auditing. Moreover, the password hashes and the identities are not stored with the cloud provider – the cloud provider trusts the on-premises IdP.

  • In the Isolated identity case the authentication process for On-Prem and cloud (Azure/Office 365) is completely separate.
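The essential property of the federated case (the cloud front end holds no password hashes and simply trusts the on-prem IdP's decision) can be sketched as a toy model; the functions below are illustrative stand-ins, not ADFS or AAD APIs:

```python
# Toy model of the federated flow: the cloud portal never sees a stored credential;
# it forwards the authentication request and trusts the on-prem IdP's answer.
ON_PREM_DIRECTORY = {"alice": "correct-horse"}  # illustrative on-prem credential store

def idp_authenticate(user: str, password: str) -> bool:
    """The authentication decision is made on-prem, against the on-prem store."""
    return ON_PREM_DIRECTORY.get(user) == password

def cloud_portal_login(user: str, password: str) -> str:
    # The cloud portal acts only as a front end: no hashes are stored here.
    return "token-for-" + user if idp_authenticate(user, password) else "denied"

print(cloud_portal_login("alice", "correct-horse"))  # token-for-alice
print(cloud_portal_login("alice", "wrong"))          # denied
```

In the synchronised model, by contrast, the equivalent of `idp_authenticate` would run in the cloud against the synchronised hash, which is exactly the difference that matters for the identity-location discussion above.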

Deployment Components:

  • AD - Active Directory

  • ADDC - Active Directory Domain Controllers

  • ADFS - Active Directory Federation Services

  • WAP - Web Application Proxy (optional component for frontend Sign On – for more info refer to Hybrid Identity Requirements)

Decide where you want to deploy your components:

  • Azure/Other cloud provider Deployment

  • On-Prem Deployment

Note: Microsoft recommends deploying ADFS servers as close as possible to the Domain Controllers.

Note 2: the number of TCP ports that need to be opened between the ADFS servers and the AD controllers is quite substantial. If your architecture pattern and security policy allow it, consider deploying AD and ADFS in the same zone (minimum filtering between the two systems) so as not to punch a lot of holes in your firewalls.

Application Proxy Location:

Communication between user web requests and the backend authentication is normally handled by the WAP, while "internal" requests (coming from trusted networks, if you still rely on that concept) go directly to the ADFS servers.

Below is a deployment example, followed by the authentication flow. For a full list of ports and components refer to Hybrid Identity Requirements.

Figure 3 – Federation Detailed Architecture

Figure 4 - Federation Authentication Flow


Additional Option - Multi Factor Authentication

Figure 5 - MFA Architecture

In addition to the methods described in the earlier section there is an additional security component that could be added to the picture – Multifactor Authentication (MFA).

The idea behind multifactor authentication is to have a physical item required as part of the authentication (for more information on multi factor authentication refer to Multi-factor authentication Wikipedia article).  

The multifactor authentication token comes in different shapes and forms:

  • As an SMS to the selected phone (note they tend to be a bit delayed)

  • As a call to the selected phone

  • As a one-time password/token generator application installed on a device  

Personal Note - I’ve found the token generator application to be the most reliable, as it works even without phone signal.

Multifactor authentication creates an additional challenge to a potential attacker as it requires additional effort to get hold of the physical device (or the token value for the particular moment) providing the second factor.
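Token-generator applications typically implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time; that is why they work without phone signal. A minimal standard-library sketch of the generic algorithm (not a claim about what Microsoft MFA uses internally):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated to N digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP driven by the current 30-second time window."""
    t = int((time.time() if timestamp is None else timestamp) // step)
    return hotp(secret, t)

# RFC 4226 test secret; note that no network access is needed to generate a code.
print(hotp(b"12345678901234567890", 0))  # 755224
```

The attacker's problem is clear from the code: without the shared secret, knowing the algorithm and the time tells you nothing about the current six-digit window.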

Recent attacks, such as the reported compromise of the Deloitte e-mail service, have shown that single-factor authentication can be “easily” compromised. I’ve used “easily” in quotation marks because it all comes down to how well an organisation protects privileged identities and the configurations it chooses to deploy. In general, top-level accounts should be used for initial configuration purposes only and then locked away, with day-to-day administration performed using less privileged accounts.

MFA can be deployed for various applications:

Please note that Azure Active Directory MFA (also referred to as full-MFA) comes with the Azure Active Directory Premium plans. To identify the various versions of MFA, and select the one most applicable to your specific situation, refer to MFA plans.


Below is a list of a few key points that I've noted in cloud migration projects and that, hopefully, might help you avoid the same issues:

  • Simplicity vs adoption: Usually synchronisation (AD-Connect) is easier to implement than Federation/SSO (ADFS), but it requires the user to authenticate twice: once on the laptop, and again on the O365 portals. This usually makes cloud adoption harder for an enterprise due to a sub-optimal user experience.

  • Additional Component: The organization will need an additional infrastructure component, e.g. ADFS or equivalent, whether it decides to use the AD-Connect or Federation methods described above.

  • Identity Resilience and Federation/Synchronisation readiness: Both ADFS and AD-Connect come as software components. If not planned carefully, these components might fail (causing an outage) or receive an overwhelming number of unplanned requests (legitimate or DDoS).

  • Identification of Identity Stores: In an enterprise, identity could span several different systems. If identity is not sufficiently consolidated, integrating the cloud Identity and Access Management (IAM) systems with the on-prem environment might turn into a nightmare of delays. It is definitely better to have a single authoritative source of identity to build from.

  • Use of Multi Factor Authentication (in short MFA): Depending on the enterprise security policy, strong authentication might be required. If not considered carefully this might turn into a painful step. Although the Microsoft MFA service works quite well in the basic scenario, it is harder to implement in traditional enterprise scenarios.

MFA requires additional application integration that does not always work with legacy software.

One example is the configuration of Office clients (specifically Outlook): MFA does not work well with Office client versions prior to 2016, which do not support modern authentication (the component that enables MFA across the Office suite).

Other challenges with MFA relate to the end user and changes in the user experience (an additional step is required to access resources). For example, with Bring Your Own Device (BYOD), mail synchronisation usually relies on ActiveSync, and this component tends to conflict with MFA.

  • ADFS only works with AD: if the organisation utilises an identity store that is different from AD this might add another layer of complexity due to the need to integrate different technology components, e.g. implement a specific federation tool such as Ping to act as an intermediary.

  • Adapt to Microsoft changes: AAD is a PaaS service and new features are introduced regularly. Failure to plan for them might result in being forced to adapt later (e.g. the use of classic portal vs the new Azure portal (ASM vs ARM))

  • AAD is not AD: AAD has a lot of features, and Microsoft is constantly adding new ones, but fundamentally AAD is not a full-blown Active Directory. To summarise the key differences: AD is a directory service (with structures and capabilities like OUs, GPOs, domain join, etc.) while AAD is an identity solution (it stores and authenticates users). A full comparison between the two services is outside the scope of this article (for a quick overview refer to this article), but to cite a few major points:

    • AAD still lacks a modern and flexible way to manage Group Policies.

    • AAD has a flat OU Structure.

    • AAD is in the cloud and has different authentication methods; as such it does not support protocols such as Kerberos or NTLM.

  • Plan ahead with respect to which security features to use: Azure offers some security features that could be used in conjunction with, or to enhance, the existing security controls applied to IAM systems such as:

    • MFA: authentication of users by multiple methods

    • AAD Identity Protection: helps identify vulnerable accounts (for more information refer to https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection)

    • OMS/Security Centre: allows you to monitor and log incidents, as well as identify potential tampering (the Security Centre correlation feature is a premium service)

    • Windows Hello

    • Windows OS base: please note that certain features (like Windows Hello) are only available from Windows 10 onward

  • Using Cloud extends the IAM perimeter: with the introduction of O365 and the AAD component the IAM perimeter is extended to the cloud and is partially outside of the company's control as AAD is a PaaS service.

Takeaways

A good planning phase is always required before moving into any kind of project (IT or other discipline).

Together with a plan it is good to have a short and long-term strategy. Some elements to consider are:

  • Understand the business context and how to align the identity strategy with the overall business strategy

  • Understand the key requirements from internal policy and align the AD/AAD services to the security strategy

  • Identify the geographical and regulatory restrictions that apply to your business

  • How will GDPR impact the AAD (not a comprehensive list):

      • Geographical constraint

      • Response to SAR (subject access request)

  • How to track and tackle the sprawl of identity repositories

  • What flavour of AD/AAD will be used in 1/2/5 years

  • What operating system base is going to be used in 1/2/5 years

  • What federation/SSO system is going to be used in 1/2/5 years?

  • Will you want to re-use your identities on other, non-Microsoft, cloud platforms?

Cloud integration and portability

posted 29 Aug 2017, 08:55 by Lee Newcombe

Integration and portability – either working across multiple cloud providers or else shifting workloads from one provider to another – remain amongst the trickier areas of cloud strategy and security.  Different business strategies and priorities will drive different approaches.  For example, if you take the view that service resilience is your primary concern then the idea of placing all your eggs in one basket, even one as well made as AWS or Azure, may be anathema.  This can then drive architectures that must either split components across multiple cloud providers so as to reduce impact of compromise (including outages) or to use a secondary cloud provider to provide contingency in the event of a failure of your primary supplier.  If you’re going to support portability (the ability to shift workloads between cloud providers) then you need to avoid lock-in which can drive you towards containerisation such that you can take your encapsulated infrastructure from one provider to another – subject to tooling and skills.   This does mean that you end up abstracting away from provider-specific APIs and capabilities where you can (e.g. containerisation, deployment of “cloud management platforms”), which is counter to the idea of going truly cloud-native in this author’s opinion.

What about integration?  This is a more interesting proposition.   Why not use Azure AD (or other cloud-based identity provider) to manage the identities and entitlements used across your cloud supply chain?  Why shouldn’t you send audit logs from a variety of cloud providers to AWS S3 buckets and then Glacier for long term storage (or to a cloud-based SIEM service for analysis)?  Why not go for a distributed microservices architecture with consumable services hosted across cloud providers? You do introduce additional complexity however, for example, how will you

  • secure network connectivity?

  • encrypt data in transit and at rest and perform the necessary key management?

  • authenticate and authorise interactions?

  • consolidate security monitoring and incident response?

  • consolidate billing to maintain a view on costs?

  • maintain an understanding of where data is flowing and why?

  • track operational responsibility for service delivery?

  • secure, monitor and track usage of the exposed APIs enabling integration?

  • secure your automated deployment pipeline across the diverse supply chain?

  • prevent latency-sensitive services from becoming reliant upon multiple traversals over the internet?

  • set and manage service levels for in-house applications built on a multi-cloud platform?

But if you either make use of provider APIs or front your own services with your own APIs then this kind of integration of “best of breed” services can support a move towards truly cloud-native approaches without being utterly reliant on a single provider. That said, failure of a cloud provider hosting a critical component could still take down a multi-cloud hosted microservices-based application if it’s not built with resilience in mind.  It’s also worth noting that adoption of PaaS or FaaS services will abstract away some of these issues for you!

The simplest approach, however, may well be to pick a cloud provider you are happy with and go all-in (albeit with minimal but necessary integration, such as federated identity).   Complexity is, and will remain, the enemy of security.  If you are prepared to accept the low risk of a multi-region cloud provider outage then perhaps you would be best to avoid the complexity of full-on integration or portability and concentrate instead on account-based segmentation of services within a single provider.

In summary, there is no one-size fits all approach.  Portability may overly constrain your virtualised infrastructure, negating many of the perceived benefits of cloud, whilst some level of integration is likely to be necessary (e.g. federated identity management).  The big question, as is often the case with cloud, is one of trust.  Do you trust your cloud providers to be there when you need them or do you need to engineer in contingency cloud-provider arrangements?  The choice is yours.

Small Business Guidance

posted 2 Aug 2017, 06:34 by Lee Newcombe   [ updated 2 Aug 2017, 06:39 ]

One of the projects we currently have underway in the UK chapter relates to the provision of guidance tailored towards small businesses. Cloud offers start-ups and small businesses the IT capabilities they need to compete with more established organisations but it is unlikely that such firms will have dedicated security teams tasked to secure such capabilities.  This project aims to provide pragmatic insight to help those asked to secure cloud services in small businesses to close some of that gap. 

Here's an update from Andy Camp who is running with this project...


Version 4.0 of the CSA Security Guidance for Critical Areas of Focus in Cloud Computing is a 152-page document full of extremely useful information. It is, however, difficult to interpret and onerous to implement for the majority of Small and Medium Enterprises (SMEs), who constitute over 99% of businesses in the UK and whose turnover (2014 figures) represents 47% of private-sector turnover in the UK.

The document is difficult to interpret either because SMEs do not directly employ specialist security resources or because, even if they do, there are other more pressing operational security issues to be addressed. This is further complicated by the fact that cloud suppliers are third parties, so procurement and legal expertise may also be required to navigate the third-party security assurance activities conducted alongside procurement and legal issue resolution.

The UK government has previously stated its intention to use more SMEs, with the caveat that they must be appropriately secure, demonstrated through participation in the Cyber Essentials scheme. The requirement to obtain Cyber Essentials (if selling to the UK public sector), together with changes to legislation (e.g. GDPR) and regulatory requirements, means that SMEs cannot afford the luxury of simply assuming the security of their data and services in the cloud.

This SME guidance is essentially a business-based third-party security assurance approach for SMEs to use to assess prospective cloud suppliers. It is based upon a Business Impact Assessment: a simple method of supplier assurance that leads to a risk statement and options to manage any identified risks. The criticality of the cloud supply chain to the SME can then be used to prioritise implementation of the risk management activity.
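The Business Impact Assessment approach described above could be sketched as a simple scoring exercise. The scales and thresholds below are assumptions for illustration, not part of the CSA guidance:

```python
# Illustrative sketch: combine a Business Impact score with a supplier assurance
# score to produce a simple risk statement. Scales/thresholds are assumptions.

def risk_rating(business_impact: int, supplier_assurance: int) -> str:
    """business_impact: 1 (low) to 5 (severe); supplier_assurance: 1 (weak) to 5 (strong)."""
    exposure = business_impact * (6 - supplier_assurance)  # weak assurance raises exposure
    if exposure >= 15:
        return "high"
    if exposure >= 8:
        return "medium"
    return "low"

# A critical service from a weakly assured supplier is the priority to address.
print(risk_rating(business_impact=5, supplier_assurance=2))  # high
print(risk_rating(business_impact=2, supplier_assurance=4))  # low
```

The value for an SME is less in the arithmetic than in the ordering it produces: spend the limited security budget on the highest-rated supplier relationships first.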

SMEs largely focus on their core revenue-earning activities. The mantra for them, therefore, is that any work on non-core activity must meet three quality criteria: it must be Appropriate, Affordable and Achievable.

The final cloud security report for SMEs will include guidance on:

  • Context – assessing what legislation, regulation, contracts and business strategy affects Cloud Service adoption

  • Business Impact – if your cloud supplier fails, what impact is your business likely to suffer?

  • Cloud Supplier Assurance – assessing each supplier to see if the strength of their controls meets the needs of the organisation.

  • Risk Assessment - using both the Business Impact and Supplier Assurance Activities to see if the risk of using a particular cloud supplier is acceptable to your business.

  • Working out your options – the steps you can take to mitigate the risk of cloud to your business

  • Implementing your option(s) – taking account of the resources you have, suggested approaches to prioritise and address the risk mitigation options, from the most to the least critical cloud service you use.
