13 November 2019


Who’s afraid of the Big Bad Cloud? Part 2: Cloud & Security

The Cloud is becoming increasingly important to companies. It offers the promise of greater agility and a significantly shortened Time to Market. In an increasingly competitive world where Time to Market is a key success factor, it is mainly this prospect that motivates decision-makers to move to the Cloud.

However, to take full advantage of the benefits of the Cloud and achieve genuinely increased agility, it is essential to transform infrastructure management processes and project organisation.

Indeed, it is not the technologies but the processes that are chiefly responsible for the lack of agility, in particular the separation between the design teams in charge of application development and the production teams responsible for provisioning and managing infrastructure.

Hence, to render projects more agile and shorten cycles, it is necessary to increase the autonomy of project teams and allow them to create and manage the infrastructures of their environments (Development, Integration, Validation).

For real agility, project environments must therefore be managed by development teams, which allows the genuine implementation of a DevOps approach. We must switch from a ‘Request/Response’ mode to a ‘Do it yourself’ mode.

Likewise, prototyping or ephemeral environments can be directly created as part of initiatives managed at the level of business teams (Marketing Campaign, Modelling, etc.), thus promoting innovation.

The distribution of infrastructure management responsibilities across all development teams has a strong impact on how security is implemented. Indeed, the environments of each project must be strongly compartmentalised to prevent one team from impacting another team’s resources, whether by mistake or deliberately.

In concrete terms, this need for strong compartmentalisation leads to the creation of different Cloud accounts per environment and per project, and in particular to the separation of production and non-production accounts, rapidly leading to the creation of tens or even hundreds of Cloud accounts in the case of large organisations.

In the first part of this article, I referred to the need to control and supervise the security measures implemented in Cloud infrastructures, in order to detect and eliminate non-compliance as early as possible and ensure that security and compliance rules are respected.

The large-scale deployment of Cloud environments across hundreds of accounts makes it necessary to automate these controls and apply them in real time.

To achieve this, a Continuous Compliance approach based on the following principles should be used:

  • Implementation of security policies defining authorised actions for each profile: definition of authorisations for each role (developer, integrator, tester, Data Scientist, administrator, etc.) with a least privilege principle.
  • Event-based deployment of control rules activated in real time upon the creation, modification, or deletion of a resource.
  • Periodic analysis of the configurations to detect deviations from the defined rules.
  • Traceability: collection and centralisation of all logs generated by API calls from the Cloud platform.
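As an illustration of the event-based principle above, a control rule can be written as a small handler that receives a resource-change event and returns any violations it finds. This is a minimal sketch: the event shape, resource type, and rule name are hypothetical and not tied to any specific Cloud provider’s API.

```python
# Sketch of an event-based compliance rule (hypothetical event shape).
# The rule fires in real time when a resource is created or modified,
# and reports any violation of the "encryption at rest" policy.

def check_storage_encryption(event):
    """Flag storage resources created or modified without encryption at rest."""
    config = event.get("configuration", {})
    violations = []
    if event.get("resourceType") == "storage" and not config.get("encrypted", False):
        violations.append({
            "resource": event.get("resourceId"),
            "rule": "storage-encryption-required",
            "detail": "encryption at rest is disabled",
        })
    return violations

# Example: an event emitted when a team creates an unencrypted bucket.
event = {
    "resourceType": "storage",
    "resourceId": "project-a-dev-bucket",
    "action": "CreateResource",
    "configuration": {"encrypted": False},
}
alerts = check_storage_encryption(event)
```

In a real platform, the same handler would be subscribed to the provider’s resource-change event stream, so the check runs as soon as the resource appears.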

When a non-compliance is detected, a security alert is raised and transmitted to the project teams and security officers. Two cases may arise: where possible, an automatic corrective action is applied to fix the non-compliance immediately; otherwise, the security alert must be processed manually.

This Continuous Compliance approach has notably been implemented by Société Générale. Christophe Parisel’s excellent article compares the different implementation strategies possible on the AWS and Azure platforms.

The main issues to be addressed when implementing security controls are:

  • Network security
  • Identity and access management
  • Data protection
  • Data leakage prevention
  • Traceability and auditing

Network security:

The micro-segmentation of environments creates a multitude of network bubbles and real complexity in the management of flows and routing.

The architectural planning of the network infrastructure, and in particular the large-scale handling of routing and flow openings between the hundreds of network bubbles, is a highly structuring point that must be studied very early in the overall design of the Cloud platform.
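A quick back-of-the-envelope calculation shows why this must be designed early: connecting every bubble to every other one directly requires a number of peerings that grows quadratically with the number of bubbles, whereas a hub-and-spoke topology through a central transit point grows linearly. The functions below are a simple illustration of that counting argument.

```python
# Counting the connections needed between n network bubbles.

def full_mesh_links(n):
    """Point-to-point peerings to connect n bubbles directly: n(n-1)/2."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n):
    """Attachments when every bubble routes through one central transit hub."""
    return n

# With 200 bubbles, a full mesh needs 19,900 peerings; a hub needs 200.
mesh = full_mesh_links(200)
hub = hub_and_spoke_links(200)
```

The two orders of magnitude of difference explain why large organisations generally converge on a centralised transit design rather than pairwise peerings.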

Identity and access management:

Identity and access management is also a crucial factor in the implementation of security rules and policies. It is important to centralise all identities within a single account or directory and assign each identity one or more profiles, each defining access to one or more Cloud accounts.
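The centralised model can be sketched as a mapping from identities to profiles, where each profile grants a set of actions on a set of Cloud accounts, and access is denied unless some assigned profile explicitly allows it (least privilege). The profile names, accounts, and actions below are illustrative.

```python
# Sketch of centralised identities with profiles granting per-account access.

PROFILES = {
    "developer": {"accounts": {"project-a-dev", "project-a-int"},
                  "actions": {"read", "deploy"}},
    "tester":    {"accounts": {"project-a-val"},
                  "actions": {"read", "run-tests"}},
}

IDENTITIES = {
    "alice": ["developer"],
    "bob":   ["developer", "tester"],
}

def is_allowed(user, account, action):
    """Least-privilege check: allowed only if an assigned profile grants it."""
    return any(
        account in PROFILES[p]["accounts"] and action in PROFILES[p]["actions"]
        for p in IDENTITIES.get(user, [])
    )
```

Because the default answer is “deny”, adding a new project account is a deliberate act of extending a profile rather than an implicit side effect.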

Data protection:

Data protection mechanisms should be defined according to the sensitivity of the data. It is therefore important beforehand to define a classification of the data and, for each data item or document, identify its level of confidentiality. In general, classifications define three or four levels of confidentiality:

  • Level 1 – Public data: no special protection measures are required
  • Level 2 – Internal data: this data must be accessible only to authorised persons
  • Level 3 – Sensitive data: this data must be subject to enhanced protection measures. It is generally encrypted and access to it is systematically traced
  • Level 4 – Secret data: this data is systematically encrypted and accessible only to a very limited number of people within the company. It is usually stored in separate areas with restricted access and enhanced security measures.
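The four levels above can be expressed as a simple table mapping each confidentiality level to the protection measures it requires, which a provisioning pipeline can then enforce. The measure names are illustrative.

```python
# Sketch: mapping confidentiality levels to required protection measures.
from enum import IntEnum

class Confidentiality(IntEnum):
    PUBLIC = 1      # no special protection required
    INTERNAL = 2    # accessible only to authorised persons
    SENSITIVE = 3   # encrypted, access systematically traced
    SECRET = 4      # encrypted, restricted storage, very limited access

REQUIREMENTS = {
    Confidentiality.PUBLIC:    {"authorised_only": False, "encrypt": False, "trace_access": False, "restricted_store": False},
    Confidentiality.INTERNAL:  {"authorised_only": True,  "encrypt": False, "trace_access": False, "restricted_store": False},
    Confidentiality.SENSITIVE: {"authorised_only": True,  "encrypt": True,  "trace_access": True,  "restricted_store": False},
    Confidentiality.SECRET:    {"authorised_only": True,  "encrypt": True,  "trace_access": True,  "restricted_store": True},
}

def protections_for(level):
    """Return the protection measures required for a given classification."""
    return REQUIREMENTS[level]
```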

Data Leakage Prevention (DLP):

Data leakage prevention is generally based on mechanisms that prevent data from leaving its execution context, including blocking the channels that allow data to be sent to public or external networks. However, this is not always possible, especially when using managed PaaS services that require flow openings to external access points. In this case, all outgoing flows should be tracked to detect possible data leaks retrospectively and be able to identify their source.

Traceability and auditing:

Logs produced by the Cloud platform during the creation, modification, deletion, and access of resources on all accounts must be collected and centralised in storage hosted in a dedicated Cloud account. To guarantee the integrity of the logs, this storage must be accessible only in read-only mode from the other accounts. These logs can then be analysed to detect actions or behaviours that do not conform to security rules.
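A typical analysis over the centralised logs is to flag API calls that would weaken traceability itself, such as stopping the audit trail or deleting log storage. The sketch below uses illustrative action names modelled on typical Cloud audit-trail events.

```python
# Sketch: scan centralised API-call logs for actions that would weaken
# traceability itself (action names are illustrative).

def find_suspicious(logs, forbidden_actions=("StopLogging", "DeleteLogStorage")):
    """Return log entries whose action is on the forbidden list."""
    return [entry for entry in logs if entry["action"] in forbidden_actions]

logs = [
    {"action": "CreateBucket", "user": "alice", "account": "project-a-dev"},
    {"action": "StopLogging",  "user": "mallory", "account": "project-b-prod"},
]
flagged = find_suspicious(logs)
```

Because the log store itself is read-only from the other accounts, an attacker cannot erase the very entries that would reveal such an attempt.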

In conclusion, security is a key element in the implementation of Cloud infrastructures. The micro-segmentation of accounts needed to isolate the different projects makes it mandatory to automate control rules and, where possible, corrective actions.

Eric Datei

Cloud & Architecture Leader