In a previous article we covered the basics of the shared security model and noted that the cloud provider is not a watchful nanny overseeing your every step. At most, if it sees you running around with scissors, it will tell you that's a bad idea and get back to reading the newspaper. With that in mind, let's explore the common responsibility boundaries and where you might want to focus your attention.
Let's start with the responsibilities that every cloud provider covers, regardless of the service used. These include the physical core infrastructure: datacenters with power delivery, core networking (both inside each datacenter and between datacenters), storage and compute. Any firmware problem, any CPU vulnerability and any patch required for the switching infrastructure is solely the provider's responsibility, as is adding and provisioning hundreds of physical servers every week. With infrastructures that large and customers that important, providers have no option but to build to the highest redundancy, security and efficiency levels in the industry, and then push further.
Anyone who runs datacenter infrastructure knows how large a task that is. And we are not talking about a setup for a single organization. We're talking about multiple network overlays, networks seamlessly spanning multiple datacenters, and the ability for many customers to build seemingly identical infrastructures in the same datacenters, not only with typical IaaS offerings, but with managed Kubernetes services, managed databases, App Services and Functions. With thousands of customers and hundreds of services, providers must ensure that no single customer can interfere with others or with the core cloud services.
One thing not commonly found in on-premises environments is the management and automation plane: the user portal and APIs used to provision and manage resources. This, too, is operated by the cloud provider and is tightly integrated with identity management, which enables multi-tenancy. It is important to note, however, that while the cloud vendor guarantees the access mechanisms are secure, the cloud management plane is a publicly accessible endpoint, built to make it easy for every customer to manage their resources.
For most new cloud adopters that is not only an easy way to get started, but also a new attack vector. It is up to the customer (or a customer's trusted partner) to ensure that only the appropriate personnel have access to the required resources and that small yet crucial measures like multi-factor authentication are in place. Auditing access rights and keys, especially those of programmatic identities (service principals), is crucial to keeping track of what access has been granted to whom and whether it is still required. Neglect that, and your infrastructure may be taken over by some not-so-friendly folks whose intentions are not in your best interest.
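The kind of periodic access audit described above can be sketched in a few lines. This is a minimal, hypothetical example: the inventory, principal names and field layout are invented for illustration, and in practice you would pull this data from your provider's IAM APIs (for example, a role-assignment listing) rather than hard-code it.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical credential inventory; in a real audit this would come from
# your cloud provider's IAM / role-assignment APIs.
credentials = [
    {"principal": "ci-deploy-sp", "role": "Contributor",
     "last_used": datetime(2024, 1, 10, tzinfo=timezone.utc), "mfa": False},
    {"principal": "alice@example.com", "role": "Reader",
     "last_used": datetime(2024, 6, 1, tzinfo=timezone.utc), "mfa": True},
]

def stale_or_risky(creds, now, max_idle_days=90):
    """Flag credentials unused for too long, and human accounts lacking MFA."""
    findings = []
    for c in creds:
        idle = now - c["last_used"]
        if idle > timedelta(days=max_idle_days):
            findings.append((c["principal"], "unused for %d days" % idle.days))
        # Crude human-vs-service heuristic for the sketch: user accounts
        # look like email addresses, service principals do not.
        if "@" in c["principal"] and not c["mfa"]:
            findings.append((c["principal"], "MFA not enforced"))
    return findings

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
for principal, reason in stale_or_risky(credentials, now):
    print(principal, "->", reason)
```

Running a check like this on a schedule, against live IAM data, is what turns "audit your service principals" from a good intention into a habit.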
With the underlying infrastructure and the management plane taken care of by the cloud provider, we arrive at the border of customer responsibilities. IaaS services are still prevalent in most enterprise organizations, and with them you effectively build your own small datacenter within the provider's platform. That includes a closed virtual network which, unless you choose otherwise, is accessible only to you, plus virtual machines running your chosen operating system. You get the luxury of a world-class datacenter for maybe less than 100€ a month for a small workload. But configuring your infrastructure network, configuring your VMs correctly and patching them is still very much your own or your Managed Cloud Services Provider's (MCSP) task, as is everything you build on top of it: all the frameworks and software you deploy and all the data you store.
But cloud providers want more clientele, and they have not wasted the last 10+ years. There is a whole world beyond IaaS that shifts the balance of responsibilities between provider and customer: PaaS and SaaS services significantly reduce the effort needed to start up in the cloud or modernize your application.
A common example of a PaaS service is a managed database. It lets you provision a database (for example, Microsoft SQL Server) without the tedious process of setting up host VMs, creating an Always On cluster and then handling the usual maintenance of patching all the software. All of this is taken care of by the cloud provider. And we're willing to bet you don't even want to try going back to managing your Exchange servers instead of just creating new user accounts in O365.
By using higher and higher levels of service from the public cloud provider, more and more responsibilities are lifted from you. However, if you build and host solutions in the cloud and serve customers there, you are still responsible for your customers' data in the end, whatever level of service you use. Did you write a secure enough application? Did you make sure the traffic is encrypted end-to-end with a secure enough cipher suite? Does your permission model grant only the minimum required access rights? These and other questions remain. On top of that, in the real world a company that has been in the market for a while will probably be using IaaS, PaaS and SaaS services from one or more cloud providers, so there will still be a fair amount of maintenance on the customer's end. At least during the transition period, it is crucial to plan, align and execute things right within your organization or with the help of your MCSP.
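The "minimum required access rights" question lends itself to a simple mechanical check: compare what a principal actually holds against what its role needs. The permission names below are invented for illustration; the point is the set difference, not the vocabulary.

```python
def excess_permissions(granted, required):
    """Return the permissions a principal holds beyond what its role needs."""
    return sorted(set(granted) - set(required))

# Hypothetical permission sets for an application identity.
granted = {"storage.read", "storage.write", "storage.delete", "keyvault.read"}
required = {"storage.read", "storage.write"}

print(excess_permissions(granted, required))
# -> ['keyvault.read', 'storage.delete']
```

Anything the check reports is a candidate for removal; an empty result is what a least-privilege model looks like in practice.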
We often say that the cloud provider takes care of the physical layer of the server infrastructure. And, to be frank, that really is the case most of the time. However, due to the shared nature of the cloud, a couple of interesting scenarios emerge. One of them: is my data actually secure with the cloud provider?
And we're not talking about data availability; with that much redundancy in the provider's infrastructure and adequate planning, that is usually not the issue. What about someone actually trying to access my data? Cloud providers encrypt as much client data as they can by default. However, they are also usually the ones managing the keys, which keeps things simple and, for the most part, secure for the customer. If you require the highest possible control over your data, the leading cloud vendors let you bring your own keys to encrypt the data in your VMs or database servers and store those keys in HSMs.
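The bring-your-own-key schemes mentioned above typically follow an envelope-encryption pattern: each object is encrypted with a fresh data key, and only the data key is wrapped with the customer-held master key. The sketch below shows the structure only. The toy keystream is a stand-in for a real cipher such as AES and is deliberately not secure; in a real BYOK setup the master key never leaves the HSM.

```python
import hashlib
import secrets

def _toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Illustrative XOR keystream derived from the key. NOT real cryptography;
    # it stands in for AES purely to make the sketch self-contained.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_with_customer_key(master_key: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)                 # fresh key per object
    ciphertext = _toy_stream_cipher(data_key, plaintext)
    wrapped_key = _toy_stream_cipher(master_key, data_key)  # wrap leaves, master stays
    return wrapped_key, ciphertext

def decrypt_with_customer_key(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = _toy_stream_cipher(master_key, wrapped_key)  # unwrap the data key
    return _toy_stream_cipher(data_key, ciphertext)

master = secrets.token_bytes(32)   # in a BYOK setup this would live in your HSM
wrapped, ct = encrypt_with_customer_key(master, b"customer record")
assert decrypt_with_customer_key(master, wrapped, ct) == b"customer record"
```

The payoff of the pattern is control: revoking or rotating the master key invalidates every wrapped data key at once, without re-encrypting the bulk data with the old key still in the provider's hands.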
Another threat that went from mainly theoretical to quite real in recent years is hardware vulnerabilities, which threaten to break all the isolation a hypervisor can provide. On-premises, that is only half as bad. In a cloud environment, where a single physical server might host a dozen different clients, it is dangerous on a whole new level.
Cloud providers take these threats very seriously and respond with the highest alert, and such threats are not that common. However, they still exist, and the vendors are well aware of it. That is the main reason they now offer confidential computing platforms providing hardware-based, isolated trusted execution environments built on secure enclaves for CPU and memory, minimizing the risk of interference. If you don't know whether this is something you require, ask your MCSP to look at your requirements.
1. Ensure your identity management is top notch. You might have gotten by with a less-than-ideal permission model in your limited, isolated environment, but it will most likely have to improve once you move to the public cloud, as more management possibilities leave more room for user error.
2. If you're still treating your internal network as a safe zone, stop (even if you're not in the cloud). Privilege escalation and lateral movement are very common ways of getting compromised. A zero-trust security model should be embraced for more robust security: a strong identity-based access model and auditing are key to not becoming a victim of cybercrime.
3. Leverage the security tools that are provided by the cloud providers. The higher level services that the cloud providers offer might not be as familiar as the on-premises environment that you have built and maintained for years, and your security tools might need to evolve to be compatible as well. However, during the past few years the providers started offering many (often free) security insight tools, which can provide a lot of useful data out of the box.
4. Remember that you are still responsible for your customer data, both in and outside the cloud. Encryption at rest, in transit and in use can and should be applied to your data. As we mentioned above, it works in a cloud environment as well as in your own datacenter.
5. Invest in your IT team's education and continuous learning so they can properly manage the public cloud and its security, and/or establish a trusted partnership with a Managed Cloud Services Provider (like BTT Cloud) to help you not only with the initial move to the public cloud, but also with modernizing and innovating at the right pace. Early mistakes made without proper planning can cost a lot in the long run.
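The zero-trust idea in point 2 can be made concrete with a tiny sketch: every request is authenticated and authorized on its own merits, and the source network grants nothing by itself. All names, tokens and permissions below are hypothetical stand-ins for a real identity provider and policy store.

```python
# Hypothetical stand-ins for a real token validator and a policy store.
VALID_TOKENS = {"tok-alice": "alice"}
PERMISSIONS = {("alice", "billing-db"): {"read"}}

def authorize(token, resource, action, source_network="internal"):
    """Zero-trust gate: authenticate and authorize every single request.

    Note that source_network is accepted but deliberately never consulted --
    being "inside" the perimeter confers no access on its own.
    """
    user = VALID_TOKENS.get(token)          # authenticate this request
    if user is None:
        return False                        # no valid identity, no access
    return action in PERMISSIONS.get((user, resource), set())

assert authorize("tok-alice", "billing-db", "read") is True
assert authorize("tok-alice", "billing-db", "delete") is False
# An unauthenticated caller on the "internal" network is still rejected:
assert authorize(None, "billing-db", "read", source_network="internal") is False
```

Contrast this with a perimeter model, where the last call would succeed simply because the request originated inside the network; that is exactly the assumption lateral movement exploits.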