Cloud adoption continues to increase as organisations either take their first steps into the cloud or progress their IT strategies, whether that's a full cloud migration, multi-cloud, or a hybrid architecture.
One workload that benefits greatly from the cloud is the backup repository. Using the cloud helps us meet several objectives of the 3-2-1-1-0 minimum backup strategy. In case you aren't familiar, the 3-2-1-1-0 strategy is:
- 3 copies of your data,
- across 2 different backup media types,
- 1 of which is off-site from the production environment,
- 1 of which, at minimum, is offline or "immutable",
- with 0 backup validation errors.
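The rule above is simple enough to check mechanically. Below is a minimal sketch of such a compliance check; the inventory structure and field names are hypothetical, purely for illustration, not any vendor's actual schema.

```python
# Hypothetical 3-2-1-1-0 compliance check over a backup-copy inventory.
# Each copy is described as: {"media": str, "offsite": bool, "immutable": bool}

def check_3_2_1_1_0(copies, validation_errors):
    """Return a dict mapping each 3-2-1-1-0 objective to pass/fail."""
    return {
        "3_copies": len(copies) >= 3,
        "2_media_types": len({c["media"] for c in copies}) >= 2,
        "1_offsite": any(c["offsite"] for c in copies),
        "1_immutable": any(c["immutable"] for c in copies),
        "0_errors": validation_errors == 0,
    }

inventory = [
    {"media": "disk", "offsite": False, "immutable": False},          # primary backup
    {"media": "tape", "offsite": False, "immutable": True},           # offline tape
    {"media": "object-storage", "offsite": True, "immutable": True},  # cloud copy
]
result = check_3_2_1_1_0(inventory, validation_errors=0)
print(all(result.values()))  # a fully compliant posture prints True
```

Dropping any one copy, media type, or immutability flag flips the corresponding objective to False, which is exactly the kind of drift a scheduled check should catch.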
One threat that the 3-2-1-1-0 strategy doesn’t cover explicitly is the concept of security boundaries. It is entirely possible to achieve all 3-2-1-1-0 objectives purely within your own environment, or exclusively within a single cloud, but there are problems with either approach, as we’ll look into now.
Internally Managed Only, aka the Traditional Strategy
Most organisations, by default, manage all backups internally, the theory being that the relevant teams handle backup policies, prioritisation, recovery testing and the overall backup lifecycle. This is great on paper, until you encounter an insider threat. There are numerous stories of organisations being taken down or knocked offline by the actions of a disgruntled employee.
Whilst it's possible to sue the (presumably former) employee, it's often too late and the damage is already done. Most forms of data resilience within an internally managed environment still rely on trusting staff, whether that's ensuring that offline backups such as tape actually run, or ensuring no privileges are abused, such as wrongfully utilising physical access to a hardened repository.
Externally Managed, aka the Shared Responsibility Strategy
With the continued growth of the cloud sector, organisations are starting to push the management responsibilities of the infrastructure, underlying foundational components and lifecycle processes out to external organisations, often public cloud vendors.
This provides the benefit of a trust silo: your applications and workloads may function across different platforms, but the underlying resources powering them don't need to trust each other, and quite rightly won't.
However, when data leaves your private infrastructure’s jurisdiction and exists within a shared space, this introduces risk.
Securing the Front Door, but Ignoring the Back Door
When we think of security and the cloud, we tend to think of conditional access, Multi-Factor Authentication (MFA) and advanced analytics; beyond that, we start to assume it's the cloud provider's responsibility. But the cloud is a shared responsibility model, and you are ultimately responsible for your data.
Don't believe me? Check your cloud vendor's documentation; here's Microsoft's and Amazon's, for example.
However, once we move beyond the controls we can see and configure, there's very little insight into the inner workings of these cloud platforms. We have to trust that everything is secure, yet this flies in the face of the advice we're given for our own networks: to assume they've already been compromised.
Cloud Security Breaches, Fact not Theory
I've said for a long time that it was a matter of when a cloud provider would be compromised, not if. Over the past year in particular, we've seen both Microsoft and Amazon have multiple vulnerabilities identified (and thankfully resolved prior to public disclosure). But this does raise the question: how severe could these really be?
Taking a break from discussing cloud backups for a moment, let’s look at some of the vulnerabilities disclosed in the last 12 months.
Microsoft Azure – Azure Cosmos DB Jupyter Notebook Feature – “ChaosDB”
Starting off with a smaller threat: in August 2021, a security researcher discovered a vulnerability in the Jupyter Notebook feature of Azure Cosmos DB that allowed an attacker to obtain another customer's primary read-write key and, with it, full access to that customer's resources.
Immediately this is a boundary violation and a considerable cause for concern. The damage was limited in scope to those adopting the feature, but it shows you never know how these platforms will be exploited, until they are. If you'd like to read more about Microsoft's incident review and response, you can see their official blog post here.
The article, however, downplays some important facts: this vulnerability was first introduced in 2019 when the feature was added, and the feature was turned on by default for all Cosmos DB accounts in February 2021, meaning over 3,300 customers were impacted. To keep this blog post balanced, you can read the write-up by Wiz, the security researchers who discovered the vulnerability, here.
Microsoft Azure OMI Virtual Machine Management Extension – OMIGOD
Microsoft sometimes deploy virtual machine extensions to provide integrations between their Azure platform and the virtual machine workloads. In September 2021, Wiz discovered that Open Management Infrastructure (OMI), a management tool deployed on Azure Linux virtual machines with minimal public documentation and a noticeable lack of awareness about its deployment, was vulnerable to four exploits.
Why was OMI such a prime target? The OMI agent runs as root within Linux, and any user can communicate with it via a UNIX socket or, when configured to allow external access, an HTTP API.
Three of the four vulnerabilities discovered in OMI were privilege escalations, dangerous in their own right but requiring a mechanism to interact with OMI to be useful. The fourth allowed remote code execution, providing the way to trigger the other three.
By default, OMI isn't externally accessible, but it was discovered that some Azure services, such as Configuration Management, expose an HTTPS port for interacting with OMI. If you thought it couldn't get much worse, the exploit is painfully simple: with a single packet, stripped of its authentication header, it was possible to become root on the Linux machine. You can find out more about these vulnerabilities by reading the blog post from Wiz here.
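Because the dangerous configuration was an *exposed* OMI port, a first defensive step was simply finding affected machines. Here's a hedged sketch of that triage: the port numbers (5985, 5986 and 1270) are the OMI management ports named in the public OMIGOD advisories, but the scan-result structure is hypothetical, standing in for whatever your scanner actually emits.

```python
# Flag hosts whose externally open ports include the OMI management ports
# called out in the OMIGOD advisories. The scan_results structure is a
# hypothetical stand-in for real scanner output.

OMI_PORTS = {1270, 5985, 5986}

def exposed_omi_hosts(scan_results):
    """scan_results maps hostname -> set of externally open TCP ports."""
    return sorted(
        host for host, ports in scan_results.items()
        if ports & OMI_PORTS  # any overlap with the OMI ports is a hit
    )

scan = {
    "vm-web-01": {22, 443},
    "vm-mgmt-02": {22, 5986},   # OMI HTTPS management port exposed
    "vm-db-03": {1270, 3306},   # legacy OMI port exposed
}
print(exposed_omi_hosts(scan))  # ['vm-db-03', 'vm-mgmt-02']
```

The point of the sketch is that this class of exposure is trivially enumerable, which makes the lack of awareness around OMI's deployment all the more uncomfortable.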
Microsoft's response didn't provide reassurance either, with customers initially told they would have to patch OMI themselves, though the statement was later amended to say the vulnerability would be patched automatically if auto-update was enabled for the VM. You can read Microsoft's official MSRC response here.
AWS Glue Vulnerability – “SuperGlue”
AWS didn't have a great start to 2022, with not one but two vulnerabilities disclosed. We'll get to the second shortly, but first let's look at the AWS Glue vulnerability.
AWS Glue is a serverless data integration service offered by AWS. By design, AWS Glue has access to different resources within AWS tenants when deployed, as one of its primary functions is to crawl your data sources. A security firm called Orca Security discovered a feature within AWS Glue that could be exploited to obtain credentials for a role within the AWS service's own account. Via this, Orca Security gained full control of the service API and could even escalate their privileges further, ending up with full administrative privileges over all resources for the service in the region they were testing. You can read Orca Security's full write-up here.
In summary, if your AWS tenant used AWS Glue, you could have been affected had this been exploited maliciously. AWS took the information and reviewed all of the audit logs they had (which, impressively, go all the way back to the launch of the service) and confirmed it hadn't been exploited. AWS prepared a short statement, which is available here.
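That audit-log review is something tenants can approximate for their own accounts. The sketch below filters CloudTrail-style event records for Glue API calls made by principals outside an expected allow-list; the field names (`eventSource`, `eventName`, `userIdentity`) mirror CloudTrail's JSON, but the events, ARNs and allow-list are fabricated for illustration.

```python
# Hedged sketch: review CloudTrail-style events for Glue API calls made by
# unexpected principals. Field names mirror CloudTrail's JSON schema; the
# events, role ARNs and allow-list below are fabricated examples.

EXPECTED_PRINCIPALS = {"arn:aws:iam::111122223333:role/etl-pipeline"}

def suspicious_glue_calls(events):
    """Return Glue API events whose caller isn't on the allow-list."""
    return [
        e for e in events
        if e["eventSource"] == "glue.amazonaws.com"
        and e["userIdentity"]["arn"] not in EXPECTED_PRINCIPALS
    ]

events = [
    {"eventSource": "glue.amazonaws.com", "eventName": "GetDatabases",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/etl-pipeline"}},
    {"eventSource": "glue.amazonaws.com", "eventName": "UpdateDevEndpoint",
     "userIdentity": {"arn": "arn:aws:iam::999988887777:role/unknown"}},
]
for e in suspicious_glue_calls(events):
    print(e["eventName"])  # prints UpdateDevEndpoint
```

It's a crude filter, but it captures the shape of the check AWS described: known service, known callers, and anything outside that set gets a human's attention.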
AWS CloudFormation Vulnerability – “BreakingFormation”
Arguably saving the biggest of the vulnerabilities until last: on the same day as SuperGlue, Orca Security also published their discovery of an AWS CloudFormation-based vulnerability.
Orca Security discovered a zero-day that, by triggering an XXE (XML External Entity) vulnerability, allowed them to compromise a server within CloudFormation. Once the server was compromised, they found sensitive data on it (thankfully redacted within their blog post) and discovered they were able to run as an AWS internal service. Based on the information discovered, Orca Security initially believed they could use this vulnerability to bypass tenant boundaries and gain access to ANY resource in AWS. Orca Security's blog post provides further information and can be read here.
Once again, I'll give credit where it's due: AWS developed a fix in under 25 hours, which was rolled out across all AWS regions within 6 days. Interestingly, AWS claim the vulnerability could not have been used to gain access to resources or customer data; as Orca Security never attempted to interrupt the AWS platform to test this theory, we'll have to take AWS' word for it. AWS's statement including this counterclaim is available here.
As a closing comment on this vulnerability, even if the exploit itself did stop short of having access to everything, the compromise of the host could've been a truly crucial foothold into AWS's inner infrastructure, one that could then have been leveraged as part of a further attack chain.
Should I be avoiding the cloud?
This isn't a call to avoid the cloud. It may have its problems, but the speed of resolution and thoroughness of the responses highlight that the cloud is often better architected than most traditional environments. It is, however, a reminder that utilising the cloud is not without risk, and certainly not without responsibility.
The two takeaways from this article should be:
- Avoid utilising excessive services, as this increases your vulnerability footprint, and when you do implement them, be sure to align with the best practices available.
- Take additional steps to secure your data beyond relying on the cloud provider's solutions. For example, if you're using Veeam to back up to the cloud, encrypt your backups via Veeam; this way, a compromise of the tenant doesn't mean a compromise of your data.
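The principle behind that second takeaway is keeping a secret the cloud provider never sees. Veeam handles the actual backup encryption itself (configured per job), but the trust boundary can be illustrated with a standard-library sketch: a key held outside the cloud seals a backup manifest, so tampering inside a compromised tenant is detectable at restore time. The manifest format and key material here are entirely hypothetical.

```python
import hashlib
import hmac
import json

# Illustration of the trust boundary only: an HMAC key held outside the
# cloud seals a (hypothetical) backup manifest, so provider-side tampering
# is detectable. Real confidentiality comes from the backup software's own
# encryption (e.g. Veeam's job-level encryption), not from this sketch.

LOCAL_KEY = b"kept-in-your-own-kms-or-hsm"  # hypothetical key material

def seal(manifest: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical manifest encoding."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(LOCAL_KEY, payload, hashlib.sha256).hexdigest()

def verify(manifest: dict, signature: str) -> bool:
    """Constant-time check that the manifest still matches its seal."""
    return hmac.compare_digest(seal(manifest), signature)

manifest = {"job": "nightly-full", "chunks": 512}
sig = seal(manifest)          # stored alongside the backup, key stays local
assert verify(manifest, sig)  # untouched manifest verifies
manifest["chunks"] = 511      # simulate provider-side tampering
print(verify(manifest, sig))  # prints False
```

Because the key never leaves your control, an attacker who fully compromises the tenant can delete or corrupt the copy, but cannot forge a manifest that verifies, which is exactly the separation of trust this article argues for.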
No approach is without risk. The whole purpose of a backup is to survive and recover from incidents or accidents; by factoring platform risks into our backup posture, we can be better prepared for the continually evolving threats of the IT landscape.