Common confusion: security as a service vs. usage responsibility
A common misconception in cloud adoption is believing that running on AWS, Azure, or GCP means data, applications, and configurations are automatically protected. Companies assume that firewall, identity, encryption, or monitoring controls are the provider’s sole responsibility. This distorts the actual shared responsibility model and leaves critical gaps.
In a retail company with production workloads on AWS, the development team assumed IAM was “cloud stuff.” They implemented neither granular access policies nor credential rotation. The result: a developer shared a key with a third party over Slack, with no controls in place, which led to unauthorized access to an S3 bucket containing sensitive data. AWS is not responsible for controlling which users are created or how credentials are managed within your tenant.
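The missing rotation control above is cheap to automate. The following is a minimal sketch, assuming key metadata shaped like the entries returned by `aws iam list-access-keys` (the field names are AWS’s; the 90-day threshold is an assumption you should align with your own policy):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE_DAYS = 90  # assumed rotation threshold; adjust to your policy


def stale_keys(key_metadata, now=None):
    """Return the IDs of active access keys older than the rotation threshold.

    `key_metadata` is a list of dicts shaped like the AccessKeyMetadata
    entries returned by `aws iam list-access-keys`: each has an
    "AccessKeyId", a "Status", and a timezone-aware "CreateDate".
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=MAX_KEY_AGE_DAYS)
    return [
        k["AccessKeyId"]
        for k in key_metadata
        if k["Status"] == "Active" and k["CreateDate"] < cutoff
    ]
```

Run against every IAM user on a schedule, this turns “we should rotate keys” into an alert with a key ID attached.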
Security by the provider is limited to the infrastructure they operate: physical data centers, backbone network, hardware, and lower layers of the stack. What you create, deploy, and expose is your direct responsibility. Failing to understand this distinction is a common pattern in companies that migrate without sufficient cloud maturity.
Default configurations and their false sense of security
Many cloud services ship with defaults that are functional but not necessarily secure. Some examples: S3 buckets without default encryption enabled, overly permissive policies produced by “policy generators,” or VMs with SSH ports open to the internet. The belief that “everything will be secure from the start” leads to negligent operation.
An insurance company in LATAM deployed an architecture in GCP using Cloud Storage and Compute Engine. They trusted the initial configurations, assuming the storage buckets were protected. After a compliance team audit, they found three buckets with “allUsers:READER” in their ACLs. The leak did not occur due to a GCP failure, but because no one reviewed or restricted the inherited access.
- Files shared publicly without prior warning
- Services exposed without reviewing firewall rules
- Resources with admin roles without justification
All of these points trace back to a single root cause: assuming that what the provider offers comes pre-configured with your security needs in mind. It does not.
Frequent anti-pattern: leaving security “for later” in cloud projects
A widespread practice in mid-sized companies is rushing cloud projects and assuming security can be “fine-tuned later.” Under business pressure, the MVP is launched while omitting basic safeguards such as instance hardening, strict access control, account segregation, and active log monitoring. Unsurprisingly, these environments become vulnerable surfaces with no early detection.
In a digital bank expanding regionally, a microservices architecture was migrated to containers on AWS. In the initial phase, GuardDuty was not deployed, CloudTrail was not enabled in all regions, and hardcoded keys were used in environment variables. The result: persistent suspicious activity went undetected until network metrics raised alerts for unusual traffic.
The mistake was assuming the provider would “notify” of such anomalies. Without active configuration, there are no alerts, no tracing, and worse: the necessary traceability for post-incident investigation is lost.
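The hardcoded-key part of this failure is easy to detect locally. The sketch below relies only on the documented fact that AWS long-lived access key IDs start with “AKIA” and temporary (STS) key IDs with “ASIA”; everything else is illustrative:

```python
import os
import re

# AWS long-lived access key IDs begin with "AKIA"; temporary STS key IDs
# begin with "ASIA". Either appearing in an environment variable suggests
# a credential was hardcoded instead of injected by a role or a secrets
# manager.
AWS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")


def suspicious_env_vars(environ=None):
    """Return the names of environment variables whose values look like
    AWS access key IDs."""
    environ = environ if environ is not None else os.environ
    return sorted(
        name for name, value in environ.items() if AWS_KEY_RE.search(value)
    )
```

The same regex dropped into a pre-commit hook catches keys before they reach source code, which is where they tend to end up next.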
How to do it in practice: secure what you deploy
Breaking this mindset requires changing the starting point: nothing you configure in your cloud account is secured by default for your use cases. The provider offers the tools, but activation, configuration, and context are your team’s responsibility.
- Create explicit IAM policies: Define and assign least privilege roles, avoiding “*” in actions and resources. Perform quarterly reviews.
- Configure monitoring actively: Enable CloudTrail, GuardDuty, or Cloud Logging depending on the platform. Integrate with your SIEM.
- Validate exposure: Use tools like AWS Trusted Advisor or scripts with AWS CLI to detect unwanted public resources.
- Review shared permissions: Identify where public links, unrotated keys, or credentials in source code have been used.
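The first bullet above can be partially automated. A minimal linter, assuming an IAM policy document already parsed into a Python dict, that flags bare "*" (and service-wide "service:*") in Allow statements:

```python
def wildcard_findings(policy):
    """Flag Allow statements whose Action or Resource uses a wildcard.

    `policy` is a parsed IAM policy document (a dict with a "Statement"
    list), e.g. as produced by a "policy generator". Returns a list of
    (statement_index, field) pairs that violate least privilege by
    using a bare "*" or a service-wide "service:*".
    """
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            if any(v == "*" or v.endswith(":*") for v in values):
                findings.append((i, field))
    return findings
```

Wiring this into the quarterly review turns “avoid wildcards” from a guideline into a failing check.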
In internal audits, add controls such as mandatory tagging for critical resources, alerts when resources are created without encryption, and SCPs in AWS Organizations to restrict dangerous actions (e.g., denying iam:CreateAccessKey outside the secure environment).
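As a sketch of such an SCP, the statement below denies iam:CreateAccessKey to requests from outside a trusted network; the Sid and the CIDR are placeholders to adapt to your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessKeyCreationOutsideTrustedNet",
      "Effect": "Deny",
      "Action": "iam:CreateAccessKey",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

Because SCPs are guardrails rather than grants, this denial applies even to account administrators, which is exactly the blast-radius limit the recommendation above is after.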
Recommendations for corporate environments
There are no shortcuts to cloud security. The idea that “the provider takes care of it” disables your internal security operational muscle. In real projects, this results in environments with no monitoring, excessive permissions, and poorly protected sensitive data.
To operate securely in the cloud, teams must assume that everything created, configured, and exposed is the organization’s responsibility. The tools are there, but activating, fine-tuning, and maintaining them is the client’s job. Assign clear roles, audit continuously, limit blast radius, and do not delegate security by default.
The shared responsibility model is not an equal split. It’s a contract in which the provider takes care of their stack, and you take care of yours. What you put in the cloud will only be as secure as you design it to be.