Serverless is often sold internally as “less attack surface” because there is no need to patch operating systems or operate machines. That is partially true, but it often leads to a dangerous consequence: assuming risk disappears. In practice, what changes is the type of exposure and the point where control concentrates: identities, events, dependencies, secrets, and control plane configuration.
In companies under pressure to deliver, what repeats most is not a sophisticated exploit, but “temporary” configurations that stay forever: functions with broad permissions to unblock a sprint, open triggers “just for testing”, secrets pasted into environment variables, and NPM/PyPI packages without governance. That is the breeding ground for the false sense of security.
Myth: “I don’t manage servers, so there is no significant risk”
Removing servers reduces one class of problems (host hardening, OS patches, agents, etc.), but risk shifts to what you do control: IAM/roles, invocation configuration, network, logs, dependencies, and secrets. In serverless, an authorization error or an exposed secret often has immediate impact because the function is already connected to critical data and services.
In production a pattern appears: because “it’s just a function”, teams relax the rigor they would apply to a server. A typical result: a Lambda with a role that includes read/write permissions over multiple buckets and tables “because several API routes need it”. When that function is compromised (by a dependency or a logic flaw), the attacker does not need to move laterally over the network: lateral movement happens via cloud APIs, using your permissions.
Real consequence in a company: the incident does not show up as a service outage, but as unauthorized data access, creation of resources for persistence, or silent information extraction. And the postmortem becomes harder because control is distributed between managed service, permissions, and events.
Excessive permissions and role abuse: the modern “root” of serverless
Most effective compromises in serverless do not start with RCE; they start with authorization that is too broad. In AWS, the Lambda execution role; in Azure, the Function App managed identity; in GCP, the service account associated with the Cloud Function. If that identity can read secrets, enumerate resources, or write to storage, the attacker already has a clear pivot path.
A common mistake in AWS is allowing broad actions for convenience: "Action": "s3:*" on "Resource": "*", or secretsmanager:GetSecretValue permissions for “any secret” because the function needs one. In Azure, assigning Contributor on the resource group to the managed identity to avoid permission tickets. In GCP, using a service account with Editor for speed. In all three cases, the blast radius stops being “one function” and becomes “a large portion of the environment”.
How to pivot from a compromised function: if the attacker manages to execute code (through a vulnerable dependency, insecure deserialization, or a logic path that evaluates inputs), the next step is usually to call the control plane with the credentials already present in the runtime. For example: list secrets, read configurations, download data from storage, or even modify the function itself to persist (upload new code, change variables, add triggers). This is not theory: it is the natural path when the function identity is too powerful.
- AWS: with lambda:UpdateFunctionConfiguration or lambda:UpdateFunctionCode permissions, the attacker can rewrite the function for persistence; with iam:PassRole, they can launch resources with more privileged roles.
In a company this translates into an incident that is “fixed” by redeploying, but reappears because persistence remained in the configuration itself or in delegated permissions. The investigation often finds that the function role had more permissions than the team believed.
- Azure: an identity with Contributor can modify app settings, connections, and deployments; if it also has access to Key Vault, the jump to other systems (DBs, queues, internal APIs) is direct.
The operational lesson is that a function role/identity must be treated as a high-value credential, even if the runtime is managed.
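One way to operationalize that lesson is to scan execution-role policies for the pivot-enabling actions mentioned above before they reach production. The sketch below is illustrative, not exhaustive: the denylist is an assumption to be extended per environment, and it works on a plain IAM policy document (a Python dict) rather than calling any cloud API.

```python
# Actions that enable persistence or privilege escalation when granted
# to a function identity (illustrative denylist; extend per environment).
PIVOT_ACTIONS = {
    "lambda:UpdateFunctionCode",
    "lambda:UpdateFunctionConfiguration",
    "iam:PassRole",
}

def risky_statements(policy):
    """Return the Allow statements that grant any pivot-enabling action."""
    hits = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # IAM allows a single action as a string
        if PIVOT_ACTIONS & set(actions):
            hits.append(stmt)
    return hits

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"},
        {"Effect": "Allow", "Action": "iam:PassRole", "Resource": "*"},
    ],
}
print(len(risky_statements(policy)))  # 1
```

A check like this fits naturally as a CI gate on infrastructure-as-code, before the role ever exists.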
Exposed endpoints and event injection: when the trigger is the vector
Serverless lives on events: HTTP, queues, topics, storage, schedulers. The false security appears when it is assumed that “if it’s behind API Gateway” or “if it’s a managed trigger” it is already protected. In practice, the vector is usually the poorly authenticated or poorly validated trigger, not the infrastructure.
Common example in AWS: a Lambda exposed via API Gateway with authorization set to NONE because “we’ll add Cognito later”. In Azure: Functions with Authorization level set to Anonymous or shared keys leaked in repositories or pipelines. In GCP: Cloud Functions with public invocation enabled for convenience. That “temporary” endpoint ends up indexed, scanned, or shared outside the perimeter and becomes a stable entry point.
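Those “temporary” public endpoints are easy to flag automatically. As a minimal sketch, assuming you have exported the route list from API Gateway (the field names here are simplified for illustration):

```python
# Sketch: flag publicly invocable routes in an exported route list.
# Field names are simplified; adapt to the actual export format.
def find_open_routes(routes):
    """Return paths whose authorization type is NONE, i.e. public."""
    return [r["path"] for r in routes
            if r.get("authorizationType", "NONE") == "NONE"]

routes = [
    {"path": "/orders", "authorizationType": "AWS_IAM"},
    {"path": "/debug", "authorizationType": "NONE"},  # the "temporary" endpoint
    {"path": "/health"},                              # no authorizer configured at all
]
print(find_open_routes(routes))  # ['/debug', '/health']
```

Note that a route with no authorizer configured at all is treated the same as an explicit NONE: absence of a control is still exposure.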
- Event injection: when a function processes messages from a queue/topic without validating their origin or schema, an attacker who manages to publish to that channel (due to misassigned permissions or external integrations) can inject malicious events.
In production it shows up as jobs that “suddenly” execute unexpected paths: payloads that trigger retries, inflate costs, force extreme parsing, or cause internal APIs to be called with manipulated parameters. The impact is not only security: also availability and billing.
- Typical consequence: a function that assumes the event comes from a “trusted” producer and uses fields from the message to build S3/Blob/GCS paths or database queries; an injected event turns that into exfiltration or deletion if there are also broad permissions.
The corporate pattern that worsens this is the lack of event contracts (versioned schemas, strict validation, publisher control) and the tendency to reuse the same queue/topic for multiple purposes “to save”.
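A minimal event contract can be enforced in a few lines at the top of the handler. The sketch below assumes a JSON message with a versioned schema; the field names, producer list, and version string are hypothetical placeholders:

```python
import json

# Hypothetical contract: allowed producers, schema version, required fields.
TRUSTED_SOURCES = {"orders-service"}
EXPECTED_VERSION = "1.0"
REQUIRED_FIELDS = {"source", "version", "order_id"}

def validate_event(raw_message):
    """Reject events that do not match the expected contract
    before any field is used to build paths or queries."""
    event = json.loads(raw_message)
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if event["version"] != EXPECTED_VERSION:
        raise ValueError("unsupported schema version")
    if event["source"] not in TRUSTED_SOURCES:
        raise ValueError("untrusted producer")
    # Never build storage paths or queries directly from raw fields:
    if not str(event["order_id"]).isalnum():
        raise ValueError("invalid order_id")
    return event

msg = json.dumps({"source": "orders-service", "version": "1.0", "order_id": "A123"})
event = validate_event(msg)  # passes; a tampered message raises ValueError
```

In real deployments this belongs in a shared library (or a schema registry with versioned schemas), so every consumer of the queue enforces the same contract.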
Vulnerable dependencies and poorly managed secrets: the silent hit
In serverless, the deployment cycle is often fast and automated, but that does not guarantee what goes into the package is controlled. Dependencies with known vulnerabilities, typosquatting, or abandoned packages are a frequent way to gain execution inside the runtime. The problem is amplified when “minimum viable” is packaged without an SBOM, without version pinning, and without effective scanning in CI.
The other side is secret management. It is common to see credentials in environment variables, in configuration files inside the ZIP, or injected by pipelines without controls. Even when a manager is used (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager), the mistake is made of granting read access to “all secrets in the project” because “that way it won’t fail at runtime”. Once inside the function, extracting secrets is often trivial.
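One pattern that avoids both problems (secrets in environment variables and “read everything” access) is to fetch a single, narrowly scoped secret lazily at runtime and cache it for the lifetime of the execution environment. A sketch, with the client injected so it can be tested without AWS (the secret ARN is illustrative):

```python
import json

class SecretCache:
    """Fetch one narrowly scoped secret lazily and cache it for the
    lifetime of the execution environment (warm invocations reuse it)."""

    def __init__(self, client, secret_id):
        self._client = client        # e.g. boto3.client("secretsmanager")
        self._secret_id = secret_id  # one specific ARN, not "all secrets"
        self._value = None

    def get(self):
        if self._value is None:
            resp = self._client.get_secret_value(SecretId=self._secret_id)
            self._value = json.loads(resp["SecretString"])
        return self._value
```

The role then only needs GetSecretValue on that one ARN, and rotation does not require redeploying the function, only restarting or expiring the cache.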
How to do it in practice: configure least-privilege permissions and validate that they are actually enforced with concrete control plane reviews. For example, in AWS, avoid wildcard policies and restrict resources; an execution policy should look more like this than like a generalized *:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue"],
"Resource": ["arn:aws:secretsmanager:REGION:ACCOUNT:secret:prod/app/db-*" ]
},
{
"Effect": "Allow",
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::mi-bucket-prod/prefix-especifico/*"]
}
]
}
Then validate there are no unexpected privileges by reviewing the role in IAM (attached and inline policies), and using permission analysis tools (for example, IAM Access Analyzer) to detect unused actions or escalation paths such as iam:PassRole. In Azure and GCP, the operational equivalent is to review RBAC/role assignments to the managed identity or service account, and confirm that the secret is scoped by resource and not by the whole project.
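Part of that review can be automated: detecting wildcard grants is a mechanical check over the policy document. A minimal sketch, run here against policies as Python dicts (in practice you would feed it the output of the IAM APIs or your IaC templates):

```python
def find_wildcards(policy):
    """Flag Allow statements that grant any action on a service
    ("service:*" or "*") or apply to any resource ("*")."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        for action in actions:
            if action == "*" or action.endswith(":*"):
                findings.append(f"broad action: {action}")
        if "*" in resources:
            findings.append("wildcard resource")
    return findings

bad_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ]
}
print(find_wildcards(bad_policy))  # ['broad action: s3:*', 'wildcard resource']
```

A scoped policy like the example above produces no findings; the convenience policy produces two. This does not replace IAM Access Analyzer, which also reasons about unused permissions and escalation paths, but it catches the most common convenience grants early.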
Common production mistakes in AWS, Azure, and GCP that feed the false sense of security
The problem is not “serverless”, but how it is operated under pressure. In AWS, the same role reused across functions is a recurring pattern: a single “lambda-execution-prod” role with cumulative permissions for multiple services. In Azure, a Function App that groups many functions under a single identity and, therefore, a set of permissions that grows without control. In GCP, a service account shared by multiple Cloud Functions “to simplify”. It is convenient, but it turns any single flaw into a cross-cutting incident.
Another recurring mistake is trusting perimeter controls that do not naturally exist in serverless. A public HTTP endpoint with weak validation, combined with broad permissions, is enough for a serious incident without needing to exploit infrastructure. Incomplete logging is also common: traces that do not include identity, event origin, or enough context to investigate, and insufficient retention for correlation.
Practical examples that show up in internal audits:
- AWS Lambda: functions with default VPC access without real need, which complicates visibility and causes alerts to be ignored; or functions without concurrency limits where endpoint abuse becomes degradation and cost.
The business effect is often twofold: on the one hand, security (exfiltration or unauthorized modification); on the other, operations (cost spikes and saturation of downstream systems such as databases).
- Azure Functions / GCP Cloud Functions: accidental exposure due to public invocation configuration or shared keys; and inherited permissions at group/project level that allow reading secrets or writing to storage without the application team being aware.
When this happens, the internal conversation often stays at “close the endpoint” or “rotate the secret”, but the real problem was the absence of guardrails: oversized identities and lack of event validation.
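The missing-concurrency-limit finding from the Lambda example above is another check that can run mechanically over an inventory. A sketch, assuming a simplified list of function configurations (the field names mirror what the Lambda APIs return, but the shape here is illustrative):

```python
def missing_concurrency_limit(functions):
    """Flag functions with no reserved concurrency configured,
    where endpoint abuse translates directly into cost and
    saturation of downstream systems."""
    return [f["FunctionName"] for f in functions
            if f.get("ReservedConcurrentExecutions") is None]

inventory = [
    {"FunctionName": "orders-api", "ReservedConcurrentExecutions": 50},
    {"FunctionName": "report-job"},  # no limit: unbounded scale-out
]
print(missing_concurrency_limit(inventory))  # ['report-job']
```

Checks like this are the “guardrails” the paragraph above refers to: cheap to run continuously, and they surface the drift before an incident does.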
Recommendations for corporate environments
The false sense of security in serverless is born from confusing “I don’t manage servers” with “I don’t have attack surface”. In reality, risk concentrates in identities (roles/service accounts), triggers (HTTP/events), and supply chain (dependencies/secrets). A compromise of a function with broad permissions allows pivoting via cloud APIs and scaling impact without moving through traditional networking.
As realistic quick wins in a company, the most effective approach is usually: reduce privileges tangibly (roles per function or per very narrowly scoped set), block unjustified public invocation and require robust authentication on endpoints, validate schema and event origin before processing them, and harden secret management (scoped by resource and with rotation). Complement this with operational validations: review RBAC/IAM assignments, detect wildcard policies, and ensure sufficient logging to investigate.
If the team can only do one thing this week, let it be reviewing the execution identity of each critical function and removing “just in case” permissions. In serverless, that “just in case” is often tomorrow’s incident.