Yes, we use Lambdas, but the CICD pipeline is shared and currently deploys everything everywhere every time. It would be more efficient to detect and deploy only deltas (there is some minor out-of-the-box detection of non-change, and further optimization would be easy). For now, it's more important to pursue product-market fit: the deploy is ~4 min per environment, which is plenty good for current purposes and could easily be parallelized, among other obvious and easy wins.
OVERALL: I'd recommend the Serverless Framework, but SSO wasn't supported when I started building, so we use the CDK for deploying everything. The CDK stacks for the S3 bucket (etc.) and the API are separate, but that's a matter of factoring for clarity; one big stack would be fine. For similar reasons, I factor the API and the API deployment separately, alongside the WWW and the WWW deployment. The entire company is a monorepo with a single command at the root that, beyond one-time account bootstrapping (which also uses CloudFormation and the CDK), triggers scripts, run locally or remotely, to execute unit testing, deployment, and acceptance testing in a progressive multi-account deployment (on main). The CICD script installs dependencies, runs tests, identifies itself, and uses that identity to assume account-specific roles to deploy into the individual environments. This is all pretty excessive for a startup, but it seemed like a worthwhile and more secure foundation for actually trustworthy experimentation in delivering solutions into some of people's most sensitive private spaces, with the vision to go deeper.
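To make the role-assumption step concrete, here's a minimal sketch of how a CICD script can map a pipeline stage to an account-specific deploy role before calling sts:AssumeRole and `cdk deploy`. The account IDs and role naming convention are hypothetical placeholders, not our real values.

```typescript
// Hypothetical stage-to-account mapping; real account IDs and role
// names would differ.
type Stage = "dev" | "staging" | "prod";

const DEPLOY_ACCOUNTS: Record<Stage, string> = {
  dev: "111111111111",
  staging: "222222222222",
  prod: "333333333333",
};

// The CICD script would pass this ARN to sts:AssumeRole, then run
// `cdk deploy` with the temporary credentials it gets back.
function deployRoleArn(stage: Stage): string {
  return `arn:aws:iam::${DEPLOY_ACCOUNTS[stage]}:role/cicd-deploy-${stage}`;
}
```

Keeping the mapping in one place means the progressive multi-account rollout is just a loop over stages.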
Meta-note: I accept that the separated pieces of the infrastructure have timing deltas and that some bugs could arise from this once we've scaled. That problem never fully disappears, but its window and effects can shrink with investment. Where divergence occurs, it can be solved either by coding the API to handle both the current and current-1 contracts or by maintaining concurrent current and current-1 Lambda version deployments. None of that contradicts a single deployment of consistent code. There are deeper, more financially efficient solutions (a mono-image on k8s, Knative, and so on), but I'll need a wild-success scenario before those make business sense to invest in.
LAMBDA: we have built configuration that specifies the build entry points for tree shaking, so our deploy sizes are tiny, and the CDK pipeline uses it too. Our data tables are declared there as well, so code and deploy share config. The Lambdas are grouped by event type (i.e. HTTP/ApiGateway, S3, Kinesis, EventBridge, etc.), and our declarations are structured in three layers: generic Lambda defaults, source-event-specific defaults, and a function-specific declaration, allowing overrides at every level. This may sound complicated, but it organizes the configuration cleanly and compactly, provides real clarity, and makes scope of effect explicit and consistent.
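The three-layer override scheme can be sketched with plain object spreads; all the field names and values below are illustrative, not our actual config shape.

```typescript
// Illustrative config shape; field names are hypothetical.
interface FnConfig {
  memoryMb: number;
  timeoutSec: number;
  entry?: string; // build entry point, also used for tree shaking
}

// Layer 1: generic Lambda defaults.
const genericDefaults: FnConfig = { memoryMb: 128, timeoutSec: 10 };

// Layer 2: source-event-specific defaults.
const eventDefaults: Record<string, Partial<FnConfig>> = {
  http: { timeoutSec: 29 }, // e.g. to respect an API Gateway limit
  kinesis: { timeoutSec: 60 },
};

// Layer 3: the function-specific declaration wins last.
function resolve(eventType: string, fn: Partial<FnConfig>): FnConfig {
  return { ...genericDefaults, ...eventDefaults[eventType], ...fn };
}

const cfg = resolve("http", { entry: "src/handlers/getUser.ts", memoryMb: 256 });
// cfg.memoryMb is 256 (function override), cfg.timeoutSec is 29 (event default)
```

Later spreads win, so each layer only needs to state what it changes, which is what keeps the declarations compact.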
Everything is written in TypeScript, which covers internal compile-time type safety, but I have also written a layer of middleware for the Lambdas: each handler declares its contracts and entry point, and the middleware uses AJV to validate schema-to-type equivalence (JSON Schema is far more specific, of course) and enforces schema compliance, per our engineering requirements, at every boundary in and out of the code base (i.e. receipt of API calls in the client, receipt of client requests at the API, S3 data, database puts and reads, and so on).
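A minimal sketch of that boundary-enforcing wrapper, assuming a pluggable validator in place of the real AJV-compiled JSON Schemas (names here are illustrative, not our actual middleware API):

```typescript
// A validator is a type guard; in the real middleware this would be
// a compiled AJV validate function for a declared JSON Schema.
type Validator<T> = (data: unknown) => data is T;

// Wraps a handler so both its input and its output are checked
// against the declared contract at the boundary.
function withContract<In, Out>(
  validIn: Validator<In>,
  validOut: Validator<Out>,
  handler: (input: In) => Out,
): (raw: unknown) => Out {
  return (raw: unknown): Out => {
    if (!validIn(raw)) throw new Error("input violates contract");
    const out = handler(raw);
    if (!validOut(out)) throw new Error("output violates contract");
    return out;
  };
}

// Usage: a trivial guard standing in for a compiled schema.
const isNum: Validator<number> = (d): d is number => typeof d === "number";
const double = withContract(isNum, isNum, (n) => n * 2);
```

The same wrapper shape applies at each boundary (API ingress, S3 reads, database puts), which is what makes the contract enforcement consistent across event types.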
This got long so feel welcome to reach out (see profile) if you want more specific detail about any specific piece.