Organizations today are using data more than ever to drive decision-making and innovation. Because they work with petabytes of data, they have historically gravitated toward two distinct paradigms—data lakes and data warehouses. While each paradigm excels at specific use cases, they often create unintended barriers between data assets.
Data lakes are typically built on object storage such as Amazon Simple Storage Service (Amazon S3), which provides flexibility by supporting diverse data formats and schema-on-read capabilities. This enables multi-engine access, where various processing frameworks (such as Apache Spark, Trino, and Presto) can query the same data. On the other hand, data warehouses (such as Amazon Redshift) excel in areas such as ACID (atomicity, consistency, isolation, and durability) compliance, performance optimization, and ease of deployment, making them suitable for structured and complex queries. As data volumes grow and analytics needs become more complex, organizations seek to bridge these silos and use the strengths of both paradigms. This is where the lakehouse architecture comes in, offering a unified approach to data management and analytics.
Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs.
The data lake centric lakehouse approach starts with the scalability, cost-effectiveness, and flexibility of a traditional data lake built on object storage. The goal is to add a layer of transactional capabilities and data management traditionally found in databases, primarily through open table formats (such as Apache Hudi, Delta Lake, or Apache Iceberg). While open table formats have made significant strides by introducing ACID guarantees for single-table operations in data lakes, implementing multi-table transactions with complex referential integrity constraints and joins remains challenging. The fundamental nature of querying petabytes of data on object storage, often through distributed query engines, can result in slow interactive queries at high concurrency when compared to a highly optimized, indexed, and materialized data warehouse. Open table formats introduce compaction and indexing, but the full suite of intelligent storage optimizations found in highly mature, proprietary data warehouses is still evolving in data lake centric architectures.
The data warehouse centric lakehouse approach offers strong analytical capabilities but has significant interoperability challenges. Though data warehouses provide Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers for external access, the underlying data remains in proprietary formats, making it difficult for external tools or services to access it directly without complex extract, transform, and load (ETL) or API layers. This can lead to data duplication and latency. A data warehouse architecture might support reading open table formats, but its ability to write to them or participate in their transactional layers can be limited. This restricts true interoperability and can create shadow data silos.
On AWS, you can build a modern, open lakehouse architecture to achieve unified access to both data warehouses and data lakes. By using this approach, you can build sophisticated analytics, machine learning (ML), and generative AI applications while maintaining a single source of truth for your data. You don't have to choose between a data lake or a data warehouse. You can use existing investments and preserve the strengths of both paradigms while eliminating their respective weaknesses. The lakehouse architecture on AWS embraces open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg.
You can accelerate your lakehouse journey with the next generation of Amazon SageMaker, which delivers an integrated experience for analytics and AI with unified access to data. SageMaker is built on an open lakehouse architecture that's fully compatible with Apache Iceberg. By extending support for the Apache Iceberg REST APIs, SageMaker significantly improves interoperability and accessibility across various Apache Iceberg-compatible query engines and tools. At the core of this architecture is a metadata management layer built on the AWS Glue Data Catalog and AWS Lake Formation, which provide unified governance and centralized access control.
Foundations of the Amazon SageMaker lakehouse architecture
The lakehouse architecture of Amazon SageMaker has four primary components that work together to create a unified data platform:
- Flexible storage that adapts to workload patterns and requirements
- A technical catalog that serves as a single source of truth for all metadata
- Integrated permission management with fine-grained access control across all data assets
- An open access framework built on the Apache Iceberg REST APIs for universal compatibility
Catalogs and permissions
When building an open lakehouse, the catalog—your central repository of metadata—is a critical component for data discovery and governance. There are two types of catalogs in the lakehouse architecture of Amazon SageMaker: managed catalogs and federated catalogs.
You can use an AWS Glue crawler to automatically discover and register the metadata of your data assets in the Data Catalog. The Data Catalog stores the schema and table metadata of your data assets, effectively turning files into logical tables. After your data is cataloged, the next challenge is controlling who can access it. While you could use complex S3 bucket policies for every folder, this approach is difficult to manage and scale. Lake Formation provides a centralized, database-style permissions model on the Data Catalog, giving you the flexibility to grant or revoke fine-grained access at the row, column, and cell levels for individual users or roles.
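To make this concrete, here's a minimal boto3 sketch of both steps: crawling a raw S3 prefix into the Data Catalog, then granting column-level access through Lake Formation. The bucket, role, database, and principal names are hypothetical placeholders, not values from this post.

```python
import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# Crawl a raw S3 prefix so its files appear as a logical table in the Data Catalog.
# The bucket, role, and database names are placeholders.
glue.create_crawler(
    Name="sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake/raw/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")

# Grant an analyst role SELECT on only two columns of the crawled table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "sales",
            "ColumnNames": ["order_id", "order_total"],
        }
    },
    Permissions=["SELECT"],
)
```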
Open access with Apache Iceberg REST APIs
The lakehouse architecture described in the preceding section and shown in the following figure also uses the AWS Glue Iceberg REST catalog through the service endpoint, which provides OSS compatibility, enabling increased interoperability for managing Iceberg table metadata across Spark and other open source analytics engines. You can choose the appropriate API based on table format and use case requirements.
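For illustration, the following PySpark sketch attaches a Spark session to the Glue Iceberg REST endpoint. It assumes the Iceberg Spark runtime and AWS bundle are already on the classpath; the Region, account ID, and the catalog alias lakehouse are placeholders you would replace.

```python
from pyspark.sql import SparkSession

# Sketch: connect Spark to the AWS Glue Iceberg REST endpoint with SigV4 signing.
# Region (us-east-1), account ID, and the alias "lakehouse" are placeholders.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "rest")
    .config("spark.sql.catalog.lakehouse.uri", "https://glue.us-east-1.amazonaws.com/iceberg")
    .config("spark.sql.catalog.lakehouse.warehouse", "123456789012")  # Glue catalog ID
    .config("spark.sql.catalog.lakehouse.rest.sigv4-enabled", "true")
    .config("spark.sql.catalog.lakehouse.rest.signing-name", "glue")
    .config("spark.sql.catalog.lakehouse.rest.signing-region", "us-east-1")
    .getOrCreate()
)

# Any Iceberg-compatible engine configured the same way sees the same tables.
spark.sql("SHOW NAMESPACES IN lakehouse").show()
```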
In this post, we explore various lakehouse architecture patterns, focusing on how to optimally use the data lake and the data warehouse to create robust, scalable, and performance-driven data solutions.
Bringing data into your lakehouse on AWS
When building a lakehouse architecture, you can choose from three distinct patterns to access and integrate your data, each offering unique advantages for different use cases.
- Traditional ETL is the classic method of extracting data, transforming it, and loading it into your lakehouse (see the sketch after the following list).
When to use it:
- You need complex transformations and require highly curated and optimized datasets for downstream applications for better performance
- You need to perform historical data migrations
- You need data quality enforcement and standardization at scale
- You need highly governed, curated data in a lakehouse
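A minimal PySpark sketch of the pattern, reusing the Spark session and lakehouse catalog configured earlier and a hypothetical raw bucket:

```python
from pyspark.sql import functions as F

# Extract: read raw CSV files landed in a (hypothetical) S3 prefix.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-data-lake/raw/sales/"))

# Transform: enforce types, drop duplicates, and reject incomplete records.
curated = (raw
           .withColumn("sale_date", F.to_date("sale_date"))
           .dropDuplicates(["order_id"])
           .filter(F.col("order_total").isNotNull()))

# Load: write a governed, query-optimized Iceberg table for downstream consumers.
(curated.writeTo("lakehouse.curated.sales")
        .partitionedBy(F.col("sale_date"))
        .createOrReplace())
```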
- Zero-ETL is a modern architectural pattern where data automatically and continuously replicates from a source system to the lakehouse with minimal or no manual intervention or custom code. Behind the scenes, the pattern uses change data capture (CDC) to automatically stream all new inserts, updates, and deletes from the source to the target. This architectural pattern is effective when the source system maintains a high degree of data cleanliness and structure, minimizing the need for heavy pre-load transformations, or when data refinement and aggregation can occur at the target end within the lakehouse. Zero-ETL replicates data with minimal delay, and the transformation logic is performed at the target end, closer to where the insights are generated, by shifting it to a more efficient post-load phase (a provisioning sketch follows this list).
When to use it:
- You need to reduce operational complexity and gain flexible control over data replication for both near real-time and batch use cases.
- You need limited customization. While zero-ETL implies minimal work, some light transformations might still be required on the replicated data.
- You need to minimize the need for specialized ETL expertise.
- You need to maintain data freshness without processing delays and reduce the risk of data inconsistencies. Zero-ETL facilitates faster time-to-insight.
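As one hedged example of provisioning such an integration programmatically, the boto3 sketch below creates an Aurora-to-Amazon Redshift zero-ETL integration; the integration name and both ARNs are placeholders for your own resources.

```python
import boto3

rds = boto3.client("rds")

# Sketch: create a zero-ETL integration from an Aurora cluster to a Redshift
# Serverless namespace. Both ARNs are hypothetical placeholders.
response = rds.create_integration(
    IntegrationName="orders-zero-etl",
    SourceArn="arn:aws:rds:us-east-1:123456789012:cluster:orders-aurora",
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/example-ns",
)
print(response["Status"])  # the integration typically starts in a "creating" state
```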
- Data federation (no-movement approach) is a method that enables querying and combining data from multiple disparate sources without physically moving or copying it into a single centralized location. This query-in-place approach allows the query engine to connect directly to the external source systems, delegate and execute queries, and combine results on the fly for presentation to the user. The effectiveness of this architecture pattern depends on three key factors: network latency between systems, source system performance capabilities, and the query engine's ability to push down predicates to optimize query execution. This no-movement approach can significantly reduce data duplication and storage costs while providing real-time access to source data (a query example follows this list).
When to use it:
- You need to query the source system directly for operational analytics.
- You don't want to duplicate data, to save on storage space and associated costs within your lakehouse.
- You're willing to trade some query performance and governance for immediate data availability and one-time analysis of live data.
- You don't need to query the data frequently.
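For illustration, assuming a PostgreSQL source registered in Athena as a connector named postgres_ops (a hypothetical name, as are the tables and output bucket), a query-in-place join might look like the following:

```python
import boto3

athena = boto3.client("athena")

# Sketch: join a data lake table with a live operational table in place.
# "postgres_ops" is a hypothetical Athena federated data source connector.
query = """
SELECT l.customer_id, l.lifetime_value, o.open_orders
FROM awsdatacatalog.sales_db.customer_summary AS l
JOIN postgres_ops.public.orders_live AS o
  ON l.customer_id = o.customer_id
"""
execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(execution["QueryExecutionId"])
```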
Understanding the storage layer of your lakehouse on AWS
Now that you've seen different ways to get data into a lakehouse, the next question is where to store the data. As shown in the following figure, you can architect a modern, open lakehouse on AWS by storing the data in a data lake (Amazon S3 or Amazon S3 Tables) or a data warehouse (Redshift Managed Storage), so you can optimize for both flexibility and performance based on your specific workload requirements.
A modern lakehouse isn't a single storage technology but a strategic combination of them. The decision of where and how to store your data impacts everything from the speed of your dashboards to the efficiency of your ML models. You must consider not only the initial cost of storage but also the long-term costs of data retrieval, the latency required by your users, and the governance necessary to maintain a single source of truth. In this section, we delve into architectural patterns for the data lake and the data warehouse and provide a clear framework for when to use each storage pattern. While they've historically been seen as competing architectures, the modern, open lakehouse approach uses both to create a single, powerful data platform.
General purpose S3
A general purpose S3 bucket in Amazon S3 is the standard, foundational bucket type used for storing objects. It provides flexibility so you can store your data in its native format with no rigid upfront schema. Because an S3 bucket decouples storage from compute, you can store the data in a highly scalable location while a variety of query engines access and process it independently. This means you can choose the right tool for the job without having to move or duplicate the data. You can store petabytes of data without ever having to provision or manage storage capacity, and its tiered storage classes provide significant cost savings by automatically moving less-frequently accessed data to more affordable storage.
The existing Data Catalog functions as a managed catalog. It's identified by the AWS account number, which means no migration is needed for existing Data Catalogs; they're already accessible in the lakehouse and become the default catalog for new data, as shown in the following figure.
A foundational data lake on general purpose S3 is highly efficient for append-only workloads. However, its file-based nature lacks the transactional guarantees of a traditional database. This is where you can use open source transactional table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. With these table formats, you can implement multi-version concurrency control, allowing multiple readers and writers to operate concurrently without conflicts. They provide snapshot isolation, so that readers see consistent views of data even during write operations. A typical medallion architecture pattern with Apache Iceberg is depicted in the following figure. When building a lakehouse on AWS with Apache Iceberg, customers can choose between two primary approaches for storing their data on Amazon S3: general purpose S3 buckets with self-managed Iceberg, or the fully managed S3 Tables. Each path has distinct advantages, and the right choice depends on your specific needs for control, performance, and operational overhead.
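A transactional upsert that promotes bronze records into a silver table relies directly on these guarantees. The sketch below assumes the lakehouse Iceberg catalog from the earlier configuration and hypothetical bronze and silver namespaces:

```python
# Sketch: promote raw (bronze) order events into a deduplicated silver table.
# Iceberg applies the MERGE atomically; concurrent readers keep seeing the
# previous snapshot until the new one is committed.
spark.sql("""
    MERGE INTO lakehouse.silver.orders AS target
    USING lakehouse.bronze.orders_raw AS source
    ON target.order_id = source.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```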
General purpose S3 with self-managed Iceberg
Using general purpose S3 buckets with self-managed Iceberg is a traditional approach where you store both data and Iceberg metadata files in standard S3 buckets. With this option, you maintain full control but are responsible for managing the entire Iceberg table lifecycle, including essential maintenance tasks such as compaction and garbage collection (a maintenance sketch follows the list below).
When to use it:
- Maximum control: This approach provides full control over your entire data lifecycle. You can fine-tune every aspect of table maintenance, such as defining your own compaction schedules and strategies, which can be crucial for specific high-performance workloads or to optimize costs.
- Flexibility and customization: It's ideal for organizations with strong in-house data engineering expertise that need to integrate with a wider range of open source tools and custom scripts. You can use Amazon EMR or Apache Spark to manage table operations.
- Lower upfront costs: You pay only for Amazon S3 storage, API requests, and the compute resources you use for maintenance. This can be more cost-effective for smaller or less-frequent workloads where continuous, automated optimization isn't necessary.
Note: Query performance depends entirely on your optimization strategy. Without continuous, scheduled compaction jobs, performance can degrade over time as data becomes fragmented. You must monitor these jobs to ensure efficient querying.
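If you take this path, your scheduled maintenance jobs typically invoke Iceberg's built-in Spark procedures. A minimal sketch, assuming the lakehouse catalog and curated.sales table from the earlier examples:

```python
# Sketch: routine maintenance for a self-managed Iceberg table.

# Compact small files into larger ones for faster scans.
spark.sql("CALL lakehouse.system.rewrite_data_files(table => 'curated.sales')")

# Expire old snapshots to bound metadata growth; the cutoff is a placeholder.
spark.sql("""
    CALL lakehouse.system.expire_snapshots(
        table => 'curated.sales',
        older_than => TIMESTAMP '2025-01-01 00:00:00'
    )
""")

# Remove files no longer referenced by any snapshot.
spark.sql("CALL lakehouse.system.remove_orphan_files(table => 'curated.sales')")
```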
S3 Tables
S3 Tables provides S3 storage that's optimized for analytics workloads and offers Apache Iceberg compatibility to store tabular data at scale. You can integrate S3 table buckets and tables with the Data Catalog and register the catalog as a Lake Formation data location from the Lake Formation console or using service APIs, as shown in the following figure. This catalog is registered and mounted as a federated lakehouse catalog.
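As a brief sketch of the provisioning side (the bucket and namespace names are hypothetical), you can create a table bucket and namespace with boto3 before registering the catalog as described above:

```python
import boto3

s3tables = boto3.client("s3tables")

# Sketch: create a table bucket and a namespace for analytics tables.
bucket = s3tables.create_table_bucket(name="analytics-tables")
s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)
```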
When to use it:
- Simplified operations: S3 Tables automatically handles table maintenance tasks such as compaction, snapshot management, and orphan file cleanup in the background. This automation eliminates the need to build and manage custom maintenance jobs, significantly reducing your operational overhead.
- Automatic optimization: S3 Tables provides built-in automatic optimizations that improve query performance. These optimizations include background processes such as file compaction to address the small files problem and data layout optimizations specific to tabular data. However, this automation trades flexibility for convenience. Because you can't control the timing or strategy of compaction operations, workloads with specific performance requirements might experience varying query performance.
- Focus on data usage: S3 Tables reduces engineering overhead and shifts the focus to data consumption, data governance, and value creation.
- Simplified access to open table formats: It's suitable for teams that are new to Apache Iceberg but want to use transactional capabilities on a data lake.
- No external catalog: Suitable for smaller teams that don't want to manage an external catalog.
Redshift managed storage
While the data lake serves as the central source of truth for all your data, it's not the most suitable data store for every job. For the most demanding business intelligence and reporting workloads, the data lake's open and flexible nature can introduce performance unpredictability. To help ensure the desired performance, consider transitioning a curated subset of your data from the data lake to a data warehouse for the following reasons:
- High-concurrency BI and reporting: When hundreds of business users are concurrently running complex queries on live dashboards, a data warehouse is specifically optimized to handle these workloads with predictable, sub-second query latency.
- Predictable performance SLAs: For critical business processes that require data to be delivered at a guaranteed speed, such as financial reporting or end-of-day sales analysis, a data warehouse provides consistent performance.
- Complex SQL workloads: While data lakes are powerful, they can struggle with highly complex queries involving numerous joins and massive aggregations. A data warehouse is purpose-built to run these relational workloads efficiently.

The lakehouse architecture on AWS supports Redshift Managed Storage (RMS), a storage option provided by Amazon Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. RMS supports the automatic table optimization offered in Amazon Redshift, such as built-in query optimizations for data warehousing workloads, automated materialized views, and AI-driven optimizations and scaling for frequently running workloads.
Federated RMS catalog: Onboard existing Amazon Redshift data warehouses to the lakehouse
Implementing a federated catalog with existing Amazon Redshift data warehouses creates a metadata-only integration that requires no data movement. This approach lets you extend your established Amazon Redshift investments into a modern, open lakehouse framework while maintaining compatibility with existing workflows. Amazon Redshift uses a hierarchical data organization structure:
- Cluster level: Begins with a namespace
- Database level: Contains multiple databases
- Schema level: Organizes tables within databases
When you register your existing Amazon Redshift provisioned or serverless namespaces as a federated catalog in the Data Catalog, this hierarchy maps directly into the lakehouse metadata layer. The lakehouse implementation on AWS supports multiple catalogs using a dynamic hierarchy to organize and map the underlying storage metadata.
After you register a namespace, the federated catalog automatically mounts across all Amazon Redshift data warehouses in your AWS Region and account. During this process, Amazon Redshift internally creates external databases that correspond to data shares. This mechanism remains completely abstracted from end users. By using federated catalogs, you gain immediate visibility and accessibility across your data ecosystem. Permissions on the federated catalogs can be managed through Lake Formation for both same-account and cross-account access.
The true capability of federated catalogs emerges when accessing Amazon Redshift managed storage from external AWS engines such as Amazon Athena, Amazon EMR, or open source Spark. Because Amazon Redshift uses proprietary block-based storage that only Amazon Redshift engines can read natively, AWS automatically provisions a service-managed Amazon Redshift Serverless instance in the background. This service-managed instance acts as a translation layer between external engines and Amazon Redshift managed storage. AWS establishes automatic data shares between your registered federated catalog and the service-managed Amazon Redshift Serverless instance to enable secure, efficient data access. AWS also creates a service-managed Amazon S3 bucket in the background for data transfer.
When an external engine such as Athena submits queries against an Amazon Redshift federated catalog, Lake Formation handles credential vending by providing temporary credentials to the requesting service. The query executes through the service-managed Amazon Redshift Serverless instance, which accesses data through the automatically established data shares, processes results, offloads them to a service-managed Amazon S3 staging area, and then returns results to the original requesting engine.
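To illustrate, a hypothetical Athena query against a federated Redshift catalog addresses tables through the mounted catalog path. The catalog, schema, table, and output-bucket names below are assumptions, not values from this post:

```python
import boto3

athena = boto3.client("athena")

# Sketch: query a table in a federated Redshift catalog from Athena.
# "redshift_sales/dev" stands for the mounted catalog and database; all
# identifiers here are placeholders for your own registration.
execution = athena.start_query_execution(
    QueryString=(
        'SELECT region, SUM(amount) AS total '
        'FROM "redshift_sales/dev"."public"."orders" GROUP BY region'
    ),
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(execution["QueryExecutionId"])
```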
To track the compute cost of the federated catalog of an existing Amazon Redshift warehouse, use the following tag:
aws:redshift-serverless:LakehouseManagedWorkgroup with the value "True"
To activate the AWS-generated cost allocation tags for billing insight, follow the activation instructions. You can also view the computational cost of these resources in AWS Billing.
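Once the tag is activated, a sketch like the following can pull the attributed spend from Cost Explorer; the date range is a placeholder:

```python
import boto3

ce = boto3.client("ce")

# Sketch: retrieve monthly unblended cost attributed to the service-managed
# lakehouse workgroup tag. The time period is a placeholder.
costs = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Tags": {
            "Key": "aws:redshift-serverless:LakehouseManagedWorkgroup",
            "Values": ["True"],
        }
    },
)
for period in costs["ResultsByTime"]:
    print(period["TimePeriod"]["Start"], period["Total"]["UnblendedCost"]["Amount"])
```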
When to use it:
- Existing Amazon Redshift investments: Federated catalogs are designed for organizations with existing Amazon Redshift deployments that want to use their data across multiple services without migration.
- Cross-service data sharing: Implement this so teams can share existing data in an Amazon Redshift data warehouse across different warehouses and centralize their permissions.
- Enterprise integration requirements: This approach is suitable for organizations that need to integrate with established data governance. It also maintains compatibility with existing workflows while adding lakehouse capabilities.
- Infrastructure control and pricing: You retain full control over compute capacity for your existing warehouses for predictable workloads. You can optimize compute capacity, choose between on-demand and reserved capacity pricing, and fine-tune performance parameters. This provides cost predictability and performance control for consistent workloads.
When implementing a lakehouse architecture with multiple catalog types, selecting the appropriate query engine is crucial for both performance and cost optimization. This post focuses on the storage foundation of the lakehouse; however, for critical workloads involving extensive Amazon Redshift data operations, consider executing queries within Amazon Redshift or using Spark when possible. Complex joins spanning multiple Amazon Redshift tables through external engines might result in higher compute costs if the engines don't support full predicate pushdown.
Other use cases
Build a multi-warehouse architecture
Amazon Redshift supports data sharing, which you can use to share live data between source and target Amazon Redshift clusters. By using data sharing, you can share live data without creating copies or moving data, enabling use cases such as workload isolation (hub-and-spoke architecture) and cross-team collaboration (data mesh architecture). Without a lakehouse architecture, you must create an explicit data share between source and target Amazon Redshift clusters. While managing these data shares in small deployments is relatively straightforward, it becomes complex in data mesh architectures.
The lakehouse architecture addresses this challenge by letting customers publish their existing Amazon Redshift warehouses as federated catalogs. These federated catalogs are automatically mounted and made available as external databases in other consumer Amazon Redshift warehouses within the same account and Region. By using this approach, you can maintain a single copy of data and use multiple data warehouses to query it, eliminating the need to create and manage multiple data shares while scaling with workload isolation. Permission management becomes centralized through Lake Formation, streamlining governance across your entire multi-warehouse environment.
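For example, a consumer warehouse can read a producer's table through the mounted external database without any explicit data share. This sketch uses the Redshift Data API; the workgroup name, the mounted database name sales_federated_db, and the table are hypothetical:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Sketch: from a consumer warehouse, read a table published by a producer
# warehouse through the federated catalog. All identifiers are placeholders;
# sales_federated_db stands for the automatically mounted external database.
response = redshift_data.execute_statement(
    WorkgroupName="consumer-wg",
    Database="dev",
    Sql="SELECT COUNT(*) FROM sales_federated_db.public.orders;",
)
print(response["Id"])  # statement ID to poll with describe_statement
```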
Near real-time analytics on petabytes of transactional data with no pipeline management
Zero-ETL integrations seamlessly replicate transactional data from OLTP data sources to Amazon Redshift, general purpose S3 (with self-managed Iceberg), or S3 Tables. This approach eliminates the need to maintain complex ETL pipelines, reducing the number of moving parts in your data architecture and potential points of failure. Business users can analyze fresh operational data immediately rather than working with stale data from the last ETL run.
See Aurora zero-ETL integrations for a list of OLTP data sources that can be replicated to an existing Amazon Redshift warehouse.
See Zero-ETL integrations for information about other supported data sources that can be replicated to an existing Amazon Redshift warehouse, general purpose S3 with self-managed Iceberg, and S3 Tables.
Conclusion
A lakehouse architecture isn't about choosing between a data lake and a data warehouse. Instead, it's an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation. For more information, see The lakehouse architecture of Amazon SageMaker.