This is a guest post by Oleh Khoruzhenko, Senior Staff DevOps Engineer at Bazaarvoice, in partnership with AWS.
Bazaarvoice is an Austin-based company powering a world-leading ratings and reviews platform. Our system processes billions of consumer interactions through ratings, reviews, photos, and videos, helping brands and retailers build shopper confidence and drive sales by using authentic user-generated content (UGC) across the shopper journey. The Bazaarvoice Trust Mark is the gold standard in authenticity.
Apache Kafka is one of the core components of our infrastructure, enabling real-time data streaming for the global review platform. Although Kafka’s distributed architecture met our needs for high-throughput, fault-tolerant streaming, self-managing this complex system diverted critical engineering resources away from our core product development. Every component of our Kafka infrastructure required specialized expertise, from configuring low-level parameters to maintaining the complex distributed systems our customers rely on. The dynamic nature of our environment demanded continuous care and investment in automation. We found ourselves constantly managing upgrades, applying security patches, implementing fixes, and addressing scaling needs as our data volumes grew.
In this post, we show you the steps we took to migrate our workloads from self-hosted Kafka to Amazon Managed Streaming for Apache Kafka (Amazon MSK). We walk you through our migration process and highlight the improvements we achieved after this transition. We show how we minimized operational overhead, enhanced our security and compliance posture, automated key processes, and built a more resilient platform while sustaining the high performance our global customer base expects.
The need for modernization
As our platform grew to process billions of daily consumer interactions, we needed to find a way to scale our Kafka clusters efficiently while keeping a small team to manage the infrastructure. The limitations of self-managed Kafka clusters manifested in several key areas:
- Scaling operations – Although scaling our self-hosted Kafka clusters wasn’t inherently complex, it required careful planning and execution. Every time we needed to add new brokers to handle increased workload, our team faced a multi-step process involving capacity planning, infrastructure provisioning, and configuration updates.
- Configuration complexity – Kafka offers hundreds of configuration parameters. Although we didn’t actively manage all of them, understanding their impact was important. Key settings like I/O threads, memory buffers, and retention policies needed ongoing attention as we scaled. Even minor adjustments could have significant downstream effects, requiring our team to maintain deep expertise in these parameters and their interactions to ensure optimal performance and stability.
- Infrastructure management and capacity planning – Self-hosting Kafka required us to manage multiple scaling dimensions, including compute, memory, network throughput, storage throughput, and storage volume. We needed to carefully plan capacity for all these components, often making complex trade-offs. Beyond capacity planning, we were responsible for real-time management of our Kafka infrastructure. This included promptly detecting and addressing component failures and performance issues. Our team needed to be highly responsive to alerts, often requiring immediate action to maintain system stability.
- Specialized expertise requirements – Operating Kafka at scale demanded deep technical expertise across multiple domains. The team needed to:
- Monitor and analyze hundreds of performance metrics
- Conduct complex root cause analysis for performance issues
- Manage ZooKeeper ensemble coordination
- Execute rolling updates for zero-downtime upgrades and security patches
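To give a sense of the tuning surface described above, here are a few of the broker-level settings that demanded ongoing attention as we scaled. The values shown are illustrative defaults, not our production configuration:

```properties
# Thread pools for network requests and disk I/O
num.network.threads=8
num.io.threads=16

# Socket buffers between brokers and clients
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400

# Retention policy: how long and how much data each partition keeps
log.retention.hours=168
log.retention.bytes=-1
log.segment.bytes=1073741824
```

A change to any one of these — for example, raising `num.io.threads` to absorb a storage-latency spike — could shift load onto another dimension, which is why even small adjustments required cluster-wide reasoning.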
These challenges were compounded during peak business periods, such as Black Friday and Cyber Monday, when maintaining optimal performance was essential for Bazaarvoice’s retail customers.
Choosing Amazon MSK
After evaluating various options, we selected Amazon MSK as our modernization solution. The decision was driven by the service’s ability to minimize operational overhead, provide high availability out of the box with its three Availability Zone architecture, and offer seamless integration with our existing AWS infrastructure.
Key capabilities that made Amazon MSK the clear choice:
- AWS integration – We already used AWS services for data processing and analytics. Amazon MSK connected directly with these services, removing the need to build and maintain custom integrations. This meant our existing data pipelines would continue working with minimal changes.
- Automated operations management – Amazon MSK automated our most time-consuming tasks. We no longer need to manually monitor instances and storage for failures or respond to these issues ourselves.
- Enterprise-grade reliability – The platform’s architecture matched our reliability requirements out of the box. Multi-AZ distribution and built-in replication gave us the same fault tolerance we had carefully built into our self-hosted system, now backed by AWS’s service guarantees.
- Simplified upgrade process – Before Amazon MSK, version upgrades for our Kafka clusters required careful planning and execution. The process was complex, involving multiple steps and risks. Amazon MSK simplified our upgrade operations. We now use automated upgrades for dev and test workloads and maintain control over production environments. This shift reduced the need for extensive planning sessions and multiple engineers. As a result, we stay current with the latest Kafka versions and security patches, improving our system reliability and performance.
- Enhanced security controls – Our platform required ISO 27001 compliance, which typically involved months of documentation and security controls implementation. Amazon MSK came with this certification built in, removing the need for separate compliance work. Amazon MSK encrypted our data, managed network access, and integrated with our existing security tools.
With Amazon MSK selected as our target platform, we began planning the complex task of migrating our critical streaming infrastructure without disrupting the billions of consumer interactions flowing through our system.
Bazaarvoice’s migration journey
Moving our complex Kafka infrastructure to Amazon MSK required careful planning and precise execution. Our platform processes data through two main components: an Apache Kafka Streams pipeline that handles data processing and augmentation, and consumer applications that move this enriched data to downstream systems. With 40 TB of state across 250 internal topics, this migration demanded a methodical approach.
Planning phase
Working with AWS Solutions Architects proved critical for validating our migration strategy. Our platform’s distinctive characteristics required special consideration:
- Multi-Region deployment across the US and EU
- Complex stateful applications with strict data consistency needs
- Vital business services requiring zero downtime
- Diverse consumer ecosystem with different migration requirements
Migration challenges
The biggest hurdle was migrating our stateful Kafka Streams applications. Our data processing runs as a directed acyclic graph (DAG) of applications across Regions, using static group membership to prevent disruptive rebalancing. It’s important to note that Kafka Streams keeps its state in internal Kafka topics. For applications to recover properly, replicating this state accurately is crucial. This characteristic of Kafka Streams added complexity to our migration process. Initially, we considered MirrorMaker 2, the standard tool for Kafka migrations. However, two fundamental limitations made it challenging:
- Risk of losing state or incorrectly replicating state across our applications.
- Inability to run two instances of our applications concurrently, which meant we needed to shut down the primary application and wait for it to recover from the state in the MSK cluster. Given the size of our state, this recovery process exceeded our 30-minute SLA for downtime.
Our solution
We decided to deploy a parallel stack of Kafka Streams applications reading and writing data from Amazon MSK. This approach gave us ample time for testing and verification, and enabled the applications to hydrate their state before we delivered the output to our data warehouse for analytics. We used MirrorMaker 2 for input topic replication, while our solution provided several advantages:
- Simplified monitoring of the replication process
- Avoided consistency issues between state stores and internal topics
- Allowed for gradual, controlled migration of consumers
- Enabled thorough validation before cutover
- Required a coordinated transition plan for all consumers, because we couldn’t transfer consumer offsets across clusters
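For the input topic replication mentioned above, a minimal MirrorMaker 2 configuration for one-way replication might look like the following. The cluster aliases, bootstrap servers, and topic pattern are illustrative, not our actual setup:

```properties
# Define the two clusters by alias
clusters = selfhosted, msk

# Connection details for each cluster
selfhosted.bootstrap.servers = kafka-old-1:9092,kafka-old-2:9092
msk.bootstrap.servers = b-1.example.kafka.us-east-1.amazonaws.com:9092

# Enable one-way replication: self-hosted -> MSK, input topics only
selfhosted->msk.enabled = true
selfhosted->msk.topics = input\..*

# Keep topic configurations in sync on the target cluster
sync.topic.configs.enabled = true
```

Replicating only the input topics, rather than the Kafka Streams internal topics, is what let the parallel stack rebuild its own state on the MSK side instead of relying on mirrored state.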
Consumer migration strategy
Each consumer type required a carefully tailored approach:
- Standard consumers – For applications supporting the Kafka Consumer Group protocol, we implemented a four-step migration. This approach risked some duplicate processing, but our applications were designed to handle this scenario. The steps were as follows:
- Configure consumers with auto.offset.reset: latest.
- Stop all DAG producers.
- Wait for existing consumers to process remaining messages.
- Cut over consumer applications to Amazon MSK.
- Apache Kafka Connect sinks – Our sink connectors served two critical databases:
- A distributed search and analytics engine – Document versioning relied on Kafka record offsets, making direct migration impossible. To address this, we implemented a solution that involved building new search engine clusters from scratch.
- A document-oriented NoSQL database – This supported direct migration without requiring new database instances, simplifying the process considerably.
- Apache Spark and Flink applications – These presented unique challenges due to their internal checkpointing mechanisms:
- Offsets managed outside Kafka’s consumer groups
- Checkpoints incompatible between source and target clusters
- Required full data reprocessing from the beginning
We scheduled these migrations during off-peak hours to minimize impact.
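The standard-consumer cutover described above amounts to repointing bootstrap servers while forcing new group members to start from the latest offset. A minimal sketch of that configuration change — broker addresses, group IDs, and the helper name are hypothetical:

```python
def cutover_consumer_config(base_config: dict, msk_bootstrap: str) -> dict:
    """Return a copy of a consumer config repointed at the MSK cluster.

    auto.offset.reset=latest means a consumer group with no committed
    offsets on the new cluster starts from the head of the topic, which
    is why all DAG producers must be stopped and drained beforehand.
    """
    config = dict(base_config)
    config["bootstrap.servers"] = msk_bootstrap
    config["auto.offset.reset"] = "latest"
    return config

# Hypothetical example: the original self-hosted config...
old_config = {
    "bootstrap.servers": "kafka-old-1:9092",
    "group.id": "dag-consumer",
    "auto.offset.reset": "earliest",
}

# ...becomes the MSK-targeted config used at cutover.
new_config = cutover_consumer_config(
    old_config, "b-1.example.kafka.us-east-1.amazonaws.com:9092"
)
print(new_config["auto.offset.reset"])  # latest
```

Because the group has no committed offsets on the MSK cluster, the drain step is what bounds duplicate processing to the messages still in flight at cutover time.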
Technical benefits and improvements
Moving to Amazon MSK fundamentally changed how we manage our Kafka infrastructure. The transformation is best illustrated by comparing key operational tasks before and after the migration, summarized in the following table.
| Activity | Before: Self-Hosted Kafka | After: Amazon MSK |
| --- | --- | --- |
| Security patching | Required dedicated team time for Kafka and OS updates | Fully automated |
| Broker recovery | Needed manual monitoring and intervention | Fully automated |
| Client authentication | Complex password rotation procedures | AWS Identity and Access Management (IAM) |
| Version upgrades | Complex procedure requiring extensive planning | Fully automated |
The details of the tasks are as follows:
- Security patching – Previously, our team spent 8 hours monthly applying Kafka and operating system (OS) security patches across our broker fleet. Amazon MSK now handles these updates automatically, maintaining our security posture without engineering intervention.
- Broker recovery – Although our self-hosted Kafka had automated recovery capabilities, each incident required careful monitoring and occasional manual intervention. With Amazon MSK, node failures and storage degradation issues such as Amazon Elastic Block Store (Amazon EBS) slowdowns are handled entirely by AWS and resolved within minutes without our involvement.
- Authentication management – Our self-hosted implementation required password rotations for SASL/SCRAM authentication, a process that took two engineers several days to coordinate. The direct integration between Amazon MSK and AWS Identity and Access Management (IAM) minimized this overhead while strengthening our security controls.
- Version upgrades – Kafka version upgrades in our self-hosted environment required weeks of planning and testing, as well as weekend maintenance windows. Amazon MSK manages these upgrades automatically during off-peak hours, maintaining our SLAs without disruption.
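The IAM integration replaces SASL/SCRAM secrets entirely on the client side. A Kafka client authenticating to MSK with IAM needs only the following properties, assuming the aws-msk-iam-auth library is on the classpath:

```properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

Credentials come from the client’s IAM role at connection time, so there is no password to rotate and access can be revoked or granted through IAM policies alone.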
These improvements proved especially valuable during high-traffic periods like Black Friday, when our team previously needed extensive operational readiness plans. Now, the built-in resiliency of Amazon MSK provides us with reliable Kafka clusters that serve as mission-critical infrastructure for our business. The migration also made it possible to break our monolithic clusters into smaller, dedicated MSK clusters. This improved our data isolation, provided better resource allocation, and enhanced performance predictability for high-priority workloads.
Lessons learned
Our migration to Amazon MSK revealed several key insights that can help other organizations modernize their Kafka infrastructure:
- Expert validation – Working with AWS Solutions Architects to validate our migration strategy caught several critical issues early. Although our team knew our applications well, external Kafka experts identified potential problems with state management and consumer offset handling that we hadn’t considered. This validation prevented costly missteps during the migration.
- Data verification – Comparing data across Kafka clusters proved challenging. We built tools to capture topic snapshots in Parquet format on Amazon Simple Storage Service (Amazon S3), enabling quick comparisons using Amazon Athena queries. This approach gave us confidence that data remained consistent throughout the migration.
- Start small – Beginning with our smallest data universe in QA helped us refine our process. Each subsequent migration went smoother as we applied lessons from earlier iterations. This gradual approach helped us maintain system stability while building team confidence.
- Detailed planning – We created specific migration plans with each team, considering their unique requirements and constraints. For example, our machine learning pipeline needed special handling due to strict offset management requirements. This granular planning prevented downstream disruptions.
- Performance optimization – We found that using Amazon MSK provisioned throughput provided clear cost advantages when storage throughput became a bottleneck. This feature made it possible to improve cluster performance without scaling instance sizes or adding brokers, providing a more efficient solution to our throughput challenges.
- Documentation – Maintaining detailed migration runbooks proved invaluable. When we encountered similar issues across different migrations, having documented solutions saved significant troubleshooting time.
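The data-verification idea above — snapshot each topic on both clusters, then diff the snapshots — can be sketched in a few lines. Here each snapshot is modeled as a mapping from record key to payload; in our actual tooling the snapshots were Parquet files on Amazon S3 compared with Athena queries, and the function and field names below are hypothetical:

```python
def diff_snapshots(source: dict, target: dict) -> dict:
    """Compare two topic snapshots keyed by record key.

    Returns keys missing from the target cluster, keys unexpectedly
    present only on the target, and keys whose payloads differ.
    """
    missing = sorted(k for k in source if k not in target)
    extra = sorted(k for k in target if k not in source)
    mismatched = sorted(
        k for k in source if k in target and source[k] != target[k]
    )
    return {"missing": missing, "extra": extra, "mismatched": mismatched}

# Hypothetical snapshots of the same topic on the old and new clusters
old_snapshot = {"r1": "a", "r2": "b", "r3": "c"}
msk_snapshot = {"r1": "a", "r2": "B", "r4": "d"}

print(diff_snapshots(old_snapshot, msk_snapshot))
# {'missing': ['r3'], 'extra': ['r4'], 'mismatched': ['r2']}
```

An empty result in all three buckets after the parallel stack had fully hydrated was our signal that a consumer group was safe to cut over.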
Conclusion
In this post, we showed you how we modernized our Kafka infrastructure by migrating to Amazon MSK. We walked through our decision-making process, the challenges we faced, and the strategies we employed. Our journey transformed Kafka operations from a resource-intensive, self-managed infrastructure into a streamlined, managed service, improving operational efficiency, platform reliability, and team productivity. For enterprises managing self-hosted Kafka infrastructure, our experience demonstrates that successful transformation is achievable with proper planning and execution. As data streaming needs grow, modernizing infrastructure becomes a strategic imperative for maintaining competitive advantage.
For more information, visit the Amazon MSK product page, and explore the comprehensive Developer Guide to learn about the features available to help you build scalable and reliable streaming data applications on AWS.
About the authors