
Playtika Is Winning at Streaming Data Transformations


All organizations have some sort of ETL or ELT process. At the Big Data Conference, Jack Gudenkauf of Playtika introduced a new type of process named PSTL, or Parallelized Streaming Transformation Loads, which parallelizes the load and transform steps throughout the entire pipeline.

Background

Playtika is a leader in social casino gaming, with games such as World Series of Poker, Bingo Blitz, and Slotomania. Jack Gudenkauf, VP of Big Data at Playtika, led the development and implementation of an architecture and solution that provides scalable and reliable management of the entire data pipeline across all of these studios.

Jack’s inspiration for this solution came from his time as manager of the Twitter Analytics Data Warehouse team, and from his time at Microsoft, where he invented a precursor to the PSTL system called “Shuttle”, a multi-tier distributed data platform for importing disparate MSN.com data. For a few years, Jack had envisioned a unified data pipeline with parallelism throughout: from streaming data in, through transformations over poly-structured data. His solution had to provide high availability and strong durability, increase productivity, enable analytics, support SQL, perform well, and scale at every point in the pipeline.

Original Architecture

Originally at Playtika, the sources of truth were siloed databases for each studio, such as Bingo Blitz or Slotomania. Users and sessions were identified locally and then fed into the global data warehouse, Vertica.

Playtika Original ETL Architecture

In the original architecture, the game applications wrote to Flume, then to a Java ETL parser and loader, which loaded the data into Vertica using COPY. A robust solution that could be parallelized was needed to create a global source of truth in the global data warehouse.

New Architecture

Playtika PSTL Architecture

In the new architecture, the game applications write to a local Kafka queue as producers. This small change allows real-time messages to be read from the “queues” in parallel. Next, Spark is used for ETL instead of a monolithic Java application. When partitions are read from Kafka, they are loaded directly into RDDs (Resilient Distributed Datasets), allowing transformations to execute in parallel across the cluster. Lastly, instead of a single COPY command loading into Vertica, multiple COPY streams can run in parallel.
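To make the ingest step concrete, here is a minimal sketch (not Playtika’s actual code) of reading Kafka partitions directly into Spark RDDs with the Spark Streaming direct Kafka API of that era; the broker list, topic name, and batch interval are illustrative assumptions.

```scala
// Minimal sketch of the ingest step: each Kafka partition is read directly
// into a Spark RDD partition, so the transform step runs in parallel across
// the cluster. Broker list and topic name are placeholders.
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object PstlIngestSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("pstl-ingest-sketch")
    val ssc  = new StreamingContext(conf, Seconds(30))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics      = Set("slotomania.sessions")   // one topic per game/fact table

    // The direct stream maps Kafka partitions one-to-one onto RDD partitions.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    stream.foreachRDD { rdd =>
      // Transform step: parse/shape each partition in parallel; the load step
      // (streaming each partition to its Vertica node) would follow here.
      val rawJson = rdd.map { case (_, json) => json }
      println(s"batch contained ${rawJson.count()} records")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```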

The differentiator in this architecture is that multiple applications can run in parallel, with transformations running independently of the extracts. Additionally, the loaders can write independently of the transformations. With a solution such as PSTL, every step of the architecture can be parallelized. This enables analytics over semi-structured data, machine learning without consuming resources on Vertica, and data validation throughout the entire pipeline.

The Solution

In Playtika’s solution, a Kafka topic represents an application combined with a session or user and some delineation of the data, which happens to match the Vertica fact tables. Topics can therefore be partitioned, allowing multiple streams to read partitions of data. The RDDs in Spark are partitioned elements held in memory across the Hadoop cluster and can be operated on in parallel. Users and sessions are imported and mapped to give consistency across all data stores; a UserId globally identifies a user throughout the system.
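As a purely hypothetical illustration of that topic-to-table convention (the naming scheme below is assumed, not Playtika’s), a topic name that encodes the game application and record type can be mapped directly onto its Vertica fact table:

```scala
// Hypothetical naming convention only: a Kafka topic identifies the game
// application and the kind of record, and maps one-to-one onto a fact table.
object TopicMapping {
  // e.g. "slotomania.sessions" -> "slotomania_sessions_fact"
  def factTableFor(topic: String): String =
    topic.replace('.', '_') + "_fact"
}
```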

The Vertica projection hash function on UserId is leveraged before the data is loaded, to determine which specific Vertica node a record will be loaded to. Using Spark, the session, user, and other RDD record columns are hashed to match the projection column(s) of the destination Vertica table. This avoids data movement between Vertica nodes on load.
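The sketch below, under assumed types and an assumed cluster size, shows the idea of pre-segmenting an RDD by UserId so that each Spark partition holds only rows destined for one Vertica node; verticaHash() is a stand-in for the actual Vertica projection HASH() function that the real implementation mirrors.

```scala
// Sketch of pre-segmenting records by UserId so each Spark partition maps to
// a single Vertica node. verticaHash() is a placeholder for reproducing the
// Vertica projection HASH() function; the node count is an assumption.
import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

case class SessionEvent(userId: Long, sessionId: String, payload: String)

object SegmentByUser {
  val verticaNodeCount = 8                       // assumed cluster size

  // Placeholder: the real PSTL matches Vertica's HASH(UserId) exactly.
  def verticaHash(userId: Long): Long = userId

  def segment(events: RDD[SessionEvent]): RDD[(Long, SessionEvent)] =
    events
      .map(e => (verticaHash(e.userId) % verticaNodeCount, e))
      .partitionBy(new HashPartitioner(verticaNodeCount))
  // Each resulting partition now holds only rows destined for one Vertica
  // node, so the load step never moves data between nodes.
}
```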

Data is loaded in parallel from in-memory RDD partitions directly to specific Vertica nodes based on the hash(). This is accomplished using a Vertica user-defined tcpServer COPY source. The source can be Spark RDD partitions, or any streaming source such as netcat, that streams data in parallel over a socket to each Vertica node listening on a given port. With a single Vertica COPY command per table, all Vertica nodes listen on a given port, and each node may also have a level of parallelism set (e.g., 4), allowing direct, concurrent, parallel writes to each Vertica node.
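The following sketch is illustrative only: the user-defined source name (TcpServerSource) and its parameters are assumptions, not the actual PSTL COPY source. It shows the shape of the approach described above: one COPY per table with every node listening on a port, while each Spark partition streams its rows over a socket to the node it was pre-hashed for.

```scala
// Illustrative sketch only: TcpServerSource and its parameters are assumed
// names, not the real PSTL user-defined COPY source.
import java.io.PrintWriter
import java.net.Socket
import java.sql.DriverManager

object ParallelCopySketch {
  // One COPY per table: every Vertica node listens on the given port, with a
  // per-node parallelism of 4 as described in the talk. The statement runs
  // until the streaming sources close their sockets.
  def startCopy(): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:vertica://vertica-node1:5433/warehouse", "dbadmin", "secret")
    val stmt = conn.createStatement()
    stmt.execute(
      "COPY sessions_fact WITH SOURCE TcpServerSource(port=4242, parallelism=4)")
    conn.close()
  }

  // Called from foreachPartition: stream one RDD partition's rows, already in
  // delimited form, to the Vertica node the pre-hash assigned to it.
  def streamPartition(verticaHost: String, rows: Iterator[String]): Unit = {
    val socket = new Socket(verticaHost, 4242)
    val out    = new PrintWriter(socket.getOutputStream)
    rows.foreach(row => out.println(row))
    out.flush()
    socket.close()
  }
}
```

The same listening port can be exercised manually with netcat for ad-hoc testing, as noted above.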

After loading into the Vertica data warehouse, the raw un-parsed JSON and the transformed, structured Spark DataFrames (RDDs with schema) that now match the Vertica table definitions are written to Parquet/ORC files on HDFS. The Kafka partition offsets processed in each streaming batch are stored in MySQL, which enables idempotent replaying of data in case of bugs in the PSTL.
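A sketch of that bookkeeping, under assumed HDFS paths and an assumed MySQL table, might look like the following: the batch’s raw and structured data are archived to Parquet, and the Kafka offset ranges the batch covered are recorded so it can be replayed idempotently.

```scala
// Sketch only: HDFS paths, the MySQL table, and its columns are assumptions.
import java.sql.DriverManager
import org.apache.spark.sql.DataFrame
import org.apache.spark.streaming.kafka.OffsetRange

object BatchBookkeeping {
  // Archive both the raw JSON and the structured frames for this batch.
  def archive(rawJson: DataFrame, structured: DataFrame, batchTime: Long): Unit = {
    rawJson.write.parquet(s"hdfs:///pstl/raw/batch=$batchTime")
    structured.write.parquet(s"hdfs:///pstl/structured/batch=$batchTime")
  }

  // Record the offset range each batch processed so the batch can be replayed
  // idempotently if a bug is later found in the pipeline.
  def recordOffsets(offsets: Array[OffsetRange], batchTime: Long): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://mysql-host:3306/pstl", "pstl", "secret")
    val stmt = conn.prepareStatement(
      "INSERT INTO batch_offsets (batch_time, topic, kafka_partition, from_offset, until_offset) " +
      "VALUES (?, ?, ?, ?, ?)")
    offsets.foreach { o =>
      stmt.setLong(1, batchTime)
      stmt.setString(2, o.topic)
      stmt.setInt(3, o.partition)
      stmt.setLong(4, o.fromOffset)
      stmt.setLong(5, o.untilOffset)
      stmt.addBatch()
    }
    stmt.executeBatch()
    conn.close()
  }
}
```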

The architecture is a truly parallel, scalable model.

The Result

Playtika was able to load 451 GB in 7 minutes and 35 seconds using an 8-node cluster, roughly 3.6 TB/hour, or about 0.45 TB/hour per node. As a reference from last year’s post, Facebook needed a 270-node cluster with 45 dedicated loader nodes to ingest 35 TB/hour. At the same per-node rate, Playtika can achieve that ingest rate with an 81-node cluster and no dedicated loader nodes. The solution is also scalable.

Playtika Ingest Nodes

Conclusion

Jack’s session on the PSTL architecture and solution packed the room at the Big Data Conference. The session highlighted not only an impressive architecture and solution, but also the innovation in how it leverages Kafka, Spark, and Vertica. Moving away from dedicated ephemeral nodes to targeted, pre-hashed data loads leads to faster and more efficient data ingestion. Organizations will be able to test out new loading techniques when the hash function is released as part of Excavator.

Lastly, Jack remarked that he was extremely grateful for the support from Vertica’s technical resources:

They’re amazing! Having shipped 15 products at Microsoft, I can tell you that you would be hard pressed to get this kind of love from most vendors.

You can see all the slides from the session on Slideshare, view the PSTL Wiki on GitHub, and connect with Jack on Twitter.

October 19, 2015

The session is now available for viewing.

About the author

Norbert Krupa

Norbert is the founder of vertica.tips and a Solutions Engineer at Talend. He is an HP Accredited Solutions Expert for Vertica Big Data Solutions. He has written the Vertica Diagnostic Queries which aim to cover monitoring, diagnostics and performance tuning. The views, opinions, and thoughts expressed here do not represent those of the user's employer.

1 Comment

  1. Arun, December 12, 2015 at 6:43 AM

    HP Vertica is an awesome analytics DB if you need speed and quick time to value.


