
Incremental window functions using AWS Glue Bookmarks

The out-of-order data landing problem

Applying window functions over data is non-trivial when data arrives out-of-order with respect to the dimension the window function is applied across. For clarity, let's take timeseries data as our window dimension in this example. If data for Tuesday through Thursday of a week arrives first, and data for Monday of that week arrives later, the Monday data has arrived out-of-order.

Photo by Ricardo Gomez Angel on Unsplash

Because a window function's output is sensitive to the rows surrounding it in time, newly landed out-of-order data alters the results of the window function. All affected data needs to be reprocessed.

You could simply reprocess all the data whenever data arrives out-of-order, but when data quantities are large, reprocessing the entire dataset becomes impractical. This article discusses a more efficient approach, building on the AWS Glue pushdown predicate technique described in my previous article: only the data affected by the out-of-order arrivals is reprocessed.

Solution

Glue ETL Job environment setup
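
A minimal sketch of the standard Glue job boilerplate; job.init (paired with a job.commit at the end of the script) is what persists the bookmark state between runs.

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

# Standard Glue ETL job initialization
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)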

Inspect new data

Use AWS Glue Bookmarks to feed only new data into the Glue ETL job.

new = glueContext.create_dynamic_frame.from_catalog(database="db", table_name="table", transformation_ctx='new')

Find the earliest timestamp partition for each partition that is touched by the new data.

Note: in the example below, the data is partitioned as partition1 > timestamp_partition, where timestamp_partition is the only timeseries partitioning.
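
A sketch of this step, assuming the partition columns surface on the DynamicFrame as partition1 and timestamp_partition:

from pyspark.sql import functions as F

# For every partition1 value touched by the new data, find the earliest
# timestamp_partition that landed (min over 'YYYY-MM-DD' strings is chronological).
earliest = (new.toDF()
            .groupBy('partition1')
            .agg(F.min('timestamp_partition').alias('min_timestamp_partition'))
            .collect())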

Build the pushdown predicate string describing all the data from the entire dataset that needs to be processed/re-processed. For partitions where no data landed out-of-order, manually define a window of data to unlock into the past as well, so that new data landing in order is still processed correctly (the window function needs to look back over recently processed rows).

Note: in the example below we are partitioning in timestamp_partition by date.
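
One way to build the predicate, assuming timestamp_partition values are 'YYYY-MM-DD' strings. REPROCESS_WINDOW_DAYS is a hypothetical setting, sized to cover your window function's lookback; applying it to every touched partition also unlocks a window into the past for partitions whose data landed in order:

from datetime import datetime, timedelta

REPROCESS_WINDOW_DAYS = 3  # hypothetical; size to your window's lookback

clauses = []
for row in earliest:
    earliest_date = datetime.strptime(row['min_timestamp_partition'], '%Y-%m-%d')
    unlock_from = earliest_date - timedelta(days=REPROCESS_WINDOW_DAYS)
    clauses.append("(partition1 == '{}' and timestamp_partition >= '{}')".format(
        row['partition1'], unlock_from.strftime('%Y-%m-%d')))
predicate = ' or '.join(clauses)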

Applying the window function on all required data

Now we can load all the data that needs to be processed/re-processed using the predicate string we just built.
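
Something like the following; note there is no transformation_ctx on this read, so the bookmark does not filter out the previously processed rows we want back:

# Reload every partition matched by the predicate, ignoring the bookmark.
all_data = glueContext.create_dynamic_frame.from_catalog(
    database="db",
    table_name="table",
    push_down_predicate=predicate).toDF()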

Then we can define our window on all the data loaded.
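
A sketch, assuming each time series is keyed by a hypothetical id column and ordered by a timestamp column:

from pyspark.sql import Window

# Running window from the start of each series up to the current row
window = (Window.partitionBy('id')
          .orderBy('timestamp')
          .rowsBetween(Window.unboundedPreceding, Window.currentRow))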

Apply your function on your windows; we use the last function as an example here.
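
For example, carrying the last non-null value forward over the window (the value column name is illustrative):

from pyspark.sql import functions as F

result = all_data.withColumn(
    'last_value', F.last('value', ignorenulls=True).over(window))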

To ensure that any old data changed during reprocessing is overwritten, use the PySpark API's overwrite mode to write directly to S3.
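
A sketch with a placeholder S3 path; dynamic partition overwrite (available since Spark 2.3) replaces only the partitions present in the result, leaving the rest of the dataset untouched:

# Overwrite only the partitions we reprocessed, then commit the bookmark.
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
(result.write
 .mode('overwrite')
 .partitionBy('partition1', 'timestamp_partition')
 .parquet('s3://bucket/path/'))

job.commit()  # advance the Glue Bookmark so the next run only sees newer data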

Conclusion

Building pushdown predicate strings from the new data delivered by AWS Glue Bookmarks enables processing/re-processing only the required partitions of the dataset, even when using window functions, which are inherently sensitive to their surroundings within the dataset.


Originally published at https://datamunch.tech.

