Data Engineering

How Do Column Types Affect Join Speeds In Data Warehouses?

A Controlled Experiment

Borna Almasi
Towards Data Science
4 min read · Jan 29, 2021


When I first started building the Data Vault at Georgian, I couldn’t decide what column data type to use as my tables’ primary key.

I had heard that integer joins vastly outperform string joins, and I was worried about degrading join performance as our data grew.

In the SQL databases of the operational world, this decision is pretty much made for you: you get auto-incrementing integer primary keys out of the box.

However, in the Data Warehousing world, whether you’re building a Kimball or Data Vault or something else, you need to make this choice explicitly.

You can generate an integer, a UUID string, or hash your keys into a single column, an approach that comes with benefits of its own. As if that wasn’t complex enough, hashed keys can be stored as strings or as raw bytes, and each algorithm’s output varies in length.
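In BigQuery, for example, each of these options is a single expression. This is just an illustrative sketch; business_key is a made-up column name:

-- Three candidate surrogate key types for the same business key
SELECT
  business_key,                                          -- plain integer
  GENERATE_UUID() AS key_uuid,                           -- random UUID string
  TO_HEX(MD5(CAST(business_key AS STRING))) AS key_md5   -- deterministic hash
FROM UNNEST(GENERATE_ARRAY(1, 3)) AS business_key;

Note that the UUID is random per row, while the hash is deterministic: the same business key always produces the same hashed key, which is what makes hashing attractive for warehouse keys.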

This brings up a question:

How does the column type of keys affect the speed of joins in Data Warehouses?

After some digging, I found some benchmarks for transactional databases, but that’s not what I was looking for. Logically speaking, integers must be faster than strings and byte-strings because there are generally fewer bytes to scan. But… by how much!?

Knowing the answer seemed very important because, in a data warehouse, a bad choice can get multiplied a billion-fold.

I finally buckled under the pressure of curiosity and decided to run a benchmark on BigQuery to answer this question for myself.

Experiment Design

I decided to generate 5 million rows of random numbers and test joining them (without cache) on the following types of keys:

  - Integer (the original key)
  - FARM_FINGERPRINT (a 64-bit integer hash)
  - MD5, stored both as bytes and as a hex string
  - SHA1, stored both as bytes and as a hex string

Here is the code I used to generate the tables I wanted to join:

/* GENERATE_ARRAY has a limit of 1M rows
so I had to union a bunch of them together */

WITH
  keys_1 AS (SELECT * FROM UNNEST(GENERATE_ARRAY(1, 1000000)) AS key),
  keys_2 AS (SELECT * FROM UNNEST(GENERATE_ARRAY(1000001, 2000000)) AS key),
  keys_3 AS (SELECT * FROM UNNEST(GENERATE_ARRAY(2000001, 3000000)) AS key),
  keys_4 AS (SELECT * FROM UNNEST(GENERATE_ARRAY(3000001, 4000000)) AS key),
  keys_5 AS (SELECT * FROM UNNEST(GENERATE_ARRAY(4000001, 5000000)) AS key),
  keys_union AS (
    SELECT key FROM keys_1 UNION ALL
    SELECT key FROM keys_2 UNION ALL
    SELECT key FROM keys_3 UNION ALL
    SELECT key FROM keys_4 UNION ALL
    SELECT key FROM keys_5
  ),
  keys_hashed AS (
    SELECT
      key,
      MD5(CAST(key AS STRING)) AS key_md5_bytes,
      TO_HEX(MD5(CAST(key AS STRING))) AS key_md5_str,
      FARM_FINGERPRINT(CAST(key AS STRING)) AS key_farm,
      SHA1(CAST(key AS STRING)) AS key_sha_bytes,
      TO_HEX(SHA1(CAST(key AS STRING))) AS key_sha_str
    FROM keys_union
  )
SELECT *, RAND() AS val FROM keys_hashed
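The generation query above doesn’t save its output anywhere; to get the two tables joined below, you’d materialize the result, e.g. with CREATE OR REPLACE TABLE. A simplified sketch (showing only one key column and 1 million rows; the table name matches the join query below):

CREATE OR REPLACE TABLE bq_benchmark.t1 AS
SELECT
  key,
  TO_HEX(MD5(CAST(key AS STRING))) AS key_md5_str,
  RAND() AS val
FROM UNNEST(GENERATE_ARRAY(1, 1000000)) AS key;

Running the same statement with t2 as the target gives you the second table.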

And here is the code I used to run each join:

SELECT
  t1.val, t2.val
FROM bq_benchmark.t1
JOIN bq_benchmark.t2
  USING (<key column here>);

I ran an interactive join query 30 times for each key type so I could use a Z-test to test for the difference between the mean query times and get reliable confidence intervals.

Experiment Results

Some definitions you may find helpful when interpreting the results:

Lower Bound 90% Confidence Interval: There’s a 5% probability that the true mean query time is below this number.

Upper Bound 90% Confidence Interval: There’s a 5% probability that the true mean query time is above this number.

Standard Deviation: A measure of how much the individual query times in our sample deviate from the sample mean, on average.

Standard Error of the Estimate of the Mean: An estimate of how far our sample’s mean query time is likely to deviate from the true mean query time.
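If you log the individual query times into a table, all of these statistics can be computed in BigQuery itself. A sketch, assuming a hypothetical table bq_benchmark.query_times with columns key_type and elapsed_ms; 1.645 is the z-value for a 90% confidence interval:

SELECT
  key_type,
  AVG(elapsed_ms) AS mean_ms,
  STDDEV(elapsed_ms) AS stddev_ms,
  STDDEV(elapsed_ms) / SQRT(COUNT(*)) AS std_error_ms,
  AVG(elapsed_ms) - 1.645 * STDDEV(elapsed_ms) / SQRT(COUNT(*)) AS ci_90_lower,
  AVG(elapsed_ms) + 1.645 * STDDEV(elapsed_ms) / SQRT(COUNT(*)) AS ci_90_upper
FROM bq_benchmark.query_times
GROUP BY key_type;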

Statistics for Query Times For Each Column Type

You may also be interested in the comparative view of the above data. To keep things simple, I will only compare the difference in mean query times and ignore the confidence intervals of the differences (which I’ve made available in the Excel download below).

Relative Comparison of Mean Query Time

Conclusion

Here is everything I took away from this experiment.

  1. Integers are about 1.2x faster than bytes and about 1.4x faster than strings.
  2. If you have access to FARM_FINGERPRINT and you’re only using BigQuery, go ahead and use that (you can always switch it up later).
  3. Otherwise, simply use MD5 as your hash function stored as bytes.
  4. If you choose to use a string, don’t use hex encoding as I did. Base64 encoding produces shorter strings, and thus faster query times, than hex (though still not as fast as raw bytes).
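
On point 4, the size difference is easy to verify: an MD5 digest is 16 bytes, which hex-encodes to 32 characters but base64-encodes to only 24.

SELECT
  BYTE_LENGTH(MD5('1')) AS md5_bytes,           -- 16
  LENGTH(TO_HEX(MD5('1'))) AS hex_chars,        -- 32
  LENGTH(TO_BASE64(MD5('1'))) AS base64_chars;  -- 24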

I’ve made my entire experiment available for you to download in an Excel sheet. I’ve made it dead simple to use. Feel free to add your own data to it and experiment on the data warehouse of your choice!

Click Here To Download The Excel Sheet

This article was originally published on the Show Me The Data blog and has been re-formatted for Medium.

