
Effective Data Storytelling for Larger-than-Memory Datasets

Creating intuitive, interactive web applications to visualize Big Data with Streamlit, Dask, and Coiled

Avril Aysha
Towards Data Science
10 min read · Jul 22, 2021


tl;dr

Integrating Streamlit with Dask and Coiled allows you to create intuitive, interactive web applications that can process large amounts of data effortlessly. This blog post walks you through writing an integrated Streamlit-on-Coiled script to present 10+ GB of data in an interactive heatmap visualization. We then expand the script to include:

  1. a heavier groupby computation,
  2. interactive widgets to scale the Coiled cluster up or down, and
  3. an option to shut down the cluster on demand.

Streamlit Supercharged for Big Data

Streamlit enables data scientists to build lightweight, intuitive web applications without writing any frontend code. You don’t even have to leave the friendly confines of Python; it’s that good ;)

That said, working with a front-end solution like Streamlit can become cumbersome once your dataset or computation grows beyond something that completes within a few seconds. It’s likely that you’re using Streamlit in the first place because you want to create a smoother, more intuitive user experience. And my guess is that having your user sit around for minutes (or hours!) while a computation runs doesn’t make it into your definition of “smooth”…

You could, of course, choose to pay for and manage expensive clusters of machines (either locally or on the cloud). But unless your Streamlit app is as popular as Netflix, it’s likely that your cluster will sit idle for long periods of time. That means time and money wasted. Not great, either!

Delegating your heavy compute to a Dask cluster could well be worth considering here. Coiled allows you to spin up on-demand Dask clusters in the cloud without having to worry about any of the DevOps like setting up nodes, security, scaling or even shutting the cluster down. Joining forces in a single web app, Streamlit handles the frontend layout and interactivity of your application while Coiled sorts out the backend infrastructure for demanding computations.

This blog post will show you how to build a Streamlit-on-Coiled application. We’ll start with a basic script that loads more than 10 GB of data from the NYC Taxi dataset into an interactive user interface. From there, we’ll really get the most out of Dask and Coiled by running an even heavier workload. Finally, we’ll tweak our Streamlit interface to allow the user to scale the cluster up and down using a simple slider and include a button to shut down the cluster, giving the user even more control over their computational power — without having to do any coding.

You can download the basic and final, extended Python scripts from this GitHub repo. To code along, you’ll need a Coiled Free Tier account, which you can set up using your GitHub credentials via cloud.coiled.io. Some basic familiarity with Dask and Streamlit is helpful, but not a must.

Disclaimer: I work at Coiled as a Data Science Evangelist Intern. Coiled was founded by Matthew Rocklin, the initial author of Dask, an open-source Python library for distributed computing.

Visualizing a Larger-than-memory Dataset

The example script below uses Coiled and Streamlit to read more than 146 million records (10+ GB) from the NYC Taxi data set and visualize locations of taxi pickups and dropoffs. Let’s break down what’s happening in the script:

First, we import the Python libraries we need to run the script, in this case Coiled, Dask, Streamlit, and Folium.
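For concreteness, here’s a minimal sketch of what that import block could look like. The streamlit_folium helper (for embedding Folium maps inside Streamlit) is my assumption; the original script may render the map differently.

```python
import coiled                            # spin up and manage Dask clusters in the cloud
import dask.dataframe as dd              # larger-than-memory DataFrames
import folium                            # map rendering
import streamlit as st                   # front-end widgets and layout
from dask.distributed import Client      # connect to the Dask scheduler
from folium.plugins import HeatMap       # heatmap layer for the Folium map
from streamlit_folium import folium_static  # assumed helper for showing Folium maps in Streamlit
```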

In the next section, we create the front-end user interface with Streamlit. We start with some descriptive headers and text and then include two drop-down boxes to allow the user to select the kind of data they want to visualize.
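A sketch of that front-end section, using Streamlit’s header, write, and selectbox widgets; the labels and option values are placeholders, not the original script’s exact wording:

```python
# Front-end layout: headers, explanatory text, and two drop-down selectors
st.header("NYC Taxi on Coiled")
st.subheader("Visualizing 146 million taxi rides with Streamlit, Dask, and Coiled")
st.write(
    "Use the drop-downs below to choose which locations to plot "
    "and which attribute to weight the heatmap by."
)

taxi_mode = st.selectbox("Show pickups or dropoffs?", ["Pickups", "Dropoffs"])
color_by = st.selectbox("Weight heatmap by:", ["Passenger count", "Tip amount"])
```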

From there, we write the function that will spin up a Coiled cluster. This is where we specify the number of workers, the name of the cluster so we can reuse it later (this is crucial if you have multiple people viewing your Streamlit app), and the software environment to distribute to our scheduler and workers. See this page in the Coiled docs for more on how to set up software environments.
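A sketch of that cluster-launching function, under some assumptions: the worker count, cluster name, and software environment name are placeholders, and caching the connection with st.cache (so it is created only once rather than on every refresh) is a common Streamlit pattern rather than necessarily the author’s exact approach:

```python
@st.cache(allow_output_mutation=True)  # create the cluster once and reuse it across app refreshes
def get_client():
    cluster = coiled.Cluster(
        n_workers=10,                         # placeholder worker count
        name="streamlit-on-coiled",           # naming the cluster lets later runs (and other viewers) reuse it
        software="my-account/streamlit-env",  # placeholder software environment
    )
    return Client(cluster)

client = get_client()
```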

You can view any active and closed clusters, as well as your software environments and cluster configurations on the Coiled Cloud page (provided you’re signed in to your account).

Next, we load in the data from the public Amazon S3 bucket as a Dask DataFrame, specifying the columns we want to include and the blocksize of each partition. Note the call to df.persist() here. This persists the DataFrame on the cluster so that it doesn’t need to be reloaded every time the app refreshes. After this call, the dataset is available for immediate access as long as the cluster is running.
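A sketch of the data-loading step, assuming the public nyc-tlc bucket of 2015 Yellow Taxi CSVs; the exact path, column list, and blocksize are illustrative:

```python
# Read the 2015 Yellow Taxi CSVs straight from S3 into a Dask DataFrame
df = dd.read_csv(
    "s3://nyc-tlc/trip data/yellow_tripdata_2015-*.csv",
    usecols=[
        "passenger_count", "tip_amount", "payment_type",
        "pickup_longitude", "pickup_latitude",
        "dropoff_longitude", "dropoff_latitude",
    ],
    blocksize="16 MiB",               # small partitions -> plenty of tasks to spread across workers
    storage_options={"anon": True},   # the bucket is public, so no credentials are needed
)

df = df.persist()  # keep the data in cluster memory so refreshes don't re-read from S3
```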

Finally, we use the input of the Streamlit widgets above to create a subset of the data called map_data and pass that to the Folium map, specifying that we want it displayed as a heatmap rendering.
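A sketch of that final step. Folium can’t plot 146 million individual points, so this version pulls back a small random sample for the heatmap; whether the original script samples, filters, or aggregates differently is an assumption on my part:

```python
# Translate the widget selections into column names
if taxi_mode == "Pickups":
    lat_col, lon_col = "pickup_latitude", "pickup_longitude"
else:
    lat_col, lon_col = "dropoff_latitude", "dropoff_longitude"
weight_col = "passenger_count" if color_by == "Passenger count" else "tip_amount"

# Compute a small sample on the cluster and bring it back for plotting
map_data = (
    df[[lat_col, lon_col, weight_col]]
    .dropna()
    .sample(frac=0.0001)   # illustrative sampling fraction
    .compute()
)

# Render the sample as a heatmap on a Folium map of NYC
nyc_map = folium.Map(location=[40.76, -73.95], zoom_start=12)
HeatMap(map_data.values.tolist()).add_to(nyc_map)
folium_static(nyc_map)
```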

That’s it! Let’s see what this looks like.

Note that this is a stand-alone Python script that you can run from your terminal using streamlit run <path/to/file> — and not from a Jupyter Notebook.

So go ahead and run that in your terminal…and in a matter of seconds, your browser should present you with an interactive interface like the one below.

Pretty amazing right? Especially when you consider that with every refresh of the map, the app is processing over 146 million rows (that’s more than 10GB) of data in the blink of an eye!

Populating the Dask Dashboard

Let’s now move on to see how Coiled handles even heavier workloads. If you happened to click the URL to the Dask Dashboard, you would’ve seen that the computations to generate the map were completed in just a few tasks. While Dask handles that without skipping a beat, it is actually designed for distributed computing — and really shows its teeth when there’s a large number of tasks for it to run through. So let’s give it a chance to shine, shall we?

We’ll create a new section in the script that allows the user to set up a groupby computation. We’ll give the user the option to choose which column to group by and which summary statistic to calculate, and we’ll include a button to trigger the computation.
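A sketch of that new section; the column choices and aggregation options are placeholders:

```python
st.subheader("Run a heavier groupby on the cluster")

group_col = st.selectbox("Group by:", ["passenger_count", "payment_type"])
agg_func = st.selectbox("Summary statistic for tip_amount:", ["mean", "sum", "count", "std"])

if st.button("Run groupby"):
    # Triggers a full pass over the 146 million rows on the Coiled cluster
    result = df.groupby(group_col)["tip_amount"].agg(agg_func).compute()
    st.write(result)
```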

Saving the Python script and rerunning streamlit run <path/to/file> in your terminal will load the updated version of the Streamlit app, like the one below.

Dask dashboard showing groupby computations
Extended Streamlit interface

We can now customize our groupby computation with the new drop-down options. Clicking the new button triggers some heavy computation on our Coiled cluster, calculating a summary statistic across more than 146 million rows in a whopping 45 seconds.

Scaling and Shutting Down Your Coiled Cluster

But…if we’re being picky, it was a little overkill to use that entire cluster to generate the maps, which consisted of just a handful of tasks. And, on the other side of the spectrum, maybe you’re presenting this app to your overworked CEO right before an important board meeting, and the last thing you want is to have them stare at a spinning wheel while the groupby computation runs…for 45 seconds.

If only there was a way to scale our cluster up or down depending on our computation needs…

With a call to coiled.Cluster.scale() we can specify the number of workers that our cluster has. Note that we have to specify the name of the cluster to scale inside that call. Let’s go ahead and add a new section in our script where we attach that call to an interactive Streamlit slider. This means our user can now adjust their computational power as needed…right here in our web app, without having to write a single line of code.
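A sketch of that section, assuming we reattach to the running cluster by the name we gave it earlier; the slider range and default are placeholders:

```python
st.subheader("Adjust the size of your Coiled cluster")

n_workers = st.slider("Number of workers", min_value=2, max_value=20, value=10)

# Reconnect to the existing cluster by name, then scale it to the requested size
cluster = coiled.Cluster(name="streamlit-on-coiled")
cluster.scale(n_workers)
```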

Note that while downscaling is instant, scaling a cluster up takes a minute or two. The good thing is that you can continue running your computation while the cluster scales. You can use the Coiled Cloud web interface to see how many workers your cluster currently has.

Coiled Cloud UI Dashboard — cloud.coiled.io

Finally, let’s build in a button that allows the user to shut down the cluster to avoid unnecessary costs. Note that there’s a trade-off here: if you’re doing quick iterations of the Streamlit app, we recommend keeping the cluster running so you don’t have to wait for it to spin up every time you re-run the script. In this case, it’s important to name your cluster so that you can reference it in subsequent runs. If you’re all done for the foreseeable future, however, it’s good practice to shut the cluster down.
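One way to sketch that button, assuming the client created earlier is still in scope; client.shutdown() terminates the scheduler and its workers, though the original script may use a different call:

```python
if st.button("Shut down cluster"):
    client.shutdown()  # stops the scheduler and all workers on Coiled
    st.write("Cluster shut down.")
```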

And we’ll just sneak in a pro-tip here: Coiled clusters by default shut down after 20 minutes of inactivity. You can tweak this by using the idle_timeout keyword argument to set your own preferred time-out window.
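For example, when launching the cluster you might pass a longer window; the value and its string format here are assumptions:

```python
cluster = coiled.Cluster(
    n_workers=10,
    name="streamlit-on-coiled",
    idle_timeout="2 hours",  # default is 20 minutes of inactivity
)
```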

Let’s Recap

We started out by running the Streamlit-on-Coiled example script from the Coiled Docs. We saw how quickly and effortlessly we were able to create an intuitive, interactive web application that could process 146 million rows of data. Next, we took this a little further and gave the users of our web app the ability to calculate a heavier computation on our Coiled cluster. We then supercharged our computation by building in an option to scale the cluster up (or down) as needed. Finally, we discussed when and how to shut down your cluster to avoid unnecessary costs.

I hope this blog post helps you create effective data storytelling apps to communicate the impact of your data science workflows. If you have any questions or suggestions for future material, please feel free to reach out either here or on the Coiled Community Slack channel. I’d love to hear from you!
