GETTING STARTED

Hi everyone, today I will demonstrate a simple way to extract data from an Excel/CSV file into your SQL database in Bubble.io.
This workflow uses an AWS S3 bucket and a Lambda function as a connecting bridge, which is both time-efficient compared with manual input and cost-efficient if you're on the AWS free tier.
The only cost is installing a Bubble plugin called AWS File Uploader ($35), and that's all. Amazing, right?
Let’s now dig in and see how it works.
Step 1: Confirm your input data and build database accordingly
This is actually a chicken-and-egg problem. Usually it happens when you find that Excel/Access can no longer serve your purpose of managing and retrieving data promptly, and that's when you start to build your relational (and/or) non-relational database.

For example, as a senior project manager overseeing a global or regional market, using Excel as a standard template for distribution and extracting data from it is very helpful, especially when you have project managers all over the world and want to centralize project information and monitor progress from a holistic view.
Step 2: Build your Bubble page to reflect the project information

Let’s say I want to update this project information.

I know this page looks ugly, but it's just a basic demonstration of the information listed. You can develop numerous variations from it.
A few extra notes here:
- Allow users to download the Excel template; this ensures that everyone inputs uniform data that is easy to update;
- You can add a prefix to the file name so you won't be troubled by all kinds of different filenames; you can also extract key information from the filename to use as a key identifier;
- I suggest choosing _uuidfilename as the file structure, since it makes it easier to see each individual upload.
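To illustrate the key-identifier idea, here is a minimal sketch of parsing an uploaded file's name. The `<project-id>_<uuid>_<original-name>` convention below is purely hypothetical; adapt the split to whatever prefix and file structure you actually configure in the plugin:

```python
# Hypothetical naming convention: "<project-id>_<uuid>_<original-name>.xlsx".
# The real structure depends on the prefix you set in AWS File Uploader.
def parse_upload_key(key: str) -> dict:
    """Split an uploaded file key into project id, uuid, and original name."""
    stem, _, ext = key.rpartition(".")
    project_id, file_uuid, original = stem.split("_", 2)
    return {
        "project_id": project_id,
        "uuid": file_uuid,
        "original_name": f"{original}.{ext}",
    }
```

With a filename like `PRJ001_123e4567_plan.xlsx`, this would give you `PRJ001` as the key identifier to match the upload to a project record.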

Step 3: Deploy the bubble plugin – AWS workflow
AWS File Uploader plugin 👉 S3 Bucket 👉 Lambda 👉 SQL database
The connection between the S3 bucket and Lambda is realized by a trigger: as soon as anything is uploaded to the S3 bucket, an event is generated and your code starts from there.
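Inside the Lambda function, that trigger event arrives as a JSON payload following the standard S3 notification structure. A small sketch of pulling the bucket and object key out of it (note that S3 URL-encodes keys in the event, so spaces arrive as `+`):

```python
from urllib.parse import unquote_plus

def parse_s3_event(event: dict) -> list[tuple[str, str]]:
    """Return (bucket, key) pairs from an S3 'ObjectCreated' trigger event.

    S3 URL-encodes object keys in the notification payload, so we decode
    them before using them in a get_object call.
    """
    return [
        (rec["s3"]["bucket"]["name"], unquote_plus(rec["s3"]["object"]["key"]))
        for rec in event.get("Records", [])
    ]
```

Both Lambda functions in the next step start from exactly this kind of event.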


In my method, I use two layers for data extraction.
In the first layer, the 'File Uploaded' S3 bucket triggers the 'ExtractionFunction' Lambda function, which outputs a new CSV file to the 'CSV Receiver' S3 bucket.
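A minimal sketch of what this first-layer function might look like, assuming a pandas Lambda layer is attached (see the notes at the end) and that `csv-receiver-bucket` is a placeholder for your actual second bucket's name:

```python
def csv_key_for(xlsx_key: str) -> str:
    """Derive the output CSV key from the uploaded Excel key."""
    stem = xlsx_key.rsplit(".", 1)[0]
    return f"{stem}.csv"

def lambda_handler(event, context):
    # Imported inside the handler here so the helper above stays
    # dependency-free; in a real Lambda, import at module level.
    import io
    import boto3
    import pandas as pd  # provided by a pandas Lambda layer

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Read the uploaded Excel file and convert it to CSV in memory.
        obj = s3.get_object(Bucket=bucket, Key=key)
        df = pd.read_excel(io.BytesIO(obj["Body"].read()))
        body = df.to_csv(index=False).encode("utf-8")
        # "csv-receiver-bucket" is a placeholder for your second bucket.
        s3.put_object(Bucket="csv-receiver-bucket",
                      Key=csv_key_for(key), Body=body)
```

Writing the CSV into a second bucket (rather than back into the same one) matters: putting it back into 'File Uploaded' would re-fire the same trigger in a loop.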


In the second layer, the 'CSV Receiver' S3 bucket triggers the 'CSV to RDS' Lambda function, which extracts the data from the CSV and inserts it into the SQL database.
The reasons behind this design:
- The intermediate step lets users check for irregular data;
- For a very large Excel file, it splits the computing load across two Lambda functions instead of one;
- If needed, you can add a few lines of code to the first Lambda function to create a file upload log that tracks every user's upload history.
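A sketch of the second-layer function, assuming a pymysql Lambda layer and a table whose columns match the CSV header. The connection details and the `project_info` table name are placeholders; in practice, keep credentials in environment variables or Secrets Manager:

```python
import csv
import io

def rows_from_csv(text: str) -> tuple[list[str], list[list[str]]]:
    """Split CSV text into a header row and the data rows."""
    reader = csv.reader(io.StringIO(text))
    header, *rows = list(reader)
    return header, rows

def lambda_handler(event, context):
    import boto3
    import pymysql  # provided by a pymysql Lambda layer

    s3 = boto3.client("s3")
    # Placeholder connection details -- read these from environment
    # variables or Secrets Manager in a real deployment.
    conn = pymysql.connect(host="your-rds-endpoint", user="admin",
                           password="***", database="projects")
    for record in event["Records"]:
        obj = s3.get_object(Bucket=record["s3"]["bucket"]["name"],
                            Key=record["s3"]["object"]["key"])
        header, rows = rows_from_csv(obj["Body"].read().decode("utf-8"))
        # Build one parameterized INSERT per CSV column.
        placeholders = ", ".join(["%s"] * len(header))
        sql = (f"INSERT INTO project_info ({', '.join(header)}) "
               f"VALUES ({placeholders})")
        with conn.cursor() as cur:
            cur.executemany(sql, rows)
        conn.commit()
    conn.close()
```

Note the Lambda function must run inside (or have network access to) the VPC where your RDS instance lives, or the connection will time out.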
After successfully uploading the file and refreshing the page, the project info is updated!

Actually, there are many fun parts of the AWS ecosystem and many useful tools around Lambda, but given the length of this article I will only mention a few:
- Deploy a layer in Lambda if you're using the pandas or pymysql library;
- CloudWatch is very useful: it logs all your Lambda activity;
- If you're using the AWS File Uploader plugin in Bubble, make sure to adjust your S3 bucket's permission policy, or the upload won't work.
Reference: https://docs.zeroqode.com/plugins/aws-file-uploader-plugin
Thank you all for reading this article, and I will keep sharing useful tools and tips for Bubble.io.