How synthetic data can be used for data privacy protection and machine learning model development.

"Data is the new oil in the digital era"¹. Software engineers and data scientists often need access to large volumes of real data to develop, experiment, and innovate. Collecting such data unfortunately also introduces security liabilities and privacy concerns which affect individuals, organizations, and society at large. Data containing Personally Identifiable Information (PII) and Personal Health Information (PHI) are particularly vulnerable to disclosure, and need to be protected.
Regulations such as the General Data Protection Regulation (GDPR)² serve to provide a level of legal protection for user data, but consequently introduce new technical challenges by restricting data usage, collection, and storage methods. In light of this, synthetic data could serve as a viable solution to protect user data privacy, stay compliant with regulations, and still maintain the pace and ability for development and innovation.
In this article, we will dive into the details of the significant benefits synthetic data offers:
- To support machine learning model development, allowing for faster iteration of experimentation and model building.
- To facilitate collaboration between data owners and external data scientists/engineers, in a more secure and privacy-compliant way.
What is Synthetic Data?
As the name suggests, synthetic data is, in essence, ‘fake’ data that is artificially or programmatically generated, as opposed to ‘real’ data that is collected through real-world surveys or events. The creation of synthetic data stems from real data, and a good synthetic dataset is able to capture the underlying structure and display the same statistical distributions as the original data, rendering it statistically indistinguishable from the real one.
The first major benefit of synthetic data is its ability to support machine learning/deep learning model development. Often, developers need the flexibility to quickly test an idea or experiment with building a new model. However, sometimes it takes weeks to acquire and prepare sufficient amounts of data. Synthetic data opens the gateway for faster iteration of model training and experimenting, as it provides a blueprint of how models can be built on real data. Additionally, with synthetic data, ML practitioners gain complete sovereignty over the dataset. This includes controlling the degree of class separation, the sampling size, and the amount of noise in the dataset. In this article, we will show you how to improve an imbalanced dataset for machine learning with synthetic data.
The second major benefit of synthetic data is that it can protect data privacy. Real data contains sensitive and private user information that cannot be freely shared and is legally constrained. Approaches to preserving data privacy such as the k-anonymity model³ involve omitting data records to a certain extent. This results in an overall loss of information and data utility. In such cases, synthetic data serves as an excellent alternative to these data anonymization techniques. Synthetic datasets can be more openly published, shared, and analyzed, without revealing actual individual information.
1. Synthetic Data for ML Model Training
In this section, we will demonstrate one use case of synthetic data for machine learning: fixing an imbalanced dataset to support the training of a more accurate model.
What is dataset imbalance and why is it important?
"Any dataset with an unequal class distribution is technically imbalanced. However, a dataset is said to be imbalanced when there is a significant, or in some cases extreme, disproportion among the number of examples of each class of the problem."⁴
Machine learning model accuracy is severely hindered when trained on an imbalanced dataset. This is because the model is not exposed to sufficient samples of the minority class during training, inhibiting its ability to recognize such instances when evaluated on test and actual production data. For example, in fraud detection tasks, actual fraud records always belong to the minority class, and those are precisely the instances we need to detect.
A technique we can employ to combat the imbalanced dataset problem is "resampling". This is done either through undersampling the majority class or oversampling the minority class. Below, we show the use of SMOTE (Synthetic Minority Over-sampling Technique) to oversample the minority class and fix an imbalanced dataset. SMOTE creates synthetic observations based on existing minority observations, using a k-nearest neighbour algorithm.⁵
We begin by creating an imbalanced dataset with a 90:10 ratio of Class 1 to Class 2, using the sklearn library.
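A minimal sketch of this step, using sklearn's `make_classification` (the feature counts and random seed below are illustrative assumptions, since the exact parameters are not shown here):

```python
from collections import Counter
from sklearn.datasets import make_classification

# Generate a two-class dataset where ~90% of the samples belong to
# Class 1 and ~10% to Class 2 (labels 0 and 1 internally).
X, y = make_classification(
    n_samples=1000,
    n_features=4,
    n_informative=2,
    n_redundant=0,
    weights=[0.9, 0.1],  # 90:10 class imbalance
    random_state=42,
)

print(Counter(y))  # the majority class has roughly 9x more samples
```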

We can visualize our dataset in a 2D graph by reducing the dimensions of the dataset using Principal Component Analysis (PCA).
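A sketch of the projection and plot, using sklearn's `PCA` and matplotlib (the illustrative dataset from the previous step is regenerated here so the snippet is self-contained):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

# Recreate the illustrative 90:10 imbalanced dataset.
X, y = make_classification(
    n_samples=1000, n_features=4, n_informative=2, n_redundant=0,
    weights=[0.9, 0.1], random_state=42,
)

# Reduce the 4-dimensional features to 2 principal components.
X_2d = PCA(n_components=2).fit_transform(X)

# Scatter-plot each class separately so the imbalance is visible.
for label, name in [(0, "Class 1"), (1, "Class 2")]:
    plt.scatter(X_2d[y == label, 0], X_2d[y == label, 1], label=name, alpha=0.5)
plt.xlabel("First principal component")
plt.ylabel("Second principal component")
plt.legend()
plt.show()
```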

Next, we apply SMOTE to oversample the minority class and balance our dataset, and plot out our balanced dataset.


We can see that the number of samples in Class 1 and Class 2 is now equal, and the dataset is balanced.
Tips
- You should oversample only the training dataset. So make sure to split the dataset into training and testing sets before you apply SMOTE. Otherwise, synthetic samples will leak into the test set, introducing bias, and the evaluation won’t reflect your model’s true performance.
- Other methods to combat imbalanced datasets include undersampling the majority class, collecting more data, and changing the evaluation metric.
2. Synthetic Data for Privacy Protection
Instead of masking or anonymizing the original data, one can use synthetic data to protect data privacy.
Synthetic data:
- Retains the underlying structure and statistical distribution of the original data
- Does not rely on masking or omitting of the original data
- Provides a strong privacy guarantee to prevent sensitive user information from being disclosed
The picture below shows a synthetic dataset created from a sample real dataset. In the next article, we will dive into the method of generating a synthetic tabular dataset from a real tabular dataset like this one, using a conditional generative adversarial network (GAN).
[Figure: sample real dataset alongside the synthetic data generated from it]
With synthetic data, datasets can be shared and published more freely, opening up collaboration opportunities on projects across government agencies, the social sciences, health sectors, and software companies, where data is heavily regulated by privacy guidelines.
Conclusion
One of the most significant advantages that synthetic data offers is its ability to protect user data privacy. This unlocks opportunities for data sharing, publishing, and collaboration on a larger scale. Additionally, you could turn to synthetic data as a means to support the development of machine learning models and software tools, since it’s a cheaper and faster approach compared to collecting real-world data.
In the next article, we will show you how to generate synthetic data from a real dataset, and also discuss some limitations, challenges, and the future of synthetic data⁶.
Enjoyed this article? Join us on Project Alesia for more resources on personal data privacy, digital well-being, responsible machine learning, engineering, and a lot more!
References
1. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data
2. https://gdpr-info.eu
3. Sweeney, L. (2002). Achieving k-anonymity privacy protection using generalization and suppression. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(05), 571–588.
4. Fernández, A., García, S., Galar, M., Prati, R. C., Krawczyk, B., & Herrera, F. (2018). Learning from imbalanced data sets (pp. 1–377). Berlin: Springer.
5. Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321–357.
Editorial review: Prateek Sanyal