Individuals must be able to understand the implications of their choices to make good decisions for their digital lives
By Dr. David Roi Hardoon, Wong Kwang Lin and Dr. Yvonne Loh

Digital transformation has been lauded globally for its benefits of progress and development. At the heart of any digitally transformed organization is data: data that can be securely accessed by everyone to develop better products, improve customer experiences and glean deeper business insights.
Placing data at the centre of everyday decisions requires changes in individual skills and attitudes, organizational structures, policies, and relationships amongst parties. These considerations bring the matter of human behaviour to the fore, as different societies and the people within them hold varying views of what it means to be data-driven, but everyone should ideally be free to make informed choices that meet their own definitions of well-being.
Data for Well-being
Data about people today is less of a public asset; it is increasingly privately funded, collected, and analysed. There is also a broad consensus amongst government agencies, corporations, and individuals that digital technology, data and AI should contribute to improving human welfare. However, considering debates around the trade-offs between disclosure of personal data and personal well-being, it is important for all stakeholders to acknowledge a priori that digital technology, data and AI may also produce negative consequences.
For some people, extensive collection and sharing of personal data reduces personal well-being because it infringes on the right to privacy. Beyond this, how one’s data is processed and used may be of greater concern, as discomfort grows around targeted advertising, political campaigning and even disinformation.
Magalhães and Couldry (2021) [3] note that, in initiatives named ‘digital humanitarianism’ (Meier, 2015) [1] or ‘Big Data for development’ (Hilbert, 2016) [2], positive well-being outcomes can come from "datafication": the conversion of ever more aspects of life into digital data for algorithmic mining and semi-autonomous decision-making.
This article explores whether and how the ongoing efforts towards managing data for well-being assume an individual’s awareness of the implications of disclosing and processing their personal data.
Implications for consent in data-sharing
There are situations where the choice to share information is made by society, or by its representatives, for the "good" of individuals. Speed limits, restrictions on hazardous materials in buildings, and controls on access to weapons are all regulations motivated by social discourse or, in some instances, by unilateral decree for the benefit of citizens. Such rules are made on the assumption that everyone has a certain level of awareness of the benefits of these limitations.
The GDPR has made strides in this direction with its requirement of explicit consent for data use. However, without validation of a user’s understanding of the underlying implications, we question whether such regulations serve as legal protection for companies rather than for consumers.
This view is supported by legal scholar and behavioural science pioneer Cass Sunstein, who ponders in a recent book whether such disclosures actually improve human well-being [4]. People want to obtain services or benefits from organisations, and they consent to their data being shared in exchange for this utility – a paradigm we are familiar with every time we click to accept a set of terms and conditions.
Yet, Sunstein argues that many consent agreements are not effective: "all cost and no benefit". Instead, they exert a cognitive tax – the cost involved in reading and processing information – as when a user is made to read pages of terms and conditions before they can complete a transaction.
Multiplied across the countless transactions taking place all over the world, this cognitive cost is significant. Often, an individual cannot fully understand what they are agreeing to, much less make an informed decision about whether it is good for them or how to advocate for themselves.
It is widely acknowledged that the processes for obtaining consent to personal data disclosure are flawed. For example, sites often use design nudges that make it easier to accept than to reject terms of disclosure, and make privacy policies intentionally difficult to understand [5]. This hinders informed consent and makes the process function more as a cursory nod to compliance than as meaningful user protection. Even though users are technically presented with all the information relevant to a decision, they are overloaded with complex and lengthy text.
There has been some regulatory response to consent procedures that demand a high cognitive load from consumers. For example, the GDPR mandates that platforms cannot make users agree to tracking cookies by default, and cannot deny access to a service when those cookies are rejected. However, this does not ensure that a user is aware of how their personal data will be collected and used, nor does it reduce the cognitive tax involved in the consent process. One could still click to consent without reading and understanding the full terms of data disclosure.
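To make the default rule concrete, here is a minimal sketch in Python of a hypothetical platform's consent state before the user has made any choice, together with an access check that does not depend on optional tracking. The category names and functions are illustrative assumptions, not a real library's API or the GDPR's own terminology.

```python
# Hypothetical pre-choice consent state for a GDPR-aligned platform.
# Category names are illustrative assumptions, not a legal standard.
DEFAULT_CONSENT = {
    "strictly_necessary": True,   # required for the service to function
    "analytics": False,           # tracking stays off until the user opts in
    "advertising": False,         # no default agreement to ad tracking
}

def can_use_service(consent: dict[str, bool]) -> bool:
    # Access must not be conditioned on accepting optional cookies;
    # only the strictly necessary category matters here.
    return consent["strictly_necessary"]
```

Rejecting the optional categories leaves the access check unchanged, reflecting the rule that a service cannot be withheld when tracking cookies are refused.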
Consent for well-being
Two possible solutions may help.
For one, consent procedures could require answering multiple-choice questions to be sure that the user understands the terms before proceeding. These questions could cover i) what their data can be used for and ii) what their rights include, such as rights to erasure and access. In this procedure, consent would only be valid if the user answers the questions correctly. This verifies that they are aware of how their personal data will be handled.
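As a minimal sketch of how such a comprehension check might work, consider the following Python fragment. The names (Question, ConsentQuiz) and the sample questions are hypothetical illustrations, not an existing consent API; a real deployment would also need retries, question randomisation and an audit log.

```python
# A minimal sketch of comprehension-gated consent.
# All names and questions are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    options: list[str]
    correct: int  # index of the correct option

@dataclass
class ConsentQuiz:
    questions: list[Question]

    def validate(self, answers: list[int]) -> bool:
        """Consent is valid only if every question is answered correctly."""
        return len(answers) == len(self.questions) and all(
            a == q.correct for a, q in zip(answers, self.questions)
        )

quiz = ConsentQuiz(questions=[
    Question(
        prompt="What may your data be used for?",
        options=["Improving this service only", "Any purpose the company chooses"],
        correct=0,
    ),
    Question(
        prompt="Which right lets you have your data deleted?",
        options=["Right to access", "Right to erasure"],
        correct=1,
    ),
])

# Consent is recorded only when understanding is demonstrated.
print("Consent valid:", quiz.validate(answers=[0, 1]))  # True
```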
A second option proposed here is the use of consent managers. These are mechanisms that allow consumers to decide what kind of data to give an organisation, and for what purposes. Rather than letting data collectors unilaterally set the terms of use, users indicate what they are willing to disclose. The function of consent management could be offered either by the organisation collecting data or by third-party platforms that negotiate between individuals and data collectors.
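A rough sketch of the user-side logic of such a consent manager follows. The policy schema, mapping data categories to the purposes a user permits, is our own illustrative assumption rather than any standard or existing platform's interface.

```python
# A minimal sketch of a user-side consent manager.
# The categories and purposes are illustrative assumptions.

# The user's standing preferences: which data categories they will
# disclose, and for which purposes.
user_policy = {
    "email": {"account_management"},
    "location": set(),                        # never share location
    "purchase_history": {"recommendations"},
}

def evaluate_request(policy: dict[str, set[str]],
                     requested: dict[str, str]) -> dict[str, bool]:
    """Grant each (category, purpose) pair only if the policy allows it.

    Anything not covered by the user's policy is denied by default.
    """
    return {
        category: purpose in policy.get(category, set())
        for category, purpose in requested.items()
    }

# A data collector asks for three categories; only the uses the user
# pre-approved are granted.
request = {
    "email": "account_management",
    "location": "targeted_advertising",
    "purchase_history": "recommendations",
}
print(evaluate_request(user_policy, request))
# {'email': True, 'location': False, 'purchase_history': True}
```

Whether this logic runs inside the collecting organisation or on a third-party platform, the design choice is the same: the user's standing policy, not the collector's form, sets the default answer.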
Of course, individuals can still act against their own best interests, and no solution can guarantee a positive outcome for users every time. But even as disagreements over the definition of well-being persist, each person deserves validated understanding as a first step towards making decisions for their own digital well-being.
References
[1] P. Meier (2015). Digital humanitarians. Boca Raton, FL: CRC.
[2] M. Hilbert (2016). Big data for development: A review of promises and challenges. Development Policy Review, 34(1), 135–174. doi: 10.1111/dpr.12142
[3] J.C. Magalhães, & N. Couldry (2021). Giving by taking away: Big tech, data colonialism, and the reconfiguration of social good. International Journal of Communication, 15, 20.
[4] C.R. Sunstein (2020). Too Much Information: Understanding What You Don't Want to Know. MIT Press.
[5] C. Utz, M. Degeling, S. Fahl & F. Schaub (2019, November 11–15). (Un)informed Consent: Studying GDPR Consent Notices in the Field. In ACM SIGSAC Conference on Computer and Communications Security, London, United Kingdom. doi: 10.1145/3319535.3354212