Data profiling, privacy, and limits to tech
Where should we draw the line and stop the use of our own personal information?
We live in a polarized world and, in the United States, in a heavily polarized country. Data profiling and algorithmic sorting compound and accelerate this phenomenon and represent one of today’s most pressing privacy dilemmas: they contribute to radicalization and chip away at our democratic foundations.
Ezra Klein, in his book “Why We’re Polarized,” posits that social transformations, culminating in the Civil Rights Act of 1964, resulted in a widespread resorting of the country within and between the two major political parties. More recently, the rise of niche media promoting socio-political material has exacerbated a trend in which party affiliation is a fundamental part of one’s identity. Dialogue rarely happens across party lines; instead, intolerance and hostility prevail. The sorting occurs along lines of ethnicity, gender, and religion, among other attributes.
This dynamic predates social media and the wider use of artificial intelligence. However, both play an important role through data profiling. In “Code: Version 2.0,” the scholar Lawrence Lessig raises the concern about data profiling as a vehicle for community manipulation. Notably, this can happen in a more insidious way than manipulation driven by traditional mass media, where choices about editorial content are more explicit. That explicitness, in my view, preserves consumer agency: after all, choosing not to read a given newspaper or to switch to a different cable news program is a relatively easy decision.
The use of data profiling to drive consumer segmentation, and the targeted advertising at the heart of today’s Big Tech business model, can accelerate the sorting issue described by Klein. Lessig talks about how “observing will affect the observer,” which, in essence, is what is generating today’s echo chambers. He goes on to argue that “The system watches what you do; it fits you into a pattern; the pattern is then fed back to you in the form of options set by the pattern; the options reinforce the pattern; the cycle begins again.”
To be sure, the aggregation of data by private, governmental, or other entities does not necessarily happen for nefarious reasons. In most cases, profiling techniques are used to enhance user experience and the value we derive from commercial products or governmental services. Where, therefore, should we draw the line and stop the use of our own personal information?
In my article “Resetting the Rules for Tech to Preserve the Public Interest,” published by Tech Policy Press, I described how conditions, including the legal framework, have been set to allow for the commercialization of aggregated personal data. Going forward, I support Lessig’s holistic approach to curbing excesses through four modalities: law, norms, markets, and architecture/code, a cycle in which he sees “LAW helping CODE to perfect privacy POLICY.”
Let’s briefly touch upon the four elements of Lessig’s framework:
Law – This brings us to the urgent need to enact regulation in the field.
Norms – Each society (or country) will establish the boundaries and limits of what is acceptable in its context. Companies can also decide to adhere to self-regulatory regimes or build trust by exercising privacy-protecting measures.
Markets – Ultimately, consumers vote with their dollars and can have an impact, albeit limited, on product direction.
Architecture/Code – Ethical development and ethical use of data, as well as the incorporation of “Privacy Enhancing Technologies.”
I argue that we need to follow the money and that the inertia of the markets is not going to get us to a safe destination. As Lessig states very well: “The power of commerce is not behind any such change. Here, the invisible hand would really be invisible. Collective action must be taken to bend the architectures toward this goal, and collective action is just what politics is for. Laissez-faire will not cut it.”
We need to enact responsible regulation that does not impede or slow down innovation, and we need to encourage, support, and finance ventures exploring alternative business models that can deliver a superior customer experience without violating privacy.
Paulo - beautifully written piece, as always. However, I don’t think that law is the first step in this process. Regulation is most effective when a society is clear about the risks it wants to mitigate, its policy objectives more broadly, and the governance structure needed to fulfill societal aims. In the case of regulating online content, I would argue none of the three requirements have been met. There is ongoing, highly politicized debate about whether and how to limit free speech, how to ascertain ‘truth,’ how to design ‘fair’ content ranking systems, and ultimately many open questions about who should be making these decisions. It’s unfortunate that US politicians in particular are unable to build consensus on even simple steps, but until there is national agreement on some level, I think laws will be a futile exercise reflecting a divided vision for the future. In the meantime, I would argue that some social media companies are actually doing a reasonably good job of balancing the competing pressures they face. There is room for improvement, and some companies are doing a particularly bad job, but until national consensus emerges, working with companies to raise concerns and inform their approaches will be the only practical path forward. My $0.02 anyway. Thanks for putting this out there! John