We've gotten so accustomed to leaning on technology for every aspect of our day-to-day lives that we sometimes forget to marvel at the magic that lies behind many wonders that we now treat as commonplace.
Being late for a meeting is a thing of the past: our mobile devices use our calendars to determine where we need to be and when, check on the current traffic and remind us in advance of when we have to leave so that we make it on time.
The ritual of seeking recommendations for entertainment, be it novels or movies, has changed over the last decade. Today, the moment we finish watching a movie or reading a book, we are given dozens of suggestions of what we may like to watch or read next, suggestions so accurate that there is no need to ask our friends.
Behind these small miracles are hundreds of algorithms that analyse data about us and our environment, trying to discern our preferences and habits so they can customize the services we consume to better fit our requirements. But there are benefits to this sort of personalization beyond simply making our lives easier.
Data-driven insights allow us to be better informed about the decisions we need to take and often give rise to entirely new and socially relevant business models. In the banking sector, for instance, these sorts of insights offer us new parameters based upon which we can assess the creditworthiness of loan applicants. Doctors can use these insights to support their diagnoses, particularly in the case of complex diseases that are hard to detect.
But there is a dark cloud to this silver lining. For every business or sector that deploys these algorithms for our benefit, there are as many that use them to mine for information that either puts us at a disadvantage or does us harm.
As we organize ourselves to take better advantage of this data-driven ecosystem, we would do well to remember that the flip side of receiving personalized services is that the provider of the service knows much more about us than we might like. Certainly, when taken out of the context of the benefits we receive, the level of personal insights that these algorithms generate often far exceeds anything we would have otherwise been comfortable disclosing.
This dichotomy is the background against which we need to frame the discussion around privacy and the structure of our new privacy law.
Privacy has historically been based on a framework of consent: corporations are forbidden from collecting personal information without first obtaining the consent of the data subject. By making individuals responsible for determining what can or cannot be collected, this framework places the burden of assessing the privacy implications of data processing on the shoulders of the individuals concerned, requiring them to determine for themselves the extent to which they would like to sacrifice their privacy for the advantages on offer.
This sort of autonomy seems well aligned with our democratic notions of liberty and equality. However, as we have seen from an earlier article in this series, consent, in the modern context, is structurally flawed. Data today is collected in myriad different ways and combined by overlaying different sets of data one on top of the other until, at the intersection of those data points, uniquely individual traits and preferences can be identified that might not otherwise have been evident.
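The overlaying described here is easiest to see in code. Below is a minimal, entirely hypothetical sketch of a so-called linkage attack: two datasets that are each innocuous on their own, joined on shared quasi-identifiers (here, a pincode and a birth year; all names and records are invented for illustration).

```python
# A minimal sketch of a linkage attack: two datasets, each harmless on
# its own, joined on shared "quasi-identifiers". All data is invented.

# Dataset 1: an anonymised health record, with no names attached.
health_records = [
    {"pincode": "560001", "birth_year": 1982, "diagnosis": "diabetes"},
    {"pincode": "560001", "birth_year": 1990, "diagnosis": "asthma"},
]

# Dataset 2: a public voter roll, with names but nothing sensitive.
voter_roll = [
    {"name": "A. Kumar", "pincode": "560001", "birth_year": 1982},
    {"name": "B. Rao",   "pincode": "560034", "birth_year": 1990},
]

def link(records, roll):
    """Re-identify records whose quasi-identifiers match exactly one voter."""
    identified = []
    for rec in records:
        matches = [v for v in roll
                   if (v["pincode"], v["birth_year"]) ==
                      (rec["pincode"], rec["birth_year"])]
        if len(matches) == 1:  # a unique intersection of data points
            identified.append((matches[0]["name"], rec["diagnosis"]))
    return identified

print(link(health_records, voter_roll))
# Only the first record links uniquely: [('A. Kumar', 'diabetes')]
```

Neither dataset alone discloses anything sensitive about a named person; it is only at their intersection that the sensitive fact attaches to an identity, which is precisely why consent given for each collection in isolation fails to capture the real privacy cost.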
We are neither technically equipped to fully understand the consequences of the consent we provide nor can we expect to have all the information we need to make informed decisions about our privacy. In other words, any autonomy we might think we have in determining the boundaries of our personal privacy is nominal at best.
We need a new model for privacy protection, one that allows us to benefit from all that this new data-rich world offers while protecting us from the harms it can inflict. Since the problem with consent is that the person from whom it is collected lacks the information necessary to make an informed decision, what if we shift the responsibility for this decision to someone who actually has that information?
In the modern context, that person is the data controller: the only entity in the entire chain of processing who knows exactly what use the data will be put to. What if, instead of protecting privacy through the data subject's consent, we make the controller responsible for ensuring that no harm is visited on the subject as a result of its processing?
This is the fundamental basis for the concept of accountability, a principle discussed at length in one of the earlier articles in this series. Central to the effective implementation of this principle is the definition of "harm", so that we can ensure that all data controllers are mindful of the consequences they must avoid while processing data.
The most obvious example of harm is financial loss, either direct or indirect, caused to a data subject as a result of collection or processing. However, there could be many other, less obvious consequences, such as loss of reputation or curtailment of the data subject's ability to choose. The new privacy law must delineate these harms in sufficient detail and hold data controllers responsible for ensuring that the algorithms they design do not, even inadvertently, cause harm.
All this comes into sharper focus in the context of machine learning algorithms and neural networks: technologies designed to discover patterns in vast stores of otherwise innocuous data in ways far beyond the capacity of humans.
While the insights that these algorithms provide are invaluable in developing customized services based on choices and preferences, they are, at the same time, capable of laying bare traits and habits that would have otherwise remained secret.
When decisions based on the deeply personal profiles these algorithms generate can impair our job prospects or our ability to obtain a loan, the harm they cause can be significant. This is particularly true, as was discussed in an earlier article in this series, when the programming choices on which the algorithms are built allow bias to creep in.
The real problem with machine learning is that no one, not even its programmer, fully understands how an algorithm arrives at its eventual conclusion. This means that when bias does creep into the algorithm, it is impossible to fix it by merely identifying (and eliminating) the faulty code. Any modern privacy law that we draft today must necessarily give some thought to this concern and include solutions that address it.
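To see why such bias cannot be removed by deleting a faulty line, consider a deliberately simplified sketch. The areas, decisions and numbers below are all invented, and the "model" is the crudest possible learner, but the point carries over to real systems: every line of code is correct, yet the output is biased because the history it learned from was.

```python
# A toy illustration of bias entering through training data rather than
# through any identifiable line of code. All data is invented.

# Historical loan decisions: the applicants' group membership never
# appears below, but "area" happens to correlate with it, and the past
# decisions were biased even though repayment behaviour was identical.
history = [
    {"area": "north", "repaid": True,  "approved": True},
    {"area": "north", "repaid": True,  "approved": True},
    {"area": "north", "repaid": False, "approved": True},
    {"area": "south", "repaid": True,  "approved": False},
    {"area": "south", "repaid": True,  "approved": False},
    {"area": "south", "repaid": False, "approved": False},
]

def learn_approval_rates(data):
    """'Train' the simplest possible model: per-area approval frequency."""
    rates = {}
    for area in {row["area"] for row in data}:
        rows = [r for r in data if r["area"] == area]
        rates[area] = sum(r["approved"] for r in rows) / len(rows)
    return rates

model = learn_approval_rates(history)

# There is no bug to find and eliminate, yet the model reproduces the
# historical bias: north is approved at 1.0, south at 0.0.
print(model)
```

The bias lives in the data and in the proxy variable, not in any line a code review could flag, which is why accountability has to attach to outcomes rather than to the correctness of the program text.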
In the discussion document, I have suggested the creation of a class of learned intermediaries: people skilled in data science who are capable of understanding all these issues. I see learned intermediaries as auditors of data, whose job it is to evaluate the algorithms that data controllers use in order to assess the harms that they might cause.
Just as companies subject themselves to an annual financial audit, they will, to the extent that they collect and process data, have to undergo a regular data audit to assess the extent to which their data processes cause harm to the individual data subjects whose data they process.
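One concrete check such a data audit might run is a disparate-impact test: compare favourable-outcome rates across groups and flag large gaps. The 80% threshold below borrows the "four-fifths" benchmark from US employment-discrimination practice; it is an illustrative assumption, not something the discussion document prescribes, and the sample decisions are invented.

```python
# A minimal sketch of one check a data audit might run: compare
# favourable-outcome rates across groups and flag any group whose rate
# falls below 80% of the best-treated group's (the "four-fifths"
# benchmark, used here purely for illustration).

def disparate_impact_audit(outcomes, threshold=0.8):
    """outcomes: {group_name: list of booleans, True = favourable}."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    best = max(rates.values())
    flagged = {g: rate for g, rate in rates.items()
               if best > 0 and rate / best < threshold}
    return rates, flagged

# Invented audit sample: decisions produced by some scoring algorithm.
sample = {
    "group_a": [True, True, True, False],    # 75% favourable
    "group_b": [True, False, False, False],  # 25% favourable
}
rates, flagged = disparate_impact_audit(sample)
print(flagged)  # group_b is flagged: 0.25 / 0.75 is below 0.8
```

An auditor would run checks of this kind against a controller's live decision logs, much as a financial auditor samples transactions, and the specific tests and thresholds would be for the law and the regulator to settle.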
In time, corporate entities with the highest standards of privacy compliance will be certified as such and will attract customers who place a premium on privacy. At that point, instead of being constrained to enforce privacy through punitive measures, we will be able to achieve these results through an incentive framework.