This week in data/cyber/tech: drawing the data minimisation line(s); and when data protection turns deadly.

May 3, 2024
4 minutes

There's rarely a quiet week in data protection — and this one was no exception. Below are two developments from the past seven days that caught my eye.

Story #1: Drawing the data minimisation line(s)

Is it possible for people to use an app without providing their personal data?

The Financial Times ran an interesting article this week in which the CEO of Grindr urged users to exercise caution when sharing personal data on the app. The warning followed a high-profile political story in the UK in which a Member of Parliament shared his colleagues' phone numbers as part of a "honeytrap" operation conducted via the app.

Notwithstanding the element of human judgement (or lack thereof) that can never be fully mitigated, there are a host of data protection considerations at play here. I’m most interested in (1) data protection by design/default, and (2) data minimisation.

Let’s take them in turn.

Data protection by design

Data protection by design means that you consider — or “bake in” — privacy from the outset of the product or service and thereafter throughout its lifecycle.

This may not sound difficult when described in the abstract, but the reality can be very challenging indeed. The Product Team doesn’t want to default to the most privacy-protective setting. The Sales Team needs to use personal data for behavioural advertising. And so on.

Ultimately, it’s for each company to determine what data protection by design and default means for its processing activities. But it's also important to acknowledge that this shouldn’t be a case of the tail wagging the dog. 

Of course it's important to ensure that privacy and data protection are key considerations for new products and services — for legal compliance purposes as well as for earning the trust of your customers and other stakeholders. However, data protection is one voice around the table, and taking a heavy-handed, absolutist approach is unlikely to win other teams around to your views.

So diplomacy is the order of the day, particularly given that it is exceptionally difficult — if not impossible — to retrofit privacy by design once the product or service has gone live.

Data minimisation

The data minimisation principle — one of the seven building blocks of the GDPR/UK GDPR — requires controllers to collect and process the minimum amount of personal data needed for their processing purposes.

This also sounds straightforward in the abstract, but it can be a real challenge in practice. For example, Users 1 and 2 may only wish to share (limited) ABC Data, whereas Users 3 and 4 may be comfortable sharing (extensive or explicit) XYZ Data. 

So where do you draw the minimisation line — or lines? That's a particularly tricky question where what you think is best for users differs from what users actually want.

The takeaway is to consider these questions on a case-by-case basis, balancing questions of law, commerciality and UX. Data protection won't always tip the scales, but it should be an important factor in helping to strike that balance.

Story #2: When data protection turns deadly

It certainly gives new meaning to the phrase “you’re dead to me”.

This week the BBC reported on a woman who went to a hospital appointment only to be told that, according to its records, she had been dead for four months.

You don’t need to be a privacy wonk to see that this is a story about data accuracy. And it’s always interesting to see the application of data protection principles in everyday life.

But it also got me thinking about one of my favourite aspects of the work I do: advising on complex data accuracy disputes, for controllers and data subjects alike.


Article 5(1)(d) of the UK GDPR requires personal data to be “accurate and, where necessary, kept up to date”, such that controllers must take “every reasonable step” to ensure that inaccurate personal data are erased or rectified without delay.

What does this mean in practice?

In most cases it isn’t practicable — or necessary — to review the accuracy of personal data on an active and ongoing basis. It's also usually reasonable to assume that personal data you receive from third parties are accurate. Lastly, you can leverage data subjects themselves to assist in keeping their personal data up to date.

For example, organisations that use self-service benefits/payroll platforms will often ask employees to update their records (a change of address, married name, etc.). This doesn’t remove the controller’s obligations under Article 5(1)(d), but it’s a good way of helping to stay on top of things – legally and practically speaking. 


The other side of the coin involves individuals who challenge the accuracy of their personal data and/or exercise their right to rectification or completion.

Sometimes this can be straightforward — such as where the right is exercised in relation to a matter of fact. How my surname is spelled, for instance. (How it’s pronounced, or at least how people pronounce it, is a different story entirely.)

But things can get difficult where the personal data concern disputed facts, mistakes or opinions.

This issue can arise in a range of situations — most of them contentious. A person seeking to amend the record of an incorrect diagnosis. Or challenging the feedback provided in relation to an unsuccessful job application. Or disputing the accuracy of the sources relied on in a reputational due diligence report to designate them as a politically exposed person (PEP).


When they arise, these issues should be approached with care.

Clear and timely communication with data subjects is key where you’re not going to rectify their data. The analysis for contentious cases should be documented (this doesn’t need to be a long document; a one-pager will usually suffice). And if you receive a material number of these requests, putting in place a policy or SOP (again, it doesn’t need to boil the ocean) is advised.

Subscribe to Ropes & Gray Viewpoints by topic here.