GDPR for Startups in Practice

January 4, 2018
Startups

This is the story of Applyfifi – a name that stands for thousands of startups in Europe that act as platforms and third-party providers and will have to become data protection compliant by May.

The idea is simple: a one-click application. Visitors to your website drag and drop their CV into a widget, and a friendly chatbot follows up to answer potential questions. The startup in question has found a niche and is working hard to fill it with its product. Beyond that, the tool helps recruiters pre-select candidates with the help of AI and automation, matching candidates and skills on the spot and discarding irrelevant documents. The applicant’s advantage? Once their CV is uploaded, they can instantly browse jobs that might interest them. All in all, applicants, employers and recruiters stand to gain. Through automated decision-making, more people find the right jobs and small businesses stop losing time sifting through thousands of irrelevant CVs.

As a third party to others, this startup naturally relies on other platforms, on trust in its solutions and on a good reputation as an easy-to-use and trustworthy company. For these reasons and more, protecting personal data is a priority.

To stay on top of its game and be ready for the General Data Protection Regulation (GDPR) – a one-size-fits-all layer that takes effect in May this year – Applyfifi is looking ahead and trying to implement privacy by design without getting lost in legal limbo. To that end, experts from public policy and law, entrepreneurs and community representatives got together. Our goal: find innovative ways for this startup to implement the GDPR without losing its competitive edge, while keeping what sets it apart, namely the added value of the best user experience.


Tracking back: automated decision-making in the GDPR

What are the risks? As in any other company, founders face pressure to be compliant with the GDPR. However, most entrepreneurs are left with overly restrictive legal talk rather than constructive help. A working group of European privacy regulators is tightening rules in a way that disincentivises businesses rather than advising founders. For example, a rule that gives users the right to opt out of fully automated decision-making with legal effect is being interpreted as a blanket ban. Ergo, the startup that allows you to apply for the right job in one click and start an instant chat to find out whether the role fits you might just become illegal.

Automated decisions such as recommendations, bots, advertising or any other customisation are what set the fourth industrial revolution apart from the third. Rather than producing masses of the same, technology enables us to make choices and receive individual, affordable products and services.

So is automation incompatible with data protection? Certainly not. However, we run the risk that Europe’s regulators decide it is, because of a lack of expertise in technology (with all due respect). It is crucial to distinguish between automated decision-making (ADM) that is fully automated and produces a legal or similarly significant effect, and decision-making that is only partly automated or does not have such an immediate effect.

Europe’s data protection authorities are first and foremost experts on data protection. The GDPR brought a range of new responsibilities to their offices that concern business, technology and innovation. They are not only the watchdog over proper implementation of and respect for the rules; they are also expected to actively work with smaller companies and help them become compliant. Rather than restricting the use of ADM, as a recent draft of guidelines suggested, it would be important to distinguish and clarify where ADM is possible and can be used to provide better results. Beyond that, regulators have a responsibility to enable trust in the digital economy and ensure a free flow of personal data in the single market. Yet it is unclear whether regulators are investing the resources and expertise needed to wear both hats.

In principle, regulators should help clarify that the provisions on ADM apply only where decisions have a direct legal effect, and not to applications that support fraud prevention or pre-selection of any kind, like Applyfifi.


Case in Point: Back to the one-click application

Concretely, the drag-and-drop format allows users to upload instantly, without any bureaucratic complications: no account that is used only once, no bad passwords to remember, no unnecessary data exposure. In short: data minimisation. This core feature has to survive. But no registration also means that there is no room for tons of consent forms and boxes to tick. This is also a key sales feature of Applyfifi, because employers are paying for a one-click application feature embedded in their website, not a long and complicated form or multiple pop-ups.
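To make the data-minimisation point concrete, here is a minimal sketch of what such a widget could look like. This is not Applyfifi’s actual code; the /api/upload endpoint and the startChatbot hand-off are assumed names for illustration only.

```typescript
// Hypothetical sketch: a one-click CV upload with no account and no extra fields.
// The /api/upload endpoint and startChatbot() are assumptions, not a real API.
declare function startChatbot(uploadId: string): void; // consent bot, sketched below

const dropZone = document.getElementById("cv-dropzone") as HTMLElement;

// Allow dropping by cancelling the browser's default drag handling.
dropZone.addEventListener("dragover", (e) => e.preventDefault());

dropZone.addEventListener("drop", async (e: DragEvent) => {
  e.preventDefault();
  const file = e.dataTransfer?.files[0];
  if (!file) return;

  // Data minimisation: the request carries the CV and nothing else;
  // no name, no email address, no tracking identifiers.
  const body = new FormData();
  body.append("cv", file);

  const res = await fetch("/api/upload", { method: "POST", body });
  const { uploadId } = await res.json(); // short-lived id for this session only
  startChatbot(uploadId); // hand over to the consent-gathering chatbot
});
```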

The team decided to use the strengths of the AI bot to their advantage: after the drag and drop, the chatbot follows up with the candidate to make sure consent is given and to offer them the option to opt out. After that, the chatbot offers to answer questions, including on data protection, and announces a follow-up email that gives applicants a confirmation, further information and contact details. That way, the user experience is not diminished, and a friendly, non-annoying means of gaining consent from the uploader is implemented. This is roughly what can be understood as privacy by design.
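The consent step could be driven by a small scripted dialogue that runs before any processing of the CV begins. The following sketch shows one plausible shape; the bot helpers (botSay, botAskYesNo) and the storage and email functions are hypothetical names, not a real library.

```typescript
// Hypothetical helpers; in a real system these would be the chatbot UI
// and the backend's consent and email services.
declare function botSay(message: string): Promise<void>;
declare function botAskYesNo(): Promise<boolean>;
declare function saveConsent(record: ConsentRecord): Promise<void>;
declare function deleteUpload(uploadId: string): Promise<void>;
declare function scheduleConfirmationEmail(uploadId: string): Promise<void>;

type ConsentRecord = {
  uploadId: string;
  consentGiven: boolean;
  timestamp: string; // when consent was given or refused; kept as proof
};

async function runConsentDialogue(uploadId: string): Promise<boolean> {
  await botSay(
    "Thanks for your CV! May we process it to match you with open roles? " +
      "You can opt out at any time; we'll also email you a confirmation with details."
  );
  const consentGiven = await botAskYesNo(); // waits for the candidate's reply

  await saveConsent({
    uploadId,
    consentGiven,
    timestamp: new Date().toISOString(),
  });

  if (!consentGiven) {
    await deleteUpload(uploadId); // opt-out: discard the CV immediately
  } else {
    await scheduleConfirmationEmail(uploadId); // confirmation, info, contact details
  }
  return consentGiven; // matching only starts when this is true
}
```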

With an AI chatbot, this startup has many ways to adapt this process to an individual customer. For instance, when a user asks many questions, answers can be more elaborate or more links can be offered. Will this be in line with the European regulators’ understanding of information and consent? Maybe, but to know that, the regulator would have to be fully aware of what the bot is capable of. And, first and foremost, the regulator needs to allow startups to harness the advantages of automation. If someone is in a rush or knows the procedure, the chatbot can come straight to the point. That way, everyone saves time and the startup can provide the most awesome user experience.
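Adapting the level of detail to the user can be as simple as a heuristic on top of the dialogue state. A toy version, where the threshold and the two answer styles are made-up assumptions:

```typescript
// Toy heuristic: candidates who keep asking get fuller answers and extra
// links; someone in a rush gets straight-to-the-point replies.
type AnswerStyle = "brief" | "detailed";

function answerStyle(questionsAsked: number): AnswerStyle {
  return questionsAsked >= 3 ? "detailed" : "brief"; // threshold is arbitrary
}

function composeAnswer(base: string, links: string[], style: AnswerStyle): string {
  if (style === "brief" || links.length === 0) return base;
  return `${base}\n\nFurther reading:\n${links.map((l) => `- ${l}`).join("\n")}`;
}
```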

In sum: there is no one-size-fits-all approach – and yet, weirdly enough, that is exactly what the GDPR prescribes. Automation can offer a way out for small teams that cannot afford a legal armada. But we need a clear signal from Europe’s regulators that automated processes may be used as an enabler for custom products and services – or everyone will get the full load of size-9 terms, conditions and consent forms, every time.

To finish, let’s dispel a misconception: contrary to what some data protection experts claim, startups do care about data protection. Unlike big businesses, small teams have to go where it hurts to implement the GDPR. They invest critical resources in data protection because it matters to them, and they will keep doing so. In turn, they hope that regulators will work with them to make sure that the market doesn’t just work for the big players.

In this post we focused on privacy by design and automated decision-making in the context of the GDPR. Experts argue that startups have fallen into the blind spot of this far-reaching regulation, which is why we’re dedicating a series of blog posts to this issue.