This draft guidance on applications of artificial intelligence has recently been published for consultation. Though long and detailed, it is for the most part a helpful synopsis of data protection law as it applies to AI. It is also useful for the following reasons:

1. It points out where terms are used differently in the AI context from the data protection context, thus avoiding confusion.

2. It gives specific contextual examples, including practical help in minimising risk.

3. It points out common pitfalls, e.g. the need to treat the training phase differently from the implementation phase when deciding on the purpose and lawful bases for processing.

4. It covers other aspects of law, such as the potential for discriminatory outcomes where bias was inherent in the data used to train the models.

5. It points out key dangers, e.g. the possibility of model inversion attacks, in which attackers are able to recover personal data about the individuals whose data was used to train the system.

6. It covers how to deal with individual data protection rights, which can be particularly challenging in this context.

In short, this guidance is a ‘must-read’ for all involved in AI applications. The ICO itself identifies the intended audience:

Those with a compliance focus, including:

• data protection officers
• general counsel
• risk managers
• the ICO’s own auditors

Technology specialists, including:

• machine learning developers and data scientists
• software developers/engineers
• cyber security and IT risk managers