The use of AI is growing rapidly across most industries, bringing huge opportunities as well as a wealth of new legal, regulatory and ethical challenges that need to be understood and addressed. Stakeholders are scrambling to work out what the AI revolution means for them. AI could help us create transformational marketing … or it could exacerbate the crises of trust and transparency that already plague the industry. For both the reputation of our industry and the protection of the public, AI should be used in an ethical, responsible and legally compliant way.
In response to the ethical challenges associated with AI, the UK's ISBA (advertiser trade body) and IPA (agency trade body) have released twelve guiding principles for agencies and advertisers on the use of generative AI in advertising.
These principles are broad-brush and designed to ensure that the industry embraces AI in an ethical way that protects both consumers and those working in the creative sector. They cover issues around transparency, intellectual property rights, human oversight and more.
1. AI should be used responsibly and ethically.
2. AI should not be used in a manner that is likely to undermine public trust in advertising (for example, through the use of undisclosed deepfakes, or fake, scam or otherwise fraudulent advertising).
3. Advertisers and agencies should ensure that their use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers.
4. Advertisers and agencies should consider the potential environmental impact when using generative AI.
5. AI should not be used in a manner likely to discriminate or show bias against individuals or particular groups in society.
6. AI should not be used in a manner that is likely to undermine the rights of individuals (including with respect to the use of their personal data).
7. Advertisers and agencies should consider the potential impact of the use of AI on intellectual property rights holders and the sustainability of publishers and other content creators.
8. Advertisers and agencies should consider the potential impact of AI on employment and talent. AI should be additive and an enabler, helping rather than replacing people.
9. Advertisers and agencies should perform appropriate due diligence on the AI tools they work with and only use AI when confident it is safe and secure to do so.
10. Advertisers and agencies should ensure appropriate human oversight and accountability in their use of AI (for example, fact and permission checking so that AI-generated output is not used without adequate clearance and accuracy assurances).
11. Advertisers and agencies should be transparent with each other about their use of AI. Neither should include AI-generated content in materials provided to the other without the other's agreement.
12. Advertisers and agencies should commit to continual monitoring and evaluation of their use of AI, including any potential negative impacts beyond those described above.