How can companies go about managing AI policy in the workplace?
Katie King: An agile approach to AI adoption and monitoring is now essential for all company boards, as well as for countries and governments. AI needs to feature at a strategic level, with a task force in place that includes representation from every business department in the organisation. Together they should draft an AI playbook or policy.
The word agile is key: companies need to respond rapidly to emerging risks and regulation, whilst innovating with the adoption of AI across different business functions such as HR, sales and marketing. As these developments unfold, the policy should be updated regularly.
Ideally, I recommend that companies work in a proactive, collaborative way with their governments, regulators and trade bodies in order to embrace the transformative benefits of AI, whilst addressing emerging risks.
Should companies be restricting AI usage or promoting it? Why?
K.K.: There is no black and white answer; instead there are shades of grey, depending on your industry sector and geographic region.
What is undisputed is that conversational AI tools like ChatGPT occasionally give inaccurate or inappropriate answers, sometimes referred to as ‘hallucinations.’ For this reason, some companies have trodden carefully. But the solution is to keep a human ‘in the loop’ to fact-check the output, just as you would with a new team member.
Furthermore, banks, medical companies and certain other sectors have understandably restricted the use of some AI tools and platforms because they cannot allow their staff to upload sensitive, confidential data into a public model where others could access the material. But these are not blanket bans, and many are now taking advantage of more bespoke GPTs that offer greater security.
Going beyond the stereotypical – and often fallacious – visions of artificial intelligence, what are the real risks that AI poses today?
K.K.: There are a number of real risks posed by AI today. As more companies pursue organisational efficiency by using AI across their different job functions, there is a risk that some jobs will become less attractive, and others will increasingly be replaced by AI.
Also, human bias can be captured in AI’s training data and in the way it’s programmed, perpetuating harmful stereotypes and disadvantaging certain groups of people.
Some team members may be tempted to present AI-generated outputs, both text and imagery, as their own work. This is plagiarism.
About Katie King
Katie King is a published author, keynote speaker and management consultant, recognised for her work in AI and digital business transformation, as well as marketing and STEM. Katie regularly advises CEOs and MDs, as well as HR, sales and marketing teams, on adapting to Business 4.0. Over the past 28 years, she has coached and advised numerous leading brands and business leaders, including Richard Branson, O2, Orange, Accenture, Harrods, Saatchi & Saatchi, PA Consulting, Arsenal FC, as well as NHS trusts, universities and many more.