Guidance on how to prepare a use policy on generative AI for your workplace, and how this technology can be used to support HR functions
While AI is already established in some HR information systems, spreadsheet software and learning platforms, as well as in search engines and social media apps, the recent shift puts hugely powerful, transformative tools directly into the hands of the end user. This is distinct from AI within enterprise-level tools, which are usually deployed as major projects in an organisation, with testing, risk and security assessments, robust governance and controlled access. By contrast, personal AI tools are unsupervised and unguided, with broad and largely unrestricted access.
Sensational headlines have warned about the potential for cheating, the destruction of jobs and even human extinction. As a result, people experimenting with generative AI often do so in secret for fear of getting into trouble, or avoid it altogether.
This is a fast-moving area. At the time of writing, new apps such as Microsoft Copilot are preparing to launch, which is likely to bring the power of generative AI to internal data. Some of the examples discussed below explore how use could be extended as these new apps become available.
Shaping an AI-use policy
The rapid emergence of generative AI means every organisation needs clear ‘road rules’: a use policy that sets out for employees what is and isn’t acceptable.
Many of the risks of AI may already be covered by existing HR policies, but a specific AI policy will provide much-needed clarity and help avoid mistaken assumptions. Any new policy or approach should also reflect the landscape and culture of your organisation. There are three core stages:
- Determining your position and philosophy
- Using this to outline your policy
- Communicating and implementing the policy.
Position and philosophy
Consider your organisational culture. If you have values or behaviours that encourage initiative, curiosity and experimentation, then your AI policy should reflect these. If the environment is more tightly controlled or regulated, then your policy may need to reflect that.
Determine why you want an AI policy. Consider the areas of risk management for the organisation, particularly regarding data security (eg ensuring the organisation’s data is not inadvertently placed into the public domain), accuracy and quality assurance (eg ensuring employees don’t rely solely on outputs from AI tools but validate the generated information), control (agreeing what is and isn’t reasonable use), and fairness (eg ensuring that use of the tools guards against bias).
Articulate the benefits that you want to achieve. Is the aim to increase productivity, improve quality, reduce time spent on activities, or perhaps to accelerate learning? Setting out the desired benefits guides usage and reduces fears about job losses.
Understand current use in your organisation. This may be difficult if trust is currently low: people who are already using AI may worry that they will get into trouble. Nevertheless, it is important to get at least some sense of where, when and how people are using the tools.
Consider where AI may be deployed in different roles. Some of this will be driven by the approach of senior leaders, but it will also be influenced by the nature of the role. AI tools can support research, planning, writing business papers and policies, checking data for errors, writing code and more.
Maintain compliance with obligations and requirements. These can include data security and GDPR obligations, local legislation, and requirements from industry regulators. It is also critical to identify how the organisation will guard against algorithmic bias. Determine whether AI creates new legal requirements, and whether your existing policies already cover them sufficiently or need to be revised. The new policy may also need to articulate its connection to the policies already in place, reminding employees of their existing responsibilities.
Taken together, these elements of organisational culture, structure and current usage, along with matters such as legislative requirements, will guide the development of your policy.
For more information, see the CIPD’s dedicated guide Preparing your organisation for AI use.
This guide was written on behalf of the CIPD by Francis Lake, founder of Green Juniper, a people advisory consultancy.