Using AI in your work with us

Our statement on the use of generative AI tools (such as ChatGPT, Gemini or Meta AI) for applications and reporting. 


AI tools are increasingly becoming a part of our day-to-day lives. In the workplace, they can support a range of tasks, shift working practices, and change the way information is used and shared.

As a funder, we recognise there are potential benefits to applicants, grant holders and consultants using these tools. Generative AI can save time, and it can make processes more accessible, for example by helping with translation or transcription. But we also know there are ethical concerns associated with using AI.

Ethical concerns about AI

AI tools present a number of ethical concerns, both in their development and their use. These include in-built biases that reinforce discrimination, and the use of data and imagery without consent in training AI models, both of which compromise the integrity of the outputs from generative AI tools.

We’re also concerned about the lack of regulation, the reinforcement of economic and power hierarchies, and the human and environmental costs that come from an increased use of AI technologies. 

Our view of AI in applications

As the use of these tools has become more common, we have noticed that similarities in language and in the points being made can make it harder to understand what is different or special about an applicant, or about the work they are describing. This is likely to disadvantage an application, whether for a job or for funding.

We value your unique perspectives, creativity and insights. Sharing these with us will help us to communicate more effectively, develop understanding, and ultimately to build a stronger relationship. We believe this is how we will most effectively work together to achieve our common goals. 

Our expectations on the use of AI

In this context, we want to be clear about our expectations in situations where you might consider using generative AI tools.

If you are applying for a job, consultancy or grant, or are working with us as a consultant or grant holder, we ask that you:

  • use generative AI tools responsibly, following relevant legal and ethical standards where these exist or as they develop;
  • follow any more detailed guidance that we provide as part of contracts or grant agreements;
  • take responsibility for the accuracy and honesty of the information in any application to us, in your work and in your reporting;
  • consider the tendency of generative AI tools to provide generic content, and make sure what you share with us conveys what is unique, valuable and distinctive about you and your work;
  • carefully assess the ethical concerns and risks associated with these tools and do not use them if there are risks you can’t mitigate; 
  • do not share personal or confidential data with any generative AI tools, unless you have explicit permission to do so;
  • do not share any confidential information that belongs to or relates to the Foundation; and
  • acknowledge any outputs from generative AI in your application or reporting to us.

As a funder, we are still learning about and exploring the potential implications of AI tools. We welcome opportunities to share and learn from others, and we anticipate our position may evolve along with the technology.