
AI and the Arts: 5 Steps to a Responsible Generative AI Policy

by Katrina Ingram, Ethically Aligned AI, and Lisa Mackay, Rozsa Foundation



[Image: A robotic hand and a human hand reach toward each other, in a design inspired by Michelangelo's "Creation of Adam," symbolizing the connection between ethical AI and the arts.]

The arrival of ChatGPT and other generative AI technologies has led organizations across all industries to consider how they can use AI responsibly. Yet drafting generative AI usage guidelines isn't just about creating a policy document. Ironically, ChatGPT could probably do that for you! The real opportunity for organizations is to develop a deliberative ethical process that can guide decision-making about a range of AI technologies, including generative AI. It's about:

  • thoughtfully considering organizational and societal values

  • understanding how the technology might impact your organization and community

  • reflecting on these impacts in light of your values

  • taking all of this information into account to provide policy guidance for staff

We are sharing our journey to help other organizations that are facing similar questions about AI and may wish to undertake their own ethical process toward a responsible AI usage policy. Many ethical issues go into making AI technologies, including environmental costs and exploitative human labour, in addition to a host of questions related to the use of the technology. It's important to try to understand the full range of issues.

Step One: Start with your organizational values

Katrina: Grounding a discussion about the responsible use of AI in your organizational mission, vision, and values is an essential first step. The Rozsa Foundation has done a considerable amount of work to articulate its values, and we were able to refer to this work to centre our discussion in light of the Rozsa Foundation's commitments to equity, integrity, partnership, growth and rigour.

Lisa: Starting with the values we had determined during strategic planning was a great way to begin to bring the immense world of AI into focus for us. Using each value as a lens through which to look at the use and implications of AI made it more applicable to our organization and the work that we do. It also helped us to review and recommit to the values we had chosen as we talked through the scope of work we do and how AI would intersect with our world.

Katrina: If your organization has not already done this work, it is an important first step to undertake before moving ahead. Understanding your values can help provide the "hard lines" your organization will not cross and clarify which behaviours you may wish to amplify or avoid in the context of your use of AI systems.

Lisa: It was through this conversation that we identified several of our "hard and fast" approaches to AI – namely that, as a foundation dedicated to serving and supporting the arts community, we would never use AI to replace the work of an artist. This became the basis for the commitments that followed.

Step Two: Do your homework to learn about AI

Katrina: What does your organization know about AI technologies and how they work? This step involves doing some homework to better understand how AI systems are constructed, as well as the ethical issues in AI, so that you can make good choices about how these systems align with or run counter to your values. You can do some of this work on your own by reading credible sources. However, having an expert involved in the process provides a turnkey way to learn about AI and its ethical implications. The Rozsa team did a little of both: they were given a set of articles and videos to review in advance and then received further information specific to generative AI at the stakeholder engagement session.

Lisa: Having a marketing background, I have seen many articles and tip sheets enter my world examining how AI could help in advertising, content creation, and communications. Mark Schaefer in particular is a great resource who is immensely curious about the potential of AI in marketing. However, having Katrina send us some reading and guide us through the ethical implications of AI opened my eyes to the issues that AI raises overall in terms of ethics and values, and even what parts of our human existence we are willing to hand over to a machine.

Katrina: During this process, the Rozsa team learned about the ethical concerns related to how the data used to train these systems is gathered at scale without consent, a practice known as web-scraping. We discussed what this means for artists in terms of intellectual property and the various lawsuits that have been launched against companies like OpenAI and Stability AI. We also talked about the poor working conditions of gig-economy workers who label and prepare data for machine-learning projects or provide content moderation to help train these systems through human feedback. These workers are often exposed to harmful content that can cause trauma.

Step Three: Engage stakeholders

Katrina: Making space for a thorough discussion with key stakeholders is the next step in the process. Depending on the size of your organization, this might need to be more than one meeting. Team members shared how they are encountering AI in the context of their work as well as in their personal lives. We talked about the ways in which generative AI was being used, such as in social media posts or marketing copy, and debated the impact this had on the workflow. In addition to our own use of the technology, we grappled with many bigger-picture questions: what it means to create art, the impact of generative AI on artists' livelihoods, whether these systems actually save time, and how we feel about receiving materials from partners (e.g., grant applications) formulated in whole or in part by generative AI.

Lisa: This part of the conversation was where we did a lot of learning and really wrestled with how we wanted, as individuals and as the Rozsa Foundation, to use and to limit AI in our work. As individuals we all had differing comfort levels with generative AI and the speed at which it is infiltrating daily life, and working through that as a team was really productive. The commitments we ended up with could not have been conceived of without this lengthy conversation.

Katrina: We also considered the potential benefits of these technologies for the arts community. We acknowledged that there is a difference between an artist having the agency to choose to leverage AI to create art and being harmed by the technologies, and that specific context, and even the type of art, might play into this decision. We discussed how smaller arts organizations might find some efficiencies and utility in using a tool like ChatGPT to help write a grant application, and how that might actually be equitable by providing a more level playing field.

Lisa: We realized here that there is a difference between art-making and art-enabling in terms of technology and AI. Ultimately, it is not our place to make a blanket judgement on the use of AI in art-making – that is for artists to grapple with individually and collectively. When it comes to supporting and facilitating art-making, which is ultimately what arts managers do, we can make our own organizational decisions and prompt others to do the same. We can raise the questions and the possibilities for our community to debate and provide the space for conversations. But it is not our place to determine outcomes for everyone. If people are using AI for grant applications, for example, they would need to determine whether it is more efficient and whether the results are acceptable. Every funder will have to determine if the applications meet the criteria, but it is not up to them how the application is created.

Step Four: Draft the policy

Katrina: Writing the actual policy should be one of the final steps in the process. At this point, you will have documented your conversation, which should provide key inputs for the policy document. You might find it helpful to use a template as a starting point, but ultimately it's important to align the policy with your organization's voice and brand. This is especially true if you intend to make your policy a public-facing document. Publishing your commitment to responsible AI usage is a wonderful way to hold yourself accountable; the real test is your organization's ability to live by the values expressed in the guidelines.

Lisa: Katrina showed us a template for an AI policy and some guidelines for developing our own. From there, I took the notes from our conversation and created individual commitments that pertained to the things we had agreed upon. I quickly realized that we needed an internal policy for staff to help guide us all in our use of AI in our work – being specific about the kinds of things we could use it for and the kinds of things we should not use it for. Having this matrix helps us consider individual applications and which side of our policy they fall on. We also wanted to make a more public commitment to the arts community. Having very clear assertions of what we will not use AI for will hopefully help us maintain trust and transparency, which is key in every part of our work. We wanted to make sure that the community knew that our mission to support artists and the arts sector remains paramount in every application, including the internal use of technology. Everyone had a chance to weigh in on these documents, and they generated further discussions. Finally, we arrived at the documents posted on our website.

Step Five: Prepare to iterate

Katrina: There are many unsettled questions involving generative AI. As legal issues are resolved or as society's ethical positions about these technologies evolve, it's important that you are prepared to revisit your position. You'll also learn from your own experience applying the policy how you feel about it in practice. Implementing the policy and living with it over time might lead you to identify gaps or to consider new contexts and use cases that didn't occur to your team during the initial process. You might also receive feedback from partners or other stakeholders that you wish to incorporate in an updated version. The work is iterative. Plan to revisit it and make adjustments as needed. Make sure everyone knows that "this is our policy, for now. We will revisit it regularly and adapt it as new information about these technologies arises."

Lisa: Every day seems to bring a new version of AI or a new court decision about AI copyright, and you can be sure that some of your commitments or policies will need to change or be removed, and that new ones will need to be developed. All we could do in the face of such rapidly changing circumstances was include a commitment that we would always be learning about new developments in AI and updating our policies as required. While we are happy with what we ended up with, we remain open to new conversations and feedback from the community we support.


Join us on November 16!

Mark your calendars! The Rozsa Foundation will host an arts community webinar on Thursday, November 16 at 2:00 p.m. to get into more detail and take questions about Ethical AI and the Arts. Katrina Ingram, Founder and CEO of Ethically Aligned AI and former Rozsa Foundation board director, will join Rozsa Foundation's Director of Storytelling Lisa Mackay for a conversation about developing AI commitments and policies, as well as other related topics. A Zoom link will be shared with our newsletter recipients closer to November 16. If you have any questions or topics you would like to see addressed, please email lisa@rozsafoundation.com.
