What opportunities, pitfalls, challenges and possibilities does Artificial Intelligence present to the cultural sector?
Expert panel of guests:
Jocelyn Burnham - Founder, AI for Culture
Owen Hopkin - Director of New Technologies and Innovation, Arts Council England
Duncan Mann - Founder and CEO, HoxtonAI
Sean Waterman - Head of Intellectual Property, Naomi Korn Associates
Key links:
Arts Council England's Responsible AI Practical Toolkit
AI for Culture website
HoxtonAI website
Naomi Korn Associates website
Tom Dawson
[ 00:00:03 ] Hello, I'm Tom Dawson and welcome to the Arts and Culture podcast from the Association for Cultural Enterprises, the go-to podcast for thought leadership in the cultural sector. Today, we're turning our attention to AI in culture. Yes, it's one of the most talked about subjects right now, but this episode is about moving beyond the headlines to ask: where do we begin? AI is a young and disruptive technology, but the cultural sector has navigated seismic changes before, finding ways to adapt, evolve, and keep delivering on its mission. Ambiguity and overwhelm are natural, so we wanted to have an honest conversation about practical first steps and what AI really means for our work. Later, I'll talk to Duncan Mann, CEO and founder of Hoxton AI, about some practical applications. And then I'll be joined by Sean Waterman, head of intellectual property at Naomi Korn, to look at the legal, copyright and IP implications.
Tom Dawson
[ 00:00:58 ] To start, I'm joined by Jocelyn Burnham, founder of AI for Culture. Jocelyn works at the intersection of culture and technology, and from a neutral standpoint, helps museums, galleries and heritage organisations understand and experiment with AI in a safe, creative and responsible way. Alongside her is Owen Hopkin, Director of New Technologies and Innovation at Arts Council England, who is part of the team behind the Arts Council's new AI toolkit.
Tom Dawson
[ 00:01:30 ] Owen, if I could maybe come to you first. I mean, you've been involved in shaping Arts Council England's AI policy and toolkit. What do you think are the biggest opportunities and risks AI poses for the cultural sector as a whole strategically?
Owen Hopkin
[ 00:01:44 ] The obvious thing to say is that, you know, the rise of artificial intelligence, particularly the launch of ChatGPT and generative AI, raises really significant questions around intellectual property.
Owen Hopkin
[ 00:01:58 ] You know, how artists guard against that, or allow people to use it in the right way and are remunerated properly. I think on that front we were happy to see the government's consultation come out a few months ago, which we responded to, and we'll see what comes from that. One of the big things that came through the consultation at the Arts Council, when we were looking at the policy, was the environmental impact that AI tools have. You know, it's significant. Trying to reconcile the opportunities with that challenge is quite a big one. In the world of Gen AI, what does creativity actually mean? Quite a big, esoteric sort of question, but it has quite practical implications for the Arts Council in terms of what we can fund, and how much use of an AI tool is valid and good and creative and how much isn't. So I think that's quite a big one. But then, you know, flipping to the opportunities, I think it does present some opportunities around new forms of creativity, how we use AI
Owen Hopkin
[ 00:02:59 ] to present work in different ways, present brand new work. For a large organisation like the Arts Council and the sector generally, there are obviously lots of potential operational efficiencies that we could use. We are stretched as a sector and as workers within it, so anything that could... free up some time to allow us to actually get on with the art, or in our case, actually get on with the funding, could be a good thing.
Tom Dawson
[ 00:03:23 ] Jocelyn, from your perspective, working directly with a lot of organisations on AI and AI education, what are you hearing from the leaders you're talking to about their kind of hopes and fears around AI, and actually what it means for them and their jobs?
Jocelyn Burnham
[ 00:03:38 ] With organisations, often, I think it's fair to say, the first theme that's being discussed is efficiency. And what I find very interesting is seeing how quickly that conversation moves away from efficiency. The more AI is explored within an organisation, the more I'm seeing it shift to value, and discussions around what actually makes valuable work, what makes satisfying work, what makes us enjoy doing our jobs and enjoy working specifically in this sector rather than a different one. I think the idea of efficiency is often a bit of a catch-all, but you also lose a lot of uniqueness, both of the sector, of the organisation, and of the individual, when you just think about efficiency. So what I tend to see is that efficiency is where things start. Then it goes into value, maybe ethics, sort of representation of workers, of creatives. And then maybe it goes a little bit more into innovation. So rather than just looking at what existing tech can do and how it interacts with our work, we get to a point where we're thinking, actually, how can we put our own stamp on this?
Jocelyn Burnham
[ 00:04:43 ] Not just in a way where it necessarily positions us as being positive about the tools and the tech, but innovation even in how we react against it, how we protect ourselves, our copyright, how we have our learning journeys, which are really unique and can allow us to have a complicated and nuanced relationship with different parts of the tech that are emerging. As far as concerns and risks go, there are, of course, many: often data concerns; with artists, of course, it's copyright; broader cultural concerns about what it means when, you know, the majority of internet traffic is bots and we don't know what's real and what's not. So it goes into lots of different places. And as Owen said, it goes very much into the philosophy of what is creativity, what do we champion in our sector, and how does that interact with the funding models or the business models of how we survive. And rather than organisations finding some AI innovation which helps them across the board, I'm not really seeing that. What tends to be the case is it's much more specific: maybe one member of one team finds something which works for them.
Jocelyn Burnham
[ 00:05:47 ] That I think is where it gets really interesting because we have to sort of embrace quite how different we all are.
Tom Dawson
[ 00:05:53 ] Given the pressures I mentioned at the beginning on teams and the demand for resources, do you think there's an issue in cultural organisations as to where this sits? Who does this sit with? Who owns this within an organisation? Because I talk to organisations about things like sustainability, and unless you're a certain size of organisation, you might not have a sustainability officer or lead. There seems to be a parallel with a policy area like AI. Is there someone in the organisation considering the policy or driving this?
Owen Hopkin
[ 00:06:21 ] Definitely. You know, I think one of the big... and important decisions that we made in putting together the AI policy for the Arts Council was making sure that it stayed within the enterprise and innovation team, which was outward facing as well as inward facing. Because I think the conclusion that we came to quite quickly was that thinking about policy and how to implement it, as well as how to make that policy values-driven, is not a technical task or an IT document. It is a document and a task that should reflect and involve the organisation as a whole, because the technology that we're talking about is so pervasive and it does require... some thought around how the organisation or an individual's values marry up with the use of those tools and that technology in certain situations. So it was actually very, very important that the team that was running it inside the Arts Council, the people who were pushing forward with the work, had that broader perspective rather than... for instance, kind of a narrow data governance or a narrow IT sort of perspective.
Jocelyn Burnham
[ 00:07:27 ] That also really makes sense with what I'm seeing too, where there seems to be most benefit from not siloing this into one particular team or one particular lead, but finding ways to sort of increase the surface of conversation that comes in. You're seeing this in organisations like Art Fund, for example, which has an AI working group that comes together regularly to bring frontline and different perspectives around the tech into a central conversation. And then perhaps there might be some teams with more energy, more resources, and the ability to explore those, to centralise some training and education, that kind of thing. It's only a good thing when more brains are in the mix.
Owen Hopkin
[ 00:08:02 ] One of the things that we established when we were doing this piece of work at ACE was to set up two new governance groups. One was the oversight group, made up of 12 directors from different parts of the organisation. But the other governance group was this AI staff reference group, made up of between about 20 and 30 people from the Arts Council, different business areas, different levels of seniority, but crucially, different attitudes to AI. So we were recruiting people because they liked it and saw the potential, but also deliberately recruiting people who weren't at all supportive, to make sure that all of those people and those different constituencies were asking really important questions that reflected how the organisation as a whole was thinking about it. And the policy that we published definitely wouldn't have been as even-handed had we not done that.
Tom Dawson
[ 00:08:58 ] So thinking about that Responsible AI toolkit from Arts Council England, Owen, where would you recommend people start if they're feeling overwhelmed by the subject or the pace of change, using that toolkit?
Owen Hopkin
[ 00:09:09 ] I think having a look at the resources is actually a good place to start. One of the key people that worked on these resources and on this work with us was Dr Una Murphy from Goldsmiths University, and she came to us via a Braid Fellowship, which is funded by AHRC, the Arts and Humanities Research Council, and administered by the University of Edinburgh. Una's put the documents together based on the journey that we've been on, and I think she's done an incredible job in making the resources very, very user friendly. They don't require a lot of expertise or knowledge of AI, either technical or general. And even though they reflect our journey, which is the journey of a fairly large organisation, I think the principles they highlight are pretty transferable, regardless of the size of the organisation. So actually, I think beginning with some of those resources would be a pretty good start.
Jocelyn Burnham
[ 00:10:02 ] One way which I think works really well is obviously my whole thing, about self-experimentation, playfulness, that kind of thing, as a way to learn and break through anxiety. But there are lots of ways. I think it's really nice to acknowledge, right at the top, how emotionally charged this can be, especially for those in arts and culture.
Tom Dawson
[ 00:10:18 ] Yeah, great point. When this comes up at any of Cultural Enterprises' events, or in conversations or networking, the word that certainly comes up is overwhelm, or confusion. I think people feel like they should be engaging with this subject, but they don't know where to start. I mean, are we just not there as a society yet? Is that sense of what this actually means for us as a society not quite there? The message is, it's okay to feel like that, because we're all grappling with what it means for our daily lives, not just our organisations. Do you think that's a fair observation?
Jocelyn Burnham
[ 00:10:50 ] I certainly think so. And I think that's okay. I'm still overwhelmed by the internet every single day. I don't think that's going away. I think it's okay for us to have our squishy human selves be part of this and not need to predict the future, not need to know where it's all going. What I think isn't necessarily great is fear. I think fear is a block for learning. Fear is a block for agency. I think criticism is good, but fear usually isn't. So I think that anything that gets you away from fear towards interest, towards curiosity, and then towards collaboration and good conversations is probably going to be a good thing. And then, within that, just trying to support ourselves the best we can. The culture sector is a fantastic place to have those conversations. We're good at having them. There's no reason we need to treat this like any other piece of technology. We can be a little bit more human about it. And I think that's actually a fantastic strength and one we should really champion.
Tom Dawson
[ 00:11:40 ] Are there any practical examples of tools that you can use in your day-to-day life which actually kind of unlock this theoretical idea and make you go, 'Ah, actually, this is what this is for'?
Jocelyn Burnham
[ 00:11:51 ] Working with data sets, for instance, I think is a good example. So perhaps a complex data set, something which isn't sensitive, which you're perfectly fine to upload to the internet, and using it to increase your understanding of that data: the relationships, the correlations within it, the biases of it as well. Using that for sort of self-education about the data that you already have, I think, is really interesting to explore. Also AI for coding, I think, is one which, again, I don't try to predict the future, but I would be surprised if that isn't something we see more experimentation with, as the bar to entry for creating applications or experiments or kind of interactive things lowers and lowers and lowers.
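A minimal, hypothetical sketch of the kind of non-sensitive data exploration Jocelyn describes. The file and column names below are invented for illustration; the point is simply that a few lines of Python with pandas can surface the relationships, correlations and possible biases in data you already hold.

```python
# Minimal sketch: exploring a non-sensitive dataset you already hold.
# "daily_visits.csv" and its columns are hypothetical examples.
import pandas as pd

visits = pd.read_csv("daily_visits.csv")  # e.g. columns: date, visitors, shop_spend, is_school_holiday

# Pairwise correlations between the numeric columns (relationships in the data)
print(visits.select_dtypes("number").corr())

# One quick bias check: how over- or under-represented are school-holiday days?
print(visits["is_school_holiday"].value_counts(normalize=True))
```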
Tom Dawson
[ 00:12:30 ] Thinking about the commercial side of organisations, particularly thinking about where Cultural Enterprises is coming from, there's an opportunity there to maybe analyse and understand
Tom Dawson
[ 00:12:41 ] audience movement, behaviour, spend?
Jocelyn Burnham
[ 00:12:44 ] It's a possibility. You know, I'm as curious as anyone else about where people find value in that, and whether it actually does lead to good insights. My suspicion is it's going to lead to something. At the very least, it's going to lead to you having more competency over the knowledge of your data and how it relates to each other. So I'm always most optimistic when I think about how AI can be used for learning about what you're already working with, rather than perhaps just producing something entirely new. I think that's interesting to explore. And those are the kind of experiments I'm often seeing people do, where they're working with large datasets or their Google Analytics, something like that, to find those correlations in natural language. That seems to me like an obvious place to start when you're exploring AI with data.
Owen Hopkin
[ 00:13:26 ] I kind of lose count of how many people have that fear of an entirely blank page when they're trying to write something, and they know what they want to write, but they just can't get it down. And I think the power of AI to help with that first draft, to cut through that deadlock, is enormously powerful and is obviously very, very easy to do. I had a conversation with a large museum a few months ago, and on that data point, they'd built something that predicted audience numbers over weeks, months and years. They'd had that for six months, and the AI model that they'd used was actually pretty accurate. But to take it back to even simpler steps, Gen AI creating some sort of content to help you move forward, with either your promotional needs or copy for the website or whatever, is straightforward and a very easy win.
Jocelyn Burnham
[ 00:14:18 ] Something which I'm finding really interesting to explore is neurodiversity and AI: how diverse all our brains are, and the things that we individually find challenging, individual programmes, individual subtasks within tasks; and the interesting kind of overlap that happens when people might be getting past those blocks with AI, using it in such a way that allows them as an individual to be more productive in a way that nobody else would think of. Where you can use it to get past mental blocks is really interesting to explore too.
Owen Hopkin
[ 00:14:49 ] On that accessibility piece, that's the game changer and that's the large opportunity. We have something at the Arts Council, a scheme to help people who are neurodiverse or have certain accessibility needs, and we have a menu of AI tools that can allow them to do things that would have been very, very difficult even six months ago. And on that point of adoption...
Tom Dawson
[ 00:15:09 ] In terms of models of transparency, of telling audiences when AI is being used, are there any good examples of organizations being transparent about that that either of you have come across?
Jocelyn Burnham
[ 00:15:20 ] I mean, I'm hearing people talk about, you know, having a page on your website that lists every single AI tool you might have used, or an agency you work with might have used. Sure, that's transparent, but in the real world, how useful is that? Is it sustainable, too? So I think it's fair to say that this is a piece of work that we're going to be using our minds on. And there's probably not something at this point, personally, as one person, that I would point to and go, yes, that's best practice. I think best practice is having the conversation in the first place.
Owen Hopkin
[ 00:15:46 ] It's moving very, very quickly as well. You know, we're getting to a point where it's going to become very, very difficult to tell when an AI has been used in a certain process, because it's sometimes hardwired into the applications, into the business applications that we use, and we don't have a huge amount of say over whether that's the case or not. From our perspective and in terms of our policy, what we're trying to impress on staff members is just to make sure that there's always a human in the loop whenever you're explicitly using an AI tool. If whatever's generated is sufficiently AI, then we need to be open about that. But it's a very, very difficult line to draw between when you should be open and when you've just used it as a starting point. It's a very difficult question.
Jocelyn Burnham
[ 00:16:32 ] More than once, I've been in a session with senior leadership where there's been an idea of, 'OK, maybe we'll have a blanket policy. We'll be very open that we don't use AI on our website, on our social media channels.' The difficulty is, as we all find, that the distinction between AI and non-AI becomes increasingly blurry the further you go into it. Some people would say that using modern forms of spellcheck is AI. Or taking pictures with certain types of cameras, where the lighting is algorithmically adjusted to make it, you know, a more preferable image: that's AI. Are the templates we use AI? You know, it's very, very challenging to draw that line. It's not impossible to have a red line. I think Art Fund does a good job at this; they've put their AI policies online and they're very transparent about that. But it's just interesting to reflect that even agreement about what qualifies as AI production is an ongoing conversation, and one which again is probably going to be different for different individuals and perspectives.
Tom Dawson
[ 00:17:27 ] You know, hearing stories of people advertising front of house or visitor experience roles, who get hundreds of CVs and then have to work out which ones were written by ChatGPT and which ones were actually written by someone. So that education piece seems quite important.
Owen Hopkin
[ 00:17:43 ] I think the education piece is probably around a different aspect of it, because the approach that we've taken at the Arts Council is to be very open with the sector and those seeking funding from us, which is that we don't use AI in our decision-making process. But we can't stop anyone from using AI tools to write applications to us. And I think that's the correct approach, because I don't think that's an AI question. Pre-AI, organisations or individuals may have had bid writers writing applications for them to submit to the Arts Council, and I don't see a huge amount of difference between someone using an AI tool and paying someone to write the application for them. So the education piece that we're trying to do is to just let them know that we're in an age of Gen AI, and lots of people may also be using AI tools to write applications to us, which means that lots of these applications may sound the same, use similar language, and make it very difficult for us to discern the creative idea or grain within it all. So we're impressing upon them that they have this tool
Owen Hopkin
[ 00:18:51 ] that can help them, but also to be aware that that might not help them in the long term because unless they get a handle on it and take responsibility for it, they may be sending us a similar type of... application as many, many others, which will ultimately hurt the application and their goal of getting the money in the first place.
Jocelyn Burnham
[ 00:19:10 ] I also think there's something here about how we frame what AI education means in the culture sector, and perhaps how we can widen that so it's not necessarily a conversation about what tools or tech we want to use in our own work, but also about how we interact with it emerging in society and how it increases the surface of problems we can have, things we can experience. There's a client of mine who doesn't use AI on their website or in their marketing materials, but increasingly they're receiving promotional images that might have been produced with AI. This leads to a conversation about, okay, what's our relationship with these artists, what do we tell them we're comfortable with, how does that work? And that itself requires knowledge and confidence to have those conversations. So repositioning AI education in culture away from how we use it now, and towards how we work with a society which is being influenced by it, I think is a nicer way to start, because it also introduces more room for criticism and for people to feel safe to enter those conversations, not thinking that it's going to be a bit of a love fest for AI, which I think is often the impression when people think about AI training.
Jocelyn Burnham
[ 00:20:23 ] Exactly as Owen said, I don't think it's a conversation owned by one department or one particular team. I think that we all have a right to have this conversation, even if we think that, you know, it's very technical. Often it's not. Often when you get really down to it, we're talking about what is value, what is a good workplace to operate in, and what we want our sector to do.
Owen Hopkin
[ 00:20:44 ] Throughout the process of putting the policy together, we did a lot of consultation, and what became very, very clear is that no one has all of the answers. If they tell you they do, then they're probably trying to sell you something. But because no one has the answers, I think the most authentic, credible and defensible place to get to with any sort of approach to AI is one that's based on your values, because that is unassailable, and you can stand behind that regardless of what your literacy is in digital technology or AI generally. And that's where that broader conversation with the rest of the organisation and other voices becomes very, very important, to make sure that it is reflective of different attitudes and thoughts towards AI, as well as reflecting the organisation's or the individual's values in their approach to it.
Tom Dawson
[ 00:21:41 ] This episode is brought to you by King & McGaw, trusted by the world's leading cultural institutions for over 40 years. From the National Gallery to MoMA, they craft beautifully bespoke art products, all designed and handmade in Sussex. While many companies are turning to AI, at King & McGaw, true craftsmanship remains at the heart of their work for cultural institutions. Every postcard, greetings card, framed print, and poster is expertly colour-matched by skilled printers and carefully assembled by hand. Visit kingandmcgaw.com to find out more.
Tom Dawson
[ 00:22:15 ] Next, I sat down with Duncan Mann, CEO and founder of Hoxton AI. Duncan's company develops AI tools that capture and analyze visitor data in cultural venues. So we talked about some practical examples of how the technology is already being put to work on the ground.
Duncan Mann
[ 00:22:31 ] Whilst we're an AI company, our fundamental philosophy is that AI is not an end in itself. It's a tool by which you should be achieving your normal business objectives. You know, I think a lot of people come to it saying, 'I need an AI strategy.' I wouldn't think of it that way. I'd think of it as: whatever your business objective is, AI might be a tool that gets you there better, faster, cheaper. And so when we're thinking about people using AI in effective ways, it's generally about getting the key bits of information about your space, understanding how your space is used and how people feel about it, and AI is the tool that we use for some of those important bits of information. There are two bits that we focus on. One is space usage. So how many people are in the building? How many people are in gallery A versus B? Or at what time does it get most busy? The tool we use for that is effectively anonymized computer vision and a load of AI sitting in the cloud, which helps us produce really accurate data and helps you understand exactly what time it got busiest, which spaces were busier than others, when peak entrance numbers and flow were, that sort of thing.
Duncan Mann
[ 00:23:32 ] So AI is the tool; the output is actually very practical. Do I need more staff in gallery A than B because there are more people? You can make pretty transformative decisions and improvements to your operations just off the back of that data. Tate Modern is a good example. They obviously have lots of amazing exhibitions around their site, and we did a nice case study with them on the Rodin exhibition, where they had certain periods when they were selling out, having maxed out the ticket slots that they had. But we gave them occupancy data, driven by this anonymous, privacy-conscious computer vision system, which allowed them to see that even when they were sold out, they still had about 20% to 30% excess capacity. And that's just because some people had left earlier than estimated. And that meant they could actually release more tickets at the times when it was busiest, and they could think about what to do at the times when it was less busy, what other events or campaigns they could run, and just make much better use of the space. From a ticketing perspective, they could actually get more people in to see these amazing exhibitions. And then, on the sort of counter side, not just in museums but in other leisure attractions, there's Center Parcs.
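A minimal sketch of the occupancy-versus-capacity comparison behind the Tate Modern example Duncan describes. All numbers and names here are invented for illustration and are not HoxtonAI's actual system or data; the idea is simply that a live occupancy count can reveal headroom even when a slot shows as sold out.

```python
# Hypothetical figures: a "sold out" ticket slot compared against live occupancy.
venue_capacity = 1000          # maximum comfortable occupancy for the exhibition space
tickets_sold_this_slot = 1000  # the slot is nominally sold out
live_occupancy = 720           # people actually inside right now, from anonymised counting

headroom = venue_capacity - live_occupancy
print(f"Occupancy is {live_occupancy / venue_capacity:.0%} of capacity")
if headroom > 0:
    print(f"Up to {headroom} extra tickets could be released for this slot")
```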
Duncan Mann
[ 00:24:35 ] They had a similar approach, but what they wanted to do was understand how to allocate staff. They have staff they can move around. Those massive swimming pools they have are really amazing, and when they get really busy, they can fluctuate between a couple of hundred swimmers and 1,500 swimmers. What they wanted to do was allocate more staff when it was busy and fewer staff when it was less busy. You can't do that if you don't know how busy it is, and until we came along, they didn't have that occupancy number. So now they have real-time dynamic allocation of staff according to a real-time measure of busyness, and that means that the swimmers get a better experience. So there's a couple of fairly practical examples. Another one, actually, is around the new launch of the feedback system that we're using. We've got a natural language feedback system, called Talkback Insights, that is quite transformative in the way that it allows people to understand what visitors think of the space. It's connected to the cloud, it's all anonymized, and there's a screen that says, can you help the museum, tell us what you think about it, or what did
Duncan Mann
[ 00:25:36 ] you enjoy, what would you like to change? And there's either a microphone or, in some cases, a red telephone that people can pick up and just leave a voice note. People just speak, and they speak in a very different way to the Google and TripAdvisor reviews. They just speak naturally, in any language, and they speak at length in many cases, with quite nuanced feedback about, you know, oh, this bit of the stairs needs better signage. And they're giving really honest, direct feedback because it's in the moment. And all the system is doing is a very practical use of AI.
Duncan Mann
[ 00:26:05 ] It's transcribing everything into text, combining it all across the thousands of bits of feedback, and then giving you the core themes. So here are the five things that people wanted to improve; here are the five things that people loved about the site, but they think that, you know, the cleanliness could be improved, or whatever it might be. Across thousands of bits of feedback, it's quite hard for a person to read through all of that, transcribe it and make sense of it, but this is the sort of thing those AI tools are really good at: making that process easier and then letting the team actually take action on it. The Cornwall Museum actually uses that really well, both for improving the experience for visitors and for things like grant applications and funding, because it gives you the evidence to say, we made these changes, this is what happened, and here's some actual evidence as to how it changed things. And I think the thing that we were most surprised about was that, because this is basically anonymous, people just pick it up and give their honest opinion, however they want, as freely as they like, we get something like five to ten times as much feedback as you do from Google reviews.
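A minimal, hypothetical sketch of the aggregation step Duncan describes: once voice notes have been transcribed (the speech-to-text stage is assumed to have already happened), recurring themes can be counted across many comments. The themes, keywords and comments below are invented for illustration and are not how Talkback Insights is actually implemented.

```python
# Minimal sketch: counting recurring themes across already-transcribed feedback.
from collections import Counter

transcripts = [
    "Loved the exhibition but the stairs need better signage",
    "The cafe queue was long and signage to the lifts was confusing",
    "Brilliant day out, very clean, friendly staff",
]

# Illustrative theme keywords; a real system would infer themes rather than hard-code them
themes = {
    "signage": ["signage", "signs"],
    "cleanliness": ["clean"],
    "staff": ["staff"],
    "queues": ["queue"],
}

counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

# The "top themes" a real system would surface as its headline improvements
print(counts.most_common(5))
```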
Tom Dawson
[ 00:27:00 ] And I've seen those in action. They're great. Demythologizing what these tools are for and how you can use them, I think, is really important.
Duncan Mann
[ 00:27:09 ] I think it's crucial. The number of people who've come to me and said, 'I need an AI strategy. What AI should I put in?' I do think that's almost backwards. I think you have to have a business strategy, or some objectives for the business, before you can even consider AI. AI is just a functional tool that might help you get there faster. But I definitely caution against implementing AI tools just because it's AI, and certainly don't implement anything where you don't really understand why you're doing it or how it gets to the result it gets to. You know your business better than anybody who's selling you some solution, so you should really understand what it is you're trying to do, and don't be bamboozled. It should be understandable what the outcome is and why it's valuable to you.
Tom Dawson
[ 00:27:46 ] Thinking about some of the myths around AI and machine learning, what are some of the ones that you encounter the most that you could debunk?
Duncan Mann
[ 00:27:54 ] The myths are generally around the sort of limitless capability. I mean, that's not the case. I think a lot of people assume it can do everything, and in reality, it can't. It gives you a very reasonable and sensible answer to most questions; it seems like it's on the right page. The danger is more that, because people assume it's very powerful, they assume it's always correct. And so, rather than use it as a sounding board or a way to prompt new responses, they take it as read that the output that, let's say, GPT gives you is correct. That's quite dangerous, because you just regress into this world where truth is determined by the large language model, and that's not good. It also relates to another one of the worries, particularly in museums and histories and things, that they get rewritten by these tools, where people rely on them and it becomes a self-fulfilling loop: people asking questions of these machines and then passing the answers off as truth, and then that becomes the training data set for the next set of machines. So I think that's a bit of a risk. People also worry that it'll replace lots of jobs immediately.
Duncan Mann
[ 00:28:57 ] I think the truth is, particularly in most service-based, people-based roles and experience-led attractions, the experience will always be largely driven by the interactions with people. So I think it's quite important as well that you don't replace the people that make these museums and attractions alive, because that is what makes them special. It's just about making the right decisions from the leadership positions so that we don't make that mistake, because what makes a space amazing is the people that are in it and the sort of information that's within it.
Tom Dawson
[ 00:29:25 ] Brilliant advice, thank you Duncan. And looking ahead a bit, what AI applications or tools do you think are on the horizon that could be used in the cultural sector?
Duncan Mann
[ 00:29:35 ] There's a number of really practical ways you can think about it. There's things that could practically improve the experience in attractions. For example, for people who are visually impaired, giving them audio descriptions of what's in front of them, things like that. So enhanced accessibility is quite a good one. You've got things like augmented reality or virtual reality making the experience richer for the people that are there, with more context, which is really helpful. In that context, we were doing some interesting work with Birmingham Museums, where we're thinking about other ways we can capture spoken language. So oral histories, you know, that's a really natural thing to do with our feedback system: you can tell your story, and we can capture them, transcribe them and understand them. In fact, we did a very interesting thing there. When Ozzy Osbourne passed away, they repurposed the Talkback Insights machine as a condolence message system, so people could just leave a voice note of condolence and our AI system would transcribe them all and compile them. And we got something like two and a half thousand condolence messages over the space of about ten days. There are things like that where you can iterate on what we've already done and make it richer or apply it in different ways.
Duncan Mann
[ 00:30:38 ] We see the future of where our business will go as having a natural language dialogue with your space. So understanding, you know, where is it busiest? What can I do to drive better footfall upstairs into this gallery? Why is the cafe underperforming on a Tuesday? It should be almost a natural dialogue with the space, pulling all the different bits of information together. You know, what do people feel about it? When it's hot weather outside, how does that change the feedback inside? So we're using the tools to gather the information day to day and bit by bit, but then you should have a really natural way of interacting with that. We've actually built early versions and prototypes of this interface, where you can speak to the space and understand what changes drive what impact. That's where it's got to go: less about a dashboard giving you a number, and more about value and interaction with, effectively, a living, breathing digital twin.
Tom Dawson
[ 00:31:26 ] That's fascinating. We're sort of overwhelmed with information. It sounds like these tools are there to kind of aggregate that and distill it into something which is digestible and useful, I suppose.
Duncan Mann
[ 00:31:36 ] I think that's absolutely key. I think there was a bit of a data explosion before, where people were saying, you know, we need to gather more data, gather more data. Yes, obviously data is useful, but at some point you've either got to choose the specific data you want to look at, or you've got to find a way of aggregating all the different bits of data. Making use of that often requires a powerful system to distill it into something that's actionable. It should be like a 30-second read and something you can share with your team. That is actually the output of that Talkback Insights system: a tiny report that gives you a 30-second summary, the top five improvements, the top five things that people loved, and a few specific quotes. And then it gives you the sentiment, say 83% positive, the sentiment split, and then the language breakdown. So, just something that you can look at quite quickly: a few pages of report, very digestible and shareable. Otherwise, you have a thousand bits of feedback that you'd have to trawl through and make sense of. I think the key is to start small and start practical. Whatever you're thinking about your strategy and how you want to implement AI, I would always see if you can start small, see if you can get some information,
Duncan Mann
[ 00:32:38 ] and then expand from there. I mean, I think that's one of the biggest barriers: it takes a long time to get moving and test something. I think you should be able to kick the tires, explore what the output will be, and then move accordingly.
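As a minimal sketch, the digestible report Duncan outlines, a 30-second summary, top improvements, top highlights, a sentiment split and a language breakdown, could be carried by a small data structure like this. The field names and sample values are invented for illustration, not the actual Talkback Insights output format.

```python
# Hypothetical shape for the kind of short, shareable feedback report described above.
from dataclasses import dataclass, field

@dataclass
class FeedbackReport:
    summary: str                                                # the "30-second read"
    top_improvements: list[str] = field(default_factory=list)   # e.g. the top five asks
    top_highlights: list[str] = field(default_factory=list)     # the top five things people loved
    positive_sentiment: float = 0.0                              # e.g. 0.83 for "83% positive"
    language_breakdown: dict[str, int] = field(default_factory=dict)

report = FeedbackReport(
    summary="Visitors loved the collection and the staff; signage and cafe queues need attention.",
    top_improvements=["Better signage to the lifts", "Shorter cafe queues"],
    top_highlights=["Friendly staff", "Cleanliness"],
    positive_sentiment=0.83,
    language_breakdown={"en": 812, "cy": 45, "fr": 22},
)
print(report.summary)
```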
Tom Dawson
[ 00:32:52 ] My final guest is Sean Waterman, Head of Intellectual Property at Naomi Korn Associates. Sean advises cultural organisations across the UK on copyright, IP and rights management. So we looked at the legal and ethical implications of AI in our sector.
Tom Dawson
[ 00:33:09 ] Sean, thinking about AI as a disruptive force in society and the economy in general, do you think it's likely to be challenging for the cultural sector, particularly from a legal point of view?
Sean Waterman
[ 00:33:22 ] When you look at AI, it does disrupt and challenge some of our fundamental assumptions that relate to copyright: areas such as authorship, the idea of originality, and copyright ownership. It raises questions for our legal system, resulting in uncertainty about how copyright law is applied to AI. So, for example, one of the biggest issues everyone talks about is training AI on copyright-protected works without a license, and whether that infringes the exclusive rights that are granted to copyright holders. Cultural institutions hold large digital archives, and so if AI companies are scraping or mining these without permission, it does raise potential copyright, moral rights, database rights and licensing disputes. Another question is: do the derivative works generated by AI tools infringe copyright or the moral rights of creators? Another question: are works generated by AI protected by copyright?
Sean Waterman
[ 00:34:25 ] And if so, who owns the copyright? Is it the AI company, or is it the user of the AI tool? On top of that, you have jurisdictional complexities. Copyright is a territorial matter, and different countries are taking different approaches to how AI and copyright interact. So I see the challenge for cultural institutions as being around how they navigate these legal uncertainties. If they're using AI tools, working with AI companies, or permitting others to use AI on their collections, how do they ensure this does not infringe copyright, the moral rights of creators, or any contractual agreements that they might have in place? As well as copyright, we also have to consider personal data. So another consideration is: does the use of AI tools, and the way they process material that might contain personal data, comply with GDPR?
Sean Waterman
[ 00:35:31 ] But let's be optimistic. The cultural sector has faced similar challenges due to changes in technology, for example, mass digitisation projects and the rise of social media. Naomi Korn Associates helps cultural organisations draft policies and procedures that identify and manage the potential risks and ensure compliance with copyright and data protection legislation. So in my opinion, the starting point for cultural institutions should be to establish clear policies and procedures on their use of AI. Focus on what you have control over. It's all about how those policies align with their strategic objectives and also with their values. So as well as compliance, we need to look at the ethical issues involved as well. When drafting an AI policy, it shouldn't be standalone; it should be aligned with your other information governance policies, such as your IP policy, your IT policy, and your data protection policy.
Sean Waterman
[ 00:36:36 ] And as well as having clear policies and procedures, ensure staff are aware of the legal and ethical issues that are raised by using AI. Providing training, or access to training, can help. We recently created a CPD-accredited course on AI and information law, which deals with privacy and ethical considerations, and next on our list is to create a course that's all about AI and copyright, which we hope to launch early next year.
Tom Dawson
[ 00:37:10 ] That's all really helpful and good advice, Sean. Thank you. It's also important to maintain that positivity and optimism: as a sector, we have faced disruptive challenges in the past and have been able to come together and form a consensus.
Sean Waterman
[ 00:37:23 ] AI will present some great opportunities, especially around interrogating our collections and providing more data and information about them. It's not always just about generative AI and manipulating images and creating derivative works. There's a lot more to it than that.
Tom Dawson
[ 00:37:39 ] You've already mentioned the interaction and the tension between who owns what and what are the implications of AI scraping content, but then also whether AI content is subject to the same laws. If a museum's digitized collection is used to train an AI model, and then someone else generates income based on that collection, should the museum be benefiting financially? Is there any legal recourse to this? Are the existing laws kind of fit for purpose in this area, or is that still up for grabs?
Sean Waterman
[ 00:38:11 ] Let's break that down. So, if someone has trained their AI using a museum's digital collections without their consent, should the museum benefit financially? Well, they'll have to get in line, because there are the rights holders. If you are looking at it as a copyright infringement, you have to be the copyright owner. For a lot of cultural institutions, a lot of the works that they might publish online, for example, are themselves out of copyright, and then you go down the whole debate as to whether the images that they've created attract any new copyright. If you took legal action based on that, you'd have to, as an institution, prove that you own the copyright in those works, and that might fall flat. And if there have been derivative works created from that, well, who are you going after? Are you going after the AI companies?
Sean Waterman
[ 00:39:03 ] and taking legal action against them, or are you going after the user of the AI tool? You would have to prove that the derivative work copies a substantial amount of a work that is your copyright; I think that basically sums it up. One of the issues is transparency. It's not always very transparent, when AI tools are developed, what works have been used in the training, so proving that your works have been used to train the AI might be an issue. Database rights might be another avenue. Database rights only apply if a substantial amount of data from a database, which would include, say, a collections online website, has been extracted and then reused without permission; if it has, there could be a database rights infringement. But we return to the idea of territoriality. Database rights only apply in the UK, therefore they only apply if the extraction was done in the UK. Same with copyright: the training of the AI has to have been done in the UK for copyright infringement to apply in the UK.
Sean Waterman
[ 00:40:09 ] There is a legal case at the moment, Getty Images versus Stability AI, that is taking place in the UK. And one of the areas that we're looking at is that the case has fallen at one of the first hurdles, in that it was proved that the training of the AI was mostly done outside of the UK's jurisdiction. And so the case has had to adapt to look at secondary infringement: that the AI system, by being used in the UK, enables others to potentially create infringing derivative works. So yes, there could be avenues, but there's a lot of ifs and buts. And also, if you're taking on a large AI company, you're going to have to have deep pockets as well.
Tom Dawson
[ 00:40:57 ] Well, quite. Thinking about licensing, which is a very important income stream for a lot of our members, is there anything organizations can do around their licensing agreements, given that AI can now extract, remix, and replicate works and collections at scale, if they are the copyright holder themselves?
Sean Waterman
[ 00:41:14 ] With any licensing agreements, say for image licensing, you can have clauses written into your terms and conditions setting out whether training of AI on the licensed images is permitted. You might want to distinguish between commercial and non-commercial text and data mining, given that there is an exception for non-commercial text and data mining in the UK. Another area to look at, which many license agreements do, is the rights in derivative works. AI can generate outputs that look like new works but are heavily based on the original, so the licenses could clarify whether the creation of derivative works via an AI tool is permitted; if it is permitted, who owns any AI-assisted outputs, is it the institution, the licensee, or both; and whether attribution of the original collection is required. They could look at adding technical metadata safeguards, so requiring licensees to preserve any machine-readable rights metadata.
Sean Waterman
[ 00:42:18 ] That might include, for example, stating that training of AI is not permitted on distributed files, and contractually prohibiting the stripping out of rights metadata. And you could be looking at watermarking and digital fingerprinting to trace potential misuse, because that's going to be an issue: you can have all these clauses in your licenses, but how are you then going to track how that material has been used, and whether it has been used to train AI or in derivative works? In any agreements, you might also want to address liability and indemnity, so that if you're working with an AI developer or a publisher or a platform, the institution is indemnified against misuse of the collection in AI systems. Maybe one area to really look at is building new licensing models, and this is something that lots of bodies are looking at and have implemented: having a license specifically for the training of AI, which could perhaps be an additional revenue stream.
Sean Waterman
[ 00:43:19 ] In return for granting access to curated, high-quality datasets, you could charge a fee.
Tom Dawson
[ 00:43:26 ] Okay, so there are potential options there that would bring, presumably, some element of transparency to how an organisation is interacting with licensees, but also give it some element of control over an area where they might not normally have that.
Sean Waterman
[ 00:43:42 ] Yeah, but one thing to remember is that the text and data mining exception that currently exists in the UK, which allows text and data mining for a non-commercial purpose, does override any contracts that might be in place. So once access has been granted, as long as the text and data mining was for a non-commercial purpose, and the copies that were made for that purpose weren't shared or distributed in any way, then any contract you might have would be overridden by the exception.
Tom Dawson
[ 00:44:13 ] Just thinking internally, are there any liabilities cultural organisations might face with staff using AI tools for quite basic admin tasks, or areas where they themselves might be accidentally infringing copyright or using biased content? You talked about how important it is to have the right policies in place; is that where people should start?
Sean Waterman
[ 00:44:36 ] Yeah, I think having governance and policy is very important. It might be that you have an AI use policy for staff, so it dictates what tools can be used, only vetted and approved ones, for example. I mean, when you're in an organisation, generally, if you're going to use software, you're supposed to go to your IT department and check that it's OK, so it would be a similar principle to that. And also maybe looking at what uses of that AI tool are permitted: is it just for research, just for drafting or experimentation? And what uses are prohibited: are you going to publish those outputs without some form of review? How do you control what sort of data you are putting into an AI tool? You might want to restrict it to works that are out of copyright, for example, and be careful about personal data: are you uploading any personal data into an AI tool as well? Governance and policy is really important. Human oversight too: if you are going to publish any content that has been generated by AI, there should be some sort of human review just to check that it is in fact accurate. Interrogate what the AI has produced; treat the AI as an assistant rather than
Sean Waterman
[ 00:45:56 ] the actual author. You might also want to check any contractual safeguards that are in place. Choose AI providers that offer indemnities against IP claims, if possible, and always check the terms of service and who the liable party is. Then transparency and attribution: if you're using AI outputs in exhibitions, be transparent with the audiences. One of the key areas that I think gives us an advantage as a sector is that idea of trust and authenticity. If we're bemoaning other people for creating works generated by AI, we should be really open and transparent about whether we are doing so as an institution, so that people don't assume that an image is an actual, authentic part of our collections or our archives, for example. An approach that could be taken is having an ethics review of any projects that might involve AI, so, you know, having a project approval process. Training and awareness is obviously also important, so people understand what the risks are of using AI.
Sean Waterman
[ 00:46:59 ] And also, yeah, tie it into your data protection policies and procedures, definitely.
Tom Dawson
[ 00:47:07 ] Thank you to Sean and my other guests, Duncan, Owen and Jocelyn. And thank you to our sponsors, King & McGaw. Until next time, take care.