Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.
In its first leading judgment (decision of November 18, 2024, docket no.: VI ZR 10/24), the German Federal Court of Justice (BGH) dealt with claims for non-material damages pursuant to Art. 82 GDPR following a scraping incident. According to the BGH, a proven loss of control or well-founded fear of misuse of the scraped data by third parties is sufficient to establish non-material damage. The BGH therefore bases its interpretation of the concept of damages on the case law of the CJEU, but does not provide a clear definition and leaves many questions unanswered. Our German data litigation lawyers, Andy Splittgerber, Hannah von Wickede and Johannes Berchtold, discuss this judgment and offer insights for organizations and platforms on what to expect in the future.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Andy: Hello, everyone, and welcome to today's episode of our Reed Smith Tech Law Talks podcast. In today's episode, we'll discuss the recent decision of the German Federal Court of Justice, the FCJ, of November 18, 2024, on compensation payments following a data breach or data scraping. My name is Andy Splittgerber. I'm a partner at Reed Smith's Munich office in the Emerging Technologies Department. And I'm here today with Hannah von Wickede from our Frankfurt office. Hannah is also a specialist in data protection and data litigation. And Johannes Berchtold, also from Reed Smith in the Munich office, also from the emerging technologies team and a tech litigator. Thanks for taking the time and diving a bit into this breathtaking case law. Just to catch everyone up and bring everyone up to the same speed: it was a case decided by Germany's highest civil court, in an action brought by a user of a social platform who wanted damages after his personal data was scraped by a hacker from that social media network. That was done by entering telephone numbers, or simply trying out number combinations, into the platform's "find a friend" function, probably exploiting a technical fault. In this way, the hackers could download a couple of million data sets from users of that platform, which could later be found on the dark web. The user then started an action before the civil courts claiming damages, and the case was ultimately referred to the highest court in Germany because of the legal difficulties involved. Hannah, do you want to briefly summarize the main legal findings and outcomes of this decision?
Hannah: Yes, Andy. So, the FCJ made three important statements, basically. First of all, the FCJ provided its own definition of what a non-material damage under Article 82 GDPR is. It said that the mere loss of control can constitute a non-material damage under Article 82 GDPR. And if such a loss of control cannot be proven, a well-founded fear that personal data will be misused can also constitute a non-material damage under the GDPR. Both points are pretty much in line with what the ECJ has already said about non-material damages in the past. Besides that, the FCJ also made a statement regarding the amount of compensation for non-material damages following a scraping incident. This is quite interesting because, according to the FCJ, the amount of the claim for damages in such cases is around 100 euros. That is not much money. However, the FCJ also says that both the loss of control and a reasonable apprehension, including the negative consequences, must first be proven by the plaintiff.
Andy: So we have an immaterial damage, that's important for everyone to know. And the legal basis for the damage claim is Article 82 of the General Data Protection Regulation. So it's not German law, it's European law. And as you mentioned, Hannah, there was some ECJ case law in the past on similar cases. Johannes, can you give us a brief summary of what these rulings were about? And in your view, does the FCJ bring new aspects to these cases, or is it very much in line with what the European Court of Justice has already said?
Johannes: Yes, the FCJ has quoted the ECJ quite extensively here. So there was some clarification in this regard. So far, it had been unclear whether the loss of control itself constitutes the damage or whether the loss of control is a mere negative consequence that may constitute non-material damage. Now the Federal Court of Justice has ruled that the mere loss of control constitutes the direct damage. So there's no need for any particular fear or anxiety to be present for a claim to exist.
Andy: Okay. So we read a bit in the press after the decision: yes, it's a very new and interesting judgment, but it's not revolutionary. It stays very close to what the European Court of Justice has already said. The loss of control is something I still struggle with. I mean, even if it's an immaterial damage, it's a bit difficult to grasp. And I would have hoped the FCJ would provide some more clarity or guidance on what they mean, because this is the central aspect, the loss of control. Johannes, do you have some more details? What does the court say, or how can we interpret that?
Johannes: Yeah, Andy, I totally agree. So in the future, the discussion will most likely tend to focus on what actually constitutes a loss of control. The FCJ does not provide any guidance here. However, it can already be said that the plaintiff must have had control over his data in order to actually lose it. Whether this is the case is particularly questionable if the scraped data was already public, as in a lot of the cases we have in Germany right now, or if the data was already included in other leaks, or if the plaintiff published the data on another platform, maybe on his website or another social network where the data was freely accessible. So in the end, it will probably depend on the individual case whether there was actually a loss of control or not. And we'll just have to wait for more judgments in Germany or in Europe to define loss of control in more detail.
Andy: Yeah, I think that's also a very important aspect of this case that was decided here: the major cornerstones of the claim were established, they were proven. So it was undisputed that the claimant was a user of the network. It was undisputed that the scraping took place. It was undisputed that the user's data was affected as part of the scraping. And then the user's data was also found on the dark web. When I say undisputed, it means that the parties did not argue about these points and the court could base its legal reasoning on these facts. In a lot of cases that we see in practice, these cornerstones are not established. They're very often disputed. Often you perhaps don't even know whether the claimant is a user of that network. There's always, or often, dispute around whether or not a scraping or a data breach took place. It's also not always the case that data is found on the dark web. Even if the finding of data on the dark web, for example, is not a written criterion for the loss of control, I think it definitely is an aspect for the courts to say, yes, there was a loss of control, because we see that the data was circulating uncontrolled on the dark web. And that's a point, I don't know if any of you have views on this, also from the technical side: how easy is it, and how often do we see, that there is a tag saying, okay, this data on the dark web is from this social platform? Often, users are affected by multiple data breaches or scrapings, and then it's not possible to make a causal link between one specific scraping or data breach and the data being found somewhere on the web. Do you think, Hannah or Johannes, that this could be an important aspect in the future when courts determine the loss of control, that they also look into whether there actually was a loss of control?
Hannah: I would say yes, because, as already mentioned, the plaintiffs must first prove that there is a causal damage. A lot of plaintiffs are using various databases that list such alleged data breaches, and they always claim that this indicates such a causal link. And of course, this is now a decisive point the courts have to handle, as it is a requirement: before you get to the damage, before you can decide whether there was a damage or a loss of control, you have to establish whether the plaintiff was even affected. And yeah, that's a challenge and not easy in practice, because there's already a lot of case law on those databases holding that they might not be sufficient proof that plaintiffs were affected by alleged data breaches or leaks.
Andy: All right. So let's see what's happening in other countries as well. I mean, Article 82, as I said in the beginning, is a European piece of law, so other countries in Europe will have to deal with the same topics. We cannot simply come up with our German requirements or interpretation of immaterial damages, which are rather narrow, I would say. So, Hannah, any other indications you see from the European angle that we need to keep in mind?
Hannah: Yes, you're right. First, it is important that this concept of immaterial damage has to be interpreted in accordance with EU law, as this is the GDPR. And as Johannes said, the ECJ has always interpreted this damage very broadly and does not consider a threshold to be necessary. I agree with you that it is difficult to set such low requirements for the concept of damage and at the same time not demand materiality or a threshold. In my opinion, the Federal Court of Justice should perhaps have made a referral to the ECJ here after all, because it is not clear what loss of control is. And without a materiality threshold, this contributes a lot to legal uncertainty for a lot of companies.
Andy: Yeah. Thank you very much, Hannah. So yes, the first takeaway for us definitely is loss of control, that's a major aspect of the decision. Are there other aspects, other interesting sentences or thoughts, in the FCJ decision? One aspect I saw is right at the beginning, where the FCJ merges together two events: the scraping and then a non-compliance with data access requests, which in that case was based on contract, but is similar to Article 15 GDPR. Those events are kind of merged together as one event, which in my view doesn't make so much sense, because they are separate in terms of the events, the dates, the actions or non-actions, and also the damages. For a non-compliance with Article 15, I think it's much more difficult to argue a loss-of-control damage than for a scraping or a data breach. That's not a major aspect of the decision, but I think it was an interesting finding. Any other aspects, Hannah or Johannes, that you saw in the decision worth mentioning here for our audience?
Johannes: Yeah, so the discussion in Germany has been really broad, so I think maybe just two points have been neglected in the discussion so far. First, towards the end of the reasoning, the court stated that data controllers are not obliged to provide information about unknown recipients. For example, in scraping cases, controllers often do not know who the scrapers are, so there's no obligation for them to provide the names of scrapers they don't know. That clarification is really helpful in possible litigation. And on the other hand, it's somewhat lost in the discussion that the damages of 100 euros only come into consideration if the phone number, the user ID, the first name, the last name, the gender, and the workplace are actually affected. Accordingly, if less data, maybe just an email address or a name, or less sensitive data was scraped, the claim for damages can, or must, be significantly lower.
Andy: All right. Thanks, Johannes. That's very interesting. So, not only the loss of control aspect, but also other aspects in this decision are worth mentioning, and worth reading if you have the time. Now, looking a bit into the future, what's happening next, Johannes? What are your thoughts? I mean, you're involved in some similar litigation as well, as is Hannah. What do you expect will happen to those litigation cases in the future? Any changes? Will we still have law firms suing social platforms on behalf of consumers? Or do we expect any changes in that?
Johannes: Yeah, Andy, it's really interesting. In this mass GDPR litigation, you always have to consider the business side, not just the legal side. I think the ruling will likely put an end to mass GDPR litigation as we have known it in the past, because so far the plaintiffs have mostly appeared backed by a legal expenses insurer. Damages of up to 5,000 euros and other claims were asserted, so the value in dispute could be pushed to the edge, maybe around 20,000 euros in the end. But now it's clear that the potential damages in such scraping cases are more likely to be in the range of 100 euros or even less. As a result, the legal expenses insurers will no longer fund claims for 5,000 euros. At the same time, the vast majority of legal expenses insurance policies have a deductible of more than 100 euros. The potential outcome and the risk of litigation are therefore disproportionate, and as a result, plaintiffs will probably refrain from filing such lawsuits in the future.
Andy: All right. So good news for all insurers in the audience, or rather: watch out for requests for coverage of such litigation and check whether the values in dispute are much too high. So we will probably see fewer insurance-funded cases, but still, we definitely expect the same amount or perhaps even more litigation, because the amount as such, even if it's only 100 euros, certainly seems attractive for users as a so-called low-hanging fruit. And Hannah, before we close our podcast today, again looking into the future, what are your recommendations or takeaways for platforms, internet sites, basically everyone, since any organization handling data can be affected by data scraping or a data breach? What are your recommendations or first thoughts? How can those organizations get ready, or ideally even avoid such litigation?
Hannah: First, Andy, it is very important to clarify that the FCJ judgment was ruled on a specific case in which non-public data was made available to the public as a result of a proven breach of data protection, and that is not the case in general. So you should avoid simply applying this decision to every other case like a template, because if other requirements following from the GDPR are missing, the claims will still be unsuccessful. And second, of course, platforms and companies have to consider what they publish about their security vulnerabilities and take the best possible precautions to ensure that data is not published on the dark web. And if necessary, companies can transfer the risk of publication to the user by adjusting their general terms and conditions.
Andy: Thanks, Hannah. These are interesting aspects, and I see a little bit of conflict between the breach notification obligations under Articles 33 and 34 and the direction this case law is going. That will also be very interesting to see. Thank you very much, Hannah and Johannes, for your contribution. That was a really interesting, great discussion. And thank you very much to our audience for listening in. This was today's episode of our Reed Smith Tech Law Talks podcast. We thank you very much for listening. Please leave feedback and comments in the comments field or send us an email. We hope to welcome you soon to our next episode. Have a nice day. Thank you very much. Bye bye.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Laura-May Scott and Emily McMahan navigate the intricate relationship between AI and professional liability insurance, offering valuable insights and practical advice for businesses in the AI era.
Our hosts, both lawyers in Reed Smith’s Insurance Recovery Group in London, delve into AI’s transformative impact on the UK insurance market, focusing on professional liability insurance. AI is adding efficiency to tasks such as document review, legal research and due diligence, but who pays when AI fails? Laura-May and Emily share recommendations for businesses on integrating AI, including evaluating specific AI risks, maintaining human oversight and ensuring transparency.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Laura-May: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the UK insurance market. I'm Laura-May Scott, a partner in our Insurance Recovery and Global Commercial Disputes group based here in our London office. Joining me today is Emily McMahan, a senior associate also in the Insurance Recovery and Global Commercial Disputes team from our London office. So diving right in, AI is transforming how we work and introducing new complexities in the provision of services. AI is undeniably reshaping professional services, and with that, the landscape of risk and liability. Specifically today, we're going to discuss how professional liability insurance is evolving to address AI-related risks, and what companies should be aware of as they incorporate AI into their operations and work product. Emily, can you start by giving our listeners a quick overview of professional liability insurance and how it intersects with this new AI-driven landscape?
Emily: Thank you, Laura-May. So, professional liability insurance protects professionals, including solicitors, doctors, accountants, and consultants, for example, against claims brought by their clients in respect of alleged negligence or poor advice. This type of insurance helps professionals cover the legal costs of defending those claims, as well as any related damages or settlements associated with the claim. Before AI, professional liability insurance would protect professionals from traditional risks, like errors in judgment or omissions from advice. For example, if an accountant missed a filing deadline or a solicitor failed to supervise a junior lawyer, such that the firm provided incorrect advice on the law. However, as AI becomes increasingly utilized in professional services and in the delivery of services and advice to clients, the traditional risks faced by these professionals are changing rapidly. This is because AI can significantly alter how services are delivered to clients. Indeed, it is also often the case that it is not readily apparent to the client that AI has been used in the delivery of some of these professional services.
Laura-May: Thank you, Emily. I totally agree with that. Can you now please tell us how the landscape is changing? So how is AI being used in the various sectors to deliver services to clients?
Emily: Well, in the legal sphere, AI is being used for tasks such as document review, legal research, and within the due diligence process. At first glance, this is quite impressive, as these are normally the most time-consuming aspects of a lawyer's work. So the fact that AI can assist with these tasks is really useful. Therefore, when it works well, it works really well and can save us a lot of time and costs. However, when the use of AI goes wrong, then it can cause real damage. For example, if it transpires that something has been missed in the due diligence process, or if the technology hallucinates or makes up results, then this can cause a significant problem. I know, for example, on the latter point in the US, there was a case where two New York lawyers were taken to court after using ChatGPT to write a legal brief that actually contained fake case citations. Furthermore, using AI poses a risk in the context of confidentiality, where personal data of clients is disclosed to the system or there's a data leak. So when it goes wrong, it can go really wrong.
Laura-May: Yes, I can totally understand that. So basically, it all boils down to the question of who is responsible if AI gets something wrong? And I guess, will professional liability insurance be able to cover that?
Emily: Yes, exactly. Does liability fall to the professionals who have been using the AI or to the developers and providers of the AI? There's no clear-cut answer, but the client will no doubt look to the professional with whom they've contracted and who owes them a duty of care, whether that be, for example, a law firm or an accountancy firm, to cover any subsequent loss. In light of this, Laura-May, maybe you could tell our listeners what this means from an insurance perspective.
Laura-May: Yes, it's an important question. So since many insurance policies were created before AI, they don't explicitly address AI-related issues. For now, claims arising from AI are often managed on a case-by-case basis within the scope of existing policies, and it very much depends on the policy wording. For example, as UK law firms must obtain sufficient professional liability insurance to adequately cover their current and past services, as mandated by their regulator, the Solicitors Regulation Authority, it is likely that such a policy will respond to claims where AI is used to perform and deliver services to clients and where a later claim for breach of duty arises in relation to that use of AI. Thus, a law firm's professional liability insurance could cover instances where AI is used to perform legal duties, giving rise to a claim from the client. And I think that's pretty similar for accountancy firms who are members of the Institute of Chartered Accountants in England and Wales. So the risks associated with AI are likely to fall under the minimum terms and conditions for their required professional liability insurance, such that any claims brought against accountants for breach of duty in relation to the use of AI would be covered under the insurance policy. However, as time goes on, we can expect to see more specific terms addressing the use of AI in professional liability policies. Some policies might have that already, but I think as the market develops, it will become more of an industry standard. And we recommend that businesses review their professional liability policy language to ascertain how it addresses AI risk.
Emily: Thanks, Laura-May. That's really interesting that such a broad approach is being followed. I was wondering whether you would be able to tell our listeners how you think they should be reacting to this approach and preparing for any future developments.
Laura-May: I would say the first step is that businesses should evaluate how AI is being integrated into their services. It starts with understanding the specific risks associated with the AI technologies that they are using and thinking through the possible consequences if something goes wrong with the AI product that's being utilised. The second thing concerns communication. So even if businesses are not coming across specific questions regarding the use of AI when they're renewing or placing professional liability cover, companies should always ensure that they're proactively updating their insurers about the tech that they are using to deliver their services. And that's to ensure that businesses discharge their obligation to give a fair presentation of the risk to insurers at the time of placement or on variation or renewal of the policy pursuant to the Insurance Act 2015. It's also practically important to disclose to insurers fully so that they understand how the business utilizes AI, and you can then avoid coverage-related issues down the line if a claim does arise. Better to have that all dealt with up front. The third step is about human involvement and maintaining robust risk management processes for the use of AI. Businesses need to ensure that there is some human supervision with any tasks involving AI and that all of the output from the AI is thoroughly checked. So businesses should be adopting internal policies and frameworks to outline the permitted use of AI in the delivery of services by their business. And finally, I think it's very important to focus on transparency with clients. You know, clients should be informed if any AI tech has been used in the delivery of services. And indeed, some clients may say that they don't want the professional services provider to utilize AI in the delivery of services. And businesses must be familiar with any restrictions that have been put in place by a client. So in other words, informed consent for the use of AI should be obtained from the client where possible. I think these steps should collectively help all parties begin to understand where the liability lies, Emily. Do you have anything to add?
Emily: I see. So it's basically all about taking a proactive rather than a reactive attitude to this. Though times may be uncertain, companies should certainly be preparing for what is to come. In terms of anything to add, I would also just like to quickly mention that if a firm uses a third-party AI tool instead of its own tool, risk management can become a little more complex. This is because if a firm develops their own AI tool, they know how it works and therefore any risks that could manifest from it. This makes it easier to perform internal checks and also obtain proper informed consent from clients, as they'll have more information about the technology that is being utilized. Whereas if a business uses a third-party technology, although in some cases this might be cheaper, it is harder to know the associated risk. And I would also say that jurisdiction comes into this. It's important that any global professional services business looks at the legal and regulatory landscape in all the countries in which they operate. There is not a globally uniform approach to AI, and how to utilize it and regulate it is changing. So, companies need to be aware of where their outputs are being sent and ensure that their risks are managed appropriately.
Laura-May: Yes, I agree. All great points, Emily. So in the future, what do you think we can be expecting from insurers?
Emily: So I know you mentioned earlier that, as time progresses, we can expect to see more precise policies. At the moment, I think it is fair to say that insurers are interested in understanding how AI is being used in businesses. It's likely that as time goes on and insurers begin to understand the risks involved, they will start to modify their policies and ask additional questions of their clients to better tailor their covered risks. For example, certain insurers may require that insureds declare their AI usage and provide an overview of the possible consequences if that technology fails. Another development we can expect from the market is new insurance products created solely for the use of AI.
Laura-May: Yes, I entirely agree. I think we will see more specific insurance products that are tailored to the use of AI. So in summary, businesses should focus on their risk management practices regarding the use of AI and ensure that they're having discussions with their insurers about the use of the new technologies. These conversations around responsibility, transparency and collaboration will undoubtedly continue to shape professional liability insurance in the AI era that we're now in. And indeed, by understanding their own AI systems and engaging with insurers and setting clear expectations with clients, companies can stay ahead. Anything more there, Emily?
Emily: Agreed. It's all about maintaining that balance between innovation and responsibility. While AI holds tremendous potential, it also demands accountability from the professionals who use it. So all that's left to say is thank you for listening to this episode and look out for the next one. And if you enjoyed this episode, please subscribe, rate, and review us on your favorite podcast platform and share your thoughts and feedback with us on our social media channels.
Laura-May: Yes, thanks so much. Until next time.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and the best practices for designing, managing and responding to AI-related cyber risks. Partner Christian Leuthner in Frankfurt and partner Cynthia O'Donoghue in London, together with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leutner. I'm a partner at the Reed Smith Frankfurt office, and I'm here with my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office.
Cynthia: Morning, Christian. Thanks.
Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you.
Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models, LLMs, is significantly lowering the barriers to entry for cyber attacks. The technology, AI, enhances the scope, speed, and impact of cyber attacks and other malicious activities, because it simplifies social engineering and makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had such attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat?
Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. And so part of the principle behind the EU AI Act is security by design. And so the idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity, and to prevent such things as data poisoning and model poisoning. And it also talks about the horizontal laws across the EU. So because the EU AI Act treats AI as a product, it brings into play other EU directives, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, you know, most of our clients are concerned about the use of AI systems and, let's say, ensuring that they're secure. But really, you know, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks. So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance?
Christian: Yeah, I think, and you mentioned it already, the legislator thought there is a link, and high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires some stakeholders in software and hardware products to also implement security measures and imposes a lot of different obligations on them. To not over-engineer these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts. They can just rely on what they have implemented. But it would be great if we were not only applying the law, but if there were also some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements?
Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and it produced a paper called Multi-Layer Framework for Good Cybersecurity Practices for AI last year. So it still needs to be updated. However, it does provide a very good summary of various AI initiatives throughout the world. And it generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities and the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, the various threats that you already talked about, such as data poisoning and model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards mentioned by ENISA, there is ISO/IEC 42001, an AI management system standard. Another noteworthy set of guidelines mentioned is the NIST AI Risk Management Framework, obviously US guidelines. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI: valid, reliable, safe, secure, and resilient.
Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to cyber attack at some point, that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there might be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say?
Cynthia: Well, there are numerous notification obligations in relation to attacks, again depending on the type of data or the entity involved. For instance, if, as a result of a breach from an AI attack, personal data is involved, then there are notification requirements under the GDPR. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities. And, you know, depending on whether the sector the particular victim is in is subject to either the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, who Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity and the relevant member state for that particular sector. And I'm sure there are other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? So why don't you tell us a bit more about the notification requirements for financial services organizations?
Asélle: The EU Digital Operational Resilience Act also provides similar requirements for the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall under. And Article 30 under DORA requires specific contractual clauses requiring cybersecurity around data, for example provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product qualifying as an ICT product, plays a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it means that organizations regulated in this way are likely to ask AI providers to have additional tools, policies, and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding regulation of AI. However, the Labour government appears to want to adopt a binding instrument, although it is likely to be of limited scope, focusing only on the most powerful AI models. However, there isn't any clarity on whether the use of AI in cyber threats is regulated in any specific way. Christian, I wanted to direct a question to you. What about the use of AI in supply chains?
Christian: Yeah, I think it's very important to have a look at the entire supply chain of companies, or the entire set of contractual relationships, because most of our clients or companies out there do not develop or create their own AI. They will use AI from vendors or their suppliers, or vendors will use AI products to be more efficient. And all the requirements, for example the notification requirements that Cynthia just mentioned, do not stop if you use a third party. So even if you engage a supplier or a vendor, you're still responsible for defending against cyber attacks and for reporting cyber incidents or attacks if they concern your company, or at least there's a high likelihood. So it's very crucial to have those scenarios in mind when you're starting a procurement process and you start negotiating contracts: to have those topics in the contract with a vendor or supplier, to have notification obligations in case there is a cyber attack at that vendor, and to have audit rights and inspection rights, depending on your negotiation position, but at least to make sure that you are aware if something happens, so that a risk that does not directly materialize at your company cannot sneak in through the back door via a vendor. So it's really important that you always keep an eye on your supply chain and on your third-party vendors or providers.
Cynthia: That's such a good point, Christian. And ultimately, I think it's best for organizations to think about it early. So it really needs to be embedded as part of any kind of supply chain due diligence, where maybe a new question needs to be added to the due diligence questionnaire for suppliers about whether they use AI, and then the cybersecurity around the AI that they use or contribute to. Because we've all read and heard in the papers, and been exposed through client counseling to, cybersecurity breaches that have come through the supply chain and may not be direct attacks on the client itself. And yeah, I mean, the contractual provisions then are really important. Like you said, making sure that the supplier notifies the customer very early on, and that there are cooperation and audit mechanisms. Asélle, anything else to add?
Asélle: Yeah, I totally agree with what was said. I think beyond just the legal requirements, it is ultimately a question of defending your business and your data. Whether or not it's required by your customers or by specific legislation to which your organization may be subject, it's ultimately about whether or not your business can withstand more sophisticated cyber attacks. I therefore agree with both of you that organizations should take supply chain resilience and cybersecurity, and generally the higher risk of cyber attacks, more seriously and put measures in place. It is better to invest now than later, after the attack. I also think that it is important for in-house teams to work together as cybersecurity threats are enhanced by AI, and these are the legal, IT security, risk management, and compliance teams. Sometimes, for example, legal teams might think that the IT security or incident response policies are owned by IT, so there isn't much contribution needed. Or the IT security teams might think the legal requirements are in the legal team's domain, so we'll wait to hear from legal on how to reflect those. Working in silos is not beneficial. IT policies, incident response policies, and training material on cybersecurity should be regularly updated by IT teams and reviewed by legal to reflect the legal requirements. The teams should collaborate on running tabletop incident response and crisis response exercises, because in a real case scenario, they will need to work hand in hand to respond efficiently.
Cynthia: Yeah, I think you're right, Asélle. I mean, obviously, any kind of breach is going to be multidisciplinary in the sense that you're going to have people who understand AI and understand the attack vector that used the AI. Other people in the organization will have a better understanding of notification requirements, whether that be notification under the cybersecurity directives and regulations or under the GDPR. And obviously, if it's an attack that's come from the supply chain, there needs to be that coordination as well with the supplier management team. So it's definitely multidisciplinary and requires, obviously, cooperation and information sharing, and obviously in a way that's done in accordance with the regulatory requirements that we've talked about. So in sum, you have to think about AI and cybersecurity both from a design perspective as well as the supply chain perspective, and how AI might be used for attacks, whether it's vulnerabilities into a network or data poisoning or model poisoning. Think about the horizontal requirements across the EU in relation to cybersecurity requirements for keeping systems safe, and, if you're an unfortunate victim of a cybersecurity attack where AI has been used, think about the notification requirements and ultimately that multidisciplinary team that needs to be put in place. So thank you, Asélle, and thank you, Christian. We really appreciate the time to talk together this morning. And thank you to our listeners. And please tune in for our next Tech Law Talks on AI.
Asélle: Thank you.
Christian: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith lawyers Cheryl Yu (Hong Kong) and Barbara Li (Beijing) explore the latest developments in AI regulation and litigation in China. They discuss key compliance requirements and challenges for AI service providers and users, as well as the emerging case law on copyright protection and liability of AI-generated content. They also share tips and insights on how to navigate the complex and evolving AI legal landscape in China. Tune in to learn more about China’s distinct approach to issues involving AI, data and the law.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Cheryl: Welcome to our Tech Law Talks and new series on artificial intelligence. Over the past months, we have been exploring the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI regulations in China and the relevant PRC court decisions. My name is Cheryl Yu, a partner in the Hong Kong office at Reed Smith, and I'm speaking today with Barbara Li, who is a partner based in our Beijing office. Barbara and I are going to focus on the major legal regulations on AI in China and also some court decisions relating to AI tools, to see how China's legal landscape is evolving to keep up with the technological advancements. Barbara, can you first give us an overview of China's AI regulatory developments?
Barbara: Sure. Thank you, Cheryl. Very happy to do that. In the past few years, the regulatory landscape governing AI in China has been evolving at a very fast pace. Although China does not have a comprehensive AI law like the EU AI Act, China has been leading the way in rolling out multiple AI regulations governing generative AI, deepfake technologies, and algorithms. In July 2023, China issued the Generative AI Measures, becoming one of the first countries in the world to regulate generative AI technologies. These measures apply to generative AI services offered to the public in China, regardless of whether the service provider is based in China or outside China. And international investors are allowed to set up local entities in China to develop and offer AI services in China. In relation to the legal obligations, the measures lay down a wide range of legal requirements for providing and using generative AI services, including content screening, protection of personal data and privacy, safeguarding IPR and trade secrets, and taking effective measures to prevent discrimination when companies design algorithms, choose training data, or create large language models.
Cheryl: Many thanks, Barbara. These are very important compliance obligations that businesses should not neglect when engaging in the development of AI technologies, products, and services. I understand that one of the biggest concerns in AI is how to avoid hallucination and misinformation. I wonder if China has adopted any regulations to address these issues?
Barbara: Oh, yes, definitely, Cheryl. China has adopted multiple regulations and guidelines to address these concerns. For example, the Deep Synthesis Rule, which became effective from January 2023, aims to govern the use of deepfake technologies in generating or changing digital content. And when we talk about digital content, the regulation refers to a wide range of digital media, including video, voices, text, and images. The deep synthesis service providers must refrain from using deep synthesis services to produce or disseminate illegal information. The companies are also required to establish and improve proper compliance and risk management systems, such as having a user registration system, conducting ethics reviews of the algorithm, protecting personal information, taking measures to protect IP and prevent misinformation and fraud, and, last but not least, setting up a data breach response. In addition, China's national data and cybersecurity regulator, the CAC, has issued a wide range of rules on algorithm filing, and these algorithm filing requirements have been effective since June 2024. According to this 2024 regulation, if a company uses algorithms in its online services with functions such as blogs, chat rooms, public accounts, short videos, or online streaming, so functions capable of influencing public opinion or driving social engagement, then the service provider is required to file its algorithm with the CAC, the regulator, within 10 working days after the launch of the service. In order to finish the algorithm filing, the company is required to put together comprehensive information and documentation, including the algorithm assessment report, the security monitoring policy, the data breach response plan, and also some technical documentation to explain the function of the algorithm. The CAC periodically publishes a list of filed algorithms, and as of 30 June 2024, we have seen over 1,400 AI algorithms, developed by more than 450 companies, successfully filed with the CAC. So you can see this large number of AI algorithm filings indeed highlights the rapid development of AI technologies in China. We should also remember that a large volume of data is the backbone of AI technologies, so we should not forget about the importance of data protection and privacy obligations when you develop and use AI technologies. Over the years, China has built up a comprehensive data and privacy regime with three pillars of national laws: the Personal Information Protection Law, in short the PIPL, the Cybersecurity Law, and the Data Security Law. The data protection and cybersecurity compliance requirements have to be properly addressed when companies develop AI technologies, products, and services in China. And indeed, there are some very complicated data requirements and issues under the Chinese data and cybersecurity laws, for example, how to address cross-border data transfers. So it's very important to remember those requirements. The Chinese data requirements and legal regime are very complex, so given the time constraints, we can probably find another time to specifically talk about the data issues under the Chinese laws.
Cheryl: Thanks, Barbara. Indeed, there are some quite significant AI and data issues which would warrant more time for a deeper dive. Barbara, can you also give us an update on the AI enforcement status in China and share with us your views on the best practices that companies can adopt to mitigate those risks?
Barbara: Yes, thanks, Cheryl. Indeed, Chinese AI regulations do have teeth. For example, a violation of the algorithm filing requirement can result in fines of up to RMB 100,000. The failure to comply with the compliance requirements in developing and using AI technologies can also trigger legal liability under the Chinese PIPL, the Personal Information Protection Law, as well as the Cybersecurity Law and the Data Security Law. Under those laws, a company can be subject to a monetary fine of up to RMB 50 million or 5% of its last year's turnover. In addition, the senior executives of the company can be personally subject to liability, such as a fine of up to RMB 1 million, and the senior executives can be barred from taking senior roles for a period of time. In the worst scenario, criminal liability can be pursued. In the first and second quarters of this year, 2024, we have seen some companies caught by the Chinese regulators for failing to comply with the AI requirements, ranging from failure to monitor AI-generated content to neglecting the AI algorithm filing requirements. Noncompliance has resulted in the suspension of their mobile apps pending rectification. As you can see, the noncompliance risk is indeed real, so it's very important for businesses to pay close attention to the relevant compliance requirements. To give our audience a few quick takeaways on how to address the AI regulatory and legal risk in China, we would say companies can consider three most important compliance steps. The first is that, with the fast development of AI in China, it's crucial to closely monitor the legislative and enforcement developments in AI, data protection, and cybersecurity. While the Chinese AI and data laws share some similarities with the laws in other countries, for example the EU AI Act and the European GDPR, Chinese AI and data laws and regulations indeed have their unique characteristics and requirements. So it's extremely important for businesses to understand the Chinese AI and data laws, conduct a proper analysis of the key business implications, and take appropriate compliance action. That is number one. The second one, I would say, is that when rolling out your specific AI technologies, products, and services in the China market, it's very important to do the required impact assessments to ensure compliance with accountability, bias, and accessibility requirements, and also to build up a proper system for content monitoring. If your algorithm falls within the scope of the filing requirements, you definitely need to prepare the required documents and finish the algorithm filing as soon as possible to avoid potential penalties and compliance risks. And the third one is that you should prepare China AI policies and AI terms of use, build up your AI governance and compliance mechanism in line with the evolving Chinese AI regulation, and train your team on the compliant use of AI in their day-to-day work. It's also very interesting to note that in the past months, Chinese courts have handed down some landmark rulings in trials relating to AI technology. Those rulings cover various AI issues, ranging from copyright protection of AI-generated content to data scraping and privacy. Cheryl, can you give us an overview of those cases and what takeaways we can get from those rulings?
Cheryl: Yes, thanks, Barbara. As mentioned by Barbara, with the emerging laws in China, there have been a lot of questions about how AI technologies interact with copyright law. The most commonly discussed questions include: if users instruct an AI tool to produce an image, who is the author of the work, the AI tool or the person giving instructions to the AI tool? And if the AI tool generates a work that bears a strong resemblance to another work already published, would that constitute an infringement of copyright? Before 2019, the position in China was that works generated by AI machines generally were not subject to copyright protection. For a work to be copyrightable, the courts would generally consider whether the work was created by natural persons and whether the work was original. Subsequently, there has been a shift in the Chinese courts' position, in which the courts are more inclined to protect the copyright in AI-generated content. For example, the Nanshan District Court of Shenzhen handed down a decision, Shenzhen Tencent versus Shanghai Yinsheng, in 2019. The court held that the plaintiff, Shenzhen Tencent, should be regarded as the author of an article that was generated by an AI system under the supervision of the plaintiff. The court further held that the intellectual contribution of the plaintiff's staff, including inputting data, setting prompts, selecting the template, and arranging the layout of the article, played a direct role in shaping the specific expression of the article. Hence, the article demonstrated sufficient originality and creativity to warrant copyright protection. Similarly, the Beijing Internet Court reached the same conclusion in Li Yunkai v. Liu Yuanchun in 2023, where the court held that AI-generated content can be subject to copyright protection if the human user has contributed substantially to the creation of the work. In its judgment, the court ruled that an AI machine cannot be an author of the work, since it is not human, and that the plaintiff was entitled to the copyright in the photo generated by the AI machine on the grounds that the plaintiff personally chose and arranged the order of the prompts, set the parameters, and selected the style of the output, which gave the work a sufficient level of originality. As you may note, in both cases, for a work to be copyrightable in China, the courts no longer required it to be created entirely by a human being. Rather, the courts focused on whether there was an element of original intellectual achievement. Interestingly, there is another case, handed down by the Hangzhou Internet Court in 2023, which has been widely criticized in China. That court decided that the AI was not an author, not because it was non-human, but because it was a weak AI and did not possess the relevant capability for intellectual creation. This case has created some uncertainty as to the legal status of an AI that is stronger and does have the intellectual capability to generate original works, and questions such as whether such an AI would qualify as an author and be entitled to copyright over its works remain to be seen as the technology and the law develop.
Barbara: Thank you, Cheryl. We now understand the position in relation to authorship under Chinese law. What about the platforms which provide generative AI tools? I understand that they also face the question of whether there will be secondary liability for infringement in AI-generated content output. Have the Chinese courts issued any case on this topic?
Cheryl: Many thanks, Barbara. Yes, there has been some new development on this issue in China in early 2024. The Guangzhou Internet Court published a decision on this issue, which is the first decision in China regarding the secondary liability of AI platform providers. The plaintiff in this case has exclusive rights to a Japanese cartoon image, Ultraman, including various rights such as reproduction, adaptation, etc. The defendant was an undisclosed AI company that operates a website with an AI conversation function and an AI image generation function. These functions were provided using an unnamed third-party provider's AI model, which was connected to the defendant's website. The defendant allowed visitors to its website to use this AI model to generate images, but it had not created the AI model itself. The plaintiff eventually discovered that if one input prompts related to Ultraman, the generative AI tool would produce images highly similar to Ultraman. The plaintiff then brought an action for copyright infringement against the defendant. The court held that, in this case, the defendant platform had breached a duty of care to take appropriate measures to ensure that outputs do not contravene copyright law and the relevant AI regulations in China, and that the output the generative AI tool created infringed the copyright in the protected works. So this Ultraman case serves as a timely reminder to Chinese AI platform providers that it is of the utmost importance to comply with the relevant laws and regulations in China. Another interesting point of law is the potential liability of AI developers in the scenario where copyrighted materials are used to train the AI tool. So far, there have not been any decisions on this issue in China, and it remains to be seen whether AI model developers would be liable for infringement of copyright in the process of training their AI models with copyrighted materials, and if so, whether there are any defenses available to them. We shall continue to follow up and keep everyone posted in this regard.
Barbara: Yes, indeed, Cheryl, those are all very interesting developments. So to conclude our podcast today: with the advancement of AI technology, it's almost inevitable that more legal challenges will emerge relating to the training and application of generative AI systems. The courts will be expected to develop innovative legal interpretations to strike a balance between safeguarding copyright and promoting technological innovation and growth. Our Reed Smith team in Greater China will bring you all the updates on these developments, so please do stay tuned. Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role regulators play in supervising and embracing AI.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner in Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.
Romin: Thank you, Claude. Good to be with everyone.
Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is that it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but in many respects it is. And perhaps it might be helpful to just review where we're seeing AI already within the financial services sector.
Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by searches for efficiency, cost savings, as I'm sure the audience would appreciate. There have been pressures on margins and financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business, which AI has already impacted, include things like KYC, AML checks, back office operations. All of those things are already having AI applied to them.
Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there's areas of concern, given the good that could come out of AI?
Romin: No, that's a good question. I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less and is less resource intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility? Is it the software provider who makes the algorithm, programs the software, and so on, and then the software goes off and makes decisions or provides the advice? Or is it the firm that's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling through and working out what they think the right answers should be.
Claude: Yeah, I can see that, because I suppose historically the classic model, certainly in the UK, has been the regulators saying, if you want to outsource something, you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity and you're responsible for your outsource provider. But I can see with AI that must become a harder question to determine. Because, say in your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact on the software provider? That's sort of one point. And then how do you allocate that responsibility? That strict bright line, where you hand it to a third-party provider and say it's your responsibility, how do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.
Romin: Absolutely. And as you say, with traditional outsourced services, it's relatively easy for the firm to oversee the activities of the outsourced services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsourced provider is conducting the services. But with something that's quite black box, like some trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. And I suppose the other thing that makes AI more difficult compared to the traditional outsourcing model, even the black box algorithms, is that by and large those are static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does. So it doesn't matter really whether it's outsourced or in-house to the regulated entity. That thing is changing all the time, supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed of it evolving.
Romin: Absolutely, and you're right to highlight all of the liability issues that arise, not simply liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money, or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is it with the firm? Is it with the person who provided the software? It's all, you know, a little difficult.
Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. One might reasonably expect, competition being what it is, for that to proliferate over time, but until it does I would imagine there's a sort of competition issue: not only a competition issue in one system gaining a monopoly, but that a particular form of large model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk from the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated.
Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects as AI becomes more prevalent, potentially being even more severe in the future.
Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.
Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But here in a world where we have a number of software providers, maybe one or two of which become really dominant, lots of firms are employing technology provided by these firms, differentiating becomes more difficult in those circumstances.
Claude: Yeah, and I guess to unpack that a little bit: as you say, portfolio managers have distinguished themselves by better returns than the competition, and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes, to what extent is AI being used to produce that differentiator, and how do you charge your fees based on that? Is it that you've got better technology than anyone else, or that you've got a better way to deploy the technology, or that you've just paid more for your technology? Because transforming the input of AI into the analytics and the portfolio management is quite a difficult thing to do at the best of times. If it's internal, it's clearly easier because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example, where you've got a limited number of technology providers, that split I can see becoming quite contentious.
Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to decide what approach they are going to take to the application of AI, because if they go down the outsourced route, that raises the issues we've discussed so far. Conversely, if they adopt an in-house model, they have more control, the technology is proprietary, and potentially they can distinguish and differentiate themselves better than by relying on an outsourced solution. But then the cost is far greater, and will they have the resources and expertise, really, to compete with these large specialist providers serving many different firms? There are lots of strategic decisions that firms need to make as well.
Claude: Yeah, but going back to the regulators for a moment, Romin, it does seem to me that there are some benefits to regulators in embracing AI within their own world, because we can already see evidence that they're very comfortable using manipulation of large databases, for example with trade repositories or trade reporting. We can see enforcement actions being brought using databases that have produced the information and the anomalies, and as I see it, AI can only improve that form of surveillance and enforcement, whether that is market manipulation or insider dealing, or looking across markets to see whether concurrent or collaborative activity is being engaged in. It may not get to the point where the AI is going to bring the whole enforcement action to trial, but it certainly makes that demanding surveillance and oversight role for a regulator a lot easier.
Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these ridiculous reports, detailed reports, send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Historically, that was perhaps true, but with the new technology that is coming on stream, it gives regulators much more opportunity to meaningfully interrogate that data and use it either to bring enforcement action against firms or just to supervise trends, risks and currents in markets which might otherwise not have been apparent to them.
Claude: Yeah, I mean, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark, the Spielberg film, you know, where they take the Ark of the Covenant and push it into that huge warehouse and the camera pans back and you just see massive, massive amounts of data. But I suppose you're right that with AI you can go and find the crate with the thing in it (other Spielberg films are available). It seems to me almost inexorable that the use of AI in financial services will increase, given the potential and the efficiencies, particularly with large-scale and repetitive tasks and inquiries. It's not just a case of automation, it's a case of overseeing it as well. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market. Is it going to be the financial services firms or the tech firms that can produce more sophisticated AI models?
Romin: Absolutely. I mean, I think we've seen amongst the AI companies themselves, the key players like Google, OpenAI, Microsoft, a bit of an arms race between themselves as to the best LLM, who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see.
Claude: Well, I suppose the other point with the technology providers, and you're right, I mean, you can already see when you get into cloud-based services and software as a service and the others, that technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large part of it. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You can now see these services, and I can see this with AI as well, entering into a number of financial sectors which historically have been diffuse: the use of AI, for example, in insurance, the use in banking, the use in asset management, the use in broking, the use in advisory services. There's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question, is there an operational resilience question? Does AI ever become so pervasive that it's a bit like electricity or power? You can see that with CrowdStrike. Is the technology so all-pervasive that it actually produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment?
Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks might not be apparent. So, as you mentioned with CrowdStrike, nobody really knew that this was an issue until it happened. So regulators, I think, are very nervous of the unknown unknowns.
Claude: Yeah. I mean, it seems to me that AI has a huge potential in the financial services sector, in, A, facilitating the mundane, but also in being proactive in identifying anomalies, potential for errors, potential for fraud. There's a huge amount that it can contribute. But as always, that brings structural challenges.
Romin: Absolutely. And just on the point that we were discussing earlier about the increased efficiencies that it can bring to markets, there's been a recognized problem with the so-called advice gap in the UK, where the mass affluent, less high-net-worth investors aren't really willing to pay for the receipt of financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, leading to the ability for people to make more sensible financial decisions.
Claude: Which, I'm sure, speaks to the responsibility of financial institutions to improve financial and fiscal education. That's going to be music to a regulator's ears. Well, Romin, interesting subject, interesting area. We live, as the Chinese say, in interesting times. But I hope to those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague, Romin Dabir, or myself, Claude Brown. You can find our contact details accompanying this podcast and also on our website. Thank you for listening.
Romin: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
This episode highlights the new benefits, risks and impacts on operations that artificial intelligence is bringing to the transportation industry. Reed Smith transportation industry lawyers Han Deng and Oliver Beiersdorf explain how AI can improve sustainability in shipping and aviation by optimizing routes and reducing fuel consumption. They emphasize AI’s potential contributions from a safety standpoint as well, but they remain wary of risks from cyberattacks, inaccurate data outputs and other threats.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Han: Hello, everyone. Welcome to our new series on AI. Over the coming months, we will explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, my colleague Oliver and I will focus on AI in shipping and aviation. My name is Han Deng, a partner in the transportation industry group in New York, focusing on the shipping industry. So AI and machine learning have the potential to transform the transportation industry. What do you think about that, Oliver?
Oliver: Thanks, Han, and it's great to join you. My name is Oliver Beiersdorf. I'm a partner in our transportation industry group here at Reed Smith, and it's a pleasure to be here. I'm going to focus a little bit on the aviation sector. And in aviation, AI is really contributing to a wide spectrum of value opportunities, including enhancing efficiency, as well as safety-critical applications. But we're still in the early stages. The full potential of AI within the aviation sector is far from being harnessed. For instance, there's huge potential for use in areas which will reduce human workload or increase human capabilities in very complex scenarios in aviation.
Han: Yeah, and there's similar potential within the shipping industry with platforms designed to enhance collision avoidance, route optimization, and sustainability efforts. In fact, AI is predicted to contribute $2.5 trillion to the global economy by 2030.
Oliver: Yeah, that is a lot of money, and it may even be more than that. But with that economic potential, of course, also come substantial risks. AI users and operators, and industries now getting into using AI, have to take preventative steps to avoid cybersecurity attacks, inaccurate data outputs, and other threats.
Han: Yeah, and at Reed Smith, we help our clients to understand how AI may affect their operations, as well as how AI may be utilized to maximize potential while avoiding its pitfalls and legal risks. During this seminar, we will highlight elements within the transportation industry that stand to benefit significantly from AI.
Oliver: Yeah, so a couple of topics that we want to discuss here in the next section, and there's really three of them which overlap between shipping and aviation in terms of the use of AI. And those topics are sustainability, safety, and business efficiency with the use of AI. In terms of sustainability, across both sectors, AI can help with route optimization, which saves on fuel and thus enhances sustainability.
Han: AI can make a significant difference in sustainability across the whole of the transportation industry by decreasing emissions. For example, within the shipping sector, emerging tech companies are developing systems that can directly link the information generated about direction and speed to a ship's propulsion system for autonomous regulation. AI also has the potential to create optimized routes using sensors that track and analyze real-time and variable factors such as wind speed and current. AI can determine both the ideal route and speed for a specific ship at any point in the ocean to maximize efficiency and minimize fuel usage.
Oliver: So you can see the same kind of potential in the aviation sector. For example, AI has the potential to assist with optimizing flight trajectories, including creating so-called green routes and increasing prediction accuracy. AI can also provide key decision makers and experts with new features that could transform air traffic management in terms of new technologies and operating procedures and creating greater efficiencies. Aside from reducing emissions, these advances have the potential to offer big savings in energy costs, which, of course, is a major factor for airlines and other players in the industry, with the cost of gas being a major factor in their budgets, and in particular, jet fuel for airlines. So advances here really have the potential to offer big savings that will enable both sectors to enhance profitability while decreasing reliance on fossil fuels.
Han: I totally agree. And further, in terms of safety, AI can be used within the transportation industry to assist with safety assessment and management by identifying, managing, and predicting various safety risks.
Oliver: Right. So, in the aviation sector, AI has the potential to increase safety by driving the development of new air traffic management systems that maintain distances between aircraft, plan safer routes, assist in approaches to busy airports, and support new conflict detection, traffic advisory and resolution tools, along with cyber resilience. What we're seeing in aviation, and there's a lot of discussion about this, is the use of drones and eVTOLs, that is, electric vertical takeoff and landing aircraft, all of which add more complexity to the existing use of airspace. And you're seeing many players in the industry, including retailers who deliver products, using eVTOLs and drones to deliver products. AI can be a useful assistant to ATM actors from planning to operations, and really across all airspace users. It can benefit airline operators as well, who depend on predictable, routine routes and services, by using aviation data to predict air traffic management more accurately.
Han: That's fascinating, Oliver. Same within the shipping sector, for example, AI has the capacity to create 3D models for areas and use those models to simulate the impact of disruptions that may arise. AI can also enhance safety features through the use of vision sensors that can respond to ship traffic and prevent accidents. As AI begins to be able to deliver innovative responses that enhance predictability and resilience of the traffic management system, efficiency will increase productivity and enhance use of scarce resources like airspace, runways, and stuff.
Oliver: Yeah. So it'll be really interesting to follow, you know, how this develops. It's all still very new. Another area where you're going to see the use of AI, and we already are, is in terms of business efficiency, again, in both the shipping and aviation sectors. There's really a lot of potential for AI, including in generating data and cumulative reports based on real-time information. And by increasing the speed by which the information is processed, companies can identify issues early on and perform predictive maintenance to minimize disruptions. The ability to generate reports is also going to be useful in ensuring compliance with regulations and also coordinating work with contractors, vendors, partners, such as code share partners in commercial aviation and other stakeholders in the industry.
Han: Yeah, and AI can be used to perform comprehensive audits to ensure that all cargo is present and that it complies with contracts and local and national regulations, which can help identify any discrepancies quickly and lead to swift resolution. AI can also be used to generate reports based on this information to provide autonomous communication with contractors about cargo location and the estimated time of arrival, increasing communication and visibility in order to inspire trust and confidence. Aside from compliance, these reports will also be useful in ensuring efficiencies in management and in business development and strategy by performing predictive analytics in various areas, such as demand forecasting.
Oliver: And despite all these benefits, of course, as with any new technology, you need to weigh them against the potential risks and the various things that can happen by using AI. So let's talk a little bit about cybersecurity, regulation being unable to keep pace with technology development, inaccurate data, and industry fragmentation. Things are just happening so fast that there's a huge risk associated with the use of artificial intelligence in many areas, including in the transportation industry, for example as a result of cybersecurity attacks. Data security breaches can affect airline operators, or can occur on vessels, in port operations, and in undersea infrastructure. Cyber criminals, who are becoming more and more sophisticated, can even manipulate data inputs, causing AI platforms on vessels to misidentify malicious maritime activity as legitimate or safe trade. Actors using AI are going to need to ensure the cyber safety of AI-enabled systems. That's a focus in both shipping and aviation and in other industries. Businesses and air traffic providers need to ensure that AI-enabled applications have robust cybersecurity elements built into their operational and maintenance schedules. Shipping companies will need to update their current cybersecurity systems and risk assessment plans to address these threats and comply with relevant data and privacy laws. A very recent example is the CrowdStrike software outage on July 19th, which affected almost every industry, but which was particularly acute in commercial aviation, with literally thousands of flights being canceled and massive disruption to the industry. And interestingly, the CrowdStrike outage involved software that is intended to avoid cyber criminal risk, yet a programming issue resulted in systems being down and these types of massive disruptions, because of course, in both aviation and shipping, we're so reliant on technology. The issue of regulation, and really the inability of regulators to keep up with this incredibly fast pace, is another concern. Regulations are always reactive. In this instance, AI continues to rapidly develop, and regulations do not necessarily effectively address AI in its most current form. The unchecked use of AI could create and increase the risk of cybersecurity attacks and data privacy law violations, and frankly, create other risks that we haven't even been able to predict.
Han: Wow, we really need to buckle up when it comes to cybersecurity. And talking about inaccurate data, the quality of AI depends upon the quality of its data inputs. Therefore, misleading and inaccurate data sets could lead to imprecise predictions for navigation. Alternatively, there is a risk that users may rely too heavily on AI platforms to make important decisions about collision avoidance and route optimization, so shipping companies must be sure to properly train their employees on the proper uses of AI. And speaking of industry fragmentation, AI is an expensive tool. Poorer economies will be unable to integrate AI platforms into their maritime or aviation operations, which could fragment global trade. For example, without harmony in AI use and proficiency, the shipping industry may see a decrease in revenue, a lack of global governance, and the rise of black market dark fleets.
Oliver: There's just so much to talk about in this area. It's really almost mind-blowing. But in conclusion, I think a couple points that have come out of our discussion is that if the industry takes action and fully captures AI-enabled value opportunities in both the short and the long terms, the potential for AI is just huge. But we have to be very mindful of the associated risks and empower private industry and governments to provide resolutions through technology, but also regulations. So thank you very much for joining us. That's it for today. And we really appreciate you listening in to our Tech Law Talks.
Han: Thank you.
Oliver: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Emerging technology lawyers Therese Craparo, Anthony Diana and Howard Womersley Smith discuss the rapid advancements in AI in the financial services industry. AI systems have much to offer but most bank compliance departments cannot keep up with the pace of integration. The speakers explain: If financial institutions turn to outside vendors to implement AI systems, they must work to achieve effective risk management that extends out to third-party vendors.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Therese: Hello, everyone. Welcome to Tech Law Talks and our series on AI. Over the coming months, we'll be exploring the key challenges and opportunities within the rapidly evolving AI landscape. And today we'll be focusing on AI in banking, the specific challenges we're seeing in the financial services industry, and how the financial services industry is approaching those types of challenges with AI. My name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith, and I will let my colleagues on this podcast introduce themselves. Anthony?
Anthony: Hey, this is Anthony Diana, partner in the New York office of Reed Smith, also part of the Emerging Technologies Group, and also, for today's podcast, importantly, I'm part of the Bank Tech Group.
Howard: Hello, everyone. My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies Group at Reed Smith in London. As Anthony says, I'm also part of the Bank Tech Group. So back to you, Therese.
Therese: All right. So just to start out, what are the current developments or challenges that you all are seeing with AI in the financial services industry?
Anthony: Well, I'll start. I think a few things. Number one, I think we've seen that the financial services industry is definitely all in on AI, right? I mean, there's definitely a movement in the financial services industry. All the consultants have said this, that this is one of the areas where they expect AI, including generative AI, to really have an impact. And I think that's one of the things that we're seeing: there's a tremendous amount of pressure on the legal and compliance departments because the businesses are really pushing to be AI forward and really focusing on AI. So one of the challenges is that this is here. It's now. It's not something you can plan for. I think half of what we're seeing is AI tools are coming out frequently, sometimes not even with the knowledge of legal and compliance, sometimes with the knowledge of the business, where because it's in the cloud, they just put in an AI feature. So that is one of the challenges that we're dealing with right now, which is catch-up. Things are moving really quickly, and then people are trying to catch up to make sure that they're compliant with whatever regs are out there. Howard?
Howard: I agree with that. I think that banks are all in with the AI hype cycle, and I certainly think it is a hype cycle. I think that generally the sector is moving at the same pace, and at the moment we're looking at an uptick of interest in, and procurement of, AI systems into the infrastructure of banks. In terms of what the development phase is, I think we are just at the stage where they are buying in AI. We are beyond the look-and-see, the sourcing phase, and are looking at the buying phase and the implementation of AI into those banks. And what are the challenges there? Well, the challenges are twofold. One is from an existential perspective. Banks are looking to increase shareholder value, and they are looking to drive down costs, and we've seen that too with the dependency on technology that banks have had over the past 15 or more years. AI is an advancement of that, and it's an ability for banks to introduce more automation within their organizations and less focus on humans and personnel. And we'll talk a bit more about what that involves and the risks, particularly, that could be created from relying solely on technology and not involving humans, which some proponents of AI anticipate.
Therese: And I think what's interesting, just picking up on what both of you are saying about how those things come together, including from a regulatory perspective, is that historically the financial industry has used variations of AI in a lot of different ways, for trading analysis, for data analysis and the like. So the concept of AI is not unheard of in the financial services industry. But I do think it is interesting, picking up on Howard's point about the hype cycle around generative AI: that's what's throwing a wrench in the process, not just for traditional controls around AI modeling and the like, but also for business use. Because, as Howard's saying, the focus currently is how do we use all of these generative AI tools to improve efficiencies, to save costs, to improve business operations, which is different from the use cases that we've seen in the past. And at the same time, Anthony, as you're saying, it's coming out so quickly and so fast. The development is so fast, relatively speaking, and the variety of use cases is so broad, in a way that it hasn't been before. And the challenge that we're seeing is that the regulatory landscape, as usual with technology, isn't really keeping up. We've got guidance coming from various regulators in the U.S. The SEC has issued guidance. FINRA has issued guidance. The CFPB has issued guidance. And all of their focus is a little bit different in terms of their concerns, right? There are concerns about ethical use and the use with consumers and the accuracy and transparency and the like. There are concerns about disclosure and appropriate due diligence and understanding of the AI that's being used. And then there are concerns about what data it's being used on and the use of AI on highly confidential information like MNPI, like CSI, like consumer data and the like. And none of it is consolidated or clear. And that's in part because the regulators are trying to keep up, and they tend not to want to issue strict guidance on technology as it's developing, right, because they're still trying to figure out what the appropriate use is. So we have this confluence of brand new use cases, democratization, the ability to extend the use of AI very broadly to users, and the speed of development, that I think the financial services industry is struggling to keep up with.
Anthony: Yeah, and I think the regulators have been pretty clear on that point. Again, they're not giving specific guidance, I would say, but they have said two of the things that they are most concerned with. One is AI washing, and they've already issued some fines there: if you tout that you're using AI, you know, for trading strategies or whatever, and you're not, you're going to get dinged. So that's obviously going to be part of whatever due diligence a financial services firm is going to be doing on a product; making sure that it actually is AI is going to be important, because that's something the regulators care about. And then the other thing, as you said, is the sensitive information, whether it's material non-public information or, as you said, confidential supervisory information; any AI touching on those things is going to be highly sensitive. And I think, you know, one of the challenges that most financial institutions have is that they don't know where all this data is, right? Or they don't have controls around that data. So that's part of the challenge: as much as every financial institution is going out there saying, we're going to be leveraging AI extensively, whether they are or not remains to be seen. There are potential regulatory issues with saying that and not actually doing it, which is, I think, somewhat new. And, as we sort of talked about, are the financial institutions really prepared for this level of change that's going on? I think that's one of the challenges that we're seeing, is that, in essence, they're not built for this, right? And Howard, you're seeing it on the procurement side a lot as they're starting to purchase this. Therese and I are seeing it on the governance side as they try to implement this, and they're just not ready, because of the risks involved, to actually fully implement or use some of these technologies.
Therese: So then what are they doing? What do we see the financial services industry doing to kind of approach the management governance of AI in the current environment?
Howard: Well, I can answer that from an operational perspective before we go into the governance perspective. From an operational perspective, it's what Anthony was alluding to, which is that banks cannot keep up with the pace of innovation. And therefore, they need to look out into the market for technological solutions that advance them over their competitors. And when they're all looking at AI, they're all clambering over each other to find the best solutions to procure and implement into their organizations. We're seeing a lot of interest from banks in buying AI systems from third-party providers. From a regulatory landscape, that draws in a lot of concern, because there are existing regulations in the US, in the UK and the EU around how you control your supply chain and make sure that you manage your organization responsibly and faithfully with adequate risk management systems, which extend all the way out to your reliance on third-party vendors. And so the way that we're seeing banks implement these risk management systems in the context of procurement is through contracts. And that's what we get involved in a lot: how do they govern the purchasing of AI systems into their organization from third-party vendors, and to what extent can they legislate against everything? They can't. And so the contracts have to be extremely fit for purpose and very keenly focused on the risks that AI presents when deployed within their business, and this is all very novel. For my practice, this is the biggest challenge I'm seeing. Once they deploy it into the organization, that's where Anthony and Therese come in, so I'll pass it back to you.
Anthony: Yeah. And I think, Howard, one of the things that we're seeing as a consequence here, and this is one of the challenges, is that a lot of the due diligence, in terms of how does the tool work and how will it be implemented, should be done before the contracting. I think that's one of the things that we're seeing: when does the due diligence come in? We're seeing it a lot where they've contracted already, and now they're doing the due diligence, testing it and the like. And I think that's one of the challenges we're going to keep seeing. I think one of the things, just from a governance perspective, and this is probably the biggest challenge, is when you think about governance, hopefully you have a committee; I think a lot of organizations have some type of committee that's reviewing this. One of the things that we've seen, and where these committees and governance are failing, is that they're not accounting for everything. It's going to the committee and they're signing off on data use, for example, and saying, okay, this type of data is appropriate use, or it's not training the model, and so on, which are very high-level and very important topics to cover. But it's not everything. And I think one of the things that we're seeing from a governance perspective is, where do you do the due diligence? Where do you get the transparency? You could have a contractual relationship and say that the tool works a certain way, that it's only doing this. But are we just going to rely on representations? Or are we actually going to do the due diligence, asking the questions, really probing, figuring out the settings, all of that? The earlier you do that, the better. Frankly, a lot of it, if it was done before the contract, would be better, because then if you find certain risks, the contracts can reflect those risks. So that's one of the governance challenges we have as we move forward here. And also, as I talked about earlier, sometimes the contracts are already set and then they put in an AI feature. That is often another gap that we're seeing in a lot of organizations: they may have third-party governance on the procurement side and they have contracts and the like, but they don't really have governance on new features coming in on a contract that's already in existence, and then you have to go back. And again, in the ideal situation, if they had that, you'd go back and look at the contract and ask, do we need to amend the contract? You probably should, to account for the fact that you're now using AI. So those are some of the governance challenges that we've been seeing.
Therese: But I do think what's interesting is that we are seeing financial services work to put in place more comprehensive governance structures for AI, which is a new thing, right? As Anthony's saying, we are seeing committees or working groups formed that are responsible for reviewing AI use within the organization. We are seeing folks trying to structure or update their third-party governance mechanisms to route applications that may have an AI feature to review. We are seeing folks trying to bring in the appropriate personnel. So sometimes, as Anthony's saying, they're not perfect yet; they're only focused on data use or IP. But we are more and more seeing people pull in compliance and legal and other personnel to focus on governance items. We're seeing greater training, real training of users, a lot of heavy focus on user training, appropriate use, and appropriate use cases in terms of the use of AI, and greater focus on the data that's being used and how to put controls in place, which is challenging right now, to minimize the use of AI on highly confidential information, or if it is being used, to have appropriate safeguards in place. And so I think what's interesting with AI, and different from what we've seen with other types of emerging technologies, is that both the regulators and the financial services industry are looking toward putting in place more comprehensive strategies, guidance and controls around the use of AI. It isn't perfect yet. It's not there yet. There's a lot of trial and error in development. But I think it's interesting that with AI, we are seeing kind of a coalescence around an attempt to have greater management, oversight and governance around the use of the technology, which isn't something, frankly, that we've necessarily seen on a wide scale with other types of emerging technologies, which of course are happening in the financial services industry all the time.
Anthony: Yeah. And just to highlight this, right, it starts with the contract, as Howard said, because that's where you start. So once you do the contracting, testing and validation are critical. And I think that's one of the things that a lot of organizations are dealing with, because they want to understand the model. There's not a lot of transparency around how the model works. That's just the way it is. So you have to do the testing and validation. And that's the due diligence that I was talking about before. And then documenting decisions, right? So what we're seeing is: you've got a governance council, you have to make sure the contract's there, you're doing the testing and validation, and all of this is documented. To me, that's the most important thing, because when the regulators come and say, how do you deal with this, you've got to have documentation that says, here's the way we're deploying AI in the organization, here's the documentation that shows we're doing it the right way. We're testing it. We're validating it. We have good contracts in place. All of that is, I think, critical. Again, I think the biggest challenge is the scale. This is moving so quickly that you probably have to prioritize. And this goes back to the data use that we were talking about before: you should probably be focusing on those AI tools that are really customer facing, or that are dealing with material non-public information, sensitive personal information or CSI, and that becomes a data governance issue. Figuring out, okay, what are the systems where I'm going to employ AI that touch upon these high-risk areas, that probably should be where the priorities are. That's where we've seen a lot of concern. If you don't have your data governance in place and you don't know which tools have highly sensitive information, it's really hard to have a governance structure around AI. So that's where, again, we're seeing a lot of financial institutions playing catch-up.
Therese: All right. Well, thanks for that, Anthony. And thanks to Howard as well. I think we have maybe barely scratched the surface of AI in banking. But thanks to everyone for joining us today. And please do join us for our next episode in our AI series.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act; namely, its implications for medical device manufacturers. They examine amazing new opportunities being created by AI, but they also warn that medical-device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Cynthia: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI and life sciences, particularly medical devices. I'm Cynthia O’Donoghue. I'm a Reed Smith partner in the London office in our emerging technology team. And I'm here today with Wim Vandenberghe. Wim, do you want to introduce yourself?
Wim: Sure, Cynthia. I'm Wim Vandenberghe, I'm a life science partner out of the Brussels office, and my practice is really about regulatory and commercial contracting in the life science space.
Cynthia: Thanks, Wim. As I mentioned, we're here to talk about the EU AI Act, which came into force on the 2nd of August and has various phases for when different aspects come into force. But I think a key thing for the life sciences industry and any developer or deployer of AI is that research and development activity is exempt from the EU AI Act. And the reason that was done is because the EU wanted to foster research, innovation and development. But the headline sounds great. If, as a result of research and development, that AI product is going to be placed on the EU market, essentially sold or used in products in the EU, it does become regulated under the EU AI Act. And there seems to be a lot of talk about the interplay between the EU AI Act and various other EU laws. So Wim, how does the AI Act interplay with the medical devices regulation, the MDR, and the IVDR?
Wim: That's a good point, Cynthia. And that's, of course, where a lot of the medical device companies are looking at that interplay and potential overlap between the AI Act on the one hand, which is a cross-sectoral piece of legislation that applies to all sorts of products and services, and the MDR and the IVDR on the other, which are of course only applicable to medical technologies. So in summary, both the AI Act and the MDR and IVDR will apply to AI systems, provided, of course, that those AI systems are in scope of the respective legislation. So maybe I'll start with the MDR and IVDR and then turn to the AI Act. Under the MDR and the IVDR, of course, there are many AI solutions that are either considered to be software as a medical device in their own right, or are a part or component of a medical technology. So to the extent that an AI system as software meets the definition of a medical device under the MDR or under the IVDR, it would actually qualify as a medical device, and therefore the MDR and IVDR are fully applicable to those AI solutions. Stating the obvious, there are plenty of AI solutions that are already on the market and being used in a healthcare setting as well. What the AI Act focuses on, particularly with regard to medical technology, is the so-called high-risk AI systems. And for a medical technology to be a high-risk AI system under the AI Act, there is essentially a twofold set of criteria that needs to apply. First of all, the AI solution needs to be a medical device or an in vitro diagnostic under the sector legislation, so the MDR or the IVDR, or it is a safety component of such a medical product. Safety component is not really explained in the AI Act, but think about, for example, the failure of an AI system to interpret diagnostic IVD instrument data, which could endanger the health of a person by generating false positives. That would be a safety component. So that's the first step: you have to see whether the AI solution qualifies as a medical device or as a safety component of a medical device. And the second step is that it only applies to AI solutions that actually undergo a conformity assessment by a notified body under the MDR or the IVDR. So to make a long story short, it actually means that medical devices that are class IIa, IIb or III will be in scope of the AI Act. And for the IVDR, for in vitro diagnostics, that would be risk classes B to D that are captured by the AI Act. So that essentially is what determines the scope and the applicability of the AI Act. And Cynthia, maybe coming back to an earlier point you made on research, the other curious thing that the AI Act doesn't really foresee is the fact that, to get an approved medical device, you need to do certain clinical investigations and studies on that medical device. So you really have to test it in a real-world setting, and that happens through a clinical trial or clinical investigation. The MDR and the IVDR have elaborate rules about that. And the very fact that you do this prior to getting your CE mark and your approval, and then launching the device on the market, is very standard under the MDR and the IVDR.
However, under the AI Act, which also requires CE marking and approval, and we'll come to that a little bit later, there's no mention of such clinical and performance evaluation of medical technology. So if you were to read the AI Act just like that, it would actually mean that you need to have a CE mark for such a high-risk AI system, and only then can you do your clinical assessment. And of course, that wouldn't be consistent with the MDR and the IVDR. We can talk a little bit later about consistency between the two frameworks as well. The one thing that I do see as being very new under the AI Act is everything to do with data and data governance. And I just wanted to ask, Cynthia, given your experience, if you can maybe talk a little bit about what the requirements are going to be for data and data governance under the AI Act.
Cynthia: Thanks, Wim. Well, the AI Act obviously defers to the GDPR, and the GDPR, which regulates how data is used and transferred within the EEA member states and then transferred outside the EEA, all has to interoperate with the EU AI Act. In the same way, as you were just saying, the MDR and the IVDR need to interoperate, and you touched, of course, on clinical trials, so the clinical trial regulation would also have to work and interoperate with the EU AI Act. Obviously, if you're working with medical devices, most of the time it's going to involve personal data and what is called sensitive, or special category, data concerning health about patients or participants in a clinical trial. So a key part of AI is the training data. And so the data that goes in, that's ingested into the AI system for purposes of a clinical trial or for a medical device, needs to be as accurate as possible. And obviously the GDPR also includes a data minimization principle, so the data needs to be the minimum necessary. But at the same time, that training data, depending on the situation, might be more controlled in a clinical trial. Once a product is put onto the market, there could be data that's ingested into the AI system that has anomalies in it. You mentioned false positives, but there's also a requirement under the AI Act to ensure that the ethical principles for AI, which the EU issued on a non-binding basis, are adhered to. And one of those is human oversight. So obviously, if there are anomalies in the data and the outputs from the AI would give false positives or create other issues, the EU AI Act requires, once a CE mark is obtained, just like the MDR does, that there be ongoing conformity assessment to ensure that any anomalies, and any necessity for human intervention, are addressed on a regular basis as part of reviewing the AI system itself. So we've talked about high-risk AI. We've talked a little bit about the overlap and interplay between the GDPR and the EU AI Act and the MDR and the IVDR. Let's talk about some real-world examples, for instance. I mean, the EU AI Act also classes education as potentially high risk if any kind of vocational training is based solely on assessment by an AI system. How does that potentially work with the way medical device organizations and pharma companies might train clinicians?
Wim: It's a good question. I mean, normally those kinds of programs would typically not be captured by the definition of a medical device under the MDR. So they'd most likely be out of scope, unless the program actually extends to real-life diagnosis, cure or treatment, helping the physician to make their own decision. But if it's really about training, it normally would fall out of scope. And that would be very different under the AI Act: there it would actually be captured and qualified as high risk. And what that would mean is that, unlike a medical device manufacturer that is very used to a lot of the concepts that are also used in the AI Act, and we'll come to that a little bit later, manufacturers or developers of this kind of software solution wouldn't necessarily be sophisticated in the medical technology sense in terms of having a risk and quality management system, having their technical documentation verified, et cetera, et cetera. So I do think that's one of those examples where there could be a bit of a mismatch between the two. We will have to see, of course, for a number of these obligations in relation to specific AI systems under the AI Act, whether high risk or the systems that you mentioned, Cynthia, which I think are more in Annex III of the AI Act, because the European Commission is going to produce a lot of delegated acts and guidance documents. So it remains to be seen what the Commission is going to provide in more detail about this.
Cynthia: Thanks, Wim. I mean, we've talked a lot about high-risk AI, but the EU AI Act also regulates general-purpose AI, and so chatbots and those kinds of things are regulated, but in a more minimal way under the EU AI Act. What if a pharma company or a medical device company has a chatbot on its website for customer service? Obviously, there are risks in relation to data and people inputting sensitive personal data, but there must also be a risk in relation to users of those chatbots seeking to use the system to triage or ask questions seeking medical advice. How would that be regulated?
Wim: Yeah, that's a good point. I mean, it would ultimately come down to the intended purpose of that chatbot. Is that chatbot really about just connecting the patient or the user with maybe a physician, who then takes it forward? Or would it be a chatbot that actually also functions more as a kind of triage system, where the chatbot, depending on the input given by the user or the answers given, would start making its own decisions and would already point towards a certain decision, whether a cure or treatment is required, et cetera? That would already be much more in the space of the medical device definition, whereas a general-use chatbot would not necessarily be. But it really comes down to the intended purpose of the chatbot. The one thing that is of course specific to an AI system, versus a more standard software or chatbot system, is that the AI's continuous learning may actually go beyond and above the intended purpose of what was initially envisaged for that chatbot. And that might be influenced, like you say, Cynthia, by the input. Maybe because the user is asking different questions, the chatbot may react differently and may actually go beyond the intended purpose. And that's really, I think, going to be a very difficult point of discussion, in particular with notified bodies, in case you need to have your AI system assessed by a notified body. Under the MDR and the IVDR, a lot of the notified bodies have gone on record saying that the process of continuous learning by an AI system, of course, ultimately entails a certain risk. And to the extent that a medical device manufacturer has not described, almost like within certain boundaries, how the AI system can operate, that would actually mean that it goes beyond the approval and would need to be reassessed and reconfirmed by the notified body. So I think that's going to be something, and it's really not clear under the AI Act. There's a certain idea about change: to what extent, if the AI system learns and changes, do you need to seek new approval and conformity assessment? And that change doesn't necessarily correspond with what is considered to be a significant change, which is the wording used under the MDR. So that doesn't necessarily correspond, again, between the two frameworks here either. And maybe one point, Cynthia, on the chatbot: it wouldn't necessarily qualify as high risk, but there are certain requirements under the AI Act, right? If I understand well, it's a lot about transparency, that you're transparent, right?
Cynthia: Mm-hmm. So anyone who interacts with a chatbot needs to be aware that that's what they're interacting with. So you're right, there's a lot about transparency and also explainability. But I think you're starting to get towards something about reconformity. I mean, if a notified body has certified the use, whether it be a medical device or AI, and there are anomalies in that data or it starts doing new things, potentially what is off-label use, surely that would trigger a requirement for a new conformity assessment.
Wim: Yeah, absolutely. I think, you know, under the MDR it wouldn't necessarily, even though with the MDR now there's language that you need to report: if you see a trend of off-label use, you need to report that. Under the AI Act, it's really down to what kind of change you can tolerate. And off-label use, by definition, is using the device for purposes other than the intended purposes the developer had in mind. So again, if you're reading the AI Act strictly, that would probably indeed trigger a new conformity assessment procedure as well.
Cynthia: One of the things that we haven't talked about, though we've alluded to it in discussing off-label use and anomalies in the data, is that the EU is also working on a separate AI liability regime, which is still in draft. So I would have thought that medical device manufacturers need to be very cognizant that there's potential for increased liability under the AI Act, you've got liability under the MDR, and obviously the GDPR has its own penalty system. So this will require quite a lot of governance to try and minimize risk.
Wim: Oh, absolutely. I think, I mean, you touched on a very good point. I wouldn't say that in Europe we're moving entirely to a claimant-friendly or class action-friendly litigation landscape. But there are a lot of new laws being put in place that might actually trigger that a bit. You rightfully mentioned the AI Liability Directive, which is still at the draft stage, but you already have the new General Product Safety Regulation in place, you have the class action directive as well, and you have the whistleblower directive being rolled out in all the member states. So I think all of that combined, and certainly with AI systems, does create increased risk. Certain risks a medical technology company will be very familiar with: risk management, quality management, the drafting of the technical documentation. All the labeling, document keeping, adverse event reporting, all of that is well known. But what is less well known is a bit the use cases that we discussed, but also the overlap and the potential inconsistencies between the different legal systems, especially on data and data governance. I don't think a lot of medical technology companies are so advanced yet. And we can already tell now, when a medical device that incorporates AI software is being certified, there are some questions. There's some language in the MDR about software and continuous compliance along the lifecycle of the software, but it's not at all as prescriptive as what will happen now with the AI Act, where you'll have a lot more requirements on things like data quality: what data sets have been used to train the algorithm, can we have access to them, et cetera, et cetera. You need to disclose that. That's certainly a big risk area. The other risk area that I would see, and again, this differs maybe a bit from the MDR, is that under the AI Act it's not just about imposing requirements on the developers, which essentially are the manufacturers. It's also on who are called the deployers. And the deployers are essentially the users. That could be hospitals or physicians or patients. And there are also requirements now being imposed on them. I do think that's a novelty to some extent as well. So it will be curious to see how they deal with that, how medical device companies are going to interact with their customers, with their doctors and hospitals, to guarantee continuous compliance, not just with the MDR, but now also with the AI Act.
Cynthia: Thanks, Wim. That's a lot for organizations to think about, as if those things weren't complicated enough under the MDR itself. I think some of the takeaways are obviously the interplay between the MDR, the IVDR and the EU AI Act, concerns around software as a medical device and the overlap with what is an AI system, which obviously has the potential for inferences and the generating of outputs, and then concerns around transparency, being able to be open about how organizations are using AI and explaining its use. We also talked a little bit about some of the risks in relation to clinician education, off-label use, anomalies within the data, and the potential for liability. So please feel free to get in touch if you have any questions after listening to this podcast. Thank you, Wim. We appreciate you listening, and please be aware that we will have more podcasts on AI over the coming months. Thanks again.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI and a regional perspective looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining.
Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters.
Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now?
Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation that went into effect on the 2nd of August, which regulates general purpose AI and high-risk general purpose AI and bans certain aspects of AI. But that's only part of the European ecosystem. The EU AI Act essentially will interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU. The AI Act has phased dates of effectiveness, but the biggest aspect of the EU AI Act in terms of governance lays out quite a lot, and so it's a perfect time for organizations to start thinking about that and getting ready for various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.?
Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where they're going to land on AI regulation. Not to say that we don't have legislation that has been put into place. We have Colorado with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which has really informed the governance process. And I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. And the one thing I would highlight, because we're sort of operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, both from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether it's internally or externally. And a lot of the governance process and program pulls guidance from some of those internal ethics as well.
Cynthia: Interesting. So I'd say it's somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree?
Andy: Yeah, that was also the question I wanted to ask Nikki: where she sees the parallels and whether organizations, in her view, can follow a global approach for AI governance. And yes, to the question you asked, the AI Act, the European one, is more encompassing. It puts a lot of obligations on developers and deployers, meaning companies that use AI. In the end, of course, it also has consumer or user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So yeah, Nikki, you know US law well and you have a good overview of European laws, while we are always struggling with the many US laws. So what's your thought: can companies, in terms of AI governance, follow a global approach?
Monique: In my opinion? Yeah, I do think that there will be a global approach. The way the US legislates, what we've seen is a number of laws that are governing certain uses and outputs first, perhaps because they were easier to pass than such a comprehensive law. So we see laws that govern the output in terms of use of likenesses and right of publicity violations. We're also seeing laws come up that regulate the use of personal information in AI as a separate category. Outside of the consumer and corporate consumer base, we're also seeing a lot of laws around elections. And then finally, we're seeing laws pop up around disclosure for consumers that are interacting with AI systems, for example, AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. So for example, Colorado did pass a comprehensive AI law, which speaks to both obligations for developers and obligations for deployers, similar to the way the EU AI Act is structured, and focuses on what Colorado calls high-risk AI systems, as well as algorithmic discrimination, which I think doesn't exactly follow the EU AI Act, but draws similar parallels and pulls a lot of principles from it. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because, the simple answer is, it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps in order to deploy the AI system. And so inherently, to me, that means that developers have to have a risk management process themselves if they're going to be able to comply with their obligations under Colorado law. So, because I know that there are a lot of parallels between what Colorado has done, what we see in the OMB memo to federal agencies, and the EU AI Act, maybe I can ask you, Cynthia and Andy, to talk a little bit about some of the ways that companies approach setting up the structure of their governance program. What are some buckets that they look at, or what are some of the first steps that they take?
Cynthia: Yeah, thanks, Nikki. I mean, it's interesting because you mentioned company-specific uses, and internal and external ones. I think one thing, before we get into the governance structure, or maybe as part of thinking about the governance structure, is that the EU AI Act also applies to employee data and to the use of AI systems for vocational training, for instance. So I think, in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using high-risk or general purpose AI, and some of the documentation and certification requirements that might apply to high risk versus general purpose. But the governance structure needs to take all those kinds of things into account. So, obviously, guidelines and principles about how people use external AI suppliers, how AI is going to be used internally, and what the appropriate uses are. Obviously, if it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, as well as how is that chatbot set up in terms of what it would be appropriate to use it for. So what are the appropriate use cases? So guidelines and policies are definitely foremost for that. And within those guidelines and policies, there are also the other documents that will come along: terms of use, I mentioned acceptable use, and then guardrails for the chatbot. I mean, one of the big things for EU AI is human intervention, to make sure that if there are any anomalies or somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that.
Andy: Yeah, definitely. I mean, the risk management process in the wider sense. How organizations start this at the moment is first setting up teams or responsible persons within the organization that take care of this, and we're going to discuss a bit later on what that structure can look like. And then, of course, the policies you mentioned, not only regarding the use, but also which process to follow when AI is being used, or even the question of what is AI: how do we find out where in our organization we're using AI, and what is an AI system as defined under the various laws, also making sure we have a global interpretation of that term. And then a step many of our clients are taking at the moment is setting up an AI inventory. That's already a very difficult and tough step. And the next one is then, per AI system that comes up in this register, to define the risk management process. And of course, that's the point where, in Europe, we look into the AI Act and see what kind of AI system we have, high risk or any other sort of defined system. Or, as today, we're talking about generative AI systems a bit more. For example, there we have strong obligations in the European AI Act on the providers of such generative AI. So less on companies that use generative AI, and more on those that develop and provide the generative AI, because they have the deeper knowledge of what kind of training data is being used. They need to document how the AI is working, and they need to register this information with the centralized database in the European Union. They also need to give some information on copyright-protected material that is contained in the training data. So there are quite some documentation requirements, and then of course logging requirements, to make sure the AI is used responsibly and does not trigger higher risks. There are also two categories of generative AI that can be distinguished. So that's roughly the risk management process under the European AI Act. And then, of course, organizations also look into risks in other areas: copyright, data protection, and also IT security. Cynthia, I know IT security is one of the topics you love. Can you add some more on IT security here, and then we'll see what Nikki says for the US.
Cynthia: Well, obviously NIS 2 is coming into force. It will cover providers of certain digital services, so it's likely to cover providers of AI systems in some way or other. And funnily enough, NIS 2 has its own risk management process involved. So there's supply chain due diligence involved, which would have to be baked into a risk management process for that. And then the EU's ENISA, the cybersecurity agency for the EU, has put together a framework for cybersecurity for AI systems, which is not binding. But it's certainly a framework that companies can look to in terms of getting ideas for how best to ensure that their use of AI is secure. And then, of course, under NIS 2, the various CSIRTs will be putting together various codes and have a network meeting in late September. So we may see more come out of the EU on cybersecurity in relation to AI. But obviously, just like any kind of user of AI, they're going to have to ensure that the provider of the AI has ensured that the system itself is secure, including if they're going to be putting training data into it, which of course is highly probable. I just want to say something about the training data. You mentioned copyright, and there's a difference between the EU and the UK. In the UK, you cannot mine data for commercial purposes. At one point, the UK was looking at an exception to copyright for that, but it doesn't look like that's going to happen. So there is a divergence there, but that stems from historic UK law rather than being a result of the change from Brexit. Nikki, turning back to you again, we've talked a little bit about risk management. How do you think that might differ in the US, and what kind of documentation might be required there? Or is it a bit looser?
Monique: I think there are actually quite a few similarities that I would pull from what we have in the EU. And Andy, I think this goes back to your question about whether companies can establish a global process, right? In fact, I think it's going to be really important for companies to see this as a global process as well, because AI development is going to happen throughout the world. And it's really going to depend on where it's developed, but also where it's deployed, and where the outputs are deployed. So I think taking a broader view of risk management will be really important in the context of AI, particularly given that the nature of AI is to process large swaths of information, really on a global scale, in order to make these analytics and creative development and content generation processes faster. So, just a quick aside: I actually think what we're going to see in the US is a lot of pulling from what we've seen in the EU, and a lot more cooperation on that end. I agree that really starting to frame the risk governance process means looking at who the key players are that need to inform that risk measurement and tolerance analysis, and the decision-making in terms of how you evaluate, how you inventory, and then determine how to proceed with AI tools. And one of the things that I think makes it hopefully a little bit easier is to be able to leverage, from a U.S. perspective, existing compliance procedures that we have, for example, for SEC compliance or privacy compliance or other ethics compliance programs, and make AI governance a piece of that, as well as expand on it. Because I do think that AI governance brings in all of those compliance pieces. We're looking at harms that may exist to a company, not just from personal information, not just from security, not just from consumer unfair and deceptive trade practices, not just from environmental standpoints, but from a very holistic view of, not to make this a bigger thing than it is, kind of everything, right? Every aspect that comes in. And you can see that in some of the questions that developers or deployers are supposed to be able to answer in risk management programs. For example, in Colorado, the information that you need to be able to address in a risk management program and an impact assessment really has to demonstrate an understanding of the AI system: how it works, how it was built, how it was trained, what data went into it, and then what the full range of harms is. So, for example, the privacy harms, the environmental harms, the impact on employees, the impact on internal functions, the impact on consumers if you're using it externally, and really be able to explain that. Whether you have to put out a public statement or not will depend on the jurisdiction. But even internally, you need to be able to explain it to your C-suite and make them accountable for the tools that are being brought in, or make it explainable to a regulator if they were to come in and say, well, what did you do to assess this tool and mitigate known risks? So, kind of with that in mind, I'm curious: what steps do you think need to go into a governance program? Like, what are some of the first initial steps?
And I always feel that we can start in so many different places, right, depending on how a company is structured or what compliance pieces are already in place. But I'm curious to hear from you: what would be one of the first steps in beginning the risk management program?
Cynthia: Well, as you said, Nikki, one of the best things to do is leverage existing governance structures. You know, if we look, for instance, into how the EU is even setting up its public authorities to look at governance, you've got, as I mentioned at the outset, almost a multifaceted team approach. And I think it would be the same. I mean, the EU anticipates that there will be an AI officer, but obviously there have got to be team members around that person. There are going to be people with subject matter expertise in data, subject matter expertise in cyber. And then there will be people who have subject matter expertise in relation to the AI system itself: the training data that's been used, how it's been developed, how the algorithm works, whether or not there can be human intervention, what happens if there are anomalies or hallucinations in the data, and how that can be fixed. So I would have thought that ultimately part of that implementation is looking at the governance structure and then starting from there. And then obviously, we've talked about some of the things that go into the governance. But, you know, we have clients who are looking first at the use case and then going, okay, what are the risks in relation to that use case? How do we document it? How do we log it? How do we ensure that we can meet our transparency and accountability requirements? What other due diligence and other risks are out there, the blue-sky thinking, that we haven't necessarily thought about? Andy, any thoughts?
Andy: Yeah, that's, I would say, one of the first steps. I mean, even though not many organizations now allocate the core AI topic to the data protection department, but rather perhaps to the compliance or IT area, still, from the governance process and starting up that structure, we see a lot of similarities to the data protection, GDPR governance structure. And so I think back five years to the implementation of, or getting ready for, GDPR: planning and checking what other rules we need to comply with, who we need to involve, getting the plan ready and then working along that plan. That's the phase where we see many of our clients at the moment. Nikki, more thoughts from your end?
Monique: Yeah, I think those are excellent points. And what I have been talking to clients about is first establishing the basis of measurement that we're going to evaluate AI development, or procurement, on: what are the company's internal principles and risk tolerances, and defining those. And then, based off of those principles and those metrics, putting together an impact assessment, which borrows a lot from what you both said; it borrows a lot from the concept of impact assessments under privacy compliance, right? To implement the right questions and put together the right analytics in order to measure whether an AI tool that's in development is meeting those metrics, or whether something that we are procuring is meeting those metrics, and then analyzing the risks that come out of that. I think the impact assessment is going to be really important in helping make those initial determinations. But also, and this is not just my feeling, it is something that is also required in the Colorado law, is setting up an impact assessment and then repeating it annually, which I think is particularly important in the context of AI, especially generative AI, because generative AI is a learning system. So it is going to continue to change. There may be additional modifications made in the course of use that are going to require reassessing: is the tool working the way it is intended to be working? What has our monitoring of the tool shown? And what are the processes we need to put into place in order to mitigate the tool going a little bit off path, AI drift, more or less? Or, if we start to identify issues within the AI, what processes do we have internally to redirect the ship in the right direction? So I think impact assessments are going to be a critical tool in helping form the rest of the risk management process that needs to be in place.
Andy: All right. Thank you very much. I think these were a couple of really good practical tips, and especially good first next steps, for our listeners. We hope you enjoyed the session today, and we look forward to any feedback, either here in the comment boxes or directly to us. And we hope to welcome you soon to one of our next episodes on AI and the law. Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith partners share insights about U.S. Department of Health and Human Services initiatives to stave off misuse of AI in the health care space. Wendell Bartnick and Vicki Tankle discuss a recent executive order that directs HHS to regulate AI’s impact on health care data privacy and security and investigate whether AI is contributing to medical errors. They explain how HHS collaborates with non-federal authorities to expand AI-related protections; and how the agency is working to ensure that AI outputs are not discriminatory. Stay tuned as we explore the implications of these regulations and discuss the potential benefits and risks of AI in healthcare.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Wendell: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in healthcare. My name is Wendell Bartnick. I'm a partner in Reed Smith's Houston office. I have a degree in computer science and focused on AI during my studies. Now, I'm a tech and data lawyer representing clients in healthcare, including providers, payers, life sciences, digital health, and tech clients. My practice is a natural fit given all the innovation in this industry. I'm joined by my partner, Vicki Tankle.
Vicki: Hi, everyone. I'm Vicki Tankle, and I'm a digital health and health privacy lawyer based in Reed Smith's Philadelphia office. I've spent the last decade or so helping health industry clients, including healthcare providers, pharmaceutical and medical device manufacturers, health plans, and technology companies, navigate the synergies between healthcare and technology, and advising on the unique regulatory risks that are created when technology and innovation far outpace our legal and regulatory frameworks. And we're oftentimes left managing risks in the gray, which as of today, July 30th, 2024, is where we are with AI and healthcare. So when we think about the use of AI in healthcare today, there's a wide variety of AI tools that support the health industry. And among those tools, there's a broad spectrum of uses of health information, including protected health information, or PHI, regulated by HIPAA, both to improve existing AI tools and to develop new ones. And if we think about the spectrum as measuring the value or importance of the PHI, the individual identifiers themselves, it may be easier to understand the far ends of the spectrum and the risks at each end. Regulators and the industry have generally categorized the use of PHI in AI into two buckets: low risk and high risk. But the middle is more difficult, and there can be greater risk there, because it's where we find the use or value of PHI in the AI model to be potentially debatable. On one end of the spectrum, the lower risk end, there are AI tools such as natural language processors, where individually identifiable health information is not central to the AI model. Instead, in this example, it's the handwritten notes of the healthcare professional that the AI model learns from. And the more data and notes there are, the better the tool's recognition of the letters themselves, not the words the letters form, such as a patient's name, diagnosis, or lab results, and the better the tool operates. On the other end of the spectrum, the higher risk end, there are AI tools such as patient-facing next-best-action tools that are based on an individual patient's medical history, their reported symptoms, their providers, their prescribed medications, potentially their physiological measurements, or similar information, and they offer real-time customized treatment plans with provider oversight. Provider-facing clinical decision support tools similarly support the diagnosis and treatment of individual patients based on the individual's information. And then in the middle of the spectrum, we have tools like hospital logistics planners. So think of tools that look at when the patient was scheduled for an x-ray, when they were transported to the x-ray department, how long they waited before they got the x-ray, and how long after they received the x-ray they were provided with the results. These tools support population-based activities that relate to improving health or reducing costs, as well as case management and care coordination, which begs the question: do we really need to know that patient's identity for the tool to be useful? Maybe yes, if we also want to know the patient's sex, their date of birth, their diagnosis, their date of admission. Otherwise, we may want to consider whether this tool can work and be effective without that individually identifiable information. What's more is that there's no federal law that applies specifically to the use of regulated health data in AI.
HIPAA was first enacted in 1996 to encourage healthcare providers and insurers to move away from paper medical and billing records and to get online. And although HIPAA has been updated over the years, the law still remains outdated in that it does not contemplate the use of data to develop or improve AI. So we're faced with applying an old statute to new technology and data use, again operating in a gray area, which is not uncommon in digital health or for our clients. And to that end, there are several strategies that our HIPAA-regulated clients are considering when they're thinking of permissible ways to use PHI in the context of AI. So treatment, payment and healthcare operations activities for covered entities, proper management and administration for business associates, certain research activities, individual authorizations, or de-identified information are all strategies that our clients are currently thinking through in terms of permissible uses of PHI in AI.
Wendell: So even though HIPAA hasn't been updated to apply directly to AI, that doesn't mean that HHS has ignored it. AI, as we all know, has been used in healthcare for many years, and in fact HHS has actually issued some guidance previously. The White House's Executive Order 14110, issued back in the fall of 2023 and titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," jump-started additional HHS efforts. So I'm going to talk about seven items in that executive order that apply directly to the health industry, and then we'll talk about what HHS has done since this executive order. First, the executive order requires the promotion of additional investment in AI, and it helps prioritize AI projects, including safety, privacy, and security. Second, the executive order requires that HHS create an AI task force that is supposed to meet and create a strategic plan covering several AI topics, including AI-enabled technology, long-term safety and real-world performance monitoring, equity principles, safety, privacy and security, documentation, state and local rules, and then promotion of workplace efficiency and satisfaction. Third, HHS is required to establish an AI safety program that is supposed to identify and track clinical errors produced by AI and store that information in a centralized database. And then, based on what that database contains, they're supposed to propose recommendations for preventing errors and avoiding harms from AI. Fourth, the executive order requires that all federal agencies, including HHS, focus on increasing compliance with existing federal law on non-discrimination. Along with that comes education and greater enforcement efforts. Fifth, HHS is required to evaluate the current quality of AI services, and that means developing policies, procedures, and infrastructure for overseeing AI quality, including with respect to medical devices. Sixth, HHS is required to develop a strategy for regulating the use of AI in the drug development process. Of course, FDA has already been regulating this space for a while. And then seventh, the executive order actually calls on Congress to pass a federal privacy law. But even without that, HHS's AI task force is including privacy and security as part of its strategic plan. So given those seven requirements for HHS to cover, what have they done since the fall of 2023? Well, as of the end of July 2024, HHS has created a funding opportunity for applicants to receive money if they develop innovative ways to evaluate and improve the quality of healthcare data used by AI. HHS has also created the AI task force. And many of our clients are asking us about AI governance: what can they do to mitigate risk from AI? The task force has issued a plan for state, local, tribal, and territorial governments related to privacy, safety, security, bias, and fraud. And even though that applies to the public sector, our private sector clients should take a look at it so that they know what HHS is thinking in terms of AI governance. Along with this publication, NIST also produces several excellent resources that companies can use to help them with their AI governance journey. Also important is that HHS has recently restructured internally to try to consolidate HHS's ability to regulate technology and areas connected to technology, and to place that under ONC.
And ONC, interestingly enough, has posted job postings for a chief AI officer, a chief technology officer, and a chief data officer. So we would expect that once those roles are filled, they will be highly influential in how HHS looks at AI, both internally and externally, and how that will impact the strategic thinking and position of HHS going forward with respect to AI. Our provider and tech clients have also been interested in how AI, and what HHS is saying about it, affects certified health IT. Earlier this year, ONC published the HTI-1 rule, which, among other things, establishes transparency requirements for AI that's offered in connection with certified health IT. The compliance deadline for that rule is December 31st of this year. HHS has also been focusing on non-discrimination, just as the executive order requires. And so our clients are asking, can they use AI for certain processes and procedures? In fact, it appears that HHS strongly endorses the use of AI and technology in improving patient outcomes, etc. They've certainly not published anything that says AI should not be used. And in fact, CMS issued a final rule this year, along with FAQs, clarifying that AI can be used to process claims under Medicare Advantage plans, as long as there's human oversight and all other laws are complied with. So there is no indication at all from HHS that using AI is somehow prohibited, or that companies should be worried about using it, as long as they comply with existing law. So after the White House executive order in the fall of 2023, HHS has a lot of work to do. They've done some, but there's still a lot to do related to AI, and we should expect more guidance and activity in the second half of 2024.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
AI-driven autonomous ships raise legal questions, and shipowners need to understand autonomous systems’ limitations and potential risks. Reed Smith partners Susan Riitala and Thor Maalouf discuss new kinds of liability for owners of autonomous ships, questions that may occur during transfer of assets, and new opportunities for investors.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Susan: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. And today we will focus on AI in shipping. My name is Susan Riitala. I'm a partner in the asset finance team of the transportation group here in the London office of Reed Smith.
Thor: Hello, I'm Thor Maalouf. I'm also a partner in the transportation group at Reed Smith, focusing on disputes.
Susan: So when we think about how AI might be relevant to shipping, one immediate thing that springs to mind is the development of marine autonomous vessels. So, Thor, please can you explain to everyone exactly what autonomous vessels are?
Thor: Sure. So, according to the International Maritime Organization, the IMO, a maritime autonomous surface ship, or MASS, is defined as a ship which, to a varying degree, can operate independently of human interaction. Now, that can include using technology to carry out various ship-related functions like navigation, propulsion, steering, and control of machinery, which can include using AI. In terms of real-world developments, at this year's meeting of the IMO's working group on autonomous vessels, which happened last month in June, scientists from the Korean Research Institute outlined their work on the development and testing of intelligent navigation systems for autonomous vessels using AI. That system was called NEEMO. It's undergone simulated and virtual testing, as well as inland water model tests, and it's now being installed on a ship with a view to being tested at sea this summer. Participants in that conference also saw simulated demonstrations from other Korean companies, like the familiar Samsung Heavy Industries and Hyundai, of systems that they're trialing for autonomous ships, which include autonomous navigation systems using a combination of AI, satellite technology and cameras. Crewless coastal cargo ships are already operating in Norway, and a crewless passenger ferry is already being used in Japan. Now, fundamentally, autonomous devices learn from their surroundings and complete tasks without continuous human input. So this can range from simplifying automated tasks on a vessel to a vessel that can conduct its entire voyage without any human interaction. The IMO has worked on categorizing a spectrum of autonomy using different degrees and levels of automation. The lowest level still involves some human navigation and operation, and the highest level does not. So, for example, the IMO has Degree 1 of autonomy: a ship with some automated processes and decision support, where there are seafarers on board to operate and control shipboard systems and functions, but where some operations can at times be automated and unsupervised. As that moves up through the degrees, we get to, for example, Degree 3, where you have a remotely controlled ship without seafarers on board; the ship will be controlled and operated from a remote location. That goes all the way up to Degree 4, the highest level of automation, where you have a fully autonomous ship whose operating systems are able to make their own decisions and determine their own actions without human interaction.
Susan: Okay, so it seems like from what you said, there are potentially a number of legal challenges that could arise from the increased use of autonomy in shipping. So for example, how might the concept of seaworthiness apply to autonomous vessels, especially ones where you have no crew on board?
Thor: Yeah, that's an interesting question. So the requirement for seaworthiness is generally met when a vessel is properly constructed, prepared, manned and equipped for the voyage that's intended. Now, autonomous vessels are not going to be crewed in the conventional way, so the query turns to how a shipowner can actually warrant that a vessel is properly manned for the intended voyage where some systems are automated. What standard of autonomous or AI-assisted watchkeeping setup could be sufficient to qualify as having exercised due diligence? A consideration is, of course, whether responsibility for seaworthiness could actually be shifted from the shipowner to the manufacturer of the automated functions, or the programmer of the software behind the automated functions on board the vessel. As you're aware, the concept of seaworthiness is one of many warranties that's regularly incorporated in contracts for the use of ships and for carriage of cargo. A shipowner can be liable for the damage that results if there's an incident and the shipowner has failed, beforehand, to exercise due diligence to make the ship seaworthy. Under English law, this is judged by the standard of what level of diligence would be reasonable for a reasonably prudent shipowner, and that's true even if there has been a subsequent nautical fault on board. But how much oversight and knowledge of the workings of an autonomous or AI-driven system could a prudent shipowner actually have? I mean, are they expected to be a software or AI expert? Under the existing English law on unseaworthiness, a shipowner or a carrier might not be responsible for faults made by an independent contractor before the ship came into their possession or orbit, so potentially faults made during the shipbuilding process. So to what extent could any faults in an AI or autonomous system be treated in that way? Perhaps a shipowner or carrier could claim that a defect in an autonomous system came about before the vessel came into their orbit, and that therefore they're not responsible for subsequent unseaworthiness or the incidents that result. There's also typically an exception to a shipowner's liability for navigational faults on board the vessel if that vessel has passed a seaworthiness test. But if certain crew and management functions have been replaced by autonomous AI systems on board, how could we assess whether or not there has actually been a navigational fault for which the owners might escape liability, or a pre-existing issue of unseaworthiness, such as a pre-existing hardware or software glitch? This opens up a whole new line of inquiry into what might have happened behind the software code or the protocols of the autonomous system on board. The legal issues of the shipowner's responsibility, and the applicable liability for any incidents which might have been caused by unseaworthiness, are going to involve significant legal inquiry into new areas when it comes to autonomous vessels.
Susan: Sounds very interesting. And I guess that makes me think of a wider issue, of which crewing is only one part, which would be standards and regulations relating to autonomous vessels. And obviously, as a finance lawyer, that would be something my clients will be particularly interested in: what standards are in place so far for autonomous vessels, and what regulation can we expect in the future?
Thor: Sure. Well, the answer is that at the moment, there's not very much. As I've mentioned already, the IMO has established a working group on autonomous vessels, and the aim of that working group is to adopt a non-mandatory, goal-based code for autonomous vessels, the MASS Code, which is aimed to be in place by 2025. But like I said, that will be non-mandatory, and it will then form the basis for what's intended to be a mandatory MASS Code, which is expected to come into force on the 1st of January 2028. Now, the MASS Code working group last met in May of this year and reported on a number of recommendations for inclusion in the initial voluntary MASS Code. Interestingly, one of those recommendations was for all autonomous vessels, so even the fully autonomous Degree 4 vessels, to have a human being, a person in charge, designated as the master at all times, even if that person is remote. So that may rule out a fully autonomous, non-supervised vessel from being compliant with the code. So mandatory standards are still very much in development and are not expected to be in force until 2028. At the moment, that doesn't mean to say there won't be national regulations or flag regulations covering those vessels before then.
Susan: Right. And then I guess another area there would be insurance. I mean, what happens if something happens to a vessel? I mean, I'm looking at it from a financial perspective, of course, but obviously for ship owners as well, insurance will be the key source of recovery. So what kinds of insurance products would already be available for autonomous vessels?
Thor: Well, it's good to know that some of the insurers are already offering products covering autonomous vessels. Just having Googled what's available the other day, I came across the Shipowners' Club, which holds entries for between 50 and 80 autonomous vessels under their all-risks P&I cover. And it seems that Gard is also providing hull and machinery and P&I cover for autonomous vessels. So I can see that the industry is definitely taking steps to get to grips with cover for autonomous vessels, and hull and P&I cover is definitely out there. So we've covered some of the legal challenges and insurance and what autonomous vessels are. I wonder, Susan, what other more specific challenges people interested in financing autonomous vessels might face?
Susan: Sure. Yeah. So, I mean, I guess I'll preface that by saying that I'm an asset finance lawyer, so instinctively, when I think about financing autonomous vessels, I'm thinking about the asset itself, so either financing the construction or the acquisition of the vessel. But in terms of autonomous vessels in particular, there are boundless investment opportunities beyond just the vessel itself: financing some of the research and development, some of the corporate finance of the companies designing and building those vessels, and the technology used to operate them. So there's, I imagine, a vast opportunity here for an investor who's keen to get involved. From a commercial perspective, autonomous vessels are pretty new and pretty untested. Obviously, you've talked a lot about the fact that a lot of the regulation isn't really completely there yet, and there's a lot of development still to come. So it takes quite a brave investor to put funding into it, and so far, at least, the return on investment is a bit uncertain. It's not like investing in a tanker or a bulk carrier, where you've got a known market: everyone knows what the problems are, everyone knows what the risks are and how to mitigate them. So in a lot of ways, this is all still very, very new, both for the owners and for the financiers. But investors are very interested in sustainability solutions, and they're interested in what the next big thing is. So I imagine that autonomous ships are quite likely to appeal, with potentially better safety records and being more sustainable. That in turn would make the asset better value for the investors and less likely to result in insurance claims or reputational damage resulting from incidents and that sort of thing. From a legal perspective, it doesn't immediately seem that there would be a huge difference in taking a mortgage over an autonomous ship versus a manned one. But then it becomes a bit more complicated if we start to think about enforcing that mortgage. The traditional way to enforce a mortgage is for the mortgagee to arrest the vessel in a suitable port. Depending on where the vessel is, the lender may need to instruct the borrower or the manager to sail the vessel to a suitable port, and if the borrower fails to do this, the lender can become a mortgagee in possession, take over the ship, sail it into a friendly port and apply for a judicial sale. But how are you going to do that if you can't just go on board and say to the master, hey, I've arrested this ship, I'm going to take over now? And thinking about, for example, the Degree 3 vessels where you'd have a remote operator redirecting the ship, what happens? Presumably the mortgagee would have to go to them and say, we'd like you to redirect this vessel. What if they refuse? Can the lender take over? Can they override the autonomous system or the remote operation? Would they have to? Would there be cybersecurity issues, issues with passwords and access and things like that? These are all big questions at the moment, because no one's tried to do this yet. So it isn't really clear how all of this would fit in with the existing law on the rights of a mortgagee in possession, which is a very well-tested legal concept, but one that does assume physical control of the ship, which is not as obvious in an autonomous scenario as it would otherwise be.
And a connected issue to that would be what I already mentioned: the absence of a clear market, which would be relevant in the context of a judicial sale. At least at the outset, valuing autonomous vessels could be a bit difficult, and until there's a clearly defined secondhand market, it might be difficult for lenders to determine whether it's even worth enforcing in terms of the potential return they would get, because it's difficult to analyze how much you might be able to get for the vessel. I'm not aware of any cases where someone has tried to do this. So the existing law will definitely need to develop, and it's going to be a very interesting time as we navigate these changes in the market in relation to autonomous vessels.
Thor: Yeah, I can see that autonomy definitely throws up a whole bunch of issues for financing.
Susan: Definitely. I mean, at the moment, we don't entirely know all the answers, but we're definitely looking forward to finding out.
Thor: Right.
Susan: Thank you so much for joining us for our AI podcast today.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
In this episode, we explore the intersection of artificial intelligence and German labor law. Labor and employment lawyers Judith Becker and Elisa Saier discuss key German employment laws that must be kept in mind when using AI in the workplace; employer liability for AI-driven decisions and actions; the potential elimination of jobs in certain professions by AI and the role of German courts; and best practices for ensuring fairness and transparency when AI has been used in hiring, termination and other significant personnel actions.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Judith: Hello, everyone. Welcome to Tech Law Talks and to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the workplace in Germany. We would like to walk you through the employment law landscape in Germany and would also like to give you a brief outlook on what's yet to come, looking at the recently adopted EU regulation on artificial intelligence, the so-called European Union AI Act. My name is Judith Becker. I'm a counsel in the Labor and Employment Group at Reed Smith. I'm based at the Reed Smith office in Munich, and I'm here with my colleague Elisa Saier. Elisa is an associate in the Labor and Employment Law Group, and she's also based in the Reed Smith office in Munich. So, Elisa, we are both working closely with the legal and HR departments of our clients. Where do you already come across AI in employment in Germany, and what kind of use can you imagine in the future?
Elisa: Thank you, Judith. I am happy to provide a brief overview of where AI is already being used in working life and in employment law practice. The use of AI in employment law practice is not only increasing worldwide, but certainly also in Germany. For example, workforce planning and recruiting can be supported by AI. A pretty large number of AI tools already exists for recruiting, for example for the job description and advertisement, the actual search and screening of applicants, the interview process, the selection and hiring of the right match, and finally the onboarding process. AI-powered recruiting platforms can make the process of finding and hiring talent more efficient, objective, and data-driven. These platforms use advanced algorithms to quickly scan CVs and applications and automatically pre-select applicants based on criteria such as experience, skills, and educational background. This not only saves time, but also improves the accuracy of the match between candidates and vacancies. In the area of employee evaluation, artificial intelligence offers the opportunity to continuously analyze and evaluate performance data. This enables managers to make well-founded decisions about promotions, salary adjustments, and further training requirements. AI is also used in the field of employee compensation. By analyzing large amounts of data, AI can identify current market trends and industry-specific salary benchmarks. This enables companies to adjust their salaries to the market faster and more accurately than with traditional methods. When terminating employment relationships, AI can be used to support the social selection process, the calculation of severance payments, and the drafting of warnings and termination letters. Finally, AI can support compliance processes, for example in the investigation of whistleblowing reports received via ethics hotlines. Overall, it is fair to say that AI has arrived in practice in the German workplace. This certainly raises questions about the legal framework for the use of AI in the employment context. Judith, could you perhaps explain which legal requirements employers need to consider if they want to use AI in the context of employment?
Judith: Yes, thank you, Elisa. Sure. The German legislature has so far hardly provided any AI-specific regulations in the context of employment. AI has only been mentioned in a few isolated instances in German employment laws. However, this does not mean that employers in Germany are in a legal vacuum when they use AI. There are, of course, general, non-AI-specific employment laws and employment law principles that apply in the context of using AI in the workplace. In the next few minutes, we would like to give you an overview of the most relevant of these employment laws that German-based employers should keep in mind when they use AI. I would like to start with the General Equal Treatment Act, the so-called AGG. Employers in Germany should definitely have that act in mind, as it applies, and can also be violated, even if AI is interposed for certain actions. According to this act, discrimination against job applicants and employees during their employment on the grounds of race or ethnic origin, gender, religion or belief, disability, age or sexual orientation is, generally speaking, prohibited. Although AI is typically regarded as something objective, AI can also have biases, and as a result the use of AI can also lead to discriminatory decisions. This may occur when, for example, the training data the AI is trained on is itself based on human biases, or if the AI is programmed in a way that is discriminatory. Currently, for example, as Elisa explained in the beginning, AI is very often used to optimize the application process, and when a biased AI is used here, for example for selecting or rejecting applicants, this can lead to violations of the General Equal Treatment Act. And since AI is not a legal subject itself, this discrimination would be attributable to the employer that is using the AI. The result would then be, in the event of a breach of the Act, that the employer is exposed to claims for damages and compensation payments. In this context, it is important to know that under the General Equal Treatment Act, the employee only has to demonstrate that there are indications that suggest discrimination. If the employee is able to do so, then the burden of proof shifts to the employer, and the employer must then prove that there was in fact no such discrimination. When an employer uses AI, due to the technical complexity that is involved, that can be quite challenging. In this regard, we think that human control of the AI system is key and should be maintained. As we heard from Elisa in the beginning, AI is not only used in the hiring process, but also in the course of the employment. One question that comes up here is whether AI can function as a superior itself and whether AI can give work instructions to employees. The initial answer here is yes. German law does not stipulate that work instructions have to be given by a human being. Therefore, just as it is possible to delegate the right to give instructions to a manager or to another superior, it is also possible to enable an AI system to give instructions to employees. In this context, it is important to recall, however, that the instructions are, of course, again attributable to the employer.
And if the AI instructs in a way that is, for example, outside of reasonable discretion, or gives instructions which are outside of the employee's contract, then this instruction would, of course, be unlawful, and that would be attributable to the employer as well. One aspect that I would like to point out here is that if an AI system would lead to a decision towards the employee that has legal effects and impacts the employee in a very significant way, then such a decision may not be made exclusively by an AI. This is because of a principle that is found in the data protection laws, and Elisa will explain this in greater detail. Another aspect of AI in the course of employment is whether employers can instruct their employees to use AI. Again, here the answer is yes. This is part of the employer's right to give instructions, and this right covers not only whether employees should use AI at all or whether they are prohibited from using it; it also covers what kind of AI can be used. To avoid any misunderstandings and to provide clarity here, we advise that employers should have a clear AI policy in place so that employees know what the expectations are and what they are and are not allowed to do. And in this context, we think it is also very important to address confidentiality issues and IP aspects, in particular if publicly accessible AI is used, such as ChatGPT.
Elisa: Yes, that's true, Judith. I agree with everything you said. In connection with the employer's right to issue instructions, the question also arises as to the extent to which employees may use AI to perform their work. The principle here is that if the employer provides its employees with a specific AI application, they are allowed to use it accordingly. Otherwise, however, things can get more complicated. This is because under German law, employees are generally required to carry out their work personally, which means that they are generally not allowed to have other persons do their work in their place. The key factor is likely to be whether the AI application is used to support the employee in performing a task or whether the AI application performs the task alone. The scope of the use of AI is certainly relevant here as well. If employees limit themselves to giving instructions to the AI application for a work task and simply copy the result, this can be an indication of a breach of the duty of personal work performance. However, if employees ensure that they perform a significant part of the work themselves, the use of AI should not constitute a breach of duty. Employers are also free to expressly prohibit the use of artificial intelligence. It is also possible for employers to set binding requirements as to which tasks employees may use AI for and what they must observe when doing so. In the event of violations, employers can then, depending on the severity of the violation, take action by issuing a warning or giving notice of termination. Even without an express prohibition from the employer, employees may not be permitted to use artificial intelligence, or must at least inform the employer about the use of AI, in order not to violate the obligations arising from the employment contract. Data protection law is particularly important here. For data protection reasons, employees may not be allowed to enter protected personal data in an AI dialog box. This is mainly due to the fact that some AI systems are hosted on servers with lower data protection standards than in the EU. General data protection principles that apply in the context of employee data protection, and the information rights of the employees concerned, must also be observed when setting up and using AI. As a general rule, data that is no longer required must be deleted, and incorrect data must be corrected. If the purpose of the data processing is changed, the employees concerned must be informed, too. In addition, the GDPR imposes special information requirements for automated decision-making, in particular for profiling. In these cases, employers must inform the data subject of the existence of automated decision-making and provide information on the scope and intended effects of such processing for the data subject. This can be a challenge for employers in practice, especially if they are using AI developed by other providers and the employer therefore does not have much knowledge about the functioning of the AI system. When using AI, companies should also observe the ban on automated individual decision-making in accordance with Article 22 GDPR. This regulation states that decisions that have legal consequences for the data subject or significantly affect them may not be based solely on the automated processing of personal data. Examples of this include selection decisions when recruiting applicants or giving notice of termination.
The background to this is the protection of employees who should not be completely subject to a processing system in important matters. Accordingly, decisions on hiring, promotions, terminations, or warnings can generally not be made conclusively by an AI system, but are subject to a human decision-making process. Although it should be permissible to use AI to prepare such decisions, it should be ensured that a human is involved in the final decision-making process for this type of decisions. This human involvement should also be documented by the employer for evidence purposes.
Judith: Okay. I want to briefly take another look at termination of employment, but from a slightly different angle. Many employees may now fear that their job positions will be eliminated because AI will basically take over their job and they will be replaced by AI. So we had a quick look at whether AI can be a reason for termination in Germany. Well, we haven't seen any specific case law on this, probably because it is too early, but we think that the German labor courts would apply the general principles that they apply in redundancy scenarios. This means that the employer that uses AI would have to demonstrate that, due to the use of AI, the job duties of the affected employee are eliminated in full and thus there is no need for this employment anymore. This can in practice be quite challenging, and the German labor courts will fully review whether job duties have in fact been eliminated. The German labor courts, however, won't review whether the decision to use AI is reasonable or not. They will just review whether this decision is obviously arbitrary or, for example, discriminatory. But the decision itself is part of the entrepreneurial freedom, and the courts won't assess whether it is a good decision or not. The courts would probably also apply all other general principles in redundancy scenarios with respect to, for example, offering suitable vacancies and with respect to social selection processes. We do not yet know whether courts would apply a stricter standard when it comes to training measures, for example to get a position holder fit for the new workplace. We think that the case law should be monitored, and we will see how the courts decide in such a scenario. So, Elisa, let's have a look at a German particularity and let's have a look at those German-based employers who have a works council. Are there any specific legal implications here?
Elisa: Sure, Judith. The introduction and use of AI as a technical system in a company is generally subject to co-determination by the works council, if one exists in the respective company. In addition, the works council has information and consultation rights and must therefore be involved when using AI. Moreover, it is important to note that the works council's right to information already starts with the planning process, so that the works council must be informed at an early stage, before the AI is actually implemented. The works council also generally has the right to consult experts during the implementation of AI. A different assessment with regard to the co-determination rights of the works council can arise when using external AI systems. This is because these are generally used by employees via their own accounts, to which the employer has no access. In such cases, it is not possible for the employer to monitor the performance or behavior of employees, which in turn means that no co-determination rights of the works council are triggered. If the employer wishes to use AI to implement selection guidelines, for example for recruitment or the transfer of employees, the consent of the works council is also required. In this regard, it should be noted, though, that co-determination rights only exist with regard to employees, not applicants. Last but not least, the implementation of AI could constitute a change in operations within the meaning of the German Works Constitution Act if the statutory conditions are met. The employer must in this case consult with the works council about the effects of the AI on the employees and, if applicable, conclude a so-called social plan. In addition to the existing legal requirements under German law, which we have just discussed, employers in Germany and in the European Union should also keep an eye on the recently published AI Act. Judith, what are the relevant provisions of the new EU regulation that employers will have to be aware of in the future?
Judith: Well, the AI Act probably offers material for its own episode, but let me at least briefly give you an overview of the Act. On July 12th, the European AI Act, which was adopted by the Council of the European Union in May, was finally published. This Act is considered the world's first comprehensive law regulating AI. The Act applies to both providers of AI and deployers of AI systems. Employers will usually be considered deployers within the meaning of the Act, unless they are actually involved in the development of AI systems themselves. It is important to know that the Act does not stipulate a minimum company size for its application, which basically means that, in the employment law context, the AI Act applies to all employers in the European Union. Just in brief, the centerpiece of this Act is a classification of AI systems into different risk levels, and the AI Act then allocates different obligations and compliance requirements to the different risk tiers. So the Act takes a risk-based approach. The AI systems that are used in HR departments, just as Elisa described at the beginning of our discussion, will regularly be classified as so-called high-risk systems in accordance with the Act. This applies, among others, to AI systems which are used in the course of recruitment, task assignment, performance evaluation, promotion, and termination of employment. Employers using such high-risk AI systems in the EU will face specific compliance requirements under the Act in the future. Besides these risk-specific obligations, there are also general obligations stipulated in the AI Act that apply regardless of the specific risk tier, and these are basically transparency and information obligations, which employers will have to meet as well when using AI systems. Violations of the Act can result in severe fines, and although most of the obligations under the Act will only enter into force in two years' time, meaning in the course of 2026, we think that it makes sense and would be prudent to deal with the Act at an early stage, so that all the AI systems which are in use, and which are planned to be used in the future, are implemented in a way that is compliant with the AI Act. Also, we think that works councils will probably demand corresponding information, will deal with the AI Act in greater detail, and will ask for information and training measures. Well, having said all this, Elisa, what would you recommend to German-based employers? What measures should they take?
Elisa: Yeah, based on the legal situation just discussed, it is important to keep in mind that employers are free to decide whether AI should be used in the company or not. If the decision is made to implement and use AI, appropriate instructions for employees should be in place. This can be regulated by clear clauses on AI use in employment contracts, work instructions, or even in a works agreement if a works council exists. According to our experience so far, it is common for employers in practice to at least have a list of AI systems that are permitted or prohibited in the company. However, beyond that, we advise defining a certain framework for the use of AI in the company by means of an AI policy. This policy should contain clear requirements and minimum standards for the use of AI and reflect the core values of the individual company, so that employees know what is expected of them and what is and is not allowed when using AI. In addition, before AI is implemented, and also during its use, employees should receive appropriate training on how to use AI when performing their work. For example, employees should be advised not to enter any personal data into the AI system and not to commit any copyright violations. They should also be made aware that AI systems do not always deliver correct results, but often only something that sounds likely and plausible. So every result generated with AI should be critically questioned and reviewed in detail. Due to the need for training, it can be assumed that the works council, if one exists, will demand that training courses be offered by the employer. If such training is required, under German law employers must generally bear the costs incurred. Besides the fact that the works council generally has certain information and co-determination rights under German law, as just described, involving the works council at an early stage is also recommended in order to determine the next steps and timetable for the implementation of AI. Moreover, the involvement of the works council generally serves to increase employees' acceptance of digitalization in the workplace. As AI is still a fairly new topic that is in flux and constantly evolving, we recommend that employers in Germany monitor future developments in legislation and case law and keep an eye on future changes.
Judith: So thank you very much, everyone, for listening, and we'll keep you posted on other episodes of this podcast.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith partners Howard Womersley Smith and Bryan Tan with AI Verify community manager Harish Pillay discuss why transparency and explainability in AI solutions are essential, especially for clients who will not accept a “black box” explanation. Subscribers to AI models claiming to be “open source” may be disappointed to learn the model had proprietary material mixed in, which might cause issues. The session describes a growing effort to learn how to track and understand the inputs used in training AI systems.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. My name is Bryan Tan and I'm a partner at Reed Smith Singapore. Today we will focus on AI and open source software.
Howard: My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies team of Reed Smith in London and New York. And I'm very pleased to be in this podcast today with Bryan and Harish.
Bryan: Great. And so today we have with us Mr. Harish Pillay. And before we start, I'm going to ask Harish to tell us a little bit, well, not really a little bit because he's done a lot, about himself and how he got here.
Harish: Well, thanks, Bryan. Thanks, Howard. My name is Harish Pillay. I'm based here in Singapore, and I've been in the tech space for over 30 years. And I did a lot of things primarily in the open source world, both open source software, as well as in the hardware design and so on. So I've covered the spectrum. When I was way back in the graduate school, I did things in AI and chip design. That was in the late 1980s. And there was not much from an AI point of view that I could do then. It was the second winter for AI. But in the last few years, there was the resurgence in AI and the technologies and the opportunities that can happen with the newer ways of doing things with AI make a lot more sense. So now I'm part of an organization here in Singapore known as AI Verify Foundation. It is a non-profit open-source software foundation that was set up about a year ago to provide tools, software testing tools, to test AI solutions that people may be creating to understand whether those tools are fair, are unbiased, are transparent. There's about 11 criteria it tests against. So both traditional AI types of solutions as well as generative AI solutions. So these are the two open source projects that are globally available for anyone to participate in. So that's currently what I'm doing.
Bryan: Wow, that's really fascinating. Would you say, Harish, that kind of your experience over the, I guess, the three decades with the open source movement, with the whole Linux user groups, has that kind of culminated in this place where now there's an opportunity to kind of shape the development of AI in an open-source context?
Harish: I think we need to put some parameters around it as well. The AI that we talk about today could never have happened if it were not for open-source tools. That is plain and simple. Things like TensorFlow and all the tooling that goes around it in trying to do the model building and so on could not have happened without open-source tools and libraries, Python libraries and a whole slew of other tools. If these were all dependent on non-open-source solutions, we would still be talking about how one fine day something is going to happen. So it's a given that that's the baseline. Now, what we need to do is to get this to the next level of understanding as to what it means when you say it's open source and artificial intelligence, or open-source AI, for that matter. Because now we have a different problem that we are trying to grapple with. The problem we're trying to grapple with is the definition of what open-source AI is. We understand open source from a software point of view, from a hardware point of view. We understand that I have access to the code, I have access to the chip designs, and so on and so forth. No questions there. It's very clear to understand. But when you talk about generative AI as a specific instance of open-source AI, I can have access to the models. I can have access to the weights. I can do those kinds of stuff. But what was it that made those models become the models? Where were the data from? What's the data? What's the provenance of the data? Are these data openly available, or are they hidden away somewhere? Understandably, we have a huge problem because, in order to train the kind of models we're training today, it takes a significant amount of data and computing power. The average software developer does not have the resources to do that, unlike what we could do with a Linux environment or Apache or Firefox or anything like that. So there is this problem. So the question still comes back to: what is open-source AI? The Open Source Initiative, OSI, is now in the process of formulating what it means to have open-source AI. The challenge we find today is that, because of the success of open source in every sector of the industry, you find a lot of organizations now bending over backwards and throwing around the label "our stuff is open source, our stuff is open source" when it is not. And they are conveniently using it as a means to gain attention and so on. No one is going to come and say, hey, do you have a proprietary tool? That ship has sailed. It's not going to happen anymore. But the moment you say, oh, we have an open-source fancy tool, oh, everybody wants to come and talk to you. But the way they craft that open-source message is actually, quite sadly, disingenuous, because they are putting restrictions on what you can actually do. It is completely contrary to what open-source licensing means under the Open Source Initiative. I'll pause there for a while because I threw a lot of stuff at you.
Bryan: No, no, no. That's a lot to unpack here, right? And there's a term I learned last week, and it's called AI washing. That's where people try to bandy the terms about and throw them together, and it ends up representing something it's not. But that's fascinating. I think you talked a little bit about being able to see what's behind the AI, and I think that's part of those 11 criteria that you talked about; auditability and transparency would be some of those things. I think we're beginning to go into some of the challenges, the kind of pitfalls that we need to look out for. But I'm going to put a pause on that and ask Howard to jump in with some questions of his own. I think he's got some interesting questions for you also.
Howard: Yeah, thank you, Bryan. So, Harish, you spoke about the Open Source Initiative, which we're very familiar with, and particularly the kind of guardrails that they're putting around how open source should be applied to AI systems. You've got a separate foundation. What's your view on where open source should feature in AI systems?
Harish: It's exactly the same as what OSI says. We are making no difference, because the moment you make a distinction, you bifurcate or completely fragment the entire industry. You need to have a single perspective, and a perspective that everybody buys into. It is a hard sell currently, because not everybody agrees to the various components inside there, but there is good reasoning behind some of the challenges. At the same time, if that conversation doesn't happen, we have a problem. But from the AI Verify Foundation perspective, it is our code that we make. Our code, interestingly, is not an AI tool. It is a testing tool. It is written purely to test AI solutions, and it's on an Apache license. It's a no-brainer from a licensing perspective. It's not an AI solution in and of itself. It just takes an input, runs it through the test, and spits out an output, and Mr. Developer, take that and do what you want with it.
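To make the idea of "input goes in, a fairness report comes out" concrete, here is a minimal, hypothetical sketch in Python of one such check, a simple demographic parity comparison on a model's binary decisions. It is not the AI Verify Toolkit's actual API; the function name, the group labels and the decision data are illustrative assumptions only.

# A minimal, hypothetical sketch of the kind of check a testing tool like the one
# described above might run: a demographic parity comparison on binary decisions.
# Not the AI Verify Toolkit's real API; purely illustrative.
from collections import defaultdict

def demographic_parity_report(decisions, groups):
    """decisions: list of 0/1 model outcomes; groups: list of group labels, one per record."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}  # selection rate per group
    gap = max(rates.values()) - min(rates.values())        # difference between best and worst
    return {"selection_rates": rates, "parity_gap": gap}

# Example: hiring-style decisions for two illustrative groups "A" and "B"
report = demographic_parity_report(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(report)  # a large parity_gap would flag the model for the developer to review

A real testing toolkit would run many such metrics across fairness, robustness, transparency and other criteria and compile them into the kind of report described above; this sketch only shows the shape of a single metric.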
Howard: Yeah, thank you for that. And what about your view on open source training data? I mean, that is really a bone of contention.
Harish: That is really where the problem comes in, because I think we do have some open-source training data, like the Common Crawl data and a whole slew of different components there. So as long as you stick to data that has been publicly available, and you then train your models based on that, or you take models that were trained based on that, I think we don't have any contention or any issue at the end of the day. You do whatever you want with it. The challenge happens when you mix the training data, whether it was originally Common Crawl or any of the, you know, Creative Commons-licensed content, with unlicensed content or proprietary content used without permission, and you mix it up; then we have a problem. And this is actually an issue that we have to collectively come to an agreement on as to how to handle. Now, should it be done on a two-tier basis? Should it be done with different nuances behind it? This is still a discussion that is constantly ongoing, and OSI is carrying the bulk of the weight to make this happen. And it's not an easy conversation to have, because there are many perspectives.
Bryan: Yeah, thank you for that. So, Harish, just coming back to some of the other challenges that we see: what kind of challenges do you foresee for the continued development of open source with AI in the near future? You've already said we've encountered some of them, and some of the problems are really, in a sense, man-made, because a lot of us are rushing into it. What kind of challenges do you see coming up the road soon?
Harish: I think part of the challenge, you know, it's an ongoing thing, part of the challenge is that not enough people understand this black box called the foundational model. They don't know how that thing actually works. Now, there is a lot of effort going into that space. This is a man-made artifact, this piece of software where you put in something and you get something out, or you get this model to go and look at a bunch of files and then fine-tune against those files, and then you query the model and you get your answer back, RAG for that matter. It is a great way of doing it. Now, the challenge, again, goes back to the fact that people are finding it hard to understand how this black box does what it does. Now, let's step back and ask: have physics and chemistry and other sciences solved some of these problems before? We do have some solutions that we think make sense to look at. One of them is known as computational fluid dynamics, CFD. CFD is used, for example, if you want to do a fluid or flow analysis over the wing of an aircraft to see where the turbulence is. This is all well understood, mathematically sound. You can model it. You can do all kinds of stuff with it. You can do the same thing with cloud formation. You can do the same thing with water flow and laminar flow and so on and so forth. There's a lot of work that's already been done over decades. So the thinking now is: can we take those same ideas that have been around for a long time, and that we have understood, and see if we can apply them to what happens in a foundational model? One of the ideas being worked on is something called PINN, which stands for physics-informed neural networks. So, using standard physics to figure out how this model actually works. Now, once you have those things working, it becomes a lot clearer. And I would hazard a guess that within the next 18 to 24 months, we'll have a far clearer understanding of what is inside that black box that we call the foundational model. With all these known ways of solving problems, you know, who knew we could figure out how water flows, who knew we could figure out how air turbulence happens over the wing of a plane? We figured it out. We have the math behind it. So that's where I feel that we are solving some of these problems step by step.
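As a rough illustration of what "physics-informed" means in a PINN, here is a minimal sketch that trains a small network to satisfy a toy differential equation instead of fitting labelled data. It assumes PyTorch and a deliberately simple ODE (du/dx = -u with u(0) = 1, whose exact solution is e^(-x)); it is a generic, textbook-style example of the PINN technique named above, not anything specific to analyzing foundational models.

# A minimal, hypothetical PINN sketch (assumes PyTorch): the loss measures how well
# the network satisfies a known physical law, here the toy ODE du/dx = -u, u(0) = 1.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network approximating u(x)
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)
x.requires_grad_(True)      # needed so we can differentiate u with respect to x
x0 = torch.zeros(1, 1)      # boundary point for the condition u(0) = 1

for step in range(5000):
    optimizer.zero_grad()
    u = model(x)
    # du/dx via autograd: this derivative inside the loss is what makes it "physics-informed"
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    residual = du_dx + u                        # zero everywhere if the ODE holds
    physics_loss = (residual ** 2).mean()
    boundary_loss = (model(x0) - 1.0).pow(2).mean()
    loss = physics_loss + boundary_loss
    loss.backward()
    optimizer.step()

# After training, model(x) approximates exp(-x) without ever seeing labelled data.
print(model(torch.tensor([[1.0]])).item())  # approximately 0.37, i.e. e^(-1)

The design point is that the training signal comes from a governing equation rather than from examples, which is why researchers hope similar physics-style constraints might one day help describe what large models are doing internally.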
Bryan: And look, I take your point that we all need to try to understand this. And I think you're right. That is the biggest challenge that we all face. Again, when it's all coming thick and fast at you, that becomes a bigger challenge. Before I kind of go into my last question, Howard, any further questions for Harish?
Howard: I think what Harish just came up with, in terms of the explanation of how the models actually operate, is really the killer question that everybody is posed with. The type of work that I do is on the procurement of technology for financial sector clients, and when they want to understand, when procuring AI, what the model does, they often receive the answer that it is a black box and not explainable, which kind of defies the logic of their experience with deterministic software, you know, if this then that. They find it very difficult to get their heads around the answer being a black box methodology, and they often ask, why can't you just reverse engineer the logic and plot a point back from the answer as a breadcrumb trail to the input? Have you got any views on that sort of question from our clients?
Harish: Yeah, there's plenty of opportunity to do that kind of work. Not necessarily going back from a breadcrumb perspective, but using the example of the PINN, physics-informed neural networks. Not all of them can explain stuff today. No organization, and no CIO who is worth their weight in gold, should ever agree to an AI solution that they cannot explain. If they cannot explain it, you are asking for trouble. So that is a starting point. Don't go down the path just because your neighbor is doing it; that is being very silly, from my perspective. So if we want to solve this problem, we have to collectively figure out what to do. I'll give you another example of an organization called KWAAI.ai. They are a nonprofit based in California, and they are trying to build a personal AI solution. It's all open source, 100%, and they are trying really, really hard to explain how it is that these things work. This is an open-source project that people can participate in if they choose to and understand more, and at some point some of these things will become available as a model for any other solution to be tested against. So then let me come back to what the AI Verify Foundation does. We have two sets of tools that we have created. One is called the AI Verify Toolkit. What it does is this: if you have an application you're developing that you claim is an AI solution, great. What I want you to do, Mr. Developer, is put this as part of your tool chain, your CI/CD cycle. When you do that, whenever you change some stuff in your code, you run it through this toolkit, and the toolkit spits out a bunch of reports. The report will tell you whether it is biased or unbiased, whether it is fair or unfair, whether it is transparent, a whole bunch of things it spits out. Then you, Mr. Developer, make a call and say, oh, is that right or is that wrong? If it's wrong, you fix it before you actually deploy it. And so this is a cycle that has to go on continuously. That is for traditional AI. Now, you take the same idea from traditional AI and you look at generative AI. So there's another project called Moonshot. That's the name of the project, Moonshot. It allows you to test large language models of your choosing, with some inputs, and see what outputs come out of the models that you are testing against. Again, you do the same process. The important thing for people to understand, and developers to understand, and especially businesses to understand is, as you rightly pointed out, Howard, that the challenge we have is that these are not deterministic outputs. These are all probabilistic outputs. So if I were to query a large language model in London, and then ask the same question at 10 a.m. in Singapore, it may give me a completely different answer. With the same prompt, exactly the same model, a different answer. Now, is the answer acceptable within your band of acceptance? If it is not acceptable, then you have a problem. That is one part of the understanding. The other part is that it suggests I have to continuously test my output, every single time, for every single output, throughout the life of the system in production, because it is probabilistic. And that's a problem. That's not easy.
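To illustrate the kind of repeated, band-of-acceptance testing described above for probabilistic outputs, here is a hypothetical Python sketch. The query_model function is a stand-in, not the Project Moonshot or AI Verify Toolkit API, and the acceptance criteria are invented for illustration; the point is that the pass rate across many runs, not any single run, is what gets assessed.

# A hypothetical sketch of repeated testing of a probabilistic model output against
# an "acceptance band". query_model is a placeholder, not a real LLM API.
import random
from collections import Counter

def query_model(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption): returns slightly different text each run
    # to mimic probabilistic outputs for the same prompt.
    templates = [
        "Please consult your internal AI policy before relying on this answer.",
        "You should consult the policy; there is no guaranteed outcome.",
        "The model suggests an approach, but results may vary.",
    ]
    return random.choice(templates)

def within_acceptance_band(answer: str, required_terms: list[str], banned_terms: list[str]) -> bool:
    # A deliberately simple acceptance check: required content present, banned content absent.
    text = answer.lower()
    return all(t in text for t in required_terms) and not any(t in text for t in banned_terms)

def run_repeated_test(prompt: str, runs: int = 10) -> None:
    required = ["consult", "policy"]        # illustrative criteria only
    banned = ["guaranteed outcome"]
    results = Counter()
    for _ in range(runs):
        answer = query_model(prompt)
        results["pass" if within_acceptance_band(answer, required, banned) else "fail"] += 1
    # Because outputs are probabilistic, the pass rate varies run to run; that rate,
    # tracked over the life of the system, is what matters.
    print(f"pass rate: {results['pass']}/{runs}")

run_repeated_test("May we rely on this AI's output for a client memo?")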
Howard: Great. Thank you, Harish. Very well explained. But it's good to hear that people are trying to address the problem and we're not just living in an inexplicable world.
Harish: There's a lot of effort underway, a significant amount. MLCommons is another group of people, another open-source project out of Europe, that's doing that. The AI Verify Foundation, that's what we are doing, and we're working with them as well. And there are many other open-source projects that are trying to address this real problem. Yeah, so one of the outcomes that hopefully, you know, makes a lot of sense is that at some point in time, the tools that we have created, maybe multiple tools, can then be used by some entity that is a certification authority, so to speak. They take the tool and say, hey, Company A, Company B, we can test your AI solutions against these tools, and once it is done and you pass, we give you a rubber stamp and say you have been tested against it. So that raises the confidence level from a consumer's perspective: oh, this organization has tested their tools against this toolkit. And as more people start using it, the awareness of the tools being available becomes greater and greater. Then people can ask the question, oh, don't just provide me a solution to do X; was this tested against this particular set of tools, a testing framework? If it's not, why not? That kind of stuff.
Howard: And that reminds me of the Black Duck software that tests for the prevalence of open source in traditional software.
Harish: Yeah, yeah. In some sense, that is a corollary to it, but it's slightly different. And the thing is, it is about how one is able to make sure that you... I mean, it's just like ISO 9000 certification. I can set up the standards, but if I'm the standards entity, I cannot go and certify somebody else against my own standards. Somebody else must do it, right? Otherwise, it doesn't make sense. So likewise, from the AI Verify Foundation's perspective, we have created all these tools. Hopefully this becomes accepted as a standard, and somebody else takes it and then goes and certifies people, or whatever else needs to be done from that point.
Howard: Yeah, and we do see standards a lot, you know, in the form of ISO standards covering things like software development and cybersecurity. That also makes me think about certification, which we're seeing appear in European regulation. We saw it in the GDPR, but it never really came into production as something you use to certify your compliance with the GDPR. We have now seen it appear in the EU AI Act. And because of our experience of not seeing it materialize under the GDPR, we're all questioning whether it will come to fruition in the AI Act, or whether we have learned about the advantages of certification and it will be a focus when the AI Act comes into force on the 1st of August. I think we have many years to understand the impact of the AI Act before certification will start to even make a small appearance.
Harish: It's one thing to have legislated or regulated aspects of behavior. It's another when you do it voluntarily on the basis that it makes sense, because then there is less hindrance, less resistance to doing it. It's just like ISO 9000, right? No one legislates it, but people still do it. Organizations still do it because they can say, oh yeah, we are an ISO 9001 organization, and so we have quality processes in place and so on and so forth, which is good for those to whom that is important. It becomes a selling point. Likewise, I would love to see something similar around ISO 42001 and the series of AI-related standards. I don't think any one of them has anything that can be certified against yet. That doesn't mean it will never happen. So that could be another route, right? So again, the tools that the AI Verify Foundation creates and MLCommons creates, everybody feeds into it. Hopefully that makes sense. I'd rather see a voluntary take-up than a mandated regulatory one, because things change, and it's much harder to change the rules than to do anything else.
Howard: Well, whether market forces will drive standardization is a question in itself, and it would probably take us way over our time. We could have our own session on that, but it's a fascinating subject. Thank you, Harish.
Bryan: Exactly. I think standards and certifications are possibly the next thing to look out for for AI, so Harish, you could be correct. But on that note, last question from me, Harish. Interestingly, you used the term moonshot. So, personally for you, what kind of moonshot wish would you have for open source and AI? Leaving resources aside, if you could choose, what kind of development would be the one you would look out for, the one that excites you?
Harish: For me, we need to go all the way back to the start from an AI training perspective, right? So the data. We have to start from the data, the provenance of the data. We need to make sure that the data is actually okay to be used. Now, instead of everybody going and doing their own thing, can we have a pool where I tap into the resources and then create my models based on a pool of well-known, well-identified data to train on? Then at least the outcome from that arrangement is that we know the provenance of the data, we know how the model was trained, we can see the model, and hopefully in that process we also begin to understand how the model actually works, with whatever physics-related understanding we can throw at it. Then people can start benefiting from it and using it in a coherent manner. Instead, what we have today is, in a way, a Cambrian explosion, right? There are a billion experiments happening right now, and the majority, 99.9% of them, will fail at some point, while 0.1% needs to succeed. And I think we are getting to the point where there are a lot more failures happening than successes. So my sense is that we need data that we can prove is okay to get and okay to use, and that is replenished as and when needed. And then you go through the cycle. That's really my moonshot perspective.
Bryan: I think there's really a lot for us to unpack and think about, but it's been a really interesting discussion from my perspective. I'm sure, Howard, you think the same. And with this, I want to thank you for coming online and joining us this afternoon in Singapore, this morning in Europe, for this discussion. It's been really interesting from the perspective of somebody who's been in technology, and interesting for the Reed Smith clients who are looking at this from a legal and technology perspective. I also wanted to thank the people who are tuning in. Thank you for joining us on this podcast. Stay tuned for the other podcasts that the firm will be producing, and do have a good day.
Harish: Thank you.
Howard: Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
The rapid integration of AI and machine learning in the medical device industry offers exciting capabilities but also new forms of liability. Join us for an exciting podcast episode as we delve into the surge in AI-enabled medical devices. Product liability lawyers Mildred Segura, Jamie Lanphear and Christian Castile focus on AI-related issues likely to impact drug and device makers soon. They also give us a preview of how courts may determine liability when AI decision-making and other functions fail to get desired outcomes. Don't miss this opportunity to gain valuable insights into the future of health care.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Mildred: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, I, Mildred Segura, partner here at Reed Smith in the Life Sciences Practice Group, along with my colleagues Jamie Lanphear and Christian Castile, will be focusing on AI and its intersection with product liability within the life sciences space. Especially as we see more and more uses of AI in this space, there's a lot of activity going on in the regulatory and legislative landscape, but not a lot of discussion about product liability and its implications for companies doing business in this space. That's what prompted our desire and interest in putting together this podcast for you all. And with that, I'll have my colleagues briefly introduce themselves. Jamie, why don't you go ahead and start?
Jamie: Thanks, Mildred. I'm Jamie Lanphear. I am of counsel at Reed Smith based in Washington, D.C., in the Life Sciences and Health Industry Group. I've spent the last 10 years defending manufacturers in product liability litigation, primarily in the medical device and pharma space. I think, like you said, this is just a really interesting topic. It's a new topic, and it's one that hasn't gotten a lot of attention or a lot of airtime. You go to conferences these days and AI is front and center in a lot of the presentations and webinars, and much of the discussion is around regulatory issues, cybersecurity, and privacy. And I think that in the coming years we're going to start to see product liability litigation in the AI medical device space that we haven't seen before. Christian, did you want to go ahead and introduce yourself?
Christian: Yeah, thanks, Jamie. Thanks, Mildred. My name is Christian Castile. I am an associate at Reed Smith in the Philadelphia office. And much like Mildred and Jamie, my practice consists primarily of working alongside medical device and pharmaceutical manufacturers in product liability lawsuits. And Jamie, I think what you mentioned is so on point. It feels like everybody's talking about AI right now, and to a certain extent that can be intimidating, but we actually are at a really interesting vantage point, with an opportunity to get in on the ground floor of some of this technology and how it is going to shape the legal profession. As the technology advances, we're going to see new use cases popping up across industries and, of particular interest to this group, in the healthcare space. So it's really exciting to be able to grapple with this head-on, and the people who are investing in this now are going to have a real leg up when it comes to evaluating their risk.
Mildred: So thanks, Jamie and Christian, for those introductions. As we said at the outset, we're all product liability litigators, and based on what we're seeing, AI product liability is the next wave of product liability litigation on the horizon for those in the life sciences space. We're thinking very deeply about these issues and working with clients on them because of what we see on the horizon and what we're already seeing in other spaces in terms of litigation. That's what we're here to discuss today: the developments we're seeing in product liability litigation in these other spaces and the significant impact that litigation may represent for those of us in the life sciences space. And to level set our discussion today, we thought it would be helpful to briefly describe the kind of AI-enabled med tech or medical devices that we're seeing currently out there on the market. And I know, Jamie, you and I were talking about this in preparation for today's podcast in terms of FDA-cleared devices. What are the metrics we're seeing with respect to that and the types of AI-enabled technology?
Jamie: Sure. So we've seen a huge uptick in the number of medical devices that incorporate artificial intelligence and machine learning. There are currently around 900 of those devices on the market in the United States, and more than 150 of those were authorized by FDA just in the last year. So we're definitely seeing a growing number, and we can expect to see a lot more in the years to come. The majority of these devices, about 75%, are in the field of radiology. So, for example, we now have algorithms that can assist radiologists when they're reviewing a CT scan of a patient's chest and highlight potential nodules that the radiologist should review. We see similar technology being used to detect cancer, so there are algorithms that can identify cancerous nodules or lesions that may not even be visible to a radiologist because they are undetectable by the human eye. And other areas where we're seeing these devices being used include cardiology and neurology.
Mildred: And I would add to that, we're also seeing it with respect to surgical robots, right? Even though we don't have fully autonomous surgical robots out there on the market, we do have some forms of surgical robots, and I think it's just on the horizon that we'll start to see, in the near future, these surgical robots using artificial intelligence-driven algorithms. Just the thought that we're moving in that direction, I think, makes this discussion so important. And not just in the medical device arena, but also within the pharma space, where you're seeing the use of artificial intelligence to speed up and improve clinical development, drug discovery, and other areas. So you can see where the risks lie just within that space alone, in addition to medical devices. And Christian, I know that you've been looking at other areas as well, so I wanted you to tell us a little bit about those.
Christian: Sure. Yeah, and very similar to the medical device space, there is a lot of really exciting room for growth and opportunity in the pharmaceutical space. We're seeing more and more technologies come out that focus on streamlining things like drug discovery, using machine learning models, for example, to assist with identifying which molecules are going to be the most optimal to use in pharmaceutical products, and also to help identify mechanisms of action, to explain some of the medicines and disease states that we're not able to explain as well today. And then looking even more broadly, you have, of course, these very specific use cases tied to the pharmaceutical products we're talking about, but you'll also see companies integrating AI into things like manufacturing processes, really working on driving the efficiency of the business, both from a product development standpoint and from a product production standpoint as well. So there's lots of opportunity here to get involved in the AI space and lots of ways to grapple with how to best integrate it into a business.
Mildred: And I think that brings us to the question of what product liability is. For those listeners who may not be as familiar with the law of product liability, just to level set here too: typically we're talking about three common types of product liability claims. You have your design defect, manufacturing defect, and failure to warn claims. Those are the typical claims that we see. And each of these claims is premised on a product that leaves a manufacturer's facility with the defect in place, either in the product or in the warning. These theories fit neatly for products that remain unchanged from the moment they leave the manufacturer's facility, such as consumer goods sold at retail. But what about when you start incorporating AI and machine learning technologies into devices that are going to be learning and adapting? What does that mean for these types of product liability claims, and what is the impact? How will the courts address and assess these claims as they start to see these types of devices and claims being made about these technologies? And I think the key question that will come up, in the context of a product liability suit, is whether this AI-related technology is even a product. Historically, courts have viewed software as a service that is not subject to product liability causes of action. However, that approach may be evolving to reflect that most products today contain software or are composed entirely of software. We're seeing litigation in other spaces, which Jamie will touch on in a little bit, that shows a change from the trend we had been seeing, moving in a different direction, which is something we want to talk about. So maybe, Jamie, why don't you share a little bit about what we're seeing in connection with product liability claims in other spaces that may inform what happens in the life sciences space.
Jamie: Yeah. So there have been a few cases and decisions over the last few years that I think help inform what we can expect to see with respect to product liability claims in the life sciences space, particularly around devices that incorporate artificial intelligence and software. One of those cases is the social media products liability MDL out of the Northern District of California. There you have plaintiffs who have filed suit on behalf of minors, alleging that operators of various social media platforms designed those platforms to intentionally addict children, and that this has allegedly resulted in a number of mental health issues and the sexual exploitation of minors. Now, last year, the defendants filed a motion to dismiss, and there were a lot of issues addressed in that motion, a lot of arguments made; we don't have time to go through all of them. But the one I do want to talk about that is relevant to our discussion today is the defendants' argument that their social media platforms are not products, they're services, and as such, they should not be subject to product liability claims. That argument is really in line with the historical approach courts have taken toward software, meaning that software has generally been considered a service, not a product, so software developers have generally not been subject to product liability claims. And so that's what the defendants argued in their motion: that they were providing a platform where users could come, create content, and share ideas; they weren't over in a warehouse making a good and distributing it to the general public, et cetera. The court did not agree. The court rejected the defendants' argument and refused to take what it called an all-or-nothing approach to evaluating whether the plaintiffs' design defect claims could proceed. Instead, the court took a more nuanced approach: it looked at the specific functions of these platforms that the plaintiffs alleged were defective and evaluated whether each was more akin to tangible personal property or to ideas and content. So, for example, one of the claims the plaintiffs made was that the platforms lacked adequate parental controls and age verification. The court looked at the purpose of parental controls and age verification and said this has nothing to do with sharing ideas; this is more like products that contain parental controls, such as a prescription medicine bottle. And the court went through this analysis for each of the other allegedly defective functions, and interestingly, for each it concluded that the plaintiffs' product liability claims could proceed. What I think is huge to take away from this particular decision is that the court really moved away from the traditional approach courts have taken toward software with respect to product liability. And I think this really opens the door for more courts to do the same, specifically to expand products liability law, strict products liability, to various types of software and software functions, such that the developers of the software can potentially be held liable for the software they're developing. And while there have been a few one-off cases over the years, mostly in state court, in which the court has found that products liability law does apply to software, here we have a huge MDL with a significant number of plaintiffs in federal court.
And I think this case, or this decision at least, is going to have a huge impact on future litigation.
Mildred: That's all really helpful, Jamie, in terms of the way you laid out the court's analysis. And one important thing to highlight is that in this particular case, the plaintiffs brought their causes of action both in strict liability and in negligence. The reason that's important, and why it's of concern, is that plaintiffs typically bring these types of claims under a negligence standard, which involves a reasonable person standard and assessing whether there was a duty to warn. The court did look at some of that, you know, whether there was a duty here to the plaintiffs, but it also looked at strict liability, which is the theory you don't typically see asserted where software applications are at issue. So the fact that plaintiffs are moving in this direction, asserting strict product liability claims in addition to negligence, which is what you would typically see, is what's worth paying attention to. And this decision was at the motion to dismiss stage, so it will be interesting to see how it unfolds as the case moves forward through discovery and ultimately summary judgment. And it's not the only case out there; there are other cases grappling with these issues as well. But in this particular case, as Jamie noted, the analysis was very detailed and nuanced in terms of how the court got to where it did. It did a very thoughtful analysis of whether each function is a software service or a product, and once it answered that question it moved, as Jamie noted, to analyzing each of the product claims being asserted. With failure to warn, it didn't really dive in because of the way the claim had been pleaded. But nevertheless, it's still a very important decision from our perspective. And that was within the product liability context. We've seen other developments in the case law involving design defect allegations, not necessarily in the product liability context, but more so in the consumer protection space, if you will. Specifically, there's one case we were talking about in preparation for this podcast involving a particular type of technology. What was it, Jamie, the specific technology at issue?
Jamie: Yeah, so the Roots case is extremely interesting. And although it's a consumer protection case, not a products case, I do think it foreshadows the types of new theories we can expect to see in products liability litigation involving devices that incorporate software, artificial intelligence, and machine learning. Roots Community Center is a California state case in which a community health center filed suit against manufacturers, developers, distributors, and sellers of pulse oximeters, which are those devices that measure the amount of oxygen in your blood. The plaintiffs are alleging that these devices do not properly measure oxygen levels in people with darker skin; that the level of skin pigmentation can and does affect the output these devices generate by overestimating the oxygen level for these individuals; and that as a result, these individuals think they have more oxygen than they do, they appear healthier than they are, and they may not seek or receive the appropriate care. And the reason for this, according to plaintiffs, is that the developers of the software, when they were developing this device, did not take into account the impact that skin color could have; they essentially drew from data sets that were primarily white, and as such, they got results that largely apply to white folks. And so this issue of bias is not one that I've ever seen raised as a theory of defect in a products case. Again, this isn't a products case, but I do expect to see this theory in products cases involving medical devices that incorporate artificial intelligence. The FDA has been very clear that bias and health equity are at the forefront of their efforts to develop guidelines and procedures specific to artificial intelligence and machine learning-enabled devices, particularly given that the algorithms depend on the data being used to generate output. And if the data is not reflective of the population who will be using the device and inclusive of groups like women, people of color, et cetera, the outputs for these groups may not be accurate.
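To make the training-data point concrete, here is a minimal, hypothetical Python sketch of the kind of disaggregated check a developer could run. The readings are invented numbers, not clinical data, and the grouping is illustrative only; the idea is simply that device error is evaluated per subgroup so an overestimate concentrated in one group is not averaged away.

```python
# Hypothetical illustration with made-up numbers: evaluate pulse-oximeter error
# separately per skin-tone group so bias in one group is not hidden by the average.

from statistics import mean
from collections import defaultdict

# Each record: (skin_tone_group, device_reading_SpO2, reference_SaO2)
readings = [
    ("lighter", 97, 96), ("lighter", 94, 94), ("lighter", 92, 91),
    ("darker", 96, 92), ("darker", 95, 90), ("darker", 97, 93),
]

errors_by_group = defaultdict(list)
for group, device, reference in readings:
    errors_by_group[group].append(device - reference)  # positive = overestimate

for group, errors in errors_by_group.items():
    print(f"{group}: mean bias {mean(errors):+.1f} points over {len(errors)} readings")
```

In the made-up rows above, the per-group breakdown makes the consistent overestimate in one group visible, which is the pattern the plaintiffs allege a purely aggregate evaluation would have missed.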
Mildred: And what about with respect to failure to warn? We know as product liability litigators that one of our typical defenses to a failure to warn claim is the learned intermediary doctrine, right? Which means that a manufacturer's duty to warn runs to the physician: you're supposed to provide adequate warnings to the physician to enable them to discuss the risks and benefits of a given device, pharmaceutical, or treatment with the patient. That's in the case of prescription medical devices or prescription pharmaceuticals. But what happens to that defense when you start incorporating AI and machine learning technologies into a product, whether it's a medical device or something in the pharma space? Are you seeing anything that would change your mind about whether the learned intermediary doctrine is here to stay and not really going to change? If you ask me, I would say that based on what we're seeing so far, whether in the social media context or in life sciences cases that may not be specific to AI or machine learning, the fact that we're not yet at the stage where the technology is fully autonomous, that it's more assistive, augmenting what a physician is doing, means you will still have this learned intermediary between the patient and the manufacturer who can say, perhaps this treatment, whether through a medical device or a pharmaceutical, is using this technology; here's what it will be doing for you; this is the way it will function; et cetera. But does that mean the manufacturer will have to make sure they're providing clear instructions to the physician? I think the answer is yes. And that's something the FDA, through the guidance it has put out, is looking at and has spoken to, Jamie, to your point. Not only with respect to bias; they're also looking to ensure that, to the extent these technologies are being incorporated, the instructions related to their use are adequate for the end user, which in many cases is the physician. But it also raises questions, as these technologies get more sophisticated, about who will be liable. What happens when you start to see a more fully autonomous system that might be making decisions the physician just doesn't have the capacity to unpack or fully evaluate? Who's responsible then? I think that may explain a little bit of the reticence on the part of physicians to adopt these technologies. Ultimately, it's all about transparency, having clear, adequate information so they feel comfortable not only using the technology, but also knowing who ultimately will be responsible if, God forbid, something goes undetected, or the technology tells the doctor to do something and the doctor overrides it, situations like that. And so I don't know if you all have any additional thoughts on that.
Christian: It's very interesting, right, this learned intermediary concept, particularly because as we see this technology grow, we're going to see the bounds of this doctrine get stretched a little bit. And to your point, Mildred, transparency here is going to be important, not only with respect to who is making those decisions, but also with respect to how those decisions are being made. So when you're talking about how the AI is working and how the algorithms underlying this technology are reaching their conclusions, it's going to be really important in this warning context that everybody involved is able to understand specifically what that means: what is the AI doing, how is it doing it, and how does that translate to the medical service or benefit that this product or pharmaceutical is providing? And that's all interesting and, I think, going to be incorporated in novel ways as we move forward.
Mildred: And Christian, sort of related to that, you mentioned regulation and guidance specific to FDA. What would you say about that in terms of how it goes hand in hand with product liability?
Christian: Absolutely. So, as we see increased levels of regulation and increased regulatory attention on this topic, I think one aspect that's going to be really critical to keep in mind is that the regulations that come out, especially at this beginning stage when we're still coming to a better understanding of the technology itself, are really going to represent the floor rather than the ceiling. And so it's going to be important for companies working in this space and thinking about integrating these technologies to consider how to come into compliance with these regulations, but also what specific concerns might be raised above and beyond them. Jamie, you were talking about some of these social media cases where some of the injuries alleged are very specific to subpopulations of users of these social media platforms. So how are we, for example, going to address the vulnerabilities in the population our products are being marketed to while staying in compliance with the regulations as well? That interplay is going to be really, really interesting to watch: to what degree these legal theories are stretched above and beyond what we're used to seeing, and how that will affect the way the regulations are integrated into the business.
Jamie: You raise a great point, Christian, with respect to the regulations being a floor rather than a ceiling. I think there are a lot of companies out there that reasonably think that as long as they're following the regulations and doing what they're supposed to be doing on that front, their risk in litigation is minimal or maybe even non-existent. But as we know, that's not the case. A medical device manufacturer can do everything right with respect to complying with FDA regulations and still be found liable in a courtroom. Plaintiffs' lawyers often come up with pretty creative theories to put in front of a jury about the number of things the manufacturer, and I have my air quotes going over here, "could or should have done" but didn't. And these are often things that are not legally required, or even practical sometimes. Ultimately, at least with respect to negligence, it's up to the fact finder to decide if the manufacturer acted reasonably. And while that question often involves considerations of whether the manufacturer complied with regulations and guidance and the like, compliance, even complete compliance, is not a bar to liability. As product liability litigators, we see plaintiffs relying on a lot of the same theories, the same types of evidence, and the same arguments. So having that base of knowledge and being able to share it with manufacturers and say, hey, look, I know we're not there yet, I know this litigation isn't happening today, but here are some things you can do to help mitigate your potential future risks or defend against these types of cases later on: that's one of the reasons why we wanted to start this conversation.
Mildred: Yeah, and I would definitely echo that as well, Jamie, because as Christian mentioned, the guidance being put out by FDA, for instance, is really the floor in many ways and not the ceiling. It's about looking to the guidance for input and insight into what we should be doing with respect to, say, the design of an algorithm that will be used for a clinical trial or to deliver a specific type of treatment, as you illustrated with the Roots case involving the allegation of bias in pulse oximeters, and really looking at mitigating the potential risk that is foreseeable and can be identified. Obviously, not every risk might be identifiable, and that all gets into the negligence standard in terms of what is foreseeable and what isn't. But when you're dealing with these very sophisticated, complex technologies, the questions we're so used to dealing with in a normal product liability case will get more complex and nuanced as we start to see these types of cases within the life sciences space; we're already starting to see it within the social media context, as Jamie touched on earlier. And so, because we're getting close to the end of our podcast, in terms of some key takeaways: obviously, monitor the case law, even if it's not in the life sciences space, for instance in the social media space and other areas. Monitor what's going on in the regulatory space, because clearly we have a lot going on, and not just at FDA; you also have the Federal Trade Commission issuing guidance and speaking to these issues. And if you don't have a prescription medical device governed by FDA, then you most certainly need to look at what other agencies govern your specific technology, especially if you're partnering with a medical device manufacturer or pharmaceutical company. And of course legislation: we know there's a lot of activity at both the federal and state level with respect to the regulation of AI. So there too, we have our eye on what, if any, legislation is coming out and how it will impact product liability. And Jamie and Christian, I know you both have other thoughts on some of the key takeaways.
Jamie: Yeah. So I want to return to this issue of bias and the importance of manufacturers making sure they're looking at and taking into account the available knowledge, whether in scientific journals or medical literature, et cetera, related to how factors like race, age, and gender impact the medical risks, diagnosis, monitoring, and treatment of the condition that a particular device is intended to diagnose or treat. Monitoring those things is going to be really important. I also think being really diligent about investigating and documenting the reasons for making certain decisions typically helps. Not always, but usually in litigation, being able to show documentation explaining the basis for decisions that were made can be extremely helpful. So, for example, the FDA put out a guidance document related to a predetermined change control plan, which is something that was developed specifically for medical devices that incorporate artificial intelligence and machine learning. The plan is intended to set forth the modifications that manufacturers intend or anticipate will occur over time as the device develops and the algorithm learns and changes post-market. One of the recommendations in that guidance is that the manufacturer engage with FDA early, before submitting the plan, to discuss the modifications that will be included. Now, it's not a requirement, but I expect that if a company elects not to do this, plaintiffs' counsel in a products case would say that is evidence the manufacturer was not reasonable, that the manufacturer could and should have talked to FDA and gotten FDA input, but didn't want to. Whereas if the manufacturer does do it, and there's evidence of discussions with the FDA and, even better, FDA's agreement with what the manufacturer ended up putting in its plan, that would be extremely useful in defending against a products case, because you're essentially showing the jury that this manufacturer talked with FDA, ran the plan by FDA, FDA agreed, and the company did what even FDA thought was right. Now, that wouldn't be a bar to liability; it's not going to completely immunize a manufacturer, but it is good evidence to support that the company acted reasonably at the time and under the circumstances.
Christian: Yeah. And I would just add, Jamie, you touched on so many of the important aspects here. The only thing I would add at this point is the importance of making sure that you understand the technology that you're integrating. This goes hand in hand, Jamie, with much of what you just said about understanding who's making the decisions and why. Investing the energy upfront into ensuring that you're comfortable with the technology and how it works will allow you, moving down the line, to be much more efficient in the way you respond, whether that's to regulatory modifications or to legal risk. It will put you in a much stronger position if you are able to really explain and understand what that technology is doing.
Mildred: And I think that's the key, Christian, as you said: being able to explain how it was tested, that the testing was robust. Yes, of course it met the guidance, and if there's a regulation, even better. But also that all reasonable measures were taken to ensure the technology being used is safe and effective, and that you tried to identify all of the potential risks that could be known based on the anticipated way the technology works. With that, of course, we could do a whole podcast just on mitigating the risk, and I think that will be a topic we focus on in one of our subsequent podcasts. But unless, Jamie or Christian, you have any other thoughts, that brings us to the end of our podcast.
Jamie: I think that pretty much covers it. Of course, there's a lot more detail we could get into with respect to the various theories of liability and what we're seeing and the developments in the case law and steps companies can be taking now, but maybe we can save that for another podcast.
Christian: And I completely agree. I think there's going to be so much to dig into over the next few months and years to come, so we're looking forward to it. Thank you, everybody, for listening to this episode of Tech Law Talks, and thank you for joining Mildred, Jamie, and me as we explore the dynamics between AI technologies and the product liability legal landscape. Stay connected by listening to this podcast moving forward. We're looking forward to putting out new episodes on AI and other emerging technologies, and we look forward to speaking with you soon.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith and its lawyers have used machine-assisted case preparation tools for many years (and it launched the Gravity Stack subsidiary) to apply legal technology that cuts costs, saves labor and extracts serious questions faster for senior lawyers to review. Partners David Cohen, Anthony Diana and Therese Craparo discuss how generative AI is creating powerful new options for legal teams using machine-assisted legal processes in case preparation and e-discovery. They discuss how the field of e-discovery, with the help of emerging AI systems, is becoming more widely accepted as a cost and quality improvement.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
David: Hello, everyone, and welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in eDiscovery. My name is David Cohen, and I'm pleased to be joined today by my colleagues Anthony Diana and Therese Craparo. I head up Reed Smith's Records & eDiscovery practice group, a big practice group, 70-plus lawyers strong, and we're very excited to be moving into AI territory. We've been using some AI tools and we're testing new ones. Therese, I'm going to turn it over to you to introduce yourself.
Therese: Sure. Thanks, Dave. Hi, my name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith. My practice focuses on eDiscovery, digital innovation, and data risk management. And like all of us, I'm seeing a significant uptick in interest in using AI across industries, and particularly in the legal industry. Anthony?
Anthony: Hello, this is Anthony Diana. I am a partner in the New York office, also part of the Emerging Technologies Group. Similarly, my practice focuses on digital transformation projects for large clients, particularly financial institutions, and I've also been dealing with e-discovery issues for more than 20 years, basically as long as e-discovery has existed. I think all of us on this call have. So looking forward to talking about AI.
David: Thanks, Anthony. And my first question is this: the field of e-discovery was one of the first to make practical use of AI, in the form of predictive coding and document analytics. Predictive coding has now been around for more than two decades. So, Therese and Anthony, how has that been working out?
Therese: You know, I think it's a dual answer, right? It's been working out incredibly well, and yet it's not used as much as it should be. At this stage, the use of predictive coding and analytics in e-discovery is pretty standard. As you said, Dave, two decades ago it was very controversial, and there was a lot of debate and dispute in the industry about the appropriate use and the right controls and the like, and a lot of discovery fights around that. But at this stage, we've really gotten to a point where this technology is well understood and used incredibly effectively to appropriately manage and streamline e-discovery and to improve discovery processes. It's far less controversial in terms of its use, and frankly, the e-discovery industry has done a really great job at promoting it and finding ways to use this advanced technology in litigation. I think one of the remaining challenges is that, while the lawyers who are using it are using it incredibly effectively, not enough people have adopted it. There are still lawyers out there who haven't been using predictive coding or document analytics in ways they could be to improve their own processes. I don't know, Anthony, what are your thoughts on that?
Anthony: Yeah, I mean, to reiterate, the predictive coding that everyone's used to is machine learning, right? So it's AI, but it's machine learning. And I think it was particularly helpful in terms of workflow and what we're trying to accomplish in eDiscovery when we're trying to produce relevant information: machine learning made a lot of sense. I was a big proponent of it, and I think a lot of people are, because it gave a lot of control. The big thing was that it allowed, I would say, senior attorneys to have more control over what is relevant. The whole idea is that you would train the model by looking at relevant documents, and then you would have senior attorneys get involved on the edge cases. The basic stuff was easy; for the edge cases, you could have senior attorneys look at them, make that call, and then the technology would take what you, the senior attorney, were thinking and use that to help determine relevance. You're not relying as much on the contract attorneys and that workflow. So it made a whole host of sense, frankly, from a risk perspective. One of the issues we saw early on is that everyone said it was going to save lots of money, and it didn't really save a lot of money, partly because the volumes went up too much, partly because of the process. But from a risk perspective, I thought it was really good, because you were getting better quality, which I think is one of the most important things. And I think this is going to be important as we start talking about AI generally: in terms of process, it was a quality play. It's a better process; it's better at managing the risks than manual review alone. That was the key to it, I think. As we talked about, there was lots of controversy about it. The controversy often stemmed from, I'll call it, the validation. We had lots of attorneys saying, I want to see the validation set; they wanted to see how the model was trained: you have to give us all the training documents. And I think generally that fell by the wayside; that didn't really happen. One of the keys, though, and I think this is also true for all AI, is the validation testing, which Therese touched upon. That became critical. People realized that as you train the model and start seeing results, you always do some sampling and validation testing to see if the model is working correctly. And that validation testing was the defensibility that courts, I think, latched onto. When we start talking about Gen AI, that's going to be one of the issues. People are comfortable with machine learning and understand the risks. One of the other big risks we all saw was that the data set would change: you have 10 custodians, you train the model, then you get another 10 custodians. Sometimes it didn't matter; sometimes it made a big difference and you had to retrain the model. So I think we're all comfortable with that. As Therese said, it's still not as prevalent as you would have imagined, given how effective it is, but that's partly because it's a lot of work, right?
And often it's a lot of work by, I'll say, senior attorneys in developing it, when it's still a lot easier to say, let's just use search terms, negotiate them, throw a bunch of contract attorneys on it, and then review what you get. It works, but I think that's still one of the impediments to it being used as much as we thought.
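A rough Python sketch of the validation sampling Anthony describes, using invented data: draw a random sample from the review population, have attorneys code it, and compare the attorney calls to the model's calls to estimate recall and precision. The corpus size, sample size, and simulated attorney agreement rate below are placeholders, not a claim about any particular platform or matter.

```python
# Illustrative validation-sampling sketch with made-up data: compare the model's
# relevance calls against attorney coding on a random sample and estimate recall
# and precision, the metrics typically cited to show a TAR process is defensible.

import random

random.seed(42)

# Pretend corpus: each document carries the model's relevance prediction.
corpus = [{"model_relevant": random.random() < 0.3} for _ in range(10_000)]

sample = random.sample(corpus, 500)  # random validation sample
for doc in sample:
    # Stand-in for attorney review of the sampled documents: here we simulate
    # an attorney agreeing with the model's call 90% of the time.
    doc["attorney_relevant"] = (
        doc["model_relevant"] if random.random() < 0.9 else not doc["model_relevant"]
    )

tp = sum(d["model_relevant"] and d["attorney_relevant"] for d in sample)
fp = sum(d["model_relevant"] and not d["attorney_relevant"] for d in sample)
fn = sum(not d["model_relevant"] and d["attorney_relevant"] for d in sample)

recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of truly relevant docs the model found
precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of the model's "relevant" calls that were right

print(f"Estimated recall: {recall:.1%}, estimated precision: {precision:.1%}")
```

In a real matter the attorney labels would come from human review of the sample rather than a simulation, and the resulting recall and precision estimates are what get cited to show the process was reasonable and defensible.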
Therese: And to pick up on what Anthony is saying, what I think is really important is that we do have 20 years of experience using AI technology in the e-discovery industry. So much has been learned about how you use those models, the appropriate controls, how you get quality validation, and the like. There's a lot of value in leveraging those lessons and applying them to the increasing use of AI in e-discovery, in the legal field in general, and even across organizations. The legal field needs to keep in mind that we know how to use this, we know how to understand it, and we know how to make it defensible. As we move forward, those lessons are going to serve us really well in facilitating more advanced uses of AI. So, thinking about how things may change going forward: how do we think that generative AI, based on large language models, is going to change e-discovery in the future?
Anthony: In terms of how generative AI is going to work, I have my doubts, frankly, about how effective it's going to be. We all know that these large language models are based on billions, if not trillions, of data points, but it's generic; it's all public information. That's what the model is based on. One of the things I want to see as people start using generative AI is how that plays when we're talking about confidential information. For almost all of our clients dealing with e-discovery, all of this stuff is confidential; it's not public. So I understand the idea that a large language model built on billions and billions of data points can perform well, but it's a probability calculation, right? It's basically guessing what the next answer, the next word, is going to be based on that general population, not necessarily on some very esoteric area that you may be focused on for a particular case. So I think it remains to be seen whether it's going to work. The other area where I have concerns is the validation point. How do we show it's defensible? If you're going in and telling a court, oh, I used Gen AI and ran the tool, here's the relevant stuff based on prompts, what does that mean? How are we going to validate that? I think that's going to be one of the keys: how do we come up with a validation methodology that is defensible and that people are comfortable with? Again, machine learning was intuitive: I'm training the model on what a human being deemed responsive. Frankly, that's easier to argue to a court and easier to explain to a regulator. When you say, I came up with prompts based on the allegations of the complaint or whatever, it's a little more esoteric, and I think it's a little harder for someone to get their head around how you know you're getting relevant information. So I think there are some challenges there, and I don't know how that's going to play out. I don't know, Dave, because I know you're testing a lot of these tools, what are you seeing in terms of how this is actually going to work, using generative AI and these large language models and moving away from machine learning?
David: Yeah, I agree with you on the to-be-determined part, but I think I come in a little more optimistic, and part of it might be actually starting to use some of these tools. I think predictive coding has really paved the way for these AI tools, because what held up predictive coding to some extent was that people weren't sure courts were going to accept it. Until the first opinions came out, Judge Peck's decision in Da Silva Moore and subsequent case decisions, there was concern about that. But once that precedent came out, and it's important to emphasize that the precedent wasn't just approving predictive coding, it was approving technology-assisted review, and this generative AI is really just another form of technology-assisted review. What it basically said is you have to show that it's valid; you have to do this validation testing. And the same validation testing that we've been doing to support predictive coding will work on large language model, generative AI-assisted coding. Essentially, you do the review, and then you take a sample and ask, was this review done well? Did we hit a high accuracy level? The early testing we're doing is showing that we are hitting even better accuracy levels than with predictive coding alone. And I should say that it has even improved in the six months or so that we've been testing; the companies building the software are continuing to improve it. So I am optimistic in that sense. But many of these products are still in development. The pricing is still either high or, in some cases, to be announced. And it's not clear yet that it will be cost-effective compared with current models of using human review, predictive coding, and search terms. And they're not all mutually exclusive. I can see ultimately getting to a hybrid model where we still start with search terms to cut down on volume and then use some predictive coding, some human review, and some generative AI. Ultimately, I think we'll get to the point where the price point comes down and it will make review better and cheaper. But I also wanted to mention a couple of other areas of application in eDiscovery. Generative AI is really good at summarizing single large documents or even groups of documents. It's also extremely helpful in more quickly identifying key documents: you can ask questions about a whole big document population and get answers. So I'm really excited to see this evolution. I don't know when we're going to get there or what the price-effectiveness point is going to be, but I would say that in the next year or two, we're going to start seeing it used more and more effectively, and more and more cost-effectively, as we go forward.
Anthony: Yeah, that's fascinating. I can see that even in terms of document review: if AI is summarizing the document, a human can make the relevance determination based on the summary. Again, we can all debate whether that's appropriate or not, but it would probably help quite a bit, and I do think that's fascinating. Another thing I hear about is privilege logs. Using generative AI to draft privilege logs sounds great in concept, because obviously that's a big cost factor and the like. But as we've talked about, Dave and Therese, there are already tools available, meaning you can negotiate metadata logs and some of these other things that cut the cost down. So I think it remains to be seen. This is going to be another arrow in your quiver, a tool to use, and you just have to figure out when you want to use it.
Therese: Yeah. And I think one of the things is not limiting ourselves to only thinking about document review. There's a lot of possibility with generative AI: witness kits, putting together witness outlines for depositions, and the like. Not that we would ever just rely on that, but there's a huge opportunity there as a starting point, if you're using it appropriately and, to Dave's point, the price point is reasonable; you can do initial research. There are a lot of things it can do in the discovery realm, even outside of document review, that we should keep our minds open to, because it's a way of getting to a baseline more quickly, more efficiently, and frankly more cost-effectively. Then a person can take a look at that and augment it or build upon it to make sure it's accurate and appropriate for that particular litigation or that particular witness. But I do think Dave really hit the nail on the head: this isn't going to be a case where we move only to generative AI and abandon other types of AI. There's a reason there are different types of AI: they do different things. What we are most likely to see is a hybrid, with some tools being used for some things and other tools for others. And eventually, as Dave highlighted, the combination of different types of AI in the e-discovery process, even within the same tool, will get us to a better place. I think that's where we're most likely heading, and as Dave said, that's where a lot of the vendors are actually focusing: on adding this additional AI into their workflow to improve the process.
David: Yeah. And it's interesting that some of the early versions are not really replacing the human review; they are predicting where the human review is going to come out. So when the reviewer looks at the document, it already tells you what the software says: is it relevant or not relevant? And it goes one step beyond: it not only gives you the prediction of whether the document is relevant, it also gives you a reason. So it can accelerate the review, and that can create great cost savings. But it's not just document review. Already, there are e-discovery tools out there that allow you to ask questions and query databases, but also build chronologies. And again, with the benefit of referencing you to certain documents, in some cases with hyperlinks: it will tell you facts, or answers to a question, and it will link back to the documents that support those answers. So I think there's great potential as this continues to grow and improve.
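As a purely hypothetical illustration of the prediction-plus-reason pattern Dave describes, the sketch below shows one way a review tool might prompt a model for a relevance call and a one-sentence rationale. The call_llm function, prompt wording, and JSON contract are placeholders invented for this example, not any vendor's actual API, and the human reviewer remains the one who confirms or overrides the suggestion.

```python
# Hypothetical sketch of prediction-plus-reason relevance coding.
# `call_llm` is a placeholder for whatever generative AI service a platform uses;
# the prompt and JSON contract are illustrative, not a vendor's real interface.

import json

ISSUE_DESCRIPTION = "Communications about pricing of Product X between 2019 and 2021."

PROMPT_TEMPLATE = """You are assisting with document review.
Issue: {issue}
Document: {document}
Respond in JSON with keys "relevant" (true/false) and "reason" (one sentence)."""

def call_llm(prompt: str) -> str:
    # Placeholder response so the sketch runs end to end without a real model.
    return json.dumps({"relevant": True, "reason": "The email discusses 2020 pricing for Product X."})

def suggest_coding(document_text: str) -> dict:
    """Return the model's suggested relevance call and its one-sentence rationale."""
    prompt = PROMPT_TEMPLATE.format(issue=ISSUE_DESCRIPTION, document=document_text)
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    suggestion = suggest_coding("Re: Q3 price increase for Product X ...")
    print(suggestion)  # the human reviewer still confirms or overrides this call
```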
Anthony: Yeah. And I would say also, again, let's think about the whole EDRM model, right? Preservation. I mean, we'll see what enterprises do, but on the enterprise side, using AI bots and the like for preservation, collection and so on, it'll be very interesting to see if these tools can be used to automate some of the standard workflows before we even get to the review. The other thing that I think will be interesting, and I think this is one of the areas where we still have not seen broad adoption, is on the privilege side. We know, and we've done some analysis for clients, that using AI to identify privileged or highly sensitive documents is still something most lawyers aren't comfortable with. I don't know why; I've done it and it worked effectively, but that is still an area where lawyers have been hesitant. And it'll be interesting to see if generative AI and the tools there can help with privilege, right? Whether it's the privilege logs, whether it's identifying privileged documents. I think, to your point, Dave, having the ability to say it's privileged and here are the reasons would be really helpful in doing privilege review. So it'll be interesting to see how AI works in that sphere as well, because it is an area where we haven't seen wide adoption of predictive coding or TAR for identifying privilege, and that's still a major cost for a lot of clients. All right, so then I guess where this all leads to, and this is more future-oriented: do we think we're at the stage now, with generative AI, that there's a paradigm shift? We didn't see that paradigm shift, bluntly, with predictive coding, right? Predictive coding came out, everyone said, oh my God, discovery is going to change forever, we don't need contract attorneys anymore, associates aren't going to have anything to do because you're just going to train the model and let it go. And that clearly hasn't happened. Now people are making similar predictions with the use of generative AI: we're not going to need to do document review anymore. And I think there is concern, and this is concern just generally in the industry: is this an area, since we're already using AI, where AI can take over basically the discovery function, where we're not necessarily using lots of lawyers and we're relying almost exclusively on AI, whether it's a combination of machine learning or just generative AI, doing lots of work with little or no input from lawyers? So I'll start with Dave there. What are your thoughts in terms of where we'll be in the next three to five years? Are we going to see some tipping point?
David: Yeah, it's interesting. Historically, there's no question that predictive coding did allow lawyers to get through big document populations faster, despite predictions that it was going to replace all human review. And it really hasn't. But part of that has been the proliferation of electronic data. There's just more data than ever before, more sources of data. It's not just email now. It's Teams and texts and Slack and all these different collaboration tools. So that increase in volume has partially made up for the increase in efficiency, and we haven't seen any loss of attorneys. I do think that over the longer run there is more potential for Gen AI to replace attorneys who do e-discovery work and, frankly, to replace lawyers and other professionals and all other kinds of workers eventually. I mean, it's just going to get better and better, and a lot of money is being invested in it. I'm going to go out on a limb and say that I think we may be looking at a whole paradigm shift in how disputes are resolved in the future. Right now, there's so much duplication of effort. If you're in litigation against an opposing party, you have your document set that your people are analyzing at some expense. The other side has their document set that their people are analyzing at some expense. You're all looking for those key documents, the needles in the haystack. There's a lot of duplicative effort going on. Picture a world where you could just take all of the potentially relevant documents, throw them into the pot of generative AI, and have the generative AI predetermine what's possibly privileged, with lawyers confirming those decisions. But then let everyone, both sides and the court, query that pot of documents to ask: what are the key questions? What are the key factual issues in the case? Please tell us the answers and the documents that go to those answers. That would cut through a lot of the document review and document production that's going on now and that, frankly, makes up most of the cost of litigation. I think we're going to be able to resolve disputes more efficiently, less expensively, and a lot faster. And I don't know whether that's five years into the future or 10 years into the future, but I'll be very surprised if our dispute resolution procedure isn't greatly affected by these new capabilities. Pretty soon, and when I say pretty soon, I don't know if it's five years or 10 years, I think judges are going to have their AI assistants helping them resolve cases and maybe even drafting first drafts of court opinions as well. And I don't think it's all that far off into the future that we're going to start to see that.
Therese: I think I'm a little bit more skeptical than Dave on some of this, which is probably not surprising to either Dave or Anthony. Look, I don't see AI as a general rule replacing lawyers. I think it will change what lawyers do. And it may replace some lawyers who don't keep pace with technology. It's very simple: it's going to make us better, faster, more efficient, right? So that's a good thing. It's a good thing for our clients. It's a good thing for us. But the idea that AI will replace the judgment and decision-making of lawyers is, to me, maybe way out there in the future when the robots take over the world. I do think it may mean fewer lawyers, or lawyers doing different things. Lawyers who are well-versed in technology and can use it are going to be more effective and faster. You're going to see situations where it's expected to be used, right? If AI can draft an opinion or a brief in the first instance and save hours and hours of time, that's a great thing, and that's going to be expected. I don't see that ever being the thing that gets sent out the door, though, because you're still going to need lawyers who are looking at it, making sure it's right, updating it, making sure it's unique to the case, and making sure all the judgments that go into those things are appropriate. I do find it difficult to imagine a world, having been a litigator for so many years, where everyone says, sure, throw all the documents in the same pot and we'll all query it together. Maybe we'll get to that point someday, but I find it really difficult to imagine that'll happen. There's too much concern about the data and control over the data and sensitivity and privilege and all of those things. We've seen pockets of making data available through secure channels so that you're not transferring it and the like, where it's the same pool of data that would otherwise be produced, so maybe you're saving costs there. But again, I think it'll be a paradigm shift eventually, though a paradigm shift that's been a long time coming. We started using technology to improve this process years ago, and it's getting better. I think we will get to a point where everyone routinely relies more heavily on AI for discovery, and it's not, like predictive coding or TAR, something for the people who know how to use it, but the standard that everybody uses. I do think, like I said, it will make us better and more efficient. I don't see it entirely replacing lawyers, or a world where all the data just goes in and gets spit out and you need one lawyer to look at it and it's fine. But I do think it will change the way we practice law. And in that sense, I do think it'll be a paradigm shift.
Anthony: The final thought is, I tend to be sort of in the middle, but I would say generally we know lawyers have big egos, and they will never think that a computer, an AI tool or whatever, is smarter than they are in terms of determining privilege or relevance, right? I mean, that's part of it. You have two lawyers in a room, they're going to argue about whether something is relevant. You have two lawyers in a room, they're going to argue about whether something is privileged. So it's not objective, right? There's subjectivity. And I think that's going to be one of the challenges. And I think also, we've seen it already: everyone thought every lawyer who's a litigator would have to be really well-versed in e-discovery and all the issues that we deal with. That has not happened, and I don't see that changing. So I'm less concerned about a paradigm shift putting all of us out of business, for those reasons.
David: Well, I think everyone needs to tune back in on July 11th, 2029, when we get together again and see where we're going.
Anthony: Yes, absolutely. All right. Thanks, everybody.
David: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Singapore is developing ethics and governance guidelines to shape the development and use of responsible AI, and the island nation’s approach could become a blueprint for other countries. Reed Smith partner Bryan Tan and Raju Chellam, editor-in-chief of the AI Ethics & Governance Body of Knowledge, examine concerns and costs of AI, including impacts on owners of intellectual property and on workers who face job displacement. Time will tell whether this ASEAN nation will strike an adequate balance in regulating each emerging issue.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore all the key challenges and opportunities within the rapidly evolving AI landscape. Today, we'll focus on AI and building the ecosystem here in Singapore. My name is Bryan Tan, and I'm a data and emerging technology partner at Reed Smith Singapore. Together, we have with us today Mr. Raju Chellam, the Editor-in-Chief of the AI E&G BOK, and that stands for the AI Ethics and Governance Body of Knowledge, an initiative by the SCS, the Singapore Computer Society, and IMDA, the Infocomm Media Development Authority of Singapore. Hi, Raju. Today, we are here to talk about the AI ecosystem in Singapore, of which you've been a big part. But before we start, I wanted to talk a little bit about you. Can you share what you were doing before artificial intelligence appeared on the scene and how that has changed now that artificial intelligence is being talked about so frequently?
Raju: Thanks, Bryan. It's a pleasure and an honor to be on your podcast. Before AI, I was at Dell, where I was head of cloud and big data solutions for Southeast Asia and South Asia. I was also chairman of what we then called COIR, the Cloud Outage Incident Response, a standards working group under IMDA, and I was vice president of the cloud chapter at SCS. In 2018, the Straits Times Press published my book called Organ Gold, on the illegal sale of human organs on the dark web. I was then researching the sale of contraband on the dark web. So all of that came together and helped me when I took on this role in the new era of AI.
Bryan: So all of that comes from a dark place, and it has led you to discovering the prevalence of AI and then to this body of knowledge. So the question here is: tell us a little bit about this body of knowledge that you've been working on. Why does it matter? Is it a game changer?
Raju: Let me give you some background. The Ethics & Governance Body of Knowledge is a joint effort by the Singapore Computer Society and IMDA, the first of its kind in the Asia-Pacific, if not the world, to pull together a comprehensive collection of material on developing and deploying AI ethically. It is anchored on the AI Governance Framework 2nd Edition that IMDA launched in 2020. The first edition of the BOK was launched in October 2020, before GenAI emerged on the scene. The second edition, focused on GenAI, was launched by Minister Josephine Teo in September 2023. And the third edition, the most comprehensive, will be launched on August 22, which is next month. The most crucial thing about this is that it's a compendium of all the use cases, regulations, guidelines and frameworks related to the responsible use of AI, from both a development and a deployment perspective. So it's something that all Singaporeans, if not people outside, would find great value in accessing.
Bryan: Okay. And so I see how that relates to your point about the dark web, because this is really a technology that can be used for a great many things, but without ethics and governance on top of it, you run into the very same kind of problem that you were researching previously. So as you go around and speak with a lot of people about artificial intelligence, what do you think are the missing pieces in AI? What are we not doing today?
Raju: In my view, there are two missing pieces in AI, especially generative AI. One is the need for strong ethics and governance guidelines and guardrails to monitor, if not regulate, the development and deployment of AI to ensure it is fair, transparent, accountable and auditable. Two is the awareness that AI, especially GenAI, can be used just as effectively by bad actors to do harm, to commit crimes, to spread fake news and even cause major social unrest. These two missing pieces are not mutually exclusive: the same technology can be used for good as well as bad. It's the same as the beginning of the airplane. Airplanes can be used to ferry people and cargo around the world. They can also be used to drop bombs. So we need strong guardrails in place. And the EU AI Act is just a starting point that has shown the world that AI, especially GenAI, needs to be regulated so that companies don't misuse information that customers and businesses entrust to them.
Bryan: Okay. Let's just move on a little bit to cybersecurity. Part of your background is also in cybersecurity, advising and consulting on it. In terms of generative AI, do you see any negative impact, any kind of pitfalls that we should be looking out for from a cybersecurity point of view?
Raju: That's a very pertinent question, given that the Cyber Security Agency of Singapore has just released data that estimates that 13% of phishing scams might be AI-generated. There are also two darker versions of ChatGPT, for example. One is called Fraud GPT, F-R-A-U-D, and the other is called Worm GPT, W-O-R-M. Both are available on the dark web. They can also be used for RAAS, which is ransomware as a service that bad actors can hire to carry out specific attacks. Being aware of the negative possibilities of GenAI is the first step for companies and individuals to be on guard and keep their PII or personally identifiable information safe. So as a person involved in cybersecurity, I think the access that bad actors have to the tool that's so powerful, so all-consuming, so prevalent, can be a weapon.
Bryan: And so it's an area that we all need to watch out for. You can't simply ignore the fact that alongside the tremendous power that comes with the use of GenAI, the cybersecurity aspects should not be ignored, and that's something we should pay attention to. But moving away from cybersecurity, are there any other issues in AI that also worry you?
Raju: The two key concerns about AI, according to me, other than cybersecurity, are, number one, the potential of AI to lead to a loss of jobs for humans, and, second, its impact on the environment. So let me delve a little deeper. The World Economic Forum has estimated that AI adoption could impact 85 million jobs by 2030. Goldman Sachs has said in a report that AI could replace about 300 million full-time jobs. McKinsey reports that 14% of employees might need to change their careers due to AI by 2030. This could cause massive unrest in countries with large populations like India, China, Indonesia, Pakistan, Brazil, even the US. The second is sustainability. According to a University of Massachusetts at Amherst study, the training process for a single AI model can emit 284 tons of carbon dioxide. That's equal to the greenhouse gas emissions of roughly 62.6 petrol-powered vehicles being driven for a year in the US. These are two great impacts that people, governments, companies and regulators have yet to grapple with, because they could become major issues by the time this decade turns.
Bryan: So certainly some challenges coming up. I remember that for many years you were also an editor with the Business Times here in Singapore. And so this question is about media and media content, specifically, I think, digital media content. And, you know, with that background in mind, now looking closely at generative AI, do you see generative AI affecting the area of digital media and content generation? Do you see any interesting use cases in which gen AI has been applied here?
Raju: Yes, I think digital media and content, including the entire field of advertising, public relations and marketing, will be, or is already being, impacted to a large extent by GenAI, both in its use and in its potential, to the extent that many digital media content companies are actively looking at GenAI as a possible route to replace human labor. In fact, if you look at the Hollywood actors' union, they went on strike because producers were turning to GenAI to even come up with movie scripts. So it is a major concern, because unlike previous technologies, which impacted the lowest ranks of the value chain, such as secretarial jobs, GenAI has the potential to impact the higher or highest parts of the value chain, for instance, knowledge workers. They could be threatened because all of their accumulated knowledge can be used by GenAI to churn out material as good as, if not better than, what humans could do in certain circumstances. Not in all circumstances, but with digital media content, most of the time the GenAI model is not just augmenting human potential, it's also churning out material that can be used without human oversight.
Bryan: So certainly a challenging and interesting use case in the field of digital media content. Last question, and again back to the body of knowledge, to talk a little bit about the Singapore government's involvement in this area. In Singapore, we do have a tendency for a lot of things to be government-led. In this particular area, where we are really talking about a frontier technology like artificial intelligence, do you think this is the right way to go about it, to let the government take the lead? And if so, what more can or should be done?
Raju: That's a good question. The good part is that Singapore is probably one of the very few countries, if not the only one where the government tries to be ahead of the curve in tech adoption and in investing in cutting-edge technologies such as AI, quantum computing, biotech, etc. While this is generally good in the sense that a clear direction is set for industry to focus on, is there a risk that companies may focus too narrowly on what the government wants instead of what the market wants? I don't know. More research needs to be done in this area. But look at the numbers. Spending on AI-centric systems is set to surpass 300 billion US dollars worldwide by 2026, as per IDC estimates, up from about $154 billion in 2023. So Singapore's focus on AI and GenAI was the right horse to bet on. And it's clear that AI is not a fad, not a hype, not an evolution, but a revolution in tech. So at least we got that part right here. Whether we will get the other parts or the components right, I think only time will tell.
Bryan: Okay, and a final question, looking at it from an ecosystem point of view, with various moving parts working together. For you personally, if you had a crystal ball and a wishing wand and could wish for anything in the future that would help this ecosystem, what would that be?
Raju: I think there is a need for stronger guardrails and some kind of regulation to ensure that people's privacy is protected. The reason is that GenAI can infringe upon the copyrights and IP rights of other companies and individuals. This can lead to legal, reputational and/or financial risks for the companies using pre-trained models. GenAI models can perpetuate or even amplify biases learned from the training data, resulting in biased, explicit, unfair or discriminatory outcomes, which could cause social unrest if not monitored, audited or accounted for accurately. And the only authorities that can do this are government regulators. So I think government has to take a more proactive role in ensuring that basic human rights and basic human data are protected at all times.
Bryan: With this, I thank you. Certainly a lot more to be done in building up the ecosystem to encourage and evolve the role of AI in today's world. But I want to thank you, Raju Chellam, for joining us. And I want to invite you who are listening to continue to listen to our series of Tech Law Talks, especially this one on artificial intelligence. And thank you for hearing us.
Raju: Thank you, Bryan. It's been a pleasure.
Bryan: Likewise. Thanks so much, Raju. I really enjoyed doing this.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Regulatory and investigations partner Kendra Perkins Norwood invites former U.S. GSA Associate Administrator Krystal Brumfield to discuss how the federal government is gaining an understanding of AI’s uses in procurement. They also explain how the General Services Administration and other federal agencies are using AI to streamline and safeguard the contract award process.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Kendra: Hello, welcome to Tech Law Talks, a new Reed Smith podcast series on artificial intelligence, or AI, as it's commonly called. I am Kendra Norwood, a partner in Reed Smith's Global Regulatory and Investigations Practice Group. I'm based in Washington, D.C., and my specific practice area is government contracts. Through the Tech Law Talks series, we will be exploring the key challenges and opportunities presented within the rapidly evolving AI landscape, and today we will focus on AI in government contracts. For today's episode, we are very fortunate to have a special guest joining us, Krystal Brumfield, who for the past three and a half years has served as the Associate Administrator for the Office of Government-Wide Policy at the General Services Administration, or GSA. Now, for those who may not be familiar with GSA, it's an independent agency of the U.S. Government that provides centralized procurement and shared services for the federal government, managing a nationwide real estate portfolio, overseeing over $100 billion in federal government contracts for goods and services purchased by the government, and delivering technology services to millions of both government and public users across dozens of federal agencies. So when we decided to do a podcast on AI and government contracting, Krystal immediately came to mind as the perfect podcast guest to discuss that topic, and I am so glad to have her here today. So welcome, Krystal.
Krystal: Thank you, Kendra. Excited to be here with you and Reed Smith.
Kendra: Wonderful. So if you could start off with a brief introduction of yourself and your role and responsibilities at GSA, that would be great to set the stage for our discussion today.
Krystal: Sure. So I served as the Associate Administrator for Government-Wide Policy from January 2020 through May 2023. In that role, I was designated as the Regulatory Policy Officer and the Chief Acquisition Officer. I also served as the chair of the agency's Cyber Supply Chain Risk Management Executive Committee. And as the associate administrator of OGP, which is what we call the Office of Government-Wide Policy for short, I oversaw the drafting and promulgation of the Federal Acquisition Regulation, also known as the FAR, IT policy, as well as the Federal Acquisition Institute, which is responsible for training and educating the 20,000-plus federal acquisition professionals across the federal government.
Kendra: Oh, wow. That is such an impressive background and just some really big responsibilities. And again, I think you're the perfect guest to have here today to talk about this topic. So again, thank you so much for being here. So I guess I'd like to start by framing our discussion a bit. I tend to think about AI in government contracting in two ways. So first, I think about how the government uses AI to support the federal acquisition process, and that's from contract formation to contract administration, and ultimately through contract closeout, and using AI in that way to work smarter, faster, and more efficiently. And the second way I think about government contracts AI is the approach used by the government to purchase AI-driven tools through government contracts in order to support agency missions. We are seeing more and more federal government contracting opportunities that are either in whole or in part being released for the acquisition of AI of some type. And so I'm hoping we can address both of those uses of AI during our time today.
Krystal: So, Kendra, you absolutely have described it correctly. It's kind of a two-pronged approach from the way that we see it, or saw it, in my role as the associate administrator at GSA. We saw it used so much in our everyday work, really across the agency, that once President Biden issued the executive order on the safe, secure, and trustworthy development and use of artificial intelligence, the agency decided to hire a chief AI officer, because it was that important a function across the agency. As you mentioned, it is sort of a new phenomenon when it comes to government contracting. And one of the other things that we saw when we were doing procurements is that there was a gap in understanding and knowledge between where we needed to be in the use of AI and where the acquisition workforce was. And so while I was there, we stood up the GSA Acquisition Policy Federal Advisory Committee, which we call GAPFAC for short. It's a federal advisory committee that consists of federal, state, and local government officials, representatives from trade associations, professors from universities, as well as business leaders from all across the country, to really help us answer problems related to federal contracting. And one of those problems is AI: understanding AI and what it means to the business of federal contracting. That's one of the key areas we saw was important. It's a growing trend and it's ever evolving. And so our GAPFAC very soon will be focusing on AI in procurement.
Kendra: Oh, that's wonderful. I had not heard about GAPFAC, but I love the way it seems as if it's using a very collaborative and cross-sector approach with individuals involved sort of at all levels of government, across the government, and even in the private sector. So that sounds really exciting. I look forward to hearing more about what comes out of that. So I guess let's just dive right in as we talk about how AI is used to support the federal acquisition process. Now, during your time at GSA, were there any specific milestones or key developments related to the use of AI other than what you've already mentioned, which is phenomenal, in order to improve contracting procedures?
Krystal: Absolutely. So on the government side, GSA started using artificial intelligence for pre-award vendor assessments about two years ago. So just to break it down for the listeners, in a very simplified way of how it works: you would gather relevant data about potential vendors from various sources. This data could include historical performance data, financial records, customer reviews and compliance records, all from available public databases or information provided by the vendors themselves. This data is then integrated into a centralized system or platform where our AI algorithms can assess and analyze it. This could involve cleaning and standardizing the data to make sure that it's consistent and accurate. Then we would move to the scoring and ranking phase, where the algorithms generate scores and rankings for each of the vendors based on the extracted features and historical data. The scores reflect the likelihood of the vendors meeting our specific criteria or performing well in the context of the contract being awarded. Then we would move to the decision phase, where the final output would be an AI analysis providing the decision maker with valuable insights and recommendations, helping the procurement officer or the project manager make a more informed decision on which vendors to invite for further evaluation or negotiation. And so we've seen this process work many times. The benefits are that we gain more efficiency in the government; it helps us to reduce some of the arduous, repetitive tasks that we have. We've also experienced more accuracy, so we're reducing human error throughout the process. As you can imagine, there are lots and lots of contracts, and there can be a lot of modifications to those contracts, and AI has been helping to manage that volume. And then it also creates an opportunity for us to rely on data-driven decisions rather than subjective judgments. So this helps in a lot of different ways. But one of the things that we've been careful about is making sure that we train the AI models on diverse and representative data sets so that we can manage the bias that data sometimes has, as well as ensuring that we have fair evaluations throughout the process.
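To illustrate the "clean, score, rank" idea described above in the simplest possible terms, here is a hypothetical sketch in Python. The feature names, weights and normalization scheme are invented for illustration and are not GSA's actual system; a real pre-award assessment would use far richer data and human review of the output.

```python
# Illustrative sketch only -- not GSA's actual system. Feature names, weights and
# the normalization scheme are invented to show the "clean, score, rank" idea.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    past_performance: float   # e.g. average rating 0-5 from historical records
    financial_health: float   # e.g. 0-1 score derived from financial filings
    compliance_issues: int    # count of recorded compliance findings

WEIGHTS = {"past_performance": 0.5, "financial_health": 0.3, "compliance": 0.2}

def score(v: Vendor) -> float:
    """Standardize each feature to a 0-1 scale, then apply weights."""
    perf = v.past_performance / 5.0
    fin = max(0.0, min(1.0, v.financial_health))
    comp = 1.0 / (1.0 + v.compliance_issues)  # more findings means a lower score
    return (WEIGHTS["past_performance"] * perf
            + WEIGHTS["financial_health"] * fin
            + WEIGHTS["compliance"] * comp)

def rank(vendors: list[Vendor]) -> list[tuple[str, float]]:
    """Return vendors ranked by score; a human decision maker reviews the output."""
    return sorted(((v.name, round(score(v), 3)) for v in vendors),
                  key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    print(rank([Vendor("Acme Corp", 4.2, 0.8, 1), Vendor("Beta LLC", 3.5, 0.9, 0)]))
```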
Kendra: So that's pretty impressive, Krystal. I'm sort of awestruck right now because it sounds like to me that for all practical purposes, AI is being used to handle pretty much the entire evaluation process up to the point of making those recommendations to the selecting official. And I guess I'm wondering, is there any involvement or is there still involvement by a source selection board or some other human element in this process before the recommendation goes to the source selection official?
Krystal: Absolutely. That human component can't be replaced, because we know that AI is ever-evolving; it's a new phenomenon that we're still trying to understand, and there are instances where it has created errors and biases. So the human eye, perspective and analysis remain a key component to keep as part of the process.
Kendra: Well, that sounds like a win-win. I mean, you know, there could be errors with AI, but of course, as you mentioned, there are often human errors, and it sounds like AI could be used to reduce those. And I would imagine perhaps reduce the number of protests that we see, which isn't good for my business, but I think overall good for the government, if we can eliminate, or at least minimize or mitigate, that human error factor. So that's great to know. So just moving along, as I understand it, there are two basic categories of AI. There's traditional AI, which is great for addressing well-defined problems, performing repetitive tasks, and dealing with very structured data. And then there's generative AI, which is used to work with more unstructured data and is designed to generate new content. You've already explained how some of that is already happening in federal government agencies, GSA in particular, using that traditional AI to automate certain manual tasks across the entire government contracts lifecycle. I know the Department of Health and Human Services uses it to consolidate certain contract vehicles. I previously worked at NASA; you're at GSA. I know both of those agencies are using traditional AI to deploy robotic processes, basically bots, chatbots that are software-based robots, to execute standard rules-based business procedures and interface with the system users. And so it's that kind of traditional AI, I think, that does have the potential to reduce workload backlogs, which, as I understand it, can be substantial depending on the agency, in addition to helping agencies conduct procurements more quickly, which is what I think I heard you say as you described the system that's already in place now. I know the Air Force was at one time contemplating using AI to help acquisition professionals better understand these very complex procurement policies, rules and regulations, again towards speeding up that process, which can often be very lengthy and is often something that discourages some companies from wanting to do business with the government. So it seems as if AI could certainly help with that. Now, these are just a few examples. And there are some who believe that this kind of traditional AI could be used to completely automate those early contracting procedures: deciding what contract type should be used, how it should be structured, whether it should be set aside for small business. They've pointed to GSA multiple award schedule contracts, blanket purchase agreements, IDIQ task order contracts, GWACs, the government-wide acquisition contracts, all of which are in many ways specific to GSA. And I'm just wondering if you have any thoughts on how AI could apply specifically to those contract types.
Krystal: Sure. So, I mean, Kendra, these are all great examples of how traditional AI can be used and has been used to improve and automate procurement processes within federal government organizations. There are countless things we can point to that have been beneficial about using AI. But I also want to make sure we keep in mind that by using AI to automate these routine tasks, for example the market research and pre-solicitation work that I mentioned earlier, or contract modifications, invoicing, or award-fee determinations, there can be some pretty substantial financial impacts for the government, or the contractor, or both, if errors occur around pricing or payments. And there are concerns we have when we rely on AI alone when it's reading regulations, right? It could lead to regulations being misinterpreted in some cases, or even possibly misapplied. And if that happens, it could have the opposite effect of slowing down a procurement process in ways that we don't anticipate, whether it's through increased bid protests or the need to redo procurements when AI-generated errors are discovered, or through other unintended consequences of using AI that we may or may not know about. This is by no means a perfect solution. So we have to weigh the pros and cons, because although there are a lot of benefits, there are certain things that we don't know, and in some cases there are certain errors or inefficiencies that it may cause.
Kendra: Those are all great points. So, you know, as many benefits as AI brings to the table, there are, again, associated risks, and it sounds like the government is taking that into account and factoring it into its use of AI by not eliminating the human element. So I guess turning to generative AI, I was thinking how this could be used, in some instances, to allocate or manage risk. I know that, for instance, it's already being used to collect data points to determine if a contractor or a prospective contractor is presently responsible. That's a term of art in government contracts, as I'm sure you know, and you have to be presently responsible to be eligible to receive a federal government contract award. So there's that use of collecting disparate data and bringing it together to make determinations on responsibility. Also, maybe schedule and risk assessments, determining whether a project is likely to be completed on time; although, as you mentioned, on the flip side, that could have some consequences in terms of setting up unrealistic expectations to the extent AI isn't factoring in some of the very real considerations that go into whether a project is completed on schedule. Now, again, these are just hypotheticals, well, except for the one about the responsibility determinations. But in terms of generative AI, can you speak briefly on the use of that in the government?
Krystal: So I think what you laid out there are all great examples, and we know these and, in fact, other applications for AI are just around the corner if they aren't already in use. But GSA has really been at the cutting edge and in the lead, and we've long recognized the power of generative AI to increase efficiencies, to lower our operating costs, and even to prevent and detect some criminal activity against the federal government. In my office, our top priority was to make government more efficient, modern, streamlined, and accessible. That was our North Star; we drove all of our policies behind it. Our drive to make government operations more efficient and effective was about modernizing the way we do things, making sure the regulations reflected that, and streamlining them so that processes were more efficient and accessible to all. And so with that in mind, we think there is power and great benefit in generative AI. A couple of examples that we saw: creating documents for contracting officials; utilizing pattern recognition and trend analysis of financial data to identify fraudulent activity in federal financial systems; using generative AI to create sample data sets that could be used to test software or even customize commercial software for government use. Also, cybersecurity threat detection could use AI models trained on historical cyber data, like network traffic or user interactions, so that we can anticipate and respond to cyber attacks against our federal IT systems, which, of course, we know often contain very sensitive information related to government contracting, whether that's financial data or confidential and proprietary data that belongs to companies doing business with the federal government but resides in government systems. So all of these examples really just show the benefit of generative AI. I think the applications are limitless in terms of how AI can improve the government's operations and how it can deliver efficiencies all across the federal government.
Kendra: Wow. I mean, as you said, the possibilities are limitless. It's just the power of data and the power that AI brings to the table in terms of leveraging that data to make things more efficient and more mission-focused, as you mentioned, for federal agencies. So just quickly, I want to touch on how the government is going about purchasing these AI tools that agencies are using for traditional or generative AI in their day-to-day work. I've seen solicitations come along lately that are for the purchase of AI tools, whether that's the entirety of the procurement or AI is embedded as an expectation or an option for a contractor to propose when they are selling, or attempting to sell, to the government. Now, I guess the biggest thing that comes to mind for me is whether there have been any ethical concerns that factor into how the government is going about procuring these AI technologies. And if so, how are they being addressed?
Krystal: Yeah, well, I believe there are some fundamental requirements that the government only procure AI technology that it can use both responsibly and trustworthily. So first, we have to recognize that AI is one of the most profound technological shifts of this generation. And because the space is so large and has so many complexities, I think contracting officers should consider cybersecurity, supply chain risk management, data governance, and other standards and guidelines when it comes to procuring AI, just as they would for any other IT procurement. I think it's critical that our acquisition workforce also work with and consult the technical subject matter experts, like software engineers and data scientists. The good news is that there are several efforts already underway to ensure that ethical AI standards are in place. For example, GSA and the Department of Defense's Joint Artificial Intelligence Center have had a center of excellence effort in place for several years now that focuses on advancing AI technology across DOD. The center of excellence has a guide to AI ethics that promotes the development of ethical AI by adopting AI applications with a human-centered mindset and approach. The guide also includes a series of questions that should be answered at every phase of an AI development project to ensure that there are ethical designs, developments, and deployments within the AI solution, along with extensive testing to mitigate unintended consequences from the application of the AI technology. The Department of Defense also has its own policy document on the ethical principles of AI. And of course, the president has issued an executive order that really is the foundation on which all of these different policies rest with regard to promoting the use of trustworthy artificial intelligence across the federal government. So there absolutely should be ethical considerations, and I think we are seeing a lot more information come out about considerations to think about, or require, when it comes to ethics.
Kendra: Wow. So it just seems like there's a lot going on, but it also seems like it's being done in a very intentional and thoughtful way in the government, which is reassuring. And I'm sure it's reassuring to my clients, the companies doing business with the federal government, whether they are selling AI technologies to the government or implementing AI technologies in order to better sell to the government or better perform on government contracts. It's encouraging to see that the government is already, in many respects, leveraging the power of AI both in purchasing and in its administration of government contracts. So I guess the last point I want to touch on, speaking to my client base, is what should government contractors keep in mind as they consider how to incorporate AI into their business models?
Krystal: Sure. Well, companies who are doing business with the government, or even those who are interested in entering the government procurement market space, should make sure that their company's vision and goals are aligned with the technological evolution that's occurring around AI. Identifying the AI tools that can lead to future business opportunities is essential, I think, for the private sector. It's also important to consider what business objectives can be better achieved through the use of AI. And businesses in general should be mindful that AI is going to continue to grow in the government space. It's not going anywhere, so companies have to keep up with it. If they want to be a part of the government solution, they have to prepare for it. To jump in, they've got to get involved, they've got to be a part of the game, and they've got to adopt AI as part of their practice, part of their protocol, and a priority in their business.
Kendra: Well, that's very well said. I couldn't agree more. And I think we are right at the point where I want to thank you so much for your time and your contributions today. I think this has been really enlightening for me and hopefully for the listeners as well. Thank you so much. And then we will continue to look forward to everything that comes along with AI in the federal government space.
Krystal: Thank you, Kendra. Appreciate the conversation today.
Kendra: Likewise. Thank you so much for listening and have a great day.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Can we keep up with AI? Paul Foster, CEO of the Global Esports Federation, dives into the legal implications of artificial intelligence. Gamers have a unique familiarity with artificial intelligence. Explore how AI is transforming game design, content creation, brand promotion and much more. Along with entertainment/media lawyer Bryan Tan of Reed Smith’s Singapore office, Foster discusses the unique ways AI is enabling gamers to monetize their skills.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities with the rapidly evolving AI landscape. Today, we will focus on AI in the interesting world of esports. And we have together with us today, Mr. Paul Foster, who is the CEO of the Global Esports Federation. Good morning, Paul.
Paul: Good evening, Bryan. It's nice to be with you, coming from California.
Bryan: And I'm coming to you from Singapore, but we are all connected in one world. Today, we are here to talk about AI and esports. But before we start, I wanted to talk about you and to share what you were doing before AI and how that has changed after AI has now become a big thing.
Paul: Thanks, Bryan. Yeah. So I come originally from Sydney, Australia, and my background is really 20 years in the Olympic movement. I started at the Sydney 2000 Olympics, about three years before the Games, and worked for the Olympics for about 20 years, so traditional sports, and then moved to esports and founded the Global Esports Federation in 2019. And of course, we all know what happened four months after that, Bryan, with COVID and the pandemic that really closed down a lot of traditional sport. And so esports really took off. So it's definitely been a very exciting and accelerating journey these last couple of years.
Bryan: Great. So you got into esports after a background in sports. And it's interesting you mentioned 2019, because we also know that somewhere around the end of 2022, just as the pandemic was sorting itself out, artificial intelligence, primarily generative artificial intelligence, began to capture the imagination of people. And the question here, I guess, is, maybe we're talking to the converted, but esports is obviously one of the most technologically advanced and clued-in online ecosystems and communities. How has artificial intelligence impacted esports, and is that a positive or negative thing?
Paul: You're absolutely right, Bryan. I think one of the things that I like to say is that we're living in what I consider to be one of the most exciting times in the history of humanity. The reason I say that is because of the convergence of all these incredibly powerful technologies at exactly the same time, at the very early dawn of AI. And I think it's something that we should reflect on, because many people talk about AI as if we're already in the middle of the cycle, and my position is that we're at the very early dawn, maybe even the pre-dawn, of AI. I did some postgraduate studies in machine learning and AI, so it's a passion project of mine, something I love to think about, and I was recently at the global summit on AI with the International Telecommunication Union in Geneva and had a chance to sit with the leaders group. There were 30,000 people attending, Bryan, which is a big number, showing the interest from all over the world. But there was a leaders group convened to look at policymaking, and in a sense there was this feeling that industry has been growing and expanding at a pace, a cadence, that is hard to register, while policymakers, particularly governments and others, are trying to catch up and get in front of that. As you know, Bryan, we're very strong partners with UNESCO, the United Nations Educational, Scientific and Cultural Organization, as well as the International Telecommunication Union, so we're also contributing to their thinking and bringing our community into the discussion. One of the things that could be interesting for our listeners is that, as Bryan said, our community, which is roughly in the range of 18- to 34-year-olds, are early adopters of technology. Gamers and people who play electronic sports and games have always been exposed to artificial intelligence in some form, even very early forms. So it's not a surprise to me that artificial intelligence, and the interest in it, has been so readily accepted by the esports and gaming community. And as you also said, Bryan, this demographic, 18 to 34, is really the heart of what we call Gen Z, and now we're seeing Gen Alpha, of course, the next generation starting high school. But what some people widely call this generation is actually Gen T, a generation defined by technology and its early acceptance. And I can talk a little bit more about some of the applications for esports if that's interesting.
Bryan: No, I think that's interesting. And I think in particular, I think what will be interesting to hear is some of your own visions about what the future of AI and esports could be like. What does it promise to the esports community? What can they kind of look forward to? How do you see that going?
Paul: Yeah, thanks for the question. Again, I think the possibilities are limitless and we're really at the beginning, the early days of this. When I speak, for example, to colleagues and friends at companies such as OpenAI and others, there's a true interest around this, particularly around the creative economy and how we'll create games in the future. And there are three things, Bryan, that I thought I'd mention. The first is really the use of AI in terms of teams' and players' preparation, the fact that we can have quicker and more efficient access to information. One of the things I've learned in my studies is that it needs to be human-focused and human-centric, and at the Global Esports Federation, we support UNESCO's position on the creation of ethical AI, which is really about being human-centered. So one of the things I think is really interesting for our community is that they'll have quicker access to statistics, to analytics, to data. If you think about esports as a competitive sport or a competitive event, any preparation that gets you better prepared should ultimately provide a better outcome for you as a competitor, right? So that's number one. The second thing is what we can do for the creative economy, which is absolutely fascinating, Bryan. In esports, there's a whole economy, a whole community, around creators and content creators, people who really bring esports alive. We'll be able to have automatically created clips and reels and analytics in real time. Things that, pre-AI, we would have had to wait for editing on, which might have taken hours or days, can now happen in seconds. So that's fascinating. And then the third thing, so team preparation, the creative economy, and then the third, is really around the economics of gaming, around sponsorship and value-based identity. In the future, our sponsors and partners, who are so important for the thriving and sustainability of gaming and esports, will also be able to use AI to have greater analytics and greater awareness of their brand values, to actually understand the value of their brand. A very simple example: we can use AI to track how many times a certain brand was visible on a jersey or in an audience, and track it in real time. Whereas again, Bryan, in the past we would have had to look through video files and actually count it manually, and I remember doing that. Another example I'll give you: I remember, not that long ago in my work at the Olympic Games, literally installing what we used to call video walls, walls of not one video screen but 12 or 16, 4-by-4 screens I think they were, to be able to look at every venue at once. When I think about that now, it seems like a long time ago, but it wasn't. Within the last 10 or 12 years, that was still our reality. And now we can use AI to capture that data and give us the same results in real time across teams, across creators and across partners.
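As a rough illustration of the brand-visibility tracking idea Paul describes, the sketch below counts logo detections across sampled frames of a broadcast recording. The detect_brand_logos function is a hypothetical placeholder for a trained logo-detection model; only the frame reading (via OpenCV) reflects a real library, and the sampling rate and file name are invented for the example.

```python
# Rough sketch of the brand-visibility idea: count how often a logo appears across
# sampled video frames. `detect_brand_logos` is a hypothetical placeholder for a
# trained logo-detection model; only the frame reading (OpenCV) is a real library.
import cv2  # pip install opencv-python

def detect_brand_logos(frame, brand: str) -> int:
    """Placeholder: return the number of times `brand`'s logo is detected in `frame`."""
    raise NotImplementedError("Plug in an object-detection model trained on the sponsor's logo.")

def brand_exposure(video_path: str, brand: str, sample_every_n_frames: int = 30) -> int:
    """Count detections of `brand` across sampled frames of a broadcast recording."""
    cap = cv2.VideoCapture(video_path)
    total, frame_index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % sample_every_n_frames == 0:
            total += detect_brand_logos(frame, brand)
        frame_index += 1
    cap.release()
    return total

# e.g. brand_exposure("finals_broadcast.mp4", "SponsorX") -> approximate on-screen logo count
```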
Bryan: Okay, thank you for that. I think those are three very concrete areas that we can look forward to as an esports community. Interestingly, you also mentioned the regulators, the governments trying to keep up with the development of AI. Yes. And it sounded as if it was a bit of a struggle for them. Do you think there are any big concerns about the deployment of AI in esports that we should be aware of and maybe try to avoid?
Paul: Yeah, I think it's really this notion of catching up, right? How do you catch up with something that's evolving every day, and every hour of every day, at a speed that's really difficult to contain? And there are also two schools of thought, really. One school of thought, which I remember, Bryan, I think Sam Altman from OpenAI recently expressed, is: look, we're so busy and this is running so fast and is so powerful, we'll come back and we'll get to that later. Like, we're off, you know, creating these incredibly strong and powerful platforms; we'll have to come back to those matters at a later stage. And it was interesting because when I was in Geneva a couple of weeks ago, you had policymakers, governments, ministers, etc., whose role was to make sure that the frameworks were established around implementing the framework on ethical AI, and they were really struggling with this reality, I mean, literally struggling with this reality of trying to get ahead of the knowledge, not only the knowledge, but also the policy work that needed to be put in place, the frameworks, the regulations, and then rolling that out across industry. At the same time, you have technology firms, and particularly firms with specialization in AI, and you've seen their incredible value chain skyrocketing in recent months, really racing ahead. And yet you've got policymakers trying to get their hands around this and trying to even understand it. You've got the same challenges in academia, don't you, Bryan, with academia also trying to create curricula that, by the time they're published, may already be behind the eight ball in terms of where AI has taken us. So the concern I would have is around the ethical side of AI: keeping it human-focused and in the best interest of humanity, meaning that the focus of the benefits should be around making our lives more efficient, more effective and more equitable. And there is a risk, of course, that because of prejudice that is potentially built into the AI itself, it could continue to manifest that across the community, and that's something that's difficult to get ahead of unless it's created with that lens at the very beginning.
Bryan: No, I think that's absolutely correct. It's a good reminder that this is technology we're dealing with, and technology can be something that's used for the good of humanity, but it can also be abused. And we have to keep in mind that the technology is there only for the benefit of mankind, like you said, and to keep that human centricity always in focus as AI is applied to esports. Okay, so last question, I promise you, Paul, as CEO of the Global Esports Federation, what would you wish for the future of esports? And maybe just to make it interesting, on two spectrums: one, a more realistic expectation, and the second one, a moonshot. If your wish could be granted, what would your wish for esports be?
Paul: What a great question. Thanks, Bryan. I love that opportunity. Well, the thing that's so interesting in esports and gaming is that anything that was a moonshot about two weeks ago is now already a reality. It moves so fast. So when you were mentioning a moonshot, I was thinking about the Olympics, and I'll talk about that in a moment, because that would have been considered a moonshot just a few months ago, if not years ago. But what I think the future is, is the globalization of esports as a source of incredibly inclusive, powerful, evocative entertainment, right? So just as you have traditional sport, and just as, for example, in the United States you have the proliferation of leagues and professional sports, it's coming into view that there will be very significant value and that you'll be able to create a very sustainable living through esports. Not only as a player winning significant prize money, but also as a content creator, as a game developer, as a marketeer, as an event organizer, as an academic. There are tons of opportunities. And in fact, Bryan, I was speaking with some friends of mine who are attorneys, actually, and it surprises me, because traditionally I would talk to attorneys and then through conversation it comes out that they're really passionate about gaming, and now maybe they specialize in it, with legal expertise in intellectual property rights or different aspects of it. And I wanted to share that with you, Bryan, because I thought it was interesting that even in a traditional profession, such as the practice of law, there's now a lot of interest in this field as well. What does that mean? That means that we get to manifest our lives how we wish them to be manifested. In the past, if I wanted to go into event management, I would have had to come at it from a certain angle. Now I can do that within esports. If I wanted to be in communications and global media, I might have had to do that in public relations, or in traditional luxury goods, for example, or consumer products. Now I can do it inside esports. So I think the future is extremely bright and relatively limitless in terms of being able to build my career, finding something I want to do with my profession and my skills, but being able to do it in something that I love doing. And that's a blessing, I think, Bryan, that very few of us have: that it has happened in our lifetime that we're able to actually have the life that we want, create the professional conditions we want, and earn a living by doing it in the field that we love. The moonshot which you've challenged me on: I was so proud, having come from the Olympic movement in my hometown of Sydney, to now see the reality of the Olympic Esports Games, which was just announced by the IOC a couple of weeks ago and is rapidly evolving. And how interesting is this? At the Paris 2024 Olympic Games, it seems we'll see an announcement of the Olympic Esports Games itself, agreed by the IOC, confirmed by the IOC. And one of the things I think is fascinating, if you think about traditional Olympic sport, is that it took golf 121 years to come back onto the Olympic program, whereas esports, and the Global Esports Federation, which as you said, Bryan, was founded in 2019, here we are just four years later, and not only is esports inside the Olympic movement, but there's actually a separate IP created called the Olympic Esports Games.
If we think about that for a moment, the IOC traditionally had the Olympic Games as their main IP. Yes, the Winter Games; yes, the Youth Games. But now we have the Olympic Esports Games as a separate IP. And what's even more interesting, in the recent press release I read, it said a whole new division, a whole new structure, will be created at the IOC. Rather than trying to fit it into traditional models, a whole new structure will be created. So this was a moonshot. And I think it will be fascinating to see how that evolves, and how a traditional sports organization that just a couple of years ago was really a long way away from this has come to where it is today. And in those very short years, with the Global Esports Federation staging our Commonwealth Esports Championships, the European Esports Championships, the Pan American Esports Championships, and now seeing the evolution of the Olympic Esports Games, what an incredible opportunity that is for athletes, creators and community right around the world.
Bryan: Thank you for sharing that. And I think that's a great statement to make: what was yesterday's moonshot is today's reality in a fast-paced world that evolves because of technology. Thank you again for sharing your thoughts, Paul. It's been really exciting. We look forward to a great future in esports. And once again, thank you for joining us in this series.
Paul: Thank you, Bryan. Thanks very much, everyone.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
AI offers new tools to help competition enforcers detect market-distorting behavior that was impossible to see until now. Paris Managing Partner Natasha Tardif explains how AI features in each of the main areas of competition enforcement: anticompetitive behaviors such as collusion among competitors and abuse of dominance, as well as merger control.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Natasha: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, our focus is going to be on AI and antitrust. AI is at the core of antitrust authorities' efforts and strategic thinking currently. It brings a number of very interesting challenges and great opportunities, really. In what ways? Well, let's have a look at how AI is envisaged from the perspective of each type of competition concept, i.e. anti-competitive agreements, abuse of dominant position, and merger control. Well, first of all, in relation to anti-competitive agreements. Several types of anti-competitive practices, such as collusion amongst competitors to align on market behavior or on prices, have been assessed by competition authorities. And when you look at those in relation to algorithms and AI, it's a very interesting area to focus on, because a number of questions have been raised, and some of them have answers, some others still don't have answers. The French competition authority and the German Bundeskartellamt issued a report sharing their thoughts in this regard in 2019. The French competition authority went as far as creating a specific department focusing on digital economy questions. At least three different behaviors have been envisaged from an algorithm perspective. First of all, algorithms being used as a supporting tool of an anti-competitive agreement between market players. So the market players would use that technology to coordinate their behavior. This one is pretty easy to apprehend from a competition perspective, because it is clearly a way of implementing an anti-competitive and illegal agreement. Another way of looking at algorithms and AI in the antitrust sector, and specifically in relation to anti-competitive agreements, is when one and the same algorithm is being sold to several market players by the same supplier, creating therefore involuntary parallel behaviors or enhanced transparency on the market. We all know how much the competition authorities hate enhanced transparency on the market, right? And a third way of looking at it would be several competing algorithms talking, quote-unquote, to each other and creating involuntary common decision-making on the market. Well, the latter two categories are more difficult to assess from a competition perspective because, obviously, we lack one essential element of an anti-competitive agreement, which is, well, the agreement. We lack the voluntary element of the qualification of an anti-competitive agreement. In a way, this could be said to be the perfect crime, really, as collusion is achieved without a formal agreement having been made between the competitors. Now, let's look at the way AI impacts competition law from an abuse of dominance perspective. In March 2024, the French Competition Authority issued its full decision against Google in the Publishers' Related Rights case, whereby it fined Google again, this time €250 million, for failing to comply with some of its commitments that had been made binding by its decision of 21 June 2022. The FCA considered that Bard, the artificial intelligence service launched by Google in July 2023, raises several issues. One, it says that Google should have informed editors and press agencies of the use of their contents by its service Bard, in application of the obligation of transparency which it had committed to in the previous French Competition Authority decision.
The FCA also considers that Google breached another commitment by linking the use of press agencies' and publishers' content by its artificial intelligence service to the display of protected content on services such as Search, Discover, and News. Now, what is this telling us about how the competition authorities look at abuse of dominance from an AI perspective? Well, interestingly, what it's telling us is something it's been telling us for a while when it comes to abuse of dominance, particularly in the digital world. These behaviors have been so much at the core of the competition authorities' concerns that they've become part of the new Digital Markets Act. And the DMA now imposes obligations regarding the use of data collected by gatekeepers with their different services, as well as interoperability obligations. So in the future, we probably won't have these Google decisions in application of abuse of dominance rules, but most probably in application of DMA rules, because really, now, this is the tool that's been given to competition authorities to regulate the digital market, and particularly AI tools that are used in relation to the implementation of the various services offered by what we now call gatekeepers, the big platforms on the internet. Now, thirdly, the last concept of competition law that I wanted to touch upon today is merger control. What impact does AI have on merger control? And how is merger control used by competition authorities to regulate, review, and make sure that the AI world and the digital world function properly from a competition perspective? Well, in this regard, the generative AI sector is attracting increasing interest from investors and from competition authorities, obviously, as evidenced by the discussions around the investments made by Microsoft in OpenAI and by Amazon and Google in Anthropic, which is a startup rival to OpenAI. So the European Commission considered that there was no ground for investigating the $13 billion investment of Microsoft in OpenAI because it did not fall under the classic conception of merger control. But equally, the Commission is willing to ensure that it does not become a way for gatekeepers to bypass merger control. So interestingly, there is a concern that this new way of investing in these tools would not be considered as a merger under the strict definition of what a merger is in the merger control conception of things. But somehow, once a company has invested so much money in another one, it is difficult to think that it won't have any form of control over its behavior in the future. Therefore, the authorities are thinking of different ways of apprehending those kinds of investments. The French Competition Authority, for instance, announced that it will examine these types of investments in its advisory role, and if necessary, it will make recommendations to better address the potential harmful effects of those operations. A number of acquisitions of minority stakes in the digital sector are also under close scrutiny by several competition authorities. So, again, we're thinking of situations which would not confer control in the sense of merger control rules currently, but that still will be considered as having an effect on the behavior and the structure of those companies on the market in the future. Interestingly, the DMA, the Digital Markets Act, also has a part to play in the regulation of AI-related transactions on the market.
For instance, merger control of acquisitions by gatekeepers of tech companies is reinforced. There is a mandatory information requirement for these operations, no matter the value or size of the acquired company. And we know that normally, information reaches the competition authorities through the notification system, which is only required where certain thresholds are met. So we are seeing increasing attempts by competition authorities to look at the digital sector, particularly AI, through different kinds of lenses, being innovative in the way they approach it, because the companies themselves and the market are being innovative about this. And competition authorities want to make sure that they remain consistent with their conceptions and concepts of competition law while not missing out on what's really happening on the market. So what does the future hold for us now? Well, the European Union is issuing its Artificial Intelligence Act, which is the first ever comprehensive risk-based legislative framework on AI worldwide. It will be applicable to the development, deployment and use of AI. It aims to address the risks to health, safety and fundamental rights posed by AI systems while promoting innovation and the uptake of trustworthy AI systems, including generative AI. The general idea on the market and from a regulatory perspective is that, whether you're looking at it through competition law or more generally as a society, when you're scrutinizing AI, even though there may be abusive behavior through AI, the reality of it is that AI is a wonderful source of innovation, competition, excellence on the market, and added value for consumers. So authorities and legislators should try to find the best way to encourage, develop, and nurture it for the benefit of each and every one of us, for the benefit of the market and for the benefit of everybody's rights, really. Therefore, any piece of legislation or case law or regulation that will be implemented in the AI sector must really focus on the positive impacts of what AI brings to the market. Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Gregor Pryor of Reed Smith’s Entertainment & Media Group in London describes why it’s important for law firms to train their lawyers in how to use AI. Although AI-powered tools do not surpass human lawyers in every aspect of legal practice, their computational power brings immense efficiency gains and can be a powerful accelerator for law firms delivering services.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Gregor: Hi, everybody, and welcome to our new series on artificial intelligence. Over the coming months, we'll explore some of the key challenges and opportunities within the rapidly evolving AI landscape. And today we're going to focus on artificial intelligence within law firms and hopefully give a bit more context about how Reed Smith is deploying artificial intelligence in the delivery of its services to clients. The first thing I would say is that, certainly in my day job as an entertainment lawyer, AI has been a very controversial subject, mostly as it pertains to training. And as we'll discover as I explain more about how Reed Smith is using AI, training, and the ability of law firms to use the data they obtain, are highly contingent on clients agreeing to that training or being comfortable with the manner in which law firms are deploying the technology to improve their services and make them more efficient. So the first thing I want to talk about is how we are using AI and what our future plans are. There's a whole lot of buzz and hype, I think, about how AI is impacting the way that law firms are operating. There have been a number of surveys; most of them say that AI will transform the business of law. And having been in the business of law for 25 years, I've heard that for the whole 25 years, but now feels like the time when it actually is happening, and the decisions that firms are making to adopt AI are having an impact on how they perform. Most of the global 200 firms have policies about how they use generative AI. There's a high level of governance and risk management concerning client data, as I mentioned. But not all of those firms have a policy concerning how they use AI. We've been working and have worked on ours for a number of years and continue to iterate as the technology improves and client perspectives on the use of AI change. We think that there's likely a gap between preparedness for and management of risk and the implementation of AI, and we've been very careful as we prepare our infrastructure to integrate and use AI. Obviously, the bigger the law firm, the more it is able to leverage AI and invest, but not that many firms are working with clients on AI projects. We've just finished a trial of about seven different providers. We've used them on a beta basis through limited rollout. We're not putting all our eggs in one basket. We're trying to figure out which AI has proper utility, which machines generate real-time efficiencies and help us in the delivery of our service. I think it's fair to say that some of them are nowhere near as impressive as we'd hoped, but we are still continuing to invest. One of the things that we've been very careful about is organizing our data and making sure it's hygienic. That means not using client data for AI without permission, and also making sure that we have organized ourselves so that our data doesn't get unnecessarily or incorrectly commingled. One of the other topics that comes up is how AI can improve efficiency and productivity. I think there are some really obvious ways, such as summarizing documents, helping translate things, creating text or drafting. It gets a bit more complex from there. I think decision tree software has been around for a long time, but that's evolved so that you have much more of a chatbot-style interaction with AI. Of course, how do we address ethical, security and confidentiality concerns? These are all going to be developed in conjunction with clients.
We don't think that we have the right or privilege of dictating to our clients how we use technology. But we do want to be a first mover where we can be, because that will give us an advantage over other providers, other firms. We think that training our lawyers in how to use AI is almost as important as the technology itself. It's no use having these incredible tools but not having lawyers who are well-versed in using them. And indeed, that is a trend that we're seeing within our clients as well. So we've been delivering prompt training and use case training for lawyers, because unless lawyers themselves change the way they work and adopt the use of the machines, then the machines will be pretty useless. One of the things I often get asked is whether I perceive AI as a competitive advantage for law firms. I'm not sure that it's necessarily only AI. And I do think there are quite tight limits on the use of AI. One bit of feedback we had from a particular client was that they preferred their lawyers to understand how the law works, and just reading case summaries generated by AI was no substitute for human learning. And to a great extent, I agree with that, although I do think that there are ways in which lawyers learn, and the change in the way that we practice over the past 20 years has been huge. I remember, I'm making myself seem very old, but I remember going to photocopying rooms and delivering faxes and carrying around big bundles of documents. Things that seem very arcane to us now in the early 2020s, but actually, when I qualified 20 years ago, they were real things that we had to do. We've seen some clients use AI in really interesting and clever ways. We're fortunate that many of the clients we represent are huge technology companies who already have very sophisticated artificial intelligence applications and utilities. So that gives us an insight into how they expect us to adapt and align with them. One of the challenges we face is that exact issue, given that many of our clients have their own thoughts, processes, ethics, rollouts and timetables for AI, and we have to align our delivery with what they're doing. And then finally, I guess one of the things that I should talk about is the limitations of AI. What can it do and what can it not do? Where would we struggle to use it? I still think that when it comes to advocacy and negotiation, particularly live and in real time, there's still a very strong place for lawyers with talent, with a keen intellect, who are quick on their feet, and who can see through a client's objectives very capably. I don't see a computer doing that anytime soon. We can look at case predictors or outcome predictive models, certainly to try and streamline the transactional process or find a way to reduce litigation costs. But having lawyers that can think matters; however powerful the AI is, the human brain is typically much more powerful. And so long as we are able to educate ourselves, use the technology to our advantage, and leverage it for a better outcome for our clients, then I think AI can be an incredibly powerful accelerator for us in the delivery of our services at retail. As to whether there are any particularly outlandish or wild predictions related to the use of AI in the next decade or so, I could see a couple of things. The first is for low-level legal services.
The use of “robolawyers” or entirely automated services to conclude transactions or effectuate things that you want to do as a consumer or an SME: almost certainly that's going to happen. We're already seeing it in the venture capital space; we're already seeing it with some of the automated services coming out of Silicon Valley. Why would you pay hundreds or thousands of pounds or dollars for a lawyer when you can just have a machine that would give you an outcome, provided you're prepared to accept a little bit of risk? I could see robolawyers being a standalone service; that's one thing I could see happening. The other thing that I could see happening within the next decade is the abolition of, or at least firms operating with complete abandonment of, the hourly rate. It's something that Reed Smith has been examining closely. We've got client value teams. We've got a bunch of alternative billing models. We give clients options and choices about how they might want to pay for what we're delivering. But for law firms, there's a real opportunity there. If we've invested in and can leverage technology to deliver something much more quickly or to a better standard than a competitor, it doesn't necessarily follow that we would charge less. It may be that we can deliver it for a lower cost and our margin would increase, but that doesn't necessarily mean we're going to price it as if there's a race to the bottom on pricing. So I'm quite bullish on some of the opportunities that are afforded by AI if a firm can gain a competitive advantage. If everyone's using the same technology, then inevitably the prices will go down, because clients will ask: why would I pay more if you're all using the same thing? So the prize, if you like, will be for law firms who can figure out ways to create products or services more efficiently and to a higher standard, and then still be able to charge well for them and increase their margin. Law firms are typically high-margin businesses. We don't want to be greedy. We want to deliver great value to clients. But equally, if there are opportunities to make money through the use of AI, we'll pursue them. And I think one of the ways to do that is to really critically challenge and question the use of the hourly rate, which has a bunch of built-in inefficiency.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Continuing our new series on artificial intelligence, Christian Simonds and Henry Birkbeck discuss the use of AI in film and television. AI features in every stage of production – from pre-production, through production, to post-production – and reliance on AI will continue to increase as it evolves. The discussion centers around the legalities that management in the industry should be aware of, as well as the recurring questions and issues raised by clients in both the UK and U.S.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Henry: Welcome to our new series on AI. Over the coming months, we will explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in film and TV. My name is Henry Birkbeck. I'm a senior associate in the London office at Reed Smith, and I'm speaking today with Christian Simonds, who is a partner in the New York office. Christian and I have previously written an article on AI in the film and TV space, and there's probably quite a lot to mention and not a huge amount of time to cover it. But I guess I'll start maybe Christian just by asking, you know, have you seen recurring themes coming up with clients asking about AI in the film and TV space so far?
Christian: Yeah, I think in terms of, you know, the film and TV industry as a whole, it's kind of always been at the forefront of technology, particularly in terms of how to properly utilize it, not only from a budgetary perspective, but also from a creative perspective, right? And obviously, you know, AI has been such a hot topic, particularly with respect to the guilds during the strikes of 2023. So there is a lot to kind of unpack in terms of how it's being integrated into film and TV production. You know, I think the general consensus is that about two thirds of every budget for an AV project in the film and TV space is made up of labor, right? And particularly now, in relation to the economy and where it is, there's been heightened scrutiny of each line item in a particular budget. And as a result, it's kind of driven the need for, or reliance on, AI as a potential solution to mitigating certain costs, labor costs. And again, I know it's not ideal from an individual employment perspective, but from an overall budget perspective, it is something that I see the studios and production companies on the independent level embracing as it relates to trying to drive down the costs of a particular budget. AI kind of plays into each stage of production: it plays into development, pre-production, production, and post. And when you're navigating that as it relates to the legalities of how it's used, yeah, there are certain issues that come into play vis-a-vis what the guilds have agreed to in their most recent ratification at the end of the strikes, and how to ensure that you're adhering to what those requirements are, or at least aware of what they are. In addition to that, there are copyright considerations and other considerations in terms of how AI is used in that development stage. So there's a lot to unpack within the lifespan of a production as it relates to AI and how it's being used. And the reality is it's going to be used, and it's going to continue to evolve and be relied on to a greater degree, particularly with respect to certain elements of the production process: on the VFX side in particular, obviously, and certainly in the development stage and the editing stage. And there are certain things where it really has tremendous value from a timing perspective, cutting down production timing, and other elements that I think will benefit the production as a whole. But yeah, in terms of legalities, there's a lot to kind of unpack there. I'm happy to touch on each of those in the time that we have.
Henry: Well, I think the first one, which you did touch on, obviously, is the guild strikes of 2023. And clients and those people in the industry in the US were probably a lot closer to it, but for the benefit of those of us that weren't in the US, do you want to give a very quick recap of kind of where they ended up?
Christian: Absolutely. Yeah. In terms of the Screen Actors Guild, SAG, there are really four main components to how they regulate artificial intelligence. So you've got your employment-based digital replica, which is basically a replication of a performer's employment or participation in the production. So it's literally taking the performer himself and portraying him in a scene that they didn't actually shoot, or a performance that they didn't actually shoot. A good example is if you see someone in a movie talking to themselves: that second digital replication is an employment-based digital replica. Right. What does that mean in terms of what you need to do vis-a-vis SAG? It means you need to get mandatory consent from the performers when you're going to do that. So it needs to be disclosed at the outset, either by the performer himself, or, if the performer is no longer around, you need to get consent from an authorized representative of his estate or the union itself. The contracts need to be reasonably specific as to the description of how it's going to be used. And if you're going to use it outside of the project, i.e. beyond a kind of one-picture license, you're going to need additional consent to do that. And, you know, the performers will need to be compensated for that digital creation as though it was themselves, right? So you need to take into consideration that residuals will obviously need to be paid on whatever amounts are paid in connection with that. And then you've got the independently created digital replica, which is basically a digital replica created using materials that the performer has provided, in a scene that they didn't actually shoot, right? So you're not actually using a previous performance in the movie itself, but you're literally digitally replicating the performer in a scene that he didn't otherwise participate in. So again, you need consent from the performer when you do that. You need to be reasonably specific in terms of how you're describing that use in the contract. Again, obviously, he's entitled to compensation for that use. And obviously, that compensation is subject to payment of fringes.
Henry: Has this been received positively, I guess, in the industry since? Because I know that in 2023, you know, there was a big kind of reputational issue around AI, and there was a lot of speculation about whether these protections of performers' rights were being overblown a little bit. Or do you think, you know, people are comfortable with where they landed?
Christian: I think with respect to those two elements, I think yes, right? I think that the general consensus is they do adequately protect the actors when you're exploring those types of digital replica usages. I think the real wildcard here, or the area of real concern, is around generative AI, right? So basically, taking data, materials, prior performances, facial expressions, their image and likeness, and actually creating new content from that material from an actor. I think that's really where we start to enter into an area where people aren't necessarily comfortable, particularly on the talent side. And again, the guild is clear that if you're going to do that, you need to get consent from the actor, right? So if you're going to use his image or likeness or prior performances to feed a gen AI tool to create a new piece of content from those materials, you're going to need consent from that actor. You've got to notify the union when you're going to do such a thing, and the compensation around that usage is usually specifically negotiated with the talent themselves. And you basically need to continue to keep that talent informed about how those materials are being used vis-a-vis the gen AI. So that really is a touchy area. I mean, I think a lot of people are familiar with what happened with Scarlett Johansson, when she basically said that her voice was being utilized for purposes of ChatGPT. They claimed that they had used the voice of an actress they had brought in that sounds similar to hers, but that it wasn't her voice. So it just shows you the heightened sensitivity on the talent side in terms of how their image, likeness and voice are being used within the gen AI space. So yeah, there are a lot of sensitivities around that usage.
Henry: Okay. So it's interesting that there's an ongoing obligation as well, which I guess makes sense, but, you know, it's a burden, I guess, on the production company. And the other question I had relating to guilds was the Writers Guild. And I think the other thing that seemed to make the headlines internationally was about how, particularly, large language models and generative AI tools can be used for screenwriting, and at what point. There's always a question with AI and copyright and ownership. And I was wondering if you could speak a little bit on where they landed in terms of who, if anyone, can own a script that has been written by an AI tool.
Christian: You know, within the US, the copyright law is clear in the sense that, you know, anything that's gen AI-created is not copyrightable, right? And if you are utilizing elements of materials that were gen AI-created in your copyrightable material, you have to disclose what those materials are, and those materials will not be protected within the overall piece of IP, right? So you could copyright the script, but if certain elements of it were pulled from gen AI sources, those elements of your script will not be protectable. So the copyright law is fairly clear on that. In terms of the WGA and how they're trying to protect their writers from being replaced by AI, it's saying, hey, obviously with respect to signatory companies, you have to be clear that gen AI-produced material will not be considered literary material under the WGA, right? So what does that mean? You can't give an AI-generated screenplay to a writer and say, hey, go rewrite this, so that the writer potentially won't be entitled to separated rights because they're writing something that's based on underlying materials; the WGA says absolutely not. We're basically going to exclude that gen AI-created screenplay or treatment or bible from the separated rights discussion and say, hey, the writer who contributes those first writing services, for purposes of taking that gen AI material and either polishing or rewriting it or developing it further into the screenplay, that will be considered the first step of the writing process, and that writer will be the writer entitled to the writing credits around the material moving forward. So it's almost like it's fine for studios to provide gen AI materials for purposes of writers developing a screenplay or assisting them with their writing services, and again, if they're going to do that, they need to fully disclose that to the writer when they're doing it, but those materials will basically be excluded from the chain, right, when the WGA is considering what makes up the overall literary material that's going to be considered for purposes of writing credits, residuals and so forth. And I think, again, you know, it's a good place to land for writers. It doesn't necessarily solve the AI issue as a whole, because it's still going to be utilized for purposes of coming up with ideas, potentially, on the studio side, because it's something that they can quickly do, and it also has added benefits in terms of saying, hey, it can analyze a screenplay or it can analyze an idea and say, hey, here's the likelihood of the success of this idea or screenplay based on the history of screenplays like it in the past. Or, hey, I think the second chapter of this story should be changed this way because it's going to be better received because of X, Y, or Z, right? I think that tool will be beneficial. So it's not totally carving out the usage of AI as it relates to the development of literary material. And I think it could be utilized in a positive way, which is great. I just don't think it will ever be able to fully remove human writers from the process. I think the WGA did a sufficient job ensuring that.
Henry: Yeah. It's interesting, because obviously we're talking mostly about the US here, but I think most other markets in the film and TV industry were watching very closely what happened in 2023. And certainly in the UK, it's kind of largely been a case of following suit. You know, we haven't had the same kind of high-profile developments, but PACT, which is the Producers Alliance in the UK, has since issued guidance on AI and the use of AI in film and TV productions. And, you know, they kind of stop short of taking a hard stance on anything, but they do talk about being very mindful and aware of the protection of the rights of all the various people that might be involved and how to integrate AI into production. So I think the US position is really the kind of market leader for this. And there's a slight nuance in that the copyright law is slightly different in the US and the UK, and, you know, in how AI relates to that. And in lots of other jurisdictions, of course, there are implications there. But I think so far, the UK market seems to be broadly following what's happening in the States.
Christian: It's interesting, too, because the states here are starting to take a position on AI. And, at least in New York, there are a few bills that are being considered currently. There are three bills, one of which I think probably has a likelihood of getting passed, which deals with contracts around the creation of digital replicas. And it kind of tracks what SAG has already said. But basically, any contract between an individual and an entity or individual for the performance of personal or professional services, as it relates to a new performance by digital replication, is contrary to public policy and will be deemed null and void unless it satisfies three conditions, one of which is a reasonably specific description of the intended use of the digital replica. But it adds an interesting element, which is that the person on the other side, whose performance is potentially being digitally replicated, needs to have been represented by counsel or be a member of a guild, which is interesting, right? So it kind of adds a little extra level of protection. And again, this is going to be state-specific. So it'll be interesting how this kind of impacts other states, or what other states are potentially considering. And then there are two other bills that are currently in place. One is on the advertising side, you know, in connection with disclosing synthetic media as it relates to advertising. But the other one that's interesting, just from a film financing perspective, is that they're taking a position that, you know, productions that spend money on AI digital replication or AI usage might not qualify for the New York tax rebate, which is very interesting.
Henry: Oh, really? Wow.
Christian: They think that that one won't necessarily pass. Certainly the first one will because it's kind of already in line with what SAG has said. But yeah, that second one, I think, will probably get shelved. But just an interesting one to consider.
Henry: Yeah, and it's really kind of, I guess, both of them showing that there is this protection element being added in and trying to... It's almost like holding these AI devices slightly at a distance to stop them kind of becoming a source of, I don't want to say evil, in the industry, but kind of going too far and overstepping the mark.
Christian: Absolutely. And it's interesting, too, because it's almost like, obviously you've got your defined protections within SAG and copyright law, and now obviously what's being considered by legislation. But the reality is a lot of this is being driven just by public policy, in terms of the public's rejection of AI to a degree, right? I think people are generally scared of it, right? So the knee-jerk reaction is to say no, let's continue to promote, you know, the employment of real people, right?
Henry: Yeah.
Christian: I think, you know, this also plays into how AI is used, you know, in films. And again, you know, I think it's going to evolve to a point where it'll be tough to distinguish between AI and real, right? But, you know, a good example being The Irishman or some other films that have used significant AI to basically de-age people. You know, people see it and are like, that looks ridiculous, right? Like, I think there's a general knee-jerk reaction to doing it, right? And whether that changes over time based on how the AI technology evolves, particularly from a visual perspective, is TBD. But yeah, I think a lot of it is driven by public perception of AI, right?
Henry: Yeah, yeah. And I think, you know, it is interesting what you were saying before, and we've seen this in the UK as well, is that initially it was like, okay, well, how is this going to affect producers, and, you know, there's kind of efficiencies there. But actually, we're seeing studios and commissioners really embrace it and look for ways to cut the cost of production as well. And, you know, I think it's just going to, like you say, it's going to touch basically every aspect of film and TV production and streamline things.
Christian: Yeah, I mean, I think there are real positives in terms of how it can be integrated into the production process. I know we touched on a few, but, you know, also like dubbing, right, localization of content and, you know, basically extending the reach of a piece of AV IP, right? So if you can do it in a way where it looks natural, because I know there's always kind of been a visceral reaction to seeing something that is dubbed really poorly, but if it can evolve to a point where it's almost seamless, it may have a better impact in terms of the reach of content. Again, I think there are different schools of thought there in terms of whether investing in that makes sense in certain places, or if it's just easier to dub it and release it, and how much of an impact it's actually having. But I do think in certain jurisdictions or certain areas, seamless localization could have value for the reach of a particular piece of AV IP.
Henry: Yeah, yeah, absolutely. And it could kind of move things geographically as well. I know in the UK we have this a lot, where, you know, the cost of post-production is a lot cheaper in countries close to the UK. And so as a result of that, you quite often see productions where, you know, 80% of the money will be spent in the UK, and actually the final 20%, which, you know, often doesn't qualify for the tax credit over here anyway, gets outsourced to somewhere in Europe, where it is much cheaper to have the post-production carried out remotely, or even to Canada, or somewhere where there's a kind of better incentive package. And actually, if you could streamline that whole process, and you've got, you know, a company or an AI tool that can, you know, do all the grading and the cutting and all this kind of stuff, these previously quite labor-intensive activities, then there's no reason why you couldn't bring a lot of that production cost back onshore and reduce it. So it may kind of move where people are located in the market as well.
Christian: Yeah, and it's not necessarily, you know, solely an issue for, you know, scripted projects. You know, I think also in the US, it's even kind of extended into the doc space, right? So I know, you know, the APA in the US, which is the Archival Producers Alliance, which is basically a documentarians' alliance, basically said as well, like, hey, just be careful how you use AI in this space, even though it's not, you know, governed by a guild necessarily, right? Or how it's used in the non-scripted space, which doesn't really have guild coverage depending on what it is. They're at least saying, hey, just be careful how you use it, because, you know, what we're doing, particularly in terms of how we're depicting history or events, historical events or things that have happened, you know, with AI, you don't want to change it to a point where you're basically changing the perception of history, right? Or generating things that are so oblivious to the realities of what actually happened, right, that it might potentially have a negative effect. And it's funny, it's more of an ethical line that documentarians are trying to be mindful of as AI is integrated into that space as well. Because look, when you don't have primary source material to utilize for purposes of trying to depict something in a documentary, yeah, it may be easy to recreate it using AI, right? Generated by AI. But at the same time, that may have implications that you might not have thought about, which is, you know, how is this telling a story that might not actually be accurate to the underlying facts?
Henry: I mean, I think that's a pretty good kind of starting point. Obviously, these are issues and discussion points that have a lot of depth to them and have been discussed a lot in the industry internationally already. And we're kind of at a point now where it's like, let's wait and see how this stuff shakes out in practice. And certainly, as we've discussed, the US has kind of landed in a lot of areas, so there is now a bit of a direction. So it'll be interesting to see how things unfold. And I know, for the most part in the UK, the production industry sort of follows what's happening in the States. And in respect of international productions, they have to be aligned to a degree. So it will be really interesting to see how this develops in the coming months and years.
Christian: Yeah, for sure. And like you said in the beginning, there is no question that it is a part of the filmmaking process and will continue to be so to an ever-increasing degree. So it's kind of unavoidable. And I truly think it could be utilized in a way that's beneficial to filmmaking, both from a budgetary perspective, but also like, hey, if you can reallocate a bunch of the spend from what were otherwise labor-intensive, time-consuming elements of production, reallocate that to talent or other elements of the process, you can compensate certain individuals in a way that they weren't compensated before. I think that can be beneficial as well. And I think, you know, there will be a point where, you know, it'll really help independent filmmaking in particular, right? Because budgetary constraints are always paramount in that space. And if you can do things that previously cost a ton of money for cents on the dollar using AI, moving forward, to the extent it kind of evolves quality-wise, I think you'll see an uptick in really quality, you know, independent filmmaking. Again, there's never going to be a universe, in my opinion, that totally circumvents the utilization of real people, but AI can certainly be utilized in a beneficial way to help the process.
Henry: Great. Thanks very much, Christian. And thanks everyone for listening. Tune in for the next episode on our series on AI.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
As part of our new series on artificial intelligence, in the coming months, we explore the key challenges and opportunities in this rapidly evolving landscape. In this episode, our labor and employment lawyers, Mark Goldstein and Carl de Cicco, discuss what employers need to know about the use of AI in the workplace and the key differences and implications between the UK and the U.S.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Mark: Hi, everyone. Welcome to the Tech Law Talks podcast. We're starting a new series on artificial intelligence, or AI, where over the coming months we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the U.S. and U.K. workplaces. My name is Mark Goldstein. I'm a partner in Reed Smith’s Labor and Employment Group, resident in our New York office. And I'm joined here today by my colleague, Carl De Cicco, from our London office. And we're going to talk today about some of the U.S. and U.K. implications of AI as it relates to the workplace. So, Carl, let me kick it over to you. Can you tell us, you know, from a high level, what do employers in the UK need to know when it comes to AI-related issues in the workplace?
Carl: Thank you, Mark. So, yes, my name is Carl. I'm a partner here in the London Employment Group of Reed Smith. And essentially, I think the AI issues to be concerned about in the UK are twofold. The first is how it pertains to day-to-day activities, and the second is how it relates to the management side of things. On the day-to-day activities point, that's hopefully something people are starting to see in their own workplaces: the use of particular generative AI programs or models to help generate content. That's obviously increasing the amount of output individuals can have, so on the one hand it's quite good; on the other hand, there might be some issues to look at. For example, are people being overly reliant on their AI? Are they simply putting the request in, and whatever is churned out by the AI system is being submitted as the work product? If so, that could be quite concerning, because AI is obviously a very useful tool and is sure to continue improving as time goes on, but where we stand right now, AI is far from perfect, and you can get what are known as hallucinations. That seems to be quite a nice term of art for what are effectively errors: conclusions drawn on the basis of information that doesn't exist, or quotations of things that do not exist either. So really, the content that's produced by AI should be seen as something that's collaborative with the worker involved in the matter, rather than something which AI should be totally responsible for. See it as a first pass rather than the finished product. You should be checking the product that comes out, not just making sure that sources stack up and the conclusions trace back to the data underneath, but also making sure you're not getting to a stage where there might be plagiarism. AI takes what is available on the internet, and that can lead to circumstances where somebody's very good work that is already out there is simply being reproduced, if not word for word then substantially. That can obviously lead to issues not just for the person who's submitting the work, but for the employer who might use that particular piece of generated work for something that they're doing. Other benefits could be things like work allocation. One of the issues that people look at in the DEI space is whether opportunities for work are being fairly and equally distributed, and whether people are getting enough of a look-in at work, both in terms of amount and quality. Obviously, if you have a programme which is blind to who the work is going to, there's potential for that work to be more fairly distributed, so that those who don't often get the opportunity to work on particular matters actually find themselves on the kind of work they weren't previously dealing with and would like to get experience of. Now, that's the positive side of it. The potential negative there is that there might be some bias in the AI that underpins that resourcing program. So, for example, it might not pick all the individuals who are less occupied than others in the way a business might, with a view to what's coming up over the next week or two. It might not pick up quite how the quality of work should be viewed through all particular lenses; it might have a particular skew on how quality of work is viewed. And that could lead perhaps to an individual being even more pigeonholed than before.
So all of these things are potentially positive, but they need to be underpinned by essentially a second human checker. So whilst there are many, many positives, it shouldn't be seen as a panacea. Well, how does that hold up against what you're seeing in the States, particularly New York?
Mark: I think that's absolutely right, Carl. Similar principles apply here in the US. By way of background, it's worth going through where I've seen AI infiltrate the workplace, and I'll distinguish between traditional AI and generative AI. We've seen AI be used by employers in the U.S. on a whole host of fronts: headhunting, screening job applicants, running background checks, conducting job interviews and coming up with a slate of questions, as well as things like performance management for employees and even selection criteria in deciding which employees to select for a reduction in force or mass layoff. I've also seen employers use AI in the context of simple administrative tasks, like guiding employees to policy documents or benefits materials, and in creating employee and workplace-related agreements and implementing document retention and creation policies and protocols. In terms of generative AI, which, as you noted, is more on the content creation front, I've certainly seen that used by employees to translate messages or documents and to perform certain other tasks, including creating responses to manager inquiries and drafting more substantive documents. But as you rightly note, just as in the UK, there are a number of potential pitfalls in the US. The first is that there's a risk, as you noted, of AI plagiarizing or using a third party's intellectual property; especially if the generative AI output is going to be used in a document that's outward-facing or external, you run substantial risk. So reviewing and auditing any materials that are created by generative AI, among other things to ensure that there's no plagiarism or copying, especially when that material is going externally, is incredibly important. Beyond plagiarism, simply reviewing the content to ensure general accuracy also matters. There was a story out of New York federal court last summer about an attorney who had ChatGPT help write a legal brief and asked ChatGPT to run some legal research and find some cases. Ultimately, the case cites that were provided were fictional, not actual cases that had truly been decided. So it's a good reminder that, as Carl said, while generative AI can be useful, it is not an absolute panacea, and its output needs to be reviewed thoroughly. And then, similarly, you run a risk, if employees are using certain generative AI platforms, that the employee may be disclosing confidential company information or intellectual property on that third-party platform. So we want to make sure that even when generative AI is used, employees are doing so within the appropriate confines of company policy and their agreements covering things like confidential information, trade secrets and intellectual property. So I think it's important that employers look to adopt some sort of AI and generative AI policy, so employees know what the expectations are in terms of what they can and, equally if not more importantly, what they cannot do in the workplace as it relates to AI and generative AI. And certainly we've been helping our clients put together those sorts of policies so employees can understand the expectations.
Carl, we've talked so far kind of generally about implications for the workplace. Are there any specific pieces of legislation or regulations on the UK side of things that you've been monitoring or that have come out?
Carl: The approach of the UK government to date has been not to legislate in this area, in what I think is an attempt to achieve a balance between regulation and growth. The plan so far, I think, has been to introduce at some point a voluntary self-regulatory scheme which bodies sign up to. But we're recording this in June 2024, less than one month away from a UK general election. So matters of AI regulation and legislation are currently on the back burner, not to be revived for perhaps at least another two to three months. There is still, of course, a lot of interest in this area, and the UK TUC, which is a federation of trade unions in the UK, has published a sort of framework proposal for what the law might look like. This is far from being legislation, and obviously there are many hurdles to pass before it might even come before Parliament, and, if it is put before Parliament, whether it's approved there. But it looks at things very similar to what the EU is looking at, that is, a risk-based approach to legislation in this area. It draws a distinction between regular decision-making and what it calls high-risk decision-making. High-risk decision-making is really shorthand for decisions which might affect the employment of an individual, whether that's recruitment, disciplinary decisions or termination decisions. Essentially, all the major employment-related decisions are to go through a system of checking, so you couldn't, for example, rely under the framework on a decision made purely by AI. It would be required that an individual sits alongside that, or at least only uses the AI tangentially to the decision that they're making. There would also be things like no emotion recognition software being allowed. So, for example, if you were to have a disciplinary hearing and that's to be recorded, you could use software designed to pick up on things like inflection and word pattern, things that might infer a particular motive or meaning behind what's been said. What this framework proposal says is that that kind of software or programming couldn't be used in that kind of setting. So what happens in the UK remains to be seen, but I think you guys are a bit further ahead than us and actually have some law and statute. How are things working out for you?
Mark: We've seen a lot of government agencies, as well as state legislatures, put an emphasis on this issue in terms of potential regulatory guidance or proposed legislation. To date, there has not been a huge amount of legislation passed specifically relating to AI in the workplace; we're still at the phase where most jurisdictions are considering legislation. That said, there was an extremely broad law passed by New York City a few years ago, which finally went into effect last July. We could have an entirely separate podcast just on the nuances of the New York City law, but in a nutshell, what it does is bar employers from using an automated employment decision tool, or AEDT, to screen job candidates when making employment decisions unless three criteria have been satisfied. First, the tool has been subjected to an independent bias audit within the year prior to its use. Second, a summary of the most recent bias audit results is posted on the employer's website. And third, the employer has provided prior written notice regarding use of the AEDT to any job applicants and employees who will be subject to screening by it. If any one or more of these three criteria aren't satisfied, then an employer's use of that AEDT with respect to any employment decisions would violate the New York City Human Rights Law, which is one of the most employee-friendly anti-discrimination statutes in America. Other jurisdictions have used the New York City law as somewhat of a model for potential legislation. We've also seen the Equal Employment Opportunity Commission, or the EEOC, weigh in and issue guidance that, though not necessarily binding, strongly cautions employers with regard to the use of AI and the potential for disparate impact on certain protected classes of job applicants and employees, and generally recommends that employers conduct periodic audits of their tools to ensure no bias occurs. Carl, do you have any final thoughts?
Carl: So whilst we're still a long way from legislation in the UK, there are things employers can be thinking about and doing now to prepare themselves for what I think will inevitably be coming down the road. Just a few suggestions on that front. Establish an AI committee, so the business takes ownership of how AI is used, whether that's in the performance of day-to-day tasks, content generation and such. As Mark said earlier on, set out what can be done and what checks should be carried out, ensuring that there is a level of quality control. Also, in terms of decision-making, ensure there is a policy employers can look to, both so that they do not fall foul of any future rules and so that, if any decisions are challenged in future, they can not only look back on the measures they've taken but show that those measures are consistent with a policy they've adopted and applied on an equal basis for all individuals going through any particular process that may give rise to complaints. They might also, for example, conduct a risk assessment and audit of their systems. One of the things that will be key is not just saying that AI was used in a particular process, but knowing how that AI actually worked and how it filtered or made the decisions that it did. So, for example, if you want to be able to guard against an allegation of bias, it would be good to have a solid understanding of how the AI system that gave rise to the decision in dispute made its determination in favour of one individual over another. That will help the employer demonstrate, first of all, that they are an equal opportunities employer and, in the event of a real challenge, that discrimination didn't occur. So those are the kinds of things employers can be thinking about and doing now. What kinds of things do you think people on your side of the pond might be thinking about?
Mark: Yeah, so I think similar considerations apply for U.S. employers. Among them: if you're going to use an AI tool, consider the pros and cons of building your own, which some employers have opted for, versus purchasing from a third party. If purchasing from a third party, particularly given the EEOC's and other agencies' stated interest in scrutinizing how tools might create some sort of discriminatory impact, consider including an indemnification provision in any contracts that you're negotiating. And in jurisdictions like New York City, where you're required to conduct an annual audit, but even outside New York City, especially given that it's been recommended by the EEOC, consider periodic auditing of any employee and company AI use to ensure, for instance, that tools aren't skewing in favor of or against a particular protected class during the hiring process. And again, I strongly recommend developing and adopting some sort of workplace AI and generative AI policy. Thank you all for your time today. We greatly appreciate it. Thank you, Carl. And stay tuned for the next installment in this series.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Anthony Diana and Therese Craparo are joined by John Collins from Lighthouse to provide an overview of some of the challenges and strategies around data retention and eDiscovery with Microsoft’s AI tool, Copilot. This episode explores Copilot’s functionality within M365 applications and the complexities of preserving, collecting and producing Copilot data for legal purposes. The panelists cover practical tips on managing Copilot data, including considerations for a defensible legal hold process and the potential relevance of Copilot interactions in litigation.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Anthony: Hello, this is Anthony Diana, a partner in the Emerging Technologies Group at Reed Smith, and welcome to the latest Tech Law Talks podcast, part of our ongoing podcast series with Lighthouse on Microsoft M365 Copilot and what legal departments should know about this generative AI tool in M365. Today, we'll be focused on data retention and e-discovery issues and risks with Copilot. I am joined today by Therese Craparo at Reed Smith and John Collins of Lighthouse. Welcome, guys. So, John, before we start, let's get some background on Copilot. We've done a few podcasts already introducing everyone to Copilot, so if you could just give a background on what Copilot is generally in M365.
John: Sure. So the Copilot we're talking about today is Copilot for Microsoft 365. It's the experience that's built into tools like Word, Excel, PowerPoint, Teams and Teams meetings. And basically what it is, is Microsoft running a proprietary version of ChatGPT that they provide to each one of their subscribers that gets Copilot. And then, as the business people are using these different tools, they can use Copilot to help generate new content, summarize meetings, create PowerPoints. And it's generating a lot of information, as we're going to be talking about.
Anthony: And I think one of the interesting things that we've emphasized in the other podcasts is that each M365 application is slightly different. So, you know, Copilot for Word is different from Copilot for Exchange, and they act differently, and you really have to understand the differences, which we talked about generally. So, okay, so let's just talk generally about the issue, which is retention and storage. So, John, why don't you give us a primer on where is the data generally stored when you're doing a prompt and response and getting information from Copilot?
John: So the kind of good news here is that the prompts and responses, so when you're asking Copilot to do something or chatting with Copilot in one of the areas where you can chat with it, it's putting the back and forth into a hidden folder in the user's mailbox. So the user doesn't see it in their Outlook. The prompts and responses are there, and that's where Microsoft is storing them. There are also files that get referenced that are stored in OneDrive and SharePoint, which we may talk about further. But in terms of the back and forth, those are stored in the Exchange mailbox.
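A quick way to see what John is describing in your own tenant is to list a custodian's mail folders, including hidden ones, through Microsoft Graph. The sketch below is a minimal, unofficial illustration: it assumes an app registration with the Mail.Read application permission, the token and user address are placeholders, and the exact folder names Microsoft uses for Copilot interactions vary and should be confirmed by your own testing.

```python
# Minimal sketch (not a supported eDiscovery workflow): enumerate a user's mail
# folders, including hidden ones, via Microsoft Graph to confirm where Copilot
# prompt/response items are landing in your tenant. Assumes an app registration
# with Mail.Read application permission and a valid access token; folder names
# for Copilot data vary and should be verified by your own testing.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token from your auth flow>"   # placeholder
USER_ID = "[email protected]"                  # placeholder custodian

resp = requests.get(
    f"{GRAPH}/users/{USER_ID}/mailFolders",
    params={"includeHiddenFolders": "true", "$top": 100},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for folder in resp.json().get("value", []):
    # isHidden flags folders the user never sees in Outlook
    print(folder.get("displayName"), "| hidden:", folder.get("isHidden"))
```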
Anthony: That's helpful. So, Therese, I know we've been working with some clients on trying to figure this out and doing testing and validation, and we've come across some exceptions. Do you want to talk about that process?
Therese: I think that's one of the most important things when we're talking about really any aspect of Copilot or, frankly, new technology, right? It's constantly developing and changing. And so you need to be testing and validating and making sure you understand how it's working. So as you said, Anthony, we found early on, when our clients first started using Copilot, that the prompts and the responses for Outlook were not being retained in that hidden folder, right? And Microsoft has since continued to develop the product, and now, in most cases at least, we're seeing that they are. Similarly, for those of you who are using transcript-less Copilot, that is, Copilot that can give you meeting summaries and answer questions during the meeting but doesn't retain the transcript, because people had some concerns about retaining transcripts, we're seeing that those Copilot artifacts, the prompt and the response, are currently not being retained in the hidden folder. So a lot of this is that you need to understand how Copilot is working, but also understand that it's a dynamic product that's constantly changing. So you need to test it to make sure you understand what's happening in your environment with your organization's use of Copilot.
Anthony: Yeah. And I think it's critical that it constantly be tested and validated. Like any technology, you have to keep testing it, because the way it works now, even if the data is being retained now, could change, right, if they revise the product. And we've seen this with Microsoft before, where they change where something is stored because, for whatever reason, they decided to change the storage. So if you have an e-discovery process, you just have to be aware of it. Or if you're trying to retain things and you're trying to govern retention, you have to make sure you understand where things are stored. Okay, so John, could you explain how retention presently works with Copilot data that's stored in the hidden folder in Exchange?
John: So what Microsoft has done so far is they've made the prompts and responses we were talking about, so when you're in Word or Excel or PowerPoint, or if you're using the chat function inside of Teams or in general, subject to the same retention that you've set for your one-to-one and group chats. So if you have a 30-day auto-delete policy for your one-to-one and group chats, that's going to be applied to these Copilot interactions. I've heard, and I think you guys may have heard this as well, that Microsoft is going to split those off. It's not on the roadmap that we've seen, but we've heard that they are going to make them two separate things. But right now they're one and the same.
Therese: Yeah, and I think that's the good and the bad news, right, for people who are looking at Copilot and how to best manage it. The good news is that you can control retention. You can set a retention period that's within the organization's control and make the decision that's right for your organization. The bad news is that, right now, it has to be the same as whatever you're setting for Teams chat, which may or may not be how long you would like to retain the Copilot data. So there are some features that are good, that give you a little bit of control to make decisions that are right for the organization, but right now they're only partially controllable in that sense. So you have to make some decisions about how long you need Teams chat, how long you need Copilot, and where the right place in the middle is to meet business needs, right? And also to take into consideration how long this data should exist in your organization.
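To make the coupling Therese describes concrete, here is a purely illustrative sketch: one shared retention setting drives the purge horizon for both Teams chats and Copilot interactions, so shortening one necessarily shortens the other. The 30-day figure and the dates are hypothetical.

```python
# Purely illustrative sketch of the shared retention setting: today a single
# Teams chat retention value also governs Copilot prompts and responses, so
# both item types get the same deletion horizon. Numbers here are hypothetical.
from datetime import date, timedelta

def purge_date(created: date, retention_days: int) -> date:
    """Date on which an item becomes eligible for deletion under the policy."""
    return created + timedelta(days=retention_days)

TEAMS_CHAT_RETENTION_DAYS = 30  # hypothetical org-wide setting

item_created = date(2024, 6, 1)
# Because the settings are shared, both item types purge on the same day.
print("Teams 1:1 chat purge date:     ", purge_date(item_created, TEAMS_CHAT_RETENTION_DAYS))
print("Copilot interaction purge date:", purge_date(item_created, TEAMS_CHAT_RETENTION_DAYS))
```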
Anthony: Yeah. And John, we've heard the same thing and heard from Microsoft that they're working on it, but I haven't heard if there's a roadmap and when that's happening. But we have several clients who are monitoring this and are constantly in contact with Microsoft saying, when are we going to get that split? Because at least for a lot of our clients, they don't want the same retention. And I think, Therese, we could talk a little bit about it in terms of what people are thinking about, what to consider. Once we get to a place where we can actually apply a separate retention for Copilot, what are the factors to consider when you start thinking about what is the appropriate retention for this Copilot data?
John: And Therese, do you want them to be ephemeral where you could have a setting where they just go, they aren't captured anywhere? I'd be curious if you guys think that's something that you would want clients to consider as an option.
Therese: Well, look, I mean, the first thing all of our clients are looking at is business needs, right? As with anything. Do the artifacts from Copilot need to exist for a longer period of time for a business use? And in general, the answer has been no. There's no real reason to go back to that. There's no real reason to keep that data sitting in that folder. There's no use for it. Users aren't going back to that data; like John said, you can't even see it. So from a business perspective, the number one thing that we always consider, the answer has been no, we don't need to retain these Copilot artifacts for any business reason. The next thing we always look to is record retention, right? Is there a legal or regulatory obligation to retain these Copilot artifacts? And in most cases, when our clients look at it, the answer is no, it's not a record. It's not relied on for running the company. It doesn't currently fall under any of the regulations in terms of what a record is. It's convenience information or transitory information that may exist in the organization. So typically, at least with Copilot, we're seeing that the initial output from Copilot, the question and the response, is not considered a record. And once you get to that point, the question is, why do I need to keep it at all? Which, John, is what you're alluding to. Today, for all data types, whether it's Copilot or otherwise, over-retention presents risks. It presents privacy risks and security risks, all kinds of risks to the company. So the preference is to retain data only for as long as you need it. And if you don't need it, the question arises, do you need to keep it at all? Do you need it even for a day? Could you make it ephemeral so that it just disappears because it has served its useful life and there's no other reason to keep it? Now, we always have to consider legal holds whenever we have these conversations, because if you have a legal hold and you need to retain data going forward, you need to have a means of doing that. You need to know how to retain that data, how to preserve it for a legal hold purpose if you deem it to be relevant. So that's always the caveat. But typically, when people actually sit down and think about it, there hasn't been to date a business or records reason to retain that data in the ordinary course of business.
Anthony: And so it's a matter of how you enforce that. And, John, when we talk about ephemeral, it is retained, so ephemeral would probably be something like one day, right? It would basically be kept for one day, which raises all kinds of issues, because it's there for that day. And as we've seen with other data, whether it's Teams chats or any type of instant messaging, once it's there, and we're going to talk a little bit about preservation, it's there, right? So for one day it's there. So let's talk a little bit about the e-discovery issues, and particularly preservation, which I think is the issue a lot of people are thinking about now as they're rolling this out: can I preserve this? So, John, how do you preserve Copilot data?
John: So that's pretty straightforward, at least in one respect, which is that if you're preserving a user's Exchange Online mailbox, unless you put in some kind of condition explicitly to exclude the Copilot prompts and responses, et cetera, they're going to be preserved. So they will be part of what's being preserved along with email and chats and that type of thing. The only question, and we may get into this, Anthony and Therese, is the referenced files, the version shared setting and all of that. But as far as the prompts and responses go, those are part of the mailbox. They're going to be preserved.
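For teams that manage holds programmatically, the sketch below shows one hedged way the mailbox-level preservation John describes can be set up: adding a custodian and their Exchange Online mailbox as a source in an eDiscovery (Premium) case via Microsoft Graph. The endpoint paths, payload fields and permission names reflect our reading of the Graph security API and should be verified against current Microsoft documentation; the case ID, token and custodian address are placeholders.

```python
# Hedged sketch: adding a custodian and their Exchange Online mailbox as a data
# source to an eDiscovery (Premium) case via Microsoft Graph, which is one way
# the prompts and responses in the hidden mailbox folder end up preserved along
# with everything else in the mailbox. Endpoint paths and payloads reflect our
# reading of the Graph security API and should be checked against current docs;
# the case ID, token and custodian address are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token>"            # placeholder
CASE_ID = "<ediscovery-case-id>"    # placeholder, from an existing case
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Add the custodian to the case.
custodian = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{CASE_ID}/custodians",
    headers=HEADERS,
    json={"email": "[email protected]"},   # placeholder custodian
    timeout=30,
).json()

# 2. Attach their mailbox as a user source; the hidden Copilot folder travels
#    with the mailbox, so no separate source is needed for it.
requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{CASE_ID}/custodians/{custodian['id']}/userSources",
    headers=HEADERS,
    json={"email": "[email protected]", "includedSources": "mailbox"},
    timeout=30,
).raise_for_status()
```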
Anthony: So you're talking about a potential gap here, then. So let's just talk about that. When you're doing prompts and responses, oftentimes you're referencing a specific document, right? It could be a Word document; you're asking for a summary, for example, of a Word document, and it's going to refer to that document in the prompt and response. So what is or isn't preserved when you preserve the mailbox in that situation?
John: Well, I know we were talking about this before, but the question really is, do you have the version shared feature enabled? Because the Microsoft documentation says if you want referenced files to be preserved as part of your Copilot implementation, you have to enable version shared. But in our testing, we're seeing inconsistent results. In one of our tenants, we have version shared enabled, and that's exactly what it's doing: if you say, summarize this document or use this document to create a PowerPoint, it is treating it almost like a cloud attachment. But that's not for preservation purposes; that's at the collection stage. It goes back to the topic that I know you guys talk about a lot, which is, well, do I have to preserve every SharePoint and OneDrive where something might live that somebody referenced, right? And that's kind of the question there with the referenced files.
Anthony: Got it. So you're not necessarily preserving it, because, like a modern attachment, which we've talked about in the past, it's not preserved automatically. Although if they're looking at a document from their OneDrive and you have the OneDrive on hold, that document should be there, so when you go to collect it, you can, assuming you have the version shared setting, so it's actually linking that attachment to the Copilot data. So a lot to digest there, and it's complicated. And again, I think you point out this is a work in progress and you have to test, right? You cannot assume that, based on what we're saying, it's actually going to work that way, because we've seen the same thing: you have to test, and it often changes, because this is a work in progress and it's constantly being changed. But that's an interesting point to think through. And from a preservation standpoint, Therese, is it required to preserve it if you have Copilot data and it refers to a document? Is it similar to e-comms, where, for a Teams message, we've always said, well, you need it because it's the complete electronic communication, and therefore, for completeness, we generally say you should be producing it? With Copilot data, do we think it's going to be any different?
Therese: Look, I mean, I think it depends is the answer. And if you look out there, even when we're talking about e-discovery, when you look at the cases that are out there talking about links, right, links to attachments or links to something that's in an email or an e-comm, it's mixed. The courts have not necessarily said that in every case you have to preserve every single document for every single link. You need to preserve what is relevant. Even with production, courts have said, I'm not going to make them go back and find every single link necessarily. But if there's something that is deemed relevant and they need that link, you may have an obligation to go back and find it and make sure that you can find that document. So I don't think it's as clear-cut as you must turn on version shared to make sure that you are, quote, preserving every single referenced file in every single Copilot interaction. We certainly don't preserve every single document that's referenced in a memo, right? There's a reference to it, and maybe you have to go find that. So it's not really clean-cut. It's really a matter of looking at your Copilot setup and making some strategic decisions about how you are going to approach it and what makes the most sense, and making sure you're communicating that, right? That the structure and the setup of Copilot is coordinated with legal and the people who need to explain this to courts or to regulators and the like, and that you're educating your outside counsel so that they can represent it correctly when they're representing you in any particular case: this is how our Copilot works, these are the steps that we take to preserve, this is why, and this is how we handle that data. And I think really that's the most important thing. This is a new technology, and we're still figuring out what the best practices are. The most important thing is to be thoughtful. Understand how it functions. Understand what steps you're taking and why, so that those can be adequately communicated. I think most of the time when we see these problems popping up, it's because someone hasn't thought about it or hasn't explained it correctly, right? And that's the most important thing: understanding the technology and then understanding your approach to that technology from a litigation perspective.
Anthony: Yeah. And I think one of the challenges, and we've heard this from a lot of litigation teams as Copilot is being launched, is that it's not always accurate, right? And you could maybe make the argument that it's not relevant, because if I'm a user and I'm asking a question, and it comes back and just answers the question, and it's wrong, you don't know that. It's Copilot; it's just giving you an answer. Is it really relevant? What makes it relevant? I may be asking Copilot a question relating to the case; let's assume it relates in some way to the underlying matter. You ask a question, you get a response back. Do you really need to preserve that? I've heard litigators say, well, if someone goes to Google and does a Google search on that topic, we're not necessarily preserving that. So what do we think the arguments are that it's relevant or not relevant to a particular matter?
Therese: I mean, look, relevance is always relative to the matter, right? And I think that it's difficult with any technology to say it's never relevant, because relevance is a subject matter and a substantive determination. To say a particular technology is never relevant is a really hard position to take in any litigation, frankly. It's also very difficult to say, well, it's not reliable, so it's not relevant, because I can tell you I've seen a lot of emails that are not reliable, and they are nonetheless very relevant, right? The fact that somebody asked a certain type of prompt could, in certain litigations, be quite relevant in terms of what they were doing and how they were doing it. But I think it's also true that these artifacts are not always going to be relevant. And there's a reliability aspect that often comes in, probably less so at the preservation stage and more so at the production stage, in terms of how reliable this information is. Again, this is about understanding the technology. Does your outside counsel know whether, one, you are going to take the position that you're not going to produce it because it's not reliable? Be upfront about that, take that position, and see if it's something you can sustain in that particular case. Can you explain that it would not be relevant here and is not going to be reliable in any case? Are you going to take that position or not? Or, at the very least, if you are going to produce it, do you understand that it is inherently unreliable? A computer gave an answer to a question that may or may not be right, and that depends on a user reviewing it. If the user used it and sent it out, that's when it becomes important or valuable. But understand the value of the data, so you take appropriate positions in litigation, so that if for some reason Copilot artifacts are relevant, you can say, well, sure, that may be on a topic that is relevant to this case, but the substance of it is unreliable and meaningless in this context. One of the funny things I always think about is, we say email is relevant, but not all email is relevant. We preserve all email because we don't have a way at the preservation stage to determine which email is relevant and which is not. And I think that's true with Copilot. At the end of the day, unless you are being upfront that you're not preserving this type of data, or you can take the position, as in cases where email is not relevant at all to the case, that this data type doesn't matter, you preserve that data because you don't know which of those Copilot interactions are on a topic that could matter or be relevant. But you're thoughtful, again, down the road about your strategic positioning on whether it should be produced, or whether it has any evidentiary value in that litigation, given the nature of the data itself.
Anthony: And John, you talked a little bit about this. I know you're doing some testing, and everyone's doing some testing now on collecting and reviewing this Copilot data. What can you tell us? You've got prompts and responses. Are they connected in any way? Is there a thread if you're doing a bunch of them? How does it all work, based on what you're seeing so far?
John: Right. Well, like a lot of Microsoft, when it comes to e-discovery, some of it's going to depend on your licensing. So if you have premium eDiscovery, then for the most part what we've been seeing in our testing is that when you collect the chats or the Copilot information, if you select the threading option and the cloud attachment option, it's going to treat the Copilot back and forth largely like a Teams chat. So you'll see a conversation; it'll present as an HTML file; and it'll actually collect, as cloud attachments, the files that are referenced, if you've got that set up. So, to a large degree, in terms of determining whether things are relevant and that type of thing, you can run keyword searches against them and all of that. At this point, what we're seeing in our testing is that, for the most part, it's treating the back and forth as chat conversations similar to what you see with Teams.
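As an informal illustration of the keyword-search point John makes, the sketch below uses the Microsoft Graph Search API to run a query across the signed-in user's messages. It is not the Purview eDiscovery (Premium) collection workflow itself, only a hedged example showing that mailbox content of this kind is keyword-searchable; the token and query terms are placeholders.

```python
# Hedged sketch of an early scoping step: a keyword query against the signed-in
# user's messages via the Microsoft Graph Search API. This is not the Purview
# eDiscovery (Premium) collection workflow; the delegated token and the query
# terms below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<delegated token with Mail.Read>"   # placeholder

body = {
    "requests": [
        {
            "entityTypes": ["message"],
            "query": {"queryString": 'Copilot AND "Project Falcon"'},  # hypothetical terms
            "from": 0,
            "size": 25,
        }
    ]
}

resp = requests.post(
    f"{GRAPH}/search/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
    timeout=30,
)
resp.raise_for_status()

# Walk the searchResponse -> hitsContainers -> hits structure and print matches.
for search_response in resp.json().get("value", []):
    for container in search_response.get("hitsContainers", []):
        for hit in container.get("hits", []):
            print(hit["resource"].get("subject"), "|", hit.get("summary"))
```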
Anthony: And I'm sure there'll be lots of testing and validation around that and disputes as we go forward. But that's good to know. Okay, well, I think that that covers everything. Obviously, a lot more to come on this. And I suspect we'll be learning a lot more about Copilot and retention and discovery over the next six months or so, as it becomes more and more prevalent, and then starts coming up in litigation. So thank you, John and Therese, and hopefully you all enjoyed this and certainly welcome back. We're going to have plenty more of these podcasts on this topic in the future. Thanks.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Anthony Diana and Samantha Walsh are joined by Lighthouse’s Chris Baird as part of our series on what legal teams need to know about Microsoft 365's AI-driven productivity tool, Copilot.
This episode presents an overview of the risks relating to Copilot’s access to and use of privileged and sensitive data and how businesses can mitigate these risks, including using Microsoft 365's access control tools and user training.
In particular, the episode provides in-depth information about Microsoft 365's sensitivity labels and how they can be used to refine a business’s approach to managing risk associated with privileged and sensitive data stored in Microsoft 365.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Anthony: Hello, this is Anthony Diana, a partner here in Reed Smith's Emerging Technologies group, and welcome to Tech Law Talks and our podcast series on AI for legal departments with a focus on managing legal and regulatory risks with Microsoft Copilot that Reed Smith is presenting with Lighthouse. With me today are Sam Walsh from Reed Smith's Emerging Technologies Group and Chris Baird from Lighthouse. Welcome, guys. Just to level set, Copilot is sort of the AI tool that Microsoft has launched relatively recently to improve productivity within the Microsoft environment. There are a number of risks that we went through in a previous podcast that you have to consider, particularly legal departments, when you're launching Copilot within your organization. And let me just start to level set with Chris, if you could give a little bit of a technical background on how Copilot works.
Chris: Absolutely, Anthony. So thanks for having me. A couple of key points, because as we go through this conversation, things are going to come up around how Copilot is used. And you touched on it there: the key objective is to improve data quality and increase productivity. So we want really good data in; we want to maximize the data that we've got at our disposal, make the most of that data, make it available to Copilot. But we want to do so in a way that we're not oversharing data, we're not getting bad legacy data or stale data in, and we're not getting data from departments that maybe we shouldn't have pulled it from, right? So that's one of the key things. We all know what Copilot does. In terms of its architecture, think about it: you're in your canvas, whatever your favorite canvas is. It's Microsoft Word, it's Teams, it's PowerPoint. You're going to ask Copilot to give you some information to help you with a task, right? And the first piece of the architecture is you're going to make that request. Copilot's going to send a request into your Microsoft 365 tenant, where your data is. It's going to use APIs; it's going to hit the Graph API, and there's a whole semantic layer around that. And it's going to say, hey, I've got this guy, Chris. He wants to get access to this data. He's asking me this question. Have you got his data? And here there's an important term Microsoft uses: they call it grounding. When you make your request to Copilot, whatever you request, you're going to get data back that's grounded to you. You're not going to get data back from an open AI model, from Bing AI. You're only going to get data that's available to you. The issue with that is if you've got access to data you didn't know you had, you know, through poor governance. Maybe somebody shared a link with you two years ago; that data is going to be available to you as well. But then a few clever things happen from an architecture perspective. The Graph gives a response. It says, hey, I've got Chris's data, and it looks like this. That's going to go into the large language model, which is going to make it look beautiful and pass all that data back to you in a way you can understand. There's a final check that Copilot does at that point. It goes back to the Graph and says, I've got this response and I need to give it to the user; are there any compliance actions I need to perform on this response before I give it? And I think that's what we're going to focus on a lot today, Anthony, right? But the important thing is thinking about that grounding. And the one message I want to give to people listening is, really, don't be immediately scared and worried of Copilot. It respects a lot of the controls that are in there already. The challenge is, if you have poor access control and governance, there are things that you need to work on.
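To make the grounding flow Chris walks through a little more tangible, here is a conceptual sketch in plain Python. None of this is a Microsoft API; the functions are hypothetical stand-ins that show why the access control already sitting on the source data, plus a compliance check on the way back out, determines what Copilot can return to a given user.

```python
# Conceptual sketch only: the "grounding" flow reduced to plain Python. Nothing
# here is a Microsoft API; the functions are hypothetical stand-ins showing why
# pre-existing access control (and a compliance check on the way out) bounds
# what a Copilot-style assistant can see and return.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set[str]   # the access control already on the source item

def keyword_match(prompt: str, doc: Document) -> bool:
    # Hypothetical relevance test standing in for the real semantic index.
    return any(word.lower() in doc.text.lower() for word in prompt.split())

def ground(user: str, prompt: str, corpus: list[Document]) -> list[Document]:
    """Return only documents the requesting user can already open."""
    return [d for d in corpus if user in d.allowed_users and keyword_match(prompt, d)]

def compliance_check(response: str) -> str:
    # Hypothetical placeholder for label- or policy-based checks applied before
    # the answer is handed back to the user.
    return response

def answer(user: str, prompt: str, corpus: list[Document]) -> str:
    grounded = ground(user, prompt, corpus)
    draft = " / ".join(d.text for d in grounded) or "No accessible content found."
    return compliance_check(draft)

corpus = [
    Document("1", "Case Falcon settlement strategy", {"sam", "chris"}),
    Document("2", "Office holiday schedule", {"sam", "chris", "anthony"}),
]
print(answer("anthony", "Falcon strategy", corpus))  # gets nothing privileged back
```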
Anthony: Yeah. And I think that's one of the challenges. A lot of legal departments don't know what access controls the IT department has put in place in M365, and I think that's one of the things you have to understand, and one of the things we'll be talking about today: the importance of that. So, Sam, turning to our focus today, which is on the risks associated with privileged information, highly confidential information and sensitive information, can you give a brief description of what those risks are?
Samantha: Sure. So I think one of the risks Chris just alluded to that Copilot is going to have access to information that you have access to, whether you know it or not. And so if you have privileged information that is sort of protected by just being in a spot maybe where people don't know it's there, but it's not necessarily controlled in terms of access, that could be coming up when people are using Copilot. I think another thing is Copilot returning information to people, you lose a bit of context for the information. And when you're talking about privilege and other types of sensitivity, sometimes you need some clues to alert you to the privilege or to the sensitive nature of the information. And if you're just getting a document sort of from the ether, and you don't know, you know, where it came from, and who put it there, you know, you're obscuring that sort of sensitive nature of the document potentially.
Anthony: Yeah. And then I guess the fear there is that you don't realize that it's privileged or highly confidential and you start sharing it, which causes all kinds of issues. And I think, just generally for everyone, there are the regulators, both on the privacy side, where there's a lot of concern about using AI against personal information or highly sensitive personal information, and the SEC, which is very focused on material non-public information and how you're using AI against it. One of the things the regulators are going to be asking is, what controls do you have in place to make sure it's not being used inappropriately? So again, I think that sets the groundwork for why we think this is important and why you start setting things up. So let's talk about how you can manage the risk. One of the first things you can do, which is pretty simple, is training, right? The users have to know how to do it. So, Sam, what should they be thinking about in terms of training for this?
Samantha: I think you can train users both on the inputs and on what they're doing with the outputs from Copilot. There are certainly ways to prompt Copilot that would reduce the risk that you're going to get this information flooding in from parts unknown. And I think having clear rules about vetting Copilot responses, or limitations on just indiscriminately sharing Copilot responses, these are all things that you can train users on to try to mitigate some of the data risk.
Anthony: Yeah, no, absolutely. And I think we're also seeing people, in doing this and launching it, put in place user agreements that say the same thing, right? What are the key risks? The user agreement says, make sure you're aware of these risks, including the risks we've been talking about with sensitive information, and how to use it. Okay, so now let's switch to a more technical perspective: some things you can do within the M365 environment to protect this highly confidential or sensitive information. Let's start, Chris, with this concept, which I know is in there: when you have a SharePoint Online site, or a team site that has a SharePoint Online site, one of the things you can do is basically exclude those sites from Copilot. Could you give us a brief description of what that means, and then a little bit about the pros and cons?
Chris: Yeah, of course, Anthony. So that control, by the way, is nothing new. For anybody that's administered SharePoint, you've always had the ability to control whether a site appears in search results or not. So it is that control, right? It's excluding sites from being available via search and via Copilot, and you would do that at the SharePoint site level. So, you know, Microsoft makes that available. There are a couple of other controls, maybe one I'll mention in a second as well. These are kind of, I don't want to call it a knee-jerk reaction, I guess I just did, but they're the quick things you can do if you want to get access to Copilot quickly and you're worried about some really sensitive information. And it is a knee-jerk, right? It's a sledgehammer to crack a nut. You're going to turn off entire access to that whole site. But in reality, that site may have some real gems of data that you want to make accessible to Copilot, and you're going to miss that. The other quick win that's similar to that one involves a product called Double Key Encryption. A lot of the products I'm going to talk about today are part of the Microsoft Purview stack, and part of MIP, which is Microsoft Information Protection; we're definitely going to cover that shortly, Anthony, when we talk about labels. One thing you can do with a label is apply something called Double Key Encryption, using your own encryption key, and that means Microsoft cannot see your data. So if you know you've got pockets of data that are really secret, really sensitive, but you want to activate Copilot quickly, you've got these options. You can disable a site from being available at the search level; that's option one. The other option is at the data level: you can label it all as secret, and that data is not going to be accessible at all to Copilot. But like I say, these are really quick things that you can do that don't really fix the problem in the long term and don't help you get the best out of Copilot. The reason you're investing in Copilot is to get access to good quality data, and hiding that data is a problem.
Anthony: Yeah. And Microsoft has basically said, even though it's available, they've been pretty open that this is not the way you should be managing the risks we're talking about here, because you do lose some functionality in that SharePoint site if you take it out of search. So it's an option if you're rushing. And that's basically why they said, if you frankly aren't comfortable, you don't have all the controls in place, and you really have certain data that you want excluded, it's an option. But as you said, it's a knee-jerk, short-term option if you really have to launch; it's not a long-term solution. So now let's focus a little bit on what they think is the right way to do it, and first let's talk about the site level. I think you touched on this already: putting a sensitivity label on a site. Now, before you do that, which we can talk about, first you have to identify the site. So, Chris, why don't you talk a little bit about that, and then let's get into the technical side.
Chris: No, absolutely. So a couple of terminology things. When I talk about data classification, I'm talking about something different from applying a label. When I say data classification to a lot of my clients, they think, oh, that's confidential, highly confidential, secret. What I mean when I talk about data classification is: what is the data in its business context? What does it mean to the organization? Let's understand who the data owner is, what the risk of that data is if it falls into the wrong hands, what the obligations are around processing, handling and storing that data, and how we lifecycle it. So really simple things would be social security numbers, names, addresses, right? We're identifying data types. We can then build that up. We can move on from those simple things and do some really clever things to identify documents by their overall type, their shape, their structure. We can train machine learning models to look for specific documents: case files, legal files, customer files, client files, right? We can train these machine learning classifiers. And the great thing is, if you get a good handle on your classification, you will be able to discover and understand your data risk across your enterprise. There are tools within Microsoft 365 Purview, Content Explorer, data classification, that will give you insights into SharePoint sites in your organization that have high amounts of social security numbers, high amounts of case files, legal affairs documents, right? It's going to come back and tell you, these are the sites that have this type of information, and you can do that analysis. You don't have to go out and say, guys, you've got to put your hand up and tell us if you've got a SharePoint site with this information. The administrators, the people running Purview, can do that discovery and reach out to the business to go and discuss that SharePoint site. But Anthony, what you're talking about there is, once you've identified that SharePoint site, if we know we've got a SharePoint site that contains specific case files that are highly confidential, we can apply a highly confidential label to that site. And the label does a number of things. It visually marks the file, right? By that I mean, at a file level, from a metadata perspective, anybody interacting with that file electronically will see, front and center, on the ribbon or in a pop-up dialogue, that this file is labeled as highly confidential. I've also got options, which I'm sure we've all seen in our day-to-day work: you can mark the document itself, put a watermark across the document to say it's highly confidential, put headers and footers on. So the label isn't just this little concept; it takes things a step further. And this is where it really, really works with Copilot: you can define custom permissions at a label level. So for highly confidential labels, we might have a label for a particular case, a particular project. And if it is a case label, then we could give permissions to only the people involved in that case.
So only those people can open that file, and that means only those people can talk about that file to Copilot. You know, if you're not part of that case, Anthony, and Sam and I are, and I use that label, then when you ask Copilot to give you all the information it can about that case, you're not going to get any information back, because you don't have the permissions that are on that source file. So that's one of the first things we can do: we can take that label and apply it to a SharePoint site, and that's going to apply a default label across all the documents that are in that site. What we're really talking about here, by the way, when we talk about labels, is plugging a hole in access control and governance. Think about SharePoint management and hygiene. The issue is that SharePoint has just grown exponentially for many organizations. There's organic growth, there are SharePoint migrations, and then you have this explosion of use once you're on SharePoint Online. There are going to be public SharePoint sites that are available to everybody in your organization. There will be poor JML processes, joiner, mover and leaver processes, where people who move departments don't have their access revoked from a SharePoint site. The issue with Copilot is that if the site access control isn't strict, if it's open and the file doesn't have permissions on it, Copilot is going to be able to see that file. If it's public, it's going to be able to see that file, right? Where the label differs from site permissions is that it puts the access controls on the files in that SharePoint site directly. So if you lift those files from that site, if it is a public site and I take those files and put them in another SharePoint site or on my laptop, they carry the access control with them. And that's what's really important. It means that wherever that file goes, it's going to be hidden from Copilot if I don't have that access. So, you know, sensitivity labels are a huge part of ensuring compliance for Copilot, probably the biggest first step organizations can take. And I think you touch on the first step quite nicely, Anthony. A lot of our clients say, well, we're scared of labeling everything in the organization, going out immediately, doing all that discovery, labeling everything, right? Maybe just knock off the top SharePoint sites, the ones that you know contain the most sensitive data. Start there. Start applying those labels there.
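For organizations that want to script the labeling step Chris describes rather than rely solely on default site labels, the hedged sketch below applies a sensitivity label to a file in a SharePoint document library through Microsoft Graph. The assignSensitivityLabel action, its parameters and the required permissions reflect our reading of current Graph documentation and should be verified before use; every ID below is a placeholder.

```python
# Hedged sketch: stamping a highly confidential sensitivity label onto a file
# in a SharePoint document library via Microsoft Graph. The assignSensitivityLabel
# action and its parameters reflect our reading of current Graph documentation
# and should be verified before use; all IDs and the token are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<app token>"                      # placeholder
DRIVE_ID = "<document-library-drive-id>"          # placeholder
ITEM_ID = "<file-item-id>"                        # placeholder
LABEL_ID = "<highly-confidential-label-guid>"     # placeholder, from Purview

resp = requests.post(
    f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}/assignSensitivityLabel",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "sensitivityLabelId": LABEL_ID,
        "assignmentMethod": "standard",
        "justificationText": "Case file relabelled following classification review",
    },
    timeout=30,
)
# The call is processed asynchronously; Graph reports progress via a monitor
# URL returned in the Location header.
print(resp.status_code, resp.headers.get("Location"))
```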
Anthony: Yeah, and Sam, we've talked with some clients about using their provisioning process, attestation process, or lifecycle management to start gathering this information, because it's a big project, right? If you have thousands of sites, just figuring out which ones have that kind of data is a big lift. Obviously, Chris talked about the technical way you could do it, which would be fantastic, but there are other, low-tech ways of doing this.
Samantha: Right. Just kind of relying on your human resources to maybe take a little bit more of a manual approach to speaking up about what kind of sensitive data they have and where they're putting it.
Anthony: Which they may be doing already, right? I think that's one of the things that you have to track: an organization, you know, a specific business line, may know where their data is. They just haven't told IT to do something with it. So I think it's just gathering that information, whether through the provisioning process, an attestation, a survey or whatever, just to start. And then, as Chris said, once you have an idea of what the highly confidential information sites are, then you start doing the work. And again, I think it's applying the labels. One of the things I want to emphasize, and I want to make sure people understand this, is that with sensitivity labels it's not all or nothing. At least what I've seen, Chris, is that for each sensitivity label, right, and you could have different types of highly confidential information, maybe it's sensitive personal information, maybe material non-public information, whatever it is, privileged information, you can have different settings. So, for example, you can have it where the site is in essence read-only, right, where nobody can touch it, nobody can transfer the data, you can't copy it. That's the most extreme. But then you can have others where it's a little bit more permissive. And as you said, you can tailor it so that, you know, certain people have it, certain groups or security groups or whatever, however you want to play it. But there is some flexibility there. And I think that's where the legal departments have to really talk to the IT folks and figure out what the options are, not just for applying the sensitivity label, but for what restrictions we want to have in place.
Chris: Anthony, you're touching on a really important thing there, and I'm going to go back to what Sam talked about earlier with training and culture. But I guess, you know, the important thing is finding the balance, right? So with a sensitivity label, as an IT administrator you can define the permissions for that label. So, like I say, you could have a high level, and by the way, you can have sub-labels as well. So let's go with a common scheme that we see: public, internal, confidential, highly confidential. We've got four labels. Highly confidential could be a parent label, and when we click on that, we get a number of sub-labels. We could have sub-labels for cases; we could have sub-labels for departments. And at an administrative level, each of those labels can carry its own predefined permissions. So the administrator defines those permissions. And exactly as you say, Anthony, one of the great things about it is that it's not just who can access it, it's what they can do with it. Do not forward, block reply-to-all. You can block screen share, screen copy, save and edit; it can block all of those things. Where I say you need to find a balance is that this is going to become onerous for the administrator if every time there's a case you're going back for a new label, and you're going to end up with thousands of labels, right? So what Microsoft gives you is an option to allow the users to define the permissions themselves, and this is where it really works well for Copilot. But before I talk about what that option is, I want to go back to what Sam said about the training. One of the important things for me is really fostering a culture of data protection across the organization, making people realize the risk around their data, having frequent training, making that training fun, making it interactive if you can. At Lighthouse, our training is kind of Netflix style. There are some great coffee-shop episodes where it's fun; we get to watch these little clips. But if you make people want to protect their data, when they realize data is going to be available to Copilot now, they'll be invested in it, right? They'll want to work with you. So then when you come to do the training, Sam, you need to say, right, we're not going to use the administrator-defined labels, it's too much burden on the admin; we're going to publish this label for highly confidential that allows the users to define the permissions themselves. And that's going to pop up in Word. If you're in your favorite canvas, you're in Word, you click highly confidential, it's going to pop up and say, what permissions do you want to set on this file? If you haven't trained, if you haven't fostered that culture of information protection amongst the user community, people are going to hate it, right? People aren't going to like that. So it's so important to start to engage and discuss and train and coach and just develop that culture. But when it's developed, people love it. People want to define the permissions. They want to be prescriptive. They want to make sure that information cannot be copied and extracted and so on. And anything you do at that level, again, protects that data from being read in by Copilot. That brings it back to the whole purpose of this.
Anthony: And I would just say, again, that this all comes down to prioritization, because people say, I have 50,000 people in my organization, there's no way I'm going to train everybody. You don't. I mean, obviously some, but only certain people should have access to certain of this information, right? So you may want to train your HR people because they have a lot of the personal sensitive information, the benefits folks or whatever. You have to break it down, because I think a lot of people get caught up in, I'm never going to have 50,000 people do this. But you don't. Everyone has different things that come across their desk based on the business process they're working on. So again, it's just thinking logically about this and prioritizing, because I think people hear training and think, oh my God, I'm relying on the user and this is going to be too much. To your point, if you do it in chunks and say, okay, here's a business line that we think is really high risk, just train them on that. And like you said, it's part of their job, right? HR is not throwing compensation data everywhere in the organization. They shouldn't be, right? But if they do, they know to be sensitive about it. And now you're just giving them a tool, right? We know you want to protect this; here's the tool to do it. So again, I think this is really important. Before we end, I know, Chris, you had one more thing that you wanted to add, which was on the monitoring side, which I had not heard of. Could you just talk a little bit about that?
Chris: You know, this is the sort of really key information that you can take up to your leaders in your organization to say, look, we've got a roadmap for Copilot adoption, it's X many months or however long it's going to take, but we can implement some quick wins now that really give us visibility. So there are two products. Many of the listeners will probably know the second product that I'm about to talk about, but the first one might be new. There's a product called Communication Compliance. It's part of the Microsoft E5 or E5 Compliance or Information Protection and Governance suite. It's in Purview. Technically speaking, it's a digital surveillance product that looks at communications through Teams, through Outlook and through Viva. But what Microsoft has introduced, and this is a stroke of genius, it really is, is Copilot monitoring. So the prompts and the responses for Copilot can now be monitored by Communication Compliance. And what that means is we can create simple policies that say, if personal information, client information or case information is passed through a prompt or a response in Copilot, let us know about it. We can take it a step further: if we get the sensitivity labels in, we can use the sensitivity labels as the condition on the policy as well. So now if we start to see highly confidential information spilling over in a Copilot response, we can get an alert on that too. And for many of the listeners, that's a quick win, because your CIO or, you know, your VP is going to be saying, we need Copilot, we want to use Copilot, while your CISO and your IT folks are saying, slow down. You can go to the CISOs and say, we've got some controls, guys. It's okay. Now, the other tool, which a lot of the listeners will know about, is eDiscovery Premium. What you can do with Communication Compliance, once you're alerted, is raise a case in eDiscovery Premium to say, go and investigate that particular alert. And what that means is we can use the eDiscovery tools to do a search, a collection. We can export and download. We can look at a forensic level at what information came back in the response. And if it was data spillage, if that data came from a repository that we thought was secure, specific to some case or legal information, and now it's in the hands of a public-facing team in the organization, you can use the tools. You can use eDiscovery through the Graph API to go and delete that data, that newly created data. So two real quick wins there to think about: deploying Communication Compliance with eDiscovery.
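As a rough illustration of the Communication Compliance to eDiscovery hand-off Chris describes, here is a minimal sketch, assuming a Python environment with the requests library, that opens an eDiscovery (Premium) case and a draft search through the Microsoft Graph eDiscovery endpoints. The token, case name and search query are hypothetical placeholders, and the endpoint paths and required permissions should be checked against current Microsoft Graph documentation; adding custodial data sources, running collections and deleting spilled content are separate follow-up calls not shown here.

```python
# Minimal sketch: after a Communication Compliance alert flags a Copilot
# response containing labeled content, open an eDiscovery (Premium) case and
# a draft search via Microsoft Graph. Endpoint paths, permissions and the
# query below are assumptions to verify against current Graph documentation.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token>"  # hypothetical placeholder

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Create the investigation case.
case = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases",
    headers=headers,
    json={
        "displayName": "Copilot data spillage - CC alert 1234",
        "description": "Investigate labeled content surfaced in a Copilot response",
    },
)
case.raise_for_status()
case_id = case.json()["id"]

# 2. Add a draft search; the query here is illustrative only. Custodial data
#    sources (e.g., the affected mailbox) would be added with further calls.
search = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{case_id}/searches",
    headers=headers,
    json={
        "displayName": "Copilot prompts and responses",
        "contentQuery": 'subject:"Copilot"',
    },
)
search.raise_for_status()
print("Created search", search.json().get("id"), "in case", case_id)
```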
Anthony: That's fantastic. Well, thanks, everybody. This was really helpful. We're going to have additional podcasts. We'll probably talk about e-discovery and retention alike in our next one. But thank you, Chris and Sam. This was highly informative. And thanks to our listeners. Welcome back. We hope you keep listening to our podcast. Thanks.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation.
All rights reserved.
Transcript is auto-generated.
Anthony Diana and Karim Alhassan are joined by Lighthouse’s John Collins to discuss Microsoft's AI-driven productivity tool, Copilot.
This episode presents an overview of Copilot and its various use cases, the technical nuances and details differentiating it from other generative AI tools, the identified compliance gaps and challenges currently seen in production, and the various risks legal departments should be aware of and account for.
This episode also provides a high-level overview of best practices that legal and business teams should consider as they continue to explore, pilot and roll out Copilot, including enhanced access controls, testing and user-training, which our speakers will further expand upon in future episodes.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Anthony: Hello, this is Anthony Diana, a partner here in the Emerging Technologies Group at Reed Smith. Welcome to Tech Law Talks. Today will be the first part of a series with Lighthouse focusing on what legal departments need to know about Copilot, Microsoft's new artificial intelligence tool. With me today are John Collins from Lighthouse and Karim Alhassan from Reed Smith. Thanks, guys, for joining. So today, we really just want to give legal departments, at a very high level, what they need to know about Copilot. As we know, Copilot was just introduced, I guess, last fall, maybe November of last year, by Microsoft. It has been in preview and the like, and now a lot of organizations are at least contemplating the use of Copilot, or some of them, I've heard, have already launched Copilot without the legal departments knowing, which is an issue in and of itself. So today, we just want to give a high-level view of what Copilot is and what it does. And then, what are the things that legal departments should be thinking about in terms of risks they have to manage when launching Copilot? This will be a high-level episode; the additional episodes in the series will be a little bit more practical in terms of what legal departments should actually be doing. So today is just about highlighting what the risks are and what you should be thinking about. So with that, John, I'll start with you. Can you just give a very high-level overview of what Copilot is and what's being launched?
John: Sure, thanks, Anthony, for having me. So Copilot for M365, which is what we're talking about, is Microsoft's flagship generative AI product. And the best way to think about it is that Microsoft, which has a partnership with OpenAI, has taken the ubiquitous ChatGPT and brought it into the Microsoft ecosystem. They've integrated it with all the different Microsoft applications that business people use, like Word, Excel, PowerPoint and Teams. And you can ask Copilot to draft a document for you or to summarize a document for you. So again, the best way to think about it is that it's taking that generative AI technology that everyone is familiar with from ChatGPT, bringing it into the Microsoft ecosystem, and leveraging a number of other Microsoft technologies within the Microsoft environment to make this kind of platform available to the business people.
Anthony: Yeah. And I think, you know, from at least what Microsoft is saying, and what a lot of our clients are saying, this is groundbreaking. Frankly, it's probably going to be the largest and most influential AI tool the enterprise has, because Microsoft is ubiquitous, right? Like, all your data is flowing through there. So using AI in this way should provide tons of productivity. Obviously, that's the big sell. But if organizations get licenses for everybody, this is something that's going to impact most organizations pretty significantly, just because if you're using Microsoft M365, you're going to be dealing with AI, you know, sort of on a personal level and at large scale. And I think that's one of the challenges we'll see. So, Karim, could you expand a little bit? John gave a very nice overview, and in terms of a few things he said, we've got this ChatGPT. What is it that's unique about Microsoft's tool in terms of how it works from a technology perspective? Because I know a lot of people are saying, I don't want people using ChatGPT for work.
Karim: Sure, thanks, Anthony. You know, as opposed to these publicly available, web-based ChatGPT tools, I think the big sell and what makes Copilot unique is that it's grounded in your enterprise data, right? And so essentially it integrates with the Microsoft Graph, which allows users within the enterprise to leverage their M365 data, which adds context. And so rather than just going to GPT-4, which, as everyone knows, is trained on publicly available data and has its own issues, you know, hallucinations and whatnot, having this unique, enterprise-specific data adding context to inputs and outputs leads to more efficiency. You know, another big thing, and a big issue that legal departments are obviously thinking about, is that when you input data into a tool, one of the worries is that it can train the underlying models. With Copilot, that's not happening. The instance of the LLM on which the tool relies is within your service boundary. And so that gives you protections that you wouldn't necessarily have when people are just using these publicly available tools. And so that's, I think, the big differentiating factor with Copilot as opposed to GPT-4, ChatGPT and these other public tools.
Anthony: And I think that's critical, and John, obviously I'll let you expand on that too, but I do think that's a critical piece, because I know a lot of people are uncomfortable using these large language models when, like you said, they're public. The way Microsoft built this product is you get, in essence, your own version. So if you get a license, you're getting your own version of it, and it's inside your tenant. So it doesn't go outside your firewalls, so to speak; it's not technically a firewall, but it's in your tenant. And I think that's critical, and I think that gives a lot of people comfort. At least, that's what Microsoft is saying.
John: Yeah, just a couple of things to point out. Some folks might be familiar with, or have heard, that Microsoft has this responsible AI framework where, if you are using Azure OpenAI tools and you're building your own custom ChatGPT on the Azure OpenAI version of the models, Microsoft is actually retaining prompts and responses under that framework, and a human being is monitoring those prompts and responses. But that's in the context of a custom development that an organization might do. Copilot for M365 is actually opted out of that. And so, to Karim's and Anthony's point, Microsoft is not retaining the prompts or the responses for Copilot, and they're not using the data to retrain the model. So there's that whole thing. The second thing I just want to point out is that you do have the ability with Copilot for M365 to have plugins, and one of the plugins is that when you're chatting in Microsoft Teams using the chat application, you have the option to actually send prompts out to the internet to further ground the response. Karim talked about grounding information in your organization's data. So there are some considerations around configuration. Do you want to allow that to happen? You know, there's still data protection there. But those are a couple of things that come to mind on this topic.
Anthony: Yeah, and look, I think this is critical, and I agree with you. There is, I won't say danger, but there are a lot of risks associated with Copilot. And I think, as you said, you really have to go through the settings, because that web option is one we've been advising clients, at least for now, to turn off. And just to give some clarity here, we're talking about prompts and responses. A prompt is, in essence, a question, right? It could be "summarize the document," or you can type in, you know, "give me a recipe for blueberry muffins." You can type anything. It's a question you ask through Copilot. When we talk about grounding, I think this is an important concept for people to understand. When you're grounding on a document, so for example, you want to summarize a document, right, we have a transcript of a meeting and say, okay, I want to summarize it, my understanding of the way it works is that when you press "summarize the document," what you're really doing is telling the tool to look at the document and use the language in that document to create, I'll call it a better prompt, a question that has more context. Then that goes to the large language model, which again is inside the tenant, and that will basically give you an answer. But it's really just predicting the likely next word, is the best way to describe it. It's about probability and the like, so it doesn't know anything. When you're grounding it in a document or your own enterprise data, all you're doing is basically asking better questions of this large language model. That's my understanding of it. I know people have different descriptions, but I think that's the best way to think about it. And then, again, this is where we start talking about confidentiality, why people are a little concerned about it going public, is that those questions are going to take potentially confidential information and send it to an outside model if it weren't Copilot. And this is true with a lot of AI tools: you may not have control, like John said, over who's looking at it. Are they storing it? How long are they storing it? Is it secure? All that stuff. That's the type of thing we normally worry about. However, here, because of the way Copilot is built, some of those concerns are less. Although, as you pointed out, John, there are features where you can go to the internet, which could cause those same concerns. Any other flavor, Karim or John, that you want to add to my description of how it works?
John: Yeah, no, I think you gave a great description of the grounding. Karim had brought up the Graph. So the Graph is something that underlies your Microsoft 365 tenant. And Karim alluded to this earlier: essentially, when the business people in your organization are communicating back and forth, sharing documents as links, chatting in Microsoft Teams, sending emails, the Graph is a database that essentially collects all of that activity. You know, it knows who you're sharing documents with, and that becomes one of the ways that Microsoft surfaces information for the grounding that Anthony alluded to. So how does Copilot know to look at a particular document, or know that a document exists on a particular topic, about blueberry muffins or whatever? In part, that's based on the Graph. And that's something that can scare a lot of people, because it tends to show that documents are being overshared, or that people are sharing information in a way that they shouldn't be. So that's another issue, I think, Anthony, Karim, that we're seeing people talk about.
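The following is a conceptual sketch only, not Microsoft's actual implementation, meant to illustrate the grounding flow Anthony and John describe: retrieve content the user can already access (surfaced with the help of Graph signals), fold it into an enriched prompt, and only then call the language model inside the tenant boundary. Every function name here is hypothetical.

```python
# Conceptual sketch only -- not Microsoft's implementation. It illustrates the
# grounding flow described above: the user's prompt is enriched with enterprise
# content that user can already access (ranked with the help of Graph signals),
# and only then sent to the LLM inside the tenant's service boundary.
# All function names below are hypothetical.

def retrieve_accessible_content(user: str, prompt: str) -> list[str]:
    """Return snippets of documents, chats and emails the user has permission
    to read and that look relevant to the prompt (e.g., ranked by signals such
    as recent sharing and collaboration in the Graph)."""
    return []  # stubbed out for illustration

def call_tenant_llm(enriched_prompt: str) -> str:
    """Send the enriched prompt to the model instance inside the tenant
    boundary and return the generated text."""
    return ""  # stubbed out for illustration

def copilot_style_answer(user: str, prompt: str) -> str:
    snippets = retrieve_accessible_content(user, prompt)
    # "Grounding" = building a better, more contextual question for the model.
    enriched_prompt = (
        "Answer using only the context below.\n\n"
        + "\n\n".join(snippets)
        + f"\n\nUser question: {prompt}"
    )
    # The model predicts likely next words; it has no knowledge of its own,
    # which is why the quality of the retrieved, permission-trimmed context
    # matters so much.
    return call_tenant_llm(enriched_prompt)
```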
Anthony: Yeah. And that is a key point. I mean, the way that Microsoft explains it is that Copilot is very individualistic, I'll say. So when we're talking about all the information that could be grounded on, it's based on the information that someone has access to. And I think, John, this is the point you were making: as we start going through these risks, one of the big challenges a lot of organizations are now seeing is that Copilot is exposing, as you noted, bad access controls, is the best way to describe it, right? For a lot of people in M365 there's not a focus on it. So because people may have more access than they need, when you're using Copilot it really does expose that. I think that's probably one of the biggest risks that we're seeing and one of the biggest challenges we're talking to our clients about. Because Copilot is limited to the access that the person has, there's a presumption that the person only has access to what they should have. And I think we all know that's often not the case. That's one of the big dangers, and we'll talk in future episodes about how to deal with that specific risk. But again, as we've discussed, that is one of the big risks that legal departments have to think about. You should be talking to your IT folks and asking what access controls are in place, because that is one of the big issues people have. So to highlight it: you have highly confidential information within your organization. If people are using Copilot, that could be a bad thing, because suddenly they can ask questions and get answers based on highly confidential information that they shouldn't be getting. So it is one of the big challenges that we have. One thing I want to talk about, Karim, before we get a little bit more into the risks, is the product-level differences. We talk about Copilot as if it's one product, but I think, as we've heard and seen, there are different flavors, I guess, of Copilot.
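One low-effort way to act on Anthony's point about oversharing is to inventory broad sharing links before a Copilot rollout. Here is a minimal sketch, assuming a Python environment with the requests library, that flags files in a drive whose sharing links are scoped to the whole organization or to anyone with the link. The token and drive ID are hypothetical placeholders, and the permissions endpoint and link.scope values should be confirmed against current Microsoft Graph documentation.

```python
# Minimal sketch: flag files in a drive whose sharing links are scoped to the
# whole organization or to anyone with the link -- the kind of oversharing
# that Copilot will happily read. Token and drive ID are hypothetical
# placeholders; verify the permissions endpoint and link.scope values against
# current Graph documentation.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token>"  # hypothetical placeholder
DRIVE_ID = "<drive id>"   # hypothetical placeholder

headers = {"Authorization": f"Bearer {TOKEN}"}

items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=headers)
items.raise_for_status()

for item in items.json().get("value", []):
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions", headers=headers
    )
    perms.raise_for_status()
    for perm in perms.json().get("value", []):
        link = perm.get("link") or {}
        if link.get("scope") in ("organization", "anonymous"):
            # Broadly shared: anything these users can reach, Copilot can surface.
            print(f"Review sharing on {item.get('name')} (scope={link['scope']})")
```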
Karim: Sure. Yeah, so as Anthony noted, Copilot is an embedded tool within the M365 suite. And so you're going to have Copilot for Word, Copilot for PowerPoint, and so on. There are different tools, and there's different functionality within whatever product you're working in. And that, of course, is going to affect the artifacts that are created and some of the prompts that you're able to use and leverage. And so it's not as simple as just thinking of Copilot as this unified product, because there are going to be different configurations. And I'm sure Anthony will speak to this: we've noted that even some of the configurations around where these things are stored have certain gaps, depending on whether you're using Outlook, for example. And so you really have to dig into these product-specific configurations, because the risks do vary. And just to add, and John kind of pointed to this, there is one version of Copilot, which is Microsoft 365 Chat, I believe it's called, and that is probably the most powerful from a product perspective, because it can leverage data across the user's personal Graph. And so, again, bigger risks may be there than if you were looking at just, say, Excel. So product-specific functionalities definitely change, and so food for thought on that point.
Anthony: And John, I don't know if you've done, if Lighthouse has done, testing on each of these different applications, but what we've seen our clients do is actually test each application, you know, Copilot for Word, Copilot for Excel, Copilot for Teams and so on, because, as Karim said, we have seen issues with each. I don't know if you've seen anything specific that you want to raise.
John: Yeah, well, I think you guys bring up some really good points. Like, for example, in Outlook the prompts actually don't get captured as an artifact, versus in Word and Excel and PowerPoint they do. But then we're also seeing some interesting things in our testing, and we're doing ongoing testing. When it comes to meeting summaries, there's a difference between recapping a meeting that only has a transcript versus a meeting that has a full recording, and in the artifacts that get generated between the different types of meeting summaries, whether it's a recap or a full version of what they call AI notes. So we're seeing that some of the meeting recaps aren't being captured 100% verbatim. There's a lot of variability there. I think the key thing is, as you pointed out, Anthony, you've got to do testing. You've got to test, you've got to get comfortable that you know what artifacts are being generated and where they're stored, and whether you can preserve and collect all of those things.
Anthony: Yeah. And I think one of the things, and I'm going to talk a little bit about this generally, and John, I'm sure you're seeing the same thing, is prompts and responses. At least, Microsoft is saying prompts and responses are generally stored, across all these applications, in a hidden folder in a person's Outlook, so their mailbox. As we noted, and Karim noted, for whatever reason Outlook's own prompts and responses aren't being saved there, although I'm told that fix is coming, maybe even this week. So there was a gap. Obviously, the product came out in November, so there are still gaps, and as everyone does testing, Lighthouse is doing testing, and I'm sure when you talk to Microsoft and note these gaps, they're filling them in. It is a new product. So that's one of the risks everyone has to deal with: anybody telling you this is the way it works may not be absolutely correct, because you have to test, and you certainly can't rely on the documentation that Microsoft has, because they're improving the product probably faster than they're updating their documentation. So that's another challenge that we've seen. Test, test, test is really important. So, some things to think about. Okay, we're almost out of time, so we won't get too deep into the risks, but I think we've already talked at a high level about some of them. As we've discussed, there are the confidentiality and access issues you should be thinking about. Retention and over-retention we'll get into later, but there is obviously a risk. A lot of litigation teams are saying, I don't want to keep this around. People are asking these questions and getting responses that may not be accurate. From a litigation perspective, they don't want it around, and there's no business case for it, no business use for it, because usually when you're doing this, you know, you're asking it to help draft an email or whatever, you're doing something with that data, even meeting transcripts. If you get a recap, some people are copying and pasting it somewhere just to keep it. But generally, the output of Copilot is usually short term. So that's generally the approach I think most people are taking: get rid of it. But it's not easy to do, because it's Microsoft. And so that's one of the risks. And as we talked about, there are going to be the discovery risks, John, right? Like we were talking about, can we preserve it? There may be instances where you can or can't, and that's where the Outlook issue comes in. As I think Karim noted, hallucinations, or accuracy, is a huge risk. It should be better, right, as Karim said, because it's grounded in your data; it should be better than just the general models. But there are still risks, and there's a reputational risk associated with it. If someone relies on a summary of a meeting and doesn't look at it carefully, that's obviously going to be a risk, particularly if they circulate that this was said. So a lot of things to think about. At a high level, Karim and John, let's conclude. What is one risk that you're seeing that you think legal departments should be thinking about, other than what I just said?
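In the spirit of "test, test, test," here is a minimal sketch, assuming a Python environment with the requests library, that enumerates a custodian's mail folders, including hidden ones, so a team can verify for itself where, and whether, Copilot prompt and response artifacts are landing for each application. The token and mailbox address are hypothetical placeholders, and the includeHiddenFolders parameter and required mailbox permissions should be confirmed against current Microsoft Graph documentation; what actually appears there varies by app and changes as Microsoft updates the product, which is exactly why testing matters.

```python
# Minimal sketch: list a custodian's mail folders, including hidden ones, to
# check where (and whether) Copilot prompt/response artifacts are being stored.
# Token and mailbox are hypothetical placeholders; confirm the
# includeHiddenFolders parameter and mailbox permissions against Graph docs.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token>"        # hypothetical placeholder
USER = "custodian@contoso.com"  # hypothetical placeholder

headers = {"Authorization": f"Bearer {TOKEN}"}

resp = requests.get(
    f"{GRAPH}/users/{USER}/mailFolders",
    headers=headers,
    params={"includeHiddenFolders": "true", "$top": "200"},
)
resp.raise_for_status()

for folder in resp.json().get("value", []):
    # Repeat the same Copilot prompts in Word, Excel, Teams and Outlook, then
    # re-run this listing to see which folders' item counts change.
    print(
        f"{folder.get('displayName', ''):40} "
        f"items={folder.get('totalItemCount')} hidden={folder.get('isHidden')}"
    )
```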
John: Yeah. Well, I think one of the things that legal departments seem to be struggling with, and I'm sure you guys are hearing this, is: what about the content itself being tagged or labeled as created by generative AI? There are actually some areas in M365, like meeting summaries, that will say "generated by AI." But then there are other areas, like a Word document, where the person creates a document, does a very light-touch review of it, and the document is essentially 99% generative AI. A lot of companies are asking, well, can we get metadata or some kind of way to mark something as having been created by AI? So that's one of the things, in addition to all the issues you highlighted, Anthony, that we're hearing companies bring up.
Anthony: Yeah, Karim, anything anything else from, from your perspective?
Karim: Sure. Yeah, I think one big issue, and I'm sure we'll talk about this in future episodes, is that with this emergence of AI functionality, there's a tendency on the part of users to rely heavily on the outputs. And I know a lot of people talk about the human in the loop. You know, I heard somebody say these are called copilots for a reason, not autopilots. And so the training aspect really needs to be ingrained, because when people do sit back and rely on these outputs as if they're foolproof, you can run into operational and reputational risks. And so I think that's one thing we're seeing: the training is going to be integral.
Anthony: Yeah, and the other thing I'll finish with, just to think about, and this is really important for legal departments, is privilege, right? If you're using Copilot and it's using privileged information, and I've heard a number of GCs say my biggest concern is waiver of privilege, you may not know that you're running Copilot against privileged information. It may produce a summary or an answer which is basically privileged, but you don't know it, and then it gets circulated broadly and so on. So again, there's a lot to consider. As we've talked about, it's really about training, access controls and really understanding the issues. And like I said, in future episodes with Lighthouse, we'll be talking about some of the risks more specifically, and then what you can do to mitigate those risks, because I think that's really what this is going to be about: mitigating risks and understanding the issues. So, thanks to John and Karim. I hope this was helpful. Like I said, we'll have future podcasts, but thanks, everyone, for joining, and hopefully you'll listen soon.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcast on Spotify, Apple Podcasts, Google podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Partners Andy Splittgerber, and Christian Leuthner shed light on recent developments in the use of cloud services and EU data protection issues. This includes a summary of and comments on recent publications by the DSK (the German data protection authorities) as well as recent perspectives on data processors’ reuse of data. Andy and Christian conclude by giving some practical tips for both providers and customers.
2023 has been the year for the tokenization of real-world assets. The theme has been all-pervasive, with businesses across the world seeking to tokenize a whole spectrum of assets ranging from real estate to music, film, art, loans, invoices and credit. In this episode, Reed Smith’s Hagen Rooke (Singapore) and Soham Panchamiya (Dubai) are joined by Wayne Tan, general counsel at OpenEden, a platform that has become known for pioneering the tokenization of Treasury Bills. The conversation seeks to unpack the concept of tokenization and growing interest in this area, surveys some of the key activities in the real-world assets space, and contemplates the attendant legal and regulatory challenges.
This episode was recorded on 23 October 2023.
Anthony Diana joins Daniel Broderick, co-founder and CEO of BlackBoiler, to discuss how AI is already being used by legal departments to manage contracts, the requirements for making use of AI for contract management, and how future developments in AI will impact contract management.
Does your game have a token and an NFT marketplace? What about a DAO for your hard-core players? It doesn’t matter if you’re AAA or casual mobile gaming – a token component or staking functionality is likely to involve regulation of some sort. Join Reed Smith’s Hagen Rooke (Singapore) and Soham Panchamiya (Dubai) and Victoria Wells, general counsel of Illuvium DAO (Australia’s largest Web3 gaming DAO), to understand what the regulations say about gaming, play-to-earn, NFT marketplaces, and some unrelated musings we’ve managed to pack into this podcast.
Link referenced at 28:30 can be found at polygon.com.
This episode was recorded on 23 October 2023.
Reed Smith’s Hagen Rooke (Singapore) and Soham Panchamiya (Dubai) are joined by Samson Leo, co-founder and Chief Legal Officer of Fazz, to discuss stablecoin regulation in various jurisdictions, including Singapore and the UAE. How will these regulations affect stablecoin issuers and projects? What do they signal about the state of play for stablecoins in the future? Also, there is general debate around the regulatory framework affecting crypto in this increasingly maturing landscape.
This episode was recorded on 9 October 2023.
To deal with the increased risks surrounding regulatory inquiry, privacy compliance, and cybersecurity, it is vital that private equity firms and their portfolio companies focus on implementing internal policies and procedures to better establish healthy privacy and security hygiene. Please join Anthony Diana, Catherine Castaldo, Sheek Shah and Gary Barnabo to discuss why establishing policies should matter for private equity firms and their portfolio companies and which policies should be prioritized so the private equity firm can more effectively protect itself and its investment.
Reed Smith’s Hagen Rooke (Singapore) and Soham Panchamiya (Dubai) share their insights from Token2049, a premier crypto event. In this podcast, they provide a comprehensive overview of the event, highlighting their key takeaways and painting a picture of the event’s dynamic atmosphere, while also discussing essential topics such as the rise in tech infrastructure projects, maturing global regulations … and the importance of free drinks. Tune in to gain valuable insights into the rapidly evolving world of cryptocurrency and Web3.
In today’s day and age, no company is immune from the risks posed by data breaches, and that includes private equity firms and their portfolio companies. Fortunately, there are practices that can be implemented to minimize the risks and effects of a breach if one occurs – practices that protect the value of the portfolio companies. Please join Anthony Diana, Catherine Castaldo, Sheek Shah and Gary Barnabo to discuss why and how tabletop exercises with the private equity firm and its portfolio companies can help the private equity firm more effectively protect itself and its investment.
Technology and data risks at portfolio companies directly impact the value of these companies as these risks are becoming a major focus of due diligence when the companies are sold, often leading to a reduction in purchase price, indemnifications, or, in some instances, killing the deal. Whether you’re faced with a data breach that damages brand reputation or discloses highly valuable, confidential information or with privacy policies that do not allow for the sale of customer information, the importance of portfolio companies proactively managing these risks is paramount and can be done without significant investment.
Gary Barnabo, Anthony Diana, Catherine Castaldo and Sheek Shah discuss these issues at a high level, introducing the concept and offering some steps that private equity companies can take to manage the risks, with future podcasts providing more specific guidance.
AI is all the rage, and employees may be using AI without their employers understanding, or putting in place a policy to lessen, the risks of using this new technology. Find out about the risks and legal issues to consider and why it’s important to have a policy in place that governs the use of AI by employees.
Reed Smith Partners Cynthia O’Donoghue and Andreas Splittgerber delve into the recent developments surrounding the EU-U.S. Data Privacy Framework and discuss other data transfer mechanisms.
In the latest eComms episode with Smarsh regarding managing the risks associated with electronic communications (eComms) for financial services firms, Anthony Diana and Therese Craparo are joined by Tiffany Magri from Smarsh. Together, they discuss the messaging from regulators regarding compliance with the eComms regulations, considering the recent guidance, fines and public statements on eComms compliance. The discussion will cover the challenges the financial industry faces in attempting to interpret and comply with regulators’ expectations from both a practical and technological perspective.
A regulator has just sent you an inquiry or a subpoena regarding unauthorized use by one of your employees of text messaging or a third-party electronic communications application. What steps should you take to ensure you can respond effectively to the inquiry? This episode of our eComms series with Smarsh sees Anthony Diana, Kiran Somashekara, and John Lukanski accompany Tiffany Magri from Smarsh to answer this question and more. They discuss what the regulators’ expectations are for the inquiry or subpoena and the necessary actions all firms should be taking to minimize their risk.
Partners Andy Splittgerber, Michaela Westrup, and Alexander Klett discuss the legal risks and challenges associated with the use and operation of ChatGPT in Europe, and Germany in particular. They cover potential issues in the areas of data protection, copyright, anti-trust, and competition.
Join Reed Smith Tech & Data partners Cynthia O’Donoghue, Andreas Splittgerber, and Barbara Li as they discuss the legal updates and changes relating to cross-border transfers of personal data from an EU, UK, and Chinese perspective. In this episode, participants dive into the latest developments and legal issues and explain best practices that organizations must consider when transferring personal data to or from these regions.
As online disinformation becomes more prevalent, partner Anthony Diana and founder/CEO of AREDA Ventures Scott Mortman discuss the rise of disinformation, explain how fake users are now able to spread disinformation more efficiently, and explore the risks that disinformation poses to companies and their brands. They also discuss emerging trends in disinformation, including the rise of deep fake technology and its future impact on global business.
As the metaverse continues to evolve, partner Christine Morgan and associate Hallie Wimberly explore the origins of the metaverse, discuss early examples of virtual reality innovations and metaverse environments, and outline the differences between current centralized and decentralized metaverse platforms. They also discuss examples of recent metaverse patents issued by the USPTO and the opportunities and risks associated with enforcing such patents.
Bad actors are constantly looking for ways to infiltrate financial institutions for personal gain. Whether it be customers, employees, or third parties, every institution needs to be aware of the potential risks associated with bad actors and how to appropriately identify and investigate these risks. This episode of our eComms series with Smarsh sees Kiran Somashekara, John Lukanski, and Kile Marks join Tiffany Magri from Smarsh, and together, they dive in and discuss who exactly can be a bad actor, what tools to use and red flags to look for to find bad actors, and the obligation to investigate potential bad actors as they appear.
In this third and last episode of the podcast series, Reed Smith Paris partners Marianne Schaffner and Thierry Lautier focus on what companies should do with regard to the European patents of their competitors, and they reveal strategic tips to help mitigate the risks of injunction and damages in the “UPC zone”.
With the UPC expected to enter into force on April 1, 2023, Reed Smith Paris partners Marianne Schaffner and Thierry Lautier focus this podcast on companies who have European patents and reveal strategies to adopt when enforcing European patents before the UPC or the national courts.
The long-awaited Unified Patent Court (UPC) and the European Patent with Unitary Effect (UP) should significantly change the European patent landscape. Reed Smith Paris partners Marianne Schaffner and Thierry Lautier provide an overview and the main changes induced by the UPC and the UP.
In the latest episode of our eComms series with Smarsh, partners Anthony Diana and Therese Craparo are joined by Blane Warrene of Smarsh to address the use of collaborative tools by financial institutions. The discussion focuses on different approaches institutions take when using collaborative tools, the questions every firm should ask prior to adopting such tools, and whether and how regulations govern their use.
Our eComms series with Smarsh continues as Greg Breeze joins attorneys Anthony Diana, John Lukanski, and Kiran Somashekara to discuss supervision for eComms. They address what must be supervised and surveilled, as well as trends and challenges in the industry.
Regulations impose on financial institutions a strenuous set of capture and storage requirements for electronic communications. Several of these regulations are over half a century old and do not contemplate the ever-accelerating shift to new and innovative forms of eComms. The solution: focusing on content, maintaining current controls, and developing and properly enforcing policies.
In our latest podcast, Anthony Diana, John Lukanski, and Kiran Somashekara join Robert Cruz from Smarsh to discuss electronic communications capture and storage requirements and how they impact financial institutions.
Join Angie Matney and Anthony Diana from Reed Smith, James Hart from Lighthouse, and Emily Dimond from PNC as they discuss M365’s vision-specific accessibility tools. They explore the types of tools offered, the technical and legal implications of their use, and a framework that businesses can employ to address the various associated risks and considerations.
Cynthia O’Donoghue and Aselle Ibraimova from Reed Smith’s London Office discuss changes in the EU/UK to the standard contractual clauses for data transfers between EU/UK and non-EU/non-UK countries. The two explore the new interpretations of the rules on data transfers by the EU Data Protection authorities, the impact on day-to-day compliance, the introduction of data transfer tools by other countries around the world, and the adjustments that businesses need to make as a result of these changes.
Financial regulators are increasingly concerned with the enforcement of electronic communication requirements, making compliance a top priority for financial institutions. However, compliance has become greatly complicated by the many changing and emerging technologies as well as an amorphous definition of what an eComm actually is.
Anthony Diana and Therese Craparo join Robert Cruz from Smarsh to discuss what electronic communications are and how they impact financial institutions.
Companies are increasingly looking for technological solutions that are accessible to their employees and customers with disabilities, including within M365. Angie Matney and Anthony Diana from Reed Smith, James Hart from Lighthouse, and Emily Dimond from PNC discuss accessibility laws and regulations, the general approach to accessibility in M365, and related privacy, security records and eDiscovery considerations.
Christine Morgan, whom The Daily Journal recognizes as “one of the country’s leading experts on what is, and what is not, an abstract idea” under the Alice defense, joins Hallie Wimberly to discuss how to successfully navigate the complex issues related to Section 101 of the U.S. Patent Code, including how to protect clients from expensive patent litigation on abstract patents by invalidating them early on in litigation.
In Q1 2022, the UK’s Information Commissioner’s Office (ICO) issued 26 enforcement actions. Reed Smith associates Angelika Bialowas and Aselle Ibraimova compare these with enforcement actions from last year and share their predictions of what to expect in the future.
Anthony Diana and Samantha Walsh join Lighthouse’s Matthew Newington to discuss the complex eDiscovery considerations of Teams, including challenges with data preservation and collection, determining whether to treat Teams channels as a custodial or non-custodial data source, identifying “relevant” channels, and preserving and collecting third party application data.
Anthony Diana and Kiriaki Tourikis join Lighthouse’s Matthew Newington and Justin Marsh to discuss the features and eDiscovery challenges with SharePoint Online, including potential downstream issues with versioning, limitations on preservation hold library collection, as well as tips on proactive preservation.
Catherine Castaldo and Christine Gartland discuss the recent National Institute of Standards and Technology (NIST) guidance on practices for software supply chain security and how it can be applied to private businesses and their respective software supply chains and cybersecurity practices.
Lighthouse’s John Holliday joins Reed Smith’s Anthony Diana and Samantha Walsh to discuss the unique features/functions of OneDrive for Business that distinguish it from a traditional home or personal drive, new artifacts stored in OneDrive for Business, and the new eDiscovery challenges that those differences create.
Andreas Splittgerber and Aselle Ibraimova discuss the recent decisions by EU data protection authorities and courts in the EU that are affecting the use of analytics cookies on websites. They explore why website operators are being told to stop the use of analytics cookies and what can be done to overcome the issues.
Reed Smith lawyers Sarah Bruno, Cynthia O'Donoghue, and Jordan Tanoury discuss issues that may arise when onboarding and negotiating with artificial intelligence providers, and why it is possible to negotiate mutually beneficial contracts.
Anthony Diana and Kile Marks join John Holliday from Lighthouse for a discussion about the downstream eDiscovery considerations of Teams Chats. Topics include expanded Chat functionality (memes, giphy’s, etc.), challenges associated with threading of Chats, and how persistence of Chats threads affects review and production of relevant content.
Anthony Diana and Therese Craparo join John Collins from Lighthouse to dive into the issues and eDiscovery challenges related to Teams A/V, such as controls of audio/video conferencing capabilities, identification of attendees, recordings, whiteboards, shared notes and all the various features available today or beyond.
Lighthouse’s Damian Murphy joins Reed Smith’s Anthony Diana and TJ Satnick to discuss the differences between an Exchange mailbox and a new Exchange Online mailbox, new artifacts stored in Exchange Online, and new eDiscovery challenges that those differences create.
The Metaverse poses cutting-edge issues for IP owners and users that do not fit neatly into traditional legal principles developed for the pre-metaverse world. Partner Christine Morgan and associate DJ Cespedes explore these challenges and how the explosion of new rights will present opportunities and challenges for owners and users of IP in the Metaverse.
Reed Smith’s Anthony Diana and Catherine Castaldo join Damian Murphy of Lighthouse to discuss the importance of privacy in Teams sites/channels, best practices for implementation, considerations for geographic location of Teams sites/channels data, and options for barriers to create privacy-compliant Teams sites/channels.
Lighthouse’s John Collins joins Reed Smith’s Anthony Diana and TJ Satnick to discuss applying lifecycle management of Teams sites and channels for improved governance. They also share best practices for establishing the framework for certification, archiving and deletion.
Join Reed Smith’s Sarah Bruno and Jason Gordon for a discussion on non-fungible tokens and an overview of the use and sale of NFTs. They describe how NFTs generate legal considerations in areas such as intellectual property, copyright, privacy, post-mortem rights and right of publicity.
Reed Smith’s Anthony Diana and Therese Craparo, with special guest John Collins of Lighthouse, share practical considerations on provisioning and ownership of Teams sites and channels to achieve improved governance and risk management.
This session is one of our M365 in 5 series, which discusses compliance and governance in M365.
It has been a busy few weeks in the EU for all things data protection, particularly data transfers. Cynthia O’Donoghue and Andy Splittgerber walk us through the new Standard Contractual Clauses (SCCs) for international transfers and for controllers to processors, the newly issued EDPB Supplementary Measures Recommendation, and the UK adequacy decision.
E-Discovery consultant Lighthouse returns to our M365 in 5 series for a discussion about the importance of compliance and governance in M365 and collaboration among stakeholders to balance risk and business needs. Reed Smith’s Anthony Diana and Therese Craparo join Lighthouse’s John Holliday to discuss implementing controls and managing data to mitigate risk.
Join our UK tech & data lawyers Cynthia O’Donoghue and Aselle Ibraimova as they guide you through Data Subject Access Requests (DSAR) in the European Union. They cover recent court cases, issues surrounding identity verification, the varying scope of DSAR requests, and how to deal with third-party data and timelines.
Next on M365 in 5, we examine the data protection legal and regulatory considerations of Microsoft 365 relating to profiling, tracking and data transfers in compliance with the GDPR – and their implications in the EU and Germany.
Hear from Reed Smith partners Andy Splittgerber (Munich) and Anthony Diana (New York), with counsel Catherine Castaldo (New York).
Tune in for the latest on influencer marketing from Jason Gordon and Sarah Bruno as they discuss practical considerations for brands, including morals clauses, data ownership, data protection laws, intellectual property rights, and geo-fencing and other cross-border issues.
Proposed cybersecurity rules from the OCC, FDIC and FRB affect banking organizations and bank service providers. In this panel discussion, three lawyers from Reed Smith’s Tech & Data practice – partner Anthony Diana, counsel Catherine Castaldo and associate Trevor Satnick – discuss specific impacts and describe what business leaders have to do to prepare.
Leading tech & data lawyers Andy Splittgerber and Christian Leuthner discuss marketing consent in Europe in relation to data protection and spamming laws. Andy and Christian will guide you through the various issues involved and what you need to know.
Join two of our Munich-based data protection team, Ramona Kimmich and Andy Splittgerber, as they outline the legal situation on the use of cookies in Germany and the EU. They discuss the current status of the EU ePrivacy Regulation and of Germany’s cookie law (TTDSG) and provide insight into the changes organizations operating websites in the EU need to make in 2021, if they want to use tracking technologies in compliance with data protection rules.
Sarah Bruno and LiLing Poh discuss recent trends as organizations invest more in technology through the acquisition of new platforms or programs, and through partnerships with vendors, to bring products to market.
Join members of our tech and data team, Andy Splittgerber and Christian Leuthner, as they discuss the first fines levied under the EU’s data protection law three years after the EU General Data Protection Regulation went live. They take a look at recent developments and describe situations where it may be worth challenging the data privacy enforcers. Andy and Christian give valuable tips on what to do if the data protection authorities knock on your door.
Wrapping up our M365 in 5 Foundation series with Lighthouse, this episode concludes our Teams overview with a dive into functionality and controls of audio/video conferencing capabilities, including the integration of chats, whiteboards, translation, and transcription services, by Reed Smith’s Anthony Diana and Erika Kweon and Lighthouse’s John Collins.
In this episode, Reed Smith’s Anthony Diana and Therese Craparo, with John Holliday from Lighthouse, continue the discussion of Teams functionality. Hear about how Teams channels are changing the way organizations work and collaborate, integrating communications, documents, tools and apps for a department, project or program in one location, bringing increased operational productivity along with key legal and risk considerations.
Hear more about Teams from Anthony Diana and Trevor Satnick of Reed Smith and John Collins from Lighthouse in a discussion of the enhanced functionality of M365’s new instant messaging platform, including persistent chats, modern attachments, expressive features, and priority messaging, which enhance communication but can bring increased eDiscovery or regulatory risks.
Anthony Diana and Erika Kweon from Reed Smith are joined by John Collins from Lighthouse to provide an introduction to Teams and how it is transforming the way organizations are working and communicating.
Join Reed Smith’s Anthony Diana and Trevor Satnick, with John Holliday from Lighthouse, in a conversation about OneDrive for Business and how organizations can use it for personal document storage, such as giving other users access to individual documents within an individual’s OneDrive and acting as the storage location for all Teams Chats.
Anthony Diana and Trevor Satnick from Reed Smith, joined by Lighthouse’s John Holliday, discuss the enhanced file share and collaboration functionality in SharePoint Online, including real-time collaboration, access controls and opportunities to control retention and deletion.
Tune in for the launch of our M365 in 5 Foundation Series for a primer on what legal, compliance and C-suite executives need to know about M365 apps and tools. Listen to Reed Smith’s Anthony Diana and Therese Craparo – and guest John Collins from Lighthouse – as they discuss the enhanced functionality of EXO, including new data types and the potential for enhanced governance.