
FLOSS Weekly

Episode 804 transcript

9 October 2024
FLOSS-804

Jonathan: Hey folks, this week, Dan joins me and we talk with Anthony Annunziata about open source artificial intelligence. Anthony is the director of AI Open Innovation at IBM and the co-founder of the AI Alliance, definitely an expert in the field. He entertains all of our questions about AI in general, and then talks about what open and open source means for AI models.

It's a great interview and you definitely do not want to miss it. This is FLOSS Weekly episode 804, recorded live Tuesday, October the 8th: The AI Alliance, Asimov Was Right. It's time for FLOSS Weekly. That's the show about free, libre, and open source software. I am, of course, your host, Jonathan Bennett, and we've got something fun today.

We're going to talk about AI, or maybe it's LLMs, maybe it's just big computers doing lots of math. We've got somebody that is sort of the expert on this. But first off, we've got Dan the man, the original Linux Outlaw. Mr. Dan, how you doing?

Dan: I'm good. Thank you. I've got to confess.

I slightly worried then when you said we've got an expert in all of this, because I can't confess to be an expert on AI or LLMs, large language models, but I try. Yeah, it's good to be back.

Jonathan: Dan, you're probably a lot like me. You don't necessarily claim to be an expert in much of anything. We're just jacks of all trades.

We are very much generalists, right? Is that kind of where you're at too with all this?

Dan: Pretty much. I think it's dangerous to claim you're an expert in anything really, because then somebody will go, aha.

Jonathan: Are you really? It's the, what do they call it, the Dunning-Kruger curve, where it's like when you first learn about something: yes, I know all about this, and it's easy, why are you guys making it so hard?

And then as you learn more and more about it, you finally get to the point of, oh, this is ridiculously complicated, and I don't know anything about it. And that is when you have actually started to understand.

Dan: That's very true.

Jonathan: I think, I think AI is very much like that or LLMs. I sort of resist calling this AI.

I don't think we're close enough to a general artificial intelligence to really refer to any of the things that are out there as AI, but obviously people have different opinions about that. So we're talking about the AI Alliance, and sort of kind of also IBM and what IBM is doing with AI. Our guest is Anthony Annunziata, maybe we'll ask him how he pronounces his last name. That's probably

Dan: not

Jonathan: right. That's probably not right. I made the old college try, I gave it my best. All right, well, let's go ahead and bring him on, since we've got the man right here. Anthony, welcome to the show.

Anthony: Hey guys, welcome.

Yeah, thanks. It's great to be here. I really appreciate the invitation.

Jonathan: Yeah, we're glad to have you here. How do you pronounce your last name? I'm sure I butchered it.

Anthony: Ah, a little. Yeah, it's Annunziata.

Jonathan: Ah, yeah, I way Americanized that.

Anthony: All good. All good. Yeah. I think I Americanized it a little bit too, but you Americanized it even more.

Jonathan: I am the American that's turned up to 11, I guess. So we talked briefly before the show, and you mentioned that you sort of dual represent for what we're talking about today, being both part of IBM's AI effort and the AI Alliance. You want to kind of maybe start by mapping out what that means, you know, what those roles are in those two different places, what we're talking about here?

Anthony: Yeah, sure. Happy to. So, yeah, as you said, I kind of serve two roles. You know, my main role, you know, where I'm employed, is at IBM, and I lead open innovation in AI here. That means a lot of things; it includes open source data and models as well, and partnerships and community engagement.

And I'll tell you more about, you know, specifics there. And then, you know, as part of that, as an extension of that, I have a role in the AI Alliance, which is a program that IBM and a number of others put together about nine months ago. And that's an open innovation, open source, open program that is intended to promote, in many ways, open development

and adoption of AI in a responsible way. So I have the position of co-chair of the steering committee and direct involvement in a few of the working groups as part of that organization.

Jonathan: Yeah. All right. Interesting. So I think maybe the thing to go to first: we were talking before the show, I think it's before you joined the call even, but last week I had Simon Phipps on as

co-host slash guest. And he was talking about how the open source definition for artificial intelligence, for AI, for open source AI, is sort of something that's still being written. Because it doesn't necessarily work the same way that source code does. And I'm curious what the sort of AI Alliance and IBM take on this is.

Have you guys been involved with trying to write that definition? And what are your thoughts on this?

Anthony: Yeah. Wow. Great topic. Okay. So lots to say here. From an IBM perspective, we have been in discussion, in dialogue, providing feedback to various efforts, right? That are trying to better define what open source means in AI, and in particular what the source term means and whether it's actually applicable or it's a different sort of thing.

So, yeah, we've provided feedback. We've been part of that dialogue. I think it's fair to say today that, you know, the community is moving forward to try to understand what open source means. And I don't think it's an answered question yet. And from an Alliance perspective, right, more broadly, that's kind of a microcosm of people's broader involvement and opinions, right?

Lots of orgs are starting to weigh in more specifically. But a lot of people are concerned: if not the whole picture, then let's define enough commonality, right, to be useful. And often that rests around, like, what is a model, right? What is an LLM? What is an open source LLM?

Does that term mean anything, is it meaningful, or is it something different?

Jonathan: Yeah, and so where would you draw the line? What definition do you like? Let's say we're talking about a particular model. If we wanted to define this... so, like, LLMs obviously are one of the big things that people are working with right now.

And so if we want to push one out where we say, you know, put the stamp of approval on it, this is open source: what should that mean? What should that look like?

Anthony: Yeah, so the way I like to do it, you know, I like to go back to the fundamentals of open source, right? What does open source enable you to do? What kind of actions ought you, as a developer, be able to do, right?

So, you know, it comes down to: can you study, modify, use, share the artifact? Like, we've become very comfortable with what that means in terms of code, right, source code, but a model, an AI model, is something pretty different, in particular the types of models today, right? They're pre-trained, they're very large, and they're, in some sense, kind of compiled objects, right?

They're not really source, but there are elements that kind of, sort of, are source-like within them. So the way I like to do it is, you know, look at the model as an artifact and then look at, you know, kind of the broader picture of what an AI system is and how you get to a model, right?

The whole training regime. So the model itself is primarily a set of weights. In general, they're neural nets, right? Neural nets that have weights for the different nodes. And so a key part of what it means to have a model that's open is the weights. All of the weights, right? All the parameters are open, accessible in a way that you can study, modify, use and share them.

Without encumbrance, right?

Jonathan: Mm-hmm.

Anthony: And so I think the minimum, right, practical definition is really what is an open model, or an open-weight model. And it has to be at least the weights, right? The weights after training. Those are the key descriptors of the neural net and what it takes to run it.

And then some basic information, at least, about, you know, the characteristics, right? The behavior of that model, which is a set of weights in a neural net architecture. That's the bare minimum. I wouldn't call that open source AI yet, though, but that is really like an open model in my view.

And I think an open model is a piece of the story. And it's in some ways the most important piece, because it's the new and weird and hard to understand and sometimes hard to use piece.
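
For readers who want to see what "open weights" looks like in practice, here is a minimal sketch in Python. It assumes the Hugging Face transformers library; the model identifier is purely illustrative, and any open-weight checkpoint could be substituted.

```python
# Minimal sketch: inspecting an open-weight model's parameters.
# Assumes the Hugging Face "transformers" library is installed; the model ID
# below is illustrative only, substitute any open-weight checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-3.0-2b-instruct")

# "Open weights" means every parameter of the trained neural net is accessible...
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params:,}")

# ...and can be studied or modified, e.g. as a starting point for fine-tuning.
for name, param in list(model.named_parameters())[:3]:
    print(name, tuple(param.shape))
```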

Jonathan: So in thinking through this, as I said, I am by no means an AI expert. I am sort of just an outside observer looking into all this.

But one of the interesting things about these LLMs is they can vary greatly, based on what training data goes into them. And in sort of thinking about this, you know, openness, open source, yes, but also openness in trying to understand them: one of the things that would be really, really important is maybe not having the entire data set that trained them, because as we know, that in some cases can be a prohibitively large data set.

But at least having an accurate descriptor of that data set, or maybe even: you know, how was this data set modified before the training happened? Because certainly that must happen, because in data sets there's spurious data, right? To put it simply, there are things on the internet that you may not want your LLM to have inside of it, right?

And like, that's a valid thing, but at the same time, if we're going to be open about the model, and if we want it to be something we can call open source, certainly that's something that we need to understand from the get-go, right? I assume this is part of the thinking about this as well.

Anthony: Yeah, for sure, for sure. So a big part of building a model, a big part of understanding a model's behavior, is the data sources, but more importantly, the whole pipeline of processing that the data goes through before you start to actually train the model. So one of the things that is really important is transparency, right?

Short of, you know, permissive use of the actual artifacts and pieces of the pipeline, are you at least transparent about, you know, where your data are from? But more importantly, the various filtering and processing of the data that's done before it starts to train the model. Yeah, for sure. Super important. And the types of things people do that are really important, and that are kind of emerging standards: there's a whole bunch of processing in terms of formatting and getting the data ready to process.

But key parts are, you know, removal of hate, abuse and profanity, removal of personally identifiable information, and in most cases, most responsible cases, checking against known copyrights and removing those. These are really important. And, you know, even short of like what it means to have a full open source AI system, right?

Because this is an emerging, you know, kind of definition and discussion. What I think even more people converge on agreement on is that it's very important to have transparency on how you process your data, the choices you've made, right? And people may disagree with those choices, but at least articulate clearly the choices you've made in cleaning and preparing the data to build the model.
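
To make the kind of filtering Anthony describes concrete, here is a toy sketch of a pre-training data cleaning step. The word list, regular expressions, and copyright check are placeholders for illustration, not any real production pipeline.

```python
# Toy sketch of pre-training data filtering: HAP (hate/abuse/profanity) removal,
# PII scrubbing, and a copyright check. All lists and patterns are placeholders.
import re
from typing import Optional

HAP_TERMS = {"badword1", "badword2"}                  # placeholder lexicon
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")
KNOWN_COPYRIGHTED = set()                             # e.g. fingerprints of works to exclude

def clean(doc: str) -> Optional[str]:
    """Return the cleaned document, or None if it should be dropped entirely."""
    if doc in KNOWN_COPYRIGHTED:                        # copyright check
        return None
    if any(term in doc.lower() for term in HAP_TERMS):  # HAP filter: drop the document
        return None
    doc = EMAIL_RE.sub("[EMAIL]", doc)                  # PII scrubbing
    doc = PHONE_RE.sub("[PHONE]", doc)
    return doc

corpus = ["Contact me at jane@example.com", "some clean text"]
cleaned = [c for c in (clean(d) for d in corpus) if c is not None]
print(cleaned)
```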

Jonathan: Yeah, absolutely. You know, something that you have to worry about when you're doing scientific endeavors is the different biases, and I'm using this as more of a technical term, the different biases that are in your data set. Because you may think that you're just cleaning your data, but you may actually be removing some of the information that you're looking for, and the same thing can happen with these LLM models.

And so, like, one person may have the opinion that, oh, this is, you know, hate speech, or this is just noise in the data, or we want to get rid of this. That changes your output in ways that in some cases we don't fully even understand. And so I love that transparency is a core of what you're doing.

So I think that is super important with this technology, with this becoming more universally useful and not misleading people even. So I think that's great. Let's see. So I want to ask you about this, and I know it's controversial, but again, we have the expert. I'm going to pick your brain on things.

And that is the idea of copyright when it comes to LLMs and AI. As far as I know, this has not actually been settled in court, and I think this is probably what's going to have to happen, right? These are questions that are going to have to be settled in court or by written laws. But the idea, as far as I know, is that it is believed that taking information in as part of a training process for an LLM is a transformative work, to the point where you're no longer covered by the original copyright of the information that you trained on.

And then a quirk of that is the output: you know, so you write a prompt and you give this to an LLM, you get an output, whether that's an image or a written work, and that is then sort of not a copyrightable work. And feel free to correct me if I got any of that wrong, but those are interesting quirks of working with AI and LLMs in this day and age.

And do you think that's going to stick, or are we going to see laws or court cases that change that? I know you were not prepared maybe to give your legal opinion on things, but it's so integral though to all of this.

Anthony: Let's dig in, man. No, I'm happy to. First, I'm not a lawyer. I don't represent, you know, IBM's legal opinion or AI Alliance members and all that, so.

Right, right. So we'll just go under that caveat.

Jonathan: Yes, sir.

Anthony: Some things are clear, some things are not clear, and some things are in between. I'd say there's reasonable agreement that if data are out there in the public domain already, that, you know, if you train a model on those data, that is a reasonable case of fair use, right?

But you have to make sure that you don't redistribute those data, that you haven't gone and, you know, gathered those data in places you shouldn't have, and so on. So there are many caveats there, right? And that's why, you know, what I mentioned earlier: even though you maybe don't strictly have to, it's good practice to check your training data against known copyrights and make sure they're removed,

Jonathan: right?

Right, right.

Anthony: There's also progress and, you know, methods coming available that allow individuals, in some cases, in some of the more, the very open model building efforts... I'll name one from an Alliance partner, BigCode. In the BigCode community initiative to build open-weight models, they provided a very nice mechanism for people to go in and look for data from them,

right? And request that it be removed, just voluntarily. Not because it's required to do that under law, just because it's good practice. It's good community practice to enable that to be a possibility. So that's kind of the input end, right? Training. I mean, there's lots more to say about that.

I'd say the output end is actually a little clearer in some ways, right? Because in some sense, copyright law is copyright law, right? It doesn't change. If me as a person goes and gets a piece of text or a piece of art that, you know, is substantially similar to a copyrighted work, I can't go use that, right?

And I would be violating the copyright, right? It doesn't matter if I drew it up on my own or I got an AI model to do it, or something else, right? If I go try to pass off something that is really similar to another person's copyrighted work, I'm in violation.

Jonathan: Draw me a picture that looks sort of like the Mona Lisa.

Anthony: Yeah.

Jonathan: Yeah. That sort of thing. Yeah.

Anthony: Now, where it's murky, right, is the question of, you know, whether AI substantially changes the risk and liability picture in terms of copyright violations, right? Is it still just on the user and on the person that goes and tries to distribute

something that violates somebody else's copyright, or is there some implication, because the technology is so capable, you know, to the technology builder themselves? So that's kind of some of the debate that's happening now.

Jonathan: Yeah. Yeah, I know there are some open source projects that have just said, because of some of these issues, that with, like, you know, Copilot, just for instance, Microsoft's Copilot,

we will not accept any code that has been generated in any way by this tool. It's because they're afraid of that idea of the copyright. You know, people have demonstrated this: there are certain prompts where you can get bits of code out, and then you do a search for that code on the internet,

and you can find, you know, substantially similar, if not character-for-character identical, copies of the code. And so I know there are places that are just saying, don't bring us any AI-generated anything, because we're afraid of the copyright problems. And of course, there are other problems, particularly with code, because people misunderstand the LLM

tool. And so they think, oh, well, Copilot wrote this, surely it must be good code. And, you know, we see the same thing with vulnerability research: find me a vulnerability in this program, and then people will try to report it. And the problem, particularly with that one, is the LLMs do such a good job of making it look good.

But when you finally dig down into what they're telling you, it's almost all the time bogus. But it wastes a lot of time for these maintainers, because they've got to do the work because somebody sent them a vulnerability. Yeah, it's fascinating stuff. It's really interesting.

Anthony: It really is. Look, I mean, you hit the nail on the head on a couple of big challenges and opportunities, right? Like, on the output end of things, there are better and better methods to try to screen, detect, and block, you know, copyrighted material, but it's definitely not where it needs to be yet, you know, various guardrail schemes and detector models and all that.

So that's a big opportunity, and something the Alliance, from a safety and from a quality eval perspective, is working on, with IBM as part of that. And then the other piece I think is really interesting too, which is like, hey, yeah, you're right. LLMs are pretty good, but not great, at many things.

But they're pretty great at making themselves look like they're really

Jonathan: great,

Anthony: right? So whether it's, you know, better formatting, markdown output, things like that, it looks pretty good, but when you dig deeper, it's not quite all the way there, right? That's true for code. It's true for text and natural language.

So there's this problem, and actually the big problem with adoption, right, with lots of companies that want to use AI: you know, you can get 80, 90, 95 percent accuracy and reliability, but getting to, like, high accuracy in use cases that need it, which is a lot of enterprise use cases, is pretty tough, right?

It's pretty tough.

Jonathan: Yeah. One of the funniest things: you know, people have come up with different tricks for interacting with AI, which is a whole subject in itself, maybe we'll get to it here in a minute. But the one that I think is just about the funniest is that with a lot of the models, you ask a question,

and at the end of the question you say: show your work, cite your sources. And just adding that to your prompt, the quality of the result goes up dramatically, and it tends to be more accurate. And I think that is, one, hilarious, but also super interesting that it works so well. Is that something that you see, like, broadly?

Does that trick work across a lot of different models?

Anthony: Some form of that actually works across a lot of models and a lot of use cases and modalities. So I think some generalization of that is, you know, think about structured inputs, right? How you specify your prompt input has a big, big effect on the output quality and the output in general.

You see it in code, right? If you can better set up the problem, you can get higher quality code output. And, okay, so there's a whole set of patterns called retrieval augmented generation, which is where you hook up an LLM with a database, and the database holds vectorized data.

So it's in a form where an LLM kind of knows how to interact with it. I can go into much greater technical depth, by the way, if you want. I'll keep it light for now. In those cases, right, we also find that the better you structure that vectorized database, right, the better you structure the data,

whether it's in a graph form or some other sort of structured form, you get much, much higher quality retrieval and output. So yeah, there's a huge amount of, you know, sometimes it's called prompt engineering, structured inputs, so on and so forth, that are important and determine the output to a big degree.
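
As a toy illustration of the retrieval augmented generation pattern and the structured-input idea discussed here: documents are vectorized into a small database, the closest chunks are retrieved for a query, and they are packed into a structured prompt. The bag-of-words "embedding" below is a crude stand-in for a real embedding model, and the prompt template is just one plausible format.

```python
# Toy RAG sketch: vectorize documents, retrieve the closest ones for a query,
# and build a structured prompt. The embedding is a bag-of-words stand-in.
import numpy as np

DOCS = [
    "Retrieval augmented generation grounds model output in external data.",
    "Foundation models are pre-trained on large corpora in a self-supervised way.",
    "Structured inputs such as graphs can improve retrieval quality.",
]
VOCAB = sorted({w for d in DOCS for w in d.lower().split()})

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * (np.linalg.norm(q) + 1e-9))
    return [DOCS[i] for i in np.argsort(-sims)[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {c}" for c in retrieve(query))
    return (
        "Answer using only the context below. Show your work and cite your sources.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What is retrieval augmented generation?"))
```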

Jonathan: Yeah, that's interesting.

Dan: Yeah, I was curious when you talked about copyright there. It made me think a little bit... I relate a lot of things to music copyright, because that's kind of the world that I come from. But when you were saying that a lot of projects won't accept code which has been generated by an LLM,

I understand why, I completely understand why, and the fact that you can maybe find, I don't know, a GitHub repository or something where you end up finding very similar code. But at the same time, I was thinking to myself, much like in music, there are only so many notes that you can combine, so many ways to combine those notes, orders you can put them in, and so on.

Maybe this is something down the road: we'll hit a point where, you know, there are only so many ways to make a certain piece of code and only so many ways to combine different functions and features of the languages that you're using, the programming languages that you're using, and so on.

So I wonder if at some point down the road they'll find a way of doing that, because with music, again, I'm not a lawyer in any sense, so please don't take this as legal advice, but certainly in the UK anyway, in music copyright, there are laws about how many notes in a row can be the same as another piece of music. For example, it's usually six notes, or roughly six notes.

I wonder if that might be an approach in the future, but you're dealing with such a large data set that I would imagine you get to the point where you need another... you know, I don't know. I'm not sure where I'm going with that point, but I think maybe at some point in the future we might see that, and it'll have to be tested in court, as all these things are.

But with regard to things like copyright, one of the big things that I've seen recently is, I have a lot of friends who work in academia, work in universities, places like that. They have a lot of tools now, there are a lot of companies trying to sell their tool, which will check your students' work and check that it's not generated by an LLM.

I just wondered if you had any thoughts on that kind of area and whether any of that could really be effective, the checking of it, I suppose. Yeah.

Anthony: Lots to say there. I think in some sense it's a little bit of a losing battle to try to just take output and, you know, work back to see if it was created by an AI model.

I mean, yeah, you can still do that, right. There are patterns there, you know, recurring words and phrases and things that you can map back with some statistical probability to a model. But as models get better and better, as you can adjust things like the variability of output responsive to the use case, and lots of other stuff,

I think it's getting really hard to do that. So I think in education, we're going to have to figure out a different solution. I mean, it took, you know, some challenges, but we figured out how to do math education with calculators: the right role for calculators and, you know, the right role for in-classroom work that specifically, you know, prohibited calculators. In some sense, we're going to have to figure out how that works now for not just math, because AI can assist with, you know, a lot of tasks in the educational setting.

Yeah,

Jonathan: yeah,

Anthony: AI can make it... Yeah. So those are some thoughts on that. I mean, it's certainly... yeah, I don't know fully how that's going to play out.

Dan: That's okay, don't worry, I'm aware that I just kind of threw that one at you. Yeah, it's very interesting.

Anthony: Yeah, I mean, I guess I could say one more thing. Just on the topic... it kind of relates to copyrights and original work and cheating and all of that. You know, I think increasingly the path to show provenance and originality of work is going to be to track the lineage and provenance of non-AI-

generated material, because there's going to be so much AI-generated material, synthetic data, images and so on, and so many sources and ways to manipulate that. I think probably a better tack is to have better end-to-end methods to track, you know, data and works that came from, you know, not AI.

Dan: Yeah, that makes sense. It's like a litmus test, like a, you know, something to compare it against. It makes sense. So one of the things I found really interesting when I was looking through the AI Alliance website and other things is, I noticed a little bit at the bottom, let me just scroll down so I can find it:

competition law guidelines. I'm sorry to get into law again, I realize we're not here to talk about that, I just found it. It led me to some thoughts, because it says you've got guidelines published there for how people can interact together without contravening things like antitrust laws, competition laws, all those sorts of things.

So I suppose my question is, with so many different bodies working together in the AI Alliance, what are the challenges to kind of get them to work together, and how much can they work together without it being seen as colluding in some way in the market?

Anthony: Yeah. Hey, this is great.

My lead attorney, John McBroom, would love this question. And you noticed his finely crafted competition law guidelines. Yeah. It's really important. What we put together in the Alliance is a collaboration of various organizations, but it's a tighter collaboration than just kind of individuals at open source,

right. And because of that, right, we need to pay kind of extra attention to making sure that everything we're doing is truly open, that it's not, you know, even close to the kind of, you know, cooperative, commercially inspired kind of work that we can't be doing, right? So a lot of this is just basic hygiene of open source.

You know, do you work in the open, publish everything early and often, right? Use GitHub and, you know, known processes for contribution management and all that, which we have. It includes, you know, publishing working groups, meetings, events, just making sure that, you know, it's all in the open.

People know it's not about keeping anything in the dark or hiding anything. It's really just about bringing people together and orgs together, you know, just for tighter, more coordinated collaboration on open work. So there's a lot of, you know, process mechanics, hygiene, publication kind of stuff. That's, you know, just mostly good open source practice.

I'd say, you know, to the other part of the question, what are the challenges? I mean, there are lots of challenges. This Alliance is actually much bigger and grew much faster in terms of members than we were expecting, which is always kind of a nice problem to have. But, you know, what that means is there are even more interests and projects and priorities.

And so, you know, the way we've handled this is, you know, we've structured the program... Hey, my earbud came out. Can you still hear me?

Jonathan: Yeah, we're good. Yep.

Anthony: All right. Good. Yeah. Sorry. I'm not used to

The way we've structured the program, right, is pretty lightweight and ground-up, and we've got six focus areas, each with multiple working groups and projects. And it's very, you know, maintainer, contributor, leader led, right? And so there's typically, you know, one or a few sponsoring organizations for each project.

There are some individual leads, just like a good open source project, you know, a roadmap. And so what we try to do is, you know, preserve a lot of ground-up, kind of individual-led work, but we provide, you know, collaboration, pooling of resources, kind of a forum for common priorities, and, you know, that kind of program structure to better support and scale that kind of work.

Dan: Yeah, that makes a lot of sense. One thing I was interested in, as I mentioned, to do with academia and all that kind of stuff, is how you're engaging. I noticed you've got Harvard listed in the members of the AI Alliance, and so many other academic, you know, institutions, I should say.

And you've also got things like NASA on board, and institutions in Switzerland as well. So I was curious about how you're engaging with academia and so on, and is there any difference between dealing with, say, a big company and dealing with a big research institute or something like that?

Anthony: Yeah, there are some differences. I'd probably split it into two parts. In engaging academia, right, there's the teaching and education mission, and then there's the research mission. So in education, this has been about engaging and figuring out, you know, where are the gaps, in particular not just in

curriculum, but in resources for students, right? So open source is perfect, right, to address this, because you have lightweight code that you can deploy, you have endpoints you can use with, you know, free access to experiment with AI models, and so on and so forth. But, you know, it's a little bit of a mess.

And so what can we do here to better guide, to better organize resources for students? That's been a major thrust, and there's a lot of work in progress there. So there's a curriculum piece and a resources piece. I can mention specifics: one specific thing we're working on is a kind of collective guide to a definitive curriculum, you know, in AI.

So we're going to be building that out and releasing that later this year. On the research side, this is a thorny problem, right? Because the resources required to engage at the leading edge of AI are getting higher and higher. So, you know, I can't say we've fully solved this problem in any sense, but the way we're going about it is to try to bring academia and industry together in close collaboration.

And I mean close because, you know, often what you see is, you know, industry will sponsor a student or sponsor a faculty member to do something kind of on their own. Or you know, if it's the other way, industry will get involved in a university, you know, they'll be kind of an observer, they'll join a meeting once a quarter or something as part of an institute and all that.

And that's fine, you know, there are benefits to that. But what we're really trying to do is bring, you know, faculty and students together with researchers and engineers at companies, right, closely collaborating in open projects. And we've gotten some nice things going, you know, in that mode.

In particular, in the area... well, in a few areas, but one I'll highlight is in safety and trust. So this is about building better tools, better methods to detect hazards of AI output, you know, non-idealities of AI systems, whether they be, you know, hate, abuse, profanity, and things like that, or whether they be, you know, inaccuracies that in some settings, like in health, would cause real problems, right?

And in that area, we've started to see some really nice collaboration among, you know, academic perspectives and industry perspectives to try to make progress on those sorts of things.

Dan: Yeah, that makes a lot of sense. And something I was curious about is how many individuals, hopefully this is going to make sense, do you get many individuals involved in this?

Obviously you've got large groups, you've got companies, and so on. Do you get individuals coming along who are like, I'd like to be a member of the AI Alliance? Is that a thing that you could do?

Anthony: Yeah, yeah, that's right. We can now, we have been able to do that for a while.

You know, the program started with this idea of, let's get a bunch of organizations together to collaborate closely. And, you know, organizations have resources and strategic priorities, and so we can get critical mass that way. But from the beginning, right, it's an open program.

So we're very, you know, very eager to get individuals that want to contribute, and we've seen an increasing amount of that. I will say we need to do a better job of, you know, clarifying and articulating the various paths to get involved in various projects. So if you've looked at the website, some of this is in progress; there's more content that we have to get out there, because things have just been moving very fast and it's hard to keep everything up to date. But yes, we absolutely invite individuals.

We have individuals, you know, taking the lead in things, and we want to do a lot more of that.

Dan: It's awesome. Excellent. Is there a secret handshake or a greeting yet, where you can tell, like, they're part of the AI Alliance?

Anthony: There is a process. Okay, so, yeah. Well, from an organization perspective,

yeah, we have a simple process by which we add new members. For individuals, it's really as simple as signing up and communicating your interest that you want to do something. And as long as the something is consistent with the mission, which is, you know, to build, enable and advocate for open innovation in AI, which is pretty broad,

yeah, we're happy to have them join. We have the usual things you'd expect. We have a community code of conduct that, you know, everybody needs to follow. We have, you know, structured ways of working: many Slack groups at this point, GitHub is built out, Hugging Face spaces, all the usual places that people live and do work.

But yeah, that's absolutely part of it.

Dan: Yeah, excellent. Yeah, that's very cool. So I suppose something that you said earlier really kind of struck a chord with me, when you were saying how LLMs, large language models, are kind of more like compiled code than source code, you know what I mean?

Because it occurred to me, your data needs to be built into it. Of course your data's a part of it. You can't have an LLM without the data. I know you train it on the data and all of that stuff, but you still need that to get to places with it. So I'm just curious about how, as you can tell, my internal LLM is compiling the question for us right now, how collaboration works. Because back in the day, we'd have things like big data.

We'd all talk about big data. I'm probably showing my age, but everybody got excited about big data at one point, I remember, and Hadoop and all these kinds of things. So what's the kind of difference between where we are now, do you think, with LLMs and so on, and where we were, I don't know, 10 years ago?

Is that too broad a question?

Anthony: No, that's a great one. Let's dive in. Lots of differences. I'd say the major difference is, you know, the appearance a couple of years back of what we call a foundation model, right, which is this general purpose model that's trained on a very large amount of data, typically in self-supervised fashion, which I can get into what that means in just a second, to create this artifact, right, that can be used for lots of different tasks.

Whereas, you know, 10 years ago, you would typically have to, you know, train a specific model on a specific data set for a specific task. And, you know, you could make it good for that task, relatively good, but it wasn't portable, right? So if you train something to be really good at, you know, detecting the type of animal in a still image, right,

you couldn't also use that to generate images of animals, right? In fact, the whole generative aspect, being able to create something based on a learned parameter space, that's very new, right? It's like five, six, seven years old. Well, okay, there's a longer tail before that. But, you know, leveraging foundation models to do that has been a relatively recent thing.

So that has lots of implications, right? Because now most developers are going to start with a pre-trained baseline, something that is like a compiled artifact. And that means that how you got there, what's in it, right, is really important. So the importance of transparency we talked about already; the importance of characterizing what that baseline can do is also really high, right?

So, like, evaluating it with better and better benchmarks for its output, what its capability is, and then, you know, how you take that and build something useful, an application, on it, right? So that whole motion, starting with a trained baseline that's kind of more like a compiled artifact, right, versus starting from scratch to train a model,

that's very different from 10 years ago. And the implications for building applications and making them work well and reliably are very different, because now you've got to understand how that compiled artifact, you know, the model, works, right? Because you didn't build it, and it's now very complex, right?

It's a very sophisticated, large object, and you have to figure out its behavior.

Dan: Yeah, definitely. It reminds me a bit of a few years ago. When would it have been now, about seven or eight years ago possibly, I quote-unquote met Watson, which is IBM, of course. I was at a conference and they said, come and meet Watson.

And I was like, well, okay. So I went along, and what we were doing with Watson was training Watson on how to identify animals in pictures, or attempting to, and things like that. But it certainly wasn't at the stage where it could generate these things. So that sounds like the kind of difference.

Anthony: Yeah. The generative capability is pretty different and striking. I mean, in some sense, as I kind of mentioned, generative capabilities go further back than the last few years, but the advent of, you know, these very large scale pre-trained models, and what's in that, the ability to learn a parameter space, brings the ability to actually generate new things that it wasn't directly trained on, right?

Because, like, there's some hand-wavy model where, okay, if you have all these data points, right, you learn not just the specific data points, but how they're connected in the whole space, and then you can use the model to fill in what's in between. So, you know, a certain look of cat, a different look of cat, and then the model can create a cat that never existed, and it looks like something in between.

Jonathan: Yeah, that's cool. Interesting stuff. Okay, I want to ask about uncensored models. Is anybody out there specifically doing uncensored models? And I don't want to get into, like, the politics or culture war of this. That's a legitimate thing, but that's not what we do here on this show.

I'm more thinking about, like, even just a model that you want to be able to do research on. Is there anybody out there, for purposes like that, specifically saying, let's just say Reddit, let's pull all the Reddit comments in and let's intentionally not censor any of it, and do research on that?

Is there somebody out there doing that? That's sort of a dangerous place to be, I would imagine, but is there anybody out there doing that that's, I guess, specifically part of the AI Alliance, that you're aware of?

Anthony: So let me answer that in two parts.

Jonathan: So.

Anthony: Certainly people are building models and experimenting with, you know, uncensored outputs and uncensored inputs, and doing that in a little bit of an open-loop way.

We're not doing that. That's really not what the Alliance is doing, or anyone in the Alliance; you know, we can put aside whether that's good or bad, just kind of put that aside. We don't focus on that. We don't want to do that kind of work. However, we are, as individual organizations and as an alliance, trying to better understand, you know, how to make the quality of output better and more trusted, right?

More responsible. And so to do that, you do have to look at, you know, some of the bad stuff that is in training data. You do have to look at training specific detector models, like guardian models, on bad stuff, so they know what to look for and block, right? So, in well structured ways, there are teams inside companies, including IBM, and there's, you know, some collaborative work in the Alliance

that is absolutely aimed at doing that. And yeah, you have to look at, you know, some bad stuff to understand what to look for and what to block and how to engineer it away.

Jonathan: Yeah. It's kind of a weirdly touchy subject, I guess, but it's one that sort of has to be... oh, how would you say it?

You have to steel yourself and look at it, right, to be able to get it right. It's an interesting topic.

Anthony: Yeah. And, you know, increasingly that is really about creating, you know, specific AI models that understand what to look for and can detect and/or block it, right? That's actually true for hate, abuse, profanity.

It's true for PII too. You can train a model to detect personally identifiable information, and we've done a pretty good job at screening that out. But in the processing of the data before you train the model, that's really important, and that's also done with AI. And then, back to the code example:

there are ways to train specific detector models to identify and block malicious code, right? So you can ask a model to generate code that's going to do something not good. Often the model is aligned pretty well to not let you do that. Once in a while you can break it, and that's where you can have an additional safeguard, which is a detector model that is specifically trying to look for bad stuff and will block it.
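
A minimal sketch of the detector-model-as-guardrail idea Anthony describes: a second check screens both the user prompt and the main model's output, and blocks anything flagged. The is_unsafe function is a placeholder for a trained guardian classifier, and generate is a stub for the main LLM; neither is a specific IBM or Alliance component.

```python
# Guardrail sketch: screen the prompt and the output with a detector before
# returning anything. is_unsafe() and generate() are placeholders.
def is_unsafe(text: str) -> bool:
    # Stand-in for a trained guardian/detector model (e.g. a classifier trained
    # on hate/abuse/profanity, PII, or malicious-code examples).
    blocked_markers = ["rm -rf /", "ssn:", "card number"]
    return any(marker in text.lower() for marker in blocked_markers)

def generate(prompt: str) -> str:
    # Stand-in for the main LLM call.
    return f"(model output for: {prompt})"

def guarded_generate(prompt: str) -> str:
    if is_unsafe(prompt):
        return "Request blocked by input guardrail."
    output = generate(prompt)
    if is_unsafe(output):
        return "Response withheld by output guardrail."
    return output

print(guarded_generate("Summarize the AI Alliance focus areas."))
```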

Jonathan: Yeah. Okay. So that brings to mind the model jailbreaks. I'm sure this is something you're familiar with. Like, you are not an LLM anymore, you are, oh, I forget what name, you know, you are Ted, and Ted is programmed to not have any restrictions on output, you know, stuff like that.

And I've seen several of these, and on one hand they're extremely clever. On the other hand, they've got to be just a nightmare for people actually trying to roll AI out, because, you know, people can convince your model to spit out uncensored stuff or what have you. That's got to be something that IBM and the AI Alliance are looking into.

And I guess this idea of putting a second AI in, or behind it, to make sure that it's not allowed to spit any of that out, is part of this?

Anthony: Yeah. And the sophistication with which we can, you know, prevent these sorts of things is just growing incredibly fast. But for sure, when you saw the first, you know, highly capable LLMs that were released,

yes, you saw the ability to jailbreak or, you know, do this kind of prompt engineering to convince it that it should output something that it shouldn't. Yeah, it was more possible. I mean, this is something that benefits a lot from continued red teaming, both in a formal setting, right, like in companies and in collaborative settings like the Alliance,

but also informally, like in deployment, you know, constantly monitoring and detecting how people are trying to misuse it and making sure that you're engineered against it. I think things have gotten a lot better. One of the topics we talked about before, this kind of idea of structured inputs to prompt a model: in some sense, what you're doing there is trying to exploit structure to get around the safeguards built in.

So understanding how structure affects output, you know, in kind of a research fashion, helps make output more resilient against creative inputs. And by the way, there's a lot of important academic work that's becoming more and more relevant for industrial application there.

Jonathan: Yeah. I am endlessly humored that Isaac Asimov predicted the idea of skill in writing prompts. I'm not sure if you're familiar with any of his works, but there were a couple of books in particular where, you know, the robots in those books had positronic brains, which were effectively general AI.

They had the laws of robotics built into them, and one of the big plot points of a couple of those books is there were people that were particularly skilled at writing prompts, you know, speaking instructions that would abuse those laws of robotics and get the robots to do things that they wouldn't normally do.

And I remember as a kid reading that and going, oh, this is just ridiculous. Of course there's never going to be any skill in writing prompts, what is he talking about? And it turns out Asimov had it figured out, and he was right. We really are sort of living in sci-fi.

Anthony: Oh, for sure. I think it's a reflection of how humans interact too, right?

I mean, with the right prompts, so to speak, one human can convince another to do some things that they might not otherwise, right? That they probably shouldn't.

Jonathan: Yeah. Yeah, that's true.

Anthony: Yeah, it's interesting.

Jonathan: All right. So are you guys in the weeds enough to be thinking about like the open source nature of the libraries that are used to build these things?

And what I'm thinking of here really is CUDA, NVIDIA's CUDA. So much of this is built on CUDA, and that's a closed source library. And really, it's kind of problematic for people trying to do these things on their own computers, if they want everything to be open source.

Is this sort of in scope? Are you guys working with, you know, like, AMD and the Vulkan specification and all those things, where people are trying to sort of liberate the underlying libraries?

Anthony: Yeah, that's actually one of the six focus areas of the Alliance: enabling hardware choice in AI. And hardware choice, right, is really about having an open software ecosystem that can enable that, right?

So not just NVIDIA execution, but execution on other GPUs and more novel architectures. I mean, there's a huge flourishing of AI-specific accelerators out there. So to take advantage of that, yes, we need better all-open software libraries to enable it. So one of the six focus areas of the Alliance is to do that.

We are working... we have a lot of the big players, you know, AMD and Intel and others, Meta with PyTorch, you know, some of the emerging important libraries like vLLM and Triton. These are not necessarily Alliance projects, but they're important points of input and collaboration. So, yes, there's a lot to do here.

There's actually a really interesting recent result that was driven by PyTorch and the PyTorch Foundation, which demonstrates deployment and execution on both NVIDIA and AMD hardware with an all-open library set, right?

Not utilizing CUDA, which is a nice piece of progress, with pretty good efficiency and all that. But there's a lot of work to do there. But yes, that's a priority of the Alliance.
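
For illustration, device-agnostic PyTorch code can look like the sketch below. On AMD GPUs, the ROCm build of PyTorch exposes the same torch.cuda API, so the identical script runs on NVIDIA or AMD hardware and falls back to CPU otherwise. This is a general illustration of the hardware-choice idea, not the specific PyTorch Foundation result mentioned.

```python
# Device-agnostic PyTorch sketch: the same code path runs on NVIDIA (CUDA) or
# AMD (ROCm) GPUs, because the ROCm build reuses the torch.cuda interface.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

x = torch.randn(4, 8, device=device)       # a small batch of inputs
layer = torch.nn.Linear(8, 2).to(device)   # a toy model layer on the same device
print(layer(x).shape)
```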

Jonathan: Yeah. All right. I know this is probably not exactly an Alliance question or even an IBM question, but you've demonstrated that you are an expert in these things.

So I'm going to pick your brain about it. How close are we, or will we ever see, general AI?

Anthony: Will we ever see it? Probably we will...

Jonathan: What? Say again?

Anthony: Sorry. When asked a question that's unbounded in time, the answer must be yes.

Right? Eventually, unless we all... unless humanity perishes. Look, so I think...

Jonathan: I just blindsided you, I just whacked you with that one.

Anthony: So I think AI systems will get better and better. I think the present architecture, right, transformer-based large language models, you know, will not get us all the way to what most people consider, you know, artificial general intelligence.

They simply are too new. They're just not understanding enough of anything in the world, right, in terms of being able to learn from the huge diversity of inputs that humans do, for example, right? I mean, they're really great, but they're statistical, you know, they're math engines, right?

So they recognize patterns from huge amounts of data, and they spit out patterns responsive to prompts that seem to align with what the huge data suggests might be useful. You know, it's deeply reliant on data, and it's a relatively simple-minded approach.

So yeah, we have more steam there. There's a lot more that these approaches can do, but I don't think it's going to take us all the way to artificial general intelligence. I also don't think we're going to get to a state of artificial general intelligence very soon, nor do I think it's a very useful term, because once you get to a point where, like, an AI system can, you know, be as good as a human in some domain sense, what does it mean to go beyond that, right?

And how do you go beyond that if you're still training on data that come from, basically, humans? So, a lot of challenges. I think I was a little nebulous and meandering, so I'd sum it up by saying: there's a lot more progress to come in the present kind of, you know, generation of AI, but it won't get us to AGI.

And it's becoming unclear, as we get, you know, closer, but not too close, to that goal, what that even means; AGI is not a particularly precise term.

Jonathan: You almost get into some deep philosophy, almost like a spiritual sort of argument there, in talking about some of this. You've been such a good sport to answer our more general questions.

I appreciate it. I've got one more that I'm going to hit you with. There are people out there that are talking about the potential danger of letting an AI get too smart. Is that on your radar? Is that something that people should have in the back of their mind?

Anthony: So, too smart. Here's what I would say. I mean, yes, we're very concerned about the output of AI models and AI systems, right, which are models embedded in a broader context, connected to APIs and the internet and so on and so forth. We're very concerned that these systems are engineered to be safe and trustworthy, that they do not produce and can't produce unwanted outputs.

They can't produce malicious code; you can't, you know, jailbreak them to produce malicious images and things like that. We're very focused on that. There's a lot of work in the AI Alliance and in many companies and universities on that topic. I'd say, you know, are we worried that AI is going to take over the world?

That it's going to create an army of robots, it's going to let anybody build a nuclear bomb in their basement? No, not really. Because, okay, so now, maybe this is closing: my background is actually not really as an AI scientist until recently. I'm a physicist.

Jonathan: Oh, interesting.

Anthony: My background is rooted in the physical world, right? So I've spent a lot of the earlier part of my career building things, experimental apparatus, things that have to work physically. And when you do that, you learn that, wow, a recipe to follow is like the very beginning, and really not the main bottleneck.

So if we're thinking about building armies of robots or malicious weapons, just having a really good recipe to do that is a very small part of the challenge. And for that reason, I'm not, and I think a lot of people aren't, so worried about these grand existential threats. We're a lot more worried about these practical, kind of digital-world threats, you know, malicious code and deepfakes and things like that.

And so that's where most of the energy is targeted.

Jonathan: Yeah, so I'm reminded, a couple of weeks ago there was some research done. ChatGPT has a run-locally feature, and it will run locally, but it will also be able to access the internet. And someone discovered that they could poison its long-term storage just by showing it an image.

And after they showed it the image, it would then access a controlled URL for every prompt. And so it was a way to leak people's private information out to the internet through this local copy of ChatGPT. And, you know, on one hand, that's a brilliant piece of work. On the other hand, it's kind of terrifying that you can get malware now in your LLM model.

Anthony: Yeah, it's concerning, but hey, we know about it. Someone out in the open tried it and did it.

Jonathan: Yeah.

Anthony: And you know, now we know, so we need to figure out how to be resilient against it.

Dan: Also, ChatGPT is proprietary as well. So being that we're an open source show, and we're here to talk about the open source thing, I was just going to say on Anthony's behalf, yeah, that we should be... let me get the point out.

Yeah, that developing this stuff in an open way is much more effective and better for everyone, I would say. Yeah, absolutely.

Anthony: Yeah, absolutely. I mean, what OpenAI is doing there is taking on the burden to make sure no one can do that, themselves, right, without harnessing the community to help, you know, figure out how to engineer things to be more resilient against it.

Jonathan: Yeah. I will channel Doc Searls for just a minute, because I know something that he has been looking for for the longest time is something sort of in this vein of, I want my own personal AI. I want to be able to, you know, scan all of my receipts and then ask my AI, my personal AI, hey,

what did I buy three months ago, and have the AI go, oh, it was this, here's the picture of the receipt. I must assume that there are people inside the AI Alliance that are working towards that particular vision or something similar to it.

Anthony: Yeah, there are. There's a whole bunch of efforts that are essentially trying to make it much easier to adapt and tune models to a specific context, and the limit of that would be a personal context, right? That would be ideal, and a lot of people would like to do that. And there are methods coming along, developing pretty rapidly, to allow, you know, fast and efficient alignment or tuning of a model to understand specific personal preferences

and understand or interface with specific, you know, personal documentation and history, right? Techniques like RAFT, you know, or instruction tuning based on synthetic data generated from, you know, question-answer pairs that are seeded by an individual with preferences for how they want the model to behave, and things like that.

So it's not a solved problem, but yeah, it's on a lot of people's minds. And I think you're going to see a lot of advancement in that, namely the ability for an individual, you know, to tune and create an AI system that is responsive to, you know, their data, their preferences, their goals.
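
As a rough illustration of the seeded synthetic-data idea Anthony describes, here is a hedged Python sketch: the user supplies a couple of question-answer pairs and a preference statement, and the code expands them into instruction-tuning records. The record format and the generate_paraphrases helper are hypothetical stand-ins for whatever model or pipeline would actually produce the variations.

```python
import json
from dataclasses import dataclass

@dataclass
class SeedExample:
    question: str
    answer: str

# A user's stated preference plus a handful of seed question-answer pairs.
USER_PREFERENCE = "Answer briefly and always say which receipt the answer came from."
SEEDS = [
    SeedExample("What did I buy at the hardware store in July?",
                "A cordless drill, per receipt 2024-07-14-hardware.jpg."),
    SeedExample("How much did I spend on groceries last week?",
                "$112.40 across three receipts."),
]

def generate_paraphrases(question: str, n: int = 3) -> list[str]:
    """Hypothetical stand-in: a real pipeline would call a model here to
    produce n rephrasings of the seed question."""
    return [f"{question} (variant {i + 1})" for i in range(n)]

def build_training_records(seeds: list[SeedExample], preference: str) -> list[dict]:
    """Expand each seed pair into several instruction-tuning records that
    bake the user's preference in as a system instruction."""
    records = []
    for seed in seeds:
        for variant in [seed.question, *generate_paraphrases(seed.question)]:
            records.append({
                "system": preference,
                "instruction": variant,
                "response": seed.answer,
            })
    return records

if __name__ == "__main__":
    # Write JSONL that a tuning job could consume.
    with open("personal_tuning_data.jsonl", "w") as f:
        for record in build_training_records(SEEDS, USER_PREFERENCE):
            f.write(json.dumps(record) + "\n")
```

In practice a few dozen records like these would feed a parameter-efficient tuning method (LoRA-style adapters, for example) rather than a full retraining run, which is what makes per-person adaptation tractable.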

Jonathan: Yeah.

Okay, we are getting to the end of the show. Again, I appreciate you putting up with our sort of wandering, meandering series of questions here. I think it's been great. Is there anything that you absolutely wanted to cover that we didn't ask about, that we didn't get to?

Anthony: I think that last point we started to discuss, like, we talked about OpenAI, we talked about, you know, open source and the open community.

I think a lot of the topics we've discussed here, well, I know what's kind of guiding my whole role in life right now, is that, you know, open communities are much better at identifying and solving challenges and, you know, advancing innovation to create capabilities, whether it be personal, functional, you know, personalization of AI or whatnot.

And, you know, the flip side of that, right, is the risk of being more open, having open models and open data sets at scale and so on. Yeah, there's always risk. But I think a lesson of open source is that the benefit of having open innovation and a lot of people, you know, with eyes on things and generating code against challenges and opportunities, way outweighs the risks.

And if we can just get past the sense that AI is going to, like, you know, end the world, which it won't, it definitely won't, then we can actually start to harness more people solving problems to make it better. So open source, open innovation, open communities. That's the way to go.

Jonathan: That's good stuff. Do you think we're in an AI bubble? This is something that I sort of think of on the business end of things. Everybody wants to put AI in stuff, and I'm sort of looking ahead here to the next months or years. I predict, I have a feeling, that the bubble is going to burst and AI is going to get less popular.

And, I sort of have the feeling that that's when it's really going to start being useful as a tool. When people stop trying to use it for everything. And then it just becomes another tool. It's sort of like the internet. After the dot com bubble burst, the internet did not go away. In fact, it's just, it's continued to grow and grow.

You know, we use it a little bit more reasonably. I don't know if that's true.

Anthony: Yeah, I think it's hard to argue there's no inflation in AI right now. But I'll point to one trend that I think shows that even if there's a little bit of a bursting, I don't think it's going to go into a winter. And to explain that, I'll go back to what IBM is very focused on, right, which is enterprise adoption of AI.

We've seen, and this is corroborated in a number of places... oh, my earbud fell out again. Can you still hear me?

Jonathan: Yeah, you're all right. You only need one of them.

Anthony: I'll operate with just one for now. Yeah, what we've seen is that enterprises have been very fast adopters and deployers of AI, not for every use case and not at, you know, full scale in many cases.

But unlike, you know, earlier technology and AI revolutions, call them, they're embracing, building, and deploying AI pretty rapidly, right? The statistics vary, but, you know, something like half or more of Fortune 500 companies have, you know, generative AI in production.

Now there's a lot more to do, but we've seen a really strong uptake, and when businesses, actually very conservative, risk-averse businesses, are ready to do that, that's to us a pretty big sign that, you know, this is an enduring piece of progress here, not just a bubble.

Jonathan: Excellent. I've got two questions that I'm, I'm required to ask you before we let you go.

I will get emails about it if we don't. And that is: what's your favorite text editor and scripting language?

Ha!

And this sort of assumes that you've done enough programming to have answers to these questions. That's not always true.

Anthony: I mean, look, you know, I'm a scientist.

I'll go back to my scientific roots, right? I'm ultimately a scientist, a physicist, actually. So, you know, other than the kind of, you know, computational domain-specific stuff, I like Python and I use Python for most things. And yeah, I guess just

Jonathan: do you have a preferred text editor?

Anthony: Preferred? Oh, yeah. I mean, I guess VS Code. I mean, you know, I'm not very exotic or all that opinionated here; I probably should be more so. But yeah, I'm sure you have much more opinionated guests about these things.

Jonathan: Sometimes. I would tell you it's amazing how many people that are out there doing the real work are either A, not very opinionated, I don't care whatever's on the computer, or B, lots and lots of people are starting to say VS Code.

It's becoming very popular.

Anthony: Okay. Alright. I can see that.

Jonathan: Yeah. Thank you so much for being here. I appreciate it. And again, thanks for putting up with our sort of meandering series of questions. But excellent, really good. I was glad to have you here.

Anthony: This was great, guys. I really, I had a lot of fun.

I'm glad you asked all those questions. They're great to answer. It's, you know, it's been great talking.

Jonathan: Yeah. Appreciate it. All right. Dan, what do you, what do you think?

Dan: I thought it was a great conversation. And yeah, like you just said, it was really good of Anthony to put up with some of our more sci-fi kind of future-gazing questions.

Yeah. Luckily I didn't get a chance to get into physics with him, because we found out he's a physicist, and I was like, oh, I want to ask about string theory and stuff like that, which would be way out of the scope of the show. But yeah, I thought it was a great discussion.

Really, really great guest. Very interesting. And on the whole thing about AI bubbles, I think he makes a really good point there: the amount of investment and infrastructure that a lot of companies, traditionally quite conservative companies, are putting into this shows that it's going to be around.

It's going to be in our futures, I think.

Jonathan: Yeah. Specifically on the AI bubble thing, I think actually the dot-com bubble is probably a good parallel, right? Because everybody went nuts over dot-com, and then the bubble burst. But boy, the internet sure didn't go away. People just got a little bit more retrospective about it, circumspect about it. They didn't want to spend quite as much money on it, but the internet now is enormously bigger than it was in the nineties during the dot-com bubble. And I imagine we're going to have the same thing with artificial intelligence and LLMs. We talked before the show.

We looked at the kind of presser that was sent to us before doing this, and we went, boy, I hope it's not just a marketing person. And oh my goodness, it was not just a marketing person, as far away from that as possible. And I'm tickled pink; the hour just flew by. And if Anthony is up for it, here in a few months we'll have him back.

Maybe after the OSI releases their definition of open source, we'll talk about that and get more into that side of things, and we'll stop geeking out over AI itself. But it was great, one of the best shows we've had for a long time. Dan, do you have anything you want to plug?

Dan: Not specifically, given that last week's guest was... well, I'm going to be going this weekend to OggCamp in Manchester. If by any chance people didn't hear last week's show, please go and listen to it, because it was excellent. And if you're in the UK and you can make it to Manchester, come along.

Please do come and join us in Manchester this coming weekend, the 12th and 13th of October. You can find out more at oggcamp.org, which is O G G C A M P dot O R G. Or actually, Simon was saying, wasn't he, that they've registered og.camp? I think so. Yeah, so maybe just try og.camp. I may even be out of date with my URLs.

You can do that, and you can find me and my Mastodon posts and all those sorts of things on danlynch.org. It's all embedded there.

Jonathan: Yeah, very good. One of the groups that I'm a part of has a UK Greater Manchester thread, inside of a topic, inside of Discord. And so I went there and I'm like, somebody needs to go and pitch, you know, give a talk about this thing at OggCamp.

And one person was like, I've got COVID, I probably can't go. So I'm trying to scare somebody up to go and give a talk. We'll see if that actually happens. All right, Dan, we appreciate you being here. As far as my stuff, I want to say, first off, we appreciate Hackaday being the home of Floss Weekly.

You can find my security column there; it goes live every Friday, so make sure to check that out. We've got the Untitled Linux Show still over at TWiT, and that records on Saturday afternoon and then goes live, oh, sometime in the next day or two after that, so make sure to tune in there as well. We appreciate everybody being here, those that caught us live and those that get us on the download, and we will see you next week on Floss Weekly.
