Transcript: JAMES BRIDLE on Modes of Intelligence /343


Ayana Young  Hello and welcome to For The Wild Podcast. I'm Ayana Young. Today we are speaking with James Bridle.

James Bridle  There's this incredible total communication and awareness going on all around us all the time at every single level of life. Intelligence is way, way more interesting than anything we could build in a box. But for some reason, we always seem to need to build the box first. We always seem to need to make these kinds of toy versions of things before suddenly, then we start to recognize that they're already all around us in the world.

Ayana Young  James Bridle is a writer, artist, and technologist. Their artworks have been commissioned by galleries and institutions and exhibited worldwide, their writing on literature, culture, and networks has appeared in magazines and newspapers, including Wired, The Atlantic, The New Statesman, The Guardian, and Financial Times. They are the author of New Dark Age (2018) and Ways of Being (2022.) And they wrote and presented New Ways of Seeing for BBC Radio Four in 2019. Their work can be found at www.jamesbridle.com.

James, thanks so much for joining us today. I've really been looking forward to this conversation.

James Bridle  So thanks very much for having me. Lovely to be here.

Ayana Young  I love that we both have birdsong in our background, although yours sounds like songbirds and mine is a raven, who's probably wanting to get into my compost. But yeah, I'm looking forward to jumping into AI and algorithms. And as we begin, I want to think through some of the insidious and invisible ways algorithms and AI shape our lives. So to start the conversation, I'm wondering how the goals and programs of these algorithms are tailored to a specific view of the world. And who and what are these algorithms serving?

James Bridle  Well, that's a pretty broad place to start, but not a bad one, in the sense that these are quite vague terms––AI, algorithms, things like this. But then they're always kind of intentionally vague, I think. When you hear them being deployed, there's always something missing in the conversation, some level of understanding or some level of specificity that really matters. But you know, the place we hear them deployed the most these days is mostly in relation to the big tech companies. Really, though, it's pretty much anything you do that involves touching a computer, and not just the obvious ones, like ATMs, or even the computer in your car, because most cars are computers these days. An algorithm just means a computer program; it's any bit of computing machinery, which is kind of throughout all these parts of our lives these days.

But mostly, what we hear about is the kind of stuff that's made by companies, because that's who makes most of the tech in the world around us. And so, of course, they're making those programs as part of profit systems––systems that, at some level, are trying to make a profit, make money, and that money comes largely out of us. Every time we use a computer system, there's an expectation that somewhere someone is benefiting from our use of it, that some profit is being extracted. And that means that so many of the systems in the world around us are built with this need to extract, to exploit, at some level, built into their very basic philosophy. They don't come from a place of cooperation or collaboration. They come explicitly from places of exploitation.

And most of us are kind of aware of that these days. We know that our use of things for free on the internet, search engines and so on, comes with a kind of cost to our privacy. But that logic is embedded in every aspect of our lives. It's not a sort of technology question, but a question of capitalism: you get money off at the grocery store if you use your loyalty card, right, which is just the same exchange of personal data for some kind of discount. And that logic has been so imbued into our everyday lives that we don't always notice when it extends into bigger things as well. And one of those places is now contemporary artificial intelligence.

And we can talk about what that means––I put very, very large scare quotes around AI at this point. But the AI that we all hear about at the moment is that made by these very large tech companies, whose position is the same: they're looking to make money out of this thing. And they're breeding a kind of intelligence that is based on this profit motive. And my feeling, before we go further into this, is that that kind of intelligence, or rather that idea of intelligence, is just this incredibly narrow one. We expect it from our interactions with capital in various forms, where these kinds of expectations of exploitation are built in. But I think it becomes a very different thing when we're interacting with things that claim to be intelligent but also have this intensely predatory basis built into them. That's the world that certain sectors are trying to construct at the moment, and that, I think, is very poorly understood.

Ayana Young  I want to go into a quote from your book Ways of Being, and you write, "That's what happens, it would seem when the development of AI is led primarily by venture-funded technology companies, the definition of intelligence, which is framed, endorsed and ultimately constructed in machines is a profit-seeking extractive one. This framing is then repeated in our books and films and the news media and the public imagination, in science fiction, tales of robot overlords, and all-powerful, irresistible algorithms until it comes to dominate our thinking and understanding. We seem incapable of imagining intelligence any other way," end quote. I'd love to hear how this control and development of AI by venture capitalists has hijacked our capacity to imagine intelligence beyond exponential growth.

James Bridle  Well, because they started with things long before intelligence. We seem very incapable of imagining society beyond a paradigm of exponential growth, so I think it's quite understandable how our vision of intelligence has been hijacked in that way. Where you see it coming out most strongly at the moment is in the areas where this thing that gets called AI by a certain sector of industry gets applied. It gets applied to things that they find interesting, but also, often, things that they think are easy, or of low value in certain ways. So what we had was a long period where AI was largely about games of various kinds: playing chess, playing Go, things like that. Then you get AI being applied to service industries and logistics, things that, again, are seen as low status or low skilled, driving actually being one of them, though it turns out to be much, much more complicated than they ever imagined, as most of these things do. And now the latest wave is hitting at the level of secretarial work, data work at the level of language, things like ChatGPT, but also, critically, I think, at the level of image generation. Because again you have this idea, from the perspective of technology companies, that this is something that must be easy, right? That for people who trade in algorithms and mathematics exclusively, that kind of visual creativity is also something easy, something low status, that can be automated in this way.

None of these things are easy. In fact, there are lots of really interesting things that happen when this gets applied to games: things get more complicated, things get more interesting. It turns out to be very difficult to apply to real-world things like driving; that's actually not been a huge success. And it's a totally different thing when it gets applied to writing, or to image making, to creation, to what we think of as these very human forms of creation. It's also now getting applied to things that were regarded as creative professions. And even though the product is still terrible, there's just the power and the glamour of these things, the fact that they get wrapped up in these strange technological terms that make them seem capable, which is still twisting our idea of what's important and what matters about these things on a very deep level. You get a kind of degradation of those forms, and you get an even lower status assigned to the kinds of work that these companies direct AI at, whether that's driving, increased automation of delivery and logistics services, taxi driving, all these kinds of things.

Ayana Young  Yeah, it's complicated. And there's another piece that you wrote, "The Great Distractor," where you say, quote, "To actually interfere with the money-making algorithm at the level required to actually protect children, democracy, and our sanity would mean reducing the scale of their business. For a company whose business is scale, which cannot operate except at scale, that presents an existential threat. Treating the problem would mean they would cease to exist," end quote. So yeah, I just want to riff on the issue of seeing profit as godlike, or as a guiding principle here, because if that's the basis of the project, you know, what are people not willing to do?

James Bridle  Yeah, I mean, it's worth putting that quote into a bit of context, I think. That was part of an ongoing body of work about contemporary media and about the way online video content works––very specifically, a lot of studies I've done of children's media online, things like YouTube for kids, the way small children use media, and the way that the media ecosystem––particularly things like YouTube––has been shaped around them. What you have happening there is what gets called the attention economy, or surveillance capitalism, which is a system for making money off getting people's attention, quite simply. The more eyeballs, in that terminology, that are on a thing, the more money will be made. And when that system is largely automated, really, really weird and often quite terrible stuff starts happening. I first encountered it really strongly, and wrote quite a lot about, what happens to kids' content. You get children's cartoons that are increasingly designed, partly by humans but also partly by the feedback loop of these profit machines, to get longer, to get stranger, to get more addictive for small children in very unhealthy ways. And they are sometimes also then susceptible to really nasty content, violent content, sexual content, being threaded into them. But the system at that point becomes so complex and large that no one really understands what's going on. It's very hard to intervene, and to intervene would mean shutting the system down.

Now, I think the system should mostly be shut down in that case. But we're in this very weird position now where technology, and the technological companies behind it, have such momentum that it's almost impossible to imagine that you would just shut them down. And this isn't just a problem for children's media, although it is a huge problem with children's media. It's a problem for all of us. It's well documented now that there's a kind of radicalization process that happens when people view content online that is ordered for them algorithmically, whether that's YouTube videos or the Facebook newsfeed. Because that system wants you to watch more, it ends up showing you more and more extreme content, and people get pushed into positions of very extreme radicality, whether that's the far right, religious extremism, or just absolutely insane conspiracy theories. The algorithm wants you to pay more attention, so it will feed you more extreme content, and your position will be moved to a more and more extreme place.
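To make the feedback loop Bridle describes concrete, here is a deliberately crude sketch in Python: a toy model of my own, not anything from his work or any real platform's code. The one assumption, flagged in the comments, is that slightly more extreme content holds attention slightly longer; an engagement-maximizing recommender then drifts steadily toward the extreme end of the catalog with no one intending it.

```python
import random

random.seed(0)

def watch_time(extremity):
    """Toy engagement model: slightly more extreme content holds
    attention slightly longer, plus noise. An assumption made for
    illustration, not a measured fact."""
    return extremity + random.gauss(0, 0.05)

# Viewer starts near mild content (extremity scored 0.0 to 1.0).
position = 0.1
for step in range(50):
    # Recommend five candidate items near the viewer's current taste...
    candidates = [min(1.0, max(0.0, position + random.uniform(-0.1, 0.1)))
                  for _ in range(5)]
    # ...and keep whichever one maximized engagement.
    position = max(candidates, key=watch_time)

print("final extremity:", round(position, 2))
# Optimizing watch time alone ratchets the recommendations
# toward 1.0: the drift is a by-product, not a goal.
```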

And my counter-example is always: imagine there was a shop on your local high street, or your main street, whatever it is, and 1 in 10 people who went in there came out believing in the supremacy of the white race. We would shut it down. It's just fundamentally something where we would say, there are all kinds of complex issues here, but we should stop this particular thing from being present within our society. We seem really incapable of doing that within the technological cultures that exist at the moment, because of this kind of weird glamour that technology has––this claim that people don't really understand it fully, that people are scared to speak about it fully, that it's somehow disconnected from our culture. And you see this happening now again. As I said, I've been writing about this extensively around media––children's cartoons, adult social media, whatever it is. You see it happening now with AI: you have what are not intelligent at all, but are very powerful computer systems, fed on the last couple of decades of all of our personal information, being unleashed into the world with what will be huge consequences. And what we actually have is the people building it going around the world telling us that the robot overlords are coming and we should worry about that, and not about the incredibly predatory forms of profit extraction that they're actually doing. It's the most extraordinary piece of misdirection that's been attempted in some time, and it's very blatant. And it seems to be working very well. But it's just such a strange thing that capitalism, in its current form, is so good at moving on to these new territories, as it always does, and we seem incapable of talking meaningfully or directly about it.

Ayana Young  That really concerns me––well, a lot of this concerns me. There's, of course, the media piece and the piece around impacting and influencing children, which to me is a real cultural issue, just everything that feeds into how we as humanity understand things and build our value systems, and the fact that AI is going that deep is really frightening. And then the other thing that maybe frightens me as much is how AI and algorithms are also now playing a role in extraction. There's another quote in your book Ways of Being that speaks to Repsol, which has this relationship with Google, and is putting advanced machine learning algorithms to work across networks of oil refineries, helping boost efficiency and output and things of that nature. So I just wanted to dive in a bit about how AI is being used to further extraction. You think about industrialization and how quickly that ramped up the rate of destruction of the planet, and I can only imagine what AI could now do in terms of the speed of extraction of oil.

James Bridle  Yeah, so what's happening within the fossil fuel economy at the moment is that you have this weird paradox. As we approach the limits of that economy, we've basically extracted everything that's relatively easy to extract, and what's left gets harder and harder to get. But as a result, it also gets more valuable, because we're completely failing to transition away from it. The world is still just as much in need of what's there, and so what's there keeps increasing in value even as it gets harder to reach. It becomes more rewarding to get it out, even though it's harder. But I have to push back a bit on the idea that this is really about AI and algorithms. Yes, of course, having more powerful computer systems does allow you to drill more efficiently, to do more fine-grained surveys, to identify deposits that would not have been so obvious before, etcetera, etcetera. It does allow for that, and at a much higher level than the techniques that came before. But really, this is just in line with a much broader trajectory of capitalism, which was always going to go and extract these things anyway; it just now has AI as a tool to do it. There'd be guys out there with chisels if they didn't have AI. In that sense, it really is just a tool that's being put to use. And I think it's really important, when we talk about algorithms and AI, that we don't do it in this abstract way, where we point to a computer and go, "Oh, the computer is doing the scary thing." It's very clear in cases like this that it is the industry that is doing this thing. It is the fossil fuel companies and their allies in the technology companies who are continuing to do work which is going to result in vast destruction of life across this planet. And that's just not a technological question. That's a social and political question. It's a question about our culture, and what we're prepared to do, and how much we're prepared to change. And really, for me, AI is so not interesting, particularly this kind of AI, which is just really powerful computers. It's interesting how powerful those computers are, but when you see what they're being put to use for, you realize that they are just a building block of something much more powerful, which is the profit motive and the selfishness and stupidity associated with it.

What that realization should do is shift our attention away from this glamour of AI and algorithms, to understand that technology really is just a tool. And it's a tool that we can make choices about, that we can have opinions about, that we can think about quite clearly, rather than getting lost in a kind of technological determinism. By technological determinism, I mean this feeling most of us have that technology just does what it does, irrespective of our desires. You know, we may not like what it's doing, but oh well, that's just what happens when you apply the logic. Well, it's not; it's rather the culture in which those technologies are produced. We have a culture of fossil fuel dependency, and we have a culture of seeing the earth as a resource for extraction. And really, as I said, it doesn't matter, in that culture, whether you use a teaspoon or an AI to extract the oil; people are going to go after that oil. It's the culture and thinking around this stuff that needs to change.

Ayana Young  No, I think that makes a lot of sense. And I'm sure that those who want to continue extracting will create more and more tools to support that extraction, whether it's AI or some other technological tool, moving into the future. But I also wanted to speak about different forms of intelligence. As we reckon with the reality that intelligence may be far more than our human-centric minds once believed, I'm interested in the multiple paths of intelligence that are possible. What might it look like to recognize artificial intelligence alongside more-than-human intelligence more broadly?

James Bridle  Yeah, this is important, and it's why I get frustrated with the very narrow vision of intelligence that's promoted by those who talk about AI a lot… because it's such a pathetically narrow way of seeing the world. Leaving aside all the hideous, actual damage that it does, it's also just this extraordinary failure of the imagination. I've been working with AI––again, this thing that gets called AI, in various forms––for a couple of decades. And there's always this constant set of questions to work out: what is interesting about this thing? How does it allow us to think things that we haven't thought before? A more interesting definition of intelligence is not "how does this allow us to do the same stupid-ass things we've been doing forever, just faster and harder," but rather, "how does it allow us to think new thoughts about the world?"

And one of the new thoughts about the world that AI allowed me to think was this slow and dawning realization that machine intelligence, whatever it is, is something different to human intelligence. And that realization comes, even without all the other interesting things I'll talk about in a moment, from just a direct engagement with the technology, from a totally rational, quite narrow, humanist perspective.

You know, for decades upon decades, since the 1940s, if not in some theorizing before that, computer scientists have been trying to build this thing called AI, expecting it to be like human intelligence, basically expecting to make something that acted in the world like a human being, or thought like a human being. And every time we try it out on all these different problems, it does stuff that we don't really expect. It behaves in very different ways.

You see it in everything from the way that it plays chess, to the way that it sorts information, to the way that it understands, categorizes, and acts upon the world, which is what most forms of intelligence do. It just does it in a radically different way to humans. And that's really fascinating. It means you can collaborate with it in interesting ways, if you treat it as a collaborator rather than just as a kind of device or slave. But at a higher level, it also says: well, hey, there are different ways of doing intelligence more generally. Even this thing that we created, that we thought was going to be like human intelligence, turns out to be something quite different. Intelligence is, in fact, just for starters, some kind of spectrum; there are different kinds of ways of doing it. And so I've come to believe quite fundamentally that the cultural purpose of artificial intelligence is really to be something that forces us to recognize that, within the Western rationalist enlightenment position––which is not by any necessity the majority position on this planet––we're bad at acknowledging the intelligence of nonhumans in any meaningful way. In fact, we've tended to restrict this term intelligence, in its standard definition, pretty much to humans.

And suddenly we find, even in this thing that we've built ourselves, something unhuman going on in terms of its intelligence. The straight-up first result of that experiment is: oh, okay, there's more than one way of doing intelligence. And if more than one, then multiple; then potentially infinite. My view on technologies in general is that, however much they're used or misused, they are always downstream of culture; they are products of the cultures that we have. And quite often they tell us something about our culture.

I spent most of my life, in fact, starting as a teenager, thinking too much about the internet, and thinking about the extent to which it is this incredible flowering of culture that no one really set out to build or think about. And yet it's kind of the greatest cultural invention since probably language itself. And likewise, I think our obsession with intelligence and AI is an often unconscious––mostly unconscious––but not dissimilar desire to manifest some aspect of this nonhuman intelligence in the world, so that we can feel closer to it, so that we can understand it better, so that it can essentially open our eyes to all of the other intelligences that exist in the world. It turns out that intelligence is way, way more interesting than anything we could build in a box. But for some reason, and this is something I have not figured out yet but am endlessly fascinated with, we always seem to need to build the box first. We always seem to need to make these kinds of toy versions of things before, suddenly, we start to recognize that they're already all around us in the world.

Ayana Young  Yeah, I'm with you on that. And to follow this thread of intelligence a bit more, I'm thinking about how, since AI was created by humans, can we see it as an extension of human intelligence? And with that, how has our limited understanding of intelligence come to shape AI in the vein of distinctively human intelligence?

James Bridle  I guess what I'd say to the idea of an extension of human intelligence is that human intelligence is itself an extension of something else, which is a much more generalized form of intelligence. The way that I've come to understand it, thinking about the intelligence of beings more generally––animals, plants, ecosystems, machines––is that human intelligence is one way of doing intelligence.

And a couple of the main things that I wrote about in the book "Ways of Being" are the fact that the more you treat other things as intelligent, the more you come to recognize that intelligence isn't something that just happens in the head. Right? It isn't something that happens only to humans, and it isn't something that just happens inside the head. It's embodied and it's relational, which means it's something that happens with your whole body, that's a result of your embodied experience of the world. There's no such thing as a brain in a jar; there just can't be. It makes no sense to be disconnected from the world in that way. And there's no such thing as an intelligence without anything else to connect to. So intelligence is also relational, and emerges from our encounters with other beings and other bodies in the world.

So yeah, we've always had this 'we.' And let's clarify that 'we': the Western scientific enlightenment view of intelligence has been essentially what humans do. You can look back through the last few hundred years at what has philosophically and scientifically shaped our idea of intelligence, and it always essentially boils down to what humans do; you can draw out these other little qualities of it, and there are all these kinds of interesting experiments in which we test other creatures to see to what extent they are intelligent, but the benchmark of what intelligence is, is always human. So the way in which we test other intelligence is always "how much are you like us." And that's an incredibly narrow view if you understand the world as an ecological system of which humans are only one part, and our intelligence as only one of many that have evolved on this planet, because of the particular ways in which we have evolved. The way we need to come to see and think about intelligence, crucially, is that it's not purely the domain of the human. And also not purely something that happens inside the head. We tend to think of intelligence as this kind of brain activity, this thing that just happens inside our skulls. But that's, again, a reductive way of seeing it. Intelligence is embodied. It's something that happens with our whole body, as a result of the kind of body plan that we have.

One of my favorite experiments, which really shows this to me, comes from work done with other apes, which tried to quantify the intelligence of other species. One of the standard tests is of the ability to think ahead, or make plans, or use tools. So a classic intelligence test on other species is to put some poor animal in a cage, give it a stick, place a little treat outside of the cage, and see what happens. And lots of apes, ourselves included, are quite good at this, and will quite quickly use the stick to grab the treat. But for a long time, gibbons in particular, and noticeably, were really bad at this test; they failed it all the time. And so there were decades of science basically treating gibbons as being dumber than other apes––which made no sense evolutionarily or socially, but they refused to do this test––until someone redesigned the experiment. They hung the sticks from the top of the gibbons' enclosure, and when they did this, the gibbons immediately reached out, grabbed a stick [munching sound], and got the treat. And in that moment, in the way that we structure the world, gibbons became intelligent. Of course, they were intelligent all along. But they're brachiators, which means they live in the trees. Along with various other kinds of bodily adaptations, their awareness and thinking are oriented upwards, because of their body plan, because of their sociality, the lives they lead. Their intelligence is just pointed in this slightly different direction. Of course they're smart; they're just thinking about other things, and meeting the world in a different way, because of their different bodies. And that's true for every other thing on this planet.

It's embodied, and also it's relational. So it's about this kind of encounter that we have with other beings and with other creatures. Understanding intelligence as being both embodied and relational has implications for us. For humans meeting other humans, for cultures meeting different human cultures. It has implications for our relationship with other nonhuman species of all kinds––animals, plants, ecosystems, as I say––and it also has implications for machine intelligence as well––that we start to think of artificial intelligence in particular not as just a different or lesser form of human intelligence, but really as another type of intelligence arising in the world that is as interesting as any other kind.

Ayana Young  Yeah, I would like to follow this thread a bit more and talk about more-than-human intelligence. In another quote from "Ways of Being," you write, "I began to realize that the forest was filled with a constant hum of unseen signs and unheard chatter. Decisions were being made. Agreements reached. Bargains made and broken," end quote. And of course, as a forest dweller, I love this quote; I felt very connected to and agreed with so much of that. But I'd just love to hear you describe some of the other forms of intelligence you've encountered in your research for this book, and generally within your relationship to the more-than-human world.

James Bridle  Now, the beautiful thing about starting to see the world this way is that you really begin to see it every place you look; in every kind of encounter you have, you see the ways in which intelligence in its various forms manifests. You've given the example of the forest, which is an example of the intelligence of trees––the way individual tree species communicate and sense their surroundings in all these marvelous ways, not just with others of their own species or with their kin, which they do, but also across species, in these networks with fungi and other beings as well. That's the holistic, ecological view of it.

You can also look at incredibly specific examples of intelligence, or kinds of thinking. So I think of something like the slime molds, which I read about a fair bit, which in one particular aspect of their behavior are very akin to machine intelligence, in that it's possible to work with slime molds in such a way that you can get them to essentially perform a type of mathematical operation.

Now, there's a thing called the traveling salesman problem, which is this really gnarly mathematical problem that simply asks: given six cities on a map, how do you visit each one exactly once in the shortest possible time? A very valuable problem for logistics companies, but a problem that humans and also computers are really terrible at, and one we face in various forms all the time. The mathematics of figuring out the best answer is really hard. And critically, it's what's called an exponentially hard problem: every time you add a city, it gets way, way, way harder.

But it turns out slime molds––one of the things slime molds do, you'll notice, is spread around in these little damp, wiggly patches on the forest floor, finding bits of food and finding the shortest route between them. It turns out they solve this problem; this thing that to us is a mathematical problem is, to them, just looking for food. And they're capable of solving it better than us or any of our greatest supercomputers. They're thinking through this problem in a way that we have, thus far, no access to at all. So it's a very specific form of intelligence.
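For a concrete sense of why each added city makes the problem "way, way, way harder," here is a minimal brute-force sketch in Python. It is an illustration only, with invented city coordinates, and it says nothing about how slime molds do it: with n cities there are (n-1)!/2 distinct round trips to compare, so the work explodes factorially.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical coordinates for six "cities" (made up for illustration).
cities = [(0, 0), (2, 1), (5, 3), (1, 4), (6, 0), (3, 5)]

def tour_length(order):
    """Total length of a round trip visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Brute force: fix city 0 as the start and try every ordering of the rest.
best = min(permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
print("shortest tour:", (0,) + best,
      "length:", round(tour_length((0,) + best), 2))

# The combinatorial explosion the speaker alludes to:
for n in (6, 10, 15, 20):
    print(n, "cities ->", factorial(n - 1) // 2, "distinct tours")
```

At six cities that is 60 tours; at twenty it is already over 60 quadrillion, which is why exact brute force stops being an option almost immediately.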

And then, thinking of intelligence relationally, as just an awareness of the world and an acting upon it: one of the most beautiful examples I've come across recently. I wrote quite a lot in the book about the intelligence of honeybees, for example, who are reasonably well known these days for being brilliant communicators, but also political decision-makers. Honeybees collect information about the world around their nests, particularly the location of nectar sources; then they return to their hives and do this amazing dance, the waggle dance, to communicate the location of those nectar sources. They do this little running-around-in-circles, wiggling-their-bums dance, and that tells the other bees how far to fly, and at what angle to the sun, in order to find the nectar. And they do this with nesting sites as well. When the hive wants to move, honeybees will send out scouts to find new nesting sites, and then they'll bring that information back and do the dance. But then multiple bees will dance different locations, in a kind of debate, while the different locations are checked out, until gradually, through a process of essentially direct democracy, a consensus is reached and the whole hive goes off to one of the new locations. It's an incredible job of collective information synthesis and consensus building. And the hive's hundreds, sometimes thousands, of bees have been likened by some to the way in which our neurons––really, our whole bodies––synthesize information. This kind of mass information synthesis is a very hard problem, but one which evolution has solved in all of these different ways, just as it appears to have solved the problem of intelligence––well, not the problem of intelligence, the opportunity of intelligence, perhaps––in all these radically different ways, in different kinds of embodied life forms.
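The nest-site decision Bridle describes has the shape of an algorithm, and it has been modeled computationally many times. Here is a deliberately tiny toy model in Python, my own sketch rather than anything from the book, with made-up site names and qualities: scouts advertising better sites keep dancing longer, dancers recruit the uncommitted, and the swarm converges on the best option.

```python
import random
from collections import Counter

random.seed(1)

# Candidate nest sites with hypothetical quality scores in (0, 1].
sites = {"hollow oak": 0.9, "rock crevice": 0.6, "old hive box": 0.4}

# 100 scouts start uncommitted (None = not dancing for any site).
scouts = [None] * 100

for round_ in range(30):
    for i, choice in enumerate(scouts):
        if choice is None:
            dancers = [s for s in scouts if s is not None]
            if dancers and random.random() < 0.7:
                scouts[i] = random.choice(dancers)   # recruited by a dance
            else:
                scouts[i] = random.choice(list(sites))  # independent discovery
        else:
            # Dance persistence scales with site quality: bees advertising
            # poorer sites give up sooner and rejoin the uncommitted pool.
            if random.random() > sites[choice]:
                scouts[i] = None

print(Counter(s for s in scouts if s is not None))
# Quality-weighted persistence plus recruitment is enough for the
# best site to dominate the dance floor: consensus without a leader.
```

The design point is that no individual bee compares all the sites; the comparison is done by the population dynamics, which is exactly the "collective information synthesis" in the passage above.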

But I was just reading the other day something that is in the book as well, that I just can't stop thinking about, which is that it's not just the bees that are out there looking around, being smart, gathering information, aware of the world around them. The flowers are too. Recent research has shown that flowering plants––perhaps in general, certainly many species of flowers––individually perceive the vibrations of the wings of their pollinators, usually honeybees, but also moths for some species. They perceive the specific frequency of those pollinators, and they change their behavior as a result. Within a few seconds or a few minutes of being vibrated at this frequency––of hearing the bee––the flower increases the sweetness of its nectar in response, in order to attract that first pollinator, and maybe repeat visits from others. The flower hears the bee. There's this incredible total communication and awareness going on all around us, all the time, at every single level of life, most of which we are radically unaware of, and may remain unaware of forever, but it nevertheless exists.

Ayana Young  Those are such beautiful examples, thank you for sharing them. I also really appreciate how you speak of the networks of intelligence and the myth of the individual. And I think there's so much there in terms of what you've written about humility and superiority––and, yeah, just the myth of independence. So I'd love it if you could speak to some of these large themes that you talk about in one of your pieces, "What is Our Relationship with Alien Consciousness," and other writing that you've shared.

James Bridle  The piece you're referring to is really one of the first times I stated plainly––or tried to write something and realized what I was saying––this moment of realization that artificial intelligence was a kind of gateway to thinking about other intelligences more broadly. The dominant tendency of science and Western enlightenment thought is to narrow things down: the main result of enlightenment science is that we've tended to understand the world by breaking it apart, breaking it down into smaller and smaller pieces, splitting things apart and separating them from one another. And that's largely the experience of the individual in modernity as well––the experience of atomization and alienation that we feel, not just from the world but from one another. That is a product of modernity in its various forms, and the scientific method has done the same to knowledge, splitting it into all these discrete pieces. And one of the things I find most characteristic of the newer ecological sciences is that they're largely about putting these things back together again.

Botany is a good example, where we've always treated plants in particular as essentially tiny machines, cutting them up into smaller pieces until we get down to the cells, removing the leaves, the flowers, the stem, the roots, seeing these as mechanistic pieces rather than as wholes, as whole organisms with their own behaviors and desires and much else. So much of the work, I think, is putting these wholes back together, putting organisms back together, and then starting to put systems, ecosystems, back together. And that really applies to us as well. The political myth of the selfish individual is tightly tied to the idea of the intact bodily individual, which is not scientifically sensible. There is no such thing as an individual at the physical level. We are made utterly and totally out of everything else, whether that's the biome that keeps us alive––the two kilograms or so of other organisms that live in our gut, on our skin, and on the surface of our bodies, that are utterly intrinsic to our health––washed, cleansed, antibiotic-ed of them, we would die. We are living communities of beings, of which the human part of us is not the whole, and is actually, at the limit, completely indistinguishable from the rest. Your ability to solve complex mathematical tasks is affected by the health of your gut flora. If you change the makeup of the bacteria in your gut, your brain will operate differently. So there's no clear boundary there. And that goes all the way down into our basic cells.

And most of our DNA has come to us at various points from other species. We're the result of DNA being written and cross-written for millennia, and not just through the systems of linear descent that we've all been taught in school––the Darwinian atomization of the genetic line, this belief that we are just a splicing of our direct parents. In fact, our genetic history, and thus our bodies, our beings, are far more complicated than that. We're the product of all kinds of strange hybridizations and unexpected encounters that go back throughout the entirety of our evolutionary history. And once you start to knock away at the basis of that evolutionary individualism, you have to knock away at the political and social individual as well. You have to start seeing these things as being ecological to the root, composed of multiples at all levels, and you start to float free of a lot of the assumptions that we make about our bodies and about ourselves as a species, about our superiority and our relationship to the rest of the world. And I think it's a really critical realization to come to, this series of realizations about the dependency of each of us on other members of our species and other members of this planet. A really fundamental realization. I think it's also essentially critical to our survival, and it makes a kind of rethinking of our place and relations absolutely necessary. Once we understand that we are the product of those extraordinary processes over billions of years, it makes almost everything that we're doing right now almost completely nonsensical.

Ayana Young  That's a perfect way to say it. I often feel like the ecocidal decisions we're making right now are a type of insanity. They truly don't make sense for our survival, or even our fulfillment. I don't think we're happier for it, or more well, of course. So yeah, I appreciate you speaking to that.

And another topic that I found really interesting that you write about is the ecology of technology. In your book Ways of Being, you say, quote, "An ecology of technology then is concerned with the interrelationships between technology and the world, its meaning, and materiality, its impacts and uses, beyond the everyday deterministic fact of its own existence," end quote. So I'd love to hear from you, why is it important to contextualize technology in this way, especially as new forms of technology become more and more embedded within our daily lives and rituals?

James Bridle  It started with trying to better account for my own use of technology––in particular, how it related to the planet. Something I've thought about for a long time is the way in which, when things become, to use a funny word here for lack of a better one, technologized––when they slip into computer programs and behind screens––it's one of the ultimate abstractions we have from the world. You know, if you're working with physical materials in various ways, they have a more obvious connection to the world around them. If you're making something out of wood, or you're throwing a piece of plastic into the bin, there's an opportunity there, at least, to understand how that relationship sits within various kinds of systems and networks of use, appropriation, and waste, or whatever it is. But that becomes so much more abstract when you do it through a screen and through technology. You and I are talking to each other now across vast distances, across pretty much half the planet, and that whole distance is spanned by material connections between us: radio waves connecting my computer to a router, which then goes through some kind of little wire into the ground, running probably from the island I am on to the mainland over there to join a bigger cable. Along that journey it will pass through large buildings filled with more computers that are running hot, that require air conditioning to run, that require local access to resources. I can't possibly account for all of the energy use and material that's being mobilized just to carry our voices to one another at this particular moment. And then, of course, it is carried out to whoever, somewhere, is hopefully listening to this: more materiality, more stuff that's been mined out of the Earth––the plastic, the lithium, and all the various substances that go into the stuff around us.

And so on one level, technology can be this thing that estranges us in all these kinds of ways, socially and individually, as well as environmentally. And mostly what I'm asking for in that moment is that we choose to see it instead as something that can actually be part of creating awareness––just as AI can be this thing that makes us more aware of other intelligences rather than blinding us to them. An attentiveness to the materiality of technologies, and much else, allows us to actually reconnect in certain ways, to think of our place within these systems, and therefore to change our behavior and thoughts around them, and to think better about what their place in our politics or thinking actually is. Not to disregard this huge part of our culture and politics, but to see it as just another domain in which we have choice––in which we have what I increasingly call agency.

One of the things I've studied quite a lot over time is the way in which technology broadly disempowers people, despite the apparent power it gives, because it takes so much control of the world around us out of our hands––out of most of our hands. Whether that's automated systems that are doing our jobs, or determining our social position, or ordering our social lives, so much of that work is done completely invisibly to us. And the idea is always that that makes our lives easier. But of course, what it's doing is disempowering us. It's taking potential choices out of our lives in various ways––choices over the kind of information we consume, but also the kinds of labor relations and personal relationships that disappear from us with those decisions. And the result of that, I feel very strongly, is the kind of social world that we have now. It's always been the stated and unstated belief––but mostly stated––that these kinds of communications technologies that have overtaken the globe in the last 100 years would produce some kind of magical consensus, right, that we'd all come together around new forms of truth and ability and power and experience to produce a better world based on these technologies. That hasn't happened. In fact, what we see broadly, culturally, around the globe is increased fundamentalism, increased polarization. There's a complete failure to reach any kind of accord. Most of the communication that happens through these platforms is incredibly divisive, and brews a kind of anger that, personally, I feel is based on fear and on the real sense of disempowerment that they produce. There's a direct connection between the complexity and the opacity of these technologies that we use every day and the general sense of fear, shading into anger and rage, as the dominant tenor of our times. And I see that very much in our responses to planetary crisis as well. The planetary system, as we are now aware of it––again, through technology––is so extraordinarily complex, and so obviously, in a really hideous, terrifying sense, in chaos and danger, but most of us have no way of responding to that in any meaningful way. And so again, the result is this fear, shading into anger, and primarily helplessness in the face of vast global change. Everyone knows something terrible is occurring, and nobody knows what we can possibly do about it. We're all frozen in place by this climate trauma, this climate anxiety, and the realization of what's going on.

And so now, mostly, I'm concerned with these practices of drawing connections––of speaking about the potential for so much of the stuff around us, whether technological or ecological, to be considered differently, to make us more capable of taking action in some way. Not as a process of revelation, "just look how bad the situation is," but by pointing out the actual mechanical basis of it, whether that's in our technology or in our relationship to other beings. We are capable of operating on and with the world around us at the most amazing level, whether it's through the use of computers... So in my computational work, I often teach people to code at really simple levels, not as a chance to get a job, but so that they don't feel so disempowered by the technology that they use every single day. And likewise, in ecological work, one of the things I do is teach people to wire up solar panels, so that they don't feel that their relation to the climate is so completely, radically alienated. Or I talk about going out and actually talking to the trees, or getting to know the bees, or having these kinds of direct relationships, which shift us from a position of total helplessness and total fear into one of awareness and relationship and agency––one that actually allows us to have a meaningful relationship with the world once again, and therefore, perhaps, to start to perform the kinds of shifts in consciousness and action that are so necessary.

Ayana Young  I'm so happy you spoke to all of those things, because I was definitely thinking about how social media dysregulates us, and also how AI takes away our ability to act. And that was where I wanted to go with this next question. You had brought up how much of this overculture distracts us and guides our attention in really specific ways. There's something you wrote titled "Air Pollution Rots Our Brains––Is that why we don't do anything about it?"––that's the title––in which you write, "Multiple studies have shown that the way in which social media regulates our information intake for profit leaves us more divided, less politically aware, and increasingly prejudiced and violent," end quote. And I wonder, on a bigger level: we are, in so many ways, being called to stand and fight for our loved ones, for the health of our loved ones, for ourselves, for the land we live on, for our water. Yet we make decisions, or allow decisions to be made, without a struggle, without resisting, even when it comes to the poisoning of our own water. It's really interesting, on such a basic level, how distracted, how almost paused in our own tracks, we are about things that are seemingly very clear, that we should want to stand up for. So I guess my question is: how do these algorithms function in making us less capable of action?

James Bridle  Again, I wouldn't reduce it just to the algorithms, but many of these systems are built very much with the intention of distracting us. They don't really care what they're distracting us from. I don't think every single person who builds some stupid mobile game is actively trying to hasten the destruction of the planet because it means we pay attention to this and not that. But in part that is the result, because we're very bad at choosing where we direct our attention. And that's not really a modern problem. That's essentially the central lesson of three major world religions throughout history; any kind of major spiritual practice is about paying attention to the here and now in various forms. Attention is this incredible thing––a superpower when you choose where to direct it. But most of us don't have that choice, because we lack... it's so hard to maintain an awareness of the ways in which we are being distracted.

But also, I mean, really, really critically, we're also traumatized. We're living under circumstances that are almost totally unbearable. Most of us are in the position of struggling every day for our daily survival under capitalism, in incredibly precarious situations where we can barely think how we might change our lives. We're told that the situation of the world is in some way our fault, that it's due to our individual behaviors that the world is the way it is. And yet there's no way in which the kinds of actions most of us are capable of taking in the present moment will have any effect on that whatsoever. So we're faced with the realization that we are genuinely helpless.

And we're also traumatized, deeply and psychologically, by the level at which things are happening. The most damaging thing that can happen to anyone, psychologically, is to be that level of helpless. And what it does is what trauma does, which is that it freezes us in place. It makes us completely incapable of changing our situation. At the highest level, you see this in things like PTSD, which forms around very specific, deep forms of direct trauma, where people will continue to relive and reenact the moment of trauma and be unable to break out of that cycle. We're all in similar cycles, because of the trauma of everyday life, and also the very specific trauma of realizing the existential danger that our biosphere faces in the present moment. We're all traumatized by that realization. And we can talk about it all the time, and we can blame various factors––and we certainly can and should blame certain factors, like the oil companies, the biggest contributors to this, who do far, far more than any individual––but none of that really matters. We all know how bad the situation is. What matters is essentially treating our own trauma; in that sense, reconnecting ourselves to the world around us in really meaningful ways is a prerequisite to any kind of change that might occur.

Earlier on, we were talking about the individualism that is a result of algorithmic and social, political, and corporate systems, and also, in part, of just our embodiment as humans, and the power of overcoming that in various ways. One of the people I return to quite a lot is Gregory Bateson. He was very clear––and this is way back in the 1960s and 70s––that the environmental crises, as they would be termed at the time, things like the pollution of Lake Erie, were a psychological crisis, a mental crisis, a kind of sickness of society that was infecting the world in this way. He wrote extensively about the fact that we'd driven Lake Erie mad through our own kind of madness... that we were in what he called an eco-mental relationship with the environment around us. That continues. And so part of the work is understanding how we heal our own relationships, which is how we heal the world. It's not through recycling, or trying to sustain in various ways the terrible, terrible social situation that we find ourselves in at present. It is always going to be about a radical, radical shift in the nature of work and the nature of understanding, and ultimately in the nature of consciousness––one that happens at the practical level but also at the spiritual level of awareness as well.

Evan Tenenbaum  Thanks for listening to For The Wild. The music you heard today is by Memotone. For The Wild is created by Ayana Young, Erica Ekrem, Julia Jackson, Jackson Kroopf, José Alejandro Rivera, and Evan Tenenbaum.