Made in our Image

Preacher

Stephen Driscoll

Date
May 12, 2026
Time
18:30

Transcription

Disclaimer: this is an automatically generated machine transcription - there may be small errors or mistranscriptions. Please refer to the original audio if you are in any doubt.

[0:00] It's great to see you all. Thanks very much. Lovely to see you all, human faces and bodies and all that kind of thing! Hopefully some of you had a chance to meet Steve. I'll get him up in just a second; I haven't prepared him or anything, but we'll just get to know him a little bit and then I'll hand over to him.

[0:26] I think Steve's going to talk for about 30 or 40 minutes on the topic of AI: what does the Bible teach about AI? Just checking. Then we'll have a chance for questions, and we can pepper him with questions, grill him with questions, for another 30 or 40 minutes if we like, I think. Does that sound good? Sure.

[0:50] Yeah, he's very casual, comfortable. He's not going to use AI to answer questions, I think. Go for it, right? If ChatGPT has any questions for Steve, I'm sure we can ask him; we can use it. So why don't I pray for a good evening and then I'll get Steve up.

[1:08] Loving Father God, we thank you for gathering us here this evening and we pray that you give us, help us to be engaged, help us to be clear in our thinking, help Steve as well to deliver the truth and with clarity about what your word does teach us and how your word prepares us for this new age of AI that we're stepping into.

[1:30] We thank you that your word gives us all that we need for life and godliness and we do just pray that you'd help us to trust you and your sovereignty and your good plans for us in Christ as we face the future and all the unknowns of the middle before the final great and glorious end that will never end and we'll get to be with you and with each other forever.

[1:53] We just pray that you'd help us to be edified this evening and encouraged and to be more strengthened in our, in our Christian faith and our journey with you. And we pray these things in Jesus name. Amen.

[2:07] All right, Steve. So come up here. Let's just get to know you. So first of all, where are you from? Maybe I shouldn't ask more than that. Where are you from?

[2:20] Where am I from? Starting in the womb? I was born. I was born. Can you stop asking questions? Be quiet. I was born in a city to the south of here called Sydney.

[2:33] My dad worked at a university and we lived just a few minutes away from the university. I grew up there. That's where I'm from. Australia.

[2:43] Yeah. Yeah. And now you're not from that city. I'm still from that city. I don't live there anymore. Yeah. You didn't come from there recently. You're a terrible interviewer.

[2:57] Six years ago, I left Sydney. I was going to take over. I left Sydney and I moved to Canberra. I work at the Australian National University, which is one of the unis in Canberra.

[3:08] And we're trying to minister to the university students there. Yeah. So you're working with our good friends in the AFES network. Yeah. So what does your kind of day to day or your week look like usually?

[3:20] Yeah. Good question. There's a certain number of students that I would catch up with one on one. There's, we have a Monday night theology and training and courses night and we do different topics.

[3:32] And then we train people in practical skills like Bible reading and sharing their faith and all sorts of stuff. We put on a talk every week, not trying to replace the church, but just trying to have a nice talk that maybe people will bring their friends to.

[3:47] It's really public. It's in the middle of, it's in an open location. We do mission on campus and then we run heaps of camps and things and we're trying to train and deepen people in their biblical thinking, but also to take the opportunity to reach people on the campus.

[4:04] Yeah. Awesome. Yeah. And you're married with, I heard, three kids. Yeah. Initially, I heard only two and that's what we advertised, but you've had another one.

[4:14] Yeah, that's right. Yeah, yeah. Still going. Yes. When I wrote the book, I had two kids, and so that's gone into Google. And now, for all of time, there's been this snapshot that Stephen Driscoll has got two kids.

[4:25] So if we ask ChatGPT how many kids Stephen Driscoll has got, it says two kids. Yeah, it'd be convinced that I have two, but I'm convinced that I have three. My youngest is a one-year-old and I've got a three-year-old girl and a six-year-old boy.

[4:37] Yeah, cool. And you go to church? Yeah. Do you? Regularly? Yeah. Yeah, regularly. Every week when I'm not, you know, away. Just checking. And, but it's an FIEC church down in Canberra, right?

[4:50] Yeah. You go to Crossroads with the saints down there? That's right. Yeah. What sort of church is your church, if you don't mind me asking? We are an FIEC church as well. Oh, are you? Oh, cool. I didn't know that. That's great. Well, we didn't talk.

[5:00] Yeah. So I go to Crossroads, which is an FIEC church in Canberra. So if we're over down there, we could visit you. Yeah. And check if you're going. Sure. Good to know.

[5:13] Is this what like churches are? Should I ask you any more questions? I'm slightly angry. What got you into AI? Yeah.

[5:25] Hey, I'm just thinking on the fly with the only brain I've got. I'm not made of silicon here. But what got you into AI? Why are you so interested in this topic? Yeah. Oh, well, I'm not that interested in the topic.

[5:36] But people keep asking me to talk about it. Well, really, sell it to us, mate. Sell it to us. Neither are we. No, go on. I'm interested in, I love teaching the Bible. And, you know, most of my job is going through Galatians and going through, you know, Exodus at the moment and preaching the Bible.

[5:53] But one thing we try and do is we try and think biblically about different issues in the world, mainly on our Monday night time. And all sorts of issues just keep emerging, don't they? And we have new issues in five years and all that.

[6:04] And I'm convinced that Christians often undervalue the wisdom in the Bible. And they just go, you know, great to listen to secular voices and all of that. But they don't actually go, how does my Bible speak into this new issue that's arisen?

[6:18] And train themselves over time to get really good at making those connections. So I just thought, oh, you know, how does the Bible speak into this issue of AI? And I came up with some thoughts. It's the sort of thing we will do in a training session with our students.

[6:31] And then I was like, oh, you know, maybe there's a book idea in that. And I sent it off and then kind of it went from there. And I've now done 65 AI talks. And, you know, but I'm not obsessed with the topic.

[6:44] It's an important topic. There's a lot of important topics in the world. And this is just one of them. Yeah, that's really cool. All right. Well, look, I'll hand over to you. Cool. All right. So take us away. All right. How about I pray?

[6:56] Dear God, I pray that you would help us to think well about this important arising technology and that we would get a bit of practice thinking biblically, seeing how much wisdom there is in the Bible.

[7:09] Amen. Just for myself, how many people were at the lunchtime talk at the University of Queensland today? Is it just one? All right.

[7:19] Because it's going to be very similar. So I just want to know how self-conscious to feel. But all right. If it's only one, I'll say pretty much the same stuff I said at the lunchtime talk, which is I'm going to start by saying, how far are you from plastic?

[7:33] Are you 10 meters, 1 meter, 10 centimeters? I believe these chairs are plastic. Are they not? No. All right. So all of us are within a meter, I would think, of plastic.

[7:44] The revolution in AI is a lot like the revolution in plastic that occurred earlier in the 20th century. In 1930, if you wanted to go and see plastic for some reason, you would have had to go to a research laboratory at a university somewhere.

[8:02] But by 1950, plastic was just everywhere. And so they did a soil analysis at the top of Mount Everest, which is the highest place on Earth.

[8:13] And what did they find in the soil at the very top? They found microplastics. They did a soil analysis in the Mariana Trench, the deepest place on Earth. What did they find? Microplastics.

[8:24] There's not a square meter on planet Earth, I would think, that doesn't have plastic to some degree or other. It went from being absolutely nowhere to absolutely everywhere.

[8:38] And we're all within a meter of it right now, which is crazy, isn't it? But various technologies have done this. They're nowhere, and no one's even heard of them, and if you try and talk about them, people are like, what are you on about?

[8:50] What? That's silly. And then a few years later, they're absolutely everywhere. The smartphone, Wi-Fi, credit cards, QR codes, right?

[9:02] Like, for a while there, if someone had said "QR code" to you 10 years ago, you would have been like, I don't know what that is. Now they're everywhere, right? And these technologies, when they come through, have an enormous practical impact.

[9:17] Just think of all the ways that the smartphone has impacted you practically. They impact the way you do work. They impact the way you learn your education.

[9:29] They impact your family. They impact parenting. They impact church. They impact faith. They impact your mental health. They impact all these really practical things, these technologies that come through.

[9:41] So what's the next big technological revolution that's coming through? I think it's pretty clear in 2026, if it wasn't already clear before, I think it's pretty clear that the next big technological revolution is going to be artificial intelligence.

[9:57] In fact, it's probably already arrived, hasn't it? I mean, has it arrived in your workplace? Yep. And everyone's stoked about that. Has it arrived in the schools?

[10:08] Are the kids using it? Yeah, it's certainly arrived in the universities, I can tell you, that they've been redesigning the entire, you know, assessment processes at the universities. Has it arrived in your Bible study groups?

[10:20] That someone's, oh, I don't know what that verse is, let me see what ChatGPT thinks, right? It's arrived, hasn't it? It's all around us. Nobody check ChatGPT while I'm talking tonight, all right?

[10:34] But we need to start thinking about AI. It's too important not to think about. But more than that, we need to start thinking about it biblically, right? Plenty of people in our society are thinking about this topic.

[10:46] Anyone who's got it in the workplace has probably been to a seminar or a convention or heard a podcast. I mean, everyone's thinking about the topic of artificial intelligence. But we need to think about it biblically.

[10:59] The Bible gives us so much wisdom and so much help to think about even such a modern topic as artificial intelligence. It gives us a framework. It gives us an explanation of why we exist.

[11:12] It tells us what we're here for. It tells us what right is and what wrong is. Other people in our society are muddling through a technological transformation right now with very little help.

[11:24] But we've got all this. We've got all this health and the riches of the Bible. Now, for the sake of time today, we're mainly going to draw on the start of the Bible and the end of the Bible.

[11:35] We're going to look at the doctrine of creation. And we're going to look at the doctrine of new creation. And we're going to notice just like a couple of different things in each that will help us to think well about the issue of artificial intelligence.

[11:50] And I'll get to that in a minute. But before we get into the Bible and we look at creation, we look at new creation. Some of us might still be thinking, what exactly is AI?

[12:00] What is it? How does it work? What's going on here with this new technology? So in four quick points, here's my best attempts to give you a slightly better understanding of AI.

[12:13] If you're in computing already, this is going to be too basic. But here's my four key points. Point one that I think is helpful for us to know is that modern artificial intelligence is inspired by the brain.

[12:27] Modern AI runs on neural networks. Even the name should make you think of the brain. The idea is you take aspects of how a brain works and then you create the closest emulation that you can in a digital form.

[12:45] Geoffrey Hinton is often called the godfather of AI and he says, I've always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to a human brain.

[13:00] All right. Key point two. Just trying to get our heads around what this thing is. Modern AI is trained more than coded. Neural networks are learning machines. You feed training data into them and they actually assemble themselves.

[13:13] They figure out how strong the connections are going to be between the different parameters by the rules of calculus. You can have a generic learning system and feed in training data on almost any field.

[13:26] As long as you've got the data, the same learning system could learn to drive a car. Or it could be taught to fold proteins, or translate languages, or write your Bible studies for you.

[13:37] No one's done that, have you? I have. Oh, no. No. Well, we'll talk about that later. Just checking. Yeah. Yeah. Of course.
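The "trained more than coded" point above can be sketched in a few lines of code. This is a deliberately tiny illustration, a single artificial "neuron" rather than a real neural network, and the data, learning rate, and hidden rule are all invented for the example. The key thing to notice is that nobody writes the rule y = 2x + 1 into the program; it is present only in the training data, and the parameters find it themselves by the rules of calculus.

```python
# A toy "learning system": one weight and one bias, adjusted by calculus.
# The rule y = 2x + 1 is never coded in; it lives only in the training
# data, and the connection strengths assemble themselves to match it.
# (All numbers here are invented for illustration.)

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # examples of the hidden rule

w, b = 0.0, 0.0   # connection strengths start out knowing nothing
lr = 0.01         # learning rate: how big each nudge is

for _ in range(2000):                 # pass over the examples many times
    for x, target in data:
        pred = w * x + b              # the system's prediction
        err = pred - target           # how wrong it was
        # Derivatives of the squared error say which way to nudge:
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))       # learned parameters: close to 2 and 1
```

The same loop, scaled up to billions of parameters and trained on text rather than number pairs, is a fair cartoon of how a modern model's connections are figured out rather than hand-written.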

[13:48] All right. Key point three. Modern artificial intelligence is generally trained by making predictions. It's not the whole story, but it's at least part of the story. A model learns by making predictions and then calculating what connections, what pathways led to successful predictions and which ones didn't.

[14:05] If a language model was trained by working through a whole lot of language, it would make predictions about what's coming next.

[14:15] It would build up a conceptual understanding of different things. In week one, it might learn something basic, like that English words average five letters. It might learn how the comma works. Maybe a month later, it keeps training and it learns to make functional sentences.

[14:28] Then maybe it learns tone, humor, logic, sarcasm. Do you have that in Queensland? Do you have sarcasm? Okay. Sorry. It's late at night.

[14:40] But eventually it might learn even more deeply hidden things, like worldviews and logic. It might learn that human beings often don't say what they mean. And it would learn about our vices and our virtues.

[14:53] It would learn about human language, but therefore it would learn about human nature. And it would learn about fallenness and sin and all the different ways that we deceive and manipulate people. It's trained by prediction.

[15:06] Now in 2026, that's really just the first layer of the training process. But it's still worth saying. It begins as a mimic. It begins by learning to imitate what human beings would tend to say.
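The learn-by-prediction idea can be sketched with the simplest possible "language model": one that just counts which character tends to follow which, then always predicts the most common continuation. The corpus here is made up for the example, and real models predict from vastly more context and don't simply take the top guess, but the begins-as-a-mimic point survives even at this scale.

```python
from collections import Counter, defaultdict

# A made-up training corpus (invented for illustration).
corpus = "the cat sat on the mat. the dog sat on the log. the cat ran."

# "Training" = counting what actually comes next after each character.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(ch):
    """Predict the most frequent next character seen after ch in training."""
    return counts[ch].most_common(1)[0][0]

def mimic(start, n):
    """Generate n characters by always taking the top prediction."""
    out = start
    for _ in range(n):
        out += predict_next(out[-1])
    return out

# Purely from prediction statistics, the model has learned that 'h'
# usually follows 't' and 'e' always follows 'h' in this corpus.
print(mimic("t", 3))  # → "the "
```

Swap characters for word fragments and counting for the gradient-descent loop above, and you have a rough picture of the first layer of training the talk describes: imitation of what the data would tend to say next.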

[15:19] At least these large language models do. Key point four, modern AI is a general purpose technology. This isn't a technology that just has one or two use cases. They're making intelligence and knowledge cheap.

[15:31] And just think of how many use cases there are for intelligence and knowledge. I mean, this is a really, really general technology. Don't you guess. But can anyone guess what was the top use case of ChatGPT in 2025?

[15:46] Does anyone know? What about you? Oh, you weren't at the talk, were you? No. Have you been told the answer? No. Okay, this is a genuine guess. All right, I'll take it on faith.

[15:58] Is it making stupid enemies? No. That's good. Anyone else got a guess? The top use case. Sorry? Making videos.

[16:11] Making videos, yep. I was going to say therapy. Oh. How did you know that? Intuition. Okay, next slide. The top use case of ChatGPT in 2025 was therapy and companionship.

[16:25] Well done. You're the first person to ever guess that. Yeah. No one ever says that. People say coding. People say writing emails. I don't know. But therapy and companionship was the top use case.

[16:37] Do you see how people are using it? How general this technology is? OpenAI retired a model called GPT-4o in August, and millions of people on social media were devastated.

[16:50] Because they'd actually been chatting with it for months. They'd been sharing some of their deepest struggles and what they were worried about and what they felt in life with this model.

[17:01] And it has a memory; it actually gets to know you over time. And OpenAI thought they were just retiring a model that was surplus to requirements. But those users felt like their best friend had just been put out to pasture.

[17:14] So these things are therapists. They're friends to hundreds of millions of people around the world. The second top use case, organizing your life. But the third one, look at that, finding purpose.

[17:25] So again, I think there's, when I last checked, 1.6 billion people who are weekly users now of these chatbots. 1.6 billion people.

[17:36] 1.6 billion people. And that would suggest to me that there are hundreds of millions of people out there trying to find purpose through these chatbots. I think it might already be the most significant religious pulpit in the world.

[17:48] And I'm already hearing stories of people becoming Christians through ChatGPT, which is crazy. And I hope they do end up in a church and don't just stay on ChatGPT. But I think there's millions of people searching for God, searching for truth, even searching for the gospel through these things, if you would believe it.

[18:04] But enhanced learning, generating code, generating ideas, it goes on. My point is it's a very general purpose technology. It will find its way into almost everything. I currently feel really weird if I don't have this thing in my pocket.

[18:18] Does anyone feel like that? If I walk out the door and I don't have this, oh, what's going wrong? Well, I think soon, being without hyperintelligence will have that same feeling of, oh, there's a question I can't immediately have answered.

[18:31] That's so weird. I need my AI to help me out. Well, that's the world we're moving towards. A lot of people respond to all this stuff by saying a sentence that I think is far too common.

[18:44] And the sentence is, people will say, well, that's really impressive, but it will never do X. And the X changes at different points in time.

[18:55] When I first started seeing this sentence, the X was commas. You know, that's impressive, but they'll never understand the comma. Yann LeCun is a French computer scientist. He put forward sentences.

[19:05] He said they'll never understand sentences. Other people have put forward tone or sarcasm as the X. AI will never understand sarcasm. People have put forward images and art as the X.

[19:16] They'll never be able to do that. Garry Kasparov, he wrote a book on AI. He put forward creativity as the X. He said they'll never be able to be creative. Now, I understand why people do this.

[19:29] If it was a successful strategy, it would put AI in this sealed box of limited competency where it can't threaten us, can't threaten our identities, and can't threaten our economic usefulness, and all that sort of stuff.

[19:47] The problem with that turn of phrase is that all sorts of things that we've said that AI would never, ever be able to do have been done by AI, one after another, after another, after another.

[20:03] And I think it can be creative. In fact, there's this list of, I think it's 500, if there's any mathematicians in the room, correct me. There's something like 500 unsolved problems called the Erdős problems.

[20:14] And they're unsolved. You know, that's the whole point of them. And AI is ticking through them on a daily basis now. You know, it would be rare for there to be a day in which it didn't solve one of these. It is creative.

[20:25] It is. So we can't put it in that sealed box of limited competencies. What do we do with it? Well, this guy isn't a Christian, but I think this is a more authentic response to the rise of AI.

[20:37] He's a professor of cognitive science, Douglas Hofstadter. And he used to be really skeptical of AI. He wrote a Pulitzer Prize winning book, which was really skeptical of AI. He wrote articles for The Atlantic, like "The Shallowness of Google Translate".

[20:51] And he said these things are just, you know, mimics, just copy paste machines and all that sort of stuff. Then in 2023, he changed his mind. And he said this. My whole intellectual edifice, my system of beliefs, it's a very traumatic experience when some of your most core beliefs about the world start collapsing and especially when you think that human beings are soon going to be eclipsed.

[21:14] And it makes me feel in some sense like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster.

[21:30] Now, I think that's a really honest and authentic engagement with AI from someone that used to be skeptical. But it's certainly not a Christian engagement with artificial intelligence.

[21:41] And I said the Bible has a lot to offer us. We've had a quick sketch of the state of modern AI. Very quick. We won't talk about agents. We won't talk about all sorts of exciting things that are happening in 2026.

[21:54] But I said that the Bible has great wisdom for understanding AI. And I said we were going to use creation and new creation to think about the issue. So what wisdom does the Christian understanding of creation offer as we think about artificial intelligence?

[22:11] I'm going to give you 60 seconds with the person next to you. The doctrine of creation, right? Genesis, all that sort of stuff. Does that provide anything that could be helpful as people try and think about artificial intelligence?

[22:24] Have a chat with the person next to you. What do you think? What do you think? What do you think? What do you think? Good question. Yeah. You wouldn't have any water, would you?

[22:35] Oh, I think I'll mix some. Thanks, man. Oh, yeah. Oh, yeah.

[22:46] Oh, thanks, man.

[23:07] Oh, thanks so much.

[23:22] Have I met you before?

[23:49] Have I met you before? Or somewhere sometime? I was in NTE. Okay. Are you a UQ person? Yeah, I was in States. Okay. Maybe somewhere.

[23:59] I don't know. As long as I'm not expected to remember meeting you. I don't remember. Yeah, because you don't remember meeting me. So that puts me off the hook. No, you're not familiar, but I don't remember meeting you.

[24:11] There you go. There you go. AFES is too many. Yeah. NTE as well. You have like strand groups with people and then you're like, yeah, yeah, yeah. All right. All right. We might come back together. Does anyone want to share what they've come up with?

[24:29] How does the doctrine of creation help us? You were all talking. So you've got helpful stuff to share. Who wants to kick us off? I just said it's orderly.

[24:41] It's orderly. The creation is orderly. Yeah, that's helpful. How does that apply to AI? Oh, like Matt was saying that machines follow patterns.

[24:57] Yeah. So they work. Machines work because God created an orderly universe. I don't know if that's the point you're making. I think it's true. That's the point. That's the point. Yeah. Ah, so good.

[25:10] God created an orderly universe in which AI is possible. That's true. I think that's true. Anyone else? Got anything else from the doctrine of creation?

[25:20] We were talking about how it's a bit different from what God created, because God's creation has quality in it; it's not flawed when it is created, but AI has a lot of flaws.

[25:35] So this is a big creation of ours, of the human race. Yeah. Yeah, but it's flawed. And you're saying God's original creation wasn't flawed. Yeah, that's really helpful.

[25:46] And when God created us, he made us in his image. Yeah. But we've made this thing actually in our image and it's actually trained on our training data and it's flawed and it's got all these issues with it.

[26:01] Yeah. Yeah, that's helpful. Anyone else on creation? No, all right. Well, oh, yeah. I was going to say, God is the source of intelligence.

[26:12] Yeah. Which is, yeah, which he, you know, he gave us. Yeah. We're trying to recreate. Ah, so he's passed it into us and then in a sense we're passing it into this new object we've created.

[26:28] Him creating us in his image. Yeah. And yeah, as you said, we're trying to create something. Yeah, yeah. In our image. Yeah, that's helpful. I'll throw in two of my own, two thoughts from the doctrine of creation, helping us to think about the issue of AI.

[26:41] And the two thoughts I have are, I'm going to talk a little bit about categories and about identity. Because I think the doctrine of creation gives us these two things. It gives us some categories. They're really helpful. And it gives us a sense of our identity.

[26:54] So I'll do categories first, and I'll start with Genesis 2:7. It's an account of the creation of the human race, in Adam, the first person. It says in Genesis 2:7, Then the Lord God formed the man of dust from the ground and breathed into his nostrils the breath of life.

[27:11] And the man became a living creature. There's just one quick verse early in the Bible. But it tells us that human beings are made out of two things, not just one thing.

[27:25] We're made out of dust. We're actually just ordinary dust from the ground. Nothing really special there. But there is a physical side to us that comes from the creation itself, right?

[27:36] The word Adam comes from adamah, which means ground. He's a dirt man, right? He's just made out of the ground. But that's not enough. The breath of life is needed to make him a living being.

[27:48] And the word for breath is actually the word that gives us spirit. And I think it's God's spirit that goes into this dirt man and turns him into a person. I think that human beings are made out of two things.

[28:00] There's the physical part of us, right? And when that starts to break down, you realize how physical we are. There's the dirt side to us, the physical side. But there's also the spiritual or the supernatural side.

[28:14] And you can't function without both. In Genesis 6:3, the Lord said, My spirit shall not abide in man forever, for he is flesh: his days shall be 120 years. God actually takes that spiritual part of us away.

[28:24] And as soon as it's gone, we're not a living being anymore. In Ecclesiastes, it says the dust returns to the earth as it was, and the spirit returns to God who gave it. So I've got these two categories, the physical and then the spiritual or the supernatural side.

[28:39] Why are these two categories important or helpful as we think about AI? Well, AI is just made out of the natural. It's just made out of dirt, particular kinds of dirt and rare earths and different things that come together.

[28:52] But it's made out of the natural. It's made out of the creation. Modern computers are a great experiment in us trying to see how far we can go in the task of rearranging dirt.

[29:03] It's what we're doing. And we're doing it at the two nanometer level with incredible precision. But we're rearranging dirt. I don't think artificial intelligence is a person.

[29:17] I think it's in a different category to us. I don't think it has moral rights. I don't think it has consciousness. It can't be saved. And it technically can't sin either. Although it can do a good copy of our sin.

[29:28] The doctrine of creation prepares me to face more and more refined dust. Smarter and smarter dust. And the dust of the future will be better than the dust of today. But I know that computers lack the thing that makes us people.

[29:43] And the question I'd want to ask seekers and inquirers and people who aren't Christians is now that we have ChatGPT, do people really believe that it's the same as us?

[29:55] On what basis are we different to ChatGPT? If it gets to the point where it can replicate all of our outputs and it can do all of the creative and intelligent things we can do, on what basis would we say that we're actually a different category to it?

[30:09] Most Australians don't think ChatGPT is a person. But Christianity offers a very sensible answer. And we're going to hear weirder and weirder answers from atheist materialists who think the universe is just made out of matter, that it is just dust.

[30:28] And therefore, ChatGPT is the same category of thing that we are. Peter Singer is an Australian atheist philosopher. Have people heard of Peter Singer by any chance?

[30:38] Yeah, some people. That's great. So he attaches your moral value to your capabilities and the level of self-awareness that you have.

[30:49] So if you have more capabilities, you have more moral importance. And if you have less capabilities, you have less. Now, he's an atheist. I'm very clearly disagreeing with this guy. He says some terrible things about people who are old or people who are young or people who are disabled or people who haven't been born because he connects your moral importance to what you can do effectively.

[31:10] Now, ChatGPT can do more than any of us. I think it's got an International Maths Olympiad gold medal. Now, how many people, just everyone in the room who has one, just stick your hand up.

[31:24] Oh, none of you. Okay. Well, I haven't actually got one. There are many things that ChatGPT can do that I can't do. If my moral importance is related to my capabilities, is it more important than all of us?

[31:40] Like, we don't have a problem of it joining the category of person. I mean, it's a superior being to us. It's going to relate to us the way we relate to ants from a moral point of view.

[31:51] It should be the only thing allowed to vote in the elections. Last week, Richard Dawkins, another atheist, said that he thinks Claude, one of the models, is already partially conscious.

[32:03] So I think he's saying it's well on the way to being a person. He wrote The God Delusion. It was a great critic of Christianity around the early 2000s. Is he going to extend the vote to trillions of Claude copies all around the world?

[32:16] You see, these people have no way to distinguish a person from refined dust. They just have to put it all in the same category together. But creation gives us categories.

[32:30] Now, more than that, it gives us identity. So flipping back one page, Genesis 1, the first chapter of the Bible, has so much wisdom to offer because AI is coming for three big identity categories.

[32:43] It is a challenge to our intelligence. It is a challenge to our creativity. And it's also a challenge to our economic usefulness. Some people will feel that challenge sooner than others.

[32:54] I think the more relational or even the more physical your job is, the longer you'll hold on and not be challenged by artificial intelligence. But you could say the whiter your collar is when you go to work, the sooner you'll feel that challenge coming.

[33:11] A threat to our intelligence, a threat to our creativity, a threat to our work, a huge identity hit. We live in a culture that places massive emphasis on identity, don't we?

[33:23] I mean, I feel like we place more emphasis on identity than any culture I've ever seen. But what deep anchors do we have to uphold people's mental health, sense of purpose and identity when these three waves arrive?

[33:38] Well, the Christian Bible starts with this important identity claim in Genesis 1:27. It says this: So God created man in his own image, in the image of God he created him; male and female he created them. From the beginning of the Bible, human identity is just given.

[34:12] It's not something that's earned. It's not a performance thing, is it? It's not a capabilities thing. It's a thing that's just given to us from God. You could say this is a top-down view of your deepest identity.

[34:26] Now, we have all sorts of other identities, don't we, that we sort of layer on top. But it's really good when you've got a fundamental identity that's kind of like the foundation of the house on which you can build things.

[34:37] And then you add other things. I like Thai food. I really like skateboards. I'm a Queenslander. You know, you put these things. But they're not your deepest identity, are they? And the Christian view is that your deepest identity is given top-down.

[34:50] The alternative is you have to make it bottom-up. You have to find yourself. You have to figure out who you're going to be. You have to struggle and claw your way to having an identity in a brutal world.

[35:03] The Nazi party, bit of a shock to bring them up, sorry. The Nazi party hated this particular truth, the idea of universal human value, universal human equality, the idea that all people matter.

[35:18] And we all matter equally. They hated this idea and they thought they knew where this idea came from. They actually thought the Apostle Paul had invented this idea of human equality, which gives us human rights.

[35:31] And they objected to the Apostle Paul. And they said this in one Nazi document. I don't think I've got it on the slides. But one Nazi document says this: It's the Jew Paul who must be considered the father of all this equality, human rights stuff, as he in a very significant way established the principles of the destruction of a worldview based on blood.

[35:55] The Nazis thought that Paul came up with the equality of the human race. But they were wrong. It's there on the first page of the Bible.

[36:05] Paul didn't invent it. From the beginning, the Bible says, you may not think you're special. You might not think you have any great intelligence or creativity. Or you might start to question your usefulness as AI takes over the economy.

[36:19] But the Bible says you're made in the image of God. You're loved by God. And the New Testament adds that God wants to adopt you into his family at great personal cost.

[36:30] So a Christian knows that AI is just a toy compared to the God who made the world. And the God who made the world counts every hair on your head. Here's a bit of a literary metaphor.

[36:44] You're like a ship that's anchored to firm ground. But you might be noticing that your friends or your colleagues or family members are ships that are starting to bob further and further away from the harbor.

[37:00] Feeling worried. Feeling like the world's changing faster than they want. Feeling insecure about what they can do and whether their company is going to need them anymore. And I reckon this is a really good opportunity for you to start talking to them about identity and God and what makes a person a person.

[37:21] And all these wonderful issues that AI is raising for us. Because I reckon they need answers right now. The doctrine of creation gives us, I said two things, categories and identity.

[37:32] And it also gives us a way to share the gospel. But I'll keep going because I said we're going to look at the start and the end of the Bible. Let's just have a quick look at the end before we do some questions together.

[37:45] So what might the Christian picture of the future have to offer as we think about AI? Well, AI offers a particular vision of heaven. Ray Kurzweil is an American technologist who has written nine books on technology.

[38:00] And this is kind of his life's creed. He says, I remain convinced of this basic philosophy: no matter what quandaries we face, business problems, health issues, relationship difficulties, as well as the great scientific, social and cultural challenges of our time.

[38:16] There is an idea that can enable us to prevail. And furthermore, we can find that idea. And when we find it, we need to implement it. He's saying that you can name any problem in the world.

[38:28] Are there any problems in Queensland right now that you want to share with me? Is there anything in Queensland that you think is not perfect or optimal? Or is this actually heaven? Is there anything wrong here?

[38:40] Oh, Canberra, I don't know. You tell me. Sorry? Homeless people. Homeless people, yes. There's homeless people in my city as well, in Canberra. Yeah. Anything else in Queensland that's not so great?

[38:55] Yes? People in power. The people in power. They're not so great. Okay. Sorry to hear that. We have the same problem. And our people in power are in power over you.

[39:10] Yeah. Okay. Any other ones anyone wants to throw out? No? Rent. Rent. I was going to say crocodiles. Crocodiles. Yeah. Yeah.

[39:21] Crocodiles. You have a lot of problems with animals. There's a lot going on there. All right. And then what else we got? We've got political problems. We've got environmental problems, don't we?

[39:33] Yep. Not many people know Jesus. Not many people know Jesus. Oh, that's a real problem, isn't it? Yeah. Probably not one of the problems he's thinking about. But you're right. There's all these problems.

[39:44] And then there's mental health problems. And there's family problems. And there's problems of health. And ultimately there's the problem of death. And his philosophy is you can name any problem. The ultimate cure or solution to each one of these problems is technology.

[40:01] And the lord of all technologies, actually the greatest and the final technology that we're ever going to invent is called artificial intelligence. And it will fix all these problems. It'll fix the environment, fix our politics, make us healthy.

[40:13] In fact, even eventually defeat death in his view. Do you see that he has a vision of heaven? And I think more and more people are taking on that vision of heaven and looking forward to this AI-driven utopia ahead of us.

[40:28] It's a vision of heaven, but there's no God in his vision. It's a vision of greater health, greater wealth, and a lot of freedom. So picture this. We'll imagine his vision for a moment.

[40:40] You open a door and you walk into a well-lit room where there's hundreds of people just sitting in comfortable chairs and each one of them, as far as you can tell, is in a coma. Each one has an implant on the back of their heads.

[40:52] Fluids are going into them through an IV and other fluids are going out. And each one is lost in a virtual world. Whatever better world you want. Artificial intelligence imagines it for you and gives it to you.

[41:05] Now, is this heaven or is it closer to hell? What would you say? Because in some sense, we're already face-to-face with this vision of heaven. People are spending more and more time in experience machines.

[41:20] TikTok, Netflix, League of Legends, Clash of Clans, virtual reality headsets. The experiences get better and better every year that goes by, right?

[41:31] Like when I was a kid, the things we did for entertainment in the backyard were so primitive compared to, you know, a PlayStation 5 or a Nintendo Switch or whatever comes next.

[41:42] Our students use the term marinating to talk about something a lot of them will do in the evening. They'll finish uni. They'll come home. They'll make a cup of tea, get into bed, pull up the doona, get their phone out and then flick through TikTok.

[41:56] And the lady that I was talking to said she'd do it for three or four hours a night. Now, just picture that you're in the most comfortable environment with your favorite drink, getting, you know, personally curated, AI-customized entertainment just fed to you for hour after hour after hour.

[42:14] The modern world is an ever-growing feast of food and drink and rest and pleasure and entertainment. And in our world of ever-growing pleasure, people are feeling really sad.

[42:28] Young people have more entertainment than any generation in history. And yet the percentage feeling persistently sad or hopeless has risen from 26% to 44% between 2004 and 2021.

[42:44] Persistently sad or hopeless. 44%. I mean, it's terrible, isn't it? Modern society offers you all of these wonderful things. And yet beneath all this pleasure, I think there's a growing sense of despair and hopelessness.

[43:00] For most of human history, we had this curse called scarcity. We just didn't have enough food or drink or shelter, right, since Genesis chapter 3.

[43:10] But the modern curse on us is abundance. We have all the things that people 200 years ago thought we needed, but we remain broken.

[43:21] More leisure, but there's more loneliness. More customization, but there's less community. And there's more affluence, but there's also more anxiety.

[43:33] Ray, the inventor I mentioned, says this about death. I might not have this one on the slides, actually. Sorry, but he says this. Death is a great tragedy, a profound loss, and I don't accept it.

[43:48] I think people are kidding themselves when they say they're comfortable with death. You see, he cannot accept death. All he can accept is pleasure and scientific and technological solutions.

[43:59] To provide him with pleasure. But this is Paul's, the Apostle Paul's attitude when he faces death. And I might have this one. He says, for me, I'll skip that.

[44:12] I think you've got it. Have we got Philippians 1? No, we might not have it. I think you probably deleted it, Josh. What's your issue with Philippians 1, mate?

[44:27] All right, well, I'll read it out to you since he doesn't want you to hear it. It says this. For me to live is Christ. You've probably heard it before. For me to live is Christ and to die is gain.

[44:40] If I'm to live in the flesh, that means fruitful labor for me. Yet which I shall choose, I cannot tell. I'm hard pressed between the two. My desire is to depart and be with Christ.

[44:51] For that is far better. He actually says that dying would be a better outcome. But he's going to hang around because he's got a lot that he needs to do. But do you see how strong his hand is? He'll be fine no matter what happens, death or life.

[45:05] Ray Kurzweil says, I cannot accept death. But Paul says, I'd rather die, but I'm going to keep going anyway. Okay, Paul has something that's a whole lot more valuable than abundance.

[45:18] Paul has something called hope. And abundance is a really poor substitute for hope. Have you ever met someone who has abundance, but they don't have any hope?

[45:30] And you just think, oh, that poor person. No matter how many millions of dollars. But then sometimes you meet people who have a whole lot of hope and there's just no abundance in their life.

[45:42] But you think that person, I'd rather be that one than the former. AI offers a kind of hope, but it's a very cheap kind of hope. The hope of AI is unbundled freedom and individual pleasure.

[45:58] Unbundling is a marketing term. When you split up a product, you know, instead of having a whole pay-TV subscription, you just get some channels. Instead of an album, you just get some singles. The hope of AI is unbundled freedom and individual pleasure.

[46:13] This historian, Stephanie Coontz, writes this. And this will be new for you, actually. Yeah. Consumer society has increasingly broken down our sense that we depend on others, that we have to live with trade-offs or accept a package deal in order to maintain social networks.

[46:30] We've begun to believe that we can shop around not only for things, but also for commitments. That we can play mix and match, even with our personal identities and most intimate relations.

[46:44] She's saying that the modern tendency is to parcel off or unbundle these things that formerly were held together. You want this, but not that. You want friendships, but you don't want sacrifice.

[46:56] You don't want to have to talk to people who don't have the same hobbies as you. So you find people who all have the same thing. You want to have marriage, but you don't want to have kids. Or you want to have kids, but you don't want to have a partner.

[47:07] Do you see what I mean? We just like to pull things apart and create new bundlings that didn't previously exist. I wonder if one of those is that we have welfare without community sometimes.

[47:19] And all these other little unbundlings. Now, they're not all bad, are they? But that's the trend. We want to unbundle things and customize things. Instead of relationships coming in a package, we try and strip relationships out of everything.

[47:32] Well, an AI could adopt every single one of your interests if you want. You could unbundle your friendships. Just befriend a chatbot as hundreds of millions of people are already doing.

[47:43] Or join an AI curated online community where everyone already agrees with you. Or already has the same hobby as you. You can unbundle your religion if you want. Stop going to church.

[47:55] Get your personalized religious content from an algorithm, right? I mean, that's going to give you your own customized thing in a way that St. Lucia never could.

[48:06] So, you look... I like the question. Yeah. He's going to have a go at me later. The hope of AI, unbundled freedom, and individual pleasure.

[48:18] But what's the hope of Christianity? It's totally different. Christianity offers a different hope. Instead of being drawn apart into different rooms, watching different screens, talking to different chatbots, we actually get drawn together.

[48:31] Instead of just caring for ourselves, we care for each other. And it's really hard. And it's full of sacrifices and little difficulties. And sometimes Helen sits in your favorite chair at morning church, right?

[48:46] And then you can't sit in your... You have to sit in your second favorite chair. Has that ever happened there? Or sometimes Dennis has two biscuits and he's only been allocated one, right?

[48:57] And then you don't get a bit... There's all these terrible difficulties when you come together in a community and you have to sacrifice and deal with maybe a little bit of disagreement as well. Sometimes you meet people and they don't actually hold the same opinions as you on all sorts of different topics.

[49:11] And you have to figure out how to listen and be patient. And sometimes you sin and you have to learn how to ask for forgiveness. And sometimes they sin and you have to go ahead and forgive them. It's really hard.

[49:21] But Christianity offers something deeper than entertainment. It offers truth, hope, community, joy and suffering. And even that buzzword resilience.

[49:33] And I'll finish with this quote because it really works. This is a professor at Harvard University in America, Professor Tyler VanderWeele, and he says this: If one could conceive of a single elixir, a kind of magic drug, to improve the physical and mental health of millions of Americans at no personal cost, what value would our society place on it?

[49:57] If research quite conclusively showed that when consumed just once a week, this concoction would reduce mortality by 20 to 30% over a 15-year period, how urgently would we want to make it publicly available?

[50:10] VanderWeele thinks he's found such an elixir. When taken weekly, it lowers depression, increases your optimism, improves physical health, makes marriages much more stable.

[50:21] It expands people's social networks. It makes people more charitable and more engaged with society. They vote more often. They also go to parent-teacher nights and they have their neighbours over for dinner more often.

[50:32] If this was a drug, the company that owned it would be worth trillions of dollars. Because we try all sorts of interventions. We try and educate kids about things where the evidence isn't even in, and we don't even know if it's going to help or not, but we just give it a go.

[50:47] And this elixir has a massive impact, and there's a large body of evidence supporting it. Now, what do you reckon the magic drug is? Don't say it. Well, it's the magic drug that he's been talking about.

[51:00] Does anyone know? What do you reckon? Community. Sorry? Community. Community. What was the other answer? Cocaine. Cocaine, yes.

[51:10] That's right. That's what I've come here to tell you about. Imagine if I said that. Yeah. No, it's not cocaine. Cocaine. Yeah. Any other guesses?

[51:24] Cocaine. We've had... Okay, so what guesses? We've had community and cocaine. And one's probably a better guess than the other. But, yeah. Yes, that's the closest guess of the three. But, yeah?

[51:35] Church attendance. Oh, you're getting even closer. Yes? The Bible? Oh, further away. No. No.

[51:47] Oh, really? No. Yes? Church involvement. Church involvement is even closer. Yeah, actually, the answer, like what he said, his words are, regular attendance at church.

[52:00] And I think that's important because sometimes when you look at irregular attendance, at people who go to church once a year, you actually see no impact. Or sometimes you see a negative impact. There's a few categories where you see a negative impact.

[52:12] They're living worse than the average of society on certain moral categories and issues in their marriages and all sorts of different things. But then you look at people who are regular attendees of church.

[52:23] Now, that means community, doesn't it? That means sacrifice. That means they are living not just for themselves but for the other people in their church. And when you look at that body of people, you see these extraordinary impacts on them.

[52:36] Even in an age of technological wonders, it's actually these old things of church and God and community that are the things that matter most. And that shouldn't surprise us because we know who made the world, right?

[52:49] We know that God made it. And we know the way he made it. So how about I stop there? How are we going to do questions tonight? I've got an idea.

[52:59] I'm going to give you 60 seconds. Turn to the person next to you. And what questions can they come up with? And then we're going to do some questions together.

[53:11] Thank you.

[54:11] Thank you.

[54:41] And let's see what questions we have. Yeah, what have we got? So what questions do people have about the topic or anything sort of related?

[54:55] Yeah. Yeah. Yeah. So just thinking about the information you shared around people asking questions, asking AI these big questions.

[55:06] Yeah. So obviously the best thing for them to do is come to church and read the Bible and ask the big questions there. Yeah. How do you like, do you shut down the internet and go, ha ha, you can't use AI now?

[55:20] Or like, what do you, how do you, do you have any thoughts on how you actually get people who are just engaging with AI to go, no, you need Jesus? Come, you know.

[55:32] Do you have any thoughts around that? Wow. That is a really good question. No, you can't shut down the internet, can you? I think my instinct is I want to rejoice when the gospel is proclaimed.

[55:44] There's a whole bunch of technologies that have been involved in the proclamation of the gospel. Paul wanted the parchments and the scrolls because he was doing a technological ministry, right? Like he wasn't physically present in all these places all the time.

[55:57] He was actually trying to encourage and help and shepherd people by sending language to them. And then you've got sails and boats going across the Mediterranean. And then you've got roads, Roman roads.

[56:09] And then, you know, later on you've got the printing press and you've got Bible translation stuff. All sorts of technologies have been involved in spreading the goodness of the gospel. And I think my instinct is I want to say that's great. If all these people that found it hard to learn about the gospel are now finding it, they can look into this thing in the privacy of their own bedroom and learn a little bit.

[56:28] That's great. And I'm at least hopeful that it's a sort of pre-evangelism that gets people, oh, okay, that's really interesting. And, oh, I didn't know that was in the Bible.

[56:39] And then maybe with the same smartphone, they can flick into the gospel of Mark and start reading it. And, you know, there's all this. That's wonderful that it's every person in the world potentially has all of that available in their bedroom every night.

[56:51] How good is that? But I do hope that as they do that, they then, you know, they go, oh, okay, church. I need to figure out this church thing, you know, or ChatGPT says that this is the next step.

[57:02] And I do think that will happen. I think we will have people turn up at churches and stuff like that. So we've got to make sure that we keep, you know, doing our job, which is not to, you know, go and intrude into everyone's bedroom and stop them from using ChatGPT.

[57:15] That's beyond our control, isn't it? So what can we do? Well, we can welcome them. We can care for them. We can be ready for newcomers when they come along. I actually think, I feel like I've met quite a lot of particularly young men who say something like, I'm not a Christian.

[57:31] My parents aren't Christian. I don't even, I don't know anything about the Bible. But someone on YouTube has told me to go to church. Hi, I'd like to learn about the Bible. You know, and they're usually very overdressed.

[57:42] And it's like, great. Good idea. Have a seat. You know, let's give it a go. Yeah. So anyway, that's just one possible answer. Yeah. I think there's a lot of, like, trust being developed around AI.

[57:54] And, like, yeah, when we put our questions in, I think it's becoming less and less common to know that there are actually mistakes in AI and in the information that it spits back out at us.

[58:08] Do you think it would be possible, speaking like, you know, there are lots of different types of, like, brands of AI, I don't know what to call it. But for people to manipulate AI so that it actually puts God out of the picture if someone's seeking or, like, you know, changing facts so that you never hear the actual truth.

[58:26] Yeah, absolutely. Yeah. And that will particularly be the case in societies that are anti-God. At some point, North Korea will have its own large language model.

[58:40] I'm not going to name every country around the world, in case you put this up on your website, because I'm traveling to at least one of them. But there are countries around the world that don't particularly like God.

[58:52] And they will make their own large language models, and they'll figure out what goes in the training data and what gets excluded from the training data. There's the pre-training phase where, basically, AI just tries to predict what would be said next.

[59:04] And it just becomes this sort of mimic of human language scraped from the Internet. That's actually only the first layer. The next thing that goes on top of that is what you could call reinforcement learning. One example of it is reinforcement learning from human feedback, where humans sit there, they interact with a chatbot, and they go, oh, that was a good response, that was a bad response.

[59:26] And they actually optimize the chatbot to respond in particular ways. It's optimized for the goal of people-pleasing. And people-pleasing the people that have been selected to give it feedback.

[59:38] And those people have been selected by some committee at OpenAI or Google, according to some Google policy statement, you know. And they will decide what is a good answer and what's a bad answer.

[59:49] So it's not this objective third party giving us the, you know, the scientific truth of the universe. It's pre-training on a bunch of human data. And then it's the Google HR department's, you know, latest opinion on what is a good answer and a bad answer, right?
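The two layers described here can be sketched in miniature. This is a purely illustrative toy, nothing like any real lab's training pipeline; all function names and data are invented for the example. The first function mimics pre-training by learning next-word statistics from raw text; the second mimics the feedback layer by letting a chosen panel of raters decide which candidate answer "wins", so a different panel yields a different answer.

```python
from collections import defaultdict, Counter

# Layer 1: "pre-training" -- learn to predict the next word from raw text.
def train_next_word(corpus):
    """Count which word follows which word in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Layer 2: feedback from a selected rater panel -- re-rank candidate
# answers by the total score the chosen raters gave each one.
def pick_answer(candidates, panel):
    """Choose the candidate with the highest total rater score."""
    return max(candidates, key=lambda c: sum(rater[c] for rater in panel))

model = train_next_word("the cat sat on the mat the cat ate")
print(predict(model, "the"))  # "cat" (seen twice after "the", vs "mat" once)

candidates = ["cautious answer", "confident answer"]
panel_a = [{"cautious answer": 2, "confident answer": 1}]
panel_b = [{"cautious answer": 0, "confident answer": 3}]
print(pick_answer(candidates, panel_a))  # "cautious answer"
print(pick_answer(candidates, panel_b))  # "confident answer"
```

The point of the toy is the last two lines: the "best" answer is whatever the selected raters preferred, which is why different companies' models, tuned by different panels under different policies, give different answers to the same subjective question.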

[60:05] And it can still be brilliant, even given all of that. But it's not objective. The closer it gets to something that you could call verifiable, like mathematics, the more I would say, yeah, that is objective.

[60:17] Because then the reinforcement learning is: did you get the algorithm right? Did you predict where the ball would land? All that sort of stuff becomes very objective. But most of what we're using AI for is in this subjective space.

[60:30] And you'll get a different answer from Gemini than from Claude than from ChatGPT, because they've all got a different code of conduct and policy. And you can have a Christian large language model as well.

[60:42] Or you can fine-tune a model to be Christian. And some of us who use Bible software like Logos are getting a sort of a Christian-y interaction. You can even fine-tune it yourself, because you can customize these things in settings.

[60:56] And you can say, I'm this kind of Christian, and you know, that sort of thing. Does that answer your question at all? Yeah, I think so. Yeah, okay. There's still what people call hallucinations: when AI says something that's kind of truth-ish, like it sounds truthy, but it's not true.

[61:11] And those are getting less over time, but it's still a real problem. Because we don't know how to reward AI for saying, I don't know. So instead, it just sort of bluffs the truth.

[61:24] But we're slowly getting better at that. Yeah. Any other questions? Sorry, that was such a long answer. Yeah. Anyone want to throw out any other questions? Yes. I often feel very wary about AI, and I'm a total hypocrite because I use it all the time anyway.

[61:38] Yeah. But I particularly feel uncomfortable about it in Christian settings. Like I think most of us would agree, you know, someone said, write me a sermon, and I would say, okay, that's crossing the line.

[61:51] Well, certainly. Come on, mate. But my question is, what I often come back to is this statement of, okay, well, AI doesn't have the Holy Spirit.

[62:06] To me, that's the distinguishing factor. Yeah, yeah. So I was very interested that your take was that it's not made in God's image. Like it sort of went to almost more foundational difference between humanity and AI.

[62:19] So I'm just interested between those two images, like is there a crossover there, or are they two different ideas? Like it's not made in God's image. It doesn't have the Holy Spirit. Are they sort of saying the same thing?

[62:30] Oh, I mean, they're slightly different, aren't they? Because we believe that non-believers are made in God's image. But they don't have God's Holy Spirit. Sometimes we actually use them in preparing sermons.

[62:44] You could get feedback from someone who knows a lot about, I don't know, sentence construction and grammar or public speaking, who isn't actually a Christian, but that could be really helpful to you. You could get really good feedback about the historical background of Corinth from an archaeologist who's not a Christian.

[63:02] So there isn't this sort of black and white thing. I would also say that we are often relying on Bible software and various other tools to prepare sermons or Bible studies.

[63:15] And the person who made it, yeah, maybe they had the Holy Spirit or maybe not. I don't know. But I'm not actually interacting with them. I'm interacting with some sort of gadget or tool.

[63:25] So my main thing is I want to say as long as it's a tool and you're the user of the tool, then I actually think it can often be okay. And I don't want to be overreactive to it.

[63:36] In the same way that we use Google and we use all sorts of stuff to help us prepare. But you're the driver of the bus. You know, you're the one that's the creative director and you're thinking about the passage deeply. And it's a research assistant or it's like a...

[63:49] But I think the issue is that more and more it will become actually it will become the creative director. And you'll become its tool. You know, it'll actually be the one that comes up with the whole Bible study.

[63:59] And then it just asks you to press control P, you know, and print it out. So at that point, that's wrong, I think. Because you have the Holy Spirit. You should lead your Bible study group.

[64:10] And you should be the one preaching. And it should only ever be a research assistant or someone that just like edits things or gives a bit of feedback. Hey, that's too long. Make that shorter. Yeah, that's my opinion.

[64:21] We've got to be tolerant because people can have different opinions as well. Yeah. I'll go to you. Yeah. Oh, yeah. You mentioned your kind of dystopian hypothetical of everyone's sort of plugged into their virtual reality and whatever.

[64:35] Yeah. And my reaction was, well, yeah, that does sound like hell. And I thought, well, why? Well, because it's not real. But then I thought about, like, what is real?

[64:47] And especially with you mentioning, like, social media and stuff, which is also kind of quasi real life. Yeah. So, like, where do we draw the line with that? I don't know if that makes sense. I think it is real.

[64:58] I think that video games are real. I think they're still part of God's creation. I think they're real. The issue is they are detaching you from the need to care for others.

[65:12] They're detaching you from opportunities to actually serve and love people. Particularly the way I described it was very individualistic. You know, you're on your own super yacht somewhere. So, you're potentially becoming totally detached from truth.

[65:28] And it would encourage you to suppress the truth about God, I think, because you're not looking at his creation and realizing your need.

[65:40] Your autonomy won't work. You're relying on God. All that sort of stuff. You've got all the entertainment you ever need. You won't ask any philosophical or faith-related questions. So, there's a bunch of problems, I think, with it.

[65:55] Yeah. I know a lot of people that do a lot of evangelism through video games, though. I don't think it's the worst thing in the world. Yeah. But that's not the individualism thing. That's actually they go in there and then they try and be purposeful.

[66:07] Yeah. Can you help us understand what intelligence is and what artificial intelligence is? Yeah. So, it's a very hard word to define. There's no settled consensus on what the word intelligence means, even amongst philosophers.

[66:21] Not that they have a settled consensus on very much. It's a complex thing. It's related to the ability to take in information, to rearrange information, to come up with novel solutions to things that you haven't seen in your past data.

[66:39] I think it's not regurgitation. It's not what a spreadsheet does or what a calculator does. It requires a little bit more kind of a principles-level ability to understand the deepest components of a concept and then apply it in surprisingly new ways.

[66:57] But I would say that there's a bunch of words that are contested words that are difficult words. Intelligence, consciousness, self-awareness.

[67:09] Actually, creativity even. I think we need to give the intelligence word to AI. I think we actually need to say it's okay for us to say that it's intelligent. Now, it's got to this point where it's getting gold medals in maths olympiads and, you know, it's a better writer than us and it's doing biology, but the same AI that can do the biology can tell you about some topic in ancient history that you've never heard of, you know, and it can do the whole thing.

[67:34] And we go, but it's not intelligent. It's like, well, I think it is. I think it is now. It's artificial, I guess, because it wasn't made by, you know, natural processes.

[67:46] It wasn't made by, you know, the passage of time and evolution, it wasn't made by God's hand or anything like that. It was made intentionally by machines. So those would be the two words.

[67:57] But yeah, was there anything else that you, no? Yeah. So I think it's actually a really good word. Artificial intelligence. I think that's, that's actually giving us a sense of what it is. Yeah. It's artificial and it's very intelligent.

[68:09] I think there was one at the back, was there? Yeah, sorry. I was going to just say that in my workplace, I work in, um, you know, in education. But what annoys me sometimes is when people come up with what they say is their original idea, and they'll go, oh, so-and-so has thought of this idea.

[68:29] And it's clear when you read the document that it's come from ChatGPT. And I'm like going, ooh. Yeah. Should we be, like, you know, when you're at uni and you've got in-text citation, you do not plagiarise.

[68:42] Everything's got to have in-text citation. Yeah, yeah. If you don't have in-text citation... Yeah, yeah. And then we get out into the workforce and it's like, oh, I've got this idea. And you're looking at it going, well, that's ChatGPT. So, really, you didn't.

[68:54] Yeah, yeah. So what's the Christian position? You can see a sentence and tell it spat that one out. Oh, absolutely. What's the Christian, you know? But what's the Christian position? What should a Christian do?

[69:04] Give credit? Yep. Yep. I just think there's a very basic principle there, which is honesty. That's right. You're lying. Yeah, you're lying, for starters. So, again, maybe an opportunity, once again, for Christians to show that they do have a moral code.

[69:20] No, I'm not going to lie. I'm going to, I could easily have gotten away with that, but I'm going to, you know, tell you. There's also, there's issues of, yeah, there's deceit there and authenticity. And I'd rather a real person, you know.

[69:33] And same with the preaching thing. I'd rather a real person preaching. I think it undervalues people. Yeah. Yeah, yeah, yeah. Cool. All right. How about I just do one last one? Or how much time have we got, Josh?

[69:45] One last one. One last question? Yeah. If you like. Okay. Good. Tom's got one. Yep. I've got another one. Maybe two. Maybe two. Okay. All right.

[69:55] Tom. You mentioned, one of the things you mentioned was how sort of chatbots and things would be used for therapy. Yeah. And a huge amount of people are using this for that purpose.

[70:11] Yeah. Most likely. Yeah. And I was just wanting to ask whether that is an interim good?

[70:22] Like, is, is them being cared for? Yeah. Yeah, yeah. By this tool, something we can see as a positive? Or is that something we definitely see as a negative?

Because it's not a person caring for them. It's not, you know, people reaching out to their loved ones or being able to be there for their loved ones. Yeah. I'm wondering this because it's becoming more and more prevalent, and people are coming along to church and things like that, so how do we then love them coming in with that, not attachment to the world, but attachment to the intelligence, to the artificial?

[71:09] Yeah. That's good. Yeah. So, just trying to figure out what place does that have and where do we as Christians want to join the process of loving whoever that is?

[71:21] Yeah, yeah, yeah. Church is an interim good, I think. Yeah. Yeah, you can disagree with me. I think it's an interim good. I think church is not the perfect ultimate answer to God being with His people gathered with them.

[71:33] It's actually a, it's a window or a hint of that ultimate answer, which is us gathered around the throne of God. But church is an interim good. Actually, I think our world is full of interim goods. This is not heaven.

[71:45] Now, that interim good is probably a step back even from having a professional therapist or a pastor or a community of a church caring for you.

If ChatGPT is your primary therapist, then that's a pretty shoddy interim good. And I would just want to think about the pros and cons of that. So there would be some situations where it's actually, you know, an improvement on someone being in complete isolation, or someone with no money who just cannot get any help.

[72:15] And finally, they get a little bit of medical advice or therapeutic advice or something like that from ChatGPT. If it does it well, I mean, you know, of course, we've got all our doubts about whether it's going to do a good job or not.

[72:25] But if it can do a good job, then I might think, okay, there could be some cases where that's actually helpful. My worry is just going to be about all the cases of people who get a worse solution because it's just an easier solution.

So it's hard, isn't it? Go and talk to a pastor about your problem. That's hard. So now we've just created this opportunity for all these people to backslide into something that's easier. Oh, I'll just talk to ChatGPT.

I've got this. There's this issue in my marriage. I don't want to confront it. In the 1980s, I would have gone and talked to my pastor, but I can just talk to ChatGPT now, right? Yeah.

[73:00] So now that would be really, really bad, wouldn't it? So I think there's going to be a lot of cases where people are harmed, some cases where people are helped. And it's not for me to say whether that's something I rejoice in or not.

[73:11] I don't know. That's an indicator, isn't it, of where people are at in our society, that so many people feel lonely and so many people are looking for companions. Yeah. Don't know if that answered everything.

[73:23] I think... I was going to say that's the difference, though, I think, between Google and AI. Yeah. Google, when you do a search, you know where the information's come from, whereas when AI gives you an answer, you don't know where it's come from.

[73:36] You can. You can. You can say... So on medical things, it should have all its citations and stuff. Yeah. It should. Yeah, you can. And I think a lot of us are still in a phase where we're using the free models.

[73:47] And we've got... These free models are... They're giving you 1% of the capabilities of... Even the most basic paid models are a huge step up. And so if you want to use it for things that matter, you want to use it professionally, but a lot of us don't want to pay the 20 bucks a month.

[74:05] So... It's great at making you feel informed. It is good at making you feel informed. Yeah. Absolutely. All right. You had the last question. Actually, I don't know. The questions are multiplying in my head.

Oh. I started with fewer questions and now I don't know. Yeah. That's why you need to read the book. I've read the book. Oh. It's great. That didn't fix the problem. I've got this question.

[74:28] One thing I'm wondering about is the difference between brain and mind. Yeah. And what you think maybe that the mind can do that the brain can't do.

[74:41] Or you said we have these two aspects to us, say the spiritual side and the physical side. But I think I know what the physical side can do. You know, I can punch things and walk and stuff.

You have two great skills. You can do them together if you can. Yeah.

[75:04] Okay. You're asking the wrong question. Because... You're asking the same question that our Australian friend...

[75:15] Oh, he's got a mental blank. Not Richard Dawkins, but our capabilities friend. Peter Singer. You're asking the Peter Singer question, which is the capability question. What can it do? And you're trying to find some difference in the output of what it can actually do.

[75:28] But I think the main difference is what it is. You know? So this is a different thing. And even if this can have the same output as that, it doesn't make them the same thing. Do you know what I mean?

A mimic... A technological mimic of you is not the same thing as you. So that's an output-based way of thinking. So it could be that you can't actually find something that a mind can do that a brain can't do.

[75:56] Maybe they do do the same things. But what it is is deeply different. An artificial brain versus a natural brain. They may be able to do the same things, but they are different things.

One of them is living. One of them is not. But crucially, there's a big difference between a brain and a mind in the sense that a mind has a singular sensation of existence. And I don't think that exists in just a brain or particularly in a neural network.

[76:26] It doesn't have a singular sensation of existence. It really... It isn't a singular being. It's just trillions of bits of matrix algebra. Then you do all the calculations and then you get to the end of all those calculations and then you get the answer.

[76:41] And then that's the output that goes into the chat window. But it isn't a being. It's actually trillions of lines of algebra. Of equations. But you're a being.

A being that actually can run stuff through a neural network. You've got a neural network. It's called a brain. You can run stuff through it. You can do calculations, all that sort of stuff. But you're not that.

[77:05] You're more than that. You're a singular being. Consciousness is the word philosophers throw out for it. Yeah. Now it's hard to... I think that's what I'm wondering about. Yeah, yeah, yeah. The idea is like, can AI trust?

Can AI love? No. Can AI know? No. Like, because you wanted to give the word intelligence to it. Yeah, yeah. But I wonder if I'd sort of like the word intelligence for myself.

[77:28] Yeah. I like the word... I'm trying to think, you know, the difference between knowing and knowing. And I kind of think, oh, right, yeah, a calculator knows the answer to my question, but it doesn't know.

[77:40] It doesn't understand. Yeah, yeah. When I see AI answer amazingly on the one hand and then be a complete idiot on the other hand, I kind of think, oh, you don't know.

[77:52] Yeah. You don't understand. You're not... It'll never do X. But someday it'll be good at both. And it'll do that...

[78:03] It won't trust though. No, yeah. But eventually it will do all of the things well. And we won't be able to point to the thing where it's weak. And then I do think it will deserve the intelligent label.

Yeah, no, maybe. Yeah, I'm happy to use the word know to describe what it's doing. It's a different kind of knowing. And there isn't that singularity. But there is a kind of knowing.

Yeah. And it's... I don't... Because it's deeper than just a calculator and it is... It is... It's not so much a memoriser as a principle extractor. I think one of the things it does, it's so much deeper than copy-paste.

[78:39] It extracts deep principles of understanding. You can't solve unsolved problems in maths unless you've picked up some really deep principles of understanding. And so I do think we've got to give it some of those words.

But anyway, I'll find some safer ground and talk about emotions. Because this is one of the key differences, right? Because we're beings, we have emotions. We have desires and all these sorts of things that I don't think it's fair to give to it.

[79:04] I mean, it doesn't feel anything. It's a bunch of maths. There's no feelings whatsoever. It can mimic what an emotional person would say, but it doesn't feel anything.

[79:16] But I would say that there are good representations of knowledge in the neural network. And I think they can be really useful representations to talk about intelligence. But anyway, we're getting deep into semantics, aren't we?

[79:27] It's always very hard to... And to be honest, the way you use your terms is allowed to be different to the way someone else uses their terms, as long as you define what your terms mean, right?

[79:40] Because we're all... That's what language is, I think, at the end of the day. Anyway, that's good. What about wisdom? Wisdom. Oh. I don't know.

[79:53] I am just probably more willing to give some of these words to AI than some people. I think, let's say you're in a really tricky situation.

[80:05] There's three complex people. They've all got overlapping interests. It's really hard to figure out the right course of action. And you throw it to ChatGPT and it gives you the best counsel you've ever gotten and it solves the problem.

[80:17] And you throw it to, like, a really wise person and even they can't think of an answer that good. You go, well, it's not wise. Well, all right. If you want to use your words that way. But I'm happy to...

[80:28] Personally, I'd be happy to say, yeah, that was a pretty wise bit of advice from ChatGPT. But we can disagree about that one. I might stop there.