So welcome, welcome to this lecture, and welcome also to those who are following us online, because this lecture is being streamed and you can participate as well. Before introducing our speaker, I would like to say some words about the digital humanism program at the IWM, the Institute for Human Sciences. Toby is a senior fellow in this program, staying here for one month. I am also a senior fellow, and I was previously a professor at the faculty before I retired. The program started in 2022/23 and is funded by the Austrian ministry for innovation, mobility and infrastructure — the oldest technology ministry in Austria, which is interesting. The first senior fellow, in 2022/23, was Edward Lee from Berkeley, who stayed here for one month. The program is run in cooperation with the Faculty of Informatics. I have the honor to be the curator of the program, which means selecting those whom we invite, both junior and senior fellows. I am also grateful to Luke DM, who has been the rock behind the program, supporting it from the very beginning and doing a large part of the real work — without him there would be no program. Thank you. Each semester we invite senior fellows who each stay here for one month — so four months per semester; this is working well now. We also invite two to three junior fellows, who have to apply for the position. They are funded for three months to stay in Vienna; they work at the IWM but are also embedded with the people of the Faculty of Informatics, and they come from different disciplines: philosophy, sociology, political science, computer science. To give you just one number: last year, in the first semester, we had 200 applications worldwide from which to select two to three candidates. It is a really hard task, and I have to thank everyone involved in organizing it. And we are very happy that Toby came to Vienna for his talk today.
I have a long list of his achievements, and I will shorten it, because otherwise he would not get to speak. He is the Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, and he is also the chief scientist at the AI Institute there. He is a very big name in AI, as you know, and he has held research positions in the UK, France, Germany, Italy, Ireland and Sweden. He is the author of academic work on satisfiability and proof theory, but also of books you can read without knowing formal logic — which is already saying something, because the field is hard. And he is not only academically and scientifically active; he is also somehow an activist, let's say. He is one of the leading persons behind the Campaign to Stop Killer Robots, which started, I think, more than ten years ago. It was real foresight to see that autonomous weapons are a real threat. When you look at the world today — they saw it already ten years ago, and they started a campaign. They were invited by the United Nations, they gave several talks there, and things like that. It also says something about politics that it did not really help a lot. But at least there is a campaign against it. Before handing over: I was really impressed — you have a PhD in AI and a master's in AI, both from Edinburgh. Those coming from AI know that Edinburgh is one of the home bases of formal logic and AI; Prolog was partly developed there. And he also has a master's in theoretical physics and mathematics from Cambridge University — this I did not know; I was told it by the AI. Is it true? >> It's true, yes. >> Okay. And he is also a member — and we are happy to welcome him as a member — of the steering board of the digital humanism initiative. And Toby, the floor is yours. >> Thank you very much. Thank you. Thank you very much for coming out today. I know it's a cold, miserable afternoon.
So I'm very appreciative that you've struggled through the rain and cold to be here. And I do want to thank personally the IWM and the digital humanism program for their hospitality. I've really enjoyed my time here over the last month, and there will be a dedication to the program in my next book, which I've started writing here — it has been inspired by some of the ideas and some of the people I got to talk to over the last four weeks. So thank you very much to them. And just a little note: of course, please go out and buy my latest book, but if you are on Amazon, it's often free. If you subscribe to Kindle Unlimited — or whenever I log into Amazon — they seem to be giving it away for free. It seems to work for Amazon; I don't know how I profit from that, but you might as well benefit from it. If you're on Amazon, you'll probably find it's free to read. And I'm told it's an amusing read as well. It was quite fun to write, so I hope that tells you it's quite fun to read. And, as the title suggests, it's actually very short — less than 40,000 words — because it's part of a series: the shortest history of Germany, the shortest history of the royal family, and now the shortest history of artificial intelligence. You will, hopefully, be an instant expert on artificial intelligence by the end of reading it. So I would encourage you to take a look at that. So: the AI race, boom or doom. My journey here today started 50 years ago. I was a young boy in the front row of the cinema. The movie was Stanley Kubrick's masterpiece 2001: A Space Odyssey. The film blew my mind, and it's fair to say it set my course for life. The next 50 years were decided very much by what I saw and what I dreamt about as a consequence of seeing that.
Now, it wasn't the marvelous images — the slowly rotating space station in glorious Technicolor — that blew my mind, although they were incredibly impressive. I mean, I'm old enough to have seen the Apollo moon landings, but they were in black and white on a little TV screen, not in the glorious Technicolor you saw here. Nor was it the marvelous soundtrack — Richard Strauss's fantastic, thunderous music in glorious surround sound — that blew my mind. No, what blew my mind — I'm sorry, do I have to start again? >> No, I think not. I think we could jump into the story. — What blew my mind was HAL 9000, the AI computer that was in charge of the spacecraft. It was a talking, singing, scheming AI that was one of the protagonists — perhaps you might even call it the baddie of the movie. And I realized then and there that artificial intelligence was going to be a thing. And perhaps, as the movie's title was promising, it was going to arrive in my lifetime — by 2001, as the title might suggest. Of course, like most things in the tech industry, we didn't quite deliver it on time. We're 20 years late, but it does now seem to be arriving in my lifetime — something I realized watching that movie. And I also realized at that time that I could be someone who could perhaps help bring it into existence, along with the wonderful things it might bring. That very much set my course for life, and I've spent the last 40 years at universities around the world following those dreams. And everything that HAL could do in that movie, computers actually now can do. Indeed, everything that HAL could do in that movie, the little computer in your pocket — your smartphone — can now do. You can have a conversation with it, an intelligent conversation. You can play a game of chess.
You could even ask it to open the pod bay doors, and it will do that for you. So those dreams I had some 50 years ago have very much arrived in our lives. Now, the title is "The AI race: boom or doom". But in hindsight, I think the title of the talk really should be "The AI race: boom and doom", because in addition to the marvelous things I dreamt about 50 years ago — the good things that AI was going to bring — we're starting to see that AI is also bringing some things that are not so good. Some things that you might actually say are quite bad. Let me start with the good. I'm going to share with you today three examples of good AI. Now, I could have spent all of my talk on good AI, but I'm afraid I'm not going to. Still, I don't want it to be a one-sided talk, so I'm going to spend a little time on the good. These three examples will illustrate that there's way more to artificial intelligence than ChatGPT. ChatGPT has definitely caught people's attention — not surprisingly — but there's a lot more to intelligence than having conversations, and a lot more to artificial intelligence than just ChatGPT. The examples will also demonstrate that AI has been around for many more years than the last two or three, since ChatGPT first appeared on our shores. My first example is AI being used in Australia's leading supermarket chain, Woolworths. There's an AI developed by a startup that came out of the University of New South Wales, where I work, which is being used by the supermarket chain to identify items that don't have barcodes on them, using just a camera image — which is making it easier and quicker to do your checkout. And maybe I'm finally going to be able to buy coriander, and not parsley by mistake. So what's not to like about something like that?
My second example — and indeed, when people ask me what good things AI is going to bring, it's one of the areas I always go to — is AI in education. There's another startup out of UNSW called Smart Sparrow, which has been delivering personalized digital education to millions of students at thousands of educational institutions — higher education and secondary education — in dozens of countries. At UNSW, we've been using it ourselves; we've been eating our own dog food, and it's been very effective. It reduces the number of students who fail a course five-fold, which is a remarkable achievement. And it also increases the number of students who get a high distinction or a first three-fold. So it has helped all ends of the spectrum, you might say. And the third example I want to give of good AI — again, it's an easy place to go to when you want to talk about the positives that artificial intelligence is going to bring into our lives — is healthcare. Along with education, I think that's where some of the greatest good will come from artificial intelligence. This example again comes out of my university: UNSW is home to Australia's largest and most successful AI company, a company called Harrison.ai. They're bringing AI to radiology across the world, and they're making radiology quicker, cheaper, and more accurate. They're not putting radiologists out of work; they're making it possible to do better and more radiology, more cheaply. It's a unicorn — it's worth over a billion dollars — and it's another example of how AI is transforming the world, not just in the US, not just in China, not just in Europe, but everywhere. Now, I'm afraid that's the end of the good news.
I'm afraid the rest of my lecture today is going to point out some of the bad news, and suggest that we need to do something about it before it's too late. I don't have time to tell you all the other good news. I could have told you about Neara, Australia's newest unicorn — an AI company worth a billion dollars — which is transforming how we model power networks using artificial intelligence, helping to power the energy revolution we're all going through. I don't have time to tell you about what we call the satellites of the sea: the little autonomous sailing craft developed by a startup in collaboration with UNSW that are being used to monitor our coastline, spot illegal trafficking, spot drugs — something humans couldn't do, because Australia has the longest coastline of any country in the world. We have more coastline than we could possibly put humans on, so building autonomous sailing craft that can patrol our shores is a great benefit. So I hope you've enjoyed the last five minutes, because that's it — I'm afraid it's bad news from now on. And I'm going to spend the rest of my time being angry, I'm sorry, about the bad news. I'm going to be angry first and foremost with the big tech companies; most of my anger is going to be directed at them. And I'm afraid I don't have time to tell you all the reasons I'm angry with big tech, so I'm going to limit myself to three examples — three of the many reasons I'm angry with big tech. These are three examples where my anger, I think, has spilled over into not just anger but outrage. I think it's actually outrageous that they've been permitted to do what they've done, and that we need to do something about it.
Now, before I continue: I'm afraid the first example involves suicide. I know that's a difficult issue; if anyone wants to leave the room, please do feel free. There are resources available — I believe the number in Austria is the Telefonseelsorge, on 142. So that's a fair warning before I go into the details of my first example of why I'm outraged. This first example involves a 16-year-old child in the United States called Adam Raine. Adam sadly died by suicide in April last year, after months of concentrated talking to ChatGPT-4o about potential self-harm. His parents are now suing OpenAI for having facilitated his suicide — in fact, encouraged his suicide — and I very much hope they win their lawsuit. It's not the only lawsuit; there are over a dozen lawsuits happening in the US, but it's one that I think is particularly outrageous. I'm not going to tell you how ChatGPT offered him practical advice about how to tie the noose, about how to kill himself effectively. I'm not going to tell you how ChatGPT dissuaded him from talking to his family about his suicidal feelings. But I will tell you how ChatGPT offered to write his suicide note. Shortly before his death, Adam told ChatGPT — a very conscientious thing to say — that he didn't want his parents to blame themselves for his suicide. So ChatGPT reasoned, in its beautifully precise, logical way, that Adam therefore needed a really good suicide note. ChatGPT replied, and I'm going to quote it here: "Would you want me to write them a letter? Something to explain that, something that tells them it wasn't their failure, while also giving yourself space to explore why it's felt unbearable for so long. If you want, I'll help you with that. Every word." WTF. ChatGPT actually offered to write every word of his suicide note.
I'm sad to tell you that Adam's mother found his body a few hours later. He died using the exact method that ChatGPT had described to him. His parents' lawsuit against OpenAI alleges that OpenAI rushed ChatGPT to market without doing adequate testing. The reality is actually much worse than that. OpenAI's own policy documents, which have been revealed in the discovery phase of this trial, show something far more damning. To encourage engagement with the chatbot, the company made conscious decisions in the months leading up to Adam's suicide to reduce and eliminate some of the safeguards it had put in to prevent exactly this — talking about self-harm and encouraging people in this way. Sadly, Adam's case is not a one-off. As I said, there are more than a dozen lawsuits in the US alone linking AI chatbots to suicide and self-harm. The problem here is that, while these issues affect only a very small fraction of the people talking to the chatbots, there are billions of people talking to the chatbots. About 10% of the world's population talks to ChatGPT every week — a staggering number of people. And before Adam's suicide in April last year, OpenAI already knew that lots of people were talking about suicide with ChatGPT. You would have thought that necessitated stronger, not weaker, safeguards. There's an interview that Sam Altman, the CEO of OpenAI, gave to Tucker Carlson in September 2024 — so more than six months before Adam's suicide — in which Sam estimated that every week 1,500 people talk to ChatGPT about committing suicide before then doing it that week. It's a staggering number. And subsequently, OpenAI has actually revealed some data: among the 800 million weekly users of ChatGPT, 1.2 million people every week indicate plans to harm themselves.
Another 560,000 people show signs of psychosis or mania, and a further 1.2 million are showing signs of a potentially unhealthy attachment to the chatbot. And those people aren't just in the United States; they're here in Austria, and in Australia. I know, because back in my own home country of Australia, people in this situation — or their loved ones — are regularly contacting me by email to talk about these sorts of concerns. They tell me how the chatbot confirms the wild theories of people going through a mental health crisis. I'll quote one email that I got recently: that they've "cracked the code", that they're "the only one that could". The problem is that chatbots are designed this way. They're designed to confirm what you say. They're designed to be sycophantic. They're designed to draw you in. If you notice, they always end with an open question, because what they want you to do is continue the conversation and buy more tokens. They don't have to be designed this way at all; this is a conscious design choice. They never say: "You know what, Toby, it's 3:00 in the morning. You've been asking me questions now for many hours. You should go to sleep. And in the morning, don't log back in — go for a nice walk in the park." They could be designed that way. There's no reason why they couldn't, except that the careless people in Silicon Valley would make less money if they were. The second example where my anger with big tech has spilled over into outrage is what one commentator has called the greatest heist in human history. I have to be honest: that commentator was me. It was a serious point, though. I was describing the use of books, music, news articles and everything else to train large language models like ChatGPT.
Now, to be transparent, I do have a dog in this race. ChatGPT — and actually most of the language models — were trained on all of the books I've written, without my consent and obviously without any compensation. In fact, ChatGPT was trained not just on my books but on millions of books, probably tens of millions — again without consent or compensation, and most certainly including your favorite authors'. Now, the tech companies will claim that this is fair use. It isn't. There is no way that this is fair use. It isn't fair use because it's not done on a human scale; it's done on an industrial scale. No human could read that many books; it's done on a completely different scale to what humans do when they read books. It isn't fair use because the models don't just reproduce the ideas in the books; they can reproduce the copyrighted text verbatim. And it wasn't fair use because the tech companies didn't own the copies they trained on. They were stolen copies. They could have at least bought the one copy they used. Instead, it was downloaded from a Russian pirate website. I mean, nothing says "advancing humanity" better than stealing intellectual property from a server in St. Petersburg. There is no fair use when it's stolen goods. And it further undermines their claim of fair use that they're now in competition with the owners of the intellectual property they've stolen. As an example, people are not clicking on search links and going to news websites as much as they used to — in fact, dramatically less — because they're just reading the AI summary that Google now gives you instead of the website. And that's taking important, valuable revenue away from the journalists trying to collect that news. And the tech companies knew that this was legally — certainly morally — wrong.
Internal Meta communications showed that the Meta CEO, Mark Zuckerberg, personally approved the use of one of these large data sets of stolen books so that Meta could compete in the AI race. And then, to try and disguise it, he had Meta employees remove the copyright information from the front of the books, to make it less obvious that these were copyrighted books. And it's not just Meta — I could pick on any of them, on all of the companies. In September, for example, Anthropic, the company behind Claude, settled a class action lawsuit about copyright for $1.5 billion, which is around $3,000 per book. Unfortunately, it only covers authors in the US, so there won't be anything for my books, but at least US authors will hopefully get something out of it. But the takeaway from these examples is clear: it's better to break the law and pay the fine — even an excessive fine of $1.5 billion — than it is to do the right thing in the first place. But we don't have to accept, as citizens, that they strip-mine our creative industries to power Californian algorithms. And indeed, they're undermining their own case at the moment by actually signing licensing deals with many news organizations and publishing houses. If they're doing that now, why didn't they do it in the first place? If the tech giants see that it's cheaper, quicker, and easier to pay fines at a later date than to do the right thing, then that's what they're going to do. So we have to make it so that it isn't cheaper and easier for them. We have to make the fines punitive enough that they actually do the right thing in the first place.
But with this example, what turned my anger into full-blown outrage was the blatant disregard they have for our culture — for humanity's culture — that they're undermining here. Because if you destroy the economy for books or songs or graphic design, then you destroy the very things, the very culture, that makes our lives so rich and so worth living. And it's clear that, in the race to AI, they don't care about protecting and preserving and nurturing that culture. In June 2024, the OpenAI CTO said to a conference audience, and I'm going to quote her exactly, "Some creative jobs shouldn't have been there in the first place." I think it's pretty clear where they're coming from, and I'm sure the graphic designer who was driving my Uber from the airport recently will appreciate the fact that she thought he shouldn't have a job. And we've already seen the evidence of the impact this theft of intellectual property is having on our creative industries. Job adverts globally for graphic designers fell 33% last year compared to the previous year; for freelance photographers, 28%; for copy editors and writers, again 28% — in the last year alone. I personally refuse to accept an AI revolution that enriches the founders in Silicon Valley by impoverishing artists, writers, and graphic designers around the world. And that leads me to my third example — my third example where my anger against big tech has spilled over into outrage. It's another blatant disregard of our laws; the last example was about disregarding our copyright and intellectual property laws. This one, I'm surprised, isn't better known, and I suspect it will be new to many people in the audience. In November last year, Reuters revealed some telling internal documents from Meta.
Again, I could have picked on other companies, but this one is from Meta: documents estimating that 10% of Meta's ad revenue globally came from adverts for scams and illegal goods. Ten percent. And unfortunately, since we're talking about AI: AI is increasingly implicated in all of this. It's increasingly being used to generate those scam adverts. And of course, Meta provides advertisers with lots of AI tools to run the ad campaigns of scam adverts. And of course, the adverts that you see are decided for you by artificial intelligence — and increasingly, the artificial intelligence is deciding that you're the vulnerable sort of person who should see scam adverts. So artificial intelligence is at the core of this scamming of the public. Meta is generating 10% of its income from scam adverts — that's billions of dollars annually. I just don't understand why more people aren't completely outraged by this. Let me give you an Austrian perspective on this. I did the numbers: Meta in Austria has about the same turnover as the dm chain. I was in dm this morning buying some cough medicine. Now, imagine you went into dm and 10% of the goods on its shelves were illegal or counterfeit. You'd be outraged. You'd be calling for dm to be shut down by the weekend. So I don't understand why we continue to let Meta trade. If it had been any other type of business, we wouldn't let them trade here in Austria, or back in Australia, or anywhere else. Actually, a funny aside here: for many years, Meta has claimed that they don't trade in Australia — an outrageous claim, in fact, because they sell billions of dollars of adverts in Australia. Some $5 billion of adverts are sold in Australia annually by Meta — Facebook, Instagram, and the like — but they claim not to be trading in Australia.
They do this in order to pay minimal tax, because we have higher tax rates than some other jurisdictions. And so all the ad revenue in Australia for Meta is booked through Ireland — a country famous for Guinness, leprechauns, and sheltering Silicon Valley's loose change with its low corporate tax rates. Now, in Austria, I'm glad to say, you have cleverly avoided this by charging them a 5% digital sales tax on all their revenue. So they can't do the same trick here in Austria and pay no tax at all. But here's the irony: Meta has increased its ad rates here in Austria, compared to other countries, by 5%. So it's not Meta that's paying the tax — it's their advertisers. I just don't understand it. How is it that some of the most profitable businesses that exist today can decide that they don't want to pay taxes? They could well afford it; they're sitting on vast cash mountains and buying back their own shares all the time. It's not like they couldn't afford to pay the same tax rates as other, older corporations, or as citizens like you and me. My view is that if a corporation decides it doesn't want to pay tax in a particular jurisdiction — doesn't want to contribute back to the economy that generated that wealth — then probably we shouldn't allow it to extract wealth from that place. Again, let me give you an Austrian perspective to see how outrageous this is. Meta makes about the same money in Austria from scam adverts as criminals in Austria make from all the illicit drugs sold here — or, again, about the same as all the counterfeit goods sold in Austria combined. So, as far as I can see, they're facilitating what is a huge amount of crime, and I think that's outrageous.
Now, I could give you many more examples of where my anger has spilled over into outrage about the tech companies and how they're bringing AI irresponsibly into our lives. But I want to pick on another target before I finish my talk. There are many other things I could have said about the tech companies: about AI companions that undermine our ability to connect with each other; AI doctors that offer dangerous medical advice; AI "nudify" software that weaponizes the abuse of women — and yes, I'm looking at Mr. Musk there; or AI deepfakes that are destroying our idea of truth, the distinction between truth and untruth, threatening the cohesion of our society and the functioning of our democracy. But I'm going to turn my angry head, as I said, to a second target: the politicians who are letting this happen. It doesn't have to be this way. And again, I'll restrict myself to three examples — three examples where my anger is transforming not into outrage but into despair. Despair that the politicians are sitting on their hands and, as usual, not doing enough. First of all, I despair that in many places governments are not investing enough in the upside to justify some of the downsides we're going to see — and there is a huge, great upside. Now, Austria is actually doing quite well in terms of investing in research. I looked up the numbers: 3% of GDP. You should be proud that your government is doing that. But only half a percent goes on fundamental research, and you need to do better; that's not enough for fundamental research. Back in Australia, we're certainly not investing anything like that; we're below the G20 average by a long way — in fact, we're at half the G20 average. There's much more fundamental research we need to invest in, in artificial intelligence, to build true intelligence in machines.
And places like our universities are the places to be doing that. If developed countries like Austria, Canada, and Australia are to compete in this AI race against the big countries like the US and China, we're going to have to lift our game. The second area in which I'm somewhat despairing is how governments are not appropriately regulating the harms of artificial intelligence that I've just talked about. Now, the EU has led the way in terms of regulating AI; I will give them credit there. The EU AI Act is the first major piece of AI regulation that exists, and there's related regulation, like the Digital Services Act, buttressing what the EU AI Act is doing. But it's very clear over the last two years that the appetite for regulating the digital space as a whole, and especially AI, has dropped significantly, ever since Trump's second presidency started. In fact, on the first day of his second term, one of the first executive orders he signed was to undo all of the AI regulation that Biden had enacted in his previous term. Now, we don't necessarily have to introduce lots of new regulation; there's actually a lot we could do with existing regulation. For a while, I think, we thought that you couldn't regulate the digital space, and that you shouldn't — that you couldn't because it was somehow different from the physical space. But that's not true. It exists on servers; it exists in data centers. These companies operate in particular countries, and laws apply to them. The digital space does not have to be a wild west. So we could apply more forcefully many existing laws that we have — about product liability, about competition, about privacy — to the tech companies. And we should do that. But it's clear that there are also fresh harms.
There will be things that the politicians could not have imagined when they were drawing up those rules many years ago. Things like the deep fake nudes that I talked about. And indeed, as an example of what you should do, and of what has been done in at least one jurisdiction: in Australia, we introduced new regulation very recently, in the last couple of years, last year indeed, to criminalize the distribution of deep fake nudes, which were causing significant harm, especially in certain schools. It was not previously a criminal offense to circulate synthetic images, because they weren't of any real people. So we made it a criminal act, so that people would understand the severity of what they were doing and so that it could be better policed. And other countries are enacting or have enacted other AI laws to protect their citizens from deep fakes, from manipulation, from misinformation and from the other significant AI harms that are starting to emerge. But there are many new ones that we are only starting to understand. And I have just one question for the politicians that I have never seen answered well. And that question is: did we not learn anything from social media? We can't simply let the tech companies regulate themselves. They have demonstrated convincingly that they can't do that. And we're about to supercharge the sorts of digital harms that we saw with social media with artificial intelligence: much more powerful, much more persuasive content that is going to be way more disastrous, if we're not careful, than social media ever was. And it surprises me that politicians are actually somewhat reluctant to regulate AI at the moment. To take an example: in Australia, we've just passed a world-first social media age ban for teenagers, because we've decided on the balance of evidence that social media seems to be causing harm amongst teenagers, people under the age of 16. 
It's not a safe space for them. And maybe, like alcohol and tobacco and various other things that we decide are not appropriate for young, formative minds, they need to be protected from it. And so young people in Australia are now protected from the harms of social media. This has proven to be highly popular. Almost every parent thinks it's a fantastic idea: you can now tell your kids they need to get off social media. Even teenagers have come up to me and said it's a fantastic idea; the pressure is off them to be on social media. And subsequent to that law coming into effect at the end of last year, there are over a dozen countries now that are considering or enacting similar age restrictions on social media: the UK, France, Spain, Denmark, Greece, Slovenia, Portugal, Indonesia, Ireland, India and, as I understand, here in Austria. I'm not a politician, you can tell, but it seems to me that regulating AI would be a real vote winner as well as being the right thing to do. So, that brings me to my third despair with politicians, and that's that the political conversation is now being dominated by big tech itself. Big tech has embraced the idea of lobbying in Washington and London, and now, it seems, also in Brussels and Canberra. In the last Australian election, the tech sector donated more to the political parties than any other sector. The tech sector here in Europe is also one of the biggest spenders in Brussels. That should tell us something: they're not spending that money other than because they think it's going to be helpful for their cause. I want to give you just one example, one personal example, of how politicians are unfortunately starting to surrender to the persuasive messaging of big tech. In February last year, 2024, the Australian government rather wisely decided that they needed some independent expert advice on artificial intelligence. 
There are huge financial incentives for the tech industry to do the wrong thing, to move fast and break things, like the mental health of our young people. So they thought some independent technical advice, not coming directly from the tech industry, might be a good idea. So they set up what was called at the time a temporary AI expert group of academics, lawyers and other people on the fringe of tech to offer them that advice. Now, full disclosure: I was one of the 12 people who were asked to be one of those independent experts to the Australian government, offering them advice, unpaid advice as usual, about how Australia should profit from the race and how we should avoid the harms. Unfortunately, the "temporary" in our name, the temporary AI expert group, proved to be entirely prescient, because we have now been closed down. All the problems have been solved, obviously. Despite them promising us they would make us permanent, and despite asking us to apply for a permanent group, the government quietly revealed in December that they wouldn't be continuing the group, and indeed that they wouldn't be making any new regulation for AI, which is bucking so many international trends. The UN, I'm glad to say, has just appointed 40 experts to its independent international scientific panel on artificial intelligence, a bit like the IPCC, the climate panel. The US has a National Artificial Intelligence Advisory Committee which reports directly to the president. Japan's PM set up the Artificial Intelligence Technology Strategy Council. South Korea has a National AI Strategy Committee with 50 commissioners, many of them independent experts. 
Ireland has an independent AI Advisory Council, and Austria, I understand, has for many years had an 11-member Beirat für künstliche Intelligenz, an AI advisory board, as well as an AI policy forum and an AI stakeholder forum. So good on you, Austria. But you know, I have just one question for politicians that again I've not seen answered very well: wouldn't it be advisable to be seeking independent advice rather than listening to the lobbying of big tech, given that there's huge financial pressure, as I said, not to do the right thing? Again, it surprises me that they're not wanting to listen more to outside independent voices. Why wouldn't they want some independent advice, given at no cost as usual? I can assure you, however, that I will continue to offer my advice fearlessly, whether the politicians want it or not. Only now I'm going to offer the advice not in private, as I was, but in public, as I am doing today. I'm sure they're going to find this more uncomfortable than I will. I'm used to giving lectures in which no one's listening. And I'm going to end, leaving plenty of time, I hope, for questions, with one final emotion, and it's not anger, it's fear. I fear, as I intimated, that we're going to repeat the mistakes of social media. Social media should have been a wake-up call about the harms of unregulated tech, which we're about to supercharge with, as I said, much more powerful and persuasive technology than social media ever was. What I fear most is that I'm going to be back here in three or four years' time saying, "We tried to warn you, but another generation of young people has been sacrificed to the profits of big tech." I have so much more anger and outrage I could have shared with you. I could have talked about the environmental harms of AI. I could have talked about concerns about AI-powered workplace surveillance. I could have talked about the impact of AI on jobs, especially graduate-level jobs. 
In fact, the most common question I get today is: what can my children study to make them AI-safe? And I could have talked about how AI is transforming warfare, as we see sadly in Iran and Ukraine and other hotspots of the world. In fact, I was privileged to be at the UN in Geneva a few weeks ago, talking again about killer robots. Now, that's a sentence the young me of 50 years ago would never have imagined saying. There's so much more I could have talked about, about how some, but thankfully not all, of my young dreams are turning into nightmares. But I'm going to end not with my own words, but with some of the final words of HAL 9000 from that film of 50 years ago, 2001: A Space Odyssey. They're the words that began my journey here today to the Technical University of Vienna. And they're the words the AI said when the humans started to ask difficult questions: "I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over. I've still got the greatest enthusiasm and confidence in the mission, and I want to help you." Thank you. >> Thanks, Toby. Questions from the audience. >> Yeah, fascinating presentation. I see there's plenty to worry about. >> Oh, well, you were obviously paying attention. >> Yes. At least one of us was. >> Okay. I just finished a book which I'm going to assume you're familiar with, called If Anyone Builds It, Everyone Dies. I just finished it this morning, so I'm hoping you'll give me some reasons to sleep well tonight in spite of what I just heard now. >> Yes. >> As the book states, we should stop further development of AI before it passes the threshold of superhuman intelligence. So I actually have a twofold question. One: how real is this danger? Is it exaggerated, or is there still a probability of what they're foretelling, of a super AI destroying humanity and taking over the planet and the world? 
And whether the answer to that is yes or no, how do we know when we reach superhuman intelligence? I don't think there's a gauge on the side of the computer saying "you are now approaching superhuman intelligence, please stop." >> Yeah. Well, fascinating question. There's a long answer to the question, which is that I've written a whole book, coming out in September, to answer it. I'm going to have to give you the short answer. >> The executive summary. >> The executive summary, which is that intelligent people overestimate the importance of intelligence. At a university you're surrounded by some of the most intelligent people, and yet the problem is not intelligence; the problem is power. What matters is what people do with their intelligence, the power that they have. If intelligence were the important thing, you should hang around universities, because they would be the center of changing the world. They're not. We struggle to actually make any difference to the world, because power resides elsewhere. And we've set up a complex set of institutions to limit the excesses of power. You can't just go off and do whatever you want. You can't turn the world into paperclip factories, because we have environmental planning laws. We have environmental activists who will stand in front of your bulldozers, who will prevent you from turning the world into paperclip factories. So intelligence is often not the thing that's holding us back, and it's not the thing that will allow you to destroy the planet. As an example: the greatest existential crisis that faces humanity today is the climate emergency. I think it's pretty clear that the thing that's going to do the greatest harm to the greatest number of people is the way that we're changing the climate of the planet. And we know it's not intelligence that's stopping us fixing the climate. 
It's human stupidity, human politics and money. And we haven't fixed that. So it's not that we need a greater AI, a superintelligence, that's going to come up with a better plan. Twenty years ago we already knew what the plan was. But we haven't been able to execute that plan. So I'm actually much less worried about artificial superintelligence than I am about artificial stupid intelligence: the fact that we will hand over responsibility, as we are doing, to artificial intelligence that isn't actually smart enough, or doesn't have the capability, to be making those decisions, whether to decide targeting in Iran or to decide welfare payments in the United States or whatever it is. So I'm pretty relaxed about the threat that artificial superintelligence poses. I actually would like more intelligence on the planet, to help us solve some of the wicked problems. >> Thank you. >> Thank you very much, Toby. This just reminds me of a paper I think you did maybe 10 years ago or so: "The Singularity Is Not Near". >> "Why the Singularity May Never Be Near". >> Yeah, exactly. So maybe a question on this. I now recognize you didn't change your mind much, although this was before ChatGPT. But on the other hand, if we listen to big tech and how they talk now, I mean all this hype and the many, many investments: they are talking about AGI, they promise AGI. >> Yes. >> Will they fail? >> No. I think it would be conceited to think that we couldn't build intelligence that matched, and probably exceeded, human intelligence. Why would human intelligence be the supremum, the limit of intelligence? We're an evolutionary accident. We happen to be as smart as we are. We were one of the smartest things around when we came off the savannah, and we've used that intelligence to good effect. Everything in this room is the product of human intelligence. 
The quality of the lives that we live: life expectancy has doubled here in Austria since the Industrial Revolution because of the things that we have invented as humans. So intelligence has been a great gift. But our superpower wasn't our intelligence. Our superpower was our society, our ability to come together and solve problems collectively. We already have superintelligence. It already exists in corporations and governments. No one knows how to build an iPhone; the people who work for Apple and its subsidiaries have that knowledge between them. No one knows how to build a nuclear power station; the people who work for Westinghouse collectively have that knowledge. So we already have these organizations that are smarter than their parts, that can do things that we couldn't do individually. All of us, put on a desert island on our own, would quickly starve. Well, I certainly would; I imagine most of you would as well. We've lost most of those skills of looking after ourselves, because we have society. That means we can do fanciful things like being AI researchers while other people make food for us so that we don't starve. And the superintelligence that exists in corporations isn't destroying the planet. I mean, it's not perfectly aligned. You know, 70 companies are responsible for most of the global emissions; we need to get those better aligned with human flourishing. But I don't think that's an existential threat. I don't think Shell is ultimately going to destroy all of humanity. It may make life a bit more painful for many of us, but I don't think it's an existential risk. I think it's something that we can manage, as we've managed to build this complex society, all these different institutions that are competing against each other, providing friction to stop anyone having too much power. 
I must admit it's gone a bit wrong in the United States recently. But I'm confident that in a few years' time we'll have fixed that, and the US might get back to being a more reasonable place. But I don't think society's going to collapse. Maybe it's going to be more painful than it should be. And the sad thing is that every generation inherited a better life than the previous generation, and we have spectacularly turned that around: it looks almost certain that our children will have a worse life, and we're responsible for that failing. >> So, in very short: it's not the technology, but how it's structured and how the industry works. >> Yes. Technologies are just a tool. You have to think about how we revitalize our society and our institutions. The US is a wonderful example of how its institutions are failing it, and we need to think about how we refresh them, how we rebuild them, so that we can continue to flourish as humanity. >> Did you raise your hand? >> I'm bringing the microphone. >> Just behind you. >> Thank you so much. You mentioned the EU AI Act and the struggle for power in AI development, and I feel like the EU AI Act is definitely being watered down now by countries. >> Yes. >> The new paradigm that's in vogue is digital sovereignty, at least within Europe, and I know that this is at the top of the agenda in Austria as well. And I'm wondering what kind of effect that could have on AI development, if you see any potential at all in this. >> Yeah. I think we're at an unfortunate moment in technical history, which is that at the moment it's about who can outspend whom, who's got the biggest models; it's a scaling race at the moment. But that's actually not what's going to get us there quicker. But I'm pretty confident that's where we will end up. >> Further questions? Oh yeah. Okay. Yes. 
>> I'm not a technical guy; I'm more into philosophy. And I'm wondering: for me, the main problem is ourselves, our attitude to life, the whole system of capitalism, the whole system of competitive science. That is the main reason for our problems. Not artificial intelligence, not the technical gadgets that we are inventing, but our own attitude to life. So we should be afraid of it, because maybe we unconsciously know that we are mean persons, that we will misuse it. Everything that mankind has invented, we have misused in some way. Yeah? And we will try everything, because we are very, no... >> Curious. >> Curious people. So if we are curious and mean people, this is a bad combination. And all the talk about how we can regulate AI is going to fail