    Podcast

    Worried about AI? Byron Reese is 93% Sure You Don’t Have to Be, with Byron Reese, Futurist and Author of “We Are Agora”

    There is a tremendous amount of trepidation and hope being invested in AI right now, with trepidation often winning out. Is all this angst really necessary? We turned to Byron Reese, an endlessly provocative futurist and author of the newly released book We Are Agora, to provide a long view of transformational technologies through the ages, and therefore the positivity we can take into a sometimes uncertain future. Grist for all the transformation cheerleading you do in your careers and lives.

    Transcript

    Our transcripts are generated by AI. Please excuse any typos, and if you have any specific questions, please email info@digitalshelfinstitute.org.

     

    Peter Crosby (00:00):
    Welcome to Unpacking the Digital Shelf, where we explore brand manufacturing in the digital age.
    (00:16):
    Hey everyone. Peter Crosby here from The Digital Shelf Institute. There is a tremendous amount of trepidation and hope being invested in AI right now, with trepidation often winning out. Is all this angst really necessary? Rob Gonzalez and I turned to Byron Reese, an endlessly provocative futurist and author of the newly released book, "We Are Agora," to provide a long view of transformational technologies through the ages, and therefore the positivity we can take into a sometimes uncertain future. Grist for all of the transformation cheerleading you do in your careers and lives. Byron Reese, welcome back to the podcast. We're so happy to have you here.
    Byron Reese (00:59):
    I'm so happy to be here. Thank you for having me.
    Peter Crosby (01:02):
    Of course. We first engaged with you around your book Stories, Dice, and Rocks That Think: How Humans Learn to See the Future and Shape It. Then we reached out to you recently about your new book, We Are Agora: How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future. Those subtitles are the things that really grab me, I've got to say. So thank you for those. They really make one want to read it. Those are awesome. But when we had our prep call for that podcast, which will be coming several weeks from now, we discovered that we immediately had to have you on to talk AI, because that's what we do in this world right now, certainly in tech. So I wanted to start with a quote that I saw on your website: "Just as electricity and the assembly line weren't bad for workers in spite of shrill predictions otherwise, AI and robots won't be either." You say, in fact, they will create so many new jobs that our bigger problem will be a labor shortage. What makes you think so?
    Byron Reese (02:05):
    Well, where does one start with that? I've spent a long time writing about this and working on this problem, and I started thinking about the half-life of a job. It took me a long time to work all this out, a lot of historical records, but I think it's about 50 years. I think every 50 years we lose half of all the jobs. I think that's been going on for 250 years, and we'll continue to do so. I don't see any evidence it's getting any faster, and if anything, I could argue that it's much slower. And the reason is, I used to have a podcast where people came on and made all these dire predictions. It was always five years out. Five years out, we're going to have all this massive unemployment and we're going to have all these jobs lost and it's going to be terrible.
    (03:04):
    Now, this was more than five years ago, and here we are. And to this day, I cannot think of one job that has been eliminated. Not one, not even one. The world went all crazy. There was an article that said, okay, we're going to automate trucks, and then the whole system's going to fall apart, because once the truck drivers are automated, then the diners go out of business, and then all the support things go out of business, and then it all cascades and it all falls apart. And it's more of the same. And I think I know why. I used to be cynical. I used to think it was just the media saying, what in your tap water is killing you? Tune in at six to find out. I used to think it was that, and I don't think that anymore. I think they seize on it, but I think the people who say it really do believe it. And here's why I think they do. If you were to have gone back to 1993
    Rob Gonzalez (04:01):
    Ancient history,
    Byron Reese (04:02):
    Correct, the very early beginning of the internet, 1994, 1995,
    Rob Gonzalez (04:07):
    The wheel, fire,
    Byron Reese (04:09):
    The 1990s, and you showed somebody a browser of today and you said, in a quarter century, billions of people are going to be on this thing. What is that going to do to jobs? They would say, correctly, oh man, it's going to be terrible. All the travel agents are going to go out of work. All the stockbrokers are going to go away, because people will just buy the stocks themselves. The shopping malls are going to close, because people will order stuff online. The newspapers are going to shut down, because people will get their news online. The post office will handle far fewer letters, so they're going to contract. And on and on and on. The yellow pages? They're gone; people just look it up online. And you know what? They would've been right about every single one of those. But what nobody would've said is, oh, there's going to be eBay, Etsy, Airbnb, Uber, Amazon, and a million other companies.
    (04:59):
    So you see, you can always see what these technologies are going to destroy, but nobody can see what they're going to create. When a new piece of technology comes out, we're only able to conceive of it compared to what we already have. So when TV came out, they said, what's TV? Oh, it's radio with pictures. And so the earliest TV shows were just people standing reading scripts, like radio plays. So it takes time. There's a reason Uber wasn't started in 1997: it takes time for the technology to sink in. It's understandable. People can see what it will destroy and cannot see what it will create. However, it's also true that we've lost half of all the jobs every 50 years and we've maintained full employment. Now think about that for a minute. Half of the jobs lost every 50 years, and we always have full employment.
    (05:57):
    And so you say, well, what's going on there? What does technology do? Technology basically increases human productivity. And I'm going to make a blanket statement: that is always good for people. Always. If you don't believe that, then you should advocate for a law that requires people to work with one arm tied behind their back. Now, if you did that, you would've just created a whole lot of jobs, because you'd need more people to do anything, to trim your hedges and mow your yard. But guess what? Those jobs won't pay very much, because you just destroyed everybody's productivity. So if technology increases worker productivity, that's like giving a person another arm. And with AI, it's giving a person another brain. Anything that device can do, all of a sudden I can do. So, I'm about to wrap up here.
    (06:54):
    What people say, though, is, well, that's good in theory, but let's be honest, here's what really happens. The world is full of jobs, and some are high-tech, high-paying jobs, and some are low-skill, low-pay jobs, like an order taker at a fast food restaurant. And they say, the problem is that technology creates new jobs way up at the top, like a geneticist, but the jobs it destroys are down at the bottom, like the order taker at the fast food store. And then they always say a variant of this: do you really think that order taker has the skills to become a geneticist? Can you teach a coal miner to code? And it's like, no, that is not what happens. What happens is a college biology professor becomes a geneticist, and oops, there's an opening at the college.
    (07:44):
    So the high school biology teacher gets hired on at the college. Now the high school has an opening, so they hire a substitute teacher. And so on, all the way down the line. The question isn't, can the people whose jobs are destroyed do the jobs of tomorrow? The question is, can everybody in this country do a job a little harder than the job they have today? And that is 250 years of economic history. Technology creates jobs at the top, destroys ones at the bottom, and we all shift up a notch. And that's why you have rising wages and full employment for 250 years while you're destroying half the jobs. It can't happen. It just cannot happen that we lose jobs.
    Rob Gonzalez (08:20):
    Two of my favorite stories along the lines of what you're talking about: one is the invention of the ATM, the automated teller machine. When this came out, this predates even 1993, if you can believe it, but this
    Byron Reese (08:37):
    1979, I believe
    Rob Gonzalez (08:40):
    That's right. And so when it came out, people were talking about how there wouldn't be any tellers anymore. And at the time, I forget what the number was, but a decent percentage of the workforce was working in banks as tellers, and this is before the widespread usage of credit cards and debit cards and things like that. So people were going to the bank all the time to deposit checks and do lots of the kind of mundane banking activity that we take for granted today. And so the ATM comes out, and they think, okay, well, there's 3% of the population, or 2% of the working population or something like that, that are just going to be out of jobs in 10 years. ATMs are going to do all the work. And what's happened is, actually, this is the year we're recording this, in 2024, the gross number of tellers, like human tellers at banks in the United States, is the exact same as the year that the ATM was launched.
    (09:37):
    So it's possible that without the ATM, there would be strictly a larger number of tellers, but it's not like they've destroyed the jobs that were there. They just sort of allowed for new types of jobs, jobs in the credit card processing industry and so on and so forth. And another one is a more recent one, which is when index funds were invented and popularized and index funds are wildly cheaper to manage than actively managed mutual funds. And in the eighties, mutual funds were all the rage. It was a way to have a diversified portfolio without having to do the stock picking yourself, but you had to pay a couple points to the managers of the fund. Index funds come out and you're talking about paying just a couple basis points. They're just buying the market so they can be automatically managed and they're just a lot cheaper from a management fee perspective.
    (10:27):
    And therefore it turns out over time they just kind of perform better than mutual funds. So people thought, okay, well mutual fund managers, they're gone. This is a dated industry. And it turns out, no, they're still doing lots of mutual funds. The mutual funds just get more complicated. They move into working in fields like private equity or they just embrace the fact that what they're doing is just for funsies and they create ETFs, like ETFs that are two times Nvidia, right? And so levered bets on Nvidia, if you want to go on Robinhood and do levered bets on Nvidia, you can get this ETF. And so that's what the mutual fund managers are now doing these days. So it's not like anyone lost jobs. The jobs evolved and it freed people up to do more levered activities than simply cashing checks. They do more customer relations in the ATM world.
    Byron Reese (11:17):
    Absolutely. Rocket Lawyer didn't eliminate lawyers. TurboTax didn't eliminate tax accountants. The open source movement didn't eliminate programmers. What is happening, though, that's very interesting, is that those jobs transition to become relationship-based instead of transactional. You see, any job a machine can do, if you make a person do that job, we have a word for that: it's dehumanizing. You tell that person, I don't want you to be a human being. I want you to be a machine that I do not own. And those are terrible jobs, for anybody to be regarded as a machine. So what happens? Nobody wants to have a relationship with a computer. So these jobs get pushed into the relationship thing. You see, in a world of more and more choice, you say, why are there so many pasta sauces at the grocery store? It turns out the answer is because there's a plurality of perfection. You like chunky, you like smooth, you like this, you like that. And in a world of ever more choice, trusted guides become valuable. So that bank teller now says, hey, why not a student savings account? Or, do you want to refi your home? That open source programmer customizes the code. The lawyer takes your basic will and then customizes it. And it becomes more about people.
    Rob Gonzalez (12:40):
    Yeah, speaking along those lines, I think one of the interesting things about AI is, historically, people, I think, would broadly agree with you: if something is a boring, repetitive task and you could get a machine to do it so that the human can do the non-boring, creative, higher-order stuff, then that's a trade we should make every single day. In broad strokes, to your point, over 250 years, that's mostly the trade we've been making. The fear with AI is that it can do some of these higher-order things, right? AI can write decent articles for blogs, and that used to require human creativity. We didn't think that AI could write blogs before. And so there's negative fear out there. I don't know if negative fear is the right expression. There's concern out there that maybe this time is different. And a lot of DSI members who listen to this podcast, their companies are talking about legal risk with AI.
    (13:47):
    If you're using AI chatbots and other models, are they sort of learning from your interactions and taking your IP and building it into the model and democratizing your secret sauce? There's fear from media companies that the web is simply going to be wiped out, because AI will provide the answers rather than people going to search and then going to specific websites to learn. Our news sites, do they have a future if AI is simply scraping the news and giving answers and giving summaries and all this sort of stuff? So these are types of activities that I don't think anyone thinks of as necessarily mundane or rote. So what do you think about these fears? I mean, do you think that the angst that companies have is misplaced, or do you think we should be full speed ahead this time?
    Byron Reese (14:34):
    Yes. I mean, all of that you could have said about the internet in the nineties. Oh, the internet's going to make all of these things obsolete, and the New York Times will go out of business, and all of these things. But I think of it this way. Again, you're looking at just one half of the equation. It's true that it can write a mediocre post, but it can't do anything exceptional. It can't do investigative journalism. Don't ask it to drive a car, right? You can teach a 16-year-old human in 40 hours to do a passable job driving a car. We have spent untold billions of dollars trying to teach AI to do that, and it can't. The universe of things it cannot do is vastly larger than the tiny little use cases that you just gave of things that it does do.
    (15:26):
    Now, the reason I think, though, that those fears are irrelevant, I have to tell a long story here, long in the time span that it covers. It begins 4 billion years ago, I'm afraid, so I've got to get comfortable. No, I'll speed through the first several billion years. So the earth gels, and life immediately forms, which is a big mystery, by the way, why it formed so quickly and has only formed once. And that life was DNA-based; there are reasons we know that. And for three and a half billion years, there was one place data could be stored on this planet, and that was in DNA, and it is just data storage. It had 600 meg of data storage, a four-letter alphabet, and because of that, you couldn't read and write to it except in geologic time, a 10,000-year kind of timeframe, hoping for some little mutation.
    (16:26):
    Then, about 500 million years ago, we evolved brains, and a brain was the second place to store information. It was very fast, and you could write to it very quickly. And when you got brains, that's when everything, the Cambrian explosion, that's when everything just took off. Then 5,000 years ago, we got a third place that we could store information. We still could store it in DNA, we still stored it in our brains, but we learned how to externalize it: we could write, and we could store it externally, but it was very expensive. And that tiny innovation, that ability to store more information, gave us civilization. That's when everything, cities and all of that, formed. Then Gutenberg comes along and he says, I can do it cheap. And that gave us the Enlightenment. That gave us the modern era. Then you keep going forward, and we got the internet, and we said, wow, we're now going to digitize it.
    (17:16):
    We're going to take the 26-letter alphabet and make it two letters, zero and one, and we're going to make it searchable. Now, when we put all that knowledge online, we get a big boost. But there's a huge problem. Libraries and the internet are kind of like that warehouse at the end of the first Indiana Jones movie where they put the Ark. You can't find things in it. You see, when you do a Google search and you say, what's the difference between a cold and the flu, Google says, I've got 22 million answers for you, and here's the first one, and here's the second one. 22 million. But you don't want 22 million. So all human knowledge is still fragmented in 50 billion different silos. And people, as you know, die. When you die, everything dies with you. It's a big reset.
    (18:03):
    Only a few little things remain. And so it's almost like every generation, we write stuff down and we know it, but there's this huge reset. Humans learn and then die. They learn something, then they die. Then the next generation learns, and they die, and they die, and they die. We don't ever progress. So what we're doing with all these sensors that we're connecting to the internet, and with things like these large language models, is we're developing an actual memory for the planet, where every cause and effect is recorded. What those large language models do is consolidate all information. You ask ChatGPT, what's the difference between a cold and the flu, and it's going to give you an answer. The answer. And so we are about to have this amazing leap forward, because every time we get other places to store information, we get these quantum leaps, and we're about to have the biggest of them all, because we're going to take all human experience and put it in a single knowledge base that you can query.
    (19:00):
    So sure, come to me and say, is it going to write inferior blogs and reports and sports stories and all of that? Sure. But what it will do is make everyone in the future wiser than anyone who has ever lived. The data, the information, every cause and effect in your life becomes something that makes other people's lives better going forward. And so it's bigger than any of these small worries that we have, these transitional worries, because in the end, it's a productivity tool that makes people more productive. All those people who are threatened by it will figure out how to use it, and they'll be able to do more. But the big thing is, when it consolidates all human knowledge into a single knowledge base, we're going to the stars with that.
    Rob Gonzalez (19:47):
    I think that's, sorry, just responding to that, though. One of the counterarguments to that I've heard, I'm not saying I agree with this, but a fear is that the giant human database is an inherently centralizing force, and so there will be a winning database. And you said at the beginning, it's not like the internet bankrupted the New York Times. Well, a lot of newspapers went bankrupt, and they didn't go bankrupt because of the internet, exactly. They went bankrupt because all of a sudden they were all competing with the New York Times, and the New York Times won. The New York Times is a much bigger newspaper than it was in 2000, but the Chicago Tribune is gone and the LA Times is gone.
    Byron Reese (20:39):
    But to jump in, could you really argue, back in the day when we had three TV channels, that somehow the internet isn't ultimately empowering to people? There are 50 billion websites you can go and look into. In the end, it empowers people, doesn't it? It doesn't restrict choice.
    Rob Gonzalez (20:59):
    I mean, I personally tend to agree with that, but I'm just giving the counterargument that I'd like you to respond to. One is, if you look at newspapers, they all kind of consolidated onto just one, which is the New York Times. And another thing that the internet did is it sort of polarized the conversation in the media. So television and newspapers in 1990 were all kind of centrist, and they were all kind of centrist because, a little bit, there wasn't that much choice. If you lived in New Jersey and you wanted to read a newspaper, you had the New York Times. The New York Times only had a limited geographic area where its trucks could reach from 4:00 AM. And so in order to maximize their revenue, they had to produce a paper that appealed to the most number of people within that geographic area, which meant that they couldn't be too far left or too far right.
    (21:51):
    They had to sort of be centrist, and they had to appeal to a broad number of people, so that more people in their limited geographic reach could subscribe to them. With the internet, they don't have to worry about that. They can just go all the way left and compete with all the papers that are on the left, and consolidate all the readership that wants those papers on the left, and the New York Times makes a ton of money and they win. There's an argument which is, I'm not so sure the world is better off if you've got a paper of record that's kind of a monopoly paper, with a strong point of view on one end of the spectrum, compared to the world that we had before, which was that there was a little bit of competition among the papers and they all tended to be a little bit more centrist.
    (22:33):
    And so with the large language models, to your analogy of having one giant human database, if that is indeed where this is going and it's a centralizing force, wouldn't a concern be, well, what's the point of view of this centralizing force? If there's one model, what perspective is it going to offer? Is it going to be from a US political perspective, left-leaning or right-leaning? Is it going to be more like a European plurality of points of view? How is it going to represent all of these different perspectives within one model? Is centralization of a model even something that we want to go after? Do we want plurality? So I think there are people that are worried about this being kind of more of a centralizing force, and less of a force which, as you said about the upside of the internet, sort of empowers and enables everybody. Maybe it's not empowering and enabling if it's sort of coercing one perspective. Does that make sense?
    Byron Reese (23:36):
    Of course. Of course it makes sense. And I'll say that the same tools that can be used to scan all the medical journals and find early indicators, new ways to discover cancer, can also be used by authoritarian regimes to find dissidents. Authoritarianism has always been limited by the fact that you can't follow everybody, and you can't listen to every conversation, and you can't read every letter. With AI, you can do all three of those. Cameras everywhere: you can follow everybody, you can read every email, and you can hear every phone conversation. And so, very much so, the price of liberty is eternal vigilance. And in nations where there's strong rule of law and a strong tradition of that, we just have to make sure that we ensconce those kinds of protections in the laws.
    (24:29):
    But it is a tool, to be sure, to be used by oppressors. Nobody can argue against that. The thing I would say, though: it is also true that the promise of the internet was that you could find your tribe. And what we didn't realize is that that promotes tribalism. And that has happened, to be sure. People find their tribe, and sometimes it's not good. It does divide people. What we need to remember, I think, is that this technology is brand new. Social media is like two decades old. There's no reason we should be good at it yet, right? I mean, when the printing press came out, what did they do? They started reprinting religious works, and then they wanted new stuff. They knew they wanted novel stuff. They're like, yeah, novel, we'll invent the novel. And that's actually where we get the word. And so again, it's the same thing when these technologies come out.
    (25:31):
    It's not going to be our generation. It's not going to be people who remember before the technology that are going to figure out how to use it. It's going to be people who have no memory of the old way we did it. So it is not written that there's going to be one Uber. I said there's going to be one knowledge base; that might just mean there's going to be one set of data that everyone can access any number of ways. But I don't know. I don't know how it will shake out. I do fundamentally believe in people, though. There was a time in our distant past where we got down to 800 mating pairs of humans. 800. We were an endangered species. And you say, well, how did we get through that? So 80,000 years ago, we have no walls, we have no writing, we have no knowledge, we have no this.
    (26:18):
    How did we get through that, our most vulnerable time? Through rigid utilitarianism, dog eat dog? No. In our remains, we find that people back then took care of old people; we took care of people who were injured. You can tell by the bones. So at our most vulnerable, we were, I think, fundamentally good people. And you have to have faith that it isn't going to be that 80,000 years ago we got through all of it, and what got us in the end was that we had too much information. I mean, that's just not going to be our epitaph. We're much better than that. I think only 7% of people are bad.
    Peter Crosby (26:57):
    That's a specific percentage.
    Byron Reese (26:59):
    I know. I'll tell you where I got it, why I think it's 7%. There was a guy who was trying to figure out how to help turtles get across a highway, and he had this big rubber turtle he put beside the highway, and he found 7% of people swerved to hit it. A harmless turtle trying to cross the street, and 7% swerved to hit it. So that's my number: 7% of us are bad and swerve to hit the turtle; 93% of us don't. And that gives me an enormous amount of faith in us.
    Peter Crosby (27:28):
    Boy, Byron, I really hope that that number is true. With social media coming in during my lifetime, it used to be that in your lifetime, in your neighborhood, you might avoid ever meeting the 7% of people that are evil. Now you can, if you want to, meet the evil people in the world. They're out there, and they're shouting at you every day. I find that experience to be different. It doesn't mean that all that wasn't there before, but now it seems to be able to feed on itself more. And I don't know whether you agree with that, or whether, if you look at 80,000 years ago, we will still win. I'm including myself, hopefully, in the 93%.
    Byron Reese (28:18):
    I once owned a website I started called HappyNews.com. I still own it. And everybody said, you can't make money with happy news. And I was like, I will prove you wrong. Turns out they were right.
    (28:30):
    You can't. But what I learned in it is there's something called black cloud syndrome, where people overestimate the likelihood of all these bad things happening to them. What are the odds that you're going to get murdered? They always overestimate. What are the odds your house is going to get broken into? Always overestimated. And it's because news is the exception. It's the strange thing that happened. It's the anomaly, and that's what makes it news. But when you hear it all the time, you get the black cloud syndrome. And in the nineties, it is well established, the murder rate fell every year throughout the 1990s, and the amount of TV network coverage of murder went up every year. So it just gives you a distorted view of the world. And so I do think maybe one interpretation is that there have always been these two extremes, and all the people kind of in the middle are just kind of quiet now, because it's just such a cacophony of hatred and vitriol. But they're all still out there, the 93% who don't swerve to hit the turtle.
    Peter Crosby (29:30):
    And that 93% is inspiring when you see the power of it coming to life. And to make a somewhat awkward but necessary shift from a kind of societal-level view spanning billions of years to our audience, who would be fine with us keeping on topic, I know: one very major, important part of society is in fact commerce, and the people creating it are spending their lives creating something. And that's what I think our audience is, because they really have been creating the world of e-commerce from scratch, in digital, and they're in it every day. And so they tend to be, I think, on the side of technology can be good, technology can transform; we just need to have the right processes and hire the right people to tend to that garden well. And I'm wondering, as you think about AI coming into that realm, what do you think is possible in commerce, and how will it play into the overall societal and economic impacts you're thinking about?
    Byron Reese (30:52):
    I'm very bullish on it. You start at one extreme and you say, let's say you put up a billboard selling your product. What do you know about the person that drives by it? Well, they're in a car, and that's it. That's all. And they have vision; they can see. That's all. About 7%
    Peter Crosby (31:13):
    Of them have recently hit a turtle.
    Byron Reese (31:15):
    Yeah, 7% have recently swerved to hit a turtle. You know nothing about them. And then you have to think, what would be the other extreme of that? The other extreme would be if, in real time, I'm a little thirsty and somehow water just appears in front of me. That would be a wonderful world, a world that kind of bends around me and my needs and my wants in real time. And that's what we'll get, and that's what commerce will do. It will ultimately know you well enough to be able to predict exactly what it is you need and sell it to you at exactly the right price. I know I was on Amazon, I needed to buy printer paper, and I went to my normal printer paper; you buy a ream of it. And I saw they also sold a pallet of it, and I was like, I wonder what a pallet of paper costs.
    (32:02):
    So I click on this pallet of paper to see it, and that pallet of paper followed me around for three months. Every website I went to tried to sell me a pallet of paper. It would always be in the retargeting stuff. Now, that's still very little different from the billboard. The internet sort of learns: you like tomatoes, you grow tomatoes, and it decides that's all you care about, growing tomatoes. And so everywhere you go, it's showing you tomato plants and tomato baskets and all of that. So what will happen with these technologies is marketers will be able to, well, I would love to never see a commercial that doesn't tempt me. I would love to never see an ad where I don't think, oh, that would be nice. I would love that. And that's the world we will get, so it'll get ever more efficient. And so I would just encourage people to always figure out ways to incrementally apply data and to make surmises about people. And I think there are ways to do it well. But I digress.
    Peter Crosby (33:02):
    I love it when you digress. But I think the history of personalization has always been 97% creepy, or 93% creepy and 7% welcome. I think those are the percentages that will exist for now. But it does feel like, to your point, I would be delighted if I discovered things that I want. And that judicious use of data, I think, is probably a key part of this, particularly if, as you say, this is going to be the most human knowledge ever gathered in one place. That creates a lot of opportunity for actually making great things happen. I agree.
    Byron Reese (33:52):
    I mean, the first time it happened to me was years ago. I was on Amazon, I was buying something, and they said, you might also want this: you may want these salt and pepper shakers that are robots, you wind them up and they walk across the table. And I saw those and I was like, yes, I actually do want those. And I wasn't looking at anything robot related or salt and pepper related, and yet it was able to surmise that. And I would love it if my whole online life were like that. If I have to see ads, I at least want to see ads I want to see, right? There's nothing, I don't think, controversial about that anyway,
    Rob Gonzalez (34:23):
    In Europe there is
    Peter Crosby (34:26):
    True.
    Byron Reese (34:27):
    Yeah, everybody's on their path. Everybody's on their path. Again, all these technologies are brand new, and it is natural, and you need those differing views. You see, people often think the world would be better if everybody kind of thought the way they did, but that's actually not how civilization works. I'm enthusiastic and optimistic about these technologies. I think we need pessimists and people who are really down on them as well. That's how we move forward in a conversation as a species, not when we all think alike. And so I think it's great. Let them go down that path and figure out where it leads. And there will be wild west places, too, where it remains unregulated.
    Rob Gonzalez (35:12):
    So I'm going to take a wild guess and guess that you have not signed any of these "pause AI experiments" open letters that are out there.
    Byron Reese (35:27):
    That is correct. And I know the last time I checked the Future of Life Institute one, it was at 30,000 people. And when you look at the names of the people that signed it, those are amazing people. Those are amazing individuals. And there are people who kind of cynically speculate on their motives, and I don't. I think they probably all are afraid. And it really used to make me wonder, because you think, well, if those people are afraid, then mere mortals like me should be afraid too, right? If they are. And it took me a while to figure out what it was.
    (36:06):
    And when you read through all the material, it doesn't articulate any single fear. It is basically: we don't know. And so we need to pause these for six months. And then it says we need regulation, we need legal authority established. It goes through all these things that we need. And if you look at it, we don't even have any of those for the internet. You're not getting them in six months. We don't have any of those things for the internet, and nor will we. And then it has the pull quote, the big quote in the middle of the letter, and I'm going to get it about 90% right. It says these complex AI systems should not be developed until we are confident their effects will be good and their harm will be minimized. Now, if you read that, you might think, yeah. But what you have to realize is there's not been a single technology ever that could have passed that test, to know ahead of time how it's going to turn out. Could you have known that about the printing press? Could you have proven that? Oh no, we know exactly what's going to happen with the printing press, and it's going to be good, and we'll minimize the harm. I don't even know if that's true about the internet yet.
    Rob Gonzalez (37:20):
    Well, I mean, we just talked about the internet's negative outcomes, and the printing press also. I mean, not for nothing, but the printing press precipitated a significant amount of warfare and religious division in Europe,
    Byron Reese (37:34):
    Protestant Reformation
    Rob Gonzalez (37:35):
    Because of it. Well, even worse, it precipitated book burning. So there's that. That's true.
    Byron Reese (37:42):
    And turtle swerving. But I mean, everything. Air conditioning, was that good? Well, I don't know. Automobiles. Every technology. So here's what I think happened. Here's what I think happened. They built this machine that surprised them, and they're like, I don't know what's going to happen. All of a sudden, I don't know what the future's going to hold. I don't know how people are going to use this. So we just need to stop. We need to freeze. So they aren't afraid of some single thing. They're afraid because they don't know what it's going to do, this machine that they built. Ernest Hemingway said the only way to find out if you can trust somebody is to trust them. And the only way to find out what these technologies are going to be is to use them.
    (38:30):
    I can tell you, if you could pause it for six months, you will have paused us finding a cure by six months, and you will have paused us finding clean energy by six months. Those are certainties. There is real misery in the world. Knowledge is power, and specifically it is the power to make your life better. And the idea that somehow we would limit that just baffles me. I think it's the future. You're not supposed to know what's going to happen. That's why they call it the future. And they just don't like it: I don't know what this thing is going to do, we need to stop. That's what I think.
    Rob Gonzalez (39:03):
    I also think, like Marc Andreessen said, a couple of quips that I thought were pretty good. He says, look, these LLMs, it's just math. I'm not afraid of math. That was one of his responses. Another one was, you've got the doomers who are all worried about AI becoming sort of sentient and then running away and us being unable to stop it. But the reality is, these systems that run the largest language models take huge amounts of computing power. Not to say it's like a building of computing power, but it's kind of like that. It's tons of GPUs, an absolute ton of power going into it. And so Andreessen has said, look, if it gets too smart, just unplug it. We know how to unplug things. We're capable of unplugging things.
    (39:56):
    So on some level, what are we worried about? If you're worried about a doomsday scenario, then that's something that's maybe worth worrying about, because it is different from other types of technological progress, where the evils are survivable and you can sort of address them over time. Maybe our generation is crappy at social media, but we've only had it for 20 years, and the next generation's going to be fine or whatever. But if it's an extinction event, then you don't have a chance to kind of play it through. But it just seems to me the extinction event stuff is not credible. And if the extinction event stuff is not credible, then you're totally right. It's just like any other technology that's ever been invented: there's going to be upside, there's going to be downside, and whether you get the upside or the downside depends on your society and your laws and regs and cultural norms. So yeah, makes sense. I am with you.
    Peter Crosby (40:55):
    So Byron, just to close out here, one of the wonderful things that I appreciate about the work that you do and the thoughts that you have is the long view that you're able to take, reminding us in the midst of the maelstrom that maelstroms have existed for billions of years, and yet on balance, 93% beats 7% any day. And so I'm just wondering, as you think about this present moment that we're in, lots of conflict, lots of uncertainty, lots of hope at the same time, this place that we find ourselves in, with autocracy rising and all sorts of things, economies, et cetera. When you think about this, or when you're talking to somebody about how they might view this present moment, what is a lesson that you give?
    Byron Reese (41:54):
    I think the history of humanity is that there's never been enough stuff for everybody. There's never been enough food, so some people get it and some people don't. There's never been enough leisure, enough education, or anything. And we learned a trick called technology, where we were able to amplify what we were able to do, and all of these kind of historic limits we are gradually going to be able to overcome. Now it doesn't take 90% of people to grow our food; in the West it takes 3%. The story of human civilization, in the 10,000-year view, is up and to the right. You can compare any place in the world to 50 years ago, a hundred years ago, a thousand years ago, by almost any measure: life expectancy, infant mortality, access to education, status of women, individual liberty, self-rule. And you're going to find, in almost all cases, things are better today than they were then.
    (42:46):
    We get caught up in the moment, but the big trend line is up and to the right. I mean, I've always been a fan of utopian literature that tries to speculate on the better world. As a genre, it began in 1650, and there are reasons for that I won't go into, but these works started popping up. And the crazy ideas they had in these utopias: they said, wouldn't it be something if you got to pick your own ruler instead of having a king? And then they said, wouldn't it be something if you could decide who to marry, or decide who your rulers were, or if we educated everybody, not just the rich? All of these things that we haven't delivered for everybody, but we are on a path to delivering to everybody. And so really what we should be doing in this moment is imagining what we think the better future should be, to deliver on all the historic promises, I think.
    (43:41):
    And we need to start articulating a new vision and a new dream about what a better world looks like to us now, from that vantage point. And I think that's the world we're going to build. We're going to build that world, and then we're going to wake up the next day, and the next day, and we're going to keep doing it, so that one day we wake up in a world we can't imagine all that much better. And that's the world we're going to take to the stars. I think we're going to populate a billion planets with a billion people each. So I would keep that view, that that is our destiny. If we got down to 800 mating pairs of humans, no technology, nothing, and we managed to get to here, what will stop us at this point?
    Peter Crosby (44:19):
    I love that. And I think back to when I was coming out of college, thinking that, dammit, I'm going to have a job that changes the world, I'm going to somehow have this enormous impact. And I've realized over time that for most of us, the impact that we have is on the people that we encounter every day, the actions that we take in our lives, the things that we choose to create or destroy. And so what I appreciate, and thank you so much for coming on and having this conversation, is that that will remain the case no matter how large our language models get. The impact is within us. And so for the folks that listen to this podcast, and for the work that they try to do every day, the families they raise, et cetera, I really appreciate you bringing this broader perspective to our audience. It makes a difference.
    Byron Reese (45:17):
    Well, thank you. And just to tee up the Agora one: the last line of the Agora book is about that. It says some people believe they're not doing enough with their life, or that they need to be doing more, and I suggest you move past all that. And I go on to explain why that isn't the way to think about it at all. Agora is way too big for any of us to move on our own. It can only be moved by small acts of kindness, done in great numbers by billions of people. And that's what shapes the world. That's the conclusion of the book.
    Peter Crosby (45:49):
    And so I would say for those of you listening, the title is We Are Agora, and as I said, the subtitle is How Humanity Functions as a Single Superorganism That Shapes Our World and Our Future. We're going to have a podcast discussing that book with Byron coming out in the next several weeks. Treat it like a book club: go get it on Amazon or your favorite commerce place for books, and be there with us in the conversation that we'll have with Byron. But for now, Byron, thank you so much for bringing us your perspective on AI and where it's taking our world.
    Byron Reese (46:23):
    Thank you for having me.
    Peter Crosby (46:25):
    Thanks again to Byron for lifting us up with science. If I asked him, I'm sure he would also tell you to become a member of the Digital Shelf Institute at digitalshelfinstitute.org. Thanks for being part of our community.