
What Can the Copernican Revolution Teach Us about the Future of AI?


AZEEM AZHAR: Hi, my name is Azeem Azhar. For the past decade, I have studied exponential technologies, their emergence, rapid adoption, and the opportunities they create. I’m writing a book about it in 2021. It’s called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the rapidly changing field of artificial intelligence, and that’s why I’m excited to share with you a series of weekly insights into some of the most interesting questions about AI. In today’s reflection, I am talking about the Copernican moment of AI, the idea that the development of advanced AI models and their potential impact on society is a significant paradigm shift with far-reaching consequences, similar to the shifts that Copernicus and Darwin brought to their respective fields. Let’s go.

Let’s start with a confession. I find it really difficult to keep up with the constant flow of experimentation, innovation, and research results emerging around these LLM technologies. There is so much going on right now. As I reflect on the course of it, I mean, bloody exponentials, I think I’m allowed to say that. The challenge of keeping up with it, and the breadth and the depth of the experimentation, I really do think this is an amazing, very powerful technology. And in a way, because of our knowledge of history, this is something we have to think about afresh. We must enter a space of philosophical analysis, of imagination, to try to step outside the system we are in and think about how we order things given this set of changes. I was thinking, for example, about the Copernican era, right?

That moment when we started to realize that the earth is not the center of the universe, that the earth revolves around the sun. It took two hundred years for that to become common knowledge, and it’s still not common knowledge everywhere. Or the Gutenberg moment, where we started to democratize access to, and then the creation of, knowledge. And these moments are the moments where, fundamentally, people living within the system are forced to change their system. Before Gutenberg in Western Europe, priests were ultimately the elite who controlled knowledge. They completely controlled that knowledge, and in many ways they restricted what people could believe and how they could believe it. Of course, Copernicus came out and challenged that domain even more. And so I think we can look at the LLMs now and we start to see where there is some kind of friction. They hit copyright, they hit privacy. And we start asking: from the point of view of the current worldview, at what point are we the priests and at what point are we the scientists?

If you are the Catholic Church and you control the spread of knowledge through handwritten, hand-copied bibles, and someone comes along and allows many different versions of that to happen, and eventually for other people to start to produce more and more material and raise the level of literacy, that’s dangerous, right? That is a risk to the current stability, a hazard to existing structures, a risk that challenges the basis for truth and for truth as a social construct. And after Copernicus, it is no longer true, as it was before Copernicus, that the sun revolves around the earth. Now, of course, hearing that, you say, “The sun never revolved around the earth. The earth revolves around the sun.” But that’s not what people believed to be true in the 15th and early 16th centuries, and in the periods before that.

And I think it’s important for us to ask questions about this technology, especially when we play with it and we see what it tells us about the world and about our own assumptions. So where are we with these technologies, these LLMs that will be the basis for more advanced systems? I guess we have to accept that the cat is out of the bag, right? Now, it’s true that more powerful models get harder and harder to build. A GPT-5 is much harder to build, and anything Anthropic or Stability builds today is hard to build. It is gated by technical capability, by the ability to find the few hundred exceptionally talented people to work on each project, by the hundreds of millions of dollars needed for training, and by being able even to get the chips from Nvidia. That is a difficult thing.

But really, over a 10-, 15-, 20-year period, it’s going to happen, right? It may not happen in a year, but it will happen. We don’t even need to think about the most sophisticated models, because models with roughly GPT-3 or 3.5-level capabilities, things like Llama, which I wrote about, are capable of many of the things we have seen the most advanced LLMs do. And they run on desktop computers and mobile phones. And the cost of running them will simply decrease, due in part to the falling cost of computing, and in part to algorithmic optimizations and improvements that produce step changes in the computational cost of training. More than that, this has become the number one priority, or certainly a top priority, for most companies.

I won’t really reveal who I’ve talked to in the last two or three weeks, but I’ve talked to a lot of people in the industry and a lot of people who have a view across many industries and inside many companies. And the thing that I hear is that this has become a top priority. There is a real clamor for companies to start using these technologies for competitive advantage. There is a kind of technical, research, and commercial momentum. And I think one of the things we have to do is start to understand what the upsides of all of this might be and how we paint them. And when we understand what that upside is, then we can start to think fundamentally about what kind of institutional frameworks we want to develop around these technologies.

Ultimately, institutional frameworks, whether they are regulations or laws, are trade-offs, right? They are trade-offs between freedom, the benefits that can be provided (and provided inequitably), and the costs attached to them. So for example, in the last two weeks, we started reading more about how these LLMs are using copyrighted material, or perhaps using private material and violating various privacy regulations, or violating laws about slander, false information, or libel. I think the thing we have to do when we start looking at that, of course, is to take the material seriously. But also, to start asking whether we need to change the institutional frameworks and the laws around them to face the realities of this technology. If you think about copyright, it is only a few hundred years old. And it is an economic settlement in the face of new opportunities. Almost no one made money as an author before the printing press. And the printing press appeared in the blink of an eye.

An interesting example to look at is the use of DDT, the insecticide, the pesticide. DDT was developed 60 or 70 years ago, and it was very effective. The pesticide turned out to have all kinds of problematic environmental effects and effects on human health, so it was largely banned about 50 years ago. However, many countries, especially India, continue to use DDT. So the question is, given all its evils, why does the Indian government continue to use DDT and allow its use? There are reasons, right? It is very effective in controlling malaria. It is extremely cost-effective. There are no other good alternatives. Basically, there is a trade-off that says the harm done by DDT is, in this case, worth tolerating given the benefits it provides, certainly within India. And I think it’s really important for us to recognize that particular tension and what that trade-off looks like.

This is a very powerful technology. The things that have happened in days, not even weeks, have been amazing. My jaw dropped a bit. And as I reflect on how difficult it is to keep up with what’s going on, both with the many new applications and experiments that are happening on the one hand, and with the really legitimate concerns that we can have about ethics on the other, not only about the results from this technology, but also about the way in which it pushes hard against our understanding of things like copyright and privacy, I think we should hold all these things in our heads. We don’t want a thoughtless embrace, but we do want to start looking at it with deep analysis, some imagination, and some sense of humanity about it.

I also think we should keep returning to Copernicus, who overturned views of the world and forced us to think about it really differently, and to Darwin, who did the same. And ask the question of the extent to which the system in which we operate may begin to change because of the nature of these kinds of discoveries. Well, thanks for tuning in. If you want to understand the ins and outs of AI, visit www.exponentialview.co where I share expert insights with hundreds of thousands of leaders every week.
