There will be new jobs that are created. There will be jobs that are made better, where some of the repetitive work is freed up in a way that lets you express yourself more creatively. You could be a doctor, a radiologist, a programmer. The amount of time you’re spending on routine tasks versus higher-order thinking—all of that could change, making the job more meaningful. Then there are jobs that could be displaced. So, as a society, how do you retrain and reskill people, and create opportunities?
The last year has really brought out a philosophical split in the way people think we should approach AI. You could frame it as safety first versus business use cases first, or accelerationists versus doomers. You’re in a position where you have to bridge all of that philosophy and bring it together. I wonder how you personally think about bridging those interests at Google, which is going to be a leader in this field, as we move into this new world.
I’m a technology optimist. I have always felt, based on my personal life, a belief in people and humanity. So overall, I think humanity will harness technology to its benefit. You’re right that a powerful technology like AI has a duality to it.
That means there will be times when we boldly move forward, because I think we can push the state of the art. For example, if AI can help us solve problems like cancer or climate change, you want to do everything in your power to move fast. But society definitely needs to develop frameworks to adapt, be it to deepfakes or to job displacement. This is going to be a frontier, no different from climate change, and it will be one of the biggest things we all grapple with for the decade ahead.
Another big, unsettled thing is the legal landscape around AI. There are questions about fair use and questions about whether outputs can be protected, and it seems like this is going to be a really big deal for intellectual property. What do you tell people who are using your products to give them a sense of security that what they’re doing isn’t going to get them sued?
Not all of these topics will have easy answers. When we built products like Search and YouTube in the pre-AI world, we were always trying to get the value exchange right. It’s no different for AI. We are definitely focused on making sure we train on data we are allowed to train on, consistent with the law, and on giving people a chance to opt out of training. Then there’s a layer beyond that: what is fair use? It’s important to create value for the creators of the original content. These are important areas. The internet went through this. Or when e-commerce started: how do you draw the line between e-commerce and regular commerce?
New legal frameworks will be developed over time as this area evolves; that’s how I think about it. Meanwhile, we will work hard to be on the right side of the law and to maintain deep relationships with many providers of content today. There are some areas where it’s contentious, but we are working our way through those things, and I am committed to figuring it out. We have to create a win-win ecosystem for all of this to work over time.
Something people are very worried about with the web right now is the future of search. When you have a technology that just answers questions for you, based on information from around the web, there’s a fear that people may no longer need to visit those sites. That seems like it could have implications for Google as well. I wonder if you’re thinking about it in terms of your own business.