Late Thursday night, Oprah Winfrey aired a special on artificial intelligence, titled “AI and Our Future.” Guests included OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and current FBI Director Christopher Wray.
The prevailing tone was one of suspicion and caution.
In prepared remarks, Oprah noted that the AI genie is out of the bottle, for better or worse, and that humanity will have to learn to live with the consequences.
“AI is still beyond our control and largely beyond our understanding,” she said. “But it is here, and we will live with technology that can be our ally as well as our competitor. We are the most adaptable creatures on the planet. And we will adapt again. But watch what is real. The stakes could not be higher.”
Sam Altman overpromises.
Altman, Oprah’s first interview of the night, made the dubious argument that today’s AI learns the concepts within the data it’s trained on.
“We show the system a thousand words in a sequence and ask it to predict what comes next,” he told Oprah. “The system learns to predict, and then learns the underlying concepts.”
Many experts would disagree with that view.
AI systems like ChatGPT and o1, which OpenAI introduced last Thursday, do indeed predict the likeliest next words in a sentence. But they are simply statistical machines that learn patterns in data; they have no intentionality and are only making educated guesses.
Although Altman may have overstated the capabilities of today’s AI systems, he underlined the importance of figuring out how to safety-test those same systems.
“One of the first things we need to do — and this is happening now — is to get the government to start working out how to do safety testing of these systems, like we do with airplanes or new medicines,” he said. “I personally probably have a conversation with a member of the government every few days.”
Altman’s push for regulation may be self-serving. OpenAI opposed California’s AI safety bill, known as SB 1047, saying it would “stifle innovation.” However, former OpenAI employees and AI experts like Geoffrey Hinton have come out in support of the bill, arguing that it will impose necessary safeguards on AI development.
Oprah also pressed Altman about his role as OpenAI’s leader. Asked why people should trust him, he largely dodged the question, saying his company is working to build trust over time.
Altman has previously said, very directly, that people should not trust him or any one person to make sure AI benefits the world.
The OpenAI CEO later said it was strange to hear Oprah ask if he was “the most powerful and dangerous man in the world,” as one newspaper headline suggested. He disagreed, but said he felt a responsibility to push AI in a positive direction for humanity.
Oprah on deepfakes
As expected in a special episode on artificial intelligence, the topic of deepfakes was touched upon.
To demonstrate how convincing synthetic media can be, Brownlee compared sample footage from Sora, OpenAI’s AI-powered video generator, to months-old footage from another AI system. The Sora sample was far ahead, illustrating the field’s rapid progress.
“Now, you can still look at parts of that scene and know something’s not right,” Brownlee said of the Sora footage. Oprah said the scene felt real to her.
The deepfake presentation served as a prelude to an interview with Wray, who recounted the moment he first became aware of AI deepfake technology.
“I was in a conference room, and a group of [FBI] people got together to show me how AI-enhanced fake videos can be created,” Wray said. “They had created a video of me saying things I’d never said before and would never say.”
Wray spoke about the growing prevalence of AI-powered sextortion. According to cybersecurity firm ESET, there was a 178% increase in sextortion cases between 2022 and 2023, driven in part by AI technology.
“Someone posing as a peer targets a teenager,” Wray said, “then uses [AI-generated] pictures to trick the child into sending real pictures in return. In reality, it’s someone behind a keyboard in Nigeria, and once they have the pictures, they threaten to blackmail the child, saying, ‘If you don’t pay, we will publish these pictures and ruin your life.’”
Wray also touched on the misinformation surrounding the upcoming U.S. presidential election. While he stressed that “now is not the time to panic,” he said that “everyone in America” must “bring a heightened sense of focus and caution” to the use of artificial intelligence and to how “bad guys could use it against us all.”
“We often find that what appears on social media and looks like Bill from Topeka or Mary from Dayton is actually, you know, a Russian or Chinese intelligence officer on the outskirts of Beijing or Moscow,” Wray said.
In fact, one opinion poll found that more than a third of U.S. respondents had seen misinformation — or suspected misinformation — about key topics by the end of 2023. This year, misleading AI-generated images of candidates Vice President Kamala Harris and former President Donald Trump garnered millions of views on social networks, including X.
Bill Gates talks about the artificial intelligence revolution
For a change of technological pace, Oprah interviewed Microsoft co-founder Bill Gates, who expressed his hope that artificial intelligence will supercharge the fields of education and medicine.
“AI is like a third person sitting in [a medical appointment], making a transcript and suggesting a prescription,” Gates said. “So instead of the doctor facing a computer screen, they engage with you, and the software makes sure there’s a really good transcript.”
But Gates glossed over the potential for bias introduced by poor training data.
One recent study showed that speech recognition systems from leading tech companies were twice as likely to incorrectly transcribe audio from Black speakers as from white speakers. And other research has shown that AI systems reinforce the long-standing, incorrect belief that there are biological differences between Black and white people — falsehoods that lead doctors to misdiagnose health problems.
Gates said AI could be “always available” in classrooms and “understand how to motivate you… no matter what your level of knowledge.”
That is not exactly how it has played out in many classrooms.
Last summer, schools and colleges rushed to ban ChatGPT over concerns about plagiarism and misinformation; some have since reversed those bans. But not everyone is convinced of GenAI’s potential for good. Surveys like the UK Safer Internet Centre’s found that more than half of children (53%) reported seeing someone their age use GenAI in a negative way — for example, creating believable false information or images used to upset someone.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and putting in place guardrails to protect data and user privacy.