Google CEO Issues Warning About Future of Artificial Intelligence

The CEO of Google and its parent, Alphabet, warned about the future effects of artificial intelligence on humanity.

In an April 16 interview with CBS’ “60 Minutes,” CEO Sundar Pichai said that “every product of every company” will be impacted by the rapid development of AI.

He said that society needs to prepare for the revolution in AI technologies such as chatbots, which can mimic human thought processes and have risen in popularity in recent months.

After reviewing several of Google’s artificial intelligence projects, Pichai told Scott Pelley of 60 Minutes that he was “speechless” and found the results of some of the tests “unsettling.”

He was mainly referring to the increasing abilities of systems like “Bard,” Google’s AI chatbot rival to OpenAI’s ChatGPT, which is backed by Microsoft.

Google launched Bard for testing in March, following the release of ChatGPT last fall.

Bard’s limited release came in the wake of Microsoft’s January announcement that its search engine Bing would soon include OpenAI’s technology.

Meanwhile, Pichai privately told employees that the success of Bard now hinges on public testing, adding that “things will go wrong,” CNBC reported last month.

Ethical Concerns Grow Over Rapid Advancements in AI

Many people have been alarmed that quick advancements in AI tech will cause massive job losses and violations of data privacy, as overall anxiety about non-human intelligence grows.

Rapid progress in AI chatbot development has also sparked a recent wave of discussion among ethicists and tech CEOs, regarding potential future limits to AI and what regulations are needed to control it.

Tech leaders including Elon Musk, Steve Wozniak, and Andrew Yang, along with dozens of academics, signed a letter in March calling for an immediate pause on new “experiments” with programs “more powerful than GPT-4,” the latest version of OpenAI’s chatbot.

More than 50,000 people have signed the letter since then.

“We need to adapt as a society for it,” Pichai told Pelley, admitting that “this is going to impact every product across every company.”

He agreed with other tech executives that AI may threaten existing professions like writers, accountants, architects, and even software engineers.

“For example, you could be a radiologist, if you think about five to 10 years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first.’”

Google recently released a 20-page document, “recommendations for regulating AI,” which includes proposals like “take a sectoral approach that builds on existing regulation,” and “adopt a proportionate, risk-based framework.”

Pichai said he welcomes regulation of AI and that it may be necessary to have the technology “align with human values including morality,” including laws to punish abuse and international treaties to make it safe for public use.

“This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on,” Pichai said, adding that “it’s not for a company to decide.”

However, Pichai said misuse of the tech to spread “misinformation,” combined with fake news and images, is a “much bigger” problem than concerns over advancements in AI, adding that “it could cause harm” to its development.

Microsoft co-founder Bill Gates has been less critical of AI’s rapid development, saying last month that he believes “this new technology can help people everywhere improve their lives.”

“At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities,” Gates added.

Pichai Remains Optimistic Despite AI’s Potential Harm to Society

During a tour, Pichai showed Pelley some of Google’s other advanced AI projects, such as those at its DeepMind lab, which involve robots playing sports and performing simple human-like tasks without human instruction.

Bard even created a story with characters and a plot from a prompt by the CBS correspondent, leaving him stunned.

Pelley then asked Bard why it helps people and it replied “because it makes me happy.”

When Pelley told James Manyika, Google’s senior vice president of technology and society, that the chatbot appeared to be thinking, the executive responded that Bard was not sentient or self-aware, but could “behave like” it.

When asked whether society is prepared for AI technology like Google’s Bard, Pichai replied, “on one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”

The Google CEO told Pelley that he nonetheless remains optimistic about AI because, unlike in previous major technological revolutions, “the number of people who have started worrying about the implications” have spoken up early on.

“I’ve always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we’ve done in the past,” Pichai added.

Pichai also admitted that Bard still has flaws; the chatbot gave Pelley a garbled response about inflation and suggested five books that do not exist.

Microsoft and Google Rush to Upgrade Search Engines With Latest Chatbot Tech

Meanwhile, the mobile search battle between Google and Microsoft is now underway.

The New York Times reported on Sunday that Samsung may change its default search engine from Google to Microsoft’s Bing, a switch that could cost Google $3 billion in annual revenue.

The Times said that Google is now working quickly to update its search engine and create a completely new design powered by AI to compete with Bing’s AI-powered search abilities.

The new search engine project, codenamed “Magi,” is intended to be more personalized than the current platform and to anticipate users’ requests.

Even Musk, who has been publicly skeptical of the latest advancements in chatbot technology, has since founded a new AI company, X.AI, according to recent business filings in Nevada.