Don’t rush to invest in chat AI
- Vint Cerf, the “father of the Internet” and Google’s “Internet evangelist,” has warned entrepreneurs not to rush to make money from conversational AI just “because it’s really cool.”
- Cerf said the technology isn’t advanced enough to make short-term bets.
- “There’s an ethical issue here that I hope some of you will consider,” he said at Monday’s conference, before imploring the crowd to be careful about artificial intelligence.
Chandan Khanna | AFP | Getty Images
Google’s chief Internet evangelist and a “father of the Internet,” Vint Cerf, has a message for executives looking to make quick deals on chat AI: “Don’t.”
Cerf implored attendees at a conference in Mountain View, Calif., on Monday not to invest in conversational AI just because “it’s a hot topic.” The warning comes amid an explosion in ChatGPT’s popularity.
“There’s an ethical issue here that I hope some of you will consider,” Cerf told the conference crowd Monday. “Everybody’s talking about ChatGPT or Google’s version of it, and we know it doesn’t always work the way we’d like it to,” he said, referring to Google’s Bard conversational AI, which was announced last week.
His warning comes as big tech companies like Google, Meta and Microsoft grapple with how to stay competitive in the conversational AI space while rapidly improving a technology that is still prone to errors.
Alphabet chairman John Hennessy said earlier in the day that the systems were still a long way from being widely useful, and that they had many issues of inaccuracy and “toxicity” that still needed to be addressed before the product could even be tested on the public.
Cerf has served as Google’s vice president and “chief Internet evangelist” since 2005. He is known as “one of the fathers of the Internet” because he designed some of the architecture that was used to build the foundation of the Internet.
Cerf cautioned against the temptation to invest just because the technology is “really cool, even though it doesn’t work all the time.”
“If you think, ‘Man, I can sell this to investors because it’s a hot topic and everybody’s going to throw money at me,’ don’t do it,” Cerf said, earning some laughs from the crowd. “Be careful. We were right that we can’t always predict what’s going to happen with these technologies and, to be honest with you, humans are a big part of the problem — which is why we humans haven’t changed in the last 400 years, let alone the last 4,000.”
“They will seek to do what benefits them and not you,” Cerf continued, seemingly alluding to common human greed. “So we have to keep that in mind and think about how we use these technologies.”
Cerf said he tried asking one of the systems to append an emoji to the end of each sentence. It didn’t, and when he told the system he had noticed, it apologized but didn’t change its behavior. “We’re a long way from awareness or self-awareness,” he said of chatbots.
There is, he said, a gap between what the technology says it will do and what it actually does. “That’s the problem. … You can’t tell the difference between an eloquent answer and an accurate answer.”
Cerf offered an example: he asked a chatbot to provide a bio about himself. The bot presented its answer as factual, he said, even though it contained inaccuracies.
“In terms of engineering, I think engineers like myself should be responsible for finding a way to tame some of these technologies so that they are less damaging,” he said. “And of course it depends on the application: a not-very-good piece of fiction is one thing, but advising someone can have medical implications. Figuring out how to minimize the potential for worst-case scenarios is critical.”