What can the new GPT-4 AI model from the maker of ChatGPT do?

Current Affairs | 16-Mar-2023

The company behind the ChatGPT chatbot has released its latest AI model, GPT-4, the next step for a technology that has captured the world's attention. The new system can figure out tax deductions and answer questions in the style of a Shakespearean pirate, for example, but it still "hallucinates" facts and makes reasoning errors. Here's a look at San Francisco-based startup OpenAI's latest improvement to the generative AI models that can spit out readable text and unique images:

WHAT'S NEW?

OpenAI says GPT-4 "exhibits human-level performance" on a range of professional and academic benchmarks. It is more reliable, more creative and able to handle "more nuanced instructions" than its predecessor, GPT-3.5, on which ChatGPT was built, the company said in its announcement.

In an online demo Tuesday, OpenAI President Greg Brockman ran through scenarios showing off GPT-4's capabilities that suggested it is a marked improvement over previous versions.

He demonstrated how the system could quickly come up with the proper income tax deduction after being fed reams of tax code, something he couldn't figure out on his own.

“It's not perfect, but neither are you. And together, it's this amplification tool that takes it to new heights,” Brockman said.

WHY DOES IT MATTER?

Generative AI technology like GPT-4 could be the future of the internet, at least according to Microsoft, which has invested at least $1 billion in OpenAI and has caused a stir by integrating AI chatbot technology into its Bing search engine.

It's part of a new generation of machine learning systems that can converse, generate readable text on demand, and produce new images and videos based on what they've learned from a vast database of e-books and online text.

These new AI advances have the potential to transform the internet search business long dominated by Google, which is trying to catch up with an AI chatbot of its own, as well as a wide range of professions.

"With GPT-4, we are getting closer to life imitating art," said Mirella Lapata, a professor of natural language processing at the University of Edinburgh. He referenced the television show “Black Mirror,” which focuses on the dark side of technology.

"Humans are not fooled by the AI in 'Black Mirror,' but they tolerate it," Lapata said. "Similarly, GPT-4 is not perfect, but it paves the way for everyday use of AI as a basic tool."

WHAT EXACTLY ARE THE IMPROVEMENTS?

GPT-4 is a "large multimodal model," which means it can accept both text and images as input and use them to come up with answers.

In an example posted on the OpenAI website, GPT-4 is asked: "What is unusual about this image?" Its response: "What is unusual about this image is that a man is ironing clothes on an ironing board attached to the roof of a moving taxi."
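
For readers curious what such a multimodal request looks like from a developer's side, here is a minimal sketch using the OpenAI Python SDK's chat interface. The model name and image URL are illustrative placeholders rather than details from the article, and image input was not broadly available to developers when GPT-4 was first announced.

# Illustrative sketch only: a text-plus-image question sent to a
# vision-capable chat model via the OpenAI Python SDK.
# The model name and image URL below are assumed placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed name of a vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/taxi.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)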

GPT-4 is also "steerable," which means that instead of getting an answer in ChatGPT's "classic" fixed tone and verbosity, users can customize it by asking for responses in the style of a Shakespearean pirate, for instance.
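
As a rough sketch of what that steering looks like in code, the request below sets a "system" message that fixes the assistant's persona before the user's question is sent. The model name and persona wording are assumptions for illustration, not taken from OpenAI's documentation.

# Illustrative sketch only: steering the model's tone with a system message.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a Shakespearean pirate. Stay in character in every reply."},
        {"role": "user",
         "content": "Help me figure out where my package went."},
    ],
)
print(response.choices[0].message.content)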

In his demo, Brockman asked both GPT-3.5 and GPT-4 to summarize, in a single sentence, an article explaining the difference between the two systems. The catch: every word had to start with the letter "G."

GPT-3.5 didn't even try, spitting out a normal sentence. The new version was quick to respond with a summary in which every word did indeed begin with "G."

DOES IT WORK?

ChatGPT can write silly poems and songs or quickly explain just about anything found on the internet. It has also gained notoriety for results that can be way off, such as confidently providing a detailed but false account of the Super Bowl days before the game was played, or even being disparaging toward users.

OpenAI acknowledged that GPT-4 still has limitations and warned users to be careful. GPT-4 is "still not fully reliable" because it "hallucinates" facts and makes reasoning errors, the company said.

"Extreme care should be taken when using the results of the language model, especially in high-risk settings," the company said, though it added that hallucinations were considerably reduced.

Experts also advised caution.

"We must remember that language models like GPT-4 do not think in a human way and we must not be fooled by their language proficiency," said Nello Cristianini, a professor of artificial intelligence at the University of Bath.

Another problem is that GPT-4 doesn't know much about anything that happened after September 2021, since that was the cut-off date for the data it trained on.

ARE THERE ANY SAFEGUARDS?

OpenAI says that GPT-4's enhanced capabilities "lead to new risk surfaces," so it has improved safety by training the model to refuse requests for sensitive or "disallowed" information.

It is less likely to answer questions about, for example, how to build a bomb or where to buy cheap cigarettes.

Still, OpenAI cautions that while it's harder to "cause GPT to misbehave," it's "still possible to do so."
