
Interview with OpenAI’s Greg Brockman: GPT-4 isn’t perfect, but neither are you


Greg Brockman onstage at TechCrunch Disrupt 2019
Image Credits: TechCrunch

OpenAI shipped GPT-4 yesterday, the much-anticipated text-generating AI model, and it’s a curious piece of work.

GPT-4 improves upon its predecessor, GPT-3, in key ways, for example generating more factually accurate statements and allowing developers to prescribe its style and behavior more easily. It’s also multimodal in the sense that it can understand images, allowing it to caption and even explain in detail the contents of a photo.

But GPT-4 has serious shortcomings. Like GPT-3, the model “hallucinates” facts and makes basic reasoning errors. In one example on OpenAI’s own blog, GPT-4 describes Elvis Presley as the “son of an actor.” (Neither of his parents were actors.)

To get a better handle on GPT-4’s development cycle and its capabilities, as well as its limitations, TechCrunch spoke with Greg Brockman, one of the co-founders of OpenAI and its president, via a video call on Tuesday.

Asked to compare GPT-4 to GPT-3, Brockman had one word: Different.

“It’s just different,” he told TechCrunch. “There’s still a lot of problems and mistakes that [the model] makes … but you can really see the jump in skill in things like calculus or law, where it went from being really bad at certain domains to actually quite good relative to humans.”

Test results support his case. On the AP Calculus BC exam, GPT-4 scores a 4 out of 5, while GPT-3.5, the intermediate model between GPT-3 and GPT-4, scores a 1. And on a simulated bar exam, GPT-4 passes with a score around the top 10% of test takers; GPT-3.5’s score hovered around the bottom 10%.

Shifting gears, one of GPT-4’s more intriguing aspects is the above-mentioned multimodality. Unlike GPT-3 and GPT-3.5, which could only accept text prompts (e.g. “Write an essay about giraffes”), GPT-4 can take a prompt of both images and text to perform some action (e.g. an image of giraffes in the Serengeti with the prompt “How many giraffes are shown here?”).
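For illustration, a combined image-and-text prompt might look something like the structured payload below. This is a sketch only: image input was not broadly available at launch, and the payload shape (text and image parts inside one user message) is an assumption modeled on later vision-style chat APIs, not OpenAI’s confirmed launch format. The image URL is a placeholder.

```python
# Illustrative shape of a multimodal prompt: one user message whose content
# is a list of parts, mixing a text question with an image reference.
prompt = {
    "role": "user",
    "content": [
        {"type": "text", "text": "How many giraffes are shown here?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/giraffes-serengeti.jpg"},
        },
    ],
}

# The model would receive both parts together and answer about the image.
kinds = [part["type"] for part in prompt["content"]]
print(kinds)  # ['text', 'image_url']
```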

That’s because GPT-4 was trained on image and text data while its predecessors were only trained on text. OpenAI says that the training data came from “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but Brockman demurred when I asked for specifics. (Training data has gotten OpenAI into legal trouble before.)

GPT-4’s image understanding abilities are quite impressive. For example, fed the prompt “What’s funny about this image? Describe it panel by panel” plus a three-paneled image showing a fake VGA cable being plugged into an iPhone, GPT-4 gives a breakdown of each image panel and correctly explains the joke (“The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port”).

Only a single launch partner has access to GPT-4’s image analysis capabilities at the moment — an assistive app for the visually impaired called Be My Eyes. Brockman says that the wider rollout, whenever it happens, will be “slow and intentional” as OpenAI evaluates the risks and benefits.

“There’s policy issues like facial recognition and how to treat images of people that we need to address and work through,” Brockman said. “We need to figure out, like, where the sort of danger zones are — where the red lines are — and then clarify that over time.”

OpenAI dealt with similar ethical dilemmas around DALL-E 2, its text-to-image system. After initially disabling the capability, OpenAI allowed customers to upload people’s faces to edit them using the AI-powered image-generating system. At the time, OpenAI claimed that upgrades to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content.

Another perennial challenge is preventing GPT-4 from being used in unintended ways that might inflict harm — psychological, monetary or otherwise. Hours after the model’s release, Israeli cybersecurity startup Adversa AI published a blog post demonstrating methods to bypass OpenAI’s content filters and get GPT-4 to generate phishing emails, offensive descriptions of gay people and other highly objectionable text.

It’s not a new phenomenon in the language model domain. Meta’s BlenderBot and OpenAI’s ChatGPT, too, have been prompted to say wildly offensive things, and even reveal sensitive details about their inner workings. But many had hoped, this reporter included, that GPT-4 might deliver significant improvements on the moderation front.

When asked about GPT-4’s robustness, Brockman stressed that the model has gone through six months of safety training and that, in internal tests, it was 82% less likely to respond to requests for content disallowed by OpenAI’s usage policy and 40% more likely to produce “factual” responses than GPT-3.5.

“We spent a lot of time trying to understand what GPT-4 is capable of,” Brockman said. “Getting it out in the world is how we learn. We’re constantly making updates, including a bunch of improvements, so that the model is much more scalable to whatever personality or sort of mode you want it to be in.”

The early real-world results aren’t that promising, frankly. Beyond the Adversa AI tests, Bing Chat, Microsoft’s chatbot powered by GPT-4, has been shown to be highly susceptible to jailbreaking. Using carefully tailored inputs, users have been able to get the bot to profess love, threaten harm, defend the Holocaust and invent conspiracy theories.

Brockman didn’t deny that GPT-4 falls short here. But he emphasized the model’s new steerability tools, meant to mitigate misuse, including an API-level capability called “system” messages. System messages are essentially instructions that set the tone — and establish boundaries — for GPT-4’s interactions. For example, a system message might read: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves.”
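In practice, a system message is simply the first entry in the list of messages sent to the chat API. The sketch below is illustrative: the tutor prompt comes from the example above, but the model name and call shape follow the `openai` Python library of that era and may have changed, so the actual API call is commented out and the snippet runs without credentials.

```python
# Build a chat request whose first message is the "system" instruction that
# sets GPT-4's tone and boundaries; the user message is a sample question.
messages = [
    {
        "role": "system",
        "content": (
            "You are a tutor that always responds in the Socratic style. "
            "You never give the student the answer, but always try to ask "
            "just the right question to help them learn to think for themselves."
        ),
    },
    {"role": "user", "content": "What is the derivative of x squared?"},
]

# With the `openai` package installed and an API key configured, the request
# would look roughly like this (commented out so the sketch runs offline):
# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])

print(messages[0]["role"])  # the system message leads the conversation
```

The key design point is ordering: the system message precedes every user turn, so the model treats it as standing guidance rather than a one-off request.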

The idea is that the system messages act as guardrails to prevent GPT-4 from veering off course.

“Really figuring out GPT-4’s tone, the style and the substance has been a great focus for us,” Brockman said. “I think we’re starting to understand a little bit more of how to do the engineering, about how to have a repeatable process that kind of gets you to predictable results that are going to be really useful to people.”

Brockman also pointed to Evals, OpenAI’s newly open sourced software framework to evaluate the performance of its AI models, as a sign of OpenAI’s commitment to “robustifying” its models. Evals lets users develop and run benchmarks for evaluating models like GPT-4 while inspecting their performance — a sort of crowdsourced approach to model testing.

“With Evals, we can see the [use cases] that users care about in a systematic form that we’re able to test against,” Brockman said. “Part of why we [open sourced] it is because we’re moving away from releasing a new model every three months — whatever it was previously — to make constant improvements. You don’t make what you don’t measure, right? As we make new versions [of the model], we can at least be aware what those changes are.”

I asked Brockman if OpenAI would ever compensate people to test its models with Evals. He wouldn’t commit to that, but he did note that — for a limited time — OpenAI’s granting select Evals users early access to the GPT-4 API.

Brockman also touched on GPT-4’s context window, which refers to the text the model can consider before generating additional text. OpenAI is testing a version of GPT-4 that can “remember” roughly 50 pages of content, or five times as much as the vanilla GPT-4 can hold in its “memory” and eight times as much as GPT-3.
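The “50 pages” figure holds up as a back-of-envelope calculation, assuming the widely reported 32,768-token extended variant and common rules of thumb for English text; all the constants below are rough estimates, not OpenAI-published numbers.

```python
# Back-of-envelope check of the "roughly 50 pages" figure, assuming the
# 32,768-token extended GPT-4 variant and rough conversion factors.
TOKENS = 32_768          # extended context window (assumed)
WORDS_PER_TOKEN = 0.75   # rough average for English text
WORDS_PER_PAGE = 500     # rough single-spaced page

pages = TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE
print(round(pages))  # prints 49, i.e. roughly 50 pages
```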

Brockman believes that the expanded context window will lead to new, previously unexplored applications, particularly in the enterprise. He envisions an AI chatbot built for a company that leverages context and knowledge from different sources, including employees across departments, to answer questions in a very informed but conversational way.

That’s not a new concept. But Brockman makes the case that GPT-4’s answers will be far more useful than those from chatbots and search engines today.

“Previously, the model didn’t have any knowledge of who you are, what you’re interested in and so on,” Brockman said. “Having that kind of history [with the larger context window] is definitely going to make it more able … it’ll turbocharge what people can do.”
