OpenAI, the San Francisco tech company that garnered global attention after launching ChatGPT, said Tuesday it was unveiling a new version of its artificial intelligence software.

Called GPT-4, the software “can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities,” according to an announcement on the OpenAI website.

In a video posted online, the company said GPT-4 had a variety of capabilities that the previous iteration of the technology did not, including the ability to “reason” based on images that users uploaded.

“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” OpenAI wrote on its website.

Andrej Karpathy, an OpenAI employee, tweeted that the feature meant the AI could “see.”

The new technology is not available free, at least for now. OpenAI said people could try GPT-4 on its ChatGPT Plus subscription service, which costs $20 a month.

OpenAI and its ChatGPT chatbot have shaken up the tech world and alerted many outside the industry to the possibilities of AI software, in part through the company’s partnership with Microsoft and its search engine, Bing.

But the pace of OpenAI’s releases has also caused concern, because the technology is untested, forcing abrupt changes in fields from education to the arts. The rapid public development of ChatGPT and other generative AI programs has led some ethicists and industry leaders to call for guardrails on the technology.

Sam Altman, the CEO of OpenAI, tweeted on Monday that “we definitely need more regulation on artificial intelligence.”

The company illustrated GPT-4’s capabilities in a series of examples on its website: solving problems such as scheduling a meeting among three busy people, scoring high on tests such as the Uniform Bar Exam, and learning a user’s creative writing style.

But the company also acknowledged limitations, such as social biases and “hallucinations,” in which the model asserts it knows more than it actually does.

Google, concerned that artificial intelligence technology could cut into the market share of its search engine and its cloud-computing service, released its own software, known as Bard, in February.

OpenAI launched in late 2015 with backing from tech billionaires including Elon Musk, Peter Thiel and Reid Hoffman, and its name reflected its status as a nonprofit that would follow the principles of open-source software freely shared online. In 2019, it transitioned to a “capped-profit” model.

Now, it is releasing GPT-4 with a degree of secrecy. In a 98-page paper accompanying the announcement, company employees said they would keep many details under wraps.

Notably, the paper said the underlying data the model was trained on would not be discussed publicly.

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” they wrote.

They added: “We plan to make further technical details available to additional third parties who can advise us on how to weigh the competitive and safety considerations above against the scientific value of further transparency.”

The release of GPT-4, the fourth iteration of OpenAI’s foundational system, has been rumored for months amid growing hype around the chatbot built on top of it.

In January, Altman lowered expectations for what GPT-4 would be able to do, telling the StrictlyVC podcast that “people are begging to be disappointed and they will be.”

On Tuesday, he asked for feedback.

“We’ve had the initial training of GPT-4 done for quite a while, but it’s taken us a long time and a lot of work to feel ready to release it,” Altman said on Twitter. “We hope you enjoy it and we really appreciate feedback on its shortcomings.”

Sarah Myers West, managing director of the AI Now Institute, a nonprofit group that studies the effects of AI on society, said releasing such systems to the public without oversight “is essentially experimenting in the wild.”

“We have clear evidence that generative AI systems routinely produce error-prone, derogatory and discriminatory results,” she said in a text message. “We cannot simply rely on companies’ claims that they will find technical solutions to these complex problems.”

OpenAI said it was planning a live demonstration Tuesday at 1 p.m. PT (4 p.m. ET) on the Google-owned video service YouTube.