OpenAI has announced the arrival of its next language model, the mysterious GPT-5. The start-up says it has started training the AI, which is expected to be released later this year. It promises a new leap in quality…
This Tuesday, May 28, 2024, OpenAI published a new statement on its blog. Just two weeks after the announcement of GPT-4o, the start-up mentioned the arrival of its next language model, the long-awaited GPT-5.
According to the American company, this AI model represents a new step towards the design of artificial general intelligence, the ultimate goal of OpenAI.
Everything suggests that training is underway and that an announcement will follow in the coming months. For the time being, though, OpenAI is careful not to detail what new features the next GPT will offer.
In addition, OpenAI’s press release is primarily dedicated to the creation of a “safety and security committee.” Comprised of directors Bret Taylor (chairman), Adam D’Angelo, Nicole Seligman and CEO Sam Altman, the committee will be “responsible for making recommendations to the entire board on critical decisions” regarding safety and security. To formulate these recommendations, the group will also draw on the advice of scientists and other security experts.
This decision was probably motivated by the recent scandal involving Scarlett Johansson. The actress believes that one of the voice options for GPT-4o, called Sky, has the same vocal timbre as hers. She accuses OpenAI of using her voice without asking her permission, and reveals that the company had offered her the chance to lend her voice to ChatGPT. Faced with the anger of Scarlett Johansson, who steadfastly refuses to cooperate, OpenAI chose to deactivate Sky. The start-up maintains, however, that the voice is not that of the famous actress.
With the announcement of this committee, OpenAI is clearly trying to reassure its many detractors, who are concerned about the excesses of generative AI. The committee’s first task, spanning 90 days, will be to evaluate OpenAI’s safety processes and propose improvements. Once the evaluation is complete, the committee’s findings and proposed changes will be made public in a further effort at transparency.