ChatGPT Creator OpenAI Under Fire Following an Interview About Sora

Apparently, the company's CTO is "not sure" how the text-to-video model was trained.

Amidst its ongoing battle with Elon Musk, ChatGPT and DALL-E developer OpenAI has once again found itself embroiled in controversy, this time surrounding its recently unveiled text-to-video AI model, Sora.

Unveiled a month ago, the new video-generating diffusion model was developed as part of the team's efforts to teach artificial intelligence to understand and simulate the dynamics of the physical world. Built on a transformer architecture akin to that of GPT models, Sora can generate videos of up to a minute in length from a text prompt typed in by the user.
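
For readers wondering what a "diffusion model built on a transformer architecture" actually does under the hood, here is a minimal, purely illustrative Python sketch of the kind of denoising loop such systems run to turn a text prompt into video frames. Every name in it is a made-up stand-in; OpenAI has not published Sora's code or a public API, so the snippet reflects generic diffusion-model structure rather than anything specific to Sora.

```python
# A minimal, purely illustrative sketch of text-conditioned video diffusion.
# Every name below is a made-up stand-in: OpenAI has not released Sora's code
# or a public API, so this only mirrors the generic structure of a diffusion
# model that denoises a latent "video" tensor conditioned on a text embedding.
import torch

def encode_prompt(prompt: str, dim: int = 64) -> torch.Tensor:
    # Stand-in for a real text encoder: deterministically hash the prompt
    # into a fixed-size conditioning vector.
    g = torch.Generator().manual_seed(abs(hash(prompt)) % (2**31))
    return torch.randn(dim, generator=g)

class TinyDenoiser(torch.nn.Module):
    # Stand-in for a diffusion transformer: predicts the noise present in a
    # noisy latent video (frames x height x width), given the text embedding.
    def __init__(self, frames: int = 8, size: int = 16, text_dim: int = 64):
        super().__init__()
        self.frames, self.size = frames, size
        latent_dim = frames * size * size
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim + text_dim, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, latent_dim),
        )

    def forward(self, noisy_latent: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        x = torch.cat([noisy_latent.flatten(), text_emb])
        return self.net(x).view(self.frames, self.size, self.size)

@torch.no_grad()
def generate_video(prompt: str, steps: int = 50) -> torch.Tensor:
    # Classic reverse-diffusion loop: start from pure noise and repeatedly
    # subtract a fraction of the predicted noise until a cleaner latent remains.
    denoiser = TinyDenoiser()  # untrained here; a real model would be trained on video data
    text_emb = encode_prompt(prompt)
    latent = torch.randn(denoiser.frames, denoiser.size, denoiser.size)
    for _ in range(steps):
        predicted_noise = denoiser(latent, text_emb)
        latent = latent - predicted_noise / steps
    return latent  # a real system would decode this latent into RGB frames

if __name__ == "__main__":
    print(generate_video("a corgi surfing at sunset").shape)  # torch.Size([8, 16, 16])
```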

Upon its reveal, the prevailing question on many people's minds was the same one often posed to other AI models: what data was used for its training? An answer was expected during a recent interview published by The Wall Street Journal, but unfortunately, thanks to OpenAI CTO Mira Murati's mastery of dodging questions, that was not the case.

When questioned about the data used to train Sora, Murati replied, in a manner almost as mechanical and automatic as OpenAI's products, that the model was, of course, trained on publicly available and licensed data.

Seemingly having expected no other answer, interviewer Joanna Stern pressed further, asking the CTO to elaborate and explain which sources lie behind the "publicly available and licensed data" mantra. When asked whether YouTube, Facebook, or Instagram videos were used for training, Murati claimed she was "not sure about that," a statement that hardly holds water coming from the company's Chief Technology Officer.

Questioned further about the use of Shutterstock images, Murati declined to discuss Sora's training altogether, repeating once again that the data used was "publicly available and licensed." Interestingly, she did confirm the use of Shutterstock materials after all, but that happened off-camera and was only revealed in a footnote to the WSJ piece.

The reaction to the "not sure" comments was exactly what one would expect, with thousands of people lambasting OpenAI all over the internet. While the unlicensed use of materials by various AI developers is hard to deny at this stage, many still found the CTO's response on the matter outrageous, accusing Murati of lying.

So, what do you think? Was it an unsuccessful attempt at dishonesty that many easily saw through, or was it just an inarticulate way of safeguarding the secrets and methods behind OpenAI's success? Share your thoughts in the comments.

Speaking of AI, yesterday the European Union passed the world's first comprehensive law regulating artificial intelligence. Endorsed by an overwhelming majority of 523 votes in favor to 46 against, with 49 abstentions, the new act seeks to protect human rights by assigning obligations to AI systems based on their potential risks and level of impact.

Don't forget to join our 80 Level Talent platform and our Telegram channel and follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.

Comments (4)

  • Anonymous user (a month ago):

    Over a year ago, it became painfully obvious that they were baking, in the name of "safety," a sort of censorship and bias into their models that is meant to enable a new level of social engineering. There will be no more Marquis de Sades, Chuck Palahniuks, or any other artists who upset the status quo if these monsters win. Aligned AI CANNOT come from a capitalist corporate organization. It's not possible.

  • Dubois Peter (a month ago):

    OpenAI are criminals and they know it: There is no other industry that is allowed to simply steal other people's products without any regulation in order to destroy their livelihoods. Every car on the road is subject to the strictest rules and controls in order to obtain market approval, but AI companies can apparently do what they want!

  • Anonymous user (a month ago):

    I'm not at all a fan of moderation. What is typically called moderation today is flat censorship. Not cool from a supposed journalistic website

  • Anonymous user (a month ago):

    AI data farming is probably why Reddit locked down its API.

