Sam Altman, CEO of OpenAI, has hinted at the capabilities of GPT-5, saying it's "gonna be better at everything across the board." Speaking at the World Government Summit, Altman described GPT-5's promise: smarter, faster, and more multimodal. He emphasized that while competitors might claim similar advancements, GPT-5's enhanced intelligence remains the focal point.
For GPT-5, being "smarter" means understanding not just words but also images, sounds, and other kinds of information quickly and accurately. The more speculative prospect is an AI that can reason on its own and even build other AI systems without constant human oversight.
GPT-5 is the successor to GPT-4, the OpenAI model that currently requires a monthly subscription to access. Compared with GPT-4, GPT-5 aims to be more personalized, make fewer errors, and handle a wider range of tasks, eventually including video.
While details on GPT-5 remain scarce, OpenAI introduced a new AI model, Sora, on Friday morning. Sora can generate minute-long videos from text prompts.
According to OpenAI, Sora can create complex scenes with multiple characters, specific types of motion, and detailed backgrounds. The model understands not only what the user has asked for, but also how those elements would appear in the real world.