
OpenAI: First Look at Sora

The text-to-video AI tool taking the world by storm.

By: Samer Fakhri

OpenAI has recently unveiled a new AI model, Sora, which renders videos from text prompts and which the company says can "accurately interpret and generate compelling characters that express vibrant emotions." This shift in AI is once again a massive game changer, and it has been met with its fair share of skepticism and concern over its potential to be misused if not properly regulated.

The company behind ChatGPT recently uploaded numerous samples of Sora in use to its website: raw footage generated by the text-to-video AI without any additional modifications. The prompts included a hyperrealistic woman walking down a rainy Tokyo street and multiple giant woolly mammoths heading toward the camera through a snowy tundra.


This is a massive technological feat, especially given how quickly text-to-image generation became hyper-accurate through AI tools such as Midjourney, and how dramatically ChatGPT improved in its ability to answer questions between GPT-3.5 and its fourth iteration.

Still, there are concerns about copyright issues and the potential misuse of the tool to create "deepfakes." A deepfake is any form of media that has been altered to replace someone's likeness or to place them in a scenario they were never part of. While we see everyday, harmless uses of this — for example, TikTok trends of people generating AI images of celebrities running from each other — we also see dangerous uses of deepfakes in the adult industry as well as in politics.

With the introduction of Sora, deepfaking just became a lot easier: one no longer has to manually find content and alter it when the content can simply be generated from a single line of text.

Issues with Copyright:

Like other generative tools such as Midjourney and other text-to-image generators, Sora raises questions about its training data, as it appears to be trained on non-AI footage of the real world. The CEO of a non-profit AI firm added to this discourse, asking: "What is the model trained on? Did the training data providers consent to their work being used? The total lack of info from OpenAI on this doesn't inspire confidence."

OpenAI responded to these claims in a blog post, stating that it is engaging with artists, policymakers, and others to ensure a standard of safety for its tools before releasing them to the general public. OpenAI also stated: "We're also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora."

OpenAI admitted that despite its extensive work on the tool, "we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it." From what we have seen so far, responses to these tools have ranged from a lawsuit by The New York Times to lighthearted TikTok trends. How Sora will be used once it is fully open to the public is unknown at this time, but it raises an interesting question: just how far can we take AI, and when do we need to start placing 'limits' on it?
