Saturday, December 21, 2024

The Future Of Text-To-Video Based Generative AI Magically Appears Via Newly Released OpenAI Sora Turbo


In today’s column, I explain the hullabaloo over the advent of text-to-video (T2V) in generative AI apps and large language models (LLMs). The upshot is this. There is little doubt that text-to-video is still in its infancy at this time, but, by gosh, keep your eye on the ball because T2V is going to gain significant advances that will ultimately knock the socks off the world. As Dr. Seuss might declare, oh, the things that you can do (hang in there, I’ll cover the possibilities momentarily).

As tangible evidence of what text-to-video can do right now, I’ll include in this discussion an assessment of the newly released OpenAI product Sora Turbo, a cousin of the wildly and widely popular ChatGPT. If you are tempted to try out Sora Turbo, it is initially only being made available to ChatGPT Plus and ChatGPT Pro users, meaning that you must pay-to-play. Sad face.

A notable consideration to keep in mind is that ChatGPT currently garners a reported 300 million weekly active users, and though not all of them are going to have ready access to Sora Turbo, an impressive many millions will. Competing products are likely to find that Sora Turbo becomes the 600-pound gorilla and the elephant in the room. By and large, a massive number of users and a massive amount of media attention are going to shift overnight toward Sora Turbo.

Let’s talk about it.

This analysis of an innovative AI advancement is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). For my coverage of the top-of-the-line ChatGPT o1 model and its advanced functionality, see the link here and the link here.

Getting Up-To-Speed On AI Modes

I’d like to lay out some foundational aspects so that we can then dive deeply into the text-to-video realm.

Generative AI and LLMs generally began by providing text-to-text (T2T) capabilities. You type in text as a prompt, and the AI responds with text such as an essay, poem, narrative, etc. That’s cool. Another exciting feature consists of text-to-image, whereby you enter a prompt, and the AI generates an image such as a photo-realistic picture, a digital painting, a still cartoon, or other kinds of static imagery. Those two modes of usage are nearly old hat now.

The dream for AI researchers is to allow a person to enter a prompt and then have the AI generate a video. A stripped-down way to do this is to focus solely on the visual portion of the video and not include any audio. Gradually, we will see the production of visual video elements accompanied hand-in-hand by AI-generated matching audio (some LLMs do this, but in quite a limited fashion).

A bonus on top of doing text-to-video is the possibility of taking an image as input and turning that into a video. The image might be by itself as the source content, or the AI might accept both a prompt as text and an accompanying image. Finally, the topmost aim is to allow the use of a separate video as the input source, possibly accompanied by text and images, all of which the generative AI utilizes to produce a suitable video. I refer to that as the all-encompassing full-meal deal.

The Holy Grail Is Suitability Of The Generated T2V

Notice that I just mentioned that the quest or hope is that the generative AI will produce a suitable video. My emphasis there is on the nature of suitability.

Suitability is the trickiest part of this grand scheme. Allow me to explain. If someone enters a prompt that tells the AI to produce a video about a cat wearing a hat that is sitting in a box and riding on a moving train, I’d like you to take a moment and imagine what that video looks like.

Go ahead, envision away, I’ll wait.

I dare say that if you compared notes with someone else about precisely what the video should look like, their conception of the video would be quite adrift from what you had in mind. Sure, you would both undoubtedly include a cat of some kind, a hat of some kind on the head of the cat, a box of some kind with the cat inside, and a moving train of some kind. But all of those details might vary dramatically between the two of you. Yours could be photo-realistic while the other person imagined animation. The colors would differ, the sizes and shapes would differ, and the action of the cat and the moving train would differ.

I’m sure you get the picture (aha, a pun).

Suitability, or the act of meeting the request posed by the human user, is a tough nut to crack. Your first impulse might be that if a person writes a lengthy prompt, that would seemingly narrow things down. It might do so to some extent. On the other hand, the odds are notably high that there would still be marked differences.

Sora Turbo Enters Into The Scene

Earlier this year, OpenAI made available on a limited basis their new product Sora. Sora is a generative AI app that does text-to-video. Though it is referred to as text-to-video, it also allows images and video as input.

As an aside, the ultimate aim of AI makers across the board is to have what is known as X-to-X modes for generative AI, meaning that X can be text, images, audio, video, and anything else we come up with. The angle is that the end game consists of taking any type of medium as input and having the AI produce any desired type of medium as the output.

Boom, drop the mic.

No worries, we’ll get there (or, maybe we should be worried, as I’ll bring up toward the end here).

After Sora had its limited availability tryouts, OpenAI made some important changes and has now released the modified and more advanced version, known as Sora Turbo. Clever naming. You might want to go online and watch some posted videos showcasing the use of Sora Turbo. I say that because it is difficult in a written form such as this discussion to convey the look and feel of the prompts and controls you can use, and likewise allow you to see the generated videos. The official Sora portion of the OpenAI website shows some handy examples, plus there are already tons of user-made videos available on social media.

Components Of High-End Text-To-Video AI Apps

The next aspects that I will cover are the types of features and functionality that we nowadays expect a high-end text-to-video AI app to possess. I bring this up to acquaint you with the ins and outs of AI-based text-to-video capabilities.

In a sense, this is almost as though you are interested in possibly using or buying a car, but you aren’t familiar with the features and functions of automobiles. It can be tough to shop for a car if you are in the dark about what counts.

I will briefly identify some of the keystone elements of text-to-video. In addition, I’ll provide an assigned letter grade reflecting my perception of the just-released Sora Turbo capabilities. I want to clarify that my letter grading is based on a first glance. My to-do list consists of spending some dedicated time with Sora Turbo and subsequently doing an in-depth review.

Be on the lookout for that posting.

T2V Suitability Or Faithfulness

I already brought up the fact that suitability is the Holy Grail of text-to-video.

Somehow, once the AI parses the input prompt, a video is to be generated that matches what the user has inside their mind. Whoa, we aren’t yet at mind-reading by AI (well, there are efforts underway to create brain-machine interfaces or BMI, see my discussion at the link here).

The AI industry tends to refer to this suitability factor as faithfulness or honesty. The AI is supposed to do a bang-up job and reach a faithful or honest rendering in video format of what the user wants.

I am going to say that all the readily available T2V is still at a grade level of C, including Sora Turbo. Inch by inch, clever techniques are being devised to home in on what a user wants. This is mainly being done in AI research labs, and we will gradually see those capabilities come into the public sphere.

T2V Visual Vividness, Quality, And Resolution

The videos generated in the early days of text-to-video were very rudimentary. They were mainly low-resolution. The graphics were jerky while in motion. I’m not knocking those heroic initial efforts. We ought to appreciate the pioneering work, else we wouldn’t be where we are today.

Tip of the hat.

My point is that thankfully, we’ve come a long way, baby. If you get a chance to see the Sora Turbo AI-generated videos, the vividness, quality, and resolution are pretty much state-of-the-art for T2V. I’ll give this an A-/B+.

Yes, I am a tough-as-nails grader.

T2V Temporal Consistency Across Frames

I’m sure that you know that movies consist of individual frames that flow past our eyes so fast that we perceive fluid motion in what we are watching. Conventional text-to-video generation adheres to that same practice. A series of frames is generated one after another, and when they flow along, you perceive motion.

The rub is this. Suppose that in one frame a cat wearing a hat is at the left side of the view. The next frame is supposed to show the cat moving toward the right side, having moved just a nudge to the right. And so it goes.

If the AI doesn’t figure out things properly, the next frame might show the cat suddenly at the far right of the view. Oops, you are going to be jarred that the cat somehow miraculously got from the left to the right. It won’t look smooth.

This is generally known as temporal consistency. The AI is to render the contents of the frames so that, from one frame to the next as the frames go past our eyes, there is appropriate consistency. It is a hard problem, just to let you know. I’ll give Sora Turbo a B and anticipate this will get stronger as they continue their advancements.
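
To make the idea concrete, here is a minimal sketch in Python of the sort of crude frame-to-frame check one could run, assuming the clip has already been decoded into numpy arrays. This is only a toy proxy for temporal consistency, my own illustration rather than anything Sora Turbo or other T2V systems actually use internally.

```python
import numpy as np

def temporal_consistency_score(frames: list[np.ndarray]) -> float:
    """Average per-pixel change between consecutive frames (0 to 255).

    Smaller values suggest smoother motion; a sudden spike on one
    transition suggests a jarring jump, such as the cat leaping from
    the left edge to the right edge between two frames.
    Each frame is assumed to be an H x W x 3 uint8 array.
    """
    if len(frames) < 2:
        return 0.0
    diffs = [
        np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean()
        for prev, curr in zip(frames, frames[1:])
    ]
    return float(np.mean(diffs))

# A perfectly static "clip" has zero frame-to-frame change.
static_clip = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(10)]
print(temporal_consistency_score(static_clip))  # 0.0
```

A smooth pan yields small, steady differences from frame to frame, whereas a cat that teleports across the view shows up as a sudden spike.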

T2V Object Permanence

You are watching an AI-generated video, and it shows a cat wearing a hat. The cat moves toward the right side of the scene. Suddenly, the hat disappears. It vanished. What the heck? This wasn’t part of the text prompt in the sense that the user didn’t say anything about making the hat vanish.

The AI did this.

The parlance for this is that we expect the AI to abide by object permanence and not mess around with things. An object that is shown in one frame should customarily be shown in the next frame, perhaps moved around or partially behind another object, but it ought to normally still be there somewhere. I’ll score Sora Turbo as a B-/C+.

Again, this is a hard problem and is being avidly pursued by everyone in this realm.
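
As a rough illustration of what checking for object permanence might look like, here is a toy Python sketch. It assumes some upstream detector has already labeled the objects visible in each frame (the labels and the helper are hypothetical), and it simply flags frames where a previously visible object is no longer reported. Real systems must be more forgiving, since objects legitimately get occluded or exit the scene.

```python
def object_permanence_violations(frame_objects: list[set[str]]) -> list[tuple[int, set[str]]]:
    """Flag frames where a previously visible object vanishes.

    frame_objects[i] is the set of object labels detected in frame i,
    e.g., {"cat", "hat", "box"}, supplied by whatever detector you use.
    Returns (frame_index, missing_labels) pairs for suspect frames.
    Note: a genuine occlusion or an exit from the scene is not really a
    violation, so a fuller checker would need to track that context too.
    """
    violations = []
    for i in range(1, len(frame_objects)):
        missing = frame_objects[i - 1] - frame_objects[i]
        if missing:
            violations.append((i, missing))
    return violations

# The hat vanishes going into frame 3 -- the kind of glitch described above.
frames = [{"cat", "hat"}, {"cat", "hat"}, {"cat", "hat"}, {"cat"}]
print(object_permanence_violations(frames))  # [(3, {'hat'})]
```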

T2V Scene Physics

This next topic consists of something known as scene physics for text-to-video. It is one of the most beguiling of all capabilities and keeps AI researchers and AI developers up at night. They probably have nightmares, vivid ones.

It goes like this. You are watching an AI-generated video, and a character drops a brittle mug. Here on planet Earth, the mug is supposed to obey the laws of gravity. Down it falls. Kablam, the mug hits the floor in the scene and shatters into a zillion pieces.

That is the essence of scene physics. The kinds of intense calculations needed to figure out which way objects should natively go, based on the ordinary laws of nature, are a big hurdle. In addition, the user might have stated that physics is altered, maybe telling the AI to pretend that the action is occurring on the Moon or Mars. I’ll score Sora Turbo as a B-/C+.
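
To see why this is demanding even in the simplest case, consider a toy sketch (my own illustration, not how any T2V model works internally) of where that dropped mug should appear frame by frame if the clip obeys basic kinematics, on Earth versus on the Moon. The frame rate and drop height are assumed values.

```python
FPS = 24            # assumed frame rate of the generated clip
DROP_HEIGHT = 1.2   # assumed height of the drop, in meters

def mug_height_per_frame(gravity: float, fps: int = FPS, h0: float = DROP_HEIGHT) -> list[float]:
    """Height of the mug above the floor at each frame until impact.

    Uses basic kinematics, h(t) = h0 - g * t^2 / 2, the kind of constraint
    a physically plausible clip implicitly has to satisfy.
    """
    heights = []
    t = 0.0
    while True:
        h = h0 - 0.5 * gravity * t * t
        if h <= 0:
            heights.append(0.0)   # impact: the mug hits the floor and shatters
            return heights
        heights.append(h)
        t += 1.0 / fps

# On Earth the fall lasts about half a second (~13 frames at 24 fps);
# under the Moon's weaker gravity it takes well over a second (~31 frames).
print(len(mug_height_per_frame(gravity=9.81)))
print(len(mug_height_per_frame(gravity=1.62)))
```

Multiply that single falling object by every object, surface, and light source in a scene, and the scale of the challenge becomes clearer.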

T2V Grab-Bag Of Features And Functions

I don’t have the space here to go into the myriad of text-to-video features and functions in modern-day T2V.

To give you a taste of things, here’s a list of many equally important capabilities in T2V products:

  • Stylistic options
  • Remixing and re-rendering
  • Video output length
  • Time to render
  • Sequencing and storyboarding
  • Source choices
  • AI maker preset usage limitations
  • Watermarking of generated video
  • Intellectual property restrictions
  • Prompt library
  • Prompt storage functionality
  • Video storage functionality
  • Prompt sharing and control
  • Etc.

One thing you ought to especially be aware of is that T2V right now usually only generates video of a relatively short duration. When T2V first came around, the videos were a second or two in length. They were nearly a blink of an eye.

Nowadays, many of the mainstay players can do somewhere around 10 to 20 seconds of video. That’s probably just enough to provide a brief scene, but it certainly doesn’t equate to a full-length movie. You can usually use a sequencing or storyboarding function that allows you to place one generated scene after another. That’s good. The downside currently is that the scenes aren’t likely to line up in a suitable alignment. Scene-to-scene continuity is typically weak, and it shows.

Overall, across the extensive list above, I’ll say that Sora Turbo is somewhere around an A-/B+, and you’ll find plenty of useful controls and functions to keep you busy and entertained.

The Emerging Traumas Of Readily Usable AI Text-To-Video

Shifting gears, I said at the opening of this discussion that text-to-video is quite a big deal. Let’s do a sobering unpacking of that thought.

Envision that with the use of prompts, just about anyone will eventually be able to produce top-quality videos that match Hollywood movies. This sends shivers down the spine of the entertainment industry. AI is coming at all those movie stars, filmmakers, support crews, and the like. Some in the biz insist that AI will never be able to replicate what human filmmakers can achieve.

Well, it’s debatable.

Furthermore, if you construe the writer of the prompt to be a so-called “filmmaker,” you could argue that a human is still in the loop. One twist is that there are already efforts toward having generative AI come up with prompts that feed into AI-based text-to-video. Blasphemous.

There is something else of more immediate concern since the likelihood of T2V creating full-length top-notch movies is still a bit further on the horizon. The immediate qualm is that people are going to be able to make deepfakes of an incredibly convincing nature. See my coverage of deepfake-making via the AI tools to date, at the link here and the link here, and what’s likely going to happen with the next wave of AI advances.

Utterly convincing deepfakes will be made by the millions and billions. At low or nearly zero cost. They will be easily distributed digitally across networks, again at negligible cost. They will be extremely hard to differentiate from real-life, real-world videos.

At an enormous scale.

Disconcertingly, they will look like they are real-life videos. Consider the ramifications. A person is wanted for a heinous crime and a nationwide hunt is underway. The public is asked to submit videos from ring cams, their smartphones, and anything they have that might help in spotting the individual.

It would be very easy to create a video that seemed to show the person walking down the street in a given city, completely fabricated by using AI-based text-to-video. The video is believed. This might cause people in that area to become panicked. Law enforcement resources might be pulled from other locales to concentrate on where the suspect was last presumably seen.

You get the idea.

It Takes A Village To Decide Societal Norms For T2V

In my grab-bag list above of T2V features, I noted that watermarking is a feature that AI makers are including in the generated video, allowing for the potential detection and tracking of deepfakes. It is a cat-and-mouse game where evildoers find ways to defeat the watermarks. Another item listed was the AI maker placing restrictions on what can be included in a generated video, such as not allowing the faces and figures of politicians, celebrities, and so on. Again, there are sneaky ways to try and overcome those restrictions.
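
For readers curious what watermarking even means mechanically, here is a deliberately simplistic Python sketch that hides a few bits in the least-significant bits of a single frame. It is purely illustrative of the general idea of invisible marking; it is not how OpenAI or other AI makers watermark their output, and a scheme this naive would not survive re-encoding, cropping, or a determined evildoer.

```python
import numpy as np

def embed_toy_watermark(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a short bit pattern in the least-significant bits of a frame."""
    marked = frame.copy()
    flat = marked.reshape(-1)                    # view into the copied pixels
    for i, bit in enumerate(bits):
        flat[i] = (int(flat[i]) & 0xFE) | bit    # overwrite the lowest bit
    return marked

def read_toy_watermark(frame: np.ndarray, n_bits: int) -> list[int]:
    """Recover the embedded bits from an unmodified marked frame."""
    return [int(v) & 1 for v in frame.reshape(-1)[:n_bits]]

frame = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)
marked = embed_toy_watermark(frame, [1, 0, 1, 1, 0, 0, 1, 0])
print(read_toy_watermark(marked, 8))   # [1, 0, 1, 1, 0, 0, 1, 0]
```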

If you weren’t thinking about AI ethics and AI laws before now, it is time to put on some serious thinking caps.

To what degree should AI makers have discretion in the controls and limits? Should new AI-related laws be enacted? Will such laws potentially hamper AI advancement and place our country at a disadvantage relative to others (see my analysis of AI advances as a form of exerting national political power on the world stage, at the link here)?

OpenAI acknowledges the disconcerting dilemma and noted this as a significant point in their official webpage about Sora Turbo entitled “Sora Is Here” (posted December 9, 2024): “We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.”

Yes, we all have a stake in this. Go ahead and get up-to-speed on the latest in text-to-video, and while you are at it, join in spirited and crucial discussions about where this is heading and what we can or ought to do to guide humankind in a suitable direction.

There it is again, the importance of suitability.
