Training AI


How is AI software trained? Both on images and videos?


Given the volatility of this topic, I’m going to pre-emptively jump in here and emphasize that replies need to answer the asked questions, and generalized discussion about AI/the ethics of AI/the future of AI/AI vs anything else is not suitable for this thread. Thanks! :slight_smile:


The kind of data used depends on the task and on what training data is available.
A lot of AI projects use PyTorch or TensorFlow/Keras for the training process.

If you want to know exactly how it works, Andrej Karpathy has a very nice short course on YouTube that uses Python (the same language Blender add-ons use). It's probably the best video series if you want a fast, deep dive into the subject.

If you want a more general explanation, it's like a blind dog trying to find its way back home without knowing where it is at the start. Slowly it recognizes familiar smells and remembers where it has been, so it can backtrack or push forward depending on whether it's getting closer or farther away. Once it can't find any way to improve its position, it's home, and the network is trained, so to speak.
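The "blind dog" analogy above is basically gradient descent. Here's a minimal toy sketch in plain Python (the function and numbers are invented for illustration): the loss is the dog's "distance from home", and each step moves in whichever direction reduces it.

```python
# Gradient descent on a toy 1-D problem, mirroring the "blind dog" analogy:
# the loss measures how far we are from "home", and each step moves
# in the direction that reduces it until no improvement is possible.

def loss(x):
    # Distance from "home" (the minimum is at x = 3).
    return (x - 3.0) ** 2

def gradient(x):
    # Derivative of the loss: tells us which way "home" is.
    return 2.0 * (x - 3.0)

x = -10.0            # start somewhere unknown
learning_rate = 0.1  # how big each step is
for step in range(100):
    x -= learning_rate * gradient(x)  # take a small step toward home

print(round(x, 4))  # converges very close to 3.0
```

Real networks do the same thing, just with millions of values instead of one.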

I actually have a Smart TV with AI, and the more movies I watch, the better their quality gets.

how does this AI process work?

How should I proceed to educate the AI? By watching 4K movies, or something else?


What exactly is getting better? Is it upscaling the movie and this is being improved?

Edit: Sounds more like misleading marketing


both upscaling and 4K movies.

doesn’t look like marketing to me.
I would say the picture is clearly, immensely better after three months of watching movies every day.

do you know how the AI learning mechanism works?
What should I do for the TV to learn?


Maybe my mom was right and the TV is just rotting your brain.

The TV isn’t getting smarter, you’re just getting dumber.


You have to be more specific about what exactly the AI is improving. What does the description say it is learning? The vaguer those descriptions are, the more likely there is nothing special about it.


First you need to know what it is actually learning/improving.


the whole picture is better.

with more beautiful colors
more real (it feels like I’m inside the movie)


That’s… not a thing, but it is a shining example of the placebo effect at work :slight_smile: AI can at best interpolate frames to create a higher frame rate, or shift the color grading slightly; it absolutely cannot rewrite your movie frame by frame in realtime while you’re watching it and magically make it better somehow.

Also, even with color grading, movies aren’t 32-bit color; they’re 16-bit (or more likely 8-bit) RGB with compression, so there’s actually not much flexibility possible. Not to mention, your TV would need hundreds of terabytes of storage to store each frame and process it individually, and the realtime processing you’re talking about would take a current-gen GPU, which your TV 100% does not have.

It doesn’t matter what you think is happening; it is necessary to know what exactly is happening. Machine learning requires something called a loss function, which compares the current prediction to the “desired” output. During training, the predictions are pushed closer and closer to that “desired” output. The “desired” output is the training target, and you have to know what it is.
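To make the loss function idea concrete, here's a minimal sketch in plain Python of mean squared error (MSE), one common choice; the example values are made up:

```python
# A minimal loss function: mean squared error (MSE), the "how wrong am I"
# number that training tries to drive toward zero.

def mse_loss(prediction, target):
    # Average of the squared per-element differences.
    return sum((p - t) ** 2 for p, t in zip(prediction, target)) / len(target)

# A prediction far from the target gives a large loss...
print(mse_loss([0.0, 0.0], [1.0, 2.0]))  # 2.5
# ...and a perfect prediction gives zero loss.
print(mse_loss([1.0, 2.0], [1.0, 2.0]))  # 0.0
```

Without a known target to compare against, there is nothing for the loss to measure, which is the point being made above.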

What could be happening in your TV is that it adjusts contrast, sharpens the image, and so on, to give you a certain look. For a marketing department that might be called AI; for everyone else it’s called filters.
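For illustration, here's what such a "filter" might look like in plain Python: a fixed contrast boost is just arithmetic on pixel values, with no learning involved (the pixel values and the 1.2 factor are invented for this toy example, assuming 0–255 grayscale):

```python
# A fixed contrast "enhancement" is just arithmetic on pixel values;
# nothing is learned. Pixels are assumed to be 0-255 grayscale here.

def boost_contrast(pixels, factor=1.2):
    # Stretch values away from mid-gray (128), then clamp back to 0-255.
    return [min(255, max(0, int(128 + (p - 128) * factor))) for p in pixels]

# Darks get darker, brights get brighter, mid-gray stays put.
print(boost_contrast([100, 128, 200]))  # [94, 128, 214]
```

A marketing department could happily relabel this one-liner as "AI picture enhancement".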


If you want to know how AI upscaling works, here’s a good source:

If you want to learn without doing the work, that’s just not how any of this works.

but in general
without considering this issue of Smart TV.

how is the video AI of a piece of software trained?

is it suitable to play 4K movies?
or does any video resolution train the AI?


First you need the training dataset. If we take the example of upscaling, you need the low resolution image as input and the high resolution image as output.
Now you need to define the neural network architecture (that’s the thing you are actually training). Here you need to make sure the output is twice as large as the input, or even larger, depending on what you want. If your goal is to make it work for arbitrary input sizes, this is where you need to make sure things are set up properly.
What’s left now is the training pipeline, where you, for instance, define the loss function (how to calculate the difference between the neural network’s prediction and the actual target image). Another component here is the optimizer, which computes how the individual trainable values of the neural network need to be updated after every training batch.
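The three pieces above (dataset, architecture, loss + optimizer) can be sketched with a deliberately tiny toy in plain Python. A real upscaler would be a convolutional network in PyTorch; this one-weight "model" just learns that the target is the input times two, but the loop has the same shape (all names and numbers here are invented for illustration):

```python
# Toy version of the full pipeline: dataset, a (trivially small) "network",
# a loss, and an optimizer step.

# 1. Training dataset: (input, target) pairs.
dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# 2. "Architecture": a single trainable weight w, predicting y = w * x.
w = 0.0

# 3. Training loop: squared-error loss, plain gradient-descent optimizer.
learning_rate = 0.01
for epoch in range(200):
    for x, target in dataset:
        prediction = w * x
        # d(loss)/dw for loss = (prediction - target)^2
        grad = 2.0 * (prediction - target) * x
        w -= learning_rate * grad  # optimizer update

print(round(w, 3))  # converges very close to 2.0
```

A real upscaling network works the same way, except w is replaced by millions of weights and x and the target are low- and high-resolution images.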

As a concrete example, you may have a look at this:

However, if you are interested in learning the basics, I would suggest these courses (all freely available online).

It isn’t, in your case. As has been said repeatedly, your TV is not using AI on 4K movies; it is using filtering and other common tricks.

as I said the question is not about my TV.
it’s about AI in general

answer objectively with yes or no:

how is the video AI of a piece of software trained?

is it suitable for playing 4K movies?
or does any video resolution train the AI?

another question :
is the upscaling of Full HD videos to 4K done by AI?

thanks a lot

There are plenty of techniques for that. It depends on the task and the computational requirements.

Depends on the hardware and the computational complexity of the used neural network.

Depends on the hardware and the computational complexity of the used neural network.

It can be done using neural networks. Whether those techniques are used is difficult to judge.

does that mean my upscaling can be done by AI?

so does it make any sense that my movies are being improved by AI?