Dystopian Future and Our Role In It

I wasn’t sure where to put this thread so I apologize to the moderator if you feel the need to move it, but I think this is as good a place as any as there are technical issues I would like to discuss. Please bear with me as I explain my rationale first.

So, as we know, deepfakes and the ease with which anyone with a computer can manipulate video footage are already pretty much here, but in the next few years we will certainly reach the point where the human eye/brain will not be able to differentiate between that which is real and that which is not.

I have a keen interest in psychology (just a side interest), so I am becoming concerned with the implications of this technology, regarding how it will affect not just elections and other “political” issues, but also humanity’s concept of reality. It’s an abstract concept whose importance I think people are underestimating.

Essentially, to put it more concisely, people will no longer be able to believe anything they don’t see happen directly, live, in front of their own eyes. Everything on the internet will be questionable (this has already happened vis-à-vis vaccines, for example): that which isn’t real will be regarded as real, and that which is real will be disregarded as fake.

Now obviously this issue isn’t only about deepfakes and CGI, but these are our domain, and I feel we have some responsibility to help counter this threat.

So, I had the idea that we could try to make some kind of software that can be added to browsers and will somehow be able to signal to the viewer that the video they are watching contains elements that have been manipulated or added in post.

I am most definitely not a technical person, and I see this potential project as something not to profit from, but as something in the very same spirit of our beloved Blender: everyone, for free and with ease, should be able to install and use it.

So I’m looking to see first and foremost what you guys think, am I insane? :smiley: Is something like this even remotely feasible? Is anyone interested in the idea? How would we go about it and what are your ideas?

I will throw this idea out there first and see what you think… As light seems to be one of the most difficult things to fake, could the addon exploit this for example? Something that can detect anomalies in the lighting of the video?
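
To make the lighting/noise idea a bit more concrete, here is a rough sketch in plain Python of one way such an anomaly check could work. The premise (a simplification, and every function name and threshold here is made up for illustration) is that composited regions often carry different high-frequency noise and shading statistics than the surrounding camera footage, so you can compare per-block noise levels against the frame’s median and flag outliers:

```python
# Illustrative sketch: flag blocks of a frame whose high-frequency "noise
# level" deviates sharply from the rest. Spliced/composited regions often
# have different noise and lighting statistics than real camera footage.
# Not a production detector; thresholds and names are hypothetical.

from statistics import median

def highpass_residual(frame):
    """3x3 Laplacian of a 2D grayscale frame (list of lists of numbers)."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * frame[y][x]
                         - frame[y - 1][x] - frame[y + 1][x]
                         - frame[y][x - 1] - frame[y][x + 1])
    return out

def block_noise_levels(frame, block=8):
    """Mean absolute high-pass residual per non-overlapping block."""
    res = highpass_residual(frame)
    h, w = len(frame), len(frame[0])
    levels = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [abs(res[y][x])
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            levels[(by, bx)] = sum(vals) / len(vals)
    return levels

def suspicious_blocks(frame, block=8, ratio=3.0):
    """Blocks whose noise level sits `ratio` times above the frame median."""
    levels = block_noise_levels(frame, block)
    med = median(levels.values()) or 1e-9
    return [pos for pos, lvl in levels.items() if lvl > ratio * med]
```

A real detector would need far more than this (per-frame statistics are easy to match deliberately), but it shows the shape of the idea: you don’t prove a video is fake, you look for regions that are statistically inconsistent with the rest of the footage.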

Please feel free to productively critique anything I have written in this thread, the whole point is to get the Blender community involved, there is no way I could do something like this on my own.


I know that there are some algorithms for detecting doctored images, and I imagine the same principles could be applied to deepfake videos, given enough thought and consideration.
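
One classic doctored-image algorithm of that kind is copy-move forgery detection: when part of a picture is cloned over another area (to hide or duplicate something), the image ends up containing two regions with identical pixel content, which a blockwise duplicate search can find. A toy sketch in plain Python over a 2D grayscale “image” (all names here are illustrative; real implementations match near-duplicates via DCT or feature hashes, not exact equality):

```python
# Copy-move forgery detection, toy version: find pairs of image blocks
# with identical content. Flat (low-variance) blocks are skipped, since
# e.g. a clear sky would otherwise match itself everywhere.

def _variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def duplicated_blocks(image, block=8, min_var=1.0):
    """Return pairs of block positions whose pixel content is identical."""
    h, w = len(image), len(image[0])
    seen = {}     # block content -> first position it appeared at
    pairs = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            flat = [image[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            if _variance(flat) < min_var:
                continue          # too flat to be meaningful evidence
            key = tuple(flat)
            if key in seen:
                pairs.append((seen[key], (by, bx)))
            else:
                seen[key] = (by, bx)
    return pairs
```

The video versions of these ideas work frame by frame and add temporal cues (blink rates, head pose, compression inconsistencies), but the underlying principle is the same: look for internal evidence that the pixels were assembled rather than captured.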

Here are a few examples:


Being concerned is a good thing, as there are certainly serious issues today with folks no longer caring about the truth. The world won’t end because of it, though. People telling lies about other people isn’t a new concept. It’s bad enough already (speaking from experience) without giving the villains better tools to do it.

About deepfakes: from what I understand, one of the most successful methods is generative adversarial networks (https://en.wikipedia.org/wiki/Generative_adversarial_network). The fun thing about these is that, in addition to getting a network which is good at creating fakes, you also get one which is good at recognizing fakes, so the technique automatically counterbalances its own utility for creating indistinguishable fakes.
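
As a toy illustration of that counterbalancing point, here is the adversarial loop shrunk down to one-dimensional “data” in plain Python: the generator is a single scalar `g`, the discriminator is a logistic model `D(x) = sigmoid(w*x + b)`, and training alternates between sharpening the detector and improving the fake. Everything here (data, learning rates, step count) is made up for illustration; real GANs are deep networks, but the structure is the same:

```python
# Minimal 1-D GAN sketch. The discriminator learns to tell real samples
# from the generator's fake; the generator learns to fool it. As a side
# effect of training, (w, b) is a fake detector — the "free" recognizer
# the GAN setup gives you.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_toy_gan(real_samples, steps=500, lr=0.05):
    w, b = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + b)
    g = 0.0           # the generator's single "fake sample"
    for _ in range(steps):
        # Discriminator step: raise D on real samples, lower it on the fake.
        grad_w = grad_b = 0.0
        for x in real_samples:
            d = sigmoid(w * x + b)
            grad_w -= (1.0 - d) * x / len(real_samples)
            grad_b -= (1.0 - d) / len(real_samples)
        d_fake = sigmoid(w * g + b)
        grad_w += d_fake * g
        grad_b += d_fake
        w -= lr * grad_w
        b -= lr * grad_b
        # Generator step: nudge g so the discriminator scores it as "real".
        d_fake = sigmoid(w * g + b)
        g += lr * (1.0 - d_fake) * w
    return g, w, b
```

Trained against real samples clustered near 5.0, the fake value `g` drifts toward them, and along the way the discriminator is exactly the fake-spotter mentioned above; the catch, as the arms-race comments later in the thread note, is that the generator eventually catches up with it.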

People don’t view traditional lies the same way they will view convincing fake videos. Video has much more “credibility” to the viewer on a psychological level. Will it “destroy the world”? Who knows; if it got the wrong person elected or reelected, then it’s entirely possible. The people using deepfakes for political reasons are inherently unlikely to be the good guys. (That’s not to say deepfakes can’t be “good”, just that they need to be easily identifiable.)

Regarding programs that can spot deepfakes etc., the point I am making is that it is not easy at the moment. You have to be dedicated to some degree to find a program and learn to use it. This requires “work”, and people on the internet are generally not looking for work. The idea needs to be as simple as a couple of clicks to install, maybe a filter or two, and then every fake video they watch will automatically be flagged as such in their browser.


I am really not the person to be dealing with algorithms :smiley: To be honest I just had an idea, smarter people than me would need to implement any coding.

I would be a little more optimistic that people are able to adjust to this kind of new world. After all, being excellent at adjusting to new situations is one of the key distinguishing capabilities of humans. Learning to distrust video is such an adjustment.

But then people would need to trust this “consortium-non-profit-non-biased-group”, and as we can see, people are short on trust these days. This is the reason we have to give people the power to authenticate videos themselves. “I discovered the video was fake/real, so I know it’s fake/real.”

The whole point in my idea is that I am taking into account the psychology of people in general, and specifically regarding the way they interact with the internet and consume content.

I think some new unalterable gold-standard format will be put out pretty quickly, which would also have the effect of making anything of lesser quality practically unwatchable. Like the new movie Gemini Man, which I think was shot in 3D at 4K and 120 fps, and which would be difficult for the average enthusiast to alter.
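
For what it’s worth, the “unalterable” part of such a format is usually proposed as tamper evidence rather than raw quality: the capture device cryptographically seals the bytes, and any later edit breaks the seal. A minimal stdlib sketch of the idea (HMAC is used here only as a stand-in; real provenance schemes use public-key signatures so anyone can verify the footage without holding the camera’s secret):

```python
# Tamper-evidence sketch: seal footage bytes at capture time, verify later.
# Any edited byte produces a different tag, so edits are detectable —
# though this proves integrity since sealing, not that the scene was real.

import hashlib
import hmac

def seal(video_bytes, key):
    """Tag the footage at capture time (conceptually, inside the camera)."""
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def is_untampered(video_bytes, key, tag):
    """Re-derive the tag; editing any byte of the footage breaks the match."""
    return hmac.compare_digest(seal(video_bytes, key), tag)
```

Note the limitation: a seal can prove a file hasn’t been touched since it was signed, but it can’t prove the camera wasn’t pointed at a staged scene in the first place.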

It is pretty eerie, though: if something happened and was recorded with a phone, I can imagine the newscasters debating, “Tom Cruise just knocked out that poor man!” “Is this real?” “It could be a fake…” etc.

Then again, it may be wise to delay any new format, lest any footage such as this be too easily disregarded.


The issue with that is whether or not they trust a plug-in they find on the net. You more or less run into the same trust issue those people would have with a group of experts. They could easily believe a conspiracy theory that company X only created the plugin so that it would flag videos that “expose” their agenda as fake, and certify every deepfake they make themselves as real.

This issue is nothing new. My brother believes in conspiracy theories, and he doesn’t accept any perspective we show him that doesn’t align with his preformed world view. Really, it’s just a case where some folks are extreme pessimists by nature, who actively look to validate all those negative thoughts they have concerning what’s wrong with the world. That type of person really can’t be reached, and while there are a lot of them out there, the majority of people on earth are at least capable of looking at things objectively.

If there were a consortium like that, it would at least be credible enough to convince most people of what’s real or fake. Besides, this is already being worked on:

It is more or less an arms race between neural networks that fake videos, and ones that spot those fakes. In time, it might be possible to have a simple program you could run to guess if something is fake, but atm I don’t think this could work as a simple plugin or filter.


Quake in fear, my pretties.

Nothing is true, everything is permitted.

Ok, I’m done.

If these two vids are examples of ‘deep fakes’, then my question is how is anybody fooled by this? Somebody would have to be pretty gullible to believe either one of these.

Let’s walk through each one and see if we can put some perspective on them.

Let’s start with the acting: it was terrible, to be blunt; it wasn’t believable in the slightest. Not even a little. In both vids it was contrived and hollow.

The robot one was titled “New robot makes soldiers obsolete”, yet the “robot” (which, by the way, was clearly motion captured, because it moved way too much like a human, and robotics hasn’t progressed nearly that far in fluidity of motion) couldn’t shoot a human. Does anyone else see a problem with a replacement for soldiers that can’t kill the enemy?
Yet at the end we see the “robot” clearly shooting its “creators” with plastic darts. Did anyone spot the flaw there? If we’re to believe the robot couldn’t shoot a human, as stated in the three laws of robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics), which it clearly adheres to at the beginning, it wouldn’t be participating in the “comic relief” at the end, since it would have no way of knowing the difference between a real weapon and a toy one.
Then there’s the final sequence where the “robot” can’t shoot the “robot dog”. We would have to believe that the “robot” suddenly acquired not only a sense of kinship with its fellow “robot”, but also the human emotions needed to BREAK the three laws of robotics, take a shot at one of its creators, and “rescue” the dog, when it should have shot the other robot, since that would clearly not conflict with (again) the three laws of robotics. Not to mention the ability not only to exit the scene but suddenly to be nearly a couple of miles away, all in under 30 seconds.

On to the second vid. The acting, as I stated at the beginning, was terrible, and it was even worse in this vid than in the first. For starters, we would have to believe that the guy seen from behind in the too-shiny Halloween wig (the hat, I suspect, is there to hold the wig on) just happens to be Keanu Reeves, who just happens to be hanging around a convenience store. Yet he curiously doesn’t seem to be purchasing anything, not even so much as cigarettes or a snack, so why would KR go there, especially at night? Most people don’t go to convenience stores for no reason.
But this isn’t even the biggest problem; the pasted-on face is. They used better pasted-on faces in Jurassic Park. At least there they got the eye lines close, because at one point here he’s supposedly talking to the guy with the Neo picture, but the pasted face is clearly looking somewhere else.
And the whole “robbery” sequence is not only unbelievable, it’s completely absurd and obviously staged.

To be honest, anybody who could believe these, or any of the other ones without question probably needs a psychiatrist because they’ve clearly lost touch with reality.

The first one I put up as sort of a joke, because footage of real military robots already exists, and I just thought it would be funny, with the Dr. Evil bit, to try and ‘pass off’ the CG as real. The robbery is their second attempt and is much better than their first Tom Cruise one; they pretty much make MAD-magazine-style skits. Humorists.

The people I showed it to on AppleTV thought it was a Saturday Night Live skit; they didn’t really understand as I tried to explain that A.I. had replaced Keanu’s face, and luckily there was a ‘process’ video right after showing how they did it. I wouldn’t be surprised if some new laws against this spring up. Also remember that the average I.Q. range (covering roughly 68% of the population) is only 85 to 115, and I suspect the distribution leans toward the 85 end.

In the world of politics, I doubt politicians ever think of themselves as anything but the good guys. Which is just as bad if they get in the mindset that we the people don’t know what’s best for us so they need to resort to distasteful tactics in order to save us.

No sweat, we’ll just invent fakeBlock. Just like adBlock, but for fakes and super spam bots.
Websites will ask you to disable it whenever you randomly click on a search result and all.

Illegal, like impersonating a police officer… Keanu and Tom Cruise should really say, “Hey, it’s kind of funny, but I don’t like somebody making me do things… Look what I can make Keanu Reeves do!” Someone could make a video where Keanu negotiates a reunion of marriage between Brad Pitt and Angelina Jolie, or world leaders ranting about invading this or that.

WRT the OP and deepfakes, there’s adversarial AI developed to detect the fakes, but it’s of little use with respect to the average Joe, who will never bother using it.

We will have to rely on the likes of Google, Facebook, Microsoft and such to implement those AIs on their platforms; however, can we really trust them? And doesn’t this put them in even more powerful positions of authority than they already hold?

Importantly, it just means there will be an AI arms race of sorts between the faker AIs and the detector AIs.

Meaning the uncertainty remains regardless.

That’s why I believe some new, very affordable, unalterable format is about to be released; otherwise there could be rampant turmoil and mischief. This way they can say, “Anyone who believes a 2D 720-pixel mp4 video these days is a damned fool! I believe it in Real3D™, and nothing else. Any damned fool with a graphics card can use these programs on lesser formats.”

RealSoft has gotcha covered. :upside_down_face:
