One of the mini writing debates I had with myself when writing this was over this part:
“So I worry that one day I may find an image of me, animated and saying something I never said, and when I claim that I didn’t consent to this happening, someone will point to an image online that I released under some license that technically gave people permission to do something that I didn’t even realize was possible at the time that I “consented” to my photo being used.”
I was worried because none of the technology I was aware of could create video based on just one image. Usually, in order for someone to create a video of you, they had to have a video of you. So it didn’t seem exactly correct to say that the person would point to “an image” online; more likely, they would point to “a video” online. I tried rewording it, but the original phrasing was the clearest, so I kept it.
Little did I know.
Today I discovered that there is technology that can create deepfake videos from just one image. That is, any image of you found online can be used to generate videos of you saying things that you never said.
Given that we can already reproduce voices (though I’m not sure how much data that requires), I would not be surprised if, within the next five years, technology exists that, from a single video of you, could recreate you saying things you never actually said, in a scene you were never actually in, in your own voice.
We’ll just have to wait (and actively encourage lawmakers to adapt to these types of technologies) and see.