We have seen plenty of examples of deepfake videos; some are convincing and some are plainly easy to spot. The AI behind the video manipulation improves all the time, and it now requires far less actual video footage to train the algorithm.
Apparently the most prolific use of deepfakes is making pornographic films with a favourite actor or actress playing the lead role. Why this should be the case baffles me, but there you go.
Anyway, the demand for AI good enough to paste 'celebrity' faces onto porn actors is effectively cross-funding the technology for other applications. Presidents, dictators, politicians, sports personalities et al. are now all fair game for deepfakes, whether for humorous or malicious purposes. However, fake news is hitting back! It has become so easy to label these deepfake videos as fake news that we can no longer easily tell what is fake and what is not. Two allegedly fake videos targeting political leaders in Gabon and Malaysia have turned up, but were they really fake? How can we tell?
Perhaps Henry Ajder, head of research analysis at Deeptrace, can help. The company's mission is to protect individuals and organisations from the damaging impacts of AI-generated synthetic media, and it owes its existence to the implications the evolving world of deepfakes holds for corporations and governments. I suppose every cloud has a silver lining.
Now, if there were such a company as Deep-find-porky ('porky' being shortened rhyming slang for 'porky pie', or lie), it would make the perfect partner for Deeptrace. We could identify fake news and deepfake videos in one go!
Or could we? There's bluff and double bluff, agents and double agents, and I'm also minded of "Quis custodiet ipsos custodes?", who watches the watchers?