Podcast: Play in new window | Download (Duration: 19:31 — 26.8MB)
In this episode of CHATTINN CYBER, Marc Schein interviews Paul Christopher, Senior Social Scientist at the RAND Corporation, where he serves as the principal investigator for various defense- and security-related research projects. In today’s conversation, Paul focuses on AI, the growing need to introduce and strengthen AI-related cybersecurity, and how information technology protection must keep advancing over time.
Paul begins the conversation by discussing cognitive security, the concept of protecting the safety of ideas and thought processes. From a national perspective, it means protecting citizens from foreign interference in their right to think and to participate in national politics. It is an old concept, rooted in the idea of war as a contest of wills and as politics by other means.
Further into the conversation, he discusses how AI is changing propaganda by enabling automated amplification through bots. As AI becomes more sophisticated, the danger of its use for propaganda grows. One example involves a GAN, a generative adversarial network, in which one AI generates messages while a second AI learns to detect them. In an unethical application, the detecting AI could be removed and the generated messages aimed directly at real people. Countries are spending more money on propaganda, yet it remains far cheaper than traditional military capabilities. The effectiveness of propaganda is difficult to measure, but an integrated physical and informational campaign, as seen in the 2014 Russian annexation of Crimea, can be highly effective.
Paul and Marc also distinguish deep fakes, fabricated videos created with AI, from shallow fakes, videos manipulated with simpler conventional editing, and note that both can be effective at deceiving people. They also discuss counter propaganda: countering the effects of propaganda with counter messaging or a counter narrative.
Towards the close of the conversation, Paul highlights human vulnerability to misinformation and disinformation and stresses that everyone should remember we are cognitively limited. Humans often think fast and rely on heuristics, which makes us more susceptible to being tricked, manipulated, or deceived. He also mentions blind spot bias, the tendency to see such vulnerabilities in others but not in ourselves. He advises people to be aware of these vulnerabilities, not to believe everything they see, to improve their media literacy, and to use tools that screen disinformation or at least display warnings when a source is not credible.
“If you’re countering propaganda, either you’re counter messaging or doing a counter narrative, where you’re trying to claim the opposite of whatever the propaganda is, or overwhelm it with the truth or counteract it. Which unfortunately, the research in social psychology suggests isn’t very effective, because the first mover advantage is hugely important.”
“There are things that the government can do to pass laws and regulations to make foreign propaganda, either require labels or to be illegal so that you can then indict foreign propagandists and affect them.”
“There’s this thing called Blind Spot bias, where we’re willing to see these vulnerabilities in others but we imagine that we ourselves are special or magical or invulnerable.”
[00:14] – How Paul ended up becoming a senior social scientist at the RAND Corporation
[01:35] – What is cognitive security?
[04:15] – Are countries spending money on propaganda campaigns?
[06:26] – Distinguishing deep fakes and shallow fakes
[12:21] – Understanding counter propaganda and the ways to curb it
[17:24] – Final thoughts
Connect with Paul: