Podcasts

What’s Really Happening on Fringe Networks and Why It Matters to Corporate Security

Unfortunately, in today’s threat landscape a Google search – even a few pages deep – won’t capture nearly enough information to grasp the risk a company faces. That’s why Welton Chang, Co-Founder and CEO of Pyrra Technologies, has made it his mission to go after the content on fringe sites, where threats continue to emerge from nearly unmoderated social conversations.

Pyrra Technologies is a threat intelligence company that uses AI to scan unmoderated social media. Most recently, Chang was the first Chief Technology Officer at Human Rights First, where he founded HRF’s Innovation Lab. Prior to joining HRF, he was a senior researcher at the Johns Hopkins Applied Physics Laboratory, where he led teams and developed technical solutions to address disinformation and online propaganda.

Key topics of Chang’s discussion with host Fred Burton include:

  • The inspiration behind Pyrra Technologies and Chang’s research on disinformation and online propaganda
  • The challenges emerging from generative AI that corporate security will face in 2024
  • Advice for corporate security teams on confronting the erosion of trust their audiences face amid the uptick in generative AI content online

Key takeaways:

03:09: Welton Chang: Our company started when moderation was taking place on the bigger platforms – Twitter and Facebook. It was primarily when people were being de-platformed from the larger ecosystems and seeking refuge in smaller places where they could continue to have those social conversations but face less scrutiny. Our focus as a company is on those spaces. Unfortunately, these are places online where mass shooters have previously posted their manifestos and where folks have become radicalized over time by the content. We go after the content on these places because of the threats that we’re seeing emerge out of these less moderated, almost unmoderated social conversations, and because the costs are very high when these types of criminal incidents occur at or near a company’s property.

13:33: Welton Chang: One of the biggest challenges that I see – and I’m going to put my psychology hat on for a second – is that the truth has been contested for quite some time, especially online, and social media certainly plays a role in that. But when it comes to the bigger challenges that are going to emerge from generative AI and that range of technologies, everyone’s spun up about ‘oh man, there are going to be deep fakes related to the election, and ChatGPT is going to be used to create disinformation, and it’s going to flood the zone.’ I’m not as concerned about the content itself.

What I am concerned about is the erosion of trust in actual evidence. Certainly, technology is one of the reasons why we’re encountering the problem. But really, it’s about convincing and persuading people that something real is real. In the past we would say things like, a picture is worth a thousand words.