The ActiveFence Engineering Blog
How We Built a Deep Learning Model for Symbol Recognition with Limited Labeled Data
Online platforms are updated rapidly with petabytes of user-generated content. Hidden within this data is harmful and malicious content. Learn how subject-matter expertise and AI can detect such content and keep users safe.

Racism, Hate, and Weddings – 3 Ways to Avoid Biased Data
Working with biased data presents many problems. Here, we share how to diagnose, handle, and reduce bias in your data.

Constructing and Querying a Data Model for Online Harm – Part 1
Online content, media creators, and user interactions must be captured to build algorithms that detect harmful activity. Here, we discuss the data model that captures the complexities of this online ecosystem and how we use it to detect malicious activity at scale.

Constructing and Querying a Data Model for Online Harm – Part 2
In this second part, we continue our discussion of the data model that captures the complexities of the online ecosystem and how we use it to detect malicious activity at scale.

The Metaverse: Helping Platforms Keep Us Safe in New Digital Territory
The metaverse presents a unique opportunity to foster community and drive innovation online. However, it also opens the door to a greater risk of harm that is more difficult to detect and proactively combat. With contextual AI, safety in the metaverse stands a chance.

ActiveFence R&D on Practical AI Podcast
Listen to how we build AI that solves the complex challenges of content moderation in this podcast episode with Matar Haller, Daniel Whitenack, and Chris Benson.

ActiveFence R&D at PyData Conference
Check out “Constructing and Querying Data Models to Detect Online Harm,” presented by our data team.
