Mitigating AI Bias



Sound off on AI: More is better…right?  

More is not necessarily better. Imagine connecting an electronic drum set to an amp—you may be pleased with the amplified sound, but your neighbors may have a completely different perspective. Not only can the noise be unpleasant for the neighbors; pushed too far, the sound becomes distorted and broken, and can ultimately lead to equipment failure.

Or consider playing acoustic drums. New drummers sometimes hit the drums harder thinking it will deliver more sound—instead they are setting themselves up for broken drumsticks. In both cases, more leads to a lack of control and inadequate output.

In the OSINT community, a shift in technology has amplified the speed and capacity at which information can be gathered: artificial intelligence (AI). Unlike its human operator counterparts, AI can gather OSINT 24×7 without stopping to sleep or take a coffee break.

Is more, better? 

Sometimes. And sometimes not.  

Yes, in theory, accessing more sites, using more AI tools, and sifting through countless data points quickly sounds promising. But just like amplifying sound—without proper controls, AI-fueled OSINT can lead to distorted results and inadvertently amplify technical and AI bias in the process.  

What is AI bias?  

AI bias describes how AI-based online data collection and analysis systems can reflect societal biases related to race, gender, age, and culture. AI systems use algorithms to gather, analyze, rank, and interpret online data. However, humans write those algorithms, so behind every algorithm may lie one or more cognitive or other biases. Although unintentional, those biases can transfer into the results of collected data. For this reason, only trusted AI applications should be integrated into OSINT workflows.
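As a simplified illustration of how an author's assumptions transfer into results, consider a small hand-written relevance scorer for collected posts. The source names, keyword list, and weights below are entirely hypothetical, not taken from any real OSINT tool—the point is only that the weights encode the author's judgment, and any skew in that judgment is silently reproduced in the ranked output.

```python
# Hypothetical sketch: a hand-written relevance scorer for collected posts.
# The weights reflect the author's assumptions about which sources matter;
# any skew in those assumptions is transferred to the ranked output.

SOURCE_WEIGHTS = {
    "english_news": 1.0,  # author assumed English-language outlets are most reliable
    "local_forum": 0.4,   # regional, non-English sources quietly down-weighted
}

def score(post: dict) -> float:
    """Rank a collected post by keyword hits, scaled by its source weight."""
    keyword_hits = sum(post["text"].lower().count(k) for k in ("breach", "leak"))
    return keyword_hits * SOURCE_WEIGHTS.get(post["source"], 0.1)

posts = [
    {"source": "english_news", "text": "Minor leak reported."},
    {"source": "local_forum", "text": "Major breach and data leak confirmed."},
]

# The forum post mentions both keywords, yet the source weighting
# pushes it below the news item in the final ranking.
ranked = sorted(posts, key=score, reverse=True)
```

Nothing here is malicious—the author simply baked an assumption about reliability into a constant, and every downstream consumer of `ranked` inherits it without ever seeing the weight table.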

What impact will AI bias have on OSINT research? 

AI is quickly becoming infused into almost every area of our society—from healthcare to hiring, from customer support to criminal investigations—and the national security community is no exception.

AI biases can drastically affect the admissibility, effectiveness, fairness, and even legality of OSINT research results. They can trigger ethical issues, privacy issues, and more. Thus, the more reliance placed on AI throughout the OSINT cycle, the more consideration should be given to AI bias at every stage of the workflow. Mitigating bias must begin as soon as you open your browser.

The Benefits are Here to Stay: How to Mitigate AI Bias  

Let’s face it, AI-driven OSINT is here to stay, and the positives are plentiful: the tools can sift through millions of lines of online data without stopping to sleep or take a vacation like humans do. So what’s the solution to avoiding AI bias?

With the rapidly increasing general availability of AI OSINT tools, it’s easy to look past the risks to take advantage of accessing more data. Many proprietary algorithms are opaque, leaving you in awe of what they do yet uncertain of how they work. This lack of transparency invites risks like mission activity exposure, adversarial tracking, and data leakage.

To ensure the security and effectiveness of AI algorithms, it’s critical to work with trusted providers. The human side of operational risks, including bias, exists throughout the OSINT cycle, whether data collection and analysis are managed by AI algorithms or manual workflows.

The key to risk reduction is adopting a holistic platform that prioritizes digital signature management, sophisticated AI capabilities, and auditing and oversight. Adding more—speed and capacity—to data collection does not proportionally add control and quality to the output. Work with trusted providers (did we mention that already?) to reap the benefits of AI while still protecting your organization, mission, and operators.

Auditing and Oversight 

Maintaining operational awareness at all times can be the difference between mission success and failure. Ntrepid offers an integrated auditing and oversight toolset that provides the means to assess and rectify harmful user behavior and safeguard your mission. Administrators can view which websites are visited, how many, and how often. They can export screenshots of webpages via Safehold along with all activity metrics, and can view and download automatic screen captures in the Browser History Data Report. These capabilities are supported by the Admin Tool, Insight, and the Video application.

So, more can be better when methodically integrated into your OSINT workflow. Just as AI can amplify the speed and capacity of data collection, it can also distort your results—introducing operational risk, loss of control, and AI bias—if not given careful consideration. Balance is everything. We can help you achieve it.