
Before it has even rolled out, a new AI system called GIDEON has generated controversy. GIDEON is designed to let law enforcement evaluate online activity at scale, tracking how early grievances escalate toward possible conflict and flagging patterns before they reach actual violence. The system is specifically aimed at preventing school shootings and similar attacks: it scans platforms such as Reddit, Discord, gaming chats, and other fringe forums, looking for patterns of grievance buildup toward violence, martyrdom language, and tactical planning to inflict harm.
GIDEON’s Role in Predicting Threats
GIDEON is billed as a preventative tool able to detect early indicators of violent behavior. Its developers describe it as using an “Israeli-grade ontology,” a framework that organizes and categorizes online language in order to expose threats. This means the platform does not simply flag keywords; it evaluates how words are used in context. For example, GIDEON could potentially distinguish a user venting rage about a topic from one whose posts suggest planning an attack, such as naming a school as a target or sharing extremist tactics. Law enforcement agencies would then receive alerts tied to those specific risks in their area.
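The article does not disclose how GIDEON is actually implemented, but the workflow it describes, contextual classification of posts rather than keyword matching, followed by alerts routed to the relevant region, can be sketched in broad strokes. The snippet below is a hypothetical illustration only: the class names, risk categories, and scoring stub are all assumptions, and a real system would rely on trained language models rather than the toy heuristic shown here.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical risk categories loosely mirroring the behaviors the article says
# GIDEON looks for; these labels are assumptions, not GIDEON's own taxonomy.
RISK_CATEGORIES = ("grievance_buildup", "martyrdom_language", "tactical_planning")


@dataclass
class Post:
    user_id: str
    region: str  # jurisdiction the poster is believed to be in
    text: str


@dataclass
class Alert:
    user_id: str
    region: str
    category: str
    score: float


def score_post(post: Post) -> List[Alert]:
    """Toy stand-in for a contextual classifier.

    A production system would use a trained model that weighs phrasing,
    specificity (e.g. naming a target), and posting history -- not the
    keyword co-occurrence heuristic below, which exists only to make the
    pipeline runnable end to end.
    """
    text = post.text.lower()
    alerts: List[Alert] = []
    # Venting alone ("I'm so angry about...") should not trigger an alert;
    # this heuristic requires grievance plus a concrete target or plan.
    has_grievance = any(w in text for w in ("angry", "revenge", "they deserve"))
    names_target = "school" in text or "classroom" in text
    has_planning = any(w in text for w in ("floor plan", "schedule", "weapon"))
    if has_grievance and names_target and has_planning:
        alerts.append(Alert(post.user_id, post.region, "tactical_planning", 0.9))
    elif has_grievance and names_target:
        alerts.append(Alert(post.user_id, post.region, "grievance_buildup", 0.6))
    return alerts


def route_alerts(posts: List[Post]) -> Dict[str, List[Alert]]:
    """Group alerts by region so each local agency sees only its own."""
    by_region: Dict[str, List[Alert]] = {}
    for post in posts:
        for alert in score_post(post):
            by_region.setdefault(alert.region, []).append(alert)
    return by_region


if __name__ == "__main__":
    sample = [
        Post("u1", "county_a", "so angry about my grades"),  # venting only
        Post("u2", "county_a", "angry at that school, got the floor plan and a weapon"),
    ]
    for region, alerts in route_alerts(sample).items():
        for a in alerts:
            print(f"[{region}] user={a.user_id} category={a.category} score={a.score}")
```

Even in this toy form, the sketch shows why context matters: the first post is pure venting and produces no alert, while the second combines grievance, a named target, and planning details, which is the kind of distinction proponents claim the real system can make.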
Proponents argue that GIDEON’s ability to detect red flags like these could save lives. They note that perpetrators of mass shootings often leave online indications of what they are capable of, even if only some of those signals rise to the level of demonstrable intent. They point to the Minneapolis case, where investigators later discovered concerning posts made by the shooter.
Ethical Dilemmas of Pre-Crime Policing
GIDEON promises greater safety, but it also raises difficult questions. At its core, the system is a form of surveillance that many consider invasive. Monitoring casual conversations in forums or gaming chats blurs the line between safety and invasion of privacy. Opponents warn that GIDEON amounts to a kind of pre-crime system, identifying and punishing people not for what they have done, but for what an algorithm predicts they may do in the future.
Another issue is predictive accuracy: GIDEON could mistake sarcasm, jokes, or emotional outbursts for threats, and innocent people could end up confronted by police. Such false positives would undermine public trust in both the police and the AI itself. The opacity of GIDEON’s “Israeli-grade ontology” also weakens accountability, since few outside the project know how accurately it assesses what constitutes a threat. In that vacuum of transparency, GIDEON could be operating on erroneous or discriminatory models that disproportionately target certain groups.
The timing of GIDEON’s launch adds to these concerns, as it was announced shortly after a tragic school shooting. Detractors argue that GIDEON is being rolled out amid a climate of public fear, which can normalize surveillance. Once a tool like GIDEON is in place, it is very difficult to dismantle. The stakes extend beyond GIDEON itself: the debate weighs civil rights against technological capability and security goals.
Conclusion
GIDEON represents a tipping point in law enforcement’s adoption of AI. It offers faster detection, real-time alerts, and perhaps even lives saved, but it comes with significant trade-offs. Constant online scraping and predictive policing come at the expense of privacy and freedom. How society responds to GIDEON’s rollout will shape future uses of AI-driven surveillance tools. A balance must be found between preventing violence and preserving liberty, or we risk living in a state where everyone is treated as a suspect. The question GIDEON poses isn’t simply about a single platform, but about what kind of digital future people are willing to live with.