I Watched 6 Hours of DOGE Bro Testimony. Here’s What They Had to Say For Themselves
Over six hours of deposition, Justin Fox, a former investment banker and now a controversial figure in the world of government reform, revealed a tangled web of contradictions, denials, and startling admissions. From vague definitions of his team's guiding principles to the invasive use of AI-driven searches targeting specific demographics, Fox's testimony provided an unfiltered look into the operational ethos of DOGE, a group whose actions have reportedly left significant scars on the U.S. government and beyond.
Testimony Highlights: Evasive Answers and Uncomfortable Acknowledgements
Fox’s deposition, released as part of a legal case involving educational and historical organizations, painted a remarkable, if troubling, picture of DOGE’s decision-making processes. One of the most striking moments came when Fox was asked to define Diversity, Equity, and Inclusion (DEI), a term central to many government programs under scrutiny. Instead of providing a clear response, Fox deflected, claiming he couldn’t articulate a precise definition.
Equally jarring were his methods for evaluating government contracts. Fox admitted to using ChatGPT, an AI tool, to search for terms like "Black" and "homosexual," while notably omitting equivalent terms pertaining to other groups, such as "white" or "Caucasian." According to Fox, these targeted searches were meant to identify "problematic" contracts, but the selective nature of the queries raises significant ethical questions about both motive and methodology.

In one particularly contentious moment, Fox dismissed a grant’s purpose as “not for the benefit of humankind,” a statement he later walked back after further questioning. Observers have pointed to this exchange as emblematic of the recklessness often attributed to DOGE: a group of young, relatively inexperienced individuals making high-stakes decisions with far-reaching consequences.
The Broader Impact of DOGE’s Policies
The fallout from DOGE's actions is significant, both in scale and scope. According to records, DOGE-related cuts have been linked to approximately 300,000 deaths, a staggering figure that underscores the human cost of bureaucratic streamlining undertaken without sufficient oversight or expertise. As reported in McSweeney's, DOGE's reforms also exacerbated inefficiencies across multiple government agencies, leading to data breaches, operational bottlenecks, and, most notably, a failure to deliver on the promised goal of reducing the federal deficit.
Critics argue that DOGE’s focus on slashing costs came at the expense of nuanced understanding and responsible governance. Educational grants, public health initiatives, and academic programs were often targeted indiscriminately. “It’s clear that financial prudence took precedence over meaningful analysis,” an analyst familiar with the case remarked. “The ripple effects are still being felt today.”

Those effects include a chill on future grant-making. According to experts in the field, the fear of scrutiny or intervention has led some organizations to self-censor, limiting the scope of their proposals and, in turn, curbing innovation in sectors like education, healthcare, and technology.
The Ethical Dilemma of AI in Governance
One of the most contentious revelations from Fox’s testimony was his use of ChatGPT to vet government contracts. While AI is becoming increasingly integrated into public and private sectors alike, its application in this context highlights a dangerous intersection of technology and ethics.
Fox's choice to search exclusively for terms like "Black" and "homosexual" drew sharp criticism for its apparent bias. Industry observers argue that this approach not only undermines equity but also weaponizes AI tools to further contentious political agendas. "AI is only as impartial as the person using it," a technology ethicist noted. "In this case, it's clear the tool was applied in a way that reflects pre-existing biases rather than neutral analysis."
The admission brings to light broader questions about the role of AI in governance. While automation and machine learning can offer efficiency and precision, they also risk amplifying the blind spots or prejudices of their human handlers. For decision-making bodies like DOGE, the integration of AI without rigorous checks and balances is a recipe for disaster.
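The structural problem with a one-sided search is easy to demonstrate. The toy sketch below is not DOGE's actual tooling (which, per the testimony, ran through ChatGPT); it simply shows how a keyword filter that checks terms for some demographics but not their counterparts can only ever surface one set of programs for review. All names and sample text here are illustrative assumptions.

```python
# Toy illustration of an asymmetric keyword filter.
# Terms reportedly searched for versus equivalents reportedly omitted:
FLAGGED_TERMS = ["black", "homosexual"]
OMITTED_TERMS = ["white", "caucasian"]  # never checked, so never flagged

def flag_contract(description: str) -> bool:
    """Return True if the description mentions any flagged term."""
    text = description.lower()
    return any(term in text for term in FLAGGED_TERMS)

# Two hypothetical contract descriptions, each naming a demographic:
contracts = [
    "Grant supporting Black entrepreneurship programs",
    "Grant supporting white rural entrepreneurship programs",
]

flags = [flag_contract(c) for c in contracts]
print(flags)  # only the first contract is flagged
```

However neutral the search engine itself may be, the choice of query terms bakes the reviewer's priors into every result, which is precisely the bias critics identified in the testimony.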
Fractured Leadership and Public Distrust
Beyond the specific actions of Justin Fox, the deposition offers a window into the chaotic and disjointed nature of DOGE as an organization. Analysts have likened its leadership to a start-up lacking the necessary maturity and institutional knowledge to make sound policy decisions.
“What we’re seeing here is not just a failure of process, but also a failure of leadership,” said one political analyst. “DOGE was spearheaded by individuals with limited experience in public administration. Their decisions were often guided more by ideological fervor than by practical realities.”
The result? An erosion of public trust. With allegations of misconduct, data breaches, and inflated promises, DOGE has become a lightning rod for criticism. Moving forward, this lack of faith in government oversight bodies risks undermining collaboration between public institutions and private actors, including contractors and grant recipients.

What to Watch For Next
As the litigation surrounding DOGE continues to unfold, the testimony of Justin Fox and others will undoubtedly remain under the microscope. The legal battles may pave the way for new policies dictating how oversight agencies operate, particularly in areas like grant evaluations and the use of AI tools.
On a broader scale, the DOGE debacle has highlighted the urgent need for transparency and accountability within government entities. Will this lead to reforms that restore public trust? Or will it deepen the already significant divide between policymakers and the citizens they serve?
For now, one thing is clear: the actions of DOGE will serve as a case study in the dangers of inexperience, bias, and the unchecked application of technology in governance. As the dust settles, the hope is that lessons learned from this chapter will lead to a more thoughtful and equitable approach to public administration.