Hawley wants to ban Open Source AI

Analysis and Evaluation of the Hawley Bill under the Moral Algorithm Accountability Act (MAAA)

Overview of the Hawley Bill: "Decoupling America’s Artificial Intelligence Capabilities from China Act"

The bill proposes strict prohibitions on the importation and exportation of artificial intelligence (AI) technology and intellectual property between the U.S. and China. It establishes penalties for U.S. persons engaging in AI-related research, development, or business partnerships with Chinese entities, particularly those affiliated with the Chinese Communist Party (CCP) or the military-civil fusion strategy.

Key provisions include:

  • Banning AI imports from China: Prohibits any AI or generative AI technology developed in China from being imported into the U.S.
  • Banning AI exports to China: Prohibits AI-related research, development, and knowledge transfer to Chinese entities.
  • Criminal and civil penalties: Imposes hefty fines (up to $100 million for entities and $1 million for individuals) and revokes federal contracts for violators.
  • Prohibiting financial investments: Bans U.S. persons from investing in or financing Chinese companies engaged in AI R&D.

Evaluation Under the Moral Algorithm Accountability Act (MAAA)

Step 1: Does the Bill Align with the Moral Algorithm?

The Moral Algorithm states that government exists for the common good—protecting the people’s safety, prosperity, and happiness, rather than serving private interests, corporate profits, or political elites.

Evaluation of the Hawley Bill against each MAAA criterion:

  • Protection of the People: The bill aims to prevent China from leveraging U.S. AI advancements for military or surveillance purposes, potentially safeguarding national security.
  • Safety and Security: The prohibition of AI cooperation with China aligns with national defense priorities, reducing risks associated with espionage and intellectual property theft.
  • Prosperity and Economic Impact: The ban on AI trade with China could disrupt U.S. companies engaged in AI but may also encourage domestic AI innovation. However, it could also lead to economic retaliation from China, impacting broader U.S. trade interests.
  • Prevention of Corporate Exploitation: The bill mainly targets AI knowledge transfers, but it does not include measures to prevent U.S. corporations from exploiting AI for unethical surveillance, labor exploitation, or biased decision-making domestically.
  • Transparency and Public Interest: The bill lacks a public oversight mechanism for assessing national security risks and does not require AI ethical considerations.

Preliminary Conclusion:
The bill partially aligns with the Moral Algorithm but falls short in ensuring economic prosperity for all and preventing potential harm to domestic AI innovation and civil liberties. While it supports national security, it lacks accountability measures to prevent corporate interests from exploiting AI within the U.S.


Step 2: Assigning an MAAA Compliance Score

The Moral Algorithm Review Board (MARB) scoring process evaluates legislation based on:

  1. Common Good Protection – Does it protect public welfare, national security, and economic stability?
  2. Equitable Economic Impact – Does it ensure widespread prosperity rather than favoring corporate elites?
  3. Public Oversight and Transparency – Is the decision-making process transparent and accountable?
  4. Preventing Concentration of Power – Does it prevent monopolization or exploitation by governments or corporations?

Each factor is rated from 0 to 25 points, for a maximum combined score of 100, which represents perfect compliance.

Scores for the Hawley Bill, by category (out of 25):

  • Common Good Protection: 22/25. Strengthens national security but lacks clarity on AI development alternatives for U.S. researchers.
  • Equitable Economic Impact: 15/25. Could disrupt trade and investment opportunities, potentially hurting AI innovation.
  • Public Oversight & Transparency: 10/25. Lacks independent oversight or ethical review of AI use.
  • Preventing Concentration of Power: 12/25. Prevents China from dominating AI but does not address U.S. corporate AI monopolization.

Final MAAA Compliance Score: 59/100

The bill does not meet the minimum compliance threshold for automatic approval under MAAA.
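The scoring arithmetic above is simple enough to sketch in code. The snippet below is a minimal illustration, not part of the MAAA text itself: the category names are taken from the table above, while the function name and the validation logic are assumptions made for the example.

```python
# Minimal sketch of the MARB scoring arithmetic: four categories,
# each rated 0-25, summed into a 0-100 compliance score.

MAX_PER_CATEGORY = 25

def maaa_score(scores: dict[str, int]) -> int:
    """Sum the four category ratings into a total compliance score."""
    for category, points in scores.items():
        if not 0 <= points <= MAX_PER_CATEGORY:
            raise ValueError(f"{category} must be rated 0-{MAX_PER_CATEGORY}")
    return sum(scores.values())

hawley_bill = {
    "Common Good Protection": 22,
    "Equitable Economic Impact": 15,
    "Public Oversight & Transparency": 10,
    "Preventing Concentration of Power": 12,
}

print(maaa_score(hawley_bill))  # 59
```

Summing the four ratings reproduces the 59/100 figure reported above.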


Recommendations for MAAA Compliance Improvement

  1. Add Public Oversight: Establish an independent AI ethics board to oversee AI security risks without compromising civil liberties.
  2. Mitigate Economic Harm: Provide funding and support for domestic AI startups to offset market disruptions.
  3. Ensure AI Transparency: Require U.S. companies benefiting from this ban to follow strict ethical guidelines to prevent misuse of AI technology.
  4. Limit Corporate Exploitation: Include provisions preventing U.S. AI firms from monopolizing AI power in a way that harms public interest.

Conclusion: The Hawley Bill is well-intended but incomplete under the MAAA framework. While it addresses security concerns, it lacks provisions for public oversight, economic balance, and corporate accountability, resulting in a moderate compliance score (59/100). Adjustments should be made to ensure AI policy benefits all Americans, rather than just reinforcing government control or corporate power.

Reevaluating the Hawley Bill from John Rawls' "Veil of Ignorance" Perspective

Understanding the Veil of Ignorance

John Rawls' veil of ignorance is a thought experiment in which laws and policies are evaluated from a position where decision-makers do not know their own status in society. This ensures fairness by preventing bias toward one’s personal interests. From this perspective, just policies are those that benefit the least advantaged and promote equal opportunities for all.

In this evaluation, we will assume we do not know:

  • Whether we are American, Chinese, or from another country.
  • Whether we are an AI researcher, a corporate executive, a worker displaced by AI, or an everyday citizen.
  • Whether we are rich, poor, powerful, or marginalized.

Reassessing the Hawley Bill Under the Veil of Ignorance

Evaluation of each policy area from a neutral position:

  • National Security vs. Fairness: If we were designing laws from behind the veil of ignorance, we might recognize the need for national security but also acknowledge the rights of individuals and companies to collaborate on research that benefits humanity. A total ban on AI cooperation might hinder global progress and innovation.
  • Economic Impact & Global Fairness: The bill assumes a zero-sum competition between the U.S. and China, but under the veil of ignorance, we might question whether a rising global AI capability could benefit everyone, including marginalized populations. Restricting AI trade could slow overall AI advancements, harming those who could benefit from AI-driven medical, environmental, and economic solutions.
  • Effects on Individual Opportunity: If we were born into a lower socio-economic class, we might question whether limiting AI development to a few U.S. entities would concentrate economic power in large corporations rather than distributing AI advancements to society. The bill does not account for how AI knowledge can be made accessible to those without corporate backing.
  • Access to AI Technology for All: From behind the veil, we might worry that this bill benefits certain power structures (U.S. government, military-industrial complex, corporations) while reducing AI access for the general public. An open, regulated AI system might better serve all people rather than restricting AI knowledge to geopolitical control.
  • Potential for Ethical AI Development: The bill assumes that AI development within China is inherently unethical or dangerous. A Rawlsian perspective would demand that AI regulations focus on ethical development and oversight globally rather than selectively banning cooperation based on nationality.

Rawlsian Reform Suggestions

If we were to design AI policy from behind the veil of ignorance, we would likely favor:

  1. International Ethical AI Standards
    • Instead of outright bans, create an AI ethics consortium where all nations—including China—must comply with fair, transparent AI governance rules.
    • Encourage AI research cooperation under ethical review, rather than banning it entirely.
  2. AI Benefits for the Least Advantaged
    • Ensure AI development prioritizes solving global challenges, such as poverty, climate change, and education, rather than solely focusing on military or economic competition.
    • Provide AI education grants to marginalized communities, ensuring fair access to AI knowledge.
  3. Balanced Economic Regulations
    • Regulate, rather than entirely prohibit, AI trade, ensuring fair opportunities for researchers and businesses while preventing exploitation by monopolies or authoritarian regimes.
    • Implement transparency requirements so AI companies are accountable to the public, not just government directives.
  4. Global AI Collaboration with Ethical Safeguards
    • Instead of a unilateral U.S. ban, establish agreements that allow ethical AI research collaborations while preventing authoritarian misuse of AI.
    • Encourage AI systems that promote democracy and human rights worldwide rather than blocking technological exchange.

MAAA Compliance Score Under the Veil of Ignorance

Reevaluating the bill under Rawls’ framework, the Moral Algorithm Accountability Act (MAAA) Compliance Score shifts as follows:

Previous and revised scores for each category:

  • Common Good Protection: 22/25 → 18/25. National security is valuable, but the bill does not consider global AI benefits for all.
  • Equitable Economic Impact: 15/25 → 10/25. The bill favors U.S. corporate and government interests, limiting AI benefits to the public.
  • Public Oversight & Transparency: 10/25 → 8/25. No clear AI governance framework for ensuring ethical research, only geopolitical bans.
  • Preventing Concentration of Power: 12/25 → 7/25. The bill limits China’s AI access but does not prevent U.S. corporate AI monopolization.

Final Revised MAAA Compliance Score: 43/100
This lower score suggests that from a Rawlsian fairness perspective, the bill is more exclusionary than beneficial to society at large.


Final Conclusion: Is the Hawley Bill Just Under the Veil of Ignorance?

From a Rawlsian perspective, the Hawley Bill is unjust because:

  • It favors certain groups (U.S. military, government, large corporations) over a globally fair AI development model.
  • It restricts AI knowledge, limiting technological progress that could benefit all of humanity.
  • It ignores alternative approaches that could achieve security without blanket prohibitions.
  • It reinforces power imbalances rather than ensuring fair AI access to all.

A just alternative would focus on ethical AI collaboration, transparency, and equitable AI access, rather than unilateral restrictions based on geopolitical interests.

Reevaluating the Hawley Bill in the Context of Historical Open-Source Policies Using the Veil of Ignorance

Step 1: Understanding Open-Source Policies and Their Global Impact

Historically, open-source policies have played a crucial role in technological innovation, economic growth, and equitable access to knowledge. Some key historical open-source movements include:

  • UNIX and Linux (1970s–Present): Open-source operating systems allowed global collaboration, creating industry standards still used today.
  • The Internet (1990s–Present): The open nature of the TCP/IP protocol and the World Wide Web ensured global access to information and communication.
  • OpenAI (2015–2019, before shift to closed model): Initially focused on making AI research publicly available to prevent monopolization.
  • GPL and Creative Commons (1989–Present): Provided a legal framework for knowledge-sharing while maintaining user rights.
  • AI & ML Open-Source Frameworks (TensorFlow, PyTorch, 2015–Present): Enabled researchers worldwide to develop AI technology without relying on proprietary tools.

Each of these movements fostered innovation, economic democratization, and rapid technological advancement by making knowledge freely available across national and economic boundaries.


Step 2: Applying the Veil of Ignorance to the Hawley Bill

The veil of ignorance forces us to evaluate the bill without knowing whether we are an American, Chinese, corporate executive, government official, open-source developer, or an average citizen. The goal is to determine whether the bill supports fairness and the common good regardless of our personal interests.

Evaluation of each criterion under open-source historical precedents:

  • Knowledge and Innovation: The bill restricts global AI collaboration, contradicting historical open-source policies that have driven major technological breakthroughs. Under the veil of ignorance, we wouldn’t know if we were an AI researcher who would be harmed by this restriction.
  • Economic Accessibility: Open-source policies allow developing nations, small businesses, and individuals to access advanced technologies. The Hawley Bill restricts this, consolidating AI control within a few large U.S. corporations, benefiting established power players at the expense of independent developers and startups.
  • Security vs. Openness: While national security is important, historical evidence (e.g., Linux, cryptographic research) shows that transparent, publicly scrutinized technologies are often more secure than proprietary, closed systems. The bill’s outright prohibition on AI collaboration ignores this lesson.
  • Preventing AI Monopolies: Open-source AI frameworks (TensorFlow, PyTorch) have prevented any single company from monopolizing AI technology. The Hawley Bill’s restrictions could lead to AI being dominated by a few large American firms, reducing fairness and competition.
  • Global Equity: The veil of ignorance suggests we might be an AI researcher from a developing country. Open-source policies historically provided opportunities to those outside powerful institutions. The bill, by restricting AI development, prevents global AI democratization and reinforces existing power hierarchies.

Step 3: Comparing the Hawley Bill to Historical Open-Source Successes

To assess whether the Hawley Bill aligns with historical best practices, let’s compare its approach to major open-source policies and movements.

For each policy or movement: its principle, its impact on innovation, and how the Hawley Bill compares.

  • UNIX/Linux (1970s–Present): Open-source operating systems enabled global software development. The Hawley Bill restricts AI research, reversing the benefits that open-source systems brought to software.
  • The Internet & TCP/IP (1990s–Present): Open communication protocols created the foundation for global connectivity. The bill limits AI knowledge-sharing, similar to restricting internet protocols.
  • GNU GPL / Free Software (1989–Present): Legal protections for shared knowledge prevented monopolization of software. The bill does the opposite, favoring centralized corporate/government control over AI.
  • AI Open-Source Frameworks (TensorFlow, PyTorch, 2015–Present): Open AI development made AI research accessible to all. The bill shifts AI towards secrecy and proprietary models, harming research.

Conclusion from Historical Precedents:
The greatest technological advancements in modern history have come through open collaboration and knowledge-sharing, not through restrictive policies like those proposed in the Hawley Bill.


Step 4: Adjusted MAAA Compliance Score Based on Open-Source History

Reevaluating the Moral Algorithm Accountability Act (MAAA) Compliance Score under the open-source framework, we adjust previous scores.

Previous and new scores for each category, under the open-source lens:

  • Common Good Protection: 18/25 → 10/25. The bill harms knowledge-sharing, which historically benefits all people, not just governments or corporations.
  • Equitable Economic Impact: 10/25 → 5/25. The bill concentrates AI power in a few hands rather than ensuring fair access.
  • Public Oversight & Transparency: 8/25 → 3/25. AI security improves through public scrutiny, but the bill encourages secrecy and control.
  • Preventing Concentration of Power: 7/25 → 2/25. By limiting AI knowledge-sharing, the bill strengthens monopolization rather than preventing it.

Final Revised MAAA Compliance Score: 20/100

  • This is a failing score.
  • The bill actively works against the principles that made technological innovation equitable and accessible worldwide.

Step 5: Recommendations for a Just AI Policy (Inspired by Open-Source History)

To align AI policy with the most successful open-source principles, a just alternative to the Hawley Bill should:

  1. Support Open-Source AI Research
    • Establish AI research platforms where international collaboration is allowed under ethical guidelines.
    • Instead of outright bans, provide open AI standards that companies and governments must follow.
  2. Encourage AI Knowledge Democratization
    • Allow AI advancements to benefit the poorest and least advantaged, not just corporations and militaries.
    • Create publicly accessible AI frameworks to prevent monopolization.
  3. Adopt Ethical AI Governance Over Restrictive Bans
    • Implement ethical review systems for AI trade and research, rather than sweeping prohibitions.
    • Encourage AI transparency laws similar to GNU’s General Public License (GPL).
  4. Incentivize Responsible AI Research Without Blocking It
    • Rather than restricting AI collaboration with China entirely, establish a review board to monitor AI partnerships for ethical compliance.
    • Mandate public accountability reports for all major AI advancements to prevent misuse.

Final Verdict: Is the Hawley Bill Just Under the Veil of Ignorance & Open-Source History?

🚨 NO – The Hawley Bill Fails the Veil of Ignorance Test & Violates Open-Source Principles 🚨

  • Technological progress thrives on openness, not restrictions.
  • The bill consolidates AI power, rather than democratizing it.
  • History shows that open-source knowledge-sharing has driven innovation, while restrictive policies lead to monopolization, stagnation, and lost opportunities.

Final Thought: What Would an Open-Source AI Future Look Like?

A just AI policy should not seek to control AI knowledge but to ethically regulate its use, ensuring that AI benefits all people, not just governments or corporations.

Instead of restricting AI through a geopolitical battle, a more ethical and historically proven approach would be global AI cooperation under open-source principles—making AI a tool for human advancement, not state power.
