Using the Moral Algorithm tool
Learn how to use the Moral Algorithm tool to explore any issue with open, unbiased, and informed AI conversations using simple steps and linked context.

This article walks through how to use The Moral Algorithm tool to open up and examine any issue in an open, unbiased way.
The issue
Step one
Copy the text, or upload a PDF ("Print to PDF" is an easy way to create one), into the Moral Algorithm tool. Then add any contextual documents to the resulting AI conversation, such as https://themoralalgorithm.com/rebuilding-the-economy/ and https://themoralalgorithm.com/the-broken-economy-and-the-solutions/. The results can be used to seed any discussion.
Step two
Ask the AI to discuss the issue with you, treating the first document as the issue and the others as contextual data. A question like this works:
use this document as the main issue we are discussing and the other two as the contextual data and understanding
Here is what I ended up with:
Thank you — now that the Texas App Store Age Verification Law is established as the central issue, with Rebuilding the Economy and The Broken Economy and the Solutions providing context, we can engage in a full-spectrum ethical and policy analysis.
🧭 Summary of the Issue:
Texas’ new law requires Apple and Google to verify the age of all app store users, requiring parental consent for those under 18. It aims to support child safety initiatives like KOSA, but raises serious concerns:
- Massive data collection risk ("Texas-sized honeypot")
- Technical infeasibility of reliable age verification
- Lack of enforcement clarity
- Burden shifted to private companies
- Privacy vs. protection trade-offs
🎓 Ethical Evaluation: Rawls’ Veil of Ignorance
If you didn’t know your identity (age, digital literacy, social class, etc.)…
| Scenario | Ethical Impact |
|---|---|
| You're a 13-year-old in a rural town | You may benefit from protection, but risk losing access to basic digital services |
| You're a poor family | You may lack government ID, making verification impossible |
| You're a privacy-conscious adult | You must submit personal data despite no risk to children |
| You're a tech worker | You must comply with a poorly defined law under extreme timeline pressure |
Outcome: No one, regardless of their position behind the veil, would choose a universal identification dragnet — especially when the cost is borne by the individual while benefit remains vague.
⚖️ Ethical Evaluation: John Adams’ Moral Algorithm
“Government is instituted for the common good... not the private interest of any one man, family, or class of men.”
| Test | Verdict |
|---|---|
| Protection | Child safety is a legitimate goal ✅ |
| Safety | Centralized ID honeypots increase risk ❌ |
| Prosperity | Undermines tech access and trust ❌ |
| Happiness | Overreaches into private life ❌ |
| Private Interest Dominance | Disguises political optics as child safety ❌ |
This law fails Adams' test. It serves ideological positioning and performative politics, not the common good.
📚 Contextual Insights from Economic Documents
Both supporting documents emphasize:
1. Trust in Public Institutions
Postal banking and public UBI models work only because they’re designed with transparency, accessibility, and trust. The Texas law ignores these principles:
- No public infrastructure for ID security
- No process transparency
- No trust-building mechanisms
2. Digital and Economic Equity
- Rebuilding the Economy emphasizes access to digital services as a public good.
- This law restricts access to critical platforms for low-income and under-documented youth, exacerbating inequality.
- Instead of building public guardianship, it outsources responsibility to private corporations with no electoral accountability.
3. Long-Term vs. Performative Governance
- The Norway example in the Texas PDF shows long-term investment success via public infrastructure.
- Texas’ law is reactionary, not strategic — designed for a news cycle, not public benefit.
🔄 Solution-Oriented Discussion
✅ Better Alternatives (aligned with Rawls, Adams, and economic justice):
- Opt-in Digital Guardianship
- Parents set app access on family accounts (already possible on iOS/Android)
- Age-based restrictions tied to device-level settings
- No ID uploads required
- Strengthen Content Regulation, Not User Verification
- Use ESRB-style app labeling and AI auditing tools to assess app risk
- Hold app publishers accountable for mislabeling
- Federated Parental Controls
- Let schools or community centers offer verified guardianship services without storing national ID data
- Treat digital safety as a civic infrastructure project, like library access
- Privacy-Preserving Tech (ZK Age Verification)
- Use cryptographic proofs of age (e.g. zero-knowledge proofs) without identity disclosure
- Public Tech Literacy Campaigns
- Fund educational initiatives to empower both parents and teens in managing app use
- Integrate civic digital responsibility into school curricula
- National or Open-Source Age Verification Layer (if needed at all)
- If truly necessary, create a public, encrypted, non-profit verification service
- One-time age check stored only as “✅ over 18” token, not ID
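To make the "over 18 token" idea concrete, here is a minimal Python sketch of a one-time age check that issues a signed token instead of storing any identity data. This is an illustration of the token concept only, not a real zero-knowledge proof (a production ZK scheme would use dedicated cryptographic protocols); the issuer service and all names here are hypothetical.

```python
import hmac
import hashlib
import secrets
from datetime import date
from typing import Optional

# Hypothetical signing key; in practice this would be held by a public,
# non-profit verification service (an assumption, not an existing system).
ISSUER_KEY = secrets.token_bytes(32)

def issue_over18_token(birth_date: date, today: date) -> Optional[str]:
    """One-time age check: returns a signed 'over 18' token, or None.
    The birth date is checked and then discarded -- never stored."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    if age < 18:
        return None
    nonce = secrets.token_hex(16)  # fresh per token, so tokens aren't linkable
    sig = hmac.new(ISSUER_KEY, f"over18:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"

def verify_token(token: str) -> bool:
    """The app store checks the signature; it learns only 'over 18'."""
    nonce, sig = token.split(":")
    expected = hmac.new(ISSUER_KEY, f"over18:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The design point: the platform verifying the token never sees an ID, a birth date, or a name, only a signed yes/no attestation, which is the property the "✅ over 18" bullet above is asking for.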
💬 Final Framing
🧠 “If money must move to build society, so must trust. This law chokes both.”
Texas' age verification law violates privacy, centralizes risk, and overburdens users under the false flag of child safety. A system designed with the Veil of Ignorance and Adams’ Moral Algorithm would:
- Target risk without mass surveillance
- Strengthen communities and parents rather than corporate enforcement
- Protect digital dignity as much as digital childhood
🛠 Would You Like Me To:
- Create an infographic or one-page summary for public awareness?
- Draft a model bill that balances child safety and digital rights?
- Prepare a debate script or public hearing testimony?
- Simulate opposing arguments and write structured rebuttals?
Let me know how you'd like to act on this.
Summary
If more people would do a bit of background research, many of these highly charged issues would become solvable.