Teaching AI to Find You
An open learning experiment in AI entity recognition
The goal is to transparently document the process of making oneself "findable" by AI systems, while actively engaging with the ethical implications of that practice.
What: Creating and documenting a systematic approach to AI training bios that help individuals become recognizable as distinct entities to large language models (one possible bio format is sketched below).
Why: As AI systems increasingly mediate access to opportunities, understanding how they recognize individuals becomes essential to making that access equitable.
How: Through transparent versioning, open documentation, and collaborative learning about what works (and what doesn't).
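The project leaves the exact shape of an "AI training bio" open, but one widely used way to make a personal bio machine-readable is schema.org Person markup serialized as JSON-LD. The sketch below is a minimal starting point, not this project's actual format; every field value is a placeholder.

```python
import json

# A minimal sketch of a machine-readable bio using schema.org's
# Person vocabulary, serialized as JSON-LD. All values are
# placeholders, not this project's actual bio.
bio = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://example.com",
    "jobTitle": "Product Engineer",
    "knowsAbout": ["AI entity recognition", "information retrieval"],
    "sameAs": [
        # Cross-links to other profiles help systems disambiguate
        # this person from others who share the name.
        "https://www.linkedin.com/in/jane-example",
        "https://github.com/jane-example",
    ],
}

# Embedding this JSON in a <script type="application/ld+json"> tag
# on a personal site lets crawlers (and the AI systems trained on
# their output) parse the bio as structured data.
print(json.dumps(bio, indent=2))
```

Whatever format a bio ultimately takes, the design choice that matters is consistency: stable names, URLs, and cross-links give retrieval systems unambiguous signals to match against.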
Commitment: Share knowledge freely so anyone can learn, not just those with technical expertise or insider knowledge.
In Practice: Document all changes, explain technical concepts accessibly, share successes and failures, create templates others can use, and offer insights without gatekeeping.
Commitment: Show the complete process, including iterations that didn't work and questions we can't answer.
In Practice: The version history shows the bio's evolution, changelogs document the reasoning behind each change, metrics are tracked and shared, AI query results are published (positive and negative), and uncertainties are acknowledged.
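The project doesn't publish a fixed schema for these query records, but a hypothetical logging format makes the commitment concrete: each test captures what was asked, of which system, and whether the right person came back. All field names below are assumptions for illustration.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical record for one recognition test. The project does not
# specify its actual schema; these field names are illustrative.
@dataclass
class QueryResult:
    tested_on: str    # ISO date the query was run
    model: str        # which AI system was queried
    prompt: str       # the exact question asked
    recognized: bool  # did the model surface the right person?
    notes: str        # hallucinations, partial matches, confusions

result = QueryResult(
    tested_on=date.today().isoformat(),
    model="example-model-v1",
    prompt="Who is Jane Example and what does she work on?",
    recognized=False,
    notes="Model conflated her with a different Jane Example.",
)

# Appending every result, positive or negative, to a public log is
# what keeps the experiment honest.
with open("query_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(asdict(result)) + "\n")
```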
Commitment: Actively engage with the ethical implications of AI-mediated recognition and resource distribution.
In Practice: Raise questions about fairness and access, consider downstream consequences, distinguish between optimization and manipulation, invite diverse perspectives, and hold ourselves accountable.
Commitment: Build knowledge together through shared experimentation and open dialogue.
In Practice: Welcome contributions and feedback, share templates and frameworks, learn from others' experiments, create community resources, and foster constructive discourse.
The Question: If AI systems increasingly mediate access to opportunities, how do we ensure that knowing how to make oneself "AI-findable" doesn't become another barrier?
The Question: How do AI systems verify the accuracy of training data? What prevents misrepresentation?
The Question: What happens when AI systems preferentially surface certain individuals for opportunities, jobs, or resources?
The Question: Where is the line between legitimate optimization and system manipulation?
Want to make yourself more findable to AI systems?
Studying AI entity recognition and information retrieval?
Building systems that surface people and expertise?
Care about AI, fairness, and access?
We're at an inflection point where AI systems are becoming infrastructure for opportunity distribution. Understanding how these systems work—and who benefits—is essential for building an equitable future.
"The goal isn't to 'win' at AI recognition—it's to build a future where everyone has a fair chance to be found."
Explore the version history, use our templates, and join the conversation about responsible AI entity recognition.
View Version History
Visit matt-schober.com