Modeling responsible AI use is a powerful form of digital citizenship. In my context as a librarian, bibliotherapist, educator, and fan community member, it is more than policy put into practice. It is formation.
1. I will use ChatGPT as a collaborator, not a crutch. I affirm that my voice, insight, and experience are primary. AI can support my clarity and output, but it will not replace my discernment, values, or lived knowledge.
2. I will protect the privacy of people in my care. When working on bibliotherapy stories, student support materials, or community narratives, I will anonymize names and details, and I will never upload sensitive personal or medical data.
3. I will use AI to strengthen my advocacy, not compromise it. Whether I'm crafting workshop materials or writing about fandom justice, I commit to using ChatGPT to amplify truth, care, and dignity—not to dilute or sanitize uncomfortable realities.
4. I will fact-check and attribute. For any citations, lyrics, research, or shared ideas, I will verify sources and acknowledge creators. AI-generated responses will be cross-checked and revised before being used in public platforms.
5. I will remain reflective about the power and limits of AI. I understand that ChatGPT is trained on vast, sometimes biased datasets. I commit to questioning, rewording, and reframing outputs that may reinforce colonial, ableist, or extractive thinking.
6. I will honor my process and my pauses. Not every question needs an immediate answer. I will use silence, solitude, and community check-ins alongside my digital tools. I trust my pace and my rhythms.