Navigating AI in Human Services: Why Policy Matters Now
As artificial intelligence tools become increasingly accessible to human services organizations, the need for clear policy frameworks has never been more urgent. AI offers genuine benefits—more efficient caseload management, earlier identification of at-risk populations, and reduced administrative burden—but without thoughtful governance, we risk algorithmic bias in eligibility determinations, privacy violations with sensitive client data, and unclear accountability when AI-assisted decisions go wrong. Many organizations are already experimenting with AI applications, making inconsistent implementations a real compliance and equity concern. Effective AI policy should establish governance around appropriate use cases, mandate transparency so clients know when AI influences their services, require regular bias testing particularly for vulnerable populations, preserve human authority for consequential decisions, and ensure robust data protection. The goal isn't to block innovation but to guide it responsibly, ensuring that as we adopt new technologies, we don't compromise the human-centered values that define our sector. Now is the time for human services leaders and government managers to collaborate on developing these frameworks—the technology won't wait, but we can still shape how it's used to serve our communities ethically and effectively.
… The above was generated by Claude AI. Were you able to spot it?
Recently, our firm researched the evolving landscape of AI use in the workplace. We found that existing federal and state guidelines are limited and often vague, which is unsurprising given how rapidly AI has been adopted. Much of the current policy centers on promoting innovation and maintaining national economic competitiveness, leaving significant gaps in practical guidance. We believe AI can be a valuable tool, but we also recognize the importance of transparency with our clients and the communities we engage with in the human services sector.
To that end, last month Koné developed an internal policy outlining our approach to responsible AI use, which has been incorporated into our employee handbook. Below are standards that our firm is committed to:
Use only approved AI tools. After a thorough review of their data protection policies, our firm has approved Claude and Otter for use in our organization.
Confidentiality & Data Protection. All personally identifiable information and confidential project information must be removed or de-identified before inputting content into AI tools.
Consent in Data Collection. Facilitators must notify participants before recording or transcribing a session with an AI system.
Transparency. Our team is committed to clearly indicating when AI tools have been used for analysis, writing, or other content creation.
Accuracy & Review. Our team reviews and fact-checks all AI-generated content before it’s distributed.
What’s your organization's policy, and how is it evolving? Share in the comments!