This AI policy establishes guidelines and best practices for the responsible and ethical use of Artificial Intelligence (AI) on platforms owned, managed, and/or operated by the National Association of Enrolled Agents (NAEA), including the Member Web Board. Its purpose is to ensure that AI systems and platforms are used in a manner that aligns with the association’s values and adheres to legal and regulatory standards.
This policy applies to all covered persons who use or interact with AI systems, including but not limited to Large Language Models (LLMs), plugins, and data-enabled AI tools. Covered persons are defined as employees, members, contractors, speakers, authors, and partners of the National Association of Enrolled Agents (NAEA).
3.1. Responsible AI Use
Covered persons must use AI systems responsibly and ethically, avoiding any actions that could harm others, violate privacy, spread misinformation (including by failing to fact-check AI-generated content), or otherwise facilitate malicious activities.
3.2. Compliance with Laws and Regulations
AI systems must be used in compliance with all applicable laws and regulations, including data protection, privacy, and intellectual property laws, and in accordance with Circular 230 and IRC Section 7216.
3.3. Transparency and Accountability
When using NAEA platforms, covered persons must be transparent about the use of AI in relevant work and responses on member communities and platforms, ensuring that other members and stakeholders are aware of the technology’s involvement in providing advice or in any decision-making process. Covered persons are responsible for the outcomes generated by AI systems and should be prepared to explain and justify those outcomes.
3.4. Data Privacy and Security
Covered persons must adhere to the association’s data privacy and security policies when using AI systems. They must ensure that any personal or sensitive data used by AI systems is anonymized and stored securely.
3.5. Bias and Fairness
Covered persons should make reasonable efforts to identify and mitigate biases in AI systems. They should ensure that these systems are fair, inclusive, and do not discriminate against any individuals or groups.
3.6. Human-AI Collaboration
Covered persons should recognize the limitations of AI and always use their judgment when interpreting and acting on AI-generated recommendations. AI systems should be used as a tool to augment human decision-making, not replace it.
3.7. Training and Education
NAEA employees who use AI systems must receive appropriate training on how to use them responsibly and effectively. They should also stay informed about advances in AI technology and potential ethical concerns.
3.8. Third-Party Services
When utilizing third-party AI services or platforms, covered persons must ensure that the providers adhere to the same ethical standards and legal requirements as outlined in this policy.
Any suspected violations of this policy or any potential ethical, legal, or regulatory concerns related to AI use must be reported to the Executive Vice President and/or NAEA’s Board of Directors.
Violations of this policy may result in removal of content in accordance with NAEA’s code of conduct policy.
This policy will be updated as needed, based on the evolution of AI technology and the regulatory landscape. Any changes to the policy will be communicated to all covered persons.
This policy is effective as of February 25, 2025.
If you have any questions about this Policy or our practices, please contact us at:
National Association of Enrolled Agents
1100 G St NW, Suite 450
Washington, DC 20036
Telephone: (202) 822-6232
Email: info@naea.org