Approval Requirement for GenAI Toolsets
Prior authorization from GTA is required for any generative AI tools intended for regular organizational usage, including but not limited to AI-driven transcription, summarization, note-taking, or decision-making assistance. Unauthorized use of such tools is strictly prohibited.
Virtual Meetings
While AI capabilities for recording and transcribing virtual meetings offer convenience, they should be used thoughtfully. Agencies must ensure that the use of AI tools in virtual meetings complies with all relevant laws and regulations, including those related to data protection, intellectual property, and labor laws. AI should not be used to manipulate or misrepresent the contributions of participants. Be mindful of the potential for AI to misinterpret or misrepresent human communication. AI outputs should be used only to supplement, not replace, human judgment.
Once created, meeting recordings and transcriptions become public records and are subject to retention rules. Further, there are storage and cost considerations associated with maintaining recordings. Meeting organizers are responsible for determining the need for recordings and automated transcriptions and keeping any records created.
Organizers of virtual meetings must also ensure that only invited participants attend. Bots or other automated entities must not be created or admitted to represent participants or to capture information from a meeting.
Use of Large Language Models (LLMs)
Employees are required to seek authorization from GTA to use any third-party GenAI toolsets, including large language models (LLMs). While LLMs such as OpenAI's ChatGPT can boost convenience and efficiency, they also present risks in the form of misinformation, bias, and threats to cybersecurity.
Employees must not enter non-public information into LLMs; any data entered into an LLM becomes public, as it is accessible to anyone using the tool. Below is a non-exhaustive list of information types that must not be used in LLMs:
- Confidential or privileged communications
- Personally identifiable information (PII)
- Protected health information (PHI)
- Code that includes passwords or secrets
- Information that could undermine trust in the state of Georgia
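As an illustrative safeguard only (not an official GTA tool or a substitute for approved data-loss-prevention controls), a simple pre-submission check can catch some obvious secrets or PII before text is pasted into an LLM. The patterns below are assumptions for demonstration and are far from exhaustive:

```python
import re

# Illustrative, non-exhaustive patterns for obviously sensitive content.
# Real screening requires approved data-loss-prevention tooling.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credential": re.compile(r"(?i)\b(password|passwd|secret|api[_-]?key)\s*[:=]\s*\S+"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A prompt containing a credential should be flagged before submission.
print(flag_sensitive("Here is my code: api_key = 'abc123'"))  # prints ['credential']
```

A check like this can only reduce accidental disclosure; it cannot recognize confidential or privileged content from context, so human review remains required.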
In addition to exercising care with data entered into LLMs, employees should verify the accuracy of information obtained from LLMs. These tools can generate content that is incorrect or fictitious yet appears plausible and is not easily distinguishable from accurate information. It is imperative that employees thoroughly review all information obtained from LLMs for accuracy, integrity, and completeness, as they would with information from any other source.
Attribution
If AI is used to generate text or images, that use must be clearly acknowledged. Where possible, cite the name and version of the AI tool and the date the content was generated, for example: "Generated with [tool name and version] on [date]."
Transparent disclosure of AI use preserves trust and helps set reasonable expectations among readers or users of the content (for example, AI-powered assistants).
AI Training
GTA is committed to providing training to state employees to ensure safe and responsible use of AI tools. AI training modules will be added to the cybersecurity awareness training that staff are required to complete.
AI Tool Use
Guidance for Agencies
Emerging technologies like artificial intelligence (AI) bring new capabilities – and responsibilities – to state government operations. GTA champions the responsible and ethical deployment of AI, providing oversight in developing and implementing policy and in ensuring transparency, fairness, and accountability. [Please see the Enterprise AI Responsible Use Policy (PS-23-001) and AI Responsible Use Standard (SS-23-002).]
Generative AI presents additional considerations. In collaboration with the AI Council, GTA is preparing guidelines for state government’s use of generative AI; in the meantime, please be aware of areas where particular care is warranted. The guidance below applies to all employees, contractors, and third parties who use AI tools on behalf of their organizations.