AI Guidelines & Recommendations


Artificial Intelligence (AI) tools are transforming our work. They have the potential to automate tasks, improve decision-making, and provide valuable insights into our operations.

However, AI tools also present new challenges to information security and data protection. This document provides recommendations and guidance on being safe and secure when using AI tools, especially when sharing potentially sensitive company and customer information.

AI Tools

AI tools are software applications that use artificial intelligence algorithms to perform specific tasks, automate processes, solve problems, analyze data, create content, and improve decision-making.

Most familiar: Generative AI (GenAI) is a general term for artificial intelligence that creates new content by generating data samples similar to its training set. These generative models learn patterns, structures, and features from the input data and can create content with similar characteristics.

  • Gen AI Services for the U-M Community – ITS has vetted and provides these services; using them is recommended.
    • U-M GPT is a user-friendly interface that allows faculty, staff, and students to engage in chat-based queries and benefit from GenAI technology.
    • U-M Maizey – Empowers users to extract valuable insights, discover patterns, and gain deeper knowledge from the datasets they have and link to. Maizey is fee-based; see Pricing information.
    • U-M GPT Toolkit – Designed for those who require complete control over their AI environments and models.

Other AI Tool Categories

  • Advertising
  • Audio
  • Coding
  • Copywriting
  • Customer support and sales
  • Data
  • Data Science
  • Design
  • Education
  • HR and recruiting
  • Image design & generation & editing
  • Meetings
  • Productivity
  • Video
  • Writing and content creation


Central to safe AI use is handling data securely, responsibly, and confidentially. The guidance below covers evaluating the security risks of AI tools and protecting confidential data.

Security Best Practices

  • U-M Gen AI Guidance
    • With the rise of numerous GenAI-based tools, knowing which tool to use is essential. Tools provided by the University of Michigan, such as U-M GPT, are private, secure, and free for faculty. Data you share while using these tools will not be used to train the models and is therefore not at risk of being leaked.
    • For more information, select the applicable group below.
  • Evaluation of AI tools: Evaluating the security of any AI tool before using it is the first critical step. An assessment includes reviewing the tool’s security features, terms of service, and privacy policy, as well as checking the reputation of the tool developer and any third-party services the tool uses.
    • When looking at a new AI tool, please follow the steps below.
      1. Submit a ticket to Ross IT Support requesting an evaluation of the tool you are interested in.
      2. In the ticket, include the name of the tool and website URL.
      3. Describe how you want to use the tool. Providing detailed examples is helpful.
      4. Note any crucial timelines related to your intended use of the tool.
    • Use reputable resources: Ensuring a tool is safe and the vendor has a solid reputation is vital. Should you want to use tools outside of U-M’s offerings, it is critical that you work with Ross IT so that an evaluation can be done. Any AI tool must follow and meet university security and data protection standards.
    • As with any product, check with Ross IT first regarding its legitimacy, security, and use.
    • Ross IT can also advise on current, legitimate companies in the space that offer a variety of products.
  • Privacy risks: In most cases, the data you share is not private and will be accessible to the external parties hosting GenAI-based tools. Do not share confidential or sensitive information, such as credit card information, or personal details such as ID numbers or addresses.
    • Protection of confidential data: Do not upload or share any sensitive data, including confidential, proprietary, or protected data. This includes data related to customers, employees, or vendors.
    • Data privacy: Exercise discretion when sharing information publicly. As a first step, employees must ask themselves, “Would I be comfortable sharing this information outside of the university? Would we be okay with this information being leaked publicly?” before uploading or sharing any data into AI tools.
  • Access control: Do not share login credentials or sensitive information with others or third parties. Use strong passwords, keep software up to date, and follow the university’s safe computing practices.
  • U-M Guidelines for Secure AI Use
    • AI tools should only be used with institutional data classified as Low. Examples include public information and data that, if disclosed, poses little to no risk to individuals and/or the university, and that anyone, regardless of institutional affiliation, can access without limitation.
    • AI tools like ChatGPT should not be used with sensitive information such as student information regulated by FERPA, human subject research information, health information, HR records, etc.
    • AI-generated code should not be used for institutional IT systems and services unless a human reviews it and it meets the requirements of Secure Coding and Application Security.
    • OpenAI’s usage policies disallow the use of its products for many other specific activities, including but not limited to the following:
      • Illegal activity
      • Generation of hateful, harassing, or violent content
      • Generation of malware
      • Activity that has a high risk of economic harm
      • Fraudulent or deceptive activity
      • Activity that violates people’s privacy
      • Telling someone they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
      • High-risk government decision-making

Considering the Ethics of Using GenAI

Following are some things to consider as you use GenAI-based tools and how they may affect your day-to-day usage:

  • GenAI is not sentient:
    GenAI models, or Large Language Models (LLMs), might appear to possess sentience or self-awareness as a human would, but they are simply systems trained on large, biased datasets. LLMs are designed to output the most likely or most common results based on their data and will tend to suppress less common or marginalized information.
  • GenAI is biased:
    GenAI models carry implicit biases that make them unsuitable for ethical deliberation and decision-making, and they should not be used in those circumstances. Furthermore, their training data comes from the past, so they lack context for current social changes.
  • GenAI can mislead:
    GenAI, in its current stage, tends to ‘hallucinate’, fabricating plausible-sounding information that is not true. Models have no real sense of what is true or false; they are built to output what is most likely, in a verbose manner, even when there is not enough actual information to back it up.
  • GenAI prefers English:
    LLMs are heavily biased toward Standard American English. This means that writing styles and dialects adopted by other cultures and ethnic groups, such as African American or Indigenous English, risk being penalized in favor of a privileged, White-dominated form of writing.

Questions to Ask Yourself

As GenAI promises to be a revolutionary tool that can change higher education and beyond, it is essential to understand why and how you intend to use these new, powerful tools. Here are a few questions to consider; note that the answers will vary for each person.

  • Does using a GenAI-based tool help me learn more and think better?
  • Does using a GenAI-based tool enable or hinder my ability to do my job effectively?
  • Is the content I generate accurate and verifiable? Is it free of biases that might harm other groups of society?
  • How will I treat content that might have been generated using a GenAI-based tool?
  • How can my actions using a GenAI-based tool lead to the greater good of society?

Understand that using GenAI-based tools can give you the means to better yourself and society as a whole, and with that ability comes an ethical responsibility.

Additional Information:

Last Updated on March 5, 2024