Considerations for using Generative AI

Overview

Understanding the limitations and ethical challenges of Generative AI

Considerations for the use of Generative AI

Getting the best out of Generative AI means understanding, and attempting to mitigate, the limitations and ethical challenges presented by this paradigm shift in computing.

Data Quality and Biases

Remember that algorithms, machine learning systems, and AI models will replicate the stereotypes and biases of their training data, model builders, and testers.

Discriminatory and prejudiced biases present in the training data will likely replicate themselves in the outputs of these systems. Note that biases can be a function of context and can be useful when they are biases we intend to instruct the models with: for example, "always look right before you cross the street" is a bias that may save your life where traffic drives on the left. The same bias can be dangerous in a different context: where traffic drives on the right, as in the US, you should look left first when crossing the street.

AI models also carry biases present in the languages they were trained on. They lack context-specific knowledge unless it is provided in the query or prompt, and they may oversample some viewpoints or underrepresent others, depending on the model.

Keep in mind that as humans we may also have automation bias, a propensity to favor suggestions from automated systems. Users of AI systems should therefore exercise enhanced caution when relying on outputs from these systems relative to information provided by humans.

Security and Privacy

By interacting with AI models, you may also be providing data to third-party systems. Versa is currently the only cloud Generative AI platform authorized for use with UCSF protected data. Versa, by design, does not save any interactions.

Responsible Use

Standards and guidance on the responsible use of generative AI continue to evolve. A best practice is to disclose the use of AI when it significantly affects the work you are presenting or submitting.

In addition, copyright and attribution for the outputs of most models remain open questions.

Be aware, when using these tools to reproduce or diagnose errors, that many models have a well-known potential to generate offensive or harmful content when prompted to do so. Remember to review and verify results.

AI, and generative AI specifically, uses large amounts of electricity and water and creates a large carbon footprint.

Hallucinations

Generative AI hallucinations are output anomalies in which the AI, despite generating coherent text, presents information that is incorrect, misleading, or devoid of real-world sense. To avoid making decisions based on AI hallucinations, it’s crucial to remember that AI doesn’t produce facts; it generates outputs based on patterns it has learned. Always cross-verify information generated by AI against trusted sources. If the AI output is critical, have it reviewed by a domain expert. Additionally, be cautious about using AI-generated information in sensitive or high-stakes situations, where the potential for harm from incorrect information is greater.

Designing Better Prompts to Minimize Hallucinations

Designing (aka "engineering") prompts more thoughtfully can help reduce the incidence of AI hallucinations. Here are some tips for better prompt engineering:

  • Be Explicit: Make your instructions as clear and detailed as possible. If there’s a specific format you want the answer in, or certain details you want included, make sure to specify this in the prompt.
  • Request Fact-Checking: Ask the AI to think twice before it gives you an answer. For instance, you can include instructions like “Please provide information that you’re highly confident about.”
  • Limit Creativity: If accuracy is more important than creativity for your task, guide the AI to be less creative and more factual. This can also be controlled using the "model temperature" setting in Versa to adapt the creativity of the model to the task at hand; the sketch after this list shows how temperature might be set programmatically.
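
To make these tips concrete, here is a minimal Python sketch combining an explicit prompt, a fact-checking instruction, and a low temperature. It assumes an OpenAI-compatible chat API via the openai package; the endpoint, API key, and model name are placeholders, not Versa's actual interface, which may differ (within Versa itself, temperature is adjusted through the platform's settings).

    # A hedged sketch: explicit instructions plus a low temperature to favor
    # factual, deterministic output over creative variation.
    # Assumes an OpenAI-compatible endpoint; the endpoint, credential, and
    # model name below are placeholders, not Versa-specific values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://example-gateway.invalid/v1",  # placeholder endpoint
        api_key="YOUR_API_KEY",                         # placeholder credential
    )

    response = client.chat.completions.create(
        model="example-model",  # placeholder model name
        temperature=0.2,        # low temperature: less creative, more factual
        messages=[
            # Be explicit: state the task, the desired format, and a
            # fact-checking instruction up front.
            {"role": "system", "content": (
                "Answer in three bullet points. "
                "Only provide information you are highly confident about; "
                "say 'I don't know' otherwise."
            )},
            {"role": "user", "content": "Summarize the side effects of aspirin."},
        ],
    )

    print(response.choices[0].message.content)

As a rule of thumb, temperatures near 0 make outputs more repeatable and conservative, while higher values increase variety; match the setting to whether your task rewards accuracy or creativity.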