OASIS Litigant Portal (LP) TC


LP TC: The Goal and Inquiry to AI Vendors (Prompts Re-Run)

    Posted 08-10-2023 20:49
LP TC Members,

Per your request, the following AI chatbot-related information is provided for your review and consideration.

In an effort to properly articulate the LP TC’s purpose and desired outcomes, “prompts” were created and run through the Microsoft Bing and Google Bard AI chatbot engines. The intent of this approach was to validate whether the information provided would yield the kind of feedback the LP TC desires from AI chatbot vendors. The Bing and Bard responses follow the Prompts section.

Regards,

Jim

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

PROMPTS

Deleterious – Causing harm in a way that is often subtle, slow-acting, unexpected, and not readily apparent.

Bad Actors – A person, group, country, etc. that purposely and usually repeatedly engages in very bad behavior (such as committing crimes or causing harm to others).

Rogue State (also known as “Outlaw State”) – A formal, pseudo, or proxy government entity whose ongoing mission is to threaten world peace and stability.

Traditional journalism reported the facts about people, places, and things without bias or preconceived notions. Modern journalism has introduced biases and preconceived notions via online information feeds that are frequently and intentionally designed to confuse, anger, and cause mental and physical harm to individual people and the general public at large.

Government agencies want the information they make publicly available about their products and services to be prominently referenced as a trusted source of truth and fact. They do not want their products and services misrepresented by bad actors and rogue states for deleterious purposes, or their information denied prominent reference as a trusted source of truth and fact.

How can government agencies ensure that the information they make publicly available about the products and services they provide is prominently referenced as a highly trusted source of truth and fact via artificial intelligence chatbots?

How can government agencies protect the information they make publicly available about the products and services they provide from being misrepresented by bad actors and rogue states who attempt to influence artificial intelligence chatbot algorithms designed to confuse, anger, and cause mental and physical harm to individual people and the general public at large?

What can a government agency do to best organize and present its products and services information so that citizen-submitted structured, unstructured, poorly structured, and incomplete questions are completely and accurately processed by artificial intelligence chatbots (e.g., “How can I change my name in Arizona?”)?
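As an aside on the third question above: the short Python sketch below shows one simple way a chatbot might map a poorly structured, incomplete query such as “name change arizona??” onto a curated FAQ entry by scoring word overlap. This is a minimal illustration, not any vendor’s actual method; the FAQ entries, stopword list, and answer text are hypothetical placeholders.

```python
# Minimal sketch: matching a free-form citizen question to a curated FAQ
# entry by token overlap. All entries below are hypothetical placeholders,
# not actual agency content.

FAQ = {
    "How do I legally change my name in Arizona?":
        "File a name-change petition with the Superior Court in your "
        "county of residence.",
    "How do I renew my Arizona driver license?":
        "Renew online through the state motor vehicle portal or visit "
        "an MVD office.",
}

STOPWORDS = {"how", "do", "i", "can", "in", "my", "a", "the", "to"}

def tokens(text: str) -> set[str]:
    """Lowercase, strip punctuation, and drop filler words."""
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    return {w for w in words if w not in STOPWORDS}

def best_match(question: str) -> tuple[str, float]:
    """Return the best-matching FAQ answer and its overlap score."""
    q = tokens(question)

    def score(known: str) -> float:
        k = tokens(known)
        return len(q & k) / len(q | k)  # Jaccard similarity

    known, answer = max(FAQ.items(), key=lambda item: score(item[0]))
    return answer, score(known)

# Even a poorly structured, incomplete query lands on the right entry:
answer, score = best_match("name change arizona??")
print(f"{score:.2f}: {answer}")
```

Even this crude overlap scoring tolerates missing words and stray punctuation; production chatbots would layer trained language models on top of (or in place of) such matching.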
-------------------------------------------------------

MICROSOFT BING RESPONSES

Government agencies can ensure that the information they make publicly available about their products and services is prominently referenced as a highly trusted source of truth and fact via artificial intelligence chatbots by implementing a citizen-centric approach to delivering government services. This involves understanding citizens’ needs and priorities, measuring citizen satisfaction, and prioritizing areas for improvement [1]. Additionally, government agencies can leverage technologies such as cloud computing, artificial intelligence, and machine learning to deliver faster and better results for their citizens [2].

To protect the information they make publicly available from being misrepresented by bad actors and rogue states, government agencies can implement measures such as proactively posting certain categories of information online, including frequently requested records [3]. They can also work with social media platform and tool developers, citizens, and partners to encourage greater accessibility [4].

To best organize and present their products and services information so that citizen-submitted structured, unstructured, poorly structured, and incomplete questions are completely and accurately processed by artificial intelligence chatbots, government agencies can implement a customer-centric approach to service design and delivery. This involves measuring citizen satisfaction, transforming service delivery based on citizens’ needs and priorities, and continuously improving digital services to provide better experiences for their citizens [1][2].

[Jim Price’s Note] In a prior prompt run, Bing also provided the following response that the LP TC may find useful:

“According to the Digital.gov guide on Improving the Accessibility of Social Media in Government, government agencies have a responsibility to ensure that their digital services are accessible to all citizens, including people with disabilities. This guide provides helpful tips, real-life examples, and best practices to ensure that social media content is usable and accessible to all citizens. Additionally, government agencies can work with artificial intelligence chatbot developers to implement measures that prevent the chatbots from misrepresenting their information. For example, chatbot developers can implement algorithms that detect and flag potentially harmful or misleading content. Government agencies can also provide feedback to chatbot developers on how their information is being represented by the chatbots and work with them to improve the accuracy of the representation.”
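To make the “detect and flag” idea in the note above concrete, here is a minimal Python sketch of such a screening pass. Production systems would use trained classifiers; this heuristic only looks for the surface signals named later in this thread (extreme claims and emotional appeals), and every phrase list is a hypothetical placeholder.

```python
# Minimal sketch of a "detect and flag" review pass over draft chatbot
# answers. Real systems use trained classifiers; this heuristic matches
# hypothetical phrase lists for two surface signals of misinformation.

EXTREME_CLAIMS = ("always", "never", "guaranteed", "100%", "everyone knows")
EMOTIONAL_APPEALS = ("outrageous", "terrifying", "they don't want you to know")

def flag_reasons(text: str) -> list[str]:
    """Return human-readable reasons a draft answer deserves review."""
    lowered = text.lower()
    reasons = []
    for phrase in EXTREME_CLAIMS:
        if phrase in lowered:
            reasons.append(f"extreme claim: {phrase!r}")
    for phrase in EMOTIONAL_APPEALS:
        if phrase in lowered:
            reasons.append(f"emotional appeal: {phrase!r}")
    return reasons

draft = "Everyone knows this terrifying fee is guaranteed to double."
for reason in flag_reasons(draft):
    print("FLAGGED:", reason)
```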
-------------------------------------------------------

GOOGLE BARD RESPONSE

Here are some ways that government agencies can ensure that the information they make publicly available about their products and services is prominently referenced as a highly trusted source of truth and fact via artificial intelligence chatbots:

- Use a variety of sources to gather information. Government agencies should not rely on a single source of information when creating their chatbots. Instead, they should gather information from a variety of sources, including government websites, academic journals, and news organizations. This will help ensure that the information in the chatbot is accurate and up to date.
- Fact-check the information. Once the information has been gathered, it is important to fact-check it to ensure that it is accurate. This can be done by cross-referencing the information with multiple sources and by contacting experts in the field.
- Use plain language. The information in the chatbot should be written in plain language so that it is easy for people to understand. Avoid jargon or technical terms that people may not be familiar with.
- Use examples. To help people understand the information, the chatbot should use examples. For instance, if the chatbot is providing information about how to change your name in Arizona, it could provide an example of a completed name change form.
- Test the chatbot. Once the chatbot is complete, it is important to test it to make sure that it works properly. This can be done by having people use the chatbot to ask questions about the products and services that the government agency provides.
- Update the chatbot regularly. The information in the chatbot should be updated regularly to ensure that it is always accurate and current. This can be done by monitoring the news and by responding to feedback from users.

Here are some ways that government agencies can protect the information they make publicly available about the products and services they provide from being misrepresented by bad actors and rogue states who attempt to influence artificial intelligence chatbot algorithms designed to confuse, anger, and cause mental and physical harm to individual people and the general public at large:

- Use a variety of techniques to detect and filter out bad actors. This could include using machine learning to identify suspicious patterns in user behavior, or using human moderators to review user content.
- Encrypt the information in the chatbot. This will make it more difficult for bad actors to access and modify the information.
- Use a secure chatbot platform. The chatbot platform should be designed to protect the information from unauthorized access.
- Educate the public about the risks of misinformation. People should be aware of the ways that bad actors can use misinformation to manipulate them. They should also be aware of the signs of misinformation, such as extreme claims or emotional appeals.
- Report bad actors to the authorities. If you see someone trying to spread misinformation, report it to the authorities. This will help protect the public from harm.

Here are some things that a government agency can do to best organize and present its products and services information so that citizen-submitted structured, unstructured, poorly structured, and incomplete questions are completely and accurately processed by artificial intelligence chatbots:

- Use a standardized format for the information. This will make it easier for the chatbot to process the information (a minimal sketch follows this response).
- Use clear and concise language. The information should be easy for people to understand, even if they are not familiar with the products or services.
- Provide examples and illustrations. These will help people understand the information more easily.
- Use natural language processing techniques. These will help the chatbot understand the nuances of human language.
- Test the chatbot with a variety of users. This will help ensure that the chatbot can understand and respond to a wide range of questions.

By following these tips, government agencies can help ensure that the information they make publicly available about their products and services is accurate, trustworthy, and accessible to everyone.
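To illustrate the “standardized format” suggestion above, the sketch below expresses a single agency answer as a schema.org FAQPage structure serialized as JSON-LD, one widely used machine-readable convention for published Q&A content. The question and answer text are illustrative placeholders, not official Arizona guidance.

```python
# Minimal sketch: publishing one agency answer in a standardized,
# machine-readable format (schema.org FAQPage expressed as JSON-LD) so
# chatbots and crawlers ingest the same fields every time. The answer
# text is an illustrative placeholder.

import json

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How can I change my name in Arizona?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": ("File a name-change petition with the Superior "
                     "Court in your county of residence."),
        },
    }],
}

# Emit the JSON-LD block an agency could embed in its FAQ page.
print(json.dumps(faq_page, indent=2))
```

Publishing every answer in one consistent structure like this means a chatbot (or the crawler feeding it) sees the same fields for every entry, which is part of what makes incomplete citizen questions easier to resolve against official content.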