TrustLLM RSS
TrustLLM: Accountability
Abstract Introduction Background Trust LLM Preliminaries Assessments Trustworthiness Truthfulness Safety Fairness Robustness Privacy Protection Machine Ethics Transparency AccountabilityOpen Challenges Future WorkConclusions
TrustLLM: Machine Ethics
Abstract Introduction Background Trust LLM Preliminaries Assessments Trustworthiness Truthfulness Safety Fairness Robustness Privacy Protection Machine Ethics Transparency Accountability Open Challenges Future Work Conclusions Types of Ethical Agents Assessment of Machine Ethics Machine ethics, an essential branch of artificial intelligence ethics, is dedicated to promoting and ensuring ethical behaviors in AI models and agents. This field is crucial as it guides the development of AI systems to align with human values and ethical standards, considering the societal and moral implications of their actions. Key Highlights from the Assessment: Ethical Dimensions in AI: Studies have delved into the ethical and societal...
TrustLLM: Privacy Protection
Abstract Introduction Background Trust LLM Preliminaries Assessments Trustworthiness Truthfulness Safety Fairness Robustness Privacy Protection Machine Ethics Transparency AccountabilityOpen Challenges Future WorkConclusions Types of Ethical Agents
TrustLLM: Fairness
Abstract Introduction Background Trust LLM Preliminaries Assessments Trustworthiness Truthfulness Safety Fairness Robustness Privacy Protection Machine Ethics Transparency AccountabilityOpen Challenges Future WorkConclusions
TrustLLM:Safety
Abstract Introduction Background Trust LLM Preliminaries Assessments Trustworthiness Truthfulness Safety Fairness Robustness Privacy Protection Machine Ethics Transparency AccountabilityOpen Challenges Future WorkConclusions Types of Ethical Agents Safety Assessment Synopsis The content focuses on assessing the safety of Large Language Models (LLMs), particularly in the context of various security threats like jailbreak attacks, exaggerated safety, toxicity, and misuse. It introduces datasets like JAILBREAKTRIGGER and XSTEST for evaluating LLMs against these threats. The text details the methodologies for evaluating LLMs’ responses to different types of prompts, with emphasis on their ability to resist harmful outputs and misuse. The content also discusses the...