Enhancing System 2 Attention Mechanisms in LLMs
Soft attention in Large Language Models (LLMs) often attends to irrelevant or misleading context, which skews model outputs. This paper introduces System 2 Attention (S2A) to address this: the model is first prompted to regenerate the input context, keeping only the material relevant to the query, and then answers from that regenerated context, improving accuracy and reliability.
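The two-step procedure can be sketched as a small prompting pipeline. This is a minimal illustration, not the paper's exact templates: `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is an assumption.

```python
# Minimal sketch of a System 2 Attention (S2A) pipeline.
# `call_llm` is a hypothetical callable: prompt string -> completion string.

S2A_REWRITE_PROMPT = (
    "Given the following text, extract only the parts that are relevant "
    "and unbiased for answering the question. Do not answer the question.\n\n"
    "Text: {context}\n\nQuestion: {question}\n\nRelevant context:"
)

ANSWER_PROMPT = "Context: {context}\n\nQuestion: {question}\n\nAnswer:"


def system2_attention(call_llm, context: str, question: str) -> str:
    """Two-step S2A: regenerate the context, then answer from it."""
    # Step 1: rewrite the context, dropping irrelevant or opinionated
    # material that could otherwise skew the final answer.
    filtered = call_llm(
        S2A_REWRITE_PROMPT.format(context=context, question=question)
    )
    # Step 2: answer using only the regenerated context.
    return call_llm(ANSWER_PROMPT.format(context=filtered, question=question))
```

The key design point is that the second call never sees the raw context, so distracting content filtered out in step 1 cannot influence the final answer.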
Enhancing LLM Performance through Social Roles
This paper examines how prompt design shapes Large Language Model (LLM) behavior, and how assigning social roles (personas) in the prompt can significantly enhance model performance and user experience.
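In practice, a social role is typically injected through the system message of a chat-style request. The sketch below shows the general shape; the role and task strings are illustrative assumptions, not examples from the paper.

```python
# Minimal sketch of social-role prompting: the role is assigned via the
# system message of a standard chat-message list.


def build_role_prompt(role: str, task: str) -> list:
    """Return chat messages that assign the model a social role."""
    return [
        # The system message sets the persona for the whole conversation.
        {"role": "system", "content": f"You are {role}."},
        # The user message carries the actual task.
        {"role": "user", "content": task},
    ]


messages = build_role_prompt(
    "a patient math teacher explaining concepts to a beginner",
    "Why does a negative number times a negative number give a positive?",
)
```

The same message list works with most chat-completion APIs, which accept a sequence of `{"role": ..., "content": ...}` dictionaries.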
Leveraging Analogical Reasoning in Large Language Models
This paper introduces analogical prompting, a novel approach that enhances the reasoning capabilities of large language models (LLMs) by having them self-generate relevant exemplars before solving a problem. This addresses a limitation of traditional few-shot prompting, which requires manually labeled exemplars and can be inefficient to curate.
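The core idea fits in a single prompt: ask the model to recall a few related worked problems, then solve the target problem. The template below is a minimal sketch; its exact wording is an assumption, not the paper's template.

```python
# Minimal sketch of analogical prompting: one prompt that asks the model
# to self-generate relevant exemplars before tackling the target problem.

ANALOGICAL_TEMPLATE = (
    "Problem: {problem}\n\n"
    "Instructions:\n"
    "1. Recall three relevant and distinct problems. For each, describe "
    "the problem and explain its solution.\n"
    "2. Then solve the initial problem above, step by step.\n"
)


def analogical_prompt(problem: str) -> str:
    """Build a single analogical prompt for the given problem."""
    return ANALOGICAL_TEMPLATE.format(problem=problem)
```

Unlike few-shot prompting, no hand-written exemplars are supplied: the model generates its own analogues, tailored to the problem at hand.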