Emerging Trends in Generative AI Research: Top Research Papers August 2023

Stay at the forefront of NLP advances with Cohere For AI's community-curated August 2023 research 🔍🧠

TL;DR:

Explore top NLP papers for August 2023, curated by Cohere For AI, covering topics like reducing hallucinations, addressing limitations in RLHF, instruction tuning, aligning LLMs, and more. Stay updated in the fast-evolving NLP field, and consider joining Cohere's research community.


Generative AI enthusiasts and practitioners, get ready for a thrilling ride as we delve into the latest breakthroughs in natural language processing! Our team at Cohere has worked tirelessly to research and collaborate with our research community to bring you the most up-to-date developments in the Generative AI domain. In this post, we’re excited to give you an overview of some of the latest progress in this fast-evolving field, so you can stay well informed and ahead of the curve.

Cohere is dedicated to making LLMs readily available to both developers and enterprises, so they can unleash their true potential. In pursuit of this mission, we continually seek passionate individuals to join our research community and contribute to the advancement of this innovative technology. By participating in Cohere For AI, you can actively help shape the future of NLP and be a part of a collaborative and groundbreaking journey. We invite you to apply and become an integral member of our thriving research community. In particular, check out the research scholars program, with applications closing on September 11!

Top Papers of August 2023 Highlighted by Our Research Discord Community

These papers were highlighted by C4AI research Discord community members. We are very thankful to @Herumb Shandilya, @mohamdy, Sara Hooker, and the rest of the Cohere For AI research community for participating!

ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs

Authors: Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun

TLDR: This paper takes a step towards making open-source LLMs better at following human instructions, particularly instructions that require using real-world APIs.

Summary:
Open-source large language models have advanced a lot, but they still fall short on higher-level tasks, such as following human instructions to use APIs. In contrast, closed-source models tend to be better at these kinds of tasks. To close this gap and enhance the instruction-following capabilities of open-source LLMs, the authors of this paper use a large language model to build a new framework called ToolLLM for data construction, model training, and evaluation. The framework has three key components:

  • ToolBench is an instruction-tuning dataset for tool use. ToolBench contains a variety of real-world APIs from different categories. They generated human-like instructions for these APIs, covering both using one tool and combining multiple tools.
  • ToolLLaMA is the open-source LLaMA model fine-tuned on ToolBench. This model can handle both single-tool and complex multi-tool instructions, and it can also adapt to new APIs using only their documentation, performing almost as well as the closed-source models.
  • ToolEval is used for evaluating the models, measuring their success in executing instructions and comparing different solution paths.

In addition, the authors created an API retriever to automatically suggest appropriate APIs for instructions, removing the need for manual selections. This combination of tools and methods enhances open-source models’ abilities to perform tasks that require interacting with various external tools and APIs.
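
To make this concrete, here is a minimal Python sketch of a tool-use loop in the spirit of ToolLLM: a retriever suggests candidate APIs for an instruction, and a solver picks one and calls it. The function names, the keyword-overlap retriever, and the stubbed API calls are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a ToolLLM-style tool-use loop (illustrative only; the
# names, the keyword retriever, and the stubbed API calls are assumptions,
# not the paper's actual implementation).

from dataclasses import dataclass

@dataclass
class APISpec:
    name: str
    description: str

    def call(self, query: str) -> str:
        # Stand-in for a real HTTP call to the external API.
        return f"[{self.name} result for '{query}']"

API_POOL = [
    APISpec("weather_lookup", "get current weather for a city"),
    APISpec("flight_search", "search flights between two airports"),
    APISpec("currency_convert", "convert an amount between currencies"),
]

def retrieve_apis(instruction: str, pool: list[APISpec], k: int = 2) -> list[APISpec]:
    """Toy stand-in for the paper's API retriever: rank APIs by word overlap
    between the instruction and each API description."""
    words = set(instruction.lower().split())
    scored = sorted(pool, key=lambda api: -len(words & set(api.description.split())))
    return scored[:k]

def solve(instruction: str) -> str:
    """Single-step solution path: pick a candidate API and call it.
    ToolLLaMA itself plans multi-step paths; this only shows the shape."""
    candidates = retrieve_apis(instruction, API_POOL)
    chosen = candidates[0]  # an LLM would choose among candidates and fill in arguments
    observation = chosen.call(instruction)
    return f"Used {chosen.name}: {observation}"

print(solve("what is the current weather for a city like Berlin"))
```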

ToolBench and ToolLLaMA. Source: https://arxiv.org/abs/2307.16789v1

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Authors: Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell

TLDR: Reinforcement learning from human feedback (RLHF) is a powerful way to train LLMs, but it comes with its own challenges. This paper discusses some of these challenges, along with some proposed solutions.

Summary:
This article examines a technique called “reinforcement learning from human feedback” (RLHF), which is used to train AI systems to better understand human goals. This technique is commonly used to improve advanced language models and make them more aligned with what humans want. However, despite its popularity, there hasn’t been much public discussion of its limitations. The article addresses three main issues:

  1. Concrete challenges with RLHF: It explores the problems and challenges associated with RLHF and similar methods, categorizing them into issues with the human feedback, the way rewards are learned, and the optimization process used by the AI system.
  2. Incorporating RLHF into a broader technical safety framework: It suggests ways to complement RLHF to make AI systems safer, including additional techniques that can be used alongside RLHF.
  3. Governance and transparency: It discusses how transparency and rules can be improved to make sure AI systems trained with RLHF are used responsibly.

In short, the article looks at the difficulties and shortcomings of using RLHF to train AI models and suggests ways to make these models safer and more reliable.
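
As a concrete anchor for the “how rewards are learned” category above, here is a minimal PyTorch sketch of the standard reward-modeling step in RLHF: a reward model trained with a pairwise (Bradley-Terry) loss on human preference comparisons. The random features and tiny linear reward head are stand-ins for a transformer; this illustrates the setup the paper critiques, and is not code from the paper.

```python
# Minimal sketch of RLHF reward learning: given (chosen, rejected) response
# pairs labeled by humans, train a reward model so that preferred responses
# score higher, using the pairwise log-sigmoid (Bradley-Terry) loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Pretend features for (chosen, rejected) response pairs, e.g. pooled hidden states.
chosen_feats = torch.randn(8, 16)
rejected_feats = torch.randn(8, 16)

reward_model = nn.Linear(16, 1)          # stand-in for a transformer reward head
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(100):
    r_chosen = reward_model(chosen_feats)      # shape (8, 1)
    r_rejected = reward_model(rejected_feats)
    # Maximize the margin between preferred and dispreferred responses.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.4f}")
```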

RLHF and Challenges. Source: https://arxiv.org/abs/2307.15217

OctoPack: Instruction Tuning Code Large Language Models

Authors: Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre

TLDR: This paper is about making LLMs better at understanding and writing code through instruction tuning.

Summary:
In addition to writing text and following instructions, one of the most fascinating things LLMs can do is write code. In this article, the authors apply instruction tuning, a training approach that has been shown to improve LLM performance on many text tasks, to code. The article introduces a large dataset called CommitPack, which consists of 4 terabytes of Git commits spanning 350 programming languages. A smaller subset of this dataset is used to fine-tune LLMs, and their performance is compared to models trained on other natural and synthetic code instruction datasets.

The authors also introduce a new benchmark called HumanEvalPack, a collection of coding tasks across 6 programming languages. On this benchmark, the results in the paper show that LLMs fine-tuned with CommitPack outperform other models at writing code.
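
To illustrate the idea behind CommitPack, here is a small sketch of how a Git commit could be turned into an instruction-tuning example, with the commit message acting as the instruction and the before/after code as the input and target. The field names are illustrative, not the dataset's actual schema.

```python
# Sketch of turning a Git commit into an instruction-tuning example, in the
# spirit of CommitPack: commit message = instruction, pre-commit code = input,
# post-commit code = target. Field names are illustrative assumptions.

def commit_to_example(message: str, old_code: str, new_code: str) -> dict:
    return {
        "instruction": message,   # natural-language description of the change
        "input": old_code,        # code before the commit
        "output": new_code,       # code after the commit (training target)
    }

example = commit_to_example(
    message="Fix off-by-one error in range upper bound",
    old_code="def first_n(n):\n    return list(range(1, n))",
    new_code="def first_n(n):\n    return list(range(1, n + 1))",
)
print(example["instruction"])
print(example["output"])
```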

Octopack Overview. Source: https://arxiv.org/abs/2308.07124v1

Aligning Large Language Models with Human: A Survey

Authors: Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu

TLDR: LLMs are very good for many language tasks, but sometimes they exhibit problems, such as hallucinations. This paper shows different ways to improve LLMs, including data collection, training, and evaluation.

Summary:
LLMs are trained on very large amounts of data and perform very well at generating coherent, fluent text and following instructions. However, they are prone to some limitations, such as generating biased answers or factually incorrect information (hallucinating). To address these limitations, researchers have been working on aligning LLMs with human expectations. This survey paper presents a comprehensive overview of alignment techniques for LLMs, including the following:

  1. Data collection: Methods for collecting high-quality data, including human-provided instructions, instructions from strong LLMs, and self-instruction.
  2. Training methodologies: The main methods for training and aligning LLMs, including online and offline human preference training, and parameter-efficient training.
  3. Model evaluation: Methods and benchmarks for evaluating the effectiveness of human-aligned LLMs.

Taxonomy of research in aligning LLMs. Source: https://arxiv.org/abs/2307.12966
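
As an example of the data-collection methods covered by the survey, here is a toy sketch of self-instruction: a model expands a small seed set of instructions, and near-duplicates are filtered out before fine-tuning. The stubbed generator and the crude overlap filter are assumptions for illustration, not a specific pipeline from the survey.

```python
# Toy sketch of self-instruction data collection: grow a pool of instructions
# from a few seeds, filtering out near-duplicates. The "LLM" here is a stub.

import random

random.seed(0)

SEED_INSTRUCTIONS = [
    "Summarize the following paragraph in one sentence.",
    "Translate this sentence into French.",
    "Write a short poem about autumn.",
]

def generate_instruction(seeds: list[str]) -> str:
    """Stub for an LLM prompted with seed instructions to produce a new one."""
    base = random.choice(seeds)
    return base.replace("one sentence", "two sentences")  # toy perturbation

def too_similar(a: str, b: str) -> bool:
    """Crude word-overlap filter; real pipelines use ROUGE or embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1) > 0.7

pool = list(SEED_INSTRUCTIONS)
for _ in range(20):
    candidate = generate_instruction(pool)
    if not any(too_similar(candidate, existing) for existing in pool):
        pool.append(candidate)

print(f"{len(pool)} instructions after self-instruction")
```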

Self-Alignment with Instruction Backtranslation

Authors: Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis

TLDR: This paper presents a method for training language models. The method is inspired by the backtranslation method from machine translation, and it (roughly) consists of creating training examples by generating prompts from web documents, and then selecting the high-quality examples for fine-tuning the model.

Summary:
This article presents a scalable method for training language models to follow instructions using unlabeled data and iterative self-improvement. The method is inspired by backtranslation from machine translation, where monolingual text in the target language is automatically translated back into the source language to create synthetic training pairs. Here, the instruction backtranslation approach starts from a corpus of unlabelled web documents and uses a language model to create training examples by predicting prompts that each document would correctly answer. The same model then predicts the quality of the generated (prompt, document) pairs, and only the highest-quality pairs are used to fine-tune the model. The procedure is repeated, using the improved model to better curate the instruction data and re-training to produce an even better model. The resulting model, called Humpback, outperforms all other existing non-distilled models on the Alpaca leaderboard.
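
Here is a minimal sketch of that loop, with both model calls stubbed out: predict a prompt for each unlabeled document, self-score the resulting (prompt, document) pair, and keep only the high-quality pairs for fine-tuning. The helper names and the length-based quality score are placeholders, not the paper's actual prompting or scoring.

```python
# Sketch of the instruction backtranslation loop: generate a prompt for each
# unlabeled document, self-score the pair, keep only high-quality pairs.
# Both "model" calls are stubs; in the paper both are the LLM being improved.

def predict_prompt(document: str) -> str:
    """Stub for the backtranslation step: what instruction would this document answer?"""
    return f"Write a short article about: {document.split('.')[0]}"

def score_pair(prompt: str, document: str) -> float:
    """Stub for the self-curation step: the model rates pair quality (here, by length)."""
    return min(len(document) / 200.0, 1.0)

web_documents = [
    "Sourdough starters need regular feeding. A healthy starter doubles in size within hours.",
    "Hi.",  # low-quality document, should be filtered out
]

curated = []
for doc in web_documents:
    prompt = predict_prompt(doc)
    if score_pair(prompt, doc) >= 0.4:          # keep only high-quality pairs
        curated.append({"instruction": prompt, "response": doc})

print(f"kept {len(curated)} of {len(web_documents)} pairs for fine-tuning")
```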

Overview of the instruction backtranslation method. Source: https://arxiv.org/abs/2308.06259

Halo: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models

Authors: Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, Shizhu Liu

TLDR: This paper introduces a method to measure and reduce hallucinations in small language models.

Summary:
LLMs, in particular those with fewer parameters, are very prone to hallucinations. In this paper, the authors measure and reduce hallucinations in BLOOM 7B, a small open-source model. The main goals of this work are (1) to quantify the severity of hallucinations in LLMs, (2) to enhance the knowledge of LLMs without resorting to instruction tuning, and (3) to investigate whether a more robust teacher LLM can guide a weaker LLM. The authors introduce HALOCHECK, a lightweight, sampling-based, black-box framework for quantifying the severity of hallucinations in an LLM. They also explore other techniques that alleviate hallucinations, such as knowledge injection and teacher-student approaches. Knowledge injection involves fine-tuning the small LLM with domain-specific knowledge, without relying on manual instructions or instructions obtained from larger LLMs. Finally, they were able to use a high-performance LLM to guide weaker LLMs by generating detailed answers to questions.
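
To give a feel for the sampling-based idea, here is a toy sketch: sample several answers to the same question and measure how much they agree with each other, treating low agreement as a sign of hallucination. The word-overlap consistency score below is an illustrative stand-in, not HALOCHECK's actual scorer.

```python
# Toy sketch of sampling-based hallucination estimation: sample multiple
# answers to one question and score their mutual consistency; low agreement
# suggests hallucination. The Jaccard overlap is a stand-in scorer.

from itertools import combinations

def jaccard(a: str, b: str) -> float:
    wa = set(a.lower().replace(".", "").split())
    wb = set(b.lower().replace(".", "").split())
    return len(wa & wb) / max(len(wa | wb), 1)

def consistency_score(samples: list[str]) -> float:
    """Average pairwise agreement across sampled answers (1.0 = fully consistent)."""
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Stubbed samples from a small model answering the same question several times.
consistent_samples = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "Paris is the capital of France.",
]
inconsistent_samples = [
    "The capital of France is Lyon.",
    "France's capital city is Marseille.",
    "Paris is the capital of France.",
]

print(f"consistent answers:   {consistency_score(consistent_samples):.2f}")
print(f"inconsistent answers: {consistency_score(inconsistent_samples):.2f}")
```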

The HALO framework. Source: https://arxiv.org/abs/2308.11764v2