lattereview

A framework for multi-agent review workflows using large language models


Keywords
ai, deep-learning, large-language-models, multi-agent, multi-agent-systems, multiagent, review, review-tools
License
MIT
Install
pip install lattereview==0.6.0

Documentation

LatteReview 🤖☕

PyPI version License: MIT Python 3.9+ Code style: black Maintained: yes

LatteReview is a powerful Python package designed to automate academic literature review processes through AI-powered agents. Just like enjoying a cup of latte ☕, reviewing numerous research articles should be a pleasant, efficient experience that doesn't consume your entire day!


🚨 This package is in BETA stage: Major changes and breaking updates are expected before v1.0.0!


🎯 Key Features

  • Multi-agent review system with customizable roles and expertise levels for each reviewer
  • Support for multiple review rounds with hierarchical decision-making workflows
  • Review diverse content types including article titles, abstracts, custom texts, and even images using LLM-powered reviewer agents
  • Define reviewer agents with specialized backgrounds and distinct evaluation capabilities (e.g., scoring, concept abstraction, or custom reviewers of your own preference)
  • Create flexible review workflows where multiple agents operate in parallel or sequential arrangements
  • Enable reviewer agents to analyze peer feedback, cast votes, and propose corrections to other reviewers' assessments
  • Enhance reviews with item-specific context integration, supporting use cases like Retrieval Augmented Generation (RAG)
  • Model-agnostic integration through LiteLLM, supporting OpenAI, Anthropic (Claude), Gemini, Groq, and local models via Ollama
  • High-performance asynchronous processing for efficient batch reviews
  • Standardized output format featuring detailed scoring metrics and reasoning transparency
  • Robust cost tracking and memory management systems
  • Extensible architecture supporting custom review workflow implementation

🛠️ Installation

pip install lattereview

Please refer to our installation guide for detailed instructions.

🚀 Quick Start

LatteReview enables you to create custom literature review workflows with multiple AI reviewers, each of which can use a different model or provider based on your needs. Below is a working example that uses LatteReview to run a quick title/abstract review with two junior reviewers and one senior reviewer (all AI agents)!

Please refer to our Quick Start page and Documentation page for detailed instructions.

from lattereview.providers import LiteLLMProvider
from lattereview.agents import TitleAbstractReviewer
from lattereview.workflows import ReviewWorkflow
import pandas as pd
import asyncio
from dotenv import load_dotenv

# Load environment variables from the .env file in the root directory of your project
load_dotenv()

# First Reviewer: Conservative approach
reviewer1 = TitleAbstractReviewer(
    provider=LiteLLMProvider(model="gpt-4o-mini"),
    name="Alice",
    backstory="a radiologist with expertise in systematic reviews",
    inclusion_criteria="The study must focus on applications of artificial intelligence in radiology.",
    exclusion_criteria="Exclude studies that are not peer-reviewed or not written in English.",
    model_args={"temperature": 0.2},
)

# Second Reviewer: More exploratory approach
reviewer2 = TitleAbstractReviewer(
    provider=LiteLLMProvider(model="gemini/gemini-1.5-flash"),
    name="Bob",
    backstory="a computer scientist specializing in medical AI",
    inclusion_criteria="The study must focus on applications of artificial intelligence in radiology.",
    exclusion_criteria="Exclude studies that are not peer-reviewed or not written in English.",
    model_args={"temperature": 0.2},
)

# Expert Reviewer: Resolves disagreements
expert = TitleAbstractReviewer(
    provider=LiteLLMProvider(model="gpt-4o"),
    name="Carol",
    backstory="a professor of AI in medical imaging",
    inclusion_criteria="The study must focus on applications of artificial intelligence in radiology.",
    exclusion_criteria="Exclude studies that are not peer-reviewed or not written in English.",
    model_args={"temperature": 0.2},
    additional_context="Alice and Bob disagree with each other on whether or not to include this article. You can find their reasoning above.",
)

# Define workflow
workflow = ReviewWorkflow(
    workflow_schema=[
        {
            "round": "A",  # First round: Initial review by both reviewers
            "reviewers": [reviewer1, reviewer2],
            "text_inputs": ["title", "abstract"]
        },
        {
            "round": "B",  # Second round: Expert reviews only disagreements
            "reviewers": [expert],
            "text_inputs": ["title", "abstract", "round-A_Alice_output", "round-A_Bob_output"],
            "filter": lambda row: row["round-A_Alice_evaluation"] != row["round-A_Bob_evaluation"]
        }
    ]
)

# Load and process your data
data = pd.read_excel("articles.xlsx")  # Must have 'title' and 'abstract' columns
results = asyncio.run(workflow(data))  # Returns a pandas DataFrame with all original and output columns

# Save results
results.to_csv("review_results.csv", index=False)
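The round-B `filter` above operates on the columns that round A adds to the output DataFrame. As a minimal sketch of how that predicate behaves (the `round-A_<Name>_evaluation` column names follow the pattern used in the example; the exact columns the package emits may differ), you can test it against a toy DataFrame:

```python
import pandas as pd

# Toy frame mimicking round-A output columns from the example above.
df = pd.DataFrame({
    "title": ["Paper 1", "Paper 2", "Paper 3"],
    "round-A_Alice_evaluation": [1, 0, 1],
    "round-A_Bob_evaluation": [1, 1, 0],
})

# Same predicate as the workflow's "filter" entry: keep only the rows
# where the two junior reviewers disagree, so only those reach the expert.
disagreements = df[df.apply(
    lambda row: row["round-A_Alice_evaluation"] != row["round-A_Bob_evaluation"],
    axis=1,
)]

print(disagreements["title"].tolist())  # → ['Paper 2', 'Paper 3']
```

Rows where both reviewers agree never reach the expert round, which keeps the second round's token cost proportional to the disagreement rate rather than the corpus size.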

🔌 Model Support

LatteReview offers flexible model integration through multiple providers:

  • LiteLLMProvider (Recommended): Supports OpenAI, Anthropic (Claude), Gemini, Groq, and more
  • OpenAIProvider: Direct integration with OpenAI and Gemini APIs
  • OllamaProvider: Optimized for local models via Ollama

Note: Models should support async operations and structured JSON outputs for optimal performance.
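When routing through `LiteLLMProvider`, model identifiers follow LiteLLM's provider-prefix convention: OpenAI model names are bare, while other providers are addressed with a prefix. The mapping below is an illustrative sketch only; the specific model names are examples, and LiteLLM's documentation has the authoritative list:

```python
# Illustrative LiteLLM-style model identifiers (examples only; consult
# LiteLLM's docs for currently supported models and providers).
MODEL_EXAMPLES = {
    "openai": "gpt-4o-mini",              # bare name routes to OpenAI
    "gemini": "gemini/gemini-1.5-flash",  # provider prefix for Google
    "groq": "groq/llama3-70b-8192",       # provider prefix for Groq
    "ollama": "ollama/llama3",            # local model served by Ollama
}

for provider, model in MODEL_EXAMPLES.items():
    print(f"{provider:>8}: {model}")
```

Swapping a reviewer between cloud and local models is then just a matter of changing the string passed to `LiteLLMProvider(model=...)`.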

📖 Documentation

Full documentation and API reference are available at: https://pouriarouzrokh.github.io/LatteReview

🛣️ Roadmap for Future Features

  • Implementing LiteLLM to add support for additional model providers
  • Draft the package's full documentation
  • Enable agents to return a percentage of certainty
  • Enable agents to be grounded in static references (text provided by the user)
  • Enable agents to be grounded in dynamic references (i.e., receive a function that outputs a text based on the input text; this function could, e.g., be a RAG function)
  • Support for image-based inputs and multimodal analysis
  • Development of AbstractionReviewer class for automated paper summarization
  • Showcase how the AbstractionReviewer class could be used to analyze the literature around a given topic
  • Adding a tutorial example and a docs section on how to create custom reviewer agents
  • Adding a TitleAbstractReviewer agent and a tutorial for it
  • Evaluating LatteReview
  • Writing the white paper for the package and public launch
  • Development of a no-code web application
  • (for v>2.0.0) Adding conformal prediction tool for calibrating agents on their certainty scores
  • (for v>2.0.0) Adding a dialogue tool for enabling agents to seek external help (from helper agents or parallel reviewer agents) during review.
  • (for v>2.0.0) Adding a memory component to the agents for saving their own insights or insightful feedback they receive from the helper agents.

👨‍💻 Author

Pouria Rouzrokh, MD, MPH, MHPE
Medical Practitioner and Machine Learning Engineer
Incoming Radiology Resident @Yale University
Former Data Scientist @Mayo Clinic AI Lab

Find my work: Twitter Follow LinkedIn Google Scholar Email

❤️ Support LatteReview

If you find LatteReview helpful in your research or work, consider supporting its continued development. Since we're already sharing a virtual coffee break while reviewing papers, maybe you'd like to treat me to a real one? ☕ 😊

Ways to Support:

  • Treat me to a coffee on Ko-fi ☕
  • Star the repository to help others discover the project
  • Submit bug reports, feature requests, or contribute code
  • Share your experience using LatteReview in your research

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🤝 Contributing

We welcome contributions! Please feel free to submit a Pull Request.

📚 Citation

If you use LatteReview in your research, please cite our paper:

# Preprint citation to be added