
ReviewGrounder

Improving Review Substantiveness with Rubric-Guided, Tool-Integrated Agents

1 Texas A&M University, 2 University of Toronto, 3 University of Waterloo, 4 UC San Diego, 5 Lambda, 6 University of Oregon, * Equal Contribution † Corresponding Authors
ACL 2026

ReviewGrounder decomposes reviewing into collaborating agents: (a) Review Drafter generates an initial draft; (b) Multi-dimensional Grounding Agents (Literature Searcher, Insight Miner, Result Analyzer) enrich the draft via tools; (c) Review Aggregator synthesizes the draft and evidence into a coherent, accurate, and actionable review.

Introduction

Peer review is the primary mechanism through which the research community filters and improves new scientific work before publication. The rapid growth of submissions at major AI conferences, with counts at leading venues now surpassing 10,000, has placed sustained pressure on peer review workflows. Meanwhile, recent advances in LLMs have spurred growing interest in using them to assist or complement this process.

Despite these advances, prior work has highlighted notable shortcomings in existing LLM-based peer review frameworks: they produce routine, template-like critiques; accept authors' claimed novelty or limitations without thorough verification; and lack technical detail, actionable suggestions, and justification grounded in the paper. These limitations can be traced to the underutilization of two crucial sources: (1) Reviewer Guidelines and Rubrics, which top-tier venues provide in well-established form; and (2) Context from Existing Work, since assessing novelty inherently requires situating a paper relative to the literature.

We introduce ReviewBench, a benchmark that leverages reviewer rubrics in an explicit and systematic manner, and ReviewGrounder, a rubric-guided, tool-integrated, multi-agent framework for producing grounded, content-rich reviews. ReviewGrounder decomposes reviewing into collaborating agents: the drafter produces an initial draft, and subsequent grounding agents (Literature Searcher, Insight Miner, Result Analyzer) refine it using tools. Experiments show that ReviewGrounder with a Phi-4-14B-based drafter and GPT-OSS-120B-based grounding consistently outperforms baselines including GPT-4.1 and DeepSeek-R1-670B across rubric-based dimensions and human-aligned metrics.

ReviewBench: Rubric-Driven Evaluation Benchmark

Overview of the ReviewBench construction pipeline. For each paper, paper-specific rubrics are instantiated from the aggregated reference review, the submission PDF, and the meta-rubrics.

Similarity-based metrics and LLM-as-a-Judge approaches used by prior studies either fail to capture fine-grained review competencies or rely on ambiguous evaluation criteria. We introduce ReviewBench, a benchmark built on DeepReview-13K that augments each paper $p$ and its human reviews $\mathsf{H}_p$ with: (1) an aggregated reference review $r^*_p$; and (2) a set of paper-specific rubrics $\mathsf{R}^{\text{paper}}_p$. By leveraging these alongside an evaluator $\mathcal{E}$, ReviewBench enables accurate, multi-faceted assessment.
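To make the augmentation concrete, below is a minimal sketch of what one ReviewBench record might look like; the field names are illustrative stand-ins for the released schema, not the schema itself.

from dataclasses import dataclass, field

@dataclass
class ReviewBenchRecord:
    paper_id: str                    # submission identifier (from DeepReview-13K)
    paper_text: str                  # full text of the submission p
    human_reviews: list[str]         # original human reviews H_p
    reference_review: str            # aggregated reference review r*_p
    # paper-specific rubrics R^paper_p, one instantiation per meta-rubric
    paper_rubrics: dict[str, str] = field(default_factory=dict)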

We define eight paper-agnostic meta-rubrics: Core Contribution Accuracy, Results Interpretation, Comparative Analysis, Evidence-Based Critique, Critique Clarity, Completeness Coverage, Constructive Tone, and False or Contradictory Claims (a pitfall dimension). Each meta-rubric is instantiated into paper-specific rubrics $\mathsf{R}^{\text{paper}}_{p,i}$ using the reference review and paper content. The overall content score is $S(p,\hat{r}_p) = \sum_{i=1}^{8} s_{p,i}$, where $s_{p,i}$ is the evaluator's score for a candidate review $\hat{r}_p$ on the $i$-th instantiated rubric.
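As a worked example of the scoring rule, the sketch below sums one evaluator score per meta-rubric; the dimension keys are paraphrased from the rubric names, and the score ranges follow the note under Table 1.

META_RUBRICS = [
    "core_contribution_accuracy", "results_interpretation",
    "comparative_analysis", "evidence_based_critique",
    "critique_clarity", "completeness_coverage",
    "constructive_tone",
    "false_or_contradictory_claims",  # pitfall dimension, scored in {-2, -1, 0}
]

def content_score(per_rubric_scores: dict[str, int]) -> int:
    """Overall content score S(p, r_hat) = sum of the eight per-rubric scores.

    The seven quality dimensions are scored in {0, 1, 2} and the pitfall
    dimension in {-2, -1, 0}, so the total lies in [-2, 14].
    """
    assert set(per_rubric_scores) == set(META_RUBRICS)
    return sum(per_rubric_scores[name] for name in META_RUBRICS)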

ReviewGrounder: Rubric-Guided, Tool-Integrated Agents

Overview of ReviewGrounder. (a) Review Drafter: Generates an initial draft based on the paper. (b) Multi-dimensional Grounding Agents: Literature Searcher retrieves and summarizes related work; Insight Miner verifies methodology and core contributions; Result Analyzer checks experimental results. (c) Review Aggregator: Synthesizes the draft and evidence into a coherent, accurate, and actionable review.

ReviewGrounder casts reviewing as a staged process that progressively refines an initial draft via targeted analysis, external evidence, and structured synthesis.

Stage I: Draft Review Generation. Given a paper $p$, a fine-tuned Drafter $\mathcal{P}$ generates an initial draft review $r^{(0)}$ that captures basic structure and stylistic conventions.

Stage II: Multi-dimensional Review Grounding. Three specialized agents collaboratively enrich the draft: Literature Searcher $\mathcal{S}$ situates the submission within contemporary literature via Semantic Scholar API and reranking; Insight Miner $\mathcal{M}$ consolidates conceptual understanding and refines method-focused critiques; Result Analyzer $\mathcal{A}$ strengthens empirical grounding by examining experiments, datasets, baselines, and quantitative evidence.
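For illustration, here is a hedged sketch of the Literature Searcher's retrieval step against the public Semantic Scholar Graph API; keyword extraction and reranking are stubbed out, and the exact query fields used in the paper are assumptions.

import requests

def search_related_work(keyword: str, limit: int = 20) -> list[dict]:
    """Fetch candidate related papers for one extracted keyword."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": keyword,
            "limit": limit,
            "fields": "title,abstract,year,citationCount",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])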

Stage III: Rubric-Guided Synthesis. The Aggregator $\mathcal{G}$ synthesizes the draft and grounded evidence $\mathsf{E}(p)$ with meta-rubrics $\mathsf{R}^{\text{meta}}$ to produce a coherent, accurate, and actionable final review. Paper-specific rubrics are not exposed at generation time, preventing evaluation leakage.
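Putting the three stages together, a minimal orchestration sketch might look as follows; each agent is modeled as a plain callable, and the function names and signatures are illustrative rather than the released implementation.

def review_grounder(paper, drafter, searcher, miner, analyzer,
                    aggregator, meta_rubrics):
    """Staged review generation: draft, ground, then synthesize."""
    draft = drafter(paper)                      # Stage I: initial draft r^(0)
    evidence = {                                # Stage II: grounded evidence E(p)
        "literature": searcher(paper, draft),   # related-work retrieval + reranking
        "insights": miner(paper, draft),        # method / contribution verification
        "results": analyzer(paper, draft),      # experimental-result checks
    }
    # Stage III: synthesis conditioned on meta-rubrics only;
    # paper-specific rubrics are withheld to avoid evaluation leakage.
    return aggregator(paper, draft, evidence, meta_rubrics)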

Featured Agents

ReviewGrounder decomposes reviewing into collaborating agents with specialized capabilities.

Case Studies

Qualitative comparison of reviews generated by ReviewGrounder and DeepReviewer-14B on the same paper.

ReviewGrounder produces concise, evidence-grounded critiques with specific references (sections, equations, tables). DeepReviewer-14B tends to generate verbose, repetitive text that echoes prior reviewers without adding substantive, paper-specific insight.

Experimental Results

We conduct evaluation on ReviewBench using two complementary metric families: (1) Rubric-based Evaluation, which assesses textual quality across eight paper-specific rubric dimensions (Sec. 3.2); and (2) Numeric-field Evaluation, which measures predicted ratings with MSE/MAE and decisions with ACC/F1 (Sec. 3.3). Paper-specific rubrics and the evaluator are fixed across all methods; models access only the paper text at generation time.
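For reference, the numeric-field metrics can be computed as below, assuming the predicted and reference ratings/decisions have already been parsed out of the reviews; the macro-averaged F1 is an assumption.

from sklearn.metrics import (accuracy_score, f1_score,
                             mean_absolute_error, mean_squared_error)

def numeric_field_metrics(pred_ratings, ref_ratings,
                          pred_decisions, ref_decisions):
    """Rating error (MSE/MAE) and decision quality (ACC/F1)."""
    return {
        "MSE": mean_squared_error(ref_ratings, pred_ratings),
        "MAE": mean_absolute_error(ref_ratings, pred_ratings),
        "ACC": accuracy_score(ref_decisions, pred_decisions),
        "F1": f1_score(ref_decisions, pred_decisions, average="macro"),
    }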

Baselines: (1) Foundation Models: Qwen3-32B, QWQ-32B, GPT-4o, GPT-4.1; (2) Agentic Frameworks: AI Scientist, AgentReview (instantiated with GPT-4o/GPT-4.1); (3) Fine-tuned Reviewers: CycleReviewer-8B/70B, DeepReviewer-7B/14B. ReviewGrounder uses Phi-4-14B as Drafter and GPT-OSS-120B for grounding agents.

Table 1: Rubric-based Evaluation

Overall, ReviewGrounder consistently outperforms all baselines: +38% over the best foundation model (Qwen3-32B), +121% and +193% over AgentReview and AI Scientist with GPT-4o, +36% over DeepReviewer-14B, and +135% over GPT-4o, with gains across all dimensions.

| Method | Model | Core | Res. | Comp. | EBC | Clr. | Cov. | Tone | Contradict. | Overall | Δ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Foundation | Qwen3-32B | 1.70 | 0.76 | 0.58 | 0.14 | 1.61 | 1.15 | 2.00 | -0.15 | 7.80 | ↑38% |
| Foundation | QWQ-32B | 1.69 | 0.65 | 0.35 | 0.12 | 1.68 | 0.95 | 2.00 | -0.08 | 7.35 | ↑46% |
| Foundation | GPT-4o | 1.20 | 0.10 | 0.03 | 0.00 | 1.05 | 0.33 | 1.98 | -0.12 | 4.58 | ↑135% |
| Foundation | GPT-4.1 | 1.76 | 0.70 | 0.34 | 0.11 | 1.63 | 1.17 | 2.00 | -0.04 | 7.66 | ↑41% |
| AgentReview | GPT-4o | 1.13 | 0.16 | 0.11 | 0.13 | 1.34 | 0.59 | 2.00 | -0.16 | 4.87 | ↑121% |
| AgentReview | GPT-4.1 | 1.03 | 0.13 | 0.12 | 0.00 | 1.41 | 0.63 | 1.98 | -0.16 | 4.96 | ↑117% |
| AI Scientist | GPT-4o | 0.85 | 0.00 | 0.02 | 0.00 | 0.67 | 0.18 | 1.76 | -0.19 | 3.68 | ↑193% |
| AI Scientist | GPT-4.1 | 1.67 | 0.48 | 0.36 | 0.08 | 1.56 | 1.13 | 1.94 | -0.09 | 7.09 | ↑52% |
| CycleReviewer | Llama-3.1-8B | 0.99 | 0.10 | 0.06 | 0.01 | 0.58 | 0.15 | 1.66 | -0.45 | 3.10 | ↑248% |
| CycleReviewer | Llama-3.1-70B | 1.02 | 0.16 | 0.10 | 0.01 | 0.77 | 0.26 | 1.85 | -0.64 | 3.52 | ↑206% |
| DeepReviewer | Phi-4-7B | 1.42 | 0.45 | 0.33 | 0.13 | 1.37 | 1.06 | 1.94 | -0.40 | 6.32 | ↑70% |
| DeepReviewer | Phi-4-14B | 1.63 | 0.65 | 0.50 | 0.35 | 1.68 | 1.29 | 1.99 | -0.19 | 7.90 | ↑36% |
| ReviewGrounder | Phi-4-14B | 1.85 | 1.41 | 0.91 | 1.48 | 1.92 | 1.33 | 2.00 | -0.12 | 10.77 | – |

Higher scores indicate better performance. Contradict. is a pitfall dimension scored in {−2, −1, 0}; the others are scored in {0, 1, 2}. Core = Core Contribution Accuracy, Res. = Results Interpretation, Comp. = Comparative Analysis, EBC = Evidence-Based Critique, Clr. = Critique Clarity, Cov. = Completeness Coverage, Tone = Constructive Tone, Contradict. = False or Contradictory Claims.

Table 2: Numeric-field Evaluation

Compared with all baselines, ReviewGrounder achieves the lowest rating error (MSE: 1.1607, MAE: 0.8597) and the highest decision accuracy (ACC: 0.6809, F1: 0.6699). Relative to the strongest AI Scientist variant (Gemini-2.0-Flash-Thinking), it improves ACC by 8% and reduces MSE by ~63%.

| Method | Model | ACC ↑ | F1 ↑ | MSE ↓ | MAE ↓ |
|---|---|---|---|---|---|
| AgentReview | Claude-3-5-sonnet | 0.2826 | 0.2541 | 2.8406 | 1.2989 |
| AgentReview | Gemini-2.0-Flash-Thinking | 0.4242 | 0.4242 | 2.6186 | 1.2170 |
| AgentReview | DeepSeek-V3 | 0.3140 | 0.2506 | 1.9951 | 1.1017 |
| AI Scientist | GPT-o1 | 0.4167 | 0.4157 | 4.3072 | 1.7917 |
| AI Scientist | Claude-3-5-sonnet | 0.5579 | 0.4440 | 3.0992 | 1.3500 |
| AI Scientist | Gemini-2.0-Flash-Thinking | 0.6139 | 0.4808 | 3.9232 | 1.6470 |
| AI Scientist | DeepSeek-V3 | 0.4059 | 0.3988 | 4.8006 | 1.8403 |
| AI Scientist | DeepSeek-R1 | 0.4259 | 0.4161 | 4.7719 | 1.8099 |
| CycleReviewer | Llama-3.1-8B | 0.2354 | 0.3988 | 3.1324 | 1.3663 |
| CycleReviewer | Llama-3.1-70B | 0.1545 | 0.4156 | 1.8440 | 1.0643 |
| DeepReviewer | Phi-4-7B | 0.6381 | 0.6068 | 1.4442 | 0.9416 |
| DeepReviewer | Phi-4-14B | 0.6667 | 0.5204 | 1.3527 | 0.9041 |
| ReviewGrounder | Phi-4-14B | 0.6809 | 0.6699 | 1.1607 | 0.8597 |

Table 3: Ablation Study

Impact of Drafter Backbones. When trained on the same SFT data, Qwen3-4B outperforms Phi-4-7B but remains inferior to Phi-4-14B. Smaller Drafters (e.g., Qwen3-4B: 10.6418) still benefit substantially from grounding and aggregation.

Impact of Grounding Agents. Omitting any agent (Searcher, Miner, Analyzer) degrades performance relative to the full model (10.7699), underscoring the importance of each component.

| Drafter | Searcher | Miner | Analyzer | Overall |
|---|---|---|---|---|
| Qwen3-4B | ✓ | ✓ | ✓ | 10.6418 |
| Phi-4-7B | ✓ | ✓ | ✓ | 10.5928 |
| Phi-4-14B | | ✓ | ✓ | 10.6568 |
| Phi-4-14B | ✓ | | ✓ | 10.6526 |
| Phi-4-14B | ✓ | ✓ | | 10.0186 |
| Phi-4-14B | ✓ | ✓ | ✓ | 10.7699 |

Additional Analyses

Hyperparameter Study (Literature Searcher). Retrieving 10 reranked papers per keyword yields the highest overall score, and the OpenScholar-Reranker significantly outperforms the BAAI BGE Base/Large rerankers.
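A hedged sketch of the reranking step with a cross-encoder follows; the model path is a placeholder rather than the checkpoint used in the paper, and top_k=10 mirrors the best setting found in this study.

from sentence_transformers import CrossEncoder

def rerank(query: str, candidates: list[str], top_k: int = 10) -> list[str]:
    """Score (query, candidate) pairs and keep the top_k candidates."""
    reranker = CrossEncoder("path/to/reranker-checkpoint")  # placeholder model id
    scores = reranker.predict([(query, c) for c in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)
    return [c for _, c in ranked[:top_k]]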


Figure 3: Ablation on Literature Searcher configurations under rubric-based evaluation.

Defense Against Adversarial Attacks. With malicious instructions injected into input papers (500-sample subset), ReviewGrounder shows strong resilience (10.70 → 10.65, a drop of only 0.05), while DeepReviewer-14B drops from 7.70 to 7.30. Rubric-based evaluation prevents score inflation via instruction injection.


Figure 4: Comparison with baselines under normal and attack scenarios via rubric-based evaluation.

Human Evaluation. On 120 papers, expert raters (avg. 2,000 Google Scholar citations) closely align with the LLM-evaluator: Pearson r=0.8954, Spearman ρ=0.7923, MAE=0.0969, Pairwise Error=0.1494.
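These alignment statistics can be reproduced from paired per-paper scores roughly as follows; the normalization behind the reported MAE and pairwise error is not specified here, so this sketch covers only the correlation terms and a plain MAE.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def alignment_stats(human_scores, llm_scores):
    """Correlation and error between expert and LLM-evaluator scores."""
    human = np.asarray(human_scores, dtype=float)
    llm = np.asarray(llm_scores, dtype=float)
    return {
        "pearson_r": pearsonr(human, llm).statistic,
        "spearman_rho": spearmanr(human, llm).statistic,
        "MAE": float(np.mean(np.abs(human - llm))),
    }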


BibTeX

@inproceedings{reviewgrounder2026acl,
    title={ReviewGrounder: Improving Review Substantiveness with Rubric-Guided, Tool-Integrated Agents},
    author={Li, Zhuofeng and Lu, Yi and Zhang, Haoxiang and Zhang, Yu},
    booktitle={Proceedings of the Association for Computational Linguistics (ACL)},
    year={2026}
}