
Systematic Review Automation: Elicit vs ASReview LAB

Systematic-review automation now comes in two very different flavours: Elicit’s Systematic Review workflow, a closed-source “AI research assistant” that chains generative models around a fixed review pipeline, and ASReview LAB, an open-source framework that uses active-learning algorithms to prioritise records interactively. Both tools accelerate the classic search → screen → extract → synthesise loop, but they diverge in how much you can customise the AI, how transparent the ranking logic is, and where the time-savings occur. The comparison below drills into each step of the workflow, highlights published performance data, and lays out when you might pick one over the other.

1 | Core Workflow at a Glance

Search / import
  • Elicit: Queries 125 M Semantic Scholar papers plus any PDFs you upload (support.elicit.com)
  • ASReview: Imports RIS/CSV from any source; no built-in search (asreview.readthedocs.io)

Scoping presets
  • Elicit: Fast (50 titles, 10 full texts), Balanced (500/25) or Comprehensive (500/40) (support.elicit.com)
  • ASReview: No hard cap; you screen until the active-learning curve plateaus (ASReview)

Screening logic
  • Elicit: Large-language-model relevance classifier plus rules you edit; a pilot set for fine-tuning (The Elicit Blog)
  • ASReview: Active-learning models (e.g., SVM, neural network) retrain after every label (GitHub)

Efficiency gains
  • Elicit: Screens ≥1,000 papers in the background; you review only the AI-flagged subset (The Elicit Blog)
  • ASReview: 83 % mean workload saved at 95 % recall in a Nature study (Nature)

Data extraction
  • Elicit: AI auto-fills a 20-column evidence table; quote-level links to PDFs (elicit.com)
  • ASReview: Manual or semi-automatic (via plug-ins); JSON export for downstream tools (asreview.readthedocs.io)

Output / report
  • Elicit: One-click research report plus CSV/BibTeX/RIS exports (pro.elicit.com)
  • ASReview: Export of the labelled set; synthesis done in R/Python or with PRISMA templates (SpringerLink)

Licensing / cost
  • Elicit: SaaS; Pro plan includes 200 PDF extractions per month (The Elicit Blog)
  • ASReview: Apache-2.0 open source; free to self-host or extend (GitHub)


2 | Search & Import Stage

Elicit

  • Enter a PICO-style question; the tool retrieves and ranks literature from its 125 million-paper index, then augments the results with any PDFs you drag and drop. (support.elicit.com, The Elicit Blog)

ASReview

  • You bring the corpus (RIS, EndNote XML, etc.). The strength is neutrality: it lets teams merge PubMed, Embase, Scopus and grey-literature exports without vendor lock-in, as sketched below. (asreview.readthedocs.io, ASReview)
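
Because ASReview is search-agnostic, merging and deduplicating database exports happens before import. Here is a minimal sketch of that step in plain pandas; the file names and column labels are assumptions, and real reviews typically combine DOI matching with fuzzier title comparison:

```python
import pandas as pd

# Hypothetical CSV exports from different databases (file and column names are assumptions).
files = ["pubmed_export.csv", "embase_export.csv", "scopus_export.csv"]
merged = pd.concat([pd.read_csv(path) for path in files], ignore_index=True)

# Normalise the fields used for deduplication.
merged["doi"] = merged["doi"].str.lower().str.strip()
merged["title_key"] = (
    merged["title"].str.lower().str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
)

# Drop exact DOI duplicates (only where a DOI exists), then fall back to normalised titles.
has_doi = merged["doi"].notna()
deduped = pd.concat([merged[has_doi].drop_duplicates(subset="doi"), merged[~has_doi]])
deduped = deduped.drop_duplicates(subset="title_key").drop(columns="title_key")

# ASReview accepts plain CSV with title/abstract columns, so this file can be imported directly.
deduped.to_csv("combined_for_asreview.csv", index=False)
```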

3 | Title & Abstract Screening

  • Elicit pre-runs a transformer classifier and shows a pilot set so you can correct edge cases before committing to a larger batch. (The Elicit Blog)

  • ASReview starts with a handful of user-labelled records; after each decision, the active learner retrains and surfaces the next most informative paper, rapidly homing in on inclusions (a toy version of this loop is sketched below). In published validation, reviewers found 95 % of relevant studies after screening just 8–33 % of titles (an 83 % mean labour saving). (Nature)
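
ASReview's real engine lives in its own Python package, but the core idea (retrain after every label, then surface the most promising unlabelled record) fits in a few lines of generic scikit-learn. Everything below, from the dataset and column names to the certainty-based query rule, is an illustrative assumption rather than ASReview's actual implementation:

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical screening file: one row per record, with title/abstract text.
records = pd.read_csv("combined_for_asreview.csv")
texts = records["title"].fillna("") + " " + records["abstract"].fillna("")
X = TfidfVectorizer(max_features=20_000, stop_words="english").fit_transform(texts)

labels = np.full(len(records), np.nan)   # screening decisions: 1 = include, 0 = exclude
labels[:5] = [1, 1, 0, 0, 0]             # "prior knowledge": a few seed labels

def ask_human(idx: int) -> int:
    """Stand-in for the screening UI; replace with a real relevance judgement."""
    return int(input(f"Include record {idx}? (1/0): "))

for _ in range(50):                      # interactively screen 50 more records
    known = ~np.isnan(labels)
    clf = LogisticRegression(max_iter=1000).fit(X[known], labels[known])
    # Certainty-based query: among unlabelled records, show the one the model
    # currently rates as most likely to be relevant.
    probs = clf.predict_proba(X)[:, 1]
    probs[known] = -1.0                  # never re-show already-labelled records
    next_idx = int(np.argmax(probs))
    labels[next_idx] = ask_human(next_idx)
```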

4 | Full-Text Extraction

  • Elicit auto-populates up to 20 customisable columns (sample size, effect size, design, etc.) and hyperlinks every cell to the quote in the PDF, giving an instant audit trail. (elicit.com)

  • ASReview focuses on screening; extraction is out of scope but can be added via its Python API or community plug-ins, which is ideal if you want full control or need niche data fields (a post-screening sketch follows below). (asreview.readthedocs.io)
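
In practice the hand-off looks like this: export the labelled records from ASReview, keep the inclusions, and scaffold your own extraction table. The export file name and column names below are assumptions for illustration; the exact export format depends on the ASReview version and exporter you use:

```python
import pandas as pd

# Hypothetical export of screening decisions (1 = include, 0 = exclude).
screened = pd.read_csv("asreview_export.csv")
included = screened[screened["included"] == 1].copy()

# Scaffold the evidence table with whatever extraction fields your protocol specifies;
# these columns are examples, not a fixed schema.
for field in ["population", "intervention", "comparator", "outcome",
              "sample_size", "effect_size", "risk_of_bias"]:
    included[field] = pd.NA

included.to_csv("evidence_table_template.csv", index=False)
```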

5 | Algorithms & Transparency

Model type
  • Elicit: Proprietary LLM classifier plus generative explanations (The Elicit Blog)
  • ASReview: Pluggable ML (logistic regression, SVM, naive Bayes, CNN, BERT, etc.) (GitHub)

Explainability
  • Elicit: Provides the extracted sentence but not model weights or features
  • ASReview: Full access to model choices, hyper-parameters and logs

Extensibility
  • Elicit: Limited to the options exposed in the UI
  • ASReview: Write plug-ins, swap embeddings or models (sketched below), run on a GPU cluster
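
To make "pluggable" concrete: in the generic active-learning sketch from section 3, swapping the classifier is a one-line change, since any estimator exposing fit() and predict_proba() slots into the same retrain-and-rank loop. This is a scikit-learn illustration, not ASReview's plug-in API:

```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

# Drop-in alternatives to the LogisticRegression used in the earlier sketch.
classifiers = {
    "naive_bayes": MultinomialNB(),
    "svm": SVC(probability=True),   # probability=True enables predict_proba
}
clf_choice = classifiers["naive_bayes"]
```

In ASReview LAB itself the equivalent choice is exposed through the project's model settings rather than written by hand.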


6 | Performance & Accuracy Evidence

  • Elicit has not yet published peer-reviewed benchmarks, but its support documentation promises background processing of more than 1,000 PDFs and pilot testing to mitigate false negatives. (support.elicit.com)

  • ASReview is validated in multiple peer-reviewed studies: a 2021 Nature Machine Intelligence article reports 67–92 % workload reduction at 95 % recall, and an education-psychology simulation study confirms similar gains. (Nature, SpringerLink)

  • Independent ergonomics studies show semi-automated tools like ASReview can “halve the screening workload while achieving high recall levels of 95 % and above”; a sketch of how these recall and workload figures are computed follows below. (PsychArchives)
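
The headline numbers in these studies are typically recall (the share of truly relevant records found) and a work-saved measure such as WSS@95 (work saved over sampling at 95 % recall): roughly, the fraction of records you never had to screen compared with reading everything, minus the 5 % recall you gave up. A minimal sketch, assuming you know the true labels in the order the tool surfaced the records (as in a completed or simulated review):

```python
import numpy as np

def recall_curve(labels_in_screening_order):
    """Cumulative recall after screening the first k records (k = 1..N)."""
    labels = np.asarray(labels_in_screening_order, dtype=float)
    return np.cumsum(labels) / labels.sum()

def wss_at(labels_in_screening_order, target_recall=0.95):
    """Work saved over sampling: fraction of records left unscreened once the
    target recall is reached, minus the recall sacrificed (1 - target).
    Assumes the target recall is actually reached within the list."""
    recall = recall_curve(labels_in_screening_order)
    n = len(recall)
    k = int(np.argmax(recall >= target_recall)) + 1  # records screened to hit the target
    return (n - k) / n - (1.0 - target_recall)

# Toy example: 1 = relevant, 0 = irrelevant, in the order the tool surfaced them.
order = [1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(f"WSS@95 = {wss_at(order):.2f}")   # 0.65: 14 of 20 records never screened, minus 0.05
```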

7 | Collaboration, Reproducibility & Compliance

  • Elicit stores projects in the cloud; shared links let co-authors view decisions, but they cannot fork the underlying model.

  • ASReview supports multiple screeners, crowd screening, and Git-versioned project files, aligning better with open-science practices and PRISMA 2020 reporting. (ASReview)

8 | Strengths & Limitations

Elicit
  • Strengths: Fast “push-button” pipeline; quote-level traceability; no coding required
  • Limitations: Closed source; fixed extraction fields; pay-per-PDF quota

ASReview
  • Strengths: Open and customisable; peer-reviewed accuracy; massive workload cuts
  • Limitations: No built-in search or data extraction; more manual setup


9 | Choosing Between Them

  • Pick Elicit if you need a turn-key rapid review with built-in extraction tables and are comfortable with a commercial SaaS.

  • Choose ASReview when you want full transparency, custom ML models, or a screening engine you can embed in an academic workflow, and you are prepared to handle search and data extraction outside the tool.

Both can dramatically shorten the road to a trustworthy systematic review; the trade-off is between managed convenience (Elicit) and open, reproducible control (ASReview).
