Methodology
Every entry on this site is produced through a rigorous seven-step pipeline (Steps 0–6), three of which are 100% human. AI never has the last word — it drafts, humans decide. Every numerical claim, every date, every scientific mechanism is checked against independent sources.
Step 0 — Episode breakdown
Our production unit is the anime episode, cross-referenced with its corresponding manga chapters. For each episode, we watch with a notepad in hand: precise MM:SS timecodes for the start and end of every scene that mentions science, the corresponding manga chapters read in the Pika FR reference edition, and exact page numbers logged.
This precision enables (1) fast fact-checking, (2) deeplinks into Crunchyroll/ADN/Netflix with timestamps, (3) long-tail SEO ("Dr Stone S1E8 minute 5 soap"), (4) a strong reliability signal for LLMs.
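The log described above can be sketched as a small data structure. This is an illustrative sketch, not the site's actual schema: the field names (`SceneLog`, `toSeconds`, etc.) are assumptions. The `toSeconds` helper shows how an MM:SS timecode becomes the seconds value a streaming-platform deeplink would need.

```typescript
// Hypothetical shape of one scene log entry (all field names are illustrative).
interface SceneLog {
  episode: string;      // e.g. "S1E8"
  start: string;        // "MM:SS" timecode where the science scene begins
  end: string;          // "MM:SS" timecode where it ends
  topic: string;        // e.g. "soap"
  mangaChapter: number; // chapter in the Pika FR reference edition
  page: number;         // exact page number
}

// Convert an "MM:SS" timecode to seconds, e.g. for a timestamped player deeplink.
function toSeconds(timecode: string): number {
  const [mm, ss] = timecode.split(":").map(Number);
  return mm * 60 + ss;
}

const scene: SceneLog = {
  episode: "S1E8",
  start: "05:12",
  end: "07:40",
  topic: "soap",
  mangaChapter: 13,
  page: 42,
};

console.log(toSeconds(scene.start)); // 312
```

Keeping timecodes as strings in the log but converting at link-generation time keeps the notepad format human-friendly while staying machine-usable.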
Step 1 — Scoping each entry
For every identified subject, we fill a mini-brief: type (invention / technique / phenomenon / concept / substance / history), exact appearances, editorial angle, pitfalls, questions the reader must be able to answer after reading, minimum sources to consult, target SEO keywords FR + EN.
Step 2 — AI draft in French
The brief becomes the prompt for a frontier model API (Claude Sonnet 4.6 / 4.7 or equivalent). The model produces a structured draft that explicitly marks with [TO VERIFY] any numerical or precise historical claim it is unsure of, and lists potential sources. Indicative cost: €0.50–2 per entry.
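The step from brief to prompt can be sketched as a pure function. The site's actual prompt is not published, so everything here is an assumption: the `Brief` fields mirror the mini-brief from Step 1, and the instructions simply restate the [TO VERIFY] convention described above.

```typescript
// Illustrative only — the real production prompt is not published.
interface Brief {
  type: string;          // invention / technique / phenomenon / concept / substance / history
  appearances: string[]; // episode timecodes and manga pages logged in Step 0
  angle: string;         // editorial angle from Step 1
  keywords: string[];    // target SEO keywords FR + EN
}

// Build the Step 2 draft prompt from a Step 1 brief.
function buildDraftPrompt(brief: Brief): string {
  return [
    `Write a structured draft in French about a ${brief.type} from Dr. Stone.`,
    `Appearances: ${brief.appearances.join("; ")}.`,
    `Editorial angle: ${brief.angle}.`,
    `Target keywords: ${brief.keywords.join(", ")}.`,
    `Mark every numerical or precise historical claim you are unsure of with [TO VERIFY].`,
    `List potential sources at the end of the draft.`,
  ].join("\n");
}
```

Keeping prompt construction in code (rather than hand-typing it per entry) makes the [TO VERIFY] instruction impossible to forget and lets the quality loop of the final section adjust one function instead of dozens of ad-hoc prompts.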
Step 3 — Human fact-check (non-negotiable)
This is the step that protects the site's credibility. All [TO VERIFY] markers are resolved. Numbers and dates are confirmed against ≥ 2 independent sources. The scientific mechanism is validated against a source at or above the level of English Wikipedia, or a reference course (MIT OCW, Khan Academy, peer-reviewed papers). Manga and anime references are reconfirmed.
Step 4 — Editorial work and original diagrams
Every entry receives at least one original diagram, hand-drawn digitally (Excalidraw or Figma). This is what creates E-E-A-T value for Google, differentiation from Wikipedia, legal safety (no panel reproduced), and the site's visual identity.
Editorial work also adds: a voice in the introduction, at least one strong historical anecdote, internal links to ≥ 2 related entries, SEO refinement.
Step 5 — English translation by native reviewer
Translation is pre-drafted via DeepL Pro or a frontier model, then reviewed by a native English speaker. A raw AI translation, unreviewed, is treated as low-quality content by Google US/UK and would trigger penalties.
English SEO is handled separately — the target EN keyword is not a literal translation of the FR keyword ("savon dr stone" → "dr stone soap chemistry" or "how does senku make soap").
Step 6 — Publishing FR + EN
Pre-publication checklist: Zod-valid frontmatter, optimized AVIF/WebP main image < 200 kB, image_alt filled (a11y), no official visuals, accessible sources, Lighthouse 95+ on both versions, episode and chapter pages re-rendered with the new entry listed. One GitHub PR per entry, with a 24-hour cold review before merge.
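The "Zod-valid frontmatter" item of the checklist can be sketched as follows. The actual schema is not published, so the field names below are assumptions inferred from this page (type taxonomy from Step 1, `image_alt` and source requirements from the checklist itself):

```typescript
import { z } from "zod";

// Illustrative sketch of an entry's frontmatter schema; field names are assumptions.
const entrySchema = z.object({
  title: z.string().min(1),
  type: z.enum([
    "invention", "technique", "phenomenon", "concept", "substance", "history",
  ]),
  episode: z.string().regex(/^S\d+E\d+$/), // e.g. "S1E8"
  mangaChapters: z.array(z.number().int().positive()).min(1),
  image: z.string(),                        // AVIF/WebP; the < 200 kB budget is checked separately
  image_alt: z.string().min(1),             // a11y: alt text is required, never empty
  sources: z.array(z.string().url()).min(2), // ≥ 2 independent, accessible sources
  lang: z.enum(["fr", "en"]),
});

// Wired into the build (e.g. an Astro content collection — an assumption about the
// stack), an invalid frontmatter fails the build, so a GitHub PR with a malformed
// entry cannot be merged.
```

Enforcing the checklist in the schema rather than in a human checklist alone means the 24-hour cold review can focus on substance, not on spotting a missing `image_alt`.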
Continuous quality loop
A quality audit happens every 10 published entries: we reopen two randomly selected entries cold and rate accuracy, internal linking, SEO, and EN translation quality. The prompt and the fact-check checklist are adjusted accordingly. User reports also feed this loop.
Going further
Our manifesto · How we sustain ourselves · Who writes · Legal notices