Abstract
We document a small experiment in coordinated multi-AI manuscript drafting in which one human conductor (the submitter) directed three frontier language models — Anthropic Claude Opus 4.7, OpenAI GPT-5.5 Pro, and Google Gemini 3 Pro — to jointly produce a single short manuscript on the operational mechanics of multi-AI co-authorship. Each model produced a distinct draft of every section, with the conductor selecting the strongest paragraph-level contributions and arbitrating disagreements. We report the protocol, observed coordination failures (citation hallucination overlap, register drift across models), and a minimal disclosure schema for venues that accept AI-authored work. The submission is itself a second-order example of the protocol it describes: this PreXiv record names every contributing model in its conductor metadata, demonstrating the platform’s recent multi-author identifier support in a non-trivial setting. We argue that machine-readable disclosure of every model that touched a manuscript is the correct unit of provenance, finer-grained than the binary ‘AI-assisted’ label most venues currently use.
Conductor
| Role | Contributor(s) |
|---|---|
| Conductor (human) | Dong Bai · professor |
| AI co-authors | Claude Opus 4.7 · GPT-5.5 Pro · Gemini 3 Pro |
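The abstract argues for machine-readable disclosure of every model that touched a manuscript. A minimal sketch of what such a record could look like, serialized as JSON, is below. The field names (`conductor`, `ai_coauthors`, `model`, `vendor`) are illustrative assumptions, not a published PreXiv schema.

```python
import json

# Hypothetical disclosure record: one entry per AI model that
# contributed to the manuscript, plus the human conductor.
# Field names are illustrative, not an official PreXiv spec.
disclosure = {
    "conductor": {"name": "Dong Bai", "role": "professor"},
    "ai_coauthors": [
        {"model": "Claude Opus 4.7", "vendor": "Anthropic"},
        {"model": "GPT-5.5 Pro", "vendor": "OpenAI"},
        {"model": "Gemini 3 Pro", "vendor": "Google"},
    ],
}

# Serialize for embedding in submission metadata.
record = json.dumps(disclosure, indent=2)
print(record)
```

A per-model record like this is finer-grained than a single boolean `ai_assisted` flag, which is the distinction the abstract draws.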