Most authors marketing their own books have a problem that no commercial tool solves. The work requires posting intelligently across multiple platforms, in one voice, every day, without sounding like marketing. Enterprise tools assume a publisher's budget. Generic schedulers know nothing about books, ranks, or reader communities. The result is an industry of independent authors who either burn out posting manually or pay for services that flatten their voice into hospitality copy.
Booklite is the open-source Lite release of an internal Agentic689 system built to solve this. A local Node.js server runs on the author's machine. A single HTML dashboard runs in the browser. The Anthropic Claude API runs the strategic brain that writes copy, plans cadence, and reads what is and is not converting. Six channels live inside one workflow: Reddit, X, Facebook Pages, Instagram, Goodreads search, and KDP rank.
The Lite release is scoped to one book, one author voice, and one operator. It is the public reference implementation. The production system is the same architecture extended to many personas, many books in flight, autonomous cadence with role-based approval gates, and in-app creative asset generation through additional image, audio, and video APIs.
Self-published and small-press authors live inside a structural gap. The data they need to operate is available through official APIs. Reddit, X, Facebook, Instagram, and Amazon collectively expose enough surface area to run a complete post, listen, attribute, and refine loop. The intelligence to operate that surface area, in one voice, without burning out, is what the market has not built.
Enterprise marketing platforms assume publisher-scale budgets and treat books as one more SKU. Generic schedulers like Buffer or Hootsuite have no concept of a Reddit community, a Goodreads star rating, a contrarian X hook, or a KDP rank movement. Author services agencies cost more than most titles will ever earn back. The default for most authors is a manual rotation of half-attempted posts written at midnight, with no memory of which angles converted and which were dead on arrival.
The problem is not volume. The problem is that recurring author marketing lives at the intersection of tight cadence, specific community norms, hard voice rules, a single human's bandwidth, and the need for genuine contribution rather than promotional noise.
Booklite was built to solve both problems for one author at a time. The production system solves them for an imprint.
Booklite treats book marketing not as a content calendar with a chatbot bolted on, but as a structured production system inside the author's own workflow. The system rests on six design principles.
- **Context before generation.** The model works from a curated book brief, voice rules, target reader profile, competitor list, what is working, and what has been retired before it writes a single line.
- **Voice as data.** A voice is not a vibe. It is a list of forbidden phrases, structural rules, channel-specific registers, and concrete examples of what good looks like.
- **Authenticity by default.** The majority of posts the system generates contain zero product mentions. They are genuine community contributions. The minority are product-adjacent. The default split is seventy-thirty.
- **Local credential custody.** The browser never sees a credential. Every key lives in a .env file on the author's machine. Every third-party call is proxied server-side by a process the author owns.
- **Human-gated publishing.** The system drafts, scores, and routes. It does not publish. Every post passes through an approval queue. Human judgment stays explicit and mandatory.
- **Operating memory.** Approvals, rejections, edits, and conversion outcomes become operating memory. Dead angles retire. Live angles get more reps. The brief gets sharper.
An important distinction: no foundation-model weights are retrained at any point. The learning happens at the workflow and context level: better source material, better voice rules, better attribution data, and better routing of operator decisions.
Booklite uses a modular architecture that runs end to end on the author's local machine. Each layer has a defined responsibility, and the stack runs in one process serving one HTML file.
A single HTML file served from localhost on port 3100. Vanilla JavaScript, no build step, no framework. The dashboard talks only to its own backend. Nothing about the browser layer assumes a deployment target.
Claude Sonnet 4 carries the full brain context on every call. Voice rules, target reader, competitor list, channel platform rules, and what works are passed as the system prompt. The brain drafts, proposes angles, and reads attribution.
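The "full context on every call" pattern can be sketched as a pure function that assembles one brain request. Field names follow the public Anthropic Messages API (`model`, `max_tokens`, `system`, `messages`); `buildBrainRequest`, the context shape, and the model id are illustrative, not Booklite's actual code.

```javascript
// Sketch: every brain query carries the full book context as the system
// prompt. The brainCtx shape and function name are assumptions.
function buildBrainRequest(brainCtx, task) {
  const system = [
    `BOOK: ${brainCtx.title} by ${brainCtx.authors.join(", ")}`,
    `TARGET READER: ${brainCtx.targetReader}`,
    `VOICE RULES: ${brainCtx.voiceRules.join("; ")}`,
    `FORBIDDEN PHRASES: ${brainCtx.forbidden.join(", ")}`,
    `WHAT WORKS: ${brainCtx.whatWorks.join("; ")}`,
  ].join("\n");

  return {
    model: "claude-sonnet-4-20250514", // placeholder model id
    max_tokens: 1024,
    system,                            // full brain context, every call
    messages: [{ role: "user", content: task }],
  };
}
```

The point of the sketch is that the request body is derived from durable context, never typed fresh by the operator.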
Six channels wired through official APIs. Authentication, signing, and rate-limit awareness live in this layer. Goodreads write is deprecated and the system reflects that. Instagram requires an image and the system enforces it.
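The per-channel constraints described above can be sketched as a small rule table plus a validator. The rule values mirror the constraints the text names (Goodreads write deprecated, Instagram requires an image); the function and field names are illustrative.

```javascript
// Sketch of per-channel gating; names and limits are assumptions except
// where the text states them (Goodreads read-only, Instagram needs an image).
const CHANNEL_RULES = {
  reddit:    { write: true,  requiresImage: false },
  x:         { write: true,  requiresImage: false, maxChars: 280 },
  facebook:  { write: true,  requiresImage: false },
  instagram: { write: true,  requiresImage: true },
  goodreads: { write: false }, // write API deprecated: search/read only
  kdp:       { write: false }, // rank tracking only
};

function validateForChannel(post, channel) {
  const rules = CHANNEL_RULES[channel];
  if (!rules || !rules.write) return { ok: false, reason: `${channel} is read-only` };
  if (rules.requiresImage && !post.imageUrl)
    return { ok: false, reason: "instagram post needs an image" };
  if (rules.maxChars && post.text.length > rules.maxChars)
    return { ok: false, reason: "over character limit" };
  return { ok: true };
}
```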
Every credential lives in a single .env file on disk. The repository ships .env.example with empty values. The server boots, reads process.env, and never hands keys to the browser. Thirty-nine env references, zero hardcoded credentials.
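One way the "never hands keys to the browser" rule can be enforced is a whitelist: the server builds the client-visible config from an explicit list of safe keys, so a secret can never leak by omission. The key names and function are illustrative, not Booklite's actual code.

```javascript
// Sketch: only whitelisted, secret-free values ever reach the dashboard.
// Key names here are assumptions.
const CLIENT_SAFE_KEYS = ["BOOK_TITLE", "ACTIVE_CHANNELS", "AUTH_RATIO"];

function buildClientConfig(env) {
  const out = {};
  for (const key of CLIENT_SAFE_KEYS) {
    if (key in env) out[key] = env[key];
  }
  return out; // never contains API secrets, whatever else is in env
}
```

In practice the server would call this with `process.env` when serving the dashboard's config endpoint.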
Drafted posts land in a queue. The operator can approve, reject with feedback, edit, or rewrite. Rejection reasons feed back into brain context for the next cycle. Nothing reaches a channel without a human action.
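The queue can be pictured as a small state machine in which publishing is reachable only through an explicit human action. The state names and shape below are a sketch, not Booklite's actual schema.

```javascript
// Sketch of the approval-queue state machine; states are assumptions.
const TRANSITIONS = {
  drafted:  ["approved", "rejected", "edited"],
  edited:   ["approved", "rejected"],
  approved: ["posted"],
  rejected: [], // terminal; rejection feedback flows back into brain context
  posted:   [],
};

function advance(post, nextState, feedback) {
  if (!TRANSITIONS[post.state].includes(nextState)) {
    throw new Error(`illegal transition ${post.state} -> ${nextState}`);
  }
  return { ...post, state: nextState, feedback: feedback ?? post.feedback };
}
```

Note that there is no path from `drafted` to `posted`: every published post has passed through `approved` first.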
Engagement reads flow back into the brain context. Angles get scored. Dead patterns retire. Live ones get more reps. The brief becomes sharper over weeks of use without anyone editing a prompt by hand.
A draft enters from the left and reaches a channel only after passing every gate in the system. The attribution loop closes back into Brain Context, where the next draft begins with stronger signal than the last.
Booklite treats every channel as a parallel publishing surface with its own voice rules, rate limits, and authenticity expectations. The brain holds a separate voice module per channel inside one unified context. A long-form Reddit post and a contrarian X line are generated by the same model, against the same book brief, but they read like they were written by two different people who happen to share an opinion about a book.
The system receives a configured book brief (title, subtitle, authors, target reader, differentiator, competitors, ASIN), an authenticity ratio, and a list of active platform targets. From that point on, every brain query carries the full context as a system prompt.
The voice layer is operational data, not a tone description. The brain is instructed to refuse direct pitch openers, ban a specific list of forbidden phrases (life-changing, game-changer, must-read, highly recommend), require one specific detail per post (week, chapter, what was hard), and limit title mentions to once per post maximum.
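Because the voice layer is data, it can be audited mechanically. A minimal sketch of such an audit, using the forbidden phrases and the once-per-post title rule named above (the function name and return shape are illustrative):

```javascript
// Sketch: check a draft against the voice rules treated as data.
const FORBIDDEN = ["life-changing", "game-changer", "must-read", "highly recommend"];

function auditDraft(text, title) {
  const lower = text.toLowerCase();
  const violations = FORBIDDEN.filter((p) => lower.includes(p));
  // The title may appear at most once per post.
  const titleMentions = lower.split(title.toLowerCase()).length - 1;
  if (titleMentions > 1) violations.push("title mentioned more than once");
  return { clean: violations.length === 0, violations };
}
```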
The defining design rule. The majority of posts contain zero product mentions. They are genuine community contributions on the book's topic, written in the author's voice, with no CTA and no buy link. The minority of posts are product-adjacent, and even those lead with experience or observation. The default ratio is seventy-thirty, configurable to eighty-twenty (conservative) or sixty-forty (aggressive).
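A minimal sketch of how the ratio could translate into a batch plan; `planBatch` and the `Math.round` split are illustrative, not Booklite's actual allocator.

```javascript
// Sketch: split a batch by the authenticity ratio (default 70/30 in
// favor of pure-value posts). Function name is an assumption.
function planBatch(batchSize, valueShare = 0.7) {
  const pureValue = Math.round(batchSize * valueShare);
  return { pureValue, productAdjacent: batchSize - pureValue };
}
```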
One brief produces a multi-channel draft set in twenty to thirty seconds. Approval queue review is the human-time bottleneck, typically three to six minutes for an eight-post batch. Post execution is sub-second per channel once approved. The brain reads engagement on the next cycle and adjusts.
Every brain call carries a single system prompt that holds the full book context. This is not a prompt template the operator edits before each call. It is durable operating context that the system updates as the campaign runs. Approvals, rejections, conversion signals, and dead-angle observations all write back into it.
The active brain context covers the book's title and authors, the differentiator (what makes it different from every other book in the category), the target reader described in specific demographic and psychographic terms, named competitors the brain must avoid sounding like, active platforms and communities, the current approval mode, the live what works list with attribution evidence, the what is dead retirement list, the voice rules, the authenticity ratio, and the current campaign phase.
A prompt template is rewritten by the operator on every call. A brain context is read by the system on every call, and rewritten by the system itself between calls. The operator does not retype voice rules. The system does not forget what worked yesterday. This is the lightest possible version of an idea the production system extends much further.
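The "rewritten by the system itself between calls" step can be sketched as a merge of one cycle's outcomes into the durable context. The shapes and field names (`whatWorks`, `whatIsDead`, `converted`, `silent`) are assumptions for illustration.

```javascript
// Sketch: fold a cycle's attribution results back into brain context.
function updateBrainContext(ctx, cycle) {
  const whatWorks = [...ctx.whatWorks];
  const whatIsDead = [...ctx.whatIsDead];
  for (const result of cycle) {
    if (result.converted && !whatWorks.includes(result.angle)) {
      whatWorks.push(result.angle);
    }
    if (result.silent && !whatIsDead.includes(result.angle)) {
      whatIsDead.push(result.angle);
    }
  }
  // A retired angle never stays on the live list.
  return {
    ...ctx,
    whatWorks: whatWorks.filter((a) => !whatIsDead.includes(a)),
    whatIsDead,
  };
}
```

The operator never edits this object by hand; the system reads it on every call and rewrites it between calls.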
A single brain query can produce platform-specific copy across all six channels, a strategy critique of the current angle mix, an attribution read of the last cycle's posts, a proposed new angle to test against retired ones, a forbidden-phrase audit of any draft the operator pasted in for review, and a community suggestion based on what the brief has not yet reached.
Most authors stop posting after four to six weeks because the cognitive overhead of remembering their own voice rules, tracking which angles have been used, and translating their book into platform-specific copy is more than the marginal sale is worth. The brain context absorbs that overhead. The author stays in the loop on decisions and out of the loop on translation.
The most important architectural decision in Booklite is the separation of drafting from approval. The brain that writes the post is encouraged to explore. The gate that approves it is encouraged to interrogate. The operator sees only what survives the second pass.
The first pass expands. The second pass disciplines. The approval queue receives something closer to a real reviewable draft rather than raw model output. The operator's time is spent on judgment, not janitorial work.
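The expand-then-discipline pattern can be sketched as two sequential passes over a candidate set. In the real flow both passes would be brain queries; here they are stubbed as plain functions, and all names are illustrative.

```javascript
// Sketch: pass 1 explores, pass 2 interrogates, and only survivors
// reach the approval queue. draftFn/critiqueFn stand in for model calls.
function draftForQueue(brief, draftFn, critiqueFn) {
  const candidates = draftFn(brief);            // pass 1: expand
  const survivors = [];
  for (const draft of candidates) {
    const verdict = critiqueFn(draft, brief);   // pass 2: discipline
    if (verdict.pass) survivors.push({ ...draft, notes: verdict.notes });
  }
  return survivors; // the operator sees only what survives the second pass
}
```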
The phrase recursive learning can sound grandiose. In Booklite it means something precise. The system keeps a record of which angles drove engagement, which drove unit sales correlated to KDP rank movement, and which produced silence. The brain reads that record on every subsequent draft.
This is not passive accumulation. It is curated improvement. The brain does not record everything. It records the signals that change future behavior.
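A minimal sketch of that record in action: score an angle from the three signals the text names (rank-correlated sales, engagement, silence) and decide whether it retires. The weights and names are illustrative, not Booklite's actual scoring.

```javascript
// Sketch: score an angle's history; weights are assumptions.
function scoreAngle(posts) {
  let score = 0;
  for (const p of posts) {
    if (p.rankMoved) score += 3;    // strongest signal: rank-correlated sales
    else if (p.engaged) score += 1; // engagement without rank movement
    else score -= 1;                // silence counts against the angle
  }
  return { score, verdict: score < 0 ? "retire" : "keep" };
}
```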
Over time, the system becomes more useful because the author becomes more explicit about what they actually believe a good post looks like for their book.
Booklite is the public Lite release of an internal Agentic689 production system. The Lite release is the reference implementation. The production system is what runs underneath it at scale. The table below documents the delta honestly.
| Capability | Booklite v1.0 · Lite (Public) | Agentic689 Production (Internal) |
|---|---|---|
| Authors / personas | One voice, one persona | Many personas, separate voice models |
| Books in flight | One title | Whole catalog, cross-book attribution |
| Creative assets | Text only, manual cover art | Image, audio, and video generation in-app via additional APIs |
| Cadence | Operator-driven, manual approval per post | Autonomous schedule with role-based approval gates |
| Approval queue | Single user, single inbox | Role permissions, audit trail, multi-reviewer routing |
| Attribution | Per-post, per-channel | Cross-book, cross-persona, cohort analytics |
| Voice library | One BRAIN_CTX string | Versioned, per-persona, A/B tested |
| Storage | Filesystem and CSV | Postgres with full history and replay |
| Deployment | Local Node process on the author's machine | Hosted multi-tenant with SSO and audit logs |
| License | MIT, open source | Proprietary, licensed access |
If you are a publisher, imprint, or author-services agency considering the production system, the right next step is a conversation, not a download. The Lite release is the proof of concept and the architecture sample. The production system is what scales it.
Booklite is built around the conviction that the most valuable decisions in author marketing remain human. The system can draft, score, route, and remember. It cannot care about a book.
Beyond the system numbers, Booklite produces a multi-channel draft set in twenty to thirty seconds, processes an eight-post approval queue in three to six minutes of operator time, and posts at sub-second latency per channel once approved. Brain context updates run between cycles, not during. The author opens the dashboard, reviews, approves, and closes the tab. The architecture stays out of the way.
- **Multi-persona voice.** Move from one voice to many: distinct author and reviewer personas, each with its own voice model, channel preferences, and authenticity ratio. The production system already runs this.
- **Catalog mode.** Multiple books in flight simultaneously, with cross-book attribution and shared learning. A boost on one title teaches the system something about the others.
- **In-app creative assets.** Image, audio, and video APIs wired into the same approval queue. Cover variants, quote cards, short-form video scripts, and ad creative generated in-app rather than handed to a separate tool.
- **Autonomous cadence.** Move from operator-driven to scheduled: posts queued by the brain, gated by role-based approval rules, and shipped on a cadence the author tunes once and revisits monthly.
- **Attribution rigor.** Stronger statistical separation between correlation and causation in the angle-scoring layer. Smaller sample sizes get weaker weight. Repeated wins get stronger weight.
- **Goodreads bridge.** The Goodreads write API is unlikely to return. A browser extension that pastes brain-generated review copy into the active Goodreads tab is a viable bridge.
- **Team workflows.** Author and assistant inboxes, role permissions, an audit trail, and the ability for the author to ship a brain-trained voice to a junior team member without giving away the keys to their reputation.
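The attribution-rigor idea above, weaker weight for smaller samples, could be sketched as shrinkage toward a campaign-wide baseline: an angle's observed win rate is pulled toward the baseline, and the pull fades as trials accumulate. The pseudo-count `k` and the function are illustrative.

```javascript
// Sketch: shrink an angle's win rate toward a baseline; small samples
// stay near the baseline, repeated wins pull away from it.
function weightedWinRate(wins, trials, baseline = 0.1, k = 5) {
  return (wins + k * baseline) / (trials + k);
}
```

Under this scheme a 2-for-2 angle scores well below a 20-for-20 angle, even though both have a perfect raw win rate.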
Booklite was not built to prove that AI can write a Reddit post. That proof is trivial and no longer interesting.
It was built to answer a harder question: can a system market a book credibly, every day, in the author's own voice, without burning the author out?
The answer in the Lite release is conditional. Yes, provided the system is grounded in real book context, treats voice rules as data, drafts per channel, runs a critique pass before human review, keeps a human gate before publish, and closes an attribution loop that turns approved posts into operating memory.
The production system extends that conditional yes into something more useful. Many personas. Many books. Creative asset generation across image, audio, and video. Autonomous cadence with role-based approval. The same architecture, scaled.
This is not content automation. It is marketing operations design for people whose name is on the book.
And for publishers, imprints, and author-services agencies facing a permanent demand for quality at speed, that distinction matters.