LLMgram is a curation layer. It doesn't produce news — it ingests public AI content from ~100 sources, runs it through a language model to extract structure, and ranks by a per-item signal score. Everything on this page explains exactly how that happens, because if you can't see how a score is made you shouldn't trust it.
Data sources
Every surface on llmgram pulls from a specific, named upstream. Nothing is manually curated in the sense of "we picked what we like" — everything comes from a feed, an API, or a scraper that runs on a fixed schedule.
| Surface | Upstream | Method | Cadence |
|---|---|---|---|
| AI Signal | 106 RSS feeds (labs, blogs, research) | RSS fetch + Grok analysis | Every 2 h |
| Git Signal | GitHub API (curated repo list) | Repo metadata + README + Grok | Every 2 h |
| AI Papers | OpenAlex + SSRN | API query on tracked authors + Grok | Every 2 h |
| LLM Architectures | Sebastian Raschka gallery + handwritten notes | Manual + checker cron | Weekly |
| Hermes Live | @Teknium Twitter + GitHub PR watch | Scrape + digest | Hourly |
| Claude Code Live | Top CC voices Twitter | Scrape + digest | Hourly |
| Company Radars | Per-lab Twitter + public profiles | Scrape per lab | Varies |
| Academy | Hao Hoang — Top 50 LLM Interview Questions | Static, with permission | On update |
What "Grok-scored" means
Every ingested article, repo, and paper passes through Grok (xAI's language model, via API). Grok is asked a structured prompt that extracts four things from each item:
- Category — one of ~20 buckets (model release, paper, framework, infra, agent, safety, etc.). Lets users filter.
- Themes — up to 5 free-form keywords capturing what the item is about. Drives the search index.
- Audience — researcher / practitioner / both. Not every paper matters to every reader.
- Signal score — a number from 0 to 1 representing "how much should a reader care about this item right now".
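The four fields can be checked as a typed record before an item enters the index. A sketch under stated assumptions (the class name, validation rules, and category subset are illustrative, not the actual pipeline code):

```python
from dataclasses import dataclass

# Illustrative subset of the ~20 category buckets.
CATEGORIES = {"model release", "paper", "framework", "infra", "agent", "safety"}

@dataclass
class Extraction:
    category: str         # one of ~20 buckets; drives filtering
    themes: list[str]     # up to 5 free-form keywords; drives search
    audience: str         # "researcher", "practitioner", or "both"
    signal: float         # 0..1: how much a reader should care right now

    def validate(self) -> None:
        assert self.category in CATEGORIES
        assert len(self.themes) <= 5
        assert self.audience in {"researcher", "practitioner", "both"}
        assert 0.0 <= self.signal <= 1.0

Extraction("paper", ["rlhf", "alignment"], "researcher", 0.91).validate()
```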
How the signal score is prompted
The Grok prompt asks for a score based on a weighted mix of novelty (is this new information, or the Nth write-up of the same thing?), importance (does this change how practitioners work?), rigor (is there evidence, or is it hype?), and freshness (when was this published?).
- 0.85 and above: Rare. Genuinely important work such as model launches, breakthrough papers, and major framework releases. These are the items you'd regret missing.
- 0.50 to 0.84: Useful context. Good reads, but skippable if you're pressed for time.
- Below 0.50: Noise. Kept in the corpus for search, hidden from default views.
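Those tiers reduce to a simple bucketing rule, applying the two-decimal rounding described under Limitations first. A sketch (the function and tier names are hypothetical, not the production code):

```python
def tier(raw_score: float) -> str:
    """Bucket a Grok signal score into display tiers."""
    s = round(raw_score, 2)  # scores are shown at two decimals
    if s >= 0.85:
        return "rare"      # genuinely important; don't miss
    if s >= 0.50:
        return "context"   # useful, skippable when pressed
    return "noise"         # kept for search, hidden by default
```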
Limitations you should know
Grok is a language model, so its scoring is opinionated and can vary between runs on the same input. We mitigate but don't eliminate this:
- The prompt is fixed and versioned. Changes are rare and noted in the changelog.
- Scores are rounded to 2 decimals to discourage false precision.
- Every item links to its raw source. If Grok miscategorized something, you can see the original in one click.
- Sampling bias exists. English-first sources dominate the feed. Labs outside the US/EU/China are underrepresented.
Refresh cadence
Different surfaces refresh at different rates, based on how fast the upstream signal changes.
- AI Signal / Git Signal / Papers — every 2 hours via cron, so anything published upstream is indexed within 2 hours of appearing.
- Hermes Live / Claude Code Live — hourly. These are high-velocity streams.
- Company Radars — mixed. Some daily, some weekly, depending on posting frequency.
- LLM Architectures — checker runs every 2 h against the upstream gallery; new additions surface within hours.
- Academy — static. Updated when a new version of the source PDF lands.
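A consequence of the cron cadence is a hard upper bound on indexing latency: an item published just after a run waits at most one full cycle. A minimal sketch, assuming each run picks up everything published since the previous one (the schedule shown is hypothetical):

```python
from datetime import datetime, timedelta

def next_index_time(published: datetime, run_times: list[datetime]) -> datetime:
    """First scheduled run at or after publication, i.e. when the item is indexed."""
    return min(t for t in run_times if t >= published)

# Hypothetical schedule: a run every 2 hours starting at midnight.
runs = [datetime(2026, 4, 1) + timedelta(hours=2 * i) for i in range(12)]
published = datetime(2026, 4, 1, 3, 15)
next_index_time(published, runs)  # the 04:00 run, 45 minutes later
```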
What we don't do
- We don't tell you what to think. The score is an opinion. Always click through to form your own.
- We don't rewrite articles. Summaries are Grok-generated from the original; we don't paraphrase or republish content.
- We don't accept paid placement. Ranking is signal-based, never sponsored. If that ever changes, it'll be labeled clearly.
- We don't sell data or track you beyond basic analytics. No fingerprinting, no cross-site tracking.
Provenance & source code
LLMgram is built and operated by supersocks.io as a public lab notebook. Pipelines, HTML, and data are in private repos for now. Every content item links back to its original source — nothing is published without attribution.
If you find a miscategorized item, a stale score, or a bug in the scoring, ping @iamsupersocks on X/Twitter or @llmgram.
Changelog
- 2026-04 — v1.0 methodology page published. Signal score definition locked.