Encode sources and a prompt in a URL.
Paste it into any AI.
A Context URL encodes sources and a prompt in a single URL.
Because context matters. Grounding AI in sources cuts hallucination rates from 66% to 10%. But the world’s private data is a million times larger than the public web — and a prompt can only carry what you paste into it. Context URLs can use both public and private sources, including data the AI user isn’t allowed to see directly (see below for details).
Fetches two recipe sites and suggests dinner ideas.
Uses the Purdue writing guide to help draft a cover letter.
Combines official React docs with Stack Overflow answers to explain hooks.
Points the AI at the actual IRS publication and asks a specific tax question.
Each agent is backed by the world's leading sources in its field, with a prompt that defines its expertise. A single cURL becomes a shareable, grounded specialist.
Backed by AllRecipes, Food Network, and Serious Eats.
Backed by USDA Nutrition, Academy of Dietetics, and Harvard Nutrition.
Backed by OWASP Top 10, Cheat Sheet Series, and CWE/MITRE.
Backed by Web Vitals, Chrome DevTools, and MDN Performance.
Backed by Reuters, Associated Press, and BBC News.
Backed by The Economist, The Atlantic, and Foreign Affairs.
Backed by Buffett's Letters, Bogleheads, and Investopedia.
Backed by WallStreetBets, Seeking Alpha, and ARK Invest.
Backed by Plain Language, Google Style Guide, and Strunk & White.
Backed by Purdue OWL, Chicago Manual, and Grammarly.
Backed by Khan Academy, MIT OpenCourseWare, and Coursera.
Backed by Brilliant, Quizlet, and OpenStax.
Wrap each agent in brackets and they form a swarm — each grounded in its own sources, debating from its own expertise. The recursive structure means every agent can also exist independently.
Pick your recipe sources — each a top food site — and let them debate.
Pick your reviewers — each backed by leading sources — and let them audit your code.
Pick your outlets — each a leading newsroom — and let them debate.
Pick your sources — each a top finance site — and let them debate.
Pick your style guides — each a leading reference — and let them critique.
Pick your courses — each backed by top resources — and build your study group.
Give each agent in a swarm its own persona. The inner prompt after ! tells each bracketed agent how to behave — turning generic sources into opinionated experts that debate from their own perspective.
Pick sources and give each a cooking persona — they debate from that perspective.
Pick your reviewers and give each an audit focus — they critique from different angles.
Pick your outlets and give each an editorial voice — they analyze from different angles.
Pick your resources and give each a teaching style — they tutor from different approaches.
Unlike a swarm, a group avatar merges many perspectives into one voice. Each source gets a weight — dial the sliders to shape the blend. This is broad listening.
Pick your outlets and dial their influence — they merge into one voice.
Weight your advisors — conservative or aggressive — and get one recommendation.
Blend style authorities into a single editorial voice for your team.
Weight your textbooks and course materials — they merge into one tutor.
The output of a cURL is the webpage. Ask the AI to generate interactive HTML with JavaScript, dynamically synthesized from your sources.
A lasagna recipe becomes a clickable shopping list with serving multiplier.
OWASP cheat sheets become an interactive security audit with pass/fail checklist.
Wire services become an interactive news digest with category filters and timeline.
When a generated page contains links that are themselves Context URLs, each click generates the next page. Add &browser=claude to open all links through Claude. It's an infinite, AI-generated web — each page synthesized on demand from sources.
Wikipedia becomes an infinite AI-generated encyclopedia — every link spawns a new page.
MDN docs become a browsable interconnected reference — each API page generated on demand.
Weighted group avatars with custom personas, debating as a swarm, generating a web world. Select sources, adjust weights, edit personas — the URL updates live.
Everything above uses the open web. But the world's most valuable data — your medical records, your company's docs, your unpublished research — is private. Context URLs work at every privacy level. No matter how sensitive your data, there's a way to use it with AI without giving up control.
Public cURLs use sources on the open web. Everything above is a public cURL.
Your data lives at a URL that isn't indexed — the URL itself is the secret. You already have these. And some are live — edit the document and the AI sees the latest version every time.
An unlisted link works until someone leaks it. A proxied link goes through a relay you control. Turn the proxy off and the link stops working — even if someone saved the URL. Revocable access with zero configuration.
This is a proxy to Hacker News. The owner can revoke it anytime — the real URL is never exposed.
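The revocation mechanism can be sketched in a few lines of Python. This is a sketch only: the names (REAL_URL, ENABLED, relay) are illustrative, the fetch function is injected to keep it self-contained, and a production relay would add header filtering, caching, and authentication.

```python
REAL_URL = "https://news.ycombinator.com"   # the secret upstream; never exposed
ENABLED = True                              # the owner's kill switch

def relay(path, fetch, enabled=None):
    """Return (status, body). Only the relay knows the real URL."""
    on = ENABLED if enabled is None else enabled
    if not on:
        # Even a saved relay URL stops working once the owner revokes it.
        return 410, b"Link revoked"
    return 200, fetch(REAL_URL + path)
```

Clients only ever see the relay's address; turning ENABLED off returns 410 Gone for every saved link.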
You publish content on the open web but declare rules about how AI can use it. These are machine-readable signals — text you add to your website that tells crawlers and AI systems what's allowed.
The web already has a growing ecosystem of these protocols. None of them are enforced by code — they rely on AI companies choosing to comply. But they establish intent, and increasingly, legal standing.
Soft gates ask nicely. Hard gates enforce. A gatekeeper proxy sits between your data and the AI — checking every request against your policy before touching your content. You share the gatekeeper's URL. Your actual data URL stays secret.
The gatekeeper is a proxy with judgment. "Only answer cooking questions." "Only for educational use." "Block queries about personal information." Your rules, enforced on every request.
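Reduced to its skeleton, a gatekeeper might look like the sketch below. The prose describes a proxy with judgment — an AI evaluating free-text policy — while this sketch substitutes simple keyword rules to show the shape of the check; the policy contents are invented.

```python
# Illustrative policy: "Only answer cooking questions" and
# "Block queries about personal information", as keyword rules.
POLICY = {
    "allowed_topics": {"cooking"},
    "blocked_terms": {"ssn", "passport", "home address"},
}

def gate(prompt, topic):
    """Check a request against the owner's policy before any data is touched."""
    if topic not in POLICY["allowed_topics"]:
        return False
    return not any(term in prompt.lower() for term in POLICY["blocked_terms"])
```

Only requests that pass the gate ever reach the data URL behind the proxy.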
Multiple experts, each drawing on private data, debate to answer your question. Each source's data is fetched inside a hardware enclave. The AI processes the combined data, but the AI company can't see who asked or which sources contributed.
This is multi-party computation. The same | that joins public sources now joins private ones — each inside its own enclave, contributing to one answer. Same URL syntax. Radically different infrastructure.
You don't trust anyone — not even the AI company. A CPU enclave fetches each source's private data. A GPU enclave runs an open-source model. Nobody outside the enclaves ever sees plaintext. The room is made of math.
Slower than Confidential, but mathematically airtight. The model itself runs inside auditable hardware. No human — not the user, not the source owners, not the model operator, not the cloud provider — can observe the computation.
Multiple sources contribute to a single answer, but no one — not even the user — can reverse-engineer any individual source's data from the output. Differential privacy adds calibrated noise so the result is useful but each source is protected.
This is the endgame: sources that would never share data with each other can safely contribute to the same computation. A hospital and a pharma company. Competing banks. Rival labs. The math guarantees that joining the query doesn't leak more than staying out of it.
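The standard way to realize this guarantee is the Laplace mechanism. A minimal sketch, assuming a sum query with unit sensitivity; epsilon, sensitivity, and the sampling trick are standard, but their values here are illustrative.

```python
import random

def laplace(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def noisy_sum(values, epsilon=1.0, sensitivity=1.0):
    """Laplace mechanism: add noise with scale b = sensitivity / epsilon,
    so no single contribution is recoverable from the output."""
    return sum(values) + laplace(sensitivity / epsilon)
```

Smaller epsilon means more noise and stronger protection for each source; the aggregate stays useful while individual contributions are hidden.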
The formal grammar of a Context URL.
context-url = base-url "?q=" spec
spec = source-list "!" prompt
source-list = source *( "|" source )
source = [ weight "*" ] ( url | "[" spec "]" )
weight = float ; 0.0 to 1.0
prompt = token *( "+" token )
base-url — The resolver endpoint. Default: https://contexturl.ai/v1/
spec — One or more sources followed by a prompt, separated by !
source — A URL, optionally preceded by a weight, optionally wrapped in brackets
weight — A float from 0.0 to 1.0. Controls context allocation and declares attribution share
prompt — Words joined by +. Spaces are encoded as +
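The grammar can be exercised with a small recursive parser. This is a sketch: the function names and output shape are my own, and URL validation, the numeric encoding, and error handling are omitted.

```python
def split_top(s, sep):
    """Split s on sep, ignoring separators nested inside [...] groups."""
    parts, depth, cur = [], 0, []
    for ch in s:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == sep and depth == 0:
            parts.append("".join(cur))
            cur = []
        else:
            cur.append(ch)
    parts.append("".join(cur))
    return parts

def parse_spec(spec):
    """spec = source-list "!" prompt, per the ABNF above."""
    sources_part, prompt = split_top(spec, "!")   # one "!" per spec level
    sources = []
    for src in filter(None, split_top(sources_part, "|")):
        weight = None
        head, star, rest = src.partition("*")
        if star:
            try:
                weight, src = float(head), rest   # leading "w*" is a weight
            except ValueError:
                pass                              # "*" was not a weight marker
        if src.startswith("[") and src.endswith("]"):
            src = parse_spec(src[1:-1])           # inner spec resolves first
        sources.append({"weight": weight, "source": src})
    return {"sources": sources, "prompt": prompt.replace("+", " ")}
```

Nested brackets parse into nested specs, matching the rule that a bracketed group is itself a complete spec.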
Sources are URLs separated by | (pipe). Each source is fetched and its content is placed in the AI's context window. Sources are presented in order — first source listed is first in context.
| — Source separator. a.com|b.com means both sources.
Sources MUST be valid URLs. The resolver fetches each source and extracts its text content.
Sources MAY require authentication. See the Security chapter for access levels.
The number before each source controls influence and declares attribution. Weights also determine revenue splits when sources require payment.
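As a sketch of how a resolver might turn weights into both context allocation and revenue shares — the token budget, rounding rule, and function names are my assumptions, not part of the spec:

```python
def allocate(weights, budget=8000):
    """Divide a context-token budget across sources in proportion to weight."""
    total = sum(weights)
    return [round(budget * w / total) for w in weights]

def revenue_splits(weights):
    """The same normalized shares double as attribution / revenue splits."""
    total = sum(weights)
    return [w / total for w in weights]
```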
The prompt follows ! and tells the AI what to do with the sources. Words are joined by +. A spec without sources (just !prompt) is valid — it's a pure instruction with no context.
! — Separator between sources and prompt. MUST appear exactly once per spec level.
+ — Word separator within prompts. Decoded as spaces by the resolver.
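Assembling a spec from plain inputs is then mechanical. A sketch: the default base URL comes from the grammar section above, and percent-encoding of characters beyond spaces is omitted here.

```python
def build_curl(sources, prompt, base="https://contexturl.ai/v1/"):
    """Join sources with "|", encode prompt spaces as "+", separate with "!"."""
    spec = "|".join(sources) + "!" + prompt.replace(" ", "+")
    return f"{base}?q={spec}"
```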
Wrap a source group in brackets to give it its own prompt. The inner spec is resolved first — its output becomes a source for the outer spec. Brackets nest arbitrarily. Each bracketed group can exist as an independent cURL.
Prompts can describe programs. LLMs with code execution will write and run them. No sources needed โ just a URL that computes.
The entire spec can be encoded as a sequence of 2-digit numeric codes. If the query value is all digits, the resolver decodes it. Otherwise the spec is interpreted as raw text. Both formats are accepted.
a=01 b=02 ... z=26
.=27 /=28 :=29 -=30 _=31
?=32 ==33 |=34 *=35 !=36
[=37 ]=38 +=39 0=40 ... 9=49
Encoded format is preferred for sharing. Hero URLs on this page remain human-readable for clarity.
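Encoding and decoding against the table above is a direct lookup. A sketch: the dict names and the lowercasing of input are my assumptions, and characters outside the table are unhandled.

```python
# Build the code table from the spec: a-z = 01-26, punctuation = 27-39,
# digits 0-9 = 40-49. Every character maps to exactly two digits.
TABLE = {}
for i, c in enumerate("abcdefghijklmnopqrstuvwxyz", start=1):
    TABLE[c] = f"{i:02d}"
for i, c in enumerate("./:-_?=|*![]+", start=27):
    TABLE[c] = f"{i:02d}"
for i, c in enumerate("0123456789", start=40):
    TABLE[c] = f"{i:02d}"
REVERSE = {v: k for k, v in TABLE.items()}

def encode(spec):
    """Spec text -> all-digit string."""
    return "".join(TABLE[c] for c in spec.lower())

def decode(digits):
    """All-digit string -> spec text, two digits at a time."""
    return "".join(REVERSE[digits[i:i + 2]] for i in range(0, len(digits), 2))
```

Because every code is exactly two digits, an all-digit query value decodes unambiguously — which is how a resolver can tell encoded specs from raw text.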
By default, the AI shows you the list of sources and asks for consent before fetching anything. You see exactly what will be loaded into context — no surprises.
Add &trust=domain.com to only allow sources from domains you trust. Any source from another domain is automatically rejected. Pre-filter before you even see the consent dialog.
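The &trust= pre-filter amounts to a hostname check before anything is fetched. A sketch: treating subdomains of a trusted domain as trusted, and defaulting scheme-less sources to https, are my assumptions.

```python
from urllib.parse import urlparse

def filter_trusted(sources, trusted):
    """Keep only sources whose host is a trusted domain or a subdomain of one."""
    kept = []
    for src in sources:
        url = src if "//" in src else "https://" + src
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in trusted):
            kept.append(src)
    return kept
```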
Every request to a source includes the full context: the prompt being answered, the source's weight, and what other sources are involved. Sources always know how they're being used.
Sources can specify a free-text policy that AI agents must obey. Allowlists, blocklists, usage restrictions — written in plain English. The AI reads the policy and follows it.
Sources can require that requests only come from secure enclaves. Raw source data never travels to the AI user — it stays inside hardware-encrypted memory. Only the final result leaves.
Sources can require anonymization — only participating in queries where they're part of a group. The output can't be reverse-engineered to reveal any individual source's contribution.
Sources can require that requests only come from enclaves that enforce their policies. Non-enclave IP addresses get blocked. The source doesn't have to trust — the hardware guarantees compliance.
If a source wants to see and log prompts, the AI user must reveal them. But if the source prefers not to handle that data, an enclave can still enforce the source's policy — without the source ever seeing the prompt.
Five years from now, will people still build webpages by hand to display content?
Or will end-users choose which data sources they trust — including design sources — and have a website synthesized for them in the moment?
If the latter: how do you get consistency? How do you find your way back to a page you trusted last week? How do you send it to someone else and know they'll see the same thing?
You bookmark the recipe, not the meal.
A ๐ฅ is that recipe. Same sources, same weights, same prompt — same result. Shareable. Bookmarkable. Reproducible. A permanent address for a synthesized experience.
cURLs are a rudimentary form of AI with
attribution-based control.
Built with cURL by