The single most important sentence in this audit: ChatGPT, Perplexity, and Claude cannot currently read filmcuts.io at all. When their crawlers identify themselves honestly, the server returns HTTP 403 Forbidden. Five out of seven major AI crawlers we tested were blocked. Only Google AI Overviews (via Google-Extended) and Apple Intelligence (via Applebot-Extended) get a successful response. That alone explains why Filmcuts is absent from every AI-search citation we tested for queries the brand should own ("shot on film stock footage", "authentic analog footage for brand campaigns", "Super 8 footage subscription").
The second story is the content layer underneath. Even when a crawler does get through, what it sees is a JavaScript shell. The homepage, about page, resources page, and pricing page all serve "Processing" as the H1, no real H2 structure, and zero JSON-LD schema. Product pages are better — they have real titles and descriptions — but no VideoObject, Product, or Organization schema anywhere on the site. And the third story is authority: the founders have a serious commercial reel (Aman, Amway, Park Hyatt) that is completely invisible to AI because it lives in a third-party interview rather than on filmcuts.io itself.
Filmcuts is locked out of the three AI platforms that matter most for brand discovery. Fixing the lock is a one-day infrastructure change. Fixing what's behind the lock is the real work — and it's surprisingly tractable.
| Item | Detail |
|---|---|
| Domain | www.filmcuts.io |
| Platform | Next.js (custom), Cloudflare in front (inferred) |
| Crawlers tested | Standard Chrome browser, GPTBot, OAI-SearchBot, ChatGPT-User, PerplexityBot, ClaudeBot, Google-Extended, Applebot-Extended |
| Pages crawled | Homepage, About Us, Resources, Pricing Plans, a representative clip page, Sitemap index, two sub-sitemaps, robots.txt |
| AI agent endpoints | /llms.txt, /llms-full.txt, /agents.md, /sitemap_agentic_discovery.xml, /.well-known/ucp, /api/ucp/mcp, /.well-known/mcp.json, /ai.txt |
| Schema audit | JSON-LD extraction across homepage, clip page, about, resources |
| Content audit | Heading hierarchy, definition blocks, FAQ presence, HowTo presence, body text character count in raw HTML |
| Authority audit | WHOIS, founder identification, press mention search across NoFilmSchool / PremiumBeat / Stash / Cinema5D / Motionographer / Hypebeast, listicle inclusion check, third-party review surfaces (Trustpilot, G2, Capterra, ProductHunt, Reddit), Wikidata / Wikipedia entity check |
| Citation test | Live SERP and AI-result queries for category-defining searches; mapped who AI cites today |
The goal was to answer one practical question: when an AI assistant tries to read filmcuts.io, what does it actually see, and is that the version of the brand Austin and Fin would want it to cite? We bypassed JavaScript entirely (which is what AI crawlers do) and read the raw HTML each crawler is served — then compared what the brand sells against what AI is currently telling buyers when they ask category questions.
This is the single largest finding in this audit. We sent eight HTTP requests to https://filmcuts.io/, each identifying as a different crawler. Five came back blocked.
| Crawler | HTTP status | Body size | Used by |
|---|---|---|---|
| GPTBot | 403 | 25 B | OpenAI training crawler |
| OAI-SearchBot | 403 | 25 B | ChatGPT Search |
| ChatGPT-User | 403 | 25 B | ChatGPT (when a user asks a question that triggers a live fetch) |
| PerplexityBot | 403 | 25 B | Perplexity AI |
| ClaudeBot | 403 | 25 B | Anthropic / Claude |
| Google-Extended | 200 | 126 KB | Google AI Overviews, Gemini |
| Applebot-Extended | 200 | 126 KB | Apple Intelligence |
| Chrome browser | 200 | 126 KB | Baseline |
The pattern is clean: anything that announces itself as a major LLM crawler gets a 25-byte error response. Anything that looks like a Google or Apple property gets the full page. robots.txt is not the problem. It explicitly allows everything (we read the file directly). The block lives one layer above robots.txt — most likely a Cloudflare bot-management rule or a Next.js middleware rule keyed on user agent.
The practical consequence: when a buyer asks Perplexity "where can I license shot-on-film footage for a brand campaign", Perplexity tries to fetch Filmcuts, gets a 403, and routes the citation elsewhere — to Filmsupply, to Artgrid, to Filmpac, to Stockfilm. Every one of those competitors returns 200 to the same request. We tested.
Until the 403 is fixed, nothing else in this audit can move citation counts. It is the prerequisite to everything that follows.
If Filmcuts is on Cloudflare, the fix is in Security → Bots → Configure Super Bot Fight Mode → explicitly allow verified AI bots (Cloudflare added a one-click toggle for this in mid-2024). If the block is in Next.js middleware, the fix is removing the user-agent gate. Either way: under one engineering day. We can be specific once we see the infra.
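Once the fix ships, verification is mechanical: fetch the homepage as each crawler and confirm every row flips to 200. A minimal Python sketch of that check — the live fetch helper is included but the demonstration below runs on the statuses observed in this audit, so it works offline:

```python
import urllib.request
import urllib.error

AI_CRAWLERS = [
    "GPTBot", "OAI-SearchBot", "ChatGPT-User",
    "PerplexityBot", "ClaudeBot",
    "Google-Extended", "Applebot-Extended",
]

def fetch_status(url, user_agent, timeout=10):
    """Fetch a URL identifying as the given crawler; return the HTTP status."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 403s land here rather than raising through

def blocked_crawlers(results):
    """Given {user_agent: status}, list the crawlers that were refused."""
    return [ua for ua, status in results.items() if status == 403]

# Statuses observed in this audit (hardcoded so the sketch runs offline):
observed = {
    "GPTBot": 403, "OAI-SearchBot": 403, "ChatGPT-User": 403,
    "PerplexityBot": 403, "ClaudeBot": 403,
    "Google-Extended": 200, "Applebot-Extended": 200,
}
print(blocked_crawlers(observed))
# → ['GPTBot', 'OAI-SearchBot', 'ChatGPT-User', 'PerplexityBot', 'ClaudeBot']

# To re-run live after the fix (expects an empty list):
#   results = {ua: fetch_status("https://filmcuts.io/", ua) for ua in AI_CRAWLERS}
```

After the unblock, `blocked_crawlers` on a live scan should return an empty list; any crawler still in the output points at a remaining rule in the bot-management layer.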
In the last twelve months a quiet platform shift began: large language models and AI shopping agents started looking for a specific set of files that brands publish to make themselves machine-readable. Shopify auto-publishes most of them. Custom Next.js stacks (like Filmcuts) have to ship them deliberately. Today, Filmcuts ships none.
| File or endpoint | Purpose | Status |
|---|---|---|
| /llms.txt | High-level brand summary for LLMs | 404 |
| /llms-full.txt | Full-detail version with structured data links | 404 |
| /agents.md | Agent operating instructions | 404 |
| /sitemap_agentic_discovery.xml | AI-agent-specific sitemap | 404 |
| /.well-known/ucp | Universal Commerce Protocol profile | 404 |
| /api/ucp/mcp | MCP endpoint for agent-driven licensing | 404 |
| /ai.txt | AI training opt-in / opt-out declaration | 404 |
| /.well-known/mcp.json | MCP service discovery descriptor | 404 |
Two of these matter immediately: /llms.txt and /agents.md. They take an hour each to write and they're the canonical place an AI looks for "tell me about this brand in 200 words". The next two — UCP and MCP — are the early-mover infrastructure for agent-driven commerce. A buyer's AI agent will eventually want to license a clip directly through the MCP endpoint, not by sending the buyer to a browser. Stores set up for it today will be the ones agents transact with first.
Filmcuts has a specific advantage here that most stock-footage libraries lack: the catalog is structured (clips, packs, artists, licenses) and the licensing model is two-tier (Editorial $49, Agency $999) — clean enough to expose as an MCP tool with very little adaptation. It's a strategic moat available for a quarter's worth of effort.
Filmcuts can become the agent-commerce-ready stock-footage library before any of the larger incumbents do.
Artgrid, Filmsupply, Pond5, Storyblocks — none of them publish these files yet. Filmcuts shipping llms.txt, agents.md, and a basic MCP endpoint in the next quarter would put the brand on the discoverable side of agent-driven licensing while the category is still empty. The cost is real but the window is open.
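Of the files above, /llms.txt is the fastest to ship. A minimal sketch, assuming Python; the wording below is draft placeholder copy the brand should edit before publishing, drawn from the facts established elsewhere in this audit:

```python
# Draft /llms.txt for filmcuts.io — the summary text is a placeholder
# for the brand to edit, not final copy.
LLMS_TXT = """\
# Filmcuts

> Filmcuts is a stock-footage library of authentic shot-on-film footage
> (Super 8, 16 mm, 35 mm), captured by working filmmakers. Founded in 2022
> by Austin Divine and Fin Matson. Two license tiers: Editorial ($49) and
> Agency ($999). The Originals program funds one new filmmaker per quarter.

## Key pages
- [Pricing](https://filmcuts.io/pricing-plans): license tiers and what each covers
- [About](https://filmcuts.io/about-us): founders and prior client work
- [Resources](https://filmcuts.io/resources): editorial on shot-on-film production
"""

# Write the file where the web server (or Next.js public/ dir) will serve it.
with open("llms.txt", "w", encoding="utf-8") as f:
    f.write(LLMS_TXT)
```

The file is deliberately short: a blockquote summary an LLM can quote verbatim, plus links to the pages that answer the next question a buyer would ask.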
This was the most surprising single thing we found. AI crawlers anchor extraction to heading tags — they use the H1 to understand page topic and H2s to understand section structure. When we read the raw HTML that Filmcuts ships to a crawler, here's what we got on the homepage:
| Element | What's there now |
|---|---|
| H1 count (raw HTML) | 1, with content "Processing" |
| H2 count (raw HTML) | 1, with content "Staff Picks" |
| Visible body text in raw HTML | ~50,800 characters — mostly footer, nav, and metadata, not the visible hero / use-cases / testimonials |
| Brand definition block | Not present in raw HTML |
| JSON-LD blocks | 0 |
| og:url | https://filmcuts.com (broken domain — see Finding 7) |
The same pattern appears on /about-us and /resources: H1 is literally the loading-skeleton string "Processing", and no real H2s exist in the server response. Everything else hydrates after JavaScript runs in a browser. AI crawlers don't run JavaScript. They read "Processing" and move on.
A suggested server-rendered replacement: <h1>Real Shot-on-Film Stock Footage, Licensed for Brand Use</h1> <p>Filmcuts is a stock-footage library of authentic Super 8, 16 mm, and 35 mm footage, captured by working filmmakers and scanned at up to 4K ProRes. Founded in 2022 by Austin Divine and Fin Matson. Used by directors and brand teams who need analog texture without the cost of an original shoot.</p>
Section H2s should describe the section directly: Browse the Library, How Filmcuts Differs From Digital Stock, Used By (the brand logo strip), Pricing, What Editors Say, Originals: New Films We Funded This Quarter. This should ship as server-rendered HTML, not client-only.
The homepage is the most-cited page on any domain. Right now AI has no anchor for what Filmcuts is.
When ChatGPT or Perplexity reads a page with no real H1 and one section labelled "Staff Picks", the model fills in the gap from elsewhere on the web — often inaccurately, often from a competitor's roundup. Adding a real H1 and a brand definition block tells the model "this is the canonical answer", and most of the citation work flows from there. The brand voice on the live site is excellent ("Real Film. No Filters.") — it just needs to also exist in the version AI reads.
Filmcuts is built on Next.js, which is capable of server-side rendering. The current build is not using it for the pages that matter most. Here's the pattern across the site:
| Page | SSR H1 | SSR H2 count | SSR JSON-LD | Verdict |
|---|---|---|---|---|
| / (Homepage) | "Processing" | 1 ("Staff Picks") | 0 | JS-only |
| /about-us | "Processing" | 0 | 0 | JS-only |
| /resources | "Processing" | 0 | 0 | JS-only |
| /pricing-plans | Not present in SSR; hydrated | 0 in SSR | 0 | JS-only |
| /explore/media/<id> (clip) | Real clip title (e.g. Shanghai City Skyline) | 2 ("More From This Pack", "You May Also Like") | 0 | SSR works; schema missing |
The good news inside this: clip detail pages are server-rendered properly. Their H1 is the real clip name and their meta description is well-written and evocative. The catalog itself is technically discoverable.
The bad news: the pages that define what Filmcuts is as a brand — homepage, about, pricing, resources — are exactly the pages that AI crawlers can't read. The catalog is visible but the story behind it isn't.
Move the homepage, about, pricing, and resources pages to server-side rendering. Keep the catalog as it is.
In Next.js this means making each page a Server Component, switching it to getServerSideProps, or pre-rendering it as static — whichever fits the existing data flow. The verification test is one command: curl -A "GPTBot/1.0" https://filmcuts.io/pricing-plans | grep "\$999". If $999 appears in the response, the page is now AI-readable. If it doesn't, the page still isn't.
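The same check can run across every key page in one script. The logic is trivially simple, which is the point — a marker string either exists in the raw server response or it doesn't. A sketch demonstrated offline on a skeleton response versus a properly rendered one:

```python
def ssr_readable(raw_html, markers):
    """True if every expected marker string appears in the raw server response.

    AI crawlers read raw HTML only, so content hydrated client-side is
    absent here even though a browser would display it.
    """
    return all(m in raw_html for m in markers)

# What the audit saw today (loading skeleton) vs. what SSR should produce:
skeleton = "<h1>Processing</h1>"
rendered = "<h1>Pricing Plans</h1><p>Editorial $49 / year, Agency $999 / year</p>"

print(ssr_readable(skeleton, ["$999"]))          # False — page is JS-only
print(ssr_readable(rendered, ["$999", "$49"]))   # True — AI-readable
```

Run against live responses (fetched with an AI-crawler User-Agent), this gives a pass/fail per page that can sit in CI so a future deploy can't silently regress the pages back to client-only rendering.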
Schema (JSON-LD) tells AI what kind of content a page is. It's how a model knows "this is a product, with this price, and this name, and these reviews" versus "this is general marketing text". We checked every key page on filmcuts.io. There are zero JSON-LD blocks. None.
| Schema type | Status | What it would unlock |
|---|---|---|
| Organization | MISSING | Entity definition for "Filmcuts" — founders, address, social profiles. The block AI uses to disambiguate the brand from FilmCut (Android app), Filmcuts Studio, and other near-namesakes. |
| WebSite + SearchAction | MISSING | Tells AI how to search the catalog. Unlocks sitelink-style answers and direct AI-mediated catalog search. |
| VideoObject (per clip) | MISSING | Largest single miss for a stock-footage library. Lets AI cite individual clips with thumbnail, duration, upload date, content URL. Standard for every competitor with native video. |
| Product (per pack) | MISSING | Lets AI surface pack pricing, licensing tier, and availability when buyers ask about specific use cases. |
| Offer (per license tier) | MISSING | Encodes the $49 / $999 tier structure so AI can quote pricing accurately when a buyer asks "how much does Filmcuts cost". |
| FAQPage | MISSING | Direct citation on questions like "is shot-on-film better than digital", "what license do I need for a commercial", "how is Filmcuts different from Artgrid". These are high-intent buyer queries. |
| HowTo | MISSING | Direct citation on "how to use shot-on-film footage in a brand campaign", "how to license a clip from Filmcuts". Process queries get extracted as numbered steps. |
| Person (founders) | MISSING | Authority signal for Austin Divine and Fin Matson, with sameAs to their Instagram and other platforms. AI uses founder identity as a citability check. |
| BreadcrumbList | MISSING | Tells AI the site's information architecture. Standard for SEO; minor for AI. |
For a stock-footage library, the single most expensive schema to leave out is VideoObject. Every clip on the site should have one. With it, the catalog becomes addressable by AI as individual licensable assets. Without it, the catalog is just decoration around an unstructured page.
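The missing blocks are mechanical to generate from the catalog database. A minimal sketch, assuming Python; every field value below is illustrative placeholder data, not real catalog content — the real values come from the clip records:

```python
import json

def json_ld_script(data):
    """Wrap a schema.org dict in the <script> tag the server should render."""
    return ('<script type="application/ld+json">'
            + json.dumps(data, ensure_ascii=False)
            + "</script>")

# Illustrative VideoObject for one clip, with an Offer encoding the
# Editorial tier. URLs, date, and description are placeholders.
clip = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Shanghai City Skyline",
    "description": "Authentic shot-on-film footage of the Shanghai skyline.",
    "thumbnailUrl": "https://filmcuts.io/thumbs/PLACEHOLDER.jpg",
    "contentUrl": "https://filmcuts.io/explore/media/PLACEHOLDER",
    "uploadDate": "2024-01-01",
    "offers": {
        "@type": "Offer",
        "price": "49",
        "priceCurrency": "USD",
        "name": "Editorial license",
    },
}

tag = json_ld_script(clip)
```

In a Next.js build this string would be emitted into the page head of each clip route, so the schema ships in the same server response as the clip title — one template change covers the whole catalog.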
AI systems don't just cite the most relevant page; they cite the most credible one. The credibility signals are domain age, founder identity, press coverage, third-party reviews, and proprietary data. Filmcuts has more of these than is visible to AI today.
| Signal | Status | Note |
|---|---|---|
| Domain age | 2.7 years | Registered 17 Sep 2022 (NameCheap, Iceland privacy). Reasonable but not authoritative. |
| Founder identity | STRONG — but off-site | Austin Divine (founder, Chiang Mai, ex-boutique production company with Aman, Amway, Park Hyatt as past brand clients). Fin Matson (co-founder, creative director, photographer). Visible in one third-party interview, not surfaced on filmcuts.io itself. |
| Press / editorial mentions | THIN | One discoverable feature: archivalfootageservice.com (small, niche site). Zero coverage at NoFilmSchool, PremiumBeat, Stash, Cinema5D, Motionographer, Hypebeast, or Booooooom — the publications AI weights for filmmaking authority. |
| Listicle inclusion | NOT INCLUDED | "Top Artgrid alternatives" roundups (SlashDot, SourceForge, WebCatalog, Wrapbook, Studiobinder) do not list Filmcuts. These are heavily cited by ChatGPT and Perplexity for category recommendations. |
| Third-party review surface | NONE FOUND | No Trustpilot, no G2, no Capterra, no ProductHunt listing. On-site testimonials exist (5 visible) but use Instagram handles rather than "Name, Title at Company" — weaker for AI extraction. |
| Wikidata / Wikipedia | ABSENT | No entity for Filmcuts, no entity for Austin Divine as filmmaker, no entity for Fin Matson. AI assistants pull heavily from Wikidata to disambiguate brand names — particularly important here because of name collisions (see next finding). |
| Proprietary data | PARTIAL | The Originals program (one filmmaker funded per quarter) is a genuine, citable, unique angle. It's mentioned on the homepage but has no dedicated page with structured data, named filmmakers, attribution, or year. The asset exists; it's not structured for citation. |
This is the most underused asset in the audit. Austin spent close to a decade directing video for international brands including Aman, Amway, and Park Hyatt before launching Filmcuts. That is exactly the kind of credential AI looks for when deciding whether to cite a source on filmmaking topics. Today, it lives on a single third-party site. If it lived in a Person + sameAs JSON-LD block on a real /about page, plus a one-paragraph bio with named clients above the fold, the citation eligibility lift would be measurable inside a month.
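The Person block itself is a few lines. A sketch of the founder entity — the sameAs URLs below are placeholders to be replaced with the founders' real profile links, and Fin Matson gets an analogous block:

```python
import json

# Person + sameAs JSON-LD for the /about-us page. Profile URLs are
# PLACEHOLDERs — swap in the real Instagram / LinkedIn links.
founder = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Austin Divine",
    "jobTitle": "Founder",
    "worksFor": {
        "@type": "Organization",
        "name": "Filmcuts",
        "url": "https://filmcuts.io",
    },
    "sameAs": [
        "https://www.instagram.com/PLACEHOLDER",
        "https://www.linkedin.com/in/PLACEHOLDER",
    ],
}

founder_json = json.dumps(founder, indent=2)
```

The sameAs array is what lets an AI connect the on-site bio to the off-site interview and social profiles — it is the machine-readable version of "yes, this is the same Austin Divine".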
Every page on filmcuts.io declares its canonical Open Graph URL as https://filmcuts.com — not the live .io domain. We tested filmcuts.com directly: the connection times out completely (HTTP 000, no response). Any AI system that uses the og:url tag to verify a canonical source for Filmcuts content will see an unreachable domain and downweight the page. Any human who hears "Filmcuts" and types .com never lands on the brand. This is a single line of metadata to change.
| Domain | Status | Risk |
|---|---|---|
| filmcuts.io | LIVE | The actual site |
| filmcuts.com | UNREACHABLE | Declared as og:url; not owned or not resolving. Brand-by-ear traffic falls off a cliff. |
| filmcuts.net | UNREACHABLE | Defensive registration not held |
| filmcuts.co | 404 | Defensive registration not held; one-letter typo risk against .io |
| filmcuts.app / .studio | UNREACHABLE | Defensive registration not held |
Filmcuts shares a name with at least three other entities AI can confuse it with, including the FilmCut Android app and Filmcuts Studio (see the Organization schema note in the schema audit).
The mitigation isn't engagement with any of them. It's making Filmcuts's first-person identity unmissable: a structured Organization schema block, a real H1 that includes the word "footage" or "library", named founders with Person schema, and a Wikidata entity creation. The clearer the entity, the harder it is for AI to conflate.
Authority is the slowest pillar to move. The reason to start it this month is that lead times are quarters, not weeks.
Most of this audit's recommendations show measurable results in weeks. Authority is different: founder media placements, listicle inclusions, Wikidata entity, press hits — each has a long lead time. The right time to start is in parallel with the on-site fixes, not after. The good news is that Filmcuts has more genuinely citable material to work with than most brands at this stage — the founder story, the Originals program, the Aman/Amway/Park Hyatt commercial reel, the Bali workshop model.
You said you weren't sure what queries to target. Here's how to think about it. AI search visibility splits into four query types, ordered by how fast Filmcuts can realistically win each.
| Tier | Example queries | Difficulty | How Filmcuts wins |
|---|---|---|---|
| Branded | "what is filmcuts", "is filmcuts legit", "filmcuts pricing", "filmcuts review", "filmcuts vs artgrid", "who runs filmcuts" | EASY | Unblock crawlers, ship SSR for homepage and pricing, add Organization + Offer + Person schema, add a structured FAQ. All on filmcuts.io. Brand owns the answer. |
| Educational | "what is shot-on-film stock footage", "Super 8 vs 16mm vs 35mm for commercials", "what is film grain", "how to license analog footage for a brand campaign", "is real film better than digital LUTs" | MEDIUM | Editorial pieces with definition blocks, attributed sources, HowTo schema. Filmcuts has more first-hand subject-matter authority here than any incumbent stock library. The Resources hub is the natural home — currently has 3 articles, needs 8–12. |
| Category / commercial | "best authentic film footage for brand campaigns", "alternative to Artgrid", "shot-on-film stock footage subscription", "Super 8 footage library for music videos" | MEDIUM | A clear positioning page that explicitly compares Filmcuts to Artgrid, Filmsupply, Filmpac, Stockfilm and raw.film on the dimensions that matter (actual film vs digital, pack-based vs per-clip, license tiers, scan resolution). AI loves a clean comparison table. |
| Listicle / awareness | "best stock footage sites", "best Artgrid alternatives", "premium video assets for filmmakers" | HARD | Third-party PR play. Pitches to Wrapbook, Studiobinder, No Film School, Cinema5D for inclusion in their roundup posts (which are the canonical AI sources for category queries). Quarter-long timeline. Strongest angle: the Originals program is a unique editorial hook. |
Order of operations: branded first (one to two weeks of on-site work, fully controllable). Educational and category in parallel (one to two quarters). Listicle as the long-haul play. The Originals quarterly-funded-filmmaker program is the easiest unique angle to lead with — there is no other shot-on-film library globally running an artist-funded model like that, which means AI has no competing source for "how Filmcuts supports filmmakers" type stories. Use it.
Ordered by impact divided by effort. The first four items together close most of the "AI can't read us" problem in under a week of engineering time. The middle group is the content layer. The final group is the slower authority play that runs in parallel over a quarter.
1. Unblock the AI crawlers. Success test: curl -A "GPTBot/1.0" https://filmcuts.io/ returning HTTP 200. (2–4 hrs, infra)
2. Fix the og:url from https://filmcuts.com to https://filmcuts.io across the site (it's almost certainly a single environment variable or layout component). (10 min, eng)
3. Ship /llms.txt and /agents.md. A 200-word brand summary and an agent-instruction file. These are the canonical files AI looks for when asked to "tell me about Filmcuts". (1 hr)
4. Add Organization JSON-LD with sameAs links to Instagram and LinkedIn. Office location. Founding year. This is the entity-level disambiguation block. (1 hr, eng)
5. Rebuild the /about-us page with named founders and prior client list. Austin's commercial reel (Aman, Amway, Park Hyatt) is currently invisible to AI. Surface it in extractable form on-site. Add Person schema. (2 hrs)

We've now run this audit across a few dozen brands. Filmcuts's starting position is unusual: the brand is in a category that's about to be very interesting to AI search (analog texture is the explicit trend Filmsupply called out in its 2026 commercial filmmaking report), the founders have a serious commercial reel, the catalog has a unique angle (real film, not digital), and the Originals program is the kind of editorial story AI hunts for because it can't get it anywhere else.
What's holding it back is almost entirely technical. The crawlers are blocked. The HTML they would read if they got in is a JavaScript shell. The schema doesn't exist. The authority signals are real but off-site. Each of these has a clean fix, and the order they're fixed in matters: you can't measure citation lift from content work if the crawlers can't reach the content. Section 11 is sequenced to fix that root cause first.
The brand is more cite-worthy than most. It's just locked behind a door that takes a few hours to open and a few weeks to make worth walking through.
This audit is a snapshot of one moment in time. AI search changes fast: schema models shift, new crawlers appear, citations rotate every few weeks, and competitors who are also reading their analytics will not stand still. A static document can tell you what to fix. It can't tell you whether the fixes are working, which is where the measurement layer comes in.
The minimum agent surface: /llms.txt, /agents.md, and an Organization + Person JSON-LD block.

For the prompt and scan side, we use our own AI Visibility platform. It's the same tool we run for our other clients, and it's how we know whether each fix is moving the needle in AI search rather than just in our intuition.
| What it does | How it works for Filmcuts |
|---|---|
| Custom prompt library | 60 to 100 prompts built around Filmcuts's categories. Awareness ("best stock footage for brand campaigns"). Consideration ("shot-on-film vs digital", "Super 8 footage subscription"). Decision ("filmcuts review", "filmcuts vs artgrid"). Comparison (Filmcuts vs Artgrid, Filmsupply, Filmpac, Stockfilm, raw.film by name). Niche ("Originals program", "real Super 8 footage for music videos"). |
| Multi-platform scan | Each prompt runs against ChatGPT, Claude, Gemini, and Perplexity. Roughly 240 to 400 responses per scan. |
| Response classification | Every response tagged as recognised correctly, hallucinated, clarification request, or not mentioned. Hallucinations and competitor misattributions surface automatically — particularly important here because of the FilmCut / Filmcuts Studio name collision risk we found in Finding 7. |
| Competitor tracking | We track Artgrid, Filmsupply, Filmpac, Stockfilm, raw.film, PeriscopeFilm, and Storyblocks by name so we can see which queries route to them today and which start routing to Filmcuts as the on-site fixes ship. |
| Recurring cadence | Weekly or monthly re-scans on the same prompt set. Each scan is a delta against the last one. The dashboard shows what moved, what didn't, and which fixes earned which citations. |
| Citation venue map | For every recognised mention, the platform logs which source URL the AI cited — filmcuts.io, a press placement, a third-party listicle, a Reddit thread — so we can see whether the citation surface is the brand's own site or a borrowed one. |
The platform replaces the "test 5 queries on 4 platforms every Monday" workflow that most brands try to run by hand and abandon within a month. It's the same operating layer we use for our other clients to pinpoint which model invented which fact about the brand, and which fixes shifted recognition by how much.
Baseline scan first, then the on-site fixes, then re-scan after four weeks. The delta is the proof.
First scan establishes the before-state across Filmcuts's prompt set. The fixes from Section 11 ship in parallel — we can either advise or take them on directly. Four weeks later we re-scan and compare. The full engagement shape (scope, timeline, pricing) lives in the proposal doc, and we can walk it through together whenever you're ready.
Anyway, that's where Filmcuts sits right now. The bot block is genuinely the gating issue — and it's the cheapest one to fix in the whole audit. Once that's done, the brand has more legitimately interesting material to put in front of AI than most stock libraries do.
Whenever you want to look at it together I'll walk you through the dashboard live so you can see what tracking Filmcuts specifically would look like. Easier shown than described, and I can pull up another client's at the same time so you can see what month four of doing this actually looks like.
Cristoforo