
Enterpret in Notion Custom Agents

Build Notion Custom Agents on top of your customer feedback

Written by Team Enterpret

Connect Enterpret as a tool inside a Notion Custom Agent to build agents that run on your customer feedback. Schedule weekly coaching digests, auto-fill PRDs with customer evidence, keep your help center current against what users are actually asking for — all inside Notion, no context switching.

Notion Custom Agents are the most capable self-configurable Enterpret surface today. Notion supports the full MCP protocol (tools, resources, and prompts), so Enterpret's initialization logic loads the same way it does in Claude.

What this unlocks:

  • Scheduled agents that write findings to a Notion database on a cadence

  • Event-triggered agents that fire when a page, comment, or property changes

  • In-page @mentions that let you ask Enterpret to back a claim with customer evidence as a comment on the page itself


Before you start

You need:

  • A Notion Business or Enterprise plan. Custom Agents are not available on Free, Plus, or personal workspaces.

  • An Enterpret account. If your organization uses SSO or invite-only provisioning, confirm you can sign in to dashboard.enterpret.com before starting.

  • Workspace admin access in Notion — or an admin willing to enable one setting for you. One-time setup, covered below.


Step 1 — Enable custom MCP servers in Notion (admin, one-time)

A Notion workspace admin enables this once for the whole workspace.

  1. In Notion, open Settings

  2. Navigate to Notion AI → AI connectors

  3. Turn on Enable Custom MCP servers

After this is on, any member who can create Custom Agents can add Enterpret to their agents.


Step 2 — Create a Custom Agent

  1. In Notion, click New → Custom Agent (or go to the Agents panel and click Create agent)

  2. Give the agent a name and a one-line description — this is what teammates see when they invoke it


Step 3 — Connect Enterpret as a Custom MCP server

  1. In your agent, open Settings → Tools & Access

  2. Click Add connection → Custom MCP server

  3. Fill in a name for the connection (e.g., Enterpret) and the server URL: https://wisdom-api.enterpret.com/server/mcp

  4. Click Save

  5. Notion opens an authorization window. Sign in with your Enterpret account and authorize access.

Once connected, the Enterpret tools appear under the connection:

  • get_organization_details — loads your org's name, ID, and dashboard URL

  • get_schema — discovers your Knowledge Graph structure (product areas, themes, data sources)

  • execute_cypher_query — queries feedback data to surface insights, trends, patterns

  • search_knowledge_graph — looks up terms, product areas, or themes

Tool execution modes:

  • Leave read tools (search, list, fetch) on Run automatically — they don't modify anything

  • For write tools exposed by your other MCP connections (create, update, delete), Notion defaults to Always ask confirmation. Keep this on for anything that edits external systems


Step 4 — Add initialization instructions

Notion loads Enterpret's MCP resources automatically, so the agent gets initialization context out of the box. For reliability, we recommend also pasting the following into the agent's Instructions field — it anchors every run against the same baseline.

You have access to Enterpret, which contains customer feedback for our
organization across all sources. Before running any query:

1. Call get_organization_details to confirm the active organization
2. Call get_schema to understand available product areas, themes, and data
sources
3. Use search_knowledge_graph to find the right terms before querying
4. Use execute_cypher_query for retrieval — prefer tight, scoped queries
over broad scans

When presenting findings, include:
- Specific feedback record IDs so they can be clicked through
- Record counts, not just narrative summaries
- Date ranges and sources so the reader knows what was included
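To see the shape this flow takes, here is a minimal Python sketch of the recommended call order and of what a "tight, scoped" query might look like. The node labels and properties (`Feedback`, `Theme`, `createdAt`) are illustrative placeholders, not Enterpret's actual Knowledge Graph schema — use get_schema to discover the real one:

```python
from datetime import date, timedelta

# Recommended call order from the instructions above.
CALL_ORDER = [
    "get_organization_details",
    "get_schema",
    "search_knowledge_graph",
    "execute_cypher_query",
]

def scoped_feedback_query(theme: str, days: int = 7) -> str:
    """Build a tight, scoped Cypher query instead of a broad scan.

    Labels and properties here are assumptions for illustration only.
    """
    since = (date.today() - timedelta(days=days)).isoformat()
    return (
        f"MATCH (f:Feedback)-[:HAS_THEME]->(t:Theme {{name: '{theme}'}}) "
        f"WHERE f.createdAt >= date('{since}') "
        "RETURN f.id, f.source, f.createdAt "
        "ORDER BY f.createdAt DESC LIMIT 50"
    )

print(scoped_feedback_query("Onboarding"))
```

The date filter and LIMIT clause are what keep the query scoped — the same principle as "prefer tight, scoped queries over broad scans" above.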

Then write the rest of your instructions as an outcome, not a sequence of steps. Notion's own guidance: "Describe what success looks like rather than prescribing how to achieve it."

See the example agents below for instruction patterns that work well.


Step 5 — Test before scheduling

  1. Click Run to execute the agent once, manually

  2. Open the activity log (clock icon) to see exactly what the agent called and returned

  3. Adjust instructions or tool access, then run again

  4. Only turn on schedules or triggers after two or three manual runs look right

Start at a lower frequency (weekly) and only scale up once output is reliable. Running daily on day one burns credits and amplifies every mistake.


Example agents

1. Support coaching digest (scheduled)

Goal. Surface coaching opportunities for support agents based on negative-sentiment customer interactions.

Setup.

  • Trigger: recurring schedule, every Monday at 9am

  • Destination: a Notion database with columns for agent name, negative-sentiment %, primary coaching opportunity, evidence link, status

  • Instructions below:

Every week, pull support interactions from the last 7 days where customer
sentiment was negative. Group by support agent. For each agent where
negative-sentiment rate exceeds the team average:

- Add a row to the Coaching Opportunities database
- Fill in: agent name, their negative-sentiment %, team average for
comparison, the top coaching opportunity (drawn from recurring themes
in their negative interactions), and a link to a representative
feedback record
- Set status to "New"

If no agent exceeds the team average, do nothing. No row, no message.
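The flagging rule in these instructions reduces to a single comparison against the team average. A minimal sketch, assuming `rates` maps each support agent's name to their negative-sentiment fraction (the field names are made up for illustration):

```python
def flag_for_coaching(rates: dict[str, float]) -> dict[str, float]:
    """Return only agents whose negative-sentiment rate exceeds the team average.

    An empty result means: write no rows, send no message.
    """
    if not rates:
        return {}
    team_avg = sum(rates.values()) / len(rates)
    return {name: r for name, r in rates.items() if r > team_avg}

rates = {"ana": 0.10, "ben": 0.30, "cy": 0.20}
print(flag_for_coaching(rates))  # team average is 0.20, so only "ben" is flagged
```

Note the strict `>`: an agent sitting exactly at the average is not flagged, which matches the "do nothing" rule.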

Why this works. The chat window is not the product here. The Notion database is. Once the agent writes rows, you can build Notion dashboards on top (trend over time, distribution by team, highest negative-sentiment agents) that the support manager opens each Monday morning.

2. Self-maintaining help center (scheduled)

Goal. Keep internal help articles current against what customers are actually asking.

Setup.

  • Trigger: recurring schedule, every Friday

  • Destination: comments on existing help-article pages in the Help Center database

  • Instructions below:

Every Friday, for each article in the Help Center database, check whether
recent feedback from the last 14 days surfaces questions or issues the
article doesn't currently address.

When you find a gap, add a comment to the article with:
- The specific gap (one sentence)
- Two or three representative feedback record links as evidence
- A suggested addition or revision

Do not edit the article directly. Comments only — a human reviews and
accepts.

If an article has no gaps, do nothing.
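Conceptually, the gap check is a set difference: topics customers asked about recently, minus topics the article already covers. Extracting those topics is the agent's job (via search_knowledge_graph); this sketch only shows the comparison, with made-up topic strings:

```python
def find_gaps(article_topics: set[str], recent_question_topics: set[str]) -> set[str]:
    """Topics customers asked about recently that the article doesn't cover.

    An empty set means: no comment on this article, per the rules above.
    """
    return recent_question_topics - article_topics

gaps = find_gaps({"sso setup", "billing"}, {"billing", "scim provisioning"})
print(gaps)  # {'scim provisioning'}
```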

Why this works. Comments preserve human review. The agent surfaces candidates; the editor decides. Over time the help center converges on what customers actually need documentation for.

3. Back a PRD claim with customer evidence (on-demand, @mention)

Goal. Anchor a PRD in real customer data as you're writing it.

Setup.

  • No schedule or trigger — invoked by @mention from inside any page

  • Instructions below:

When @mentioned inside a page, read the surrounding paragraph or the
specific selection the author highlighted. Identify the claim being made
about customers, their pain points, or their requests.

Respond as a comment on the page with:
- Whether the claim is supported by customer feedback (yes / partially / no)
- Record count backing the claim (how many feedback records, over what
date range, from which sources)
- Two or three representative feedback record links
- If the claim is only partially supported, one sentence on the gap

Keep the comment under 120 words.
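If you want the yes / partially / no verdict to follow an explicit rule rather than agent judgment, you can spell one out in the instructions. The thresholds below are invented for illustration — the prompt above deliberately leaves the call to the agent:

```python
def verdict(supporting: int, contradicting: int) -> str:
    """Map record counts to a claim verdict.

    Hypothetical cutoffs: no support at all -> "no"; any contradiction
    or fewer than 3 supporting records -> "partially"; otherwise "yes".
    """
    if supporting == 0:
        return "no"
    if contradicting > 0 or supporting < 3:
        return "partially"
    return "yes"

print(verdict(12, 0))  # yes
print(verdict(2, 0))   # partially
print(verdict(0, 5))   # no
```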

Why this works. PRDs drift toward hunches. An @mention that fact-checks against Enterpret data turns every claim into a small verification loop without breaking the author's flow.

4. Fill a table column from Enterpret (on-demand)

Goal. You have a Notion table with 20 feature requests. You want a "customer evidence" column filled for each.

Setup. Highlight the column, @mention your Enterpret agent:

"For each row in this table, search Enterpret for feedback matching the feature request in column A. Fill the Customer Evidence column with: number of matching records, top two source types, and a link to the most representative record. If you find fewer than 3 matching records, write 'No strong signal'."

The agent works row by row and writes results back into the table.
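The "No strong signal" fallback in that prompt is just a count threshold. A sketch of the cell-formatting rule, assuming each record is a dict with `source` and `url` keys (a placeholder shape, not Enterpret's actual record format):

```python
from collections import Counter

def evidence_cell(records: list[dict]) -> str:
    """Format a Customer Evidence cell per the prompt above."""
    if len(records) < 3:
        return "No strong signal"
    top_sources = [s for s, _ in Counter(r["source"] for r in records).most_common(2)]
    return f"{len(records)} records | {', '.join(top_sources)} | {records[0]['url']}"

rows = [
    {"source": "Zendesk", "url": "https://example.com/r/1"},
    {"source": "Zendesk", "url": "https://example.com/r/2"},
    {"source": "App Store", "url": "https://example.com/r/3"},
]
print(evidence_cell(rows))      # 3 records | Zendesk, App Store | https://example.com/r/1
print(evidence_cell(rows[:2]))  # No strong signal
```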


Writing good instructions

Instruction quality is the single biggest factor in whether your agent works.

Lead with the outcome, not the process. Write "Create a weekly status update summarizing completed tasks, blockers, and next steps," not "First query this, then do that, then…" — the agent figures out its own path better than a prescribed sequence.

Show, don't describe. Paste an example of the output you want. A previous report, a filled-in database row, a formatted summary. One example beats three paragraphs of description.

Be explicit about what to do when there's nothing to say. "If no agent exceeds the team average, do nothing. No row, no message." Without this, agents invent work to appear useful.

Keep scope tight. Point the agent at one database, or one page, or one specific date range — not "all feedback." Smaller scope = more reliable output and fewer credits burned.

Short beats long. If instructions run past a page, split the workflow into two agents.


Authentication fallback: Bearer token

Most users should use OAuth (the default flow in Step 3). If your security team requires static credentials, Enterpret supports Bearer token auth:

  1. In Enterpret, go to Settings → MCP

  2. Under Auth Token, click Generate

  3. Copy the token

  4. In Notion's Custom MCP server setup, select header-based auth

  5. Add header: Authorization: Bearer YOUR_AUTH_TOKEN_HERE

Tokens expire after 6 months. Rotate and update the Notion connection when they do.
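If you want to sanity-check the header outside Notion, you can build the same request in Python. This only constructs the request object so you can inspect the header — nothing is sent over the network:

```python
import urllib.request

TOKEN = "YOUR_AUTH_TOKEN_HERE"  # paste the token generated in Enterpret

req = urllib.request.Request(
    "https://wisdom-api.enterpret.com/server/mcp",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# The header exactly as Notion would send it.
print(req.get_header("Authorization"))  # Bearer YOUR_AUTH_TOKEN_HERE
```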


Troubleshooting

"Custom MCP server" option is missing

Your workspace admin hasn't enabled custom MCP servers yet. Ask them to complete Step 1.

OAuth window closes without connecting

Pop-up blocker. Allow pop-ups from notion.so and retry.

Agent returns no results

Either your Enterpret account doesn't have access to the relevant data, or the query scope is too narrow. Check account permissions and widen the date range or theme filter.

Agent retries the same failed query every run

Instructions are probably too prescriptive on query syntax. Rewrite as an outcome (what you want) rather than a sequence (how to get it). Let the Enterpret tools handle query construction.

Scheduled agent stops producing output

Notion emails the agent owner on failure. Check your inbox and open the agent's activity log (clock icon) to see the error.

Connection shows "Server unavailable"

Network or Enterpret-side issue. Retry, and verify you can reach https://wisdom-api.enterpret.com/server/mcp in your browser.

