# Labels

## Overview

The Labels API is a server-side text classification endpoint that returns raw moderation labels for a piece of text. It is designed for integrations that need the classification signal on its own — without creating review-queue items, flags, actions, or webhooks.

Use it when you want to label text and decide yourself what to do with the result: filter content, route it, tag it for analytics, or feed it into your own downstream systems. For the full automated moderation workflow (review queue, actions, appeals), use the [Moderation Check](/moderation/docs/<framework>/content-moderation/check/) endpoint instead.

### Storage model

Calls to the Labels API are **stateless by default**. The request is classified by your configured providers (see [Provider Routing](#provider-routing)), the response is returned synchronously, and the content, labels, and classification are **not persisted** by Stream.

Optional storage is available for organizations that need to query labels historically (for analytics, trend analysis, investigations, or custom dashboards). When storage is enabled, results containing at least one label are persisted and can be queried via the `queryLabelResults` method. Content that returns zero labels is not stored.

<admonition type="note">

Storage is an opt-in capability for high-volume use cases and is not enabled by default. Please [contact us](mailto:support@getstream.io) if you want persistence turned on for your organization. When storage is disabled, the classification endpoint works normally and the query endpoint returns empty pages.

</admonition>

---

## Provider Routing

The classification endpoint runs different providers depending on the request. No setup is required for the default path — send text, get labels.

| Mode              | Trigger                                       | Provider                                                                                                                                                        |
| ----------------- | --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Default (NLP)** | No `policy`; `content_type` omitted or set to `text` | Stream's managed NLP text classifier.                                                                                                                           |
| **Custom policy** | `policy` set on the request                   | Whatever the policy configures — LLM-based classifier, NLP, blocklists, or a combination. See [Policies](/moderation/docs/<framework>/configuration/policies/). |
| **Username**      | `content_type: "username"`                    | Username-optimized text classifier.                                                                                                                             |

To customize classification (custom LLM prompts, specific blocklists, your own harm taxonomy), define a [policy](/moderation/docs/<framework>/configuration/policies/) and pass its key as `policy` on the request.
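Policy keys must match `^[\w-:]*$` and be at most 128 characters (see the request params below). A minimal client-side guard, as a sketch; the server performs its own validation, so this only catches obvious mistakes early:

```javascript
// Client-side sanity check for policy keys before sending a request.
// Mirrors the documented constraints: max 128 chars, pattern ^[\w-:]*$.
const POLICY_KEY_PATTERN = /^[\w-:]*$/;

function isValidPolicyKey(key) {
  return key.length <= 128 && POLICY_KEY_PATTERN.test(key);
}
```

For example, `isValidPolicyKey("chat-policy:v2")` returns `true`, while keys containing spaces or other punctuation are rejected.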

---

## Classify Content

Classifies a single piece of text and returns labels synchronously. Server-side only.

<Tabs>

```js label="Node"
const response = await client.moderation.labels({
  content: "you are a fucking idiot",
  content_type: "message",
  category: "chat",
  content_id: "msg-123",
  user_id: "user-42",
});
```

```python label="Python"
response = client.moderation().labels(
    content="you are a fucking idiot",
    content_type="message",
    category="chat",
    content_id="msg-123",
    user_id="user-42",
)
```

```go label="Go"
response, err := client.Moderation().Labels(ctx, &getstream.LabelsRequest{
    Content:     "you are a fucking idiot",
    ContentType: getstream.PtrTo("message"),
    Category:    getstream.PtrTo("chat"),
    ContentID:   getstream.PtrTo("msg-123"),
    UserID:      getstream.PtrTo("user-42"),
})
```

```java label="Java"
LabelsResponse response = client.moderation()
    .labels(LabelsRequest.builder()
        .content("you are a fucking idiot")
        .contentType("message")
        .category("chat")
        .contentID("msg-123")
        .userID("user-42")
        .build())
    .execute()
    .getData();
```

```php label="PHP"
$response = $client->moderation()->labels(new LabelsRequest(
    content: 'you are a fucking idiot',
    contentType: 'message',
    category: 'chat',
    contentID: 'msg-123',
    userID: 'user-42',
));
```

```ruby label="Ruby"
response = client.moderation.labels(
  GetStream::Generated::Models::LabelsRequest.new(
    content: "you are a fucking idiot",
    content_type: "message",
    category: "chat",
    content_id: "msg-123",
    user_id: "user-42",
  )
)
```

```csharp label=".NET"
var response = await client.Moderation.LabelsAsync(new LabelsRequest
{
    Content = "you are a fucking idiot",
    ContentType = "message",
    Category = "chat",
    ContentID = "msg-123",
    UserID = "user-42",
});
```

</Tabs>

### Request Params

| Field          | Required | Type                      | Description                                                                                                                                                                    |
| -------------- | -------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `content`      | yes      | string (max 20,000 chars) | The text to classify.                                                                                                                                                          |
| `content_type` | no       | string                    | One of `text` (default), `message`, or `username`. Persisted alongside the result so stored queries can filter by source. `username` routes to the username-optimized classifier. |
| `category`     | no       | string (max 128)          | Free-form tag for grouping results. Filterable on the query endpoint.                                                                                                          |
| `content_id`   | no       | string (max 256)          | Your own identifier for the content being moderated (e.g. a message id). Echoed back on the response and stored for correlation.                                               |
| `user_id`      | no       | string (max 256)          | Identifier for the user who authored the content. Enables filtering stored results by user.                                                                                    |
| `policy`       | no       | string (max 128)          | Policy key to use instead of the default NLP classifier. Must match `^[\w-:]*$`. See [Policies](/moderation/docs/<framework>/configuration/policies/).                         |

### Response

```json
{
  "labels": ["insult", "vulgarity"],
  "harm_type": "hateful",
  "directed_at": "user",
  "recommended_action": "keep",
  "severity": "low",
  "language": "en",
  "content_id": "msg-123",
  "masked_content": "you are a ******* *****",
  "duration": "119.85ms"
}
```

| Field                | Description                                                                                                                                                   |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `labels`             | Specific violation labels detected in the content (e.g. `insult`, `threat`, `vulgarity`). Always lowercase. An empty array means no violations were detected. |
| `harm_type`          | High-level harm category (e.g. `hateful`, `sexual`, `neutral`). Always lowercase.                                                                             |
| `directed_at`        | Target of the content, when the provider can infer it (`user`, `group`, `everyone`, `none`). Omitted when unavailable.                                        |
| `recommended_action` | Suggested action for the content: `keep`, `flag`, or `remove`.                                                                                                |
| `severity`           | Severity of the detected harm: `none`, `low`, `medium`, `high`, or `critical`.                                                                                |
| `language`           | Detected language code (ISO 639-1), e.g. `en`, `fr`, `es`.                                                                                                    |
| `content_id`         | Echoed from the request for correlation.                                                                                                                      |
| `masked_content`     | Returned only when a blocklist rule with a `mask` action rewrote the content. Contains the original text with matched tokens replaced by mask characters.     |
| `duration`           | Time spent running the classifier.                                                                                                                            |

<admonition type="info">

All enum-style values returned by this API (`labels`, `harm_type`, `directed_at`, `recommended_action`, `severity`, `language`) are lowercase. Filters on the query endpoint use the same lowercase values.

</admonition>
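Because the endpoint only returns a signal, acting on it is up to you. A minimal dispatch sketch over the response fields above; the outcome names (`"publish"`, `"hide"`, `"review"`) are hypothetical placeholders for your own system, not part of the API:

```javascript
// Map a classification result to an application-level decision.
function decideAction(result) {
  if (result.labels.length === 0) return "publish"; // no violations detected
  if (result.recommended_action === "remove") return "hide";
  if (result.severity === "high" || result.severity === "critical") return "review";
  return "publish"; // low-severity keep/flag: let it through, optionally tag it
}
```

With the sample response above (`recommended_action: "keep"`, `severity: "low"`), this would publish the message and leave any tagging or analytics to your downstream systems.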

### Error handling

- Provider failures (rate limits, upstream timeouts) surface as API errors. Content is never silently returned as `keep` when classification fails.
- Requesting a `policy` that doesn't exist returns a `400` error.
- The endpoint is server-side only. Client-side tokens are rejected.
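Since failures surface as errors, transient provider issues such as rate limits and timeouts are worth retrying. A generic backoff sketch: `classify` stands in for any async call, e.g. `() => client.moderation.labels({...})`, and the backoff numbers are arbitrary assumptions:

```javascript
// Retry an async classification call with exponential backoff.
async function classifyWithRetry(classify, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await classify();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 250ms, 500ms, 1s, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, 2 ** i * 250));
      }
    }
  }
  throw lastError; // surface the failure rather than defaulting to "keep"
}
```

Rethrowing the last error keeps the documented guarantee intact: a failed classification is an error your code must handle, not a `keep`.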

---

## Query Stored Results

Query persisted label results with filtering, sorting, and pagination. Requires storage to be enabled on the organization — see [Storage model](#storage-model).

<Tabs>

```js label="Node"
const response = await client.moderation.queryLabelResults({
  filter: {
    category: "chat",
    language: "en",
    harm_type: "hateful",
    date_range: "2026-04-01_2026-04-23",
  },
  sort: [{ field: "created_at", direction: -1 }],
  limit: 25,
});
```

```python label="Python"
response = client.moderation().query_label_results(
    filter={
        "category": "chat",
        "language": "en",
        "harm_type": "hateful",
        "date_range": "2026-04-01_2026-04-23",
    },
    sort=[SortParamRequest(field="created_at", direction=-1)],
    limit=25,
)
```

```go label="Go"
response, err := client.Moderation().QueryLabelResults(ctx, &getstream.QueryLabelResultsRequest{
    Filter: map[string]any{
        "category":   "chat",
        "language":   "en",
        "harm_type":  "hateful",
        "date_range": "2026-04-01_2026-04-23",
    },
    Sort: []getstream.SortParamRequest{
        {Field: getstream.PtrTo("created_at"), Direction: getstream.PtrTo(-1)},
    },
    Limit: getstream.PtrTo(25),
})
```

```java label="Java"
QueryLabelResultsResponse response = client.moderation()
    .queryLabelResults(QueryLabelResultsRequest.builder()
        .filter(Map.of(
            "category", "chat",
            "language", "en",
            "harm_type", "hateful",
            "date_range", "2026-04-01_2026-04-23"
        ))
        .sort(List.of(SortParamRequest.builder().field("created_at").direction(-1).build()))
        .limit(25)
        .build())
    .execute()
    .getData();
```

```php label="PHP"
$response = $client->moderation()->queryLabelResults(new QueryLabelResultsRequest(
    filter: [
        'category' => 'chat',
        'language' => 'en',
        'harm_type' => 'hateful',
        'date_range' => '2026-04-01_2026-04-23',
    ],
    sort: [new SortParamRequest(field: 'created_at', direction: -1)],
    limit: 25,
));
```

```ruby label="Ruby"
response = client.moderation.query_label_results(
  GetStream::Generated::Models::QueryLabelResultsRequest.new(
    filter: {
      category: "chat",
      language: "en",
      harm_type: "hateful",
      date_range: "2026-04-01_2026-04-23",
    },
    sort: [GetStream::Generated::Models::SortParamRequest.new(field: "created_at", direction: -1)],
    limit: 25,
  )
)
```

```csharp label=".NET"
var response = await client.Moderation.QueryLabelResultsAsync(new QueryLabelResultsRequest
{
    Filter = new Dictionary<string, object>
    {
        { "category", "chat" },
        { "language", "en" },
        { "harm_type", "hateful" },
        { "date_range", "2026-04-01_2026-04-23" },
    },
    Sort = new List<SortParamRequest>
    {
        new() { Field = "created_at", Direction = -1 },
    },
    Limit = 25,
});
```

</Tabs>

### Available Filters

| Filter               | Operators  | Description                                                                       |
| -------------------- | ---------- | --------------------------------------------------------------------------------- |
| `category`           | `eq`, `in` | Filter by category tag set on the request.                                        |
| `language`           | `eq`, `in` | Filter by detected language.                                                      |
| `content_type`       | `eq`, `in` | Filter by the `content_type` sent on the request (`text`, `message`, `username`). |
| `harm_type`          | `eq`, `in` | Filter by harm category.                                                          |
| `labels`             | `eq`, `in` | Filter by label name — `eq` matches a single label, `in` matches any of a list.   |
| `recommended_action` | `eq`, `in` | Filter by recommended action.                                                     |
| `severity`           | `eq`, `in` | Filter by severity.                                                               |
| `policy`             | `eq`       | Filter by policy key.                                                             |
| `content_id`         | `eq`       | Look up results for a specific content id.                                        |
| `user_id`            | `eq`, `in` | Filter by author user id.                                                         |
| `date_range`         | —          | Date range in `YYYY-MM-DD_YYYY-MM-DD` format.                                     |

### Response

```json
{
  "label_results": [
    {
      "id": "30dff5ac-6b80-486c-9da1-fcb938849642",
      "created_at": "2026-04-16T10:30:00Z",
      "content": "you are a fucking idiot",
      "masked_content": "you are a ******* *****",
      "content_type": "message",
      "category": "chat",
      "labels": ["insult", "vulgarity"],
      "harm_type": "hateful",
      "directed_at": "user",
      "recommended_action": "keep",
      "severity": "low",
      "language": "en",
      "content_id": "msg-123",
      "user_id": "user-42"
    }
  ],
  "next": "...",
  "prev": "..."
}
```

Use `next` / `prev` for cursor-based pagination.
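To drain all pages, follow `next` until it comes back empty. A sketch with the page fetch injected so the loop itself is SDK-agnostic; in practice `fetchPage` would forward the cursor to `queryLabelResults` (exactly how the cursor is passed is SDK-specific):

```javascript
// Follow `next` cursors until the last page.
// `fetchPage` is any async (cursor) => ({ label_results, next }) function.
async function collectAllLabelResults(fetchPage) {
  const results = [];
  let cursor;
  do {
    const page = await fetchPage(cursor);
    results.push(...page.label_results);
    cursor = page.next;
  } while (cursor);
  return results;
}
```

Keep in mind this loads every matching result into memory; for large date ranges, process each page as it arrives instead.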

---

## Labels vs Check

|                                             | Labels                     | [Moderation Check](/moderation/docs/<framework>/content-moderation/check/) |
| ------------------------------------------- | -------------------------- | -------------------------------------------------------------------------- |
| **Purpose**                                 | Text classification signal | Full moderation workflow                                                   |
| **Creates review-queue items**              | No                         | Yes                                                                        |
| **Triggers actions (flag, remove, bounce)** | No                         | Yes                                                                        |
| **Webhooks**                                | No                         | Yes                                                                        |
| **Storage**                                 | Opt-in                     | Always stored in the review queue                                          |

Use Labels when you want the classification result and will handle the rest yourself. Use Check when you want Stream to run the end-to-end moderation workflow.

---

## Rate Limits

| Endpoint            | Default limit                       |
| ------------------- | ----------------------------------- |
| `labels`            | High tier (10,000 requests/window)  |
| `queryLabelResults` | Normal tier (1,000 requests/window) |

If your workload requires higher throughput, [contact us](mailto:support@getstream.io) to arrange a per-organization override.

---

## Related APIs

- [Moderation Check](/moderation/docs/<framework>/content-moderation/check/) — Full moderation workflow with review queue and actions.
- [Policies](/moderation/docs/<framework>/configuration/policies/) — Configure custom classifiers (LLM prompts, blocklists, harm taxonomies).
- [AI Text (NLP)](/moderation/docs/<framework>/engines/ai-text/) — The NLP engine used by default.
- [AI Text (LLM)](/moderation/docs/<framework>/engines/ai-llm-text/) — LLM-based classification available via policies.


---

This page was last updated at 2026-04-23T18:43:24.559Z.

For the most recent version of this documentation, visit [https://getstream.io/moderation/docs/node/content-moderation/labels/](https://getstream.io/moderation/docs/node/content-moderation/labels/).