SQS

Stream can send payloads of all events from your application to an Amazon SQS queue you own.

A chat application with many users generates a lot of events. With a standard webhook configuration, events are posted to your server, which can overwhelm unprepared servers during high-use periods. While the server is down, it cannot receive webhooks, and those events fail to be processed. One way to avoid this issue is to use Stream Chat's support for sending webhooks to Amazon SQS.

SQS removes the chance of losing Chat events by providing a large, scalable queue that holds events generated by Stream Chat until your server or other consumers can process them.

The complete list of supported events is identical to those sent through webhooks and can be found on the Events page.

Configuration

You can configure your SQS queue programmatically using the REST API or an SDK with Server Side Authorization.

There are two ways to configure authentication on your SQS queue:

  1. By providing a key and secret

  2. Or by having Stream's AWS account assume a role on your SQS queue. With this option you omit the key and secret, but instead you set up a resource-based policy to grant Stream SendMessage permission on your SQS queue. The following policy needs to be attached to your queue (replace the value of Resource with the fully qualified ARN of your queue):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowStreamProdAccount",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::185583345998:root"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-west-2:1111111111:customer-sqs-for-stream"
    }
  ]
}
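
Since only the Resource value changes per queue, the policy document can also be generated programmatically. A minimal sketch (the function name is ours, and the queue ARN is a placeholder; attach the resulting JSON to the queue's Policy attribute via the SQS console or the AWS SDK):

```javascript
// Build the resource-based policy that grants Stream's production AWS
// account SendMessage on your queue. Only the queue ARN varies.
function buildStreamSqsPolicy(queueArn) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Sid: "AllowStreamProdAccount",
        Effect: "Allow",
        Principal: { AWS: "arn:aws:iam::185583345998:root" },
        Action: "SQS:SendMessage",
        Resource: queueArn,
      },
    ],
  };
}

// Serialize for the SQS "Policy" queue attribute.
const policyJson = JSON.stringify(
  buildStreamSqsPolicy("arn:aws:sqs:us-west-2:1111111111:customer-sqs-for-stream")
);
console.log(policyJson);
```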

To configure an SQS queue, use the event_hooks array and Update App Settings method:

// Note: Any previously existing hooks not included in event_hooks array will be deleted.
// Get current settings first to preserve your existing configuration.

// STEP 1: Get current app settings to preserve existing hooks
const response = await client.getAppSettings();
console.log("Current event hooks:", response.event_hooks);

// STEP 2: Add SQS hook while preserving existing hooks
const existingHooks = response.event_hooks || [];
const newSQSHook = {
  enabled: true,
  hook_type: "sqs",
  sqs_queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue",
  sqs_region: "us-east-1",
  sqs_auth_type: "keys", // or "resource" for role-based auth
  sqs_key: "yourkey",
  sqs_secret: "yoursecret",
  event_types: [], // empty array = all events
};

// STEP 3: Update with complete array including existing hooks
await client.updateAppSettings({
  event_hooks: [...existingHooks, newSQSHook],
});

// Test the SQS connection
await client.testSQSSettings();

Configuration Options

The following options are available when configuring an SQS event hook:

Option        | Type    | Description                                                                         | Required
--------------|---------|-------------------------------------------------------------------------------------|---------
id            | string  | Unique identifier for the event hook                                                | No. If empty, an ID is generated.
enabled       | boolean | Boolean flag to enable/disable the hook                                             | Yes
hook_type     | string  | Must be set to "sqs"                                                                | Yes
sqs_queue_url | string  | The AWS SQS queue URL                                                               | Yes
sqs_region    | string  | The AWS region where the SQS queue is located (e.g., "us-east-1")                   | Yes
sqs_auth_type | string  | Authentication type: "keys" for access key/secret or "resource" for role-based auth | Yes
sqs_key       | string  | AWS access key ID                                                                   | Yes, if sqs_auth_type is "keys"
sqs_secret    | string  | AWS secret access key                                                               | Yes, if sqs_auth_type is "keys"
event_types   | array   | Event types this hook should handle                                                 | No. Omitted or empty means subscribe to all existing and future events.
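
For example, a hook that uses role-based authentication and only receives new and updated messages could look like the following (the queue URL and region are illustrative placeholders):

```json
{
  "enabled": true,
  "hook_type": "sqs",
  "sqs_queue_url": "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue",
  "sqs_region": "us-east-1",
  "sqs_auth_type": "resource",
  "event_types": ["message.new", "message.updated"]
}
```

With "sqs_auth_type": "resource", the sqs_key and sqs_secret fields are omitted and the resource-based policy from the Configuration section grants Stream access instead.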

SQS Permissions

Stream needs the right permissions on your SQS queue to be able to send events to it. If events are not showing up in your queue, add the following permission policy to the queue:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1459523779000",
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:SendMessage",
        "sqs:SendMessageBatch",
        "sqs:GetQueueAttributes"
      ],
      "Resource": ["arn:aws:sqs:region:acc_id:queue_name"]
    }
  ]
}

Payload Compression

SQS honours the same enable_hook_payload_compression flag — see the webhooks overview for enablement and the production checklist. When compression is on, the message body is gzipped + base64-encoded (SQS only accepts UTF-8) and the producer sets these message attributes:

{
  "content_encoding": "gzip",
  "content_type": "application/json",
  "payload_encoding": "base64"
}

Reading messages from SQS

Call parseSqs on your chat client with the SQS Body string — it reverses the base64 + gzip envelope and returns a typed event (with an UnknownEvent fallback), the same shape verifyAndParseWebhook returns for HTTP. The same call works whether or not compression is on (encoding is detected from the body bytes, so the content_encoding / payload_encoding attributes are only a hint).

No X-Signature on SQS. Stream does not ship an HMAC signature on SQS deliveries. The transport is authenticated end-to-end — the queue is gated by IAM, so only your account can read it, and only Stream's account can write to it.

// message is the SQS Message object you received from ReceiveMessageCommand
const event = client.parseSqs(message.Body);
// event.type, event.message, event.user, ...

Where each argument comes from

Argument | Source                                       | Example
---------|----------------------------------------------|--------
body     | Body field of the SQS message (UTF-8 string) | message.Body

parse_sqs takes only the message body. No HMAC is involved; use your API secret only for other chat API calls, not for parsing SQS payloads.

Need a stateless helper? The same decode + parse logic is exposed as a static / module-level parse_sqs (see the webhooks overview for per-language imports). Use it in workers that don't keep a chat client around.

Building this without a Stream SDK? Expand the per-language reference implementation below.

SQS Best Practices and Assumptions

  • Set the maximum message size to 256 KB.

Messages bigger than the maximum message size will be dropped.

  • Set up a dead-letter queue for your main queue.

This queue will hold the messages that couldn't be processed successfully and is useful for debugging your application.
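
The dead-letter queue works best when your worker deletes only the messages it actually processed. A minimal sketch of that pattern (function and field names are ours; in production the messages array comes from an AWS SDK ReceiveMessageCommand, and the returned receipt handles are passed to DeleteMessageCommand; anything not deleted is redelivered and, after the queue's maxReceiveCount, moves to the dead-letter queue):

```javascript
// Process a batch of SQS messages. Only messages whose handler succeeded
// are returned for deletion; failed ones stay in the queue, get
// redelivered, and eventually land in the dead-letter queue.
function processBatch(messages, handleEvent) {
  const toDelete = [];
  for (const message of messages) {
    try {
      const event = JSON.parse(message.Body); // or client.parseSqs(message.Body)
      handleEvent(event);
      toDelete.push(message.ReceiptHandle);
    } catch (err) {
      // Leave the message in the queue; SQS will redeliver it.
      console.error("failed to process message", message.MessageId, err);
    }
  }
  return toDelete;
}
```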