TL;DR:
- Any chat message pairing a patient identifier with health information is PHI, making your entire messaging infrastructure a HIPAA compliance surface.
- HIPAA's three rules translate into concrete engineering requirements: role-based access, encryption, tamper-evident audit logs, and PHI-safe push notifications.
- Building compliant chat from scratch needs 2-3 dedicated engineers; a HIPAA-eligible API like Stream compresses that to weeks of integration work.
- No vendor makes you compliant on its own. Stream signs the BAA and provides the technical safeguards, but you still own the policies, risk assessments, and training.
Every message in a healthcare chat contains protected health information the moment a clinician pairs a patient name with a diagnosis, lab result, or appointment. That makes your chat infrastructure a HIPAA compliance surface, whether you designed it to be one or not.
This guide covers what HIPAA actually requires of a messaging system, how those requirements translate into specific architectural decisions, and how to build a compliant chat app without spending 18 months on infrastructure you could buy. We'll work through the regulatory framework first, then map each rule to the technical controls it demands, and close with the build-versus-buy tradeoff for teams shipping telemedicine apps today.
The HIPAA Problem for Messaging and Chat
If you’ve ever had the misfortune to be in hospital, you’ll notice one thing about modern medicine: doctors and nurses are on their phones a lot. Of course they are. They’ve got to chat about last night’s The Pitt. Seriously, phones are the primary means of communication for everyone now, and as healthcare professionals have to constantly be on the move, messaging and chatting on their mobile devices is the easiest way to pass information:
- "Room 302's BP is 180/110, want to adjust the lisinopril?"
- "Can you check on Mrs. Chen? Her O2 sat dropped to 91 after we moved her off the drip."
- "Thompson's imaging came back, looks like a retinal detachment. Can you fit him in today?"
Mrs. Chen, Thompson, and the person in Room 302 are all helped by this easy sharing of information. Still, each of those is health data tied to an identifying detail, making it protected health information (PHI) and subject to the HIPAA privacy rule.
When chat messages become protected health information
Here’s the exact definition of PHI from the HHS website:
The Privacy Rule protects all "individually identifiable health information" held or transmitted by a covered entity or its business associate, in any form or media, whether electronic, paper, or oral. The Privacy Rule calls this information "protected health information (PHI)."
- "Individually identifiable health information" is information, including demographic data, that relates to:
- the individual's past, present, or future physical or mental health or condition,
- the provision of health care to the individual, or
- the past, present, or future payment for the provision of health care to the individual, and that identifies the individual or for which there is a reasonable basis to believe it can be used to identify the individual.
What does this mean? Two things must be true at the same time: the information relates to someone's health, healthcare, or payment for healthcare, and it identifies (or could reasonably identify) that person. In a chat system, those two conditions collide in almost every conversation.
A message reading “John Smith's MRI shows a herniated disc at L4-L5” is unambiguously PHI. But so is “Mrs. Rodriguez has a dermatology consult Tuesday” (name plus provision of care), “Please refill patient #4521's metformin 500mg” (medical record number plus treatment), and even “The patient in room 117, A1C came back at 8.2” if the room number can identify the individual.
This is not just restricted to professional-to-professional communication. Any healthtech product with a messaging feature creates the same exposure:
- A telehealth app in which a patient describes symptoms to a provider via chat is generating ePHI with every message in that thread.
- A medication management app creates PHI the moment a user types their name next to a question about dosage.
- The same goes for a mental health platform with in-app messaging, a patient intake widget that collects reasons for a visit, or a caregiver coordination app where family members discuss a loved one's treatment plan.
All of these products handle PHI the instant an identifier meets health information in a message.
The consequences of getting this wrong are not abstract. HIPAA violations carry penalties up to $2.19 million per violation category per year. In 2024, HHS collected $9.9 million in fines across 22 investigations, up 37% from the year before.
So what does HIPAA actually require of the systems carrying these messages? The answer lives in three rules: the Privacy Rule, the Security Rule, and the Breach Notification Rule. Each one maps to specific architectural and operational decisions in how you build, buy, and manage chat infrastructure.
The three HIPAA rules that govern your chat app infrastructure
HIPAA isn't a single thing, a point Stream's own Trust Center takes care to spell out.
The compliance obligations that apply to messaging come from three separate rules, each with a different focus. Understanding which rule creates which requirement makes the difference between checking boxes and actually building a system that holds up under scrutiny.
The Privacy Rule
The Privacy Rule governs how PHI can be used and disclosed. HHS has explicitly stated that covered providers may communicate electronically with patients, including through email and messaging, provided they apply reasonable safeguards.
What the Privacy Rule does require is discipline around access and scope. The minimum necessary standard requires covered entities to limit the use and disclosure of PHI to only what's needed for the intended purpose. For a chat system, this has direct architectural implications:
- A billing team member shouldn't have access to clinical discussion channels.
- A specialist consulted on one case shouldn't see the full patient roster in a group thread.
- Search functionality should respect role boundaries, not surface messages across every channel a user could theoretically access.
The good news is that these are good UX practices regardless; you should be doing all of this in any chat app. In a healthcare scenario, though, they're not just best practices, they're Privacy Rule requirements.
The Privacy Rule also gives patients the right to request specific communication channels. If a patient asks to be contacted only through your app's secure messaging, not by phone, you're obligated to accommodate that request where reasonable. Your chat system needs to support these preferences rather than override them.
The Security Rule
The Security Rule is where most of the engineering requirements live.
It applies specifically to electronic PHI but is intentionally technology-neutral. It doesn't tell you which encryption algorithm to use or which cloud provider to run on. Instead, it defines outcomes:
- ePHI must be protected against unauthorized access during transmission
- Stored data must be encrypted
- Users must be uniquely identified
- Every access event must be logged
How you achieve those outcomes is up to you, as long as the measures are reasonable and appropriate for your environment.
The Breach Notification Rule
The Breach Notification Rule defines what happens when something goes wrong. If unsecured PHI is exposed, you must notify affected individuals, HHS, and (for breaches affecting 500 or more people) the media, all within 60 days of discovery.
The word "unsecured" is doing a lot of work in that sentence. HHS defines unsecured PHI as PHI that hasn't been rendered unusable, unreadable, or indecipherable to unauthorized persons. The primary method for achieving this is encryption that meets HHS-specified standards. If your chat messages were encrypted in accordance with those standards and the encryption keys were not compromised, the data is considered "secured," and the breach notification requirement is not triggered.
This is called the encryption safe harbor, and it turns encryption from a technical best practice into a direct business protection. An unencrypted chat database breach triggers mandatory notifications to every affected patient, a public filing with HHS, potential media coverage, and a near-certain OCR investigation. An encrypted database with intact key management doesn't trigger any of that. The same incident, two completely different outcomes, determined entirely by whether you encrypted properly.
The 18 HIPAA identifiers (and why chat makes them unavoidable)
HIPAA's Safe Harbor de-identification method lists 18 specific identifiers that, when linked to health information, create PHI. Removing all 18 is one path to de-identification. But in a messaging system, you're not trying to de-identify data. You're trying to understand just how much of what flows through your chat is PHI.
The answer, when you look at this list, is almost everything.
| Identifier | How it appears in chat |
|---|---|
| Names | Every patient mention, user profiles, @-mentions |
| Geographic data (smaller than state) | Addresses shared for home health, pharmacy locations, and facility names |
| Dates (except year) | Appointment dates, birthdates, and admission/discharge dates in scheduling messages |
| Phone numbers | Shared for callback requests, contact updates |
| Fax numbers | Referenced in referral coordination messages |
| Email addresses | Account identifiers, patient contact info shared between staff |
| Social Security numbers | Occasionally shared in benefits/insurance discussions |
| Medical record numbers | Referenced constantly in clinical coordination |
| Health plan beneficiary numbers | Insurance discussions, prior auth threads |
| Account numbers | Billing-related messages |
| Certificate/license numbers | Provider credential discussions |
| Vehicle identifiers and serial numbers | Rare, but can appear in injury-related cases |
| Device identifiers and serial numbers | Medical device discussions, remote monitoring threads |
| Web URLs | Patient portal links, shared records links |
| IP addresses | Stored in connection logs for every chat session |
| Biometric identifiers | Fingerprint/face ID login metadata |
| Full-face photographs | Profile photos, wound photos, and imaging shared in chat |
| Any other unique identifying number | Custom patient IDs, visit numbers, ticket numbers |
The identifiers that show up most frequently in messaging are names, dates, medical record numbers, and IP addresses. The first three appear in the message content. The fourth is captured automatically by your infrastructure every time someone connects. This means even a chat system that never displays a patient name might still collect an identifier (the IP address) and associate it with health-related activity.
Here's what this looks like in practice. This screenshot shows an example care team chat. Every message in this thread contains PHI, and most contain multiple identifiers:
The practical takeaway is that if you're building a chat feature for a telemedicine application, assume every message is PHI. Designing around that assumption is far safer than trying to classify messages after the fact.
The HIPAA technical safeguards you need in your chat architecture
This is where regulation meets engineering. The Security Rule's technical safeguards define the controls that apply to any system handling ePHI, and each one translates into concrete decisions about how your messaging system is built and operated.
Encryption
HIPAA requires encryption both in transit and at rest. For chat infrastructure, this means:
- In transit: TLS 1.2 or higher for every connection between a client app and your servers, and between your servers and any third-party services.
- At rest: AES-256 encryption for message databases, file attachment storage, backups, and any search indices that contain message content.
End-to-end encryption goes a step further: only the sender and recipient can decrypt the message content, so even the infrastructure provider can't read it. E2EE provides the strongest protection but introduces tradeoffs. Server-side search, content moderation, and analytics all become significantly harder when the server can't see the plaintext. Most chat implementations in telemedicine use TLS plus server-side encryption at rest, with E2EE as an optional layer for the most sensitive conversations.
Access controls
There are four access control specifications in the Security Rule:
- Unique user identification (required): Every person accessing the chat system must have their own account. Shared logins are a direct violation. This sounds obvious, but shared "nurse station" accounts remain common in clinical settings.
- Emergency access procedures (required): You need a documented process for accessing ePHI during an emergency, even if normal authentication is unavailable.
- Automatic logoff (addressable): Chat sessions must time out after a period of inactivity. Telehealth implementations typically use 5-15 minute windows, balancing security with the reality that clinicians constantly context-switch.
- Encryption and decryption (addressable): Access to stored messages must require decryption that is tied to authenticated user sessions.
In practice, this translates to role-based access control (RBAC) that determines which users can see which channels, send messages to which groups, and access which patient-related threads. A well-designed RBAC model maps directly to the minimum necessary standard from the Privacy Rule.
Here's one way to implement that mapping. This uses Stream's channel types to segment access by role:
```javascript
// server/src/lib/users.js
// Map each role to the Stream channel types it is permitted to see.
// This is the Privacy Rule's minimum necessary standard (§164.502(b))
// expressed as code.
export const ROLE_ACCESS = {
  clinician: ['clinical', 'patient-care'],
  billing: ['billing'],
  patient: ['patient-care'],
  admin: ['clinical', 'patient-care', 'billing'],
};
```
The server then uses this map to build the channel filter that Stream's ChannelList component receives. A billing user's client never requests clinical channels, and Stream's membership model ensures the server wouldn't return them even if it did:
```javascript
// server/src/routes/channels.js
const types = allowedChannelTypes(user.role);
const filter = { members: { $in: [user.id] } };
```
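The allowedChannelTypes helper referenced here isn't shown in the snippet; a minimal version, assuming the ROLE_ACCESS map above, might look like this, with unknown roles resolving to no channel types rather than a permissive default:

```javascript
// Hypothetical implementation of the allowedChannelTypes helper used by
// the channels route. Unknown roles get an empty list (deny by default).
const ROLE_ACCESS = {
  clinician: ['clinical', 'patient-care'],
  billing: ['billing'],
  patient: ['patient-care'],
  admin: ['clinical', 'patient-care', 'billing'],
};

function allowedChannelTypes(role) {
  return ROLE_ACCESS[role] ?? [];
}

// The role's allowed types can then be combined with the membership
// filter so both constraints apply at once:
function channelFilter(user) {
  return {
    type: { $in: allowedChannelTypes(user.role) },
    members: { $in: [user.id] },
  };
}

console.log(JSON.stringify(channelFilter({ id: 'sam', role: 'billing' })));
// {"type":{"$in":["billing"]},"members":{"$in":["sam"]}}
```

Denying by default is the safer failure mode: a misconfigured role sees nothing, instead of everything.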
The result is that two users sitting at the same station see completely different channel lists depending on their role. Dr. Chen sees the clinical and patient-care channels they need:
Sam in Billing sees only the finance side:
Automatic logoff
The Security Rule's automatic logoff specification requires that sessions terminate after a period of inactivity. For a chat application, this means the token itself should have a short lifespan, not just the UI session.
When minting a token on the server, you set an expiration:
```javascript
// server/src/routes/auth.js
const issuedAt = Math.floor(Date.now() / 1000);
const expiresAt = issuedAt + config.sessionTtlSeconds; // default: 600s (10 min)
const token = serverClient.createToken(user.id, expiresAt, issuedAt);
```
On the client, a countdown timer shows the remaining session time. When the token expires, the chat client disconnects, and the user returns to the login screen, regardless of whether they were actively typing.
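A sketch of that client-side check, with illustrative names; the real client would wire disconnect into the chat SDK's own disconnect call:

```javascript
// Illustrative client-side session countdown. `expiresAt` mirrors the
// exp claim the server put in the token (seconds since epoch).
function secondsRemaining(expiresAt, nowSeconds = Math.floor(Date.now() / 1000)) {
  return Math.max(0, expiresAt - nowSeconds);
}

// Schedule a hard disconnect for the moment the token expires, no matter
// what the user is doing in the UI.
function scheduleLogoff(expiresAt, disconnect) {
  const timer = setTimeout(disconnect, secondsRemaining(expiresAt) * 1000);
  return () => clearTimeout(timer); // call to cancel (e.g., after re-auth)
}

console.log(secondsRemaining(1000, 400)); // 600 seconds left
console.log(secondsRemaining(400, 1000)); // 0: already expired
```

Deriving the countdown from the token's own expiry, rather than a separate UI timer, keeps the client and server in agreement about when the session actually ends.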
Audit logging
Audit controls are a required specification. Your chat system must log:
- Authentication events (successful and failed login attempts)
- Message access (who viewed, sent, or received messages containing PHI)
- Modifications (message edits, deletions, reactions to clinical content)
- Administrative actions (role changes, user provisioning and de-provisioning, channel creation)
- System events (configuration changes, security incidents, permission modifications)
These logs must be retained for a minimum of six years, secured against tampering, and reviewed regularly.
Tamper evidence is the key property. If someone modifies a log entry, every subsequent audit record should detect the change. One approach is a SHA-256 hash chain where each entry commits to the hash of the previous entry:
```javascript
// server/src/lib/auditLog.js
export function append({ actor, action, resource, meta = {} }) {
  const prev = lastHash();
  const body = {
    ts: new Date().toISOString(),
    actor: actor || 'anonymous',
    action,
    resource: resource || null,
    meta,
    prev,
  };
  const hash = sha256(JSON.stringify(body));
  const entry = { ...body, hash };
  appendFileSync(AUDIT_FILE, JSON.stringify(entry) + '\n');
  return entry;
}
```
Every route in the application emits audit entries through middleware, so the log captures the full access pattern without requiring individual developers to remember to add logging:
```javascript
// server/src/middleware/auditMiddleware.js
export function auditMiddleware(req, res, next) {
  req.audit = ({ action, resource, meta = {} }) => {
    const actor = req.header('x-user-id') || 'anonymous';
    return append({
      actor,
      action,
      resource,
      meta: { ip: req.ip, userAgent: req.header('user-agent') || null, ...meta },
    });
  };
  next();
}
```
The audit log then contains entries like this, one for every action in the system:
```json
{
  "ts": "2026-04-09T15:49:46.868Z",
  "actor": "dr-chen",
  "action": "channel.list",
  "resource": "dr-chen",
  "meta": {
    "ip": "::1",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36",
    "role": "clinician",
    "allowedTypes": ["clinical", "patient-care"],
    "matched": 3
  },
  "prev": "e8635d00fc1b3b7eeb7b0cf9fb1c4ecda81ee2c12b3619551f62566d3e6032ae",
  "hash": "12ce2397989068b9891666f16694a513ea3751d16d75320cf697d400d75a344d"
}
```
The meta object captures context: the user's IP address, browser, role, and what they were permitted to access. The prev and hash fields form the chain: prev is the hash of the previous entry, and hash is the hash of the current entry's contents. Change any character in any record, and every hash after it breaks.
In production, this file-based approach would be replaced by WORM (Write-Once, Read-Many) storage with SIEM integration. Many organizations implement tiered storage: hot storage for recent logs that feed into real-time anomaly detection, and cold WORM storage for the remainder of the six-year retention period.
The six-year retention requirement is easy to underestimate. For a busy messaging system generating thousands of events per day, that's a significant volume of immutable log data. Plan your storage architecture and costs accordingly.
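To put rough numbers on it, assume 50,000 audit events per day at about 1 KB each; the sample entry above, with its user-agent string and metadata, is already most of the way to that size. The inputs here are illustrative, not benchmarks:

```javascript
// Back-of-envelope storage estimate for six years of immutable audit logs.
const eventsPerDay = 50_000;
const bytesPerEvent = 1024;      // ~1 KB per JSON entry
const retentionDays = 6 * 365;   // six-year HIPAA retention
const totalBytes = eventsPerDay * bytesPerEvent * retentionDays;
console.log((totalBytes / 1024 ** 3).toFixed(1) + ' GB'); // ≈ 104.4 GB
```

And that's one copy: replication, indexes, and the SIEM's own ingestion multiply it. WORM tiering and compression change the cost, not the obligation.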
The mobile PHI problem
Most telemedicine messaging happens on phones, and phones are where the hardest compliance problems live.
The most dangerous vector is one that most teams don't think about until it's too late: push notifications. When a message arrives, the notification payload passes through Apple's Push Notification Service or Google's Firebase Cloud Messaging, both of which are third-party servers that neither you nor your users control. If that payload contains message content, you've just transmitted PHI through an uncovered third party.
And even if the content doesn't leave the device, it can display on the lock screen. A phone sitting on a cafeteria table showing "Mrs. Taylor's O2 sat dropped to 91" is a PHI disclosure to anyone who glances at it.
You can configure push templates at the application level, so the fix is enforced globally rather than relying on individual developers to remember:
```javascript
// server/src/lib/seed.js
export async function configurePhiSafePush() {
  await serverClient.updateAppSettings({
    apn_config: {
      notification_template: JSON.stringify({
        aps: {
          alert: {
            title: 'New secure message',
            body: 'Open the app to view.',
          },
          'mutable-content': 1,
          sound: 'default',
        },
      }),
    },
    firebase_config: {
      notification_template: JSON.stringify({
        notification: {
          title: 'New secure message',
          body: 'Open the app to view.',
        },
      }),
    },
  });
}
```
No {{message.text}}, no patient names, no clinical content. The notification prompts the user to open the app; the app handles the rest, including its own access controls and authentication.
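One way to keep that guarantee from regressing is a small check in your test suite that scans whatever templates you configure for content placeholders. This is a hypothetical guard, not a Stream API; the placeholder names follow the {{message.text}} convention above:

```javascript
// Hypothetical CI guard: fail the build if a push template references
// message content, sender names, or channel names.
const FORBIDDEN = /\{\{\s*(message\.text|sender\.name|channel\.name)\s*\}\}/;

function isPhiSafeTemplate(template) {
  return !FORBIDDEN.test(JSON.stringify(template));
}

const safe = { aps: { alert: { title: 'New secure message', body: 'Open the app to view.' } } };
const leaky = { notification: { title: '{{ sender.name }}', body: '{{ message.text }}' } };

console.log(isPhiSafeTemplate(safe));  // true
console.log(isPhiSafeTemplate(leaky)); // false
```

A regex check is crude, but it catches the common regression: a well-meaning developer adding sender names back into notifications "for better UX."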
Beyond notifications, mobile devices introduce several additional requirements:
| Requirement | iOS | Android |
|---|---|---|
| Device encryption | On by default when a passcode is set | Must be explicitly enforced via MDM policy |
| App-level encryption | Supported via Keychain Services | Supported via Android Keystore |
| Remote wipe | Supported natively via MDM | Supported natively via MDM |
| Selective wipe (BYOD) | Requires MDM with managed app containerization | Requires Android Enterprise work profile |
| Jailbreak/root detection | Recommended, block access if detected | Recommended, block access if detected |
Organizations must maintain remote wipe capability for any device accessing the chat application. For corporate-owned devices, this means full wipe. For BYOD environments (which account for roughly 80% of clinical mobile device usage), selective wipe removes the work container while preserving personal data. Either way, when a clinician leaves the organization, their access to chat history containing PHI must be revoked immediately.
Business Associate Agreements across the stack
A Business Associate Agreement is required whenever a third party creates, receives, maintains, or transmits PHI on behalf of a covered entity. For chat infrastructure, the BAA chain extends further than most teams expect.
Your chat API or SDK provider is a business associate if it processes or stores message content, user data, or message metadata. Your cloud hosting provider is a business associate. Under the HITECH Act, the obligation flows down the entire chain: business associates must obtain BAAs from their own subcontractors. That means every component that touches PHI needs coverage:
- Chat API/SDK provider
- Cloud infrastructure provider (AWS, Azure, GCP)
- Push notification services (if notification payloads could contain PHI)
- File storage services (for attachments, images shared in chat)
- Analytics or monitoring tools that ingest message data
- Search infrastructure if it indexes message content
- Backup and disaster recovery providers
A BAA must contain: permitted uses and disclosures, safeguard requirements, breach reporting obligations, subcontractor flow-down terms, support for individual access rights, availability of records for HHS compliance reviews, and PHI return or destruction at termination. Missing any of these creates enforcement risk.
"HIPAA-compliant" vs. "HIPAA-eligible"
This distinction matters enormously and is frequently misrepresented. No product is "HIPAA certified." HHS does not certify or endorse any technology. The FTC's $7.8 million action against BetterHelp included charges over the company's misleading "HIPAA certified" seal.
- HIPAA-eligible means a service has the security controls necessary to support compliance and will sign a BAA. It can be configured to handle ePHI appropriately.
- HIPAA-compliant describes the state of a complete implementation: the eligible service, properly configured, with all required policies, training, risk assessments, and operational procedures in place.
A vendor provides eligibility. You build compliance on top of it. No purchase makes you compliant on its own.
Build vs. buy
Building a HIPAA-compliant chat app from scratch is a serious infrastructure project. Before writing any messaging code, it's worth understanding what "from scratch" actually entails. A compliant messaging system requires, at a minimum:
- Real-time messaging infrastructure (WebSocket servers, message routing, delivery confirmation, presence, typing indicators)
- Encryption implementation (TLS termination, at-rest encryption with key management, potentially E2EE)
- Authentication and authorization (MFA, RBAC, session management, automatic logoff)
- Audit logging pipeline (capture, immutable storage, six-year retention, SIEM integration)
- File handling (encrypted upload, storage, virus scanning, access controls for attachments)
- Push notification service (with PHI-safe payload design)
- Mobile SDKs (iOS and Android, with device encryption enforcement and remote wipe support)
- Admin tooling (user provisioning, channel management, auditing, and compliance dashboards)
And that's just the build. Ongoing maintenance requires 2-3 dedicated engineers for security patches, protocol updates, and compliance documentation.
Beyond code, you need HIPAA compliance documentation, executed BAAs with every infrastructure provider, a formal risk assessment, a designated security officer, workforce training, penetration testing, and potentially SOC 2 Type II or HITRUST certification.
Using a compliant chat API
A HIPAA-compliant chat API compresses time-to-market to a few weeks of integration work. The vendor provides the messaging infrastructure, encryption, pre-built UI components, and the compliance foundation (certifications, BAA, audit logging). Your team focuses on the application layer: workflows, UX, clinical features, and EHR integration.
HIPAA controls map directly to Stream features:
| HIPAA control | Stream feature | Demo implementation |
|---|---|---|
| Minimum necessary (§164.502(b)) | Channel types + membership | ROLE_ACCESS map filters ChannelList by role |
| Audit controls (§164.312(b)) | Webhook events + server SDK | Append-only JSONL with SHA-256 hash chain |
| Automatic logoff (§164.312(a)(2)(iii)) | Token expiration | Short-TTL tokens + client-side countdown |
| Person/entity auth (§164.312(d)) | Unique user tokens | Per-user token minting (add MFA in production) |
| PHI-safe push | updateAppSettings templates | Generic APN + Firebase templates, no message content |
| Encryption in transit | Built-in TLS | All Stream traffic is encrypted by default |
The critical point in either path: compliance is always shared responsibility. A vendor provides the technical safeguards. You own the policies, procedures, risk assessments, training, and configuration decisions that complete the compliance picture.
Building a HIPAA-compliant chat app with Stream
Stream's chat infrastructure is designed to support compliance requirements for telemedicine apps across the full stack.
Healthline, Roche, Hinge Health, Good Doctor, Vida Health, Grow Therapy, and Serenis already run their patient and provider telemedicine messaging on Stream. The compliance controls covered throughout this article (encryption at rest and in transit, role-based channel access, PHI-safe push notifications, audit logging, and automatic logoff) are all supported out of the box, with BAAs available and data residency options across four regions.
Stream's chat and video APIs run under a single BAA, so teams building telehealth applications don't need to stitch together separate vendors for messaging and video. To see how it works, explore Stream's healthcare solutions or start building for free.
And for clarity, what happened last night on The Pitt isn’t covered by HIPAA.