snapcap
Internals

Persistence model

Every persistent piece of state the SDK and the Snap bundle care about lands in a single DataStore. From the consumer side it's one object passed into the SnapcapClient constructor; from the bundle side, it's localStorage / sessionStorage / indexedDB / document.cookie that happen to write to the same place. The whole design is a one-way translation: standard browser-storage APIs in, prefixed DataStore keys out.

This chapter is the key map and the reasoning behind it.

What a DataStore is

// src/storage/data-store.ts
export interface DataStore {
  get(key: string): Promise<Uint8Array | undefined>;
  set(key: string, value: Uint8Array): Promise<void>;
  delete(key: string): Promise<void>;
}

A flat map from string keys to byte blobs. Three impls live in the SDK:

  • FileDataStore — single JSON file, in-memory cache, eager flush on every write. Default.
  • MemoryDataStore — in-memory only; tests.
  • BYO — anything that satisfies the three methods.

FileDataStore and MemoryDataStore also implement getSync / setSync / keys(prefix). The Web Storage shim and the document.cookie shim need synchronous reads to satisfy their spec'd APIs; if a custom DataStore omits the sync helpers, those shims fall back to a hydrate-at-construction cache.
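For a BYO store, a minimal in-memory implementation with the optional sync helpers might look like this. A sketch only: the three async methods are the interface from above, and the shape of the sync helpers follows the text, but the class itself is illustrative, not SDK code.

```typescript
// Restated from the chapter: the three-method contract every store must meet.
interface DataStore {
  get(key: string): Promise<Uint8Array | undefined>;
  set(key: string, value: Uint8Array): Promise<void>;
  delete(key: string): Promise<void>;
}

// Illustrative BYO store: a Map with both the async contract and the
// optional sync helpers the Web Storage / cookie shims prefer.
class MapDataStore implements DataStore {
  private entries = new Map<string, Uint8Array>();

  async get(key: string) { return this.entries.get(key); }
  async set(key: string, value: Uint8Array) { this.entries.set(key, value); }
  async delete(key: string) { this.entries.delete(key); }

  // Optional sync surface; without these, the shims fall back to the
  // hydrate-at-construction cache described above.
  getSync(key: string) { return this.entries.get(key); }
  setSync(key: string, value: Uint8Array) { this.entries.set(key, value); }
  keys(prefix: string) {
    return [...this.entries.keys()].filter((k) => k.startsWith(prefix));
  }
}
```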

The key map

Everything written to the DataStore goes through one of four prefixes. The keys actually used by the SDK + bundle today:

Key                                    | Source                                                                           | What it is
---------------------------------------|----------------------------------------------------------------------------------|-----------
cookie_jar                             | tough-cookie via CookieJarStore and the CookieContainerShim / DocumentCookieShim | full serialized jar — domain-scoped cookies for accounts.snapchat.com, web.snapchat.com, *.snapchat.com
local_uds_uds.e2eeIdentityKey.shared   | Bundle (Fidelius wrapped identity)                                               | RWK-encrypted private key + identityKeyId metadata — bundle-managed
local_uds_uds.* (other)                | Bundle                                                                           | other UDS-prefixed bundle state (per-message E2EE temp keys, etc.)
Other local_* / session_* / indexdb_*  | Bundle                                                                           | Snap's own browser-storage writes — Zustand auth slice (bearer + self-user live here), feature flags, analytics ids, etc.

Bearer token and self-user are not SDK-owned keys. The bundle's Zustand auth slice holds them in memory and spills into local_* / session_* like any other bundle storage write. The SDK reads them back through authSlice(sandbox).authToken.token (and friends) — there's no separate cache to keep in sync.

The Snap bundle's other writes land alongside the SDK's, under the same four prefixes. We don't enumerate them — the bundle owns those entries and the SDK leaves them alone (including across client.logout()).

How the prefix routing works

Each shim is a thin adapter over a DataStore with a fixed prefix. The Web Storage shim:

// src/storage/storage-shim.ts:73-82
setItem(key: string, value: string): void {
  const fullKey = this.prefix + key;
  const bytes = new TextEncoder().encode(value);
  if (this.isSync()) {
    (this.store as SyncCapable).setSync(fullKey, bytes);
  } else {
    this.fallbackCache.set(key, value);
    void this.store.set(fullKey, bytes);
  }
}

StorageShim is constructed twice — LocalStorageShim makes one with prefix local_, SessionStorageShim makes one with prefix session_ (src/shims/storage-shim.ts:24,33). They share the DataStore but never collide.
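The prefix isolation can be condensed into a sketch. The `SyncCapable` name and the setItem/getItem pair follow the text; the class here is illustrative, not the actual shim (it drops the async fallback path for brevity):

```typescript
// Minimal sync-only slice of the shim's DataStore dependency.
interface SyncCapable {
  getSync(key: string): Uint8Array | undefined;
  setSync(key: string, value: Uint8Array): void;
}

// Illustrative shim: a fixed prefix over a shared store. Two instances
// with different prefixes share the backing map but never collide.
class MiniStorageShim {
  constructor(private prefix: string, private store: SyncCapable) {}

  setItem(key: string, value: string): void {
    this.store.setSync(this.prefix + key, new TextEncoder().encode(value));
  }

  getItem(key: string): string | null {
    const bytes = this.store.getSync(this.prefix + key);
    return bytes ? new TextDecoder().decode(bytes) : null;
  }
}
```

With `new MiniStorageShim("local_", store)` and `new MiniStorageShim("session_", store)`, the same `key` resolves to two distinct full keys, which is exactly the non-collision property claimed above.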

IDBFactoryShim uses a structured key. An open call like indexedDB.open("snapcap", 1) followed by db.transaction("fidelius","readwrite").objectStore("fidelius").put(blob, "identity") lands in the DataStore at:

indexdb_snapcap__fidelius__identity

— prefix + dbName + __ + storeName + __ + key. Two-underscore separators keep _ inside any user-supplied key from colliding with the delimiter. See src/shims/indexed-db.ts:31-40.
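The scheme reduces to string concatenation. A sketch (the function name is ours; the layout is the one described above):

```typescript
// Structured IndexedDB key: prefix + dbName + "__" + storeName + "__" + key.
// Two-underscore delimiters so a single "_" in a user key can't collide.
function idbKey(dbName: string, storeName: string, key: string): string {
  return `indexdb_${dbName}__${storeName}__${key}`;
}

// idbKey("snapcap", "fidelius", "identity")
//   → "indexdb_snapcap__fidelius__identity"
```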

document.cookie reads and writes route through tough-cookie's getCookiesSync / setCookieSync under one shared key (cookie_jar). See src/shims/document-cookie.ts.

tough-cookie maintains a domain-aware index internally. jar.getCookiesSync("https://accounts.snapchat.com") returns the cookies that match by domain, path, secure, and expiration; jar.setCookieSync(parsed, url) indexes by parsed Domain / Path and merges. Splitting that into multiple DataStore keys would mean re-implementing the indexing on top.

So: the entire jar serializes to JSON via jar.serializeSync() (src/storage/cookie-store.ts:34-38, src/shims/cookie-jar.ts) and lands as one bytes blob under cookie_jar. tough-cookie does the matching at request time.
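A sketch of that flush/hydrate cycle, with a stand-in for the tough-cookie jar. Only `serializeSync` and the existence of a matching deserializer are assumed; the function names and store shape here are illustrative:

```typescript
// Minimal async slice of the DataStore needed for the jar round-trip.
interface MiniStore {
  get(key: string): Promise<Uint8Array | undefined>;
  set(key: string, value: Uint8Array): Promise<void>;
}

// Flush: whole jar -> one JSON blob under the single cookie_jar key.
async function flushJar(store: MiniStore, jar: { serializeSync(): unknown }) {
  const json = JSON.stringify(jar.serializeSync());
  await store.set("cookie_jar", new TextEncoder().encode(json));
}

// Hydrate: read the blob back and hand it to the jar's deserializer.
async function hydrateJar<T>(
  store: MiniStore,
  deserialize: (serialized: unknown) => T,
): Promise<T | undefined> {
  const bytes = await store.get("cookie_jar");
  if (!bytes) return undefined;
  return deserialize(JSON.parse(new TextDecoder().decode(bytes)));
}
```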

Three paths read/write that key (all sharing the same in-memory tough-cookie.CookieJar instance via ShimContext.jar):

  • src/transport/cookies.ts — outgoing fetches (login POSTs, gRPC calls, media uploads). The CookieJarStore deserializes at construction and persists on flush.
  • src/shims/document-cookie.ts — bundle JS that reads or writes document.cookie from inside the sandbox.
  • src/shims/cookie-container.ts — happy-dom outgoing fetch (the bundle's own fetch() calls).

All three see each other's writes, so bundle-side cookie writes are visible to the next gRPC call and vice versa. See the shims chapter for the I/O ordering.

Why the bearer is in sessionStorage, not in cookies

SSO bearer tokens are short-lived (Snap doesn't document the TTL but empirically it's ~1 hour) and re-mintable. The durable bit is __Host-sc-a-auth-session — that cookie is what accounts.snapchat.com/accounts/sso checks before issuing a new ticket. Anything that has the cookie can mint a fresh bearer; anything that has a bearer without the cookie cannot.

So:

  • The cookie jar (under cookie_jar) is the source of truth for "am I logged in".
  • The bearer (under session_snapcap_bearer) is a per-process cache of the most recently minted ticket. On 401 it gets re-minted via mintBearer against the same jar; the new value overwrites the old.

Putting it in sessionStorage instead of localStorage is a deliberate framing: per-process cache, not per-account credential.
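The re-mint-on-401 loop can be sketched as follows. All names here are illustrative (the real `mintBearer` and transport live elsewhere in the SDK); only the retry shape described above is from the text:

```typescript
type Fetcher = (bearer: string) => Promise<{ status: number }>;

// Illustrative: use the cached bearer if present, mint otherwise; on a 401,
// re-mint against the durable session cookie and retry exactly once.
async function withBearer(
  getCached: () => string | undefined,
  mintBearer: () => Promise<string>, // mints against the same cookie jar
  cache: (bearer: string) => void,
  doFetch: Fetcher,
): Promise<{ status: number }> {
  let bearer = getCached() ?? (await mintBearer());
  cache(bearer);
  let res = await doFetch(bearer);
  if (res.status === 401) {
    // Bearer expired: the jar, not the bearer, is the source of truth.
    bearer = await mintBearer();
    cache(bearer); // new value overwrites the old per-process cache
    res = await doFetch(bearer);
  }
  return res;
}
```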

Why Fidelius identity is bundle-owned (not SDK-owned)

Earlier versions of the SDK pre-minted a Fidelius identity during login and serialized it as JSON at indexdb_snapcap__fidelius__identity. That code is removed.

Today the bundle's createMessagingSession flow drives the whole identity bootstrap:

  • Mints fresh keys via the WASM (P-256 keypair + RWK + identityKeyId).
  • Calls InitializeWebKey server-side via the gRPC factory the SDK registers.
  • Receives the wrapped form from the server and persists it via slot 8 of createMessagingSession (the pr()-compatible UDS store) at local_uds_uds.e2eeIdentityKey.shared.

Subsequent boots: slot 11's loadUserWrappedIdentityKeys reads the wrapped bytes back; the bundle unwraps internally; no InitializeWebKey round-trip.

This is better than the previous SDK-side flow: the SDK doesn't need to track identity rotation, the persisted form is RWK-wrapped (not plaintext), and the rate-limit-prone re-register path is taken less often. See the Fidelius chapter.

What client.logout() clears

// src/client.ts:219-229 (paraphrased)
async logout(): Promise<void> {
  await this.dataStore.delete("cookie_jar");
  ss?.removeItem("snapcap_bearer");        // → session_snapcap_bearer
  ls?.removeItem("snapcap_self");          // → local_snapcap_self
  // … reset in-memory fields
}

Three explicit deletes, one per SDK-owned key. Bundle-owned entries (other local_* / session_* / indexdb_*, including the bundle-minted Fidelius wrapped identity at local_uds_uds.e2eeIdentityKey.shared) are deliberately left intact: wiping them would force the next MessagingSession bring-up to re-mint and re-register Fidelius identity, which Snap rate-limits.

If a consumer wants a true wipe, drop the underlying DataStore (delete the file, drop the Redis namespace) — there's no SDK-supported way to selectively delete bundle keys without breaking the next session.

Plug-in points

Three places worth noting if you want to swap persistence:

  • The DataStore itself. Implement get / set / delete (and optionally getSync / setSync / keys for sync paths). Pass the instance into new SnapcapClient({ dataStore, … }). Done.
  • Encryption. A wrapper DataStore that AES-GCMs values on the way in and decrypts on the way out is the cleanest place to add at-rest encryption. The SDK never inspects raw bytes — they round-trip through the DataStore as opaque blobs.
  • Sharing. Two SnapcapClient instances pointed at the same DataStore are the same logical session. Two pointed at different DataStores are two separate accounts. Multi-tenant runners run one process per account because of the single-account-per-VM constraint.
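The encryption point can be sketched as a wrapper store, assuming Node's WebCrypto. The class name and key handling are illustrative; the shape (IV prepended to the ciphertext, inner store sees only opaque bytes) is one reasonable layout, not the SDK's:

```typescript
import { webcrypto } from "node:crypto";

// Restated from the chapter: the contract the wrapper must preserve.
interface DataStore {
  get(key: string): Promise<Uint8Array | undefined>;
  set(key: string, value: Uint8Array): Promise<void>;
  delete(key: string): Promise<void>;
}

// Illustrative at-rest encryption wrapper: AES-GCM on set, decrypt on get.
// Stored layout: 12-byte random IV followed by the ciphertext.
class EncryptedDataStore implements DataStore {
  constructor(private inner: DataStore, private key: CryptoKey) {}

  async set(k: string, value: Uint8Array): Promise<void> {
    const iv = webcrypto.getRandomValues(new Uint8Array(12));
    const ct = new Uint8Array(
      await webcrypto.subtle.encrypt({ name: "AES-GCM", iv }, this.key, value),
    );
    const blob = new Uint8Array(iv.length + ct.length);
    blob.set(iv);
    blob.set(ct, iv.length);
    await this.inner.set(k, blob);
  }

  async get(k: string): Promise<Uint8Array | undefined> {
    const blob = await this.inner.get(k);
    if (!blob) return undefined;
    const pt = await webcrypto.subtle.decrypt(
      { name: "AES-GCM", iv: blob.slice(0, 12) },
      this.key,
      blob.slice(12),
    );
    return new Uint8Array(pt);
  }

  delete(k: string) {
    return this.inner.delete(k);
  }
}
```

Because the SDK round-trips values as opaque blobs, the wrapper composes with any inner store, including the FileDataStore default.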
