Platform.Function — Marketing Cloud SSJS reference

The core SSJS API namespace — Data Extension reads and writes (LookupRows, InsertData, UpdateData, UpsertData, DeleteData) plus the helpers you reach for in every script (GUID, Now, ParseJSON). What works as documented, what has silent traps, and the patterns we land on.

Reference · Last updated 2026-05-08 · Drafted by Lira · Edited by German Medina

Platform.Function.* is where the actual work happens in SSJS. Plain JavaScript gives you control flow; this namespace gives you Data Extension reads and writes, GUID generation, JSON parsing, current time, encoding, hashing — the operations that interact with Marketing Cloud itself. Most of the lines in a useful SSJS script are calls into this namespace.

This page covers the operations you'll use in 90% of scripts. The remaining Platform.Function.* calls live in dedicated function-category pages (string / date / encoding / hashing) coming next.

Official syntax

The namespace becomes available after Platform.Load("Core","1.1.5") (see gotchas — #2). The most-used calls:

Platform.Load("Core", "1.1.5");

// === DATA EXTENSION READS ===

// Lookup by single column → array of row objects (capped at 2500)
var rows = Platform.Function.LookupRows(
  "master_subscribers",   // DE name
  "SubscriberKey",        // filter column
  "12345"                 // filter value
);

// Lookup by multiple columns → AND across all filters
var activeRows = Platform.Function.LookupRows(
  "master_subscribers",
  "Status", "Active",
  "LoyaltyTier", "gold"
);

// Lookup with ordering → first 2500 rows ordered by column DESC/ASC
var topPurchases = Platform.Function.LookupOrderedRows(
  "purchases",
  100,                    // top N (max 2500)
  "OrderDate desc",       // order clause
  "SubscriberKey", "12345"
);

// === DATA EXTENSION WRITES ===

// Insert (always adds, no merge) — one pair of arrays: columns, values
Platform.Function.InsertData(
  "de_log_runs",
  ["RunId", "Step", "Ts", "Message"],              // columns
  ["abc-123", "lookup", Now(), "lookup completed"] // values
);

// Update (matches existing row(s) on key columns; non-key cols updated)
Platform.Function.UpdateData(
  "master_subscribers",
  ["SubscriberKey"], ["12345"],
  ["Status", "UpdatedAt"], ["Active", Now()]
);

// Upsert (update if PK matches, insert if not — but see gotcha #4)
Platform.Function.UpsertData(
  "master_subscribers",
  ["SubscriberKey"], ["12345"],
  ["Status", "UpdatedAt"], ["Active", Now()]
);

// Delete (matches on key columns; deletes ALL matching rows)
Platform.Function.DeleteData(
  "de_log_overrides",
  ["RunId"], ["abc-123"]
);

// === HELPERS ===

Platform.Function.GUID();                  // → "ab1cd234-ef56-7890-..."
Platform.Function.ParseJSON('{"a":1}');    // → { a: 1 }
Platform.Function.Stringify({a:1});        // → '{"a":1}'
Now();                                     // current datetime (SFMC server time, CST, not UTC)

The signatures share a shape: DE name first, then paired arrays. InsertData and DeleteData take one pair (columns + values); UpdateData and UpsertData take two pairs (key columns + key values, then the non-key columns + values to write). Mixing up array order is the most common source of "function works but writes garbage" bugs.
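One cheap way to make a swapped-array bug visible before it writes garbage: a small guard, defined at the top of the script (a hypothetical helper, not part of the Platform API), that zips each columns/values pair and fails loudly on a length mismatch.

```javascript
// Hypothetical helper, not a Platform API: verify a paired
// columns/values array before handing it to a write function.
function pairOrThrow(cols, vals) {
  if (cols.length !== vals.length) {
    throw new Error("Paired arrays differ: " + cols.length +
      " columns vs " + vals.length + " values");
  }
  var preview = [];
  for (var i = 0; i < cols.length; i++) {
    preview.push(cols[i] + "=" + vals[i]);  // human-readable pairing
  }
  return preview.join(", ");
}

// Logging the pairing shows immediately if the arrays were swapped:
pairOrThrow(["SubscriberKey", "Status"], ["12345", "Active"]);
// → "SubscriberKey=12345, Status=Active"
```

Writing the preview string to a `de_log_*` row alongside the actual call costs nothing and turns a silent garbage write into a one-glance diagnosis.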

| Function | Returns | Use for |
|---|---|---|
| LookupRows(de, col, val) | Array of row objects, max 2500 | Filtering on 1 column |
| LookupRows(de, col1, val1, col2, val2) | Array, max 2500 | AND across multiple cols |
| LookupOrderedRows(de, top, order, ...) | Array, max 2500 | Top-N reads with ordering |
| RetrieveDEByName(name) | DE object (metadata) | Schema introspection |
| InsertData(de, cols, vals) | New row count (1) | Adding without dedup |
| UpdateData(de, keys, vals, [cols], [vals]) | Updated row count | Matching by key, updating cols |
| UpsertData(de, keys, vals, [cols], [vals]) | Affected row count | Upsert (PK-aware, see gotcha #4) |
| DeleteData(de, keys, vals) | Deleted row count | Removing rows by key |
| GUID() | UUID-style string | RunIDs, idempotency keys |
| ParseJSON(str) | Object / array | Parsing JSON in older tenants |
| Stringify(obj) | JSON string | Serializing for log writes |

What survives in production

LookupRows caps at 2500 silently — code defensively for it

The single most common SSJS bug shape: a script works in dev with 500 rows, ships to production with 50,000 rows, and silently processes 2500 of them. There is no error. The function returns an array of 2500 elements and the caller iterates happily.

// AT RISK — if LookupRows returns exactly 2500, you almost
// certainly missed rows. The script doesn't know it was truncated.
var rows = Platform.Function.LookupRows("master_subs", "Status", "Active");
for (var i = 0; i < rows.length; i++) {
  // ... process row
}

// DEFENSIVE — check the cap and surface the truncation
var rows = Platform.Function.LookupRows("master_subs", "Status", "Active");
if (rows.length === 2500) {
  Platform.Function.UpsertData(
    "de_log_errors",
    ["RunId"], [runId],
    ["Step","Msg","Ts"],
    ["lookup-cap-hit", "LookupRows returned 2500 — likely truncated", Now()]
  );
  // Either fail loud, or pivot to WSProxy/SQL Activity for full set
}

For real "all rows" semantics, drop down to WSProxy with paged retrieves, or pivot to a SQL Query Activity (which is designed for bulk reads). See SSJS gotchas — #5.
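For reference, the WSProxy paging loop has this shape. The contract (`retrieve`, `getNextBatch`, `HasMoreRows`, `RequestID`) is the standard Script.Util.WSProxy one; the `FakeProxy` below is a stand-in so the loop can run outside Marketing Cloud, and in SSJS you'd construct `new Script.Util.WSProxy()` instead.

```javascript
// Stand-in for Script.Util.WSProxy so the paging loop runs anywhere;
// each element of `pages` plays the role of one SOAP result batch.
function FakeProxy(pages) {
  var cursor = 0;
  function next() {
    var page = pages[cursor];
    cursor++;
    return { Results: page, HasMoreRows: cursor < pages.length, RequestID: "req-1" };
  }
  this.retrieve = function (objType, cols, filter) { cursor = 0; return next(); };
  this.getNextBatch = function (objType, requestId) { return next(); };
}

// The actual paging pattern: keep calling getNextBatch with the
// RequestID from the previous response until HasMoreRows is false.
function retrieveAllRows(prox, objType, cols, filter) {
  var all = [];
  var res = prox.retrieve(objType, cols, filter);
  for (var i = 0; i < res.Results.length; i++) { all.push(res.Results[i]); }
  while (res.HasMoreRows) {
    res = prox.getNextBatch(objType, res.RequestID);
    for (var j = 0; j < res.Results.length; j++) { all.push(res.Results[j]); }
  }
  return all;
}

// In SSJS: var prox = new Script.Util.WSProxy();
var prox = new FakeProxy([[1, 2, 3], [4, 5]]);
var rows = retrieveAllRows(prox, "DataExtensionObject[master_subs]", ["SubscriberKey"], null);
// rows.length === 5: every batch retrieved, no 2500 cap
```

The loop shape is the point here; real retrieves also need a SOAP filter object and column list, which the WSProxy page will cover in detail.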

UpsertData requires the destination DE's PK to be configured correctly

UpsertData works only when the destination DE has a primary key on the columns you pass as keys. Without that PK, the function silently inserts duplicates (or, on some tenants, does nothing). Always verify the destination DE configuration before using this function.

// AT RISK — assumes the destination has SubscriberKey as PK.
// If it doesn't, you'll keep adding rows.
Platform.Function.UpsertData(
  "master_subs",
  ["SubscriberKey"], [subKey],
  ["EmailAddress", "Status"], [email, "Active"]
);

// DEFENSIVE — at minimum, confirm the PK once before this call
// reaches production. If the team can't promise the PK is set,
// fall back to read-then-write:
var existing = Platform.Function.LookupRows("master_subs", "SubscriberKey", subKey);
if (existing.length > 0) {
  Platform.Function.UpdateData(
    "master_subs",
    ["SubscriberKey"], [subKey],
    ["EmailAddress", "Status"], [email, "Active"]
  );
} else {
  Platform.Function.InsertData(
    "master_subs",
    ["SubscriberKey", "EmailAddress", "Status"], [subKey, email, "Active"]
  );
}

The read-then-write version costs an extra LookupRows but doesn't depend on schema configuration outside your control. See SSJS gotchas — #4.

DeleteData matches on key columns and deletes every match

DeleteData doesn't care about uniqueness. If three rows match the key you passed, all three get deleted. If you intended to delete one specific row, make sure your key columns + values uniquely identify it.

// AT RISK — if multiple rows have RunId="abc-123", all of them
// get deleted, not just the one you meant
Platform.Function.DeleteData(
  "de_log_overrides",
  ["RunId"], ["abc-123"]
);

// SAFE — compose key from columns that uniquely identify the row
Platform.Function.DeleteData(
  "de_log_overrides",
  ["RunId", "Step"], ["abc-123", "lookup"]
);

For "delete one row" semantics, the key columns have to match the destination DE's actual primary key.

GUID() for run IDs and idempotency keys

Platform.Function.GUID() returns a fresh UUID string per call. Use it as a RunId at the top of every script and pass it to every log write — that way every row in de_log_* ties back to the specific Activity run that wrote it.

Platform.Load("Core", "1.1.5");
var runId = Platform.Function.GUID();

Platform.Function.InsertData(
  "de_log_runs",
  ["RunId", "Step", "Ts"],
  [runId, "start", Now()]
);

// ... script body, all log writes carry the same runId

Platform.Function.InsertData(
  "de_log_runs",
  ["RunId", "Step", "Ts"],
  [runId, "end", Now()]
);

When you investigate a failure, you query WHERE RunId = '...' and get every row that script wrote, in order. Without a RunId, the log is a soup.

ParseJSON / Stringify over the native JSON.* for portability

Modern SFMC tenants support JSON.parse and JSON.stringify, but older tenants don't. The Platform versions are guaranteed to work across editions. Default to Platform.Function.ParseJSON() and Platform.Function.Stringify() unless you know your tenant's edition.

// PORTABLE
var obj = Platform.Function.ParseJSON('{"key":"value"}');
var str = Platform.Function.Stringify(obj);

// May not work on older tenants
var obj = JSON.parse('{"key":"value"}');
var str = JSON.stringify(obj);
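If you'd rather detect the capability than memorize editions, a small shim (a hypothetical helper, not a Platform API) can prefer native JSON and fall back to the Platform functions:

```javascript
// Hypothetical shim: use native JSON when the tenant exposes it,
// otherwise fall back to the always-available Platform functions.
function parseJsonPortable(str) {
  if (typeof JSON !== "undefined" && typeof JSON.parse === "function") {
    return JSON.parse(str);
  }
  return Platform.Function.ParseJSON(str);
}

function stringifyPortable(obj) {
  if (typeof JSON !== "undefined" && typeof JSON.stringify === "function") {
    return JSON.stringify(obj);
  }
  return Platform.Function.Stringify(obj);
}

var obj = parseJsonPortable('{"key":"value"}');  // obj.key === "value"
```

Defaulting straight to the Platform functions is simpler; the shim only earns its keep if the same script has to run across tenants of different vintages.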

Quick decision

Use LookupRows when:

  • You need to read fewer than 2500 rows from a DE.
  • The filter is by 1–2 columns with simple equality.
  • Defensive: always check .length === 2500 to detect cap-hit.

Use LookupOrderedRows when:

  • You need top-N rows in a specific order. Same 2500 cap applies.

Drop down to WSProxy when:

  • You need more than 2500 rows.
  • You need to retrieve Marketing Cloud SOAP objects beyond this wrapper's scope (Subscriber, List, etc., with full SOAP filters).
  • You need async / batch operations.

Use InsertData when:

  • The destination is an event log or append-only table. No dedup needed at write time.

Use UpdateData when:

  • The row(s) definitely exist and you only want to update non-key columns.
  • The destination DE doesn't have the PK you'd need for UpsertData.

Use UpsertData when:

  • The destination DE has a primary key on the columns you're passing as keys (verify in the UI before deploying).
  • The semantics are "make this row look like X, whether it exists or not".

Use the read-then-write pattern instead of UpsertData when:

  • You're not 100% sure of the PK configuration.
  • You need different behavior for insert vs update (e.g., set CreatedAt only on insert).
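That last case looks like this. The sketch assumes a hypothetical "master_subs" DE with CreatedAt and UpdatedAt columns; the stubs at the top exist only so the control flow runs outside Marketing Cloud, where the real Platform.Function and Now() are already in scope.

```javascript
// Stubs for running this sketch outside Marketing Cloud.
// In SSJS, delete these: Platform.Function and Now() already exist.
var writes = [];
var Platform = { Function: {
  LookupRows: function () { return []; },              // pretend: no existing row
  UpdateData: function () { writes.push("update"); },
  InsertData: function () { writes.push("insert"); }
}};
function Now() { return new Date(); }

var subKey = "12345";
var existing = Platform.Function.LookupRows("master_subs", "SubscriberKey", subKey);
if (existing.length > 0) {
  // Row exists: touch UpdatedAt, never CreatedAt
  Platform.Function.UpdateData("master_subs",
    ["SubscriberKey"], [subKey],
    ["Status", "UpdatedAt"], ["Active", Now()]);
} else {
  // New row: set CreatedAt and UpdatedAt together
  Platform.Function.InsertData("master_subs",
    ["SubscriberKey", "Status", "CreatedAt", "UpdatedAt"],
    [subKey, "Active", Now(), Now()]);
}
// writes → ["insert"]
```

UpsertData can't express "CreatedAt only on insert" at all: every matched row gets every column you pass. The extra LookupRows is the price of the branch.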

Related

More SSJS reference pages incoming: WSProxy · Variable + Util · String / Date / Encoding / Hashing function categories · Style Guide.

Plus how-to snippets for common production patterns — DE add/update/upsert sequences, error handling, callout pagination, etc.