How to Read an SFDC Log in 2026

📖 11 min read Updated: April 2026 By SaasMentic

By the end of this guide, you’ll be able to pull an SFDC log, isolate the transaction that matters, read execution events in order, and turn the output into a fix list for admins, developers, or RevOps stakeholders. Estimated time: 60–90 minutes for your first pass, then 10–15 minutes per log once you know the workflow.

⚡ Key Takeaways

  • Start by reproducing one exact transaction and capturing a fresh debug log; old logs create noise and hide the root cause.
  • Set trace flags deliberately in Salesforce Setup so the log includes Apex, database, validation, workflow, and system events at useful levels.
  • Read an SFDC log in sequence: transaction header, user context, execution units, SOQL/DML activity, automation events, then limits and exceptions.
  • Use Developer Console or VS Code with Salesforce extensions to search for EXCEPTION_THROWN, FATAL_ERROR, SOQL_EXECUTE, and DML_BEGIN before reading line by line.
  • The fastest way to make logs useful for the business is to convert findings into one of four outcomes: data issue, permissions issue, automation conflict, or code defect.

Before You Begin

You’ll need Salesforce access with permission to view debug logs, plus either Developer Console or VS Code with Salesforce Extensions. Have the exact user, record ID, and timestamp of the failing action ready. This guide assumes you’re troubleshooting Sales Cloud or a custom Salesforce org with flows, validation rules, Apex, or managed packages in play. If your team exports logs into Metabase, Sigma Computing, or another BI layer, keep that separate from the raw debugging workflow.

Step 1: Reproduce the exact transaction you want to inspect

You’ll capture one clean event instead of digging through unrelated activity. Estimated time: 10–15 minutes.

Start with a single business action that failed or behaved unexpectedly. Good examples:

  • Lead conversion fails for one SDR
  • Opportunity save triggers the wrong stage update
  • Account creation works in sandbox but fails in production
  • A Flow sends duplicate tasks
  • An integration user gets intermittent insert errors

Write down these four items before you touch log settings:

  1. User who triggered the event
  2. Object and record ID involved
  3. Exact action taken
  4. Approximate timestamp down to the minute

Then reproduce the action once. Avoid repeated retries until logging is configured, or you’ll create several similar debug logs and waste time sorting them.

If the issue comes from an integration, identify whether it runs as:

  • A named integration user
  • A connected app session
  • A middleware platform user such as MuleSoft, Workato, or Zapier
  • A managed package context

For login-related issues, separate authentication from transaction debugging. A failed login or SSO handshake may never generate the same application events you’d expect from a record save. In those cases, check Setup → Login History first, then move to debug logs if the session gets far enough to execute platform logic.

Important: Don’t start with a “known bad” log from yesterday unless the issue is fully deterministic. Salesforce automation changes often enough that a stale log can point you to the wrong validation rule, flow version, or trigger path.

Step 2: Set the right trace flags and debug levels in Salesforce

You’ll configure Salesforce to capture the events that matter without drowning in noise. Estimated time: 10 minutes.

Go to Setup → Debug Logs. Click New in the Monitored Users section if you’re tracing a person, or add the integration user if the process is system-driven.

Next, create or edit a Debug Level under Setup → Debug Levels. For most troubleshooting, use a balanced configuration like this:

  • Apex Code = Finer
  • Apex Profiling = Fine
  • Callout = Info
  • Database = Fine
  • System = Debug
  • Validation = Info
  • Workflow = Fine
  • Visualforce = Info

This setup usually gives enough detail to inspect automation and code without producing an unreadable wall of system chatter.

Then set the Trace Flag with a short expiration window, usually 30 minutes to 1 hour. Long-running trace flags collect unrelated actions and make the SFDC log harder to read.

If you’re debugging a Flow-heavy org, pay special attention to:

  • Workflow = Fine
  • Validation = Info
  • Database = Fine
  • System = Debug

That combination helps surface field updates, entry criteria, duplicate rules, and validation outcomes.

For Apex-heavy orgs, increase Apex Code to Finer and leave Database at Fine so you can see SOQL and DML activity clearly.

Pro Tip: Create two saved debug levels in mature orgs: one for “Flow/Admin troubleshooting” and one for “Apex/Developer troubleshooting.” Switching profiles is faster than editing categories every time.

A quick note on adjacent tools: if someone on the team asks what’s going on in the pipeline data, they may be looking at downstream reporting in Sigma Computing or Metabase. That’s useful for spotting symptoms, but root-cause analysis still starts in the raw Salesforce log and setup metadata.

Step 3: Capture and download the fresh log

You’ll generate a log tied to one known action and store it in a format you can search. Estimated time: 5–10 minutes.

With the trace flag active, repeat the exact transaction once. Then return to Setup → Debug Logs and refresh the page. You should see a new entry with:

  • The monitored user
  • A start time matching your test
  • A log size large enough to indicate real execution
  • An operation name that roughly matches the event

Open the log in one of these ways:

  1. Developer Console
     • Click the log entry
     • Use the Execution Log panel
     • Expand tree nodes for events and limits

  2. Download the raw .log file
     • Better for large logs
     • Easier to search in VS Code, Sublime Text, or another editor

  3. VS Code with Salesforce Extensions
     • Open the file locally
     • Use global search and split panes
     • Compare multiple logs side by side

Name the file with a practical convention if you download it, for example:

2026-01-14_opportunity-save_user-jlee_record-006xxxx.log

That makes it easier to compare before/after tests or hand the file to engineering.
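If you download logs often, the naming convention above is easy to generate consistently. A minimal Python sketch; the `log_filename` helper is mine, not part of any Salesforce tooling:

```python
from datetime import date

def log_filename(action, user, record_id, day=None):
    """Build the convention shown above:
    YYYY-MM-DD_action_user-<user>_record-<id>.log"""
    day = day or date.today()
    return f"{day.isoformat()}_{action}_user-{user}_record-{record_id}.log"

# Example: reproduces the filename shown above.
name = log_filename("opportunity-save", "jlee", "006xxxx", date(2026, 1, 14))
```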

If the log doesn’t appear, check these common causes:

  • Trace flag expired
  • Wrong user was monitored
  • The action ran under an integration user instead
  • The event was blocked at login, not transaction level
  • Log storage limits were hit

If your team also pipes Salesforce data into Metabase dashboards or another BI layer for ops reporting, don’t confuse those event summaries with platform debug logs. They answer different questions.

Step 4: Triage the log by searching for high-signal markers first

You’ll narrow the investigation to the few lines most likely to explain the failure. Estimated time: 10–15 minutes.

Don’t read from top to bottom on your first pass. Search for these markers in this order:

  1. FATAL_ERROR
  2. EXCEPTION_THROWN
  3. VALIDATION_RULE
  4. FLOW_START_INTERVIEW_BEGIN
  5. FLOW_ELEMENT_ERROR
  6. SOQL_EXECUTE_BEGIN
  7. DML_BEGIN
  8. DML_EXCEPTION
  9. CUMULATIVE_LIMIT_USAGE
  10. CODE_UNIT_STARTED
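This first-pass search is easy to script on a downloaded .log file. A minimal Python sketch, assuming the pipe-delimited debug log event format; the `triage` helper name is mine:

```python
# Minimal triage sketch: scan a raw debug log for high-signal markers
# (event names from the list above) and record the first line number
# where each appears. The parsing is a plain substring search.

MARKERS = [
    "FATAL_ERROR", "EXCEPTION_THROWN", "VALIDATION_RULE",
    "FLOW_START_INTERVIEW_BEGIN", "FLOW_ELEMENT_ERROR",
    "SOQL_EXECUTE_BEGIN", "DML_BEGIN", "DML_EXCEPTION",
    "CUMULATIVE_LIMIT_USAGE", "CODE_UNIT_STARTED",
]

def triage(log_text):
    """Map each marker to the first (1-based) line where it appears."""
    hits = {}
    for lineno, line in enumerate(log_text.splitlines(), start=1):
        for marker in MARKERS:
            if marker in line and marker not in hits:
                hits[marker] = lineno
    return hits
```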

Here’s what each one usually tells you:

  • FATAL_ERROR: the transaction ended hard; read the 20–40 lines above it.
  • EXCEPTION_THROWN: an Apex or platform exception; find the class, method, and message.
  • VALIDATION_RULE: the save was blocked by rule logic; identify the rule name and field values.
  • FLOW_ELEMENT_ERROR: a flow element failed; check the element label and the referenced record.
  • SOQL_EXECUTE_BEGIN: a query fired; count queries and inspect filters.
  • DML_BEGIN: an insert/update/delete started; match it to a later exception or rollback.
  • CUMULATIVE_LIMIT_USAGE: governor limit pressure; confirm query, CPU, or DML overages.

In many cases, the useful clue is 10–30 lines before the visible error. For example:

  • A DML_EXCEPTION may be caused by a validation rule higher up
  • A null pointer may trace back to an empty query result
  • A flow fault may come from a missing related record or field permission

Pro Tip: Search backward from FATAL_ERROR or EXCEPTION_THROWN to the nearest CODE_UNIT_STARTED. That gives you the automation block—trigger, flow, process, or package—that owned the failing path.
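The backward walk in the tip above takes only a few lines of Python. A sketch assuming the log has already been split into a list of lines; the helper name is mine:

```python
def owning_code_unit(lines, error_index):
    """Walk backward from the error line to the nearest CODE_UNIT_STARTED,
    which names the trigger, flow, process, or package that owned the
    failing path. Returns None if no code unit precedes the error."""
    for i in range(error_index, -1, -1):
        if "CODE_UNIT_STARTED" in lines[i]:
            return lines[i]
    return None
```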

Step 5: Read the execution path in order and map each automation layer

You’ll identify which part of Salesforce actually changed the record or caused the stop. Estimated time: 15–20 minutes.

Once triage points you to the right section, read the transaction in sequence. Focus on this order:

  1. User and request context
  2. Object operation
  3. Validation rules
  4. Before-save flows
  5. Before triggers
  6. Database save attempt
  7. After triggers
  8. After-save flows / workflow / process logic
  9. Roll-up actions, sharing, async handoffs
  10. Commit or rollback

In the raw log, look for entries like:

  • CODE_UNIT_STARTED
  • CODE_UNIT_FINISHED
  • VALIDATION_RULE
  • WF_RULE_EVAL_BEGIN
  • FLOW_START_INTERVIEW_BEGIN
  • FLOW_ELEMENT_BEGIN
  • SOQL_EXECUTE_BEGIN
  • DML_BEGIN
  • DML_END

Create a simple scratch table as you read:

  • Validation: Opportunity_Close_Date_Check → Passed
  • Flow: Update_Renewal_Flag → Updated field
  • Trigger: OpportunityTrigger (before update) → Queried related contracts
  • DML: Update Opportunity → Failed
  • Exception: Validation on Contract → Blocked save

This is where many teams lose time. They see a trigger name and assume code is the problem, when the actual blocker is a downstream validation rule or flow update on a related record.

If managed packages are involved, note the namespace prefix. That tells you whether the issue lives in your codebase or a vendor’s package. In those cases, capture:

  • Package namespace
  • Class or flow name
  • Error message
  • Record IDs affected
  • Reproduction steps

That package-level detail is what support teams will ask for first.
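Collecting namespace prefixes from a downloaded log can be semi-automated. A hedged Python sketch; the assumption that namespaced units appear as `prefix.ClassName` in the last pipe-delimited field is mine and may not hold for every event subtype:

```python
import re

def package_namespaces(log_text):
    """Collect namespace prefixes seen in CODE_UNIT_STARTED entries.
    ASSUMPTION: namespaced units show up as 'prefix.ClassName...' in
    the last pipe-delimited field of the event line."""
    found = set()
    for line in log_text.splitlines():
        if "CODE_UNIT_STARTED" not in line:
            continue
        unit = line.rsplit("|", 1)[-1]
        match = re.match(r"([A-Za-z_]\w*)\.[A-Z]", unit)
        if match:
            found.add(match.group(1))
    return found
```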

Step 6: Interpret SOQL, DML, and governor limits

You’ll determine whether the issue is data logic, query design, or platform limits. Estimated time: 10–15 minutes.

A lot of log reading comes down to three questions:

  • What data did Salesforce try to fetch?
  • What record operation did it try to perform?
  • Did the transaction run out of limits before finishing?

For SOQL, check:

  • Query count
  • Filters and bind variables
  • Rows returned
  • Whether the result set was empty when code expected one record

A common pattern looks like this:

  • Query runs
  • Returns zero rows
  • A later line throws a null pointer or list index error
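That zero-rows-then-exception pattern can be flagged mechanically. A Python sketch assuming the standard `Rows:0` suffix on SOQL_EXECUTE_END events; the helper name is mine:

```python
def empty_query_then_exception(lines):
    """Detect the pattern above: a SOQL_EXECUTE_END returning zero rows
    followed later by an EXCEPTION_THROWN. ASSUMPTION: zero-row queries
    carry the 'Rows:0' suffix on the SOQL_EXECUTE_END line."""
    saw_empty = False
    for line in lines:
        if "SOQL_EXECUTE_END" in line and "Rows:0" in line:
            saw_empty = True
        elif "EXCEPTION_THROWN" in line and saw_empty:
            return True
    return False
```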

For DML, inspect:

  • Object being inserted or updated
  • Whether this was a single-record or bulk operation
  • Which field values likely triggered the failure
  • Whether the transaction rolled back

For governor limits, jump to CUMULATIVE_LIMIT_USAGE. Watch for:

  • Too many SOQL queries
  • Too many DML statements
  • CPU time exceeded
  • Too many query rows

If you see CPU pressure but not an explicit exception yet, scan for repeated flow loops, recursive trigger entries, or the same object being updated multiple times in one transaction.

Important: A limit failure is often a design issue, not just a “busy org” issue. Repeated record updates across Flow + Apex + managed package logic can push a transaction over CPU even when each piece looks reasonable on its own.
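Scanning the CUMULATIVE_LIMIT_USAGE section for near-limit values is also scriptable. A Python sketch; the `X out of Y` line wording is an assumption based on typical limit-usage output, and the 80% threshold is arbitrary:

```python
import re

# ASSUMPTION: limit lines read like 'Number of SOQL queries: 95 out of 100'
# inside a CUMULATIVE_LIMIT_USAGE section.
LIMIT_LINE = re.compile(r"(.+?):\s*(\d+)\s+out of\s+(\d+)")

def near_limits(limit_block, threshold=0.8):
    """Return (name, used, cap) tuples at or above threshold of the cap."""
    flagged = []
    for line in limit_block.splitlines():
        m = LIMIT_LINE.search(line)
        if not m:
            continue
        name, used, cap = m.group(1).strip(), int(m.group(2)), int(m.group(3))
        if cap and used / cap >= threshold:
            flagged.append((name, used, cap))
    return flagged
```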

This is also where reporting tools can mislead teams. Sigma Computing or Metabase might show delayed or duplicated pipeline changes, but the SFDC log tells you whether Salesforce actually committed one update, multiple updates, or none at all.

Step 7: Turn the log into a fix plan and validate the outcome

You’ll convert technical findings into next actions the right owner can execute. Estimated time: 15 minutes.

When you finish reading, classify the issue into one of four buckets:

  1. Data issue
     • Missing required related record
     • Bad field value
     • Duplicate rule conflict

  2. Permissions issue
     • Field-level security
     • Object access
     • Sharing or record ownership

  3. Automation conflict
     • Validation rule blocks a flow update
     • Flow and trigger update the same field
     • Managed package logic collides with custom logic

  4. Code defect
     • Missing null handling
     • Query not bulk-safe
     • Incorrect assumptions in a trigger or class

Then write a short handoff note using this format:

  • Observed action: User updated Opportunity 006...
  • Failure point: DML_EXCEPTION during Contract update
  • Owning logic: Flow Update_Renewal_Flag followed by validation rule Contract_Status_Lock
  • Impact: Opportunity save rolls back
  • Recommended fix: Adjust flow criteria or validation exception condition
  • Validation plan: Re-run with same user and record after change

After the fix, repeat the exact same transaction with the trace flag still active. Compare the new log against the failing one:

  • Did the exception disappear?
  • Did the automation path change as expected?
  • Did the transaction commit?
  • Are there new warnings or near-limit indicators?

That final comparison is what closes the loop. Don’t stop at “it works now.” Confirm that the path is cleaner and that you didn’t just shift the failure downstream.

Pro Tip: Save one “bad” and one “good” log for recurring issues. Side-by-side comparison is the fastest way to explain root cause to admins, developers, and revenue stakeholders who don’t want a full code walkthrough.
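A quick way to do that side-by-side comparison is to reduce each log to its ordered sequence of high-signal events and find where the sequences diverge. A Python sketch; the event names come from the markers in Step 4, and the helper names are mine:

```python
HIGH_SIGNAL = ("CODE_UNIT_STARTED", "VALIDATION_RULE", "DML_BEGIN",
               "EXCEPTION_THROWN", "FATAL_ERROR")

def event_sequence(log_text, events=HIGH_SIGNAL):
    """Reduce a raw log to its ordered sequence of high-signal events."""
    return [e for line in log_text.splitlines() for e in events if e in line]

def first_divergence(bad, good):
    """Index where the two event sequences first differ; None if identical."""
    for i, (b, g) in enumerate(zip(bad, good)):
        if b != g:
            return i
    if len(bad) != len(good):
        return min(len(bad), len(good))
    return None
```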

Common Mistakes to Avoid

  • Tracing the wrong user

Many “missing log” problems happen because the action actually ran under an integration user, queue context, or managed package user. Confirm execution context before setting the trace flag.

  • Using debug levels that are too noisy

Setting everything to the highest level makes logs harder to read and can produce oversized files. Start with targeted categories and increase detail only where needed.

  • Reading only the final error line

The last exception rarely tells the whole story. The cause is often earlier: an empty query result, a validation rule evaluation, or a prior field update.

  • Skipping retest after the fix

A successful save doesn’t guarantee the automation path is correct. Re-run the transaction and verify the new log shows the intended sequence and no hidden limit pressure.

FAQ

How long does it take to learn how to read an SFDC log well?

Most admins and RevOps practitioners can get useful answers from an SFDC log after a few guided sessions. Expect your first real investigation to take 60–90 minutes. After you learn the common markers and your org’s automation patterns, many issues can be triaged in 10–15 minutes.

What’s the fastest way to find the root cause in a large log?

Search for FATAL_ERROR, EXCEPTION_THROWN, FLOW_ELEMENT_ERROR, and VALIDATION_RULE first. Then move upward to the nearest CODE_UNIT_STARTED. That usually identifies the automation block responsible for the failure without reading thousands of lines in order.

Can I use Metabase or Sigma Computing instead of Salesforce debug logs?

No. Metabase and Sigma Computing are useful for spotting symptoms in pipeline, activity, or sync outcomes, but they do not replace platform-level debug output. Use them to detect anomalies, then use the Salesforce log to confirm what executed and why it failed.

What if the issue is a login problem rather than a record-save problem?

Start with Setup → Login History and your identity provider logs. A failed login, SSO assertion issue, or session policy block may prevent the transaction from ever reaching the application layer. Debug logs help only after Salesforce actually starts executing business logic.

Written by Gaurav Goyal

B2B SaaS SEO & Content Strategist

Gaurav builds AI-powered SEO and content systems that generate predictable pipeline for B2B SaaS companies. With expertise in Answer Engine Optimization (AEO) and healthcare SaaS SEO, he helps brands build authority in the AI search era.
