By the end of this guide, you'll be able to pull an sfdc log, isolate the transaction that matters, read execution events in order, and turn the output into a fix list for admins, developers, or RevOps stakeholders. Estimated time: 60–90 minutes for your first pass, then 10–15 minutes per log once you know the workflow.
Key Takeaways
- Start by reproducing one exact transaction and capturing a fresh debug log; old logs create noise and hide the root cause.
- Set trace flags deliberately in Salesforce Setup so the log includes Apex, database, validation, workflow, and system events at useful levels.
- Read an sfdc log in sequence: transaction header, user context, execution units, SOQL/DML activity, automation events, then limits and exceptions.
- Use Developer Console or VS Code with Salesforce extensions to search for `EXCEPTION_THROWN`, `FATAL_ERROR`, `SOQL_EXECUTE`, and `DML_BEGIN` before reading line by line.
- The fastest way to make logs useful for the business is to convert findings into one of four outcomes: data issue, permissions issue, automation conflict, or code defect.
Before You Begin
You'll need Salesforce access with permission to view debug logs, plus either Developer Console or VS Code with Salesforce Extensions. Have the exact user, record ID, and timestamp of the failing action ready. This guide assumes you're troubleshooting Sales Cloud or a custom Salesforce org with flows, validation rules, Apex, or managed packages in play. If your team exports logs into Metabase, Sigma Computing, or another BI layer, keep that separate from the raw debugging workflow.
Step 1: Reproduce the exact transaction you want to inspect
You'll capture one clean event instead of digging through unrelated activity. Estimated time: 10–15 minutes.
Start with a single business action that failed or behaved unexpectedly. Good examples:
- Lead conversion fails for one SDR
- Opportunity save triggers the wrong stage update
- Account creation works in sandbox but fails in production
- A Flow sends duplicate tasks
- An integration user gets intermittent insert errors
Write down these four items before you touch log settings:
- User who triggered the event
- Object and record ID involved
- Exact action taken
- Approximate timestamp down to the minute
Then reproduce the action once. Avoid repeated retries until logging is configured, or you'll create several similar debug logs and waste time sorting them.
If the issue comes from an integration, identify whether it runs as:
- A named integration user
- A connected app session
- A middleware platform user such as MuleSoft, Workato, or Zapier
- A managed package context
For login-related issues, separate authentication from transaction debugging. A failed meta login or SSO handshake may never generate the same application events you'd expect from a record save. In those cases, check Setup → Login History first, then move to debug logs if the session gets far enough to execute platform logic.
Important: Don't start with a "known bad" log from yesterday unless the issue is fully deterministic. Salesforce automation changes often enough that a stale log can point you to the wrong validation rule, flow version, or trigger path.
Step 2: Set the right trace flags and debug levels in Salesforce
You'll configure Salesforce to capture the events that matter without drowning in noise. Estimated time: 10 minutes.
Go to Setup → Debug Logs. Click New in the Monitored Users section if you're tracing a person, or add the integration user if the process is system-driven.
Next, create or edit a Debug Level under Setup → Debug Levels. For most troubleshooting, use a balanced configuration like this:
| Category | Recommended starting level |
|---|---|
| Apex Code | Finer |
| Apex Profiling | Fine |
| Callout | Info |
| Database | Fine |
| System | Debug |
| Validation | Info |
| Workflow | Fine |
| Visualforce | Info |
This setup usually gives enough detail to inspect automation and code without producing an unreadable wall of system chatter.
Then set the Trace Flag with a short expiration window, usually 30 minutes to 1 hour. Long-running trace flags collect unrelated actions and make the sfdc log harder to read.
If you're debugging a Flow-heavy org, pay special attention to:
- Workflow = Fine
- Validation = Info
- Database = Fine
- System = Debug
That combination helps surface field updates, entry criteria, duplicate rules, and validation outcomes.
For Apex-heavy orgs, increase Apex Code to Finer and leave Database at Fine so you can see SOQL and DML activity clearly.
Pro Tip: Create two saved debug levels in mature orgs: one for "Flow/Admin troubleshooting" and one for "Apex/Developer troubleshooting." Switching profiles is faster than editing categories every time.
A quick note on adjacent tools: if someone on the team asks "what the sigma is going on in the pipeline data," they may be looking at downstream reporting in Sigma Computing or Metabase. That's useful for spotting symptoms, but root-cause analysis still starts in the raw Salesforce log and setup metadata.
Step 3: Capture and download the fresh log
You'll generate a log tied to one known action and store it in a format you can search. Estimated time: 5–10 minutes.
With the trace flag active, repeat the exact transaction once. Then return to Setup → Debug Logs and refresh the page. You should see a new entry with:
- The monitored user
- A start time matching your test
- A log size large enough to indicate real execution
- An operation name that roughly matches the event
Open the log in one of these ways:
- Developer Console
  - Click the log entry
  - Use the Execution Log panel
  - Expand tree nodes for events and limits
- Download the raw `.log` file
  - Better for large logs
  - Easier to search in VS Code, Sublime Text, or another editor
- VS Code with Salesforce Extensions
  - Open the file locally
  - Use global search and split panes
  - Compare multiple logs side by side
Name the file with a practical convention if you download it, for example:
2026-01-14_opportunity-save_user-jlee_record-006xxxx.log
That makes it easier to compare before/after tests or hand the file to engineering.
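If you script this step, a small helper keeps the convention consistent across the team. This is a sketch; the function name and argument layout are assumptions, not part of any Salesforce tooling:

```python
from datetime import date
from typing import Optional

def log_filename(action: str, user: str, record_id: str,
                 day: Optional[date] = None) -> str:
    """Build a debug-log filename: <date>_<action>_user-<user>_record-<id>.log"""
    day = day or date.today()
    return f"{day.isoformat()}_{action}_user-{user}_record-{record_id}.log"

# Example (hypothetical user and record ID):
# log_filename("opportunity-save", "jlee", "006xxxx", date(2026, 1, 14))
```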
If the log doesn't appear, check these common causes:
- Trace flag expired
- Wrong user was monitored
- The action ran under an integration user instead
- The event was blocked at login, not transaction level
- Log storage limits were hit
If your team also pipes Salesforce data into Metabase dashboards or another BI instance for ops reporting, don't confuse those event summaries with platform debug logs. They answer different questions.
Step 4: Triage the log by searching for high-signal markers first
You'll narrow the investigation to the few lines most likely to explain the failure. Estimated time: 10–15 minutes.
Don't read from top to bottom on your first pass. Search for these markers in this order:
- `FATAL_ERROR`
- `EXCEPTION_THROWN`
- `VALIDATION_RULE`
- `FLOW_START_INTERVIEW_BEGIN`
- `FLOW_ELEMENT_ERROR`
- `SOQL_EXECUTE_BEGIN`
- `DML_BEGIN`
- `DML_EXCEPTION`
- `CUMULATIVE_LIMIT_USAGE`
- `CODE_UNIT_STARTED`
Hereâs what each one usually tells you:
| Marker | What it usually means | Next move |
|---|---|---|
| `FATAL_ERROR` | Transaction ended hard | Read 20–40 lines above it |
| `EXCEPTION_THROWN` | Apex or platform exception | Find class, method, and message |
| `VALIDATION_RULE` | Save blocked by rule logic | Identify rule name and field values |
| `FLOW_ELEMENT_ERROR` | Flow element failed | Check element label and referenced record |
| `SOQL_EXECUTE_BEGIN` | Query fired | Count queries and inspect filters |
| `DML_BEGIN` | Insert/update/delete started | Match to later exception or rollback |
| `CUMULATIVE_LIMIT_USAGE` | Governor limit pressure | Confirm query, CPU, or DML overages |
In many cases, the useful clue is 10–30 lines before the visible error. For example:
- A `DML_EXCEPTION` may be caused by a validation rule higher up
- A null pointer may trace back to an empty query result
- A flow fault may come from a missing related record or field permission
Pro Tip: Search backward from `FATAL_ERROR` or `EXCEPTION_THROWN` to the nearest `CODE_UNIT_STARTED`. That gives you the automation block (trigger, flow, process, or package) that owned the failing path.
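That backward search is easy to script once the raw log is downloaded. The sketch below scans lines in order and pairs each error with the most recent `CODE_UNIT_STARTED` above it; the function name and simplified substring matching are assumptions, not official Salesforce tooling:

```python
def find_owning_code_unit(log_lines):
    """For each FATAL_ERROR or EXCEPTION_THROWN line, return a
    (line number, nearest preceding CODE_UNIT_STARTED line) pair."""
    results = []
    last_unit = None  # most recent code unit seen so far
    for number, line in enumerate(log_lines, start=1):
        if "CODE_UNIT_STARTED" in line:
            last_unit = line.strip()
        elif "FATAL_ERROR" in line or "EXCEPTION_THROWN" in line:
            results.append((number, last_unit))
    return results
```

Run it over `open("my.log").read().splitlines()` to get a short list of suspects instead of scrolling.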
Step 5: Read the execution path in order and map each automation layer
You'll identify which part of Salesforce actually changed the record or caused the stop. Estimated time: 15–20 minutes.
Once triage points you to the right section, read the transaction in sequence. Focus on this order:
- User and request context
- Object operation
- Validation rules
- Before-save flows
- Before triggers
- Database save attempt
- After triggers
- After-save flows / workflow / process logic
- Roll-up actions, sharing, async handoffs
- Commit or rollback
In the raw log, look for entries like:
`CODE_UNIT_STARTED`, `CODE_UNIT_FINISHED`, `VALIDATION_RULE`, `WF_RULE_EVAL_BEGIN`, `FLOW_START_INTERVIEW_BEGIN`, `FLOW_ELEMENT_BEGIN`, `SOQL_EXECUTE_BEGIN`, `DML_BEGIN`, `DML_END`
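If a log is long, it can help to pre-digest those entries into an ordered timeline before reading in detail. A rough sketch; the marker-to-layer mapping here is an illustrative assumption, not an official taxonomy:

```python
# Map debug-log event markers to the automation layer they represent.
# This mapping is an assumption for triage purposes, not a Salesforce standard.
LAYER_MARKERS = {
    "CODE_UNIT_STARTED": "Code unit",
    "VALIDATION_RULE": "Validation",
    "WF_RULE_EVAL_BEGIN": "Workflow",
    "FLOW_START_INTERVIEW_BEGIN": "Flow",
    "SOQL_EXECUTE_BEGIN": "SOQL",
    "DML_BEGIN": "DML",
    "EXCEPTION_THROWN": "Exception",
}

def execution_timeline(log_text: str):
    """Return (layer, raw line) pairs in the order events appear in the log."""
    timeline = []
    for line in log_text.splitlines():
        for marker, layer in LAYER_MARKERS.items():
            if marker in line:
                timeline.append((layer, line.strip()))
                break  # one layer per line is enough for triage
    return timeline
```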
Create a simple scratch table as you read:
| Layer | What ran | Result |
|---|---|---|
| Validation | `Opportunity_Close_Date_Check` | Passed |
| Flow | `Update_Renewal_Flag` | Updated field |
| Trigger | `OpportunityTrigger` before update | Queried related contracts |
| DML | Update Opportunity | Failed |
| Exception | Validation on Contract | Blocked save |
This is where many teams lose time. They see a trigger name and assume code is the problem, when the actual blocker is a downstream validation rule or flow update on a related record.
If managed packages are involved, note the namespace prefix. That tells you whether the issue lives in your codebase or a vendor's package. In those cases, capture:
- Package namespace
- Class or flow name
- Error message
- Record IDs affected
- Reproduction steps
That package-level detail is what support teams will ask for first.
Step 6: Interpret SOQL, DML, and governor limits
You'll determine whether the issue is data logic, query design, or platform limits. Estimated time: 10–15 minutes.
A lot of log reading comes down to three questions:
- What data did Salesforce try to fetch?
- What record operation did it try to perform?
- Did the transaction run out of limits before finishing?
For SOQL, check:
- Query count
- Filters and bind variables
- Rows returned
- Whether the result set was empty when code expected one record
A common pattern looks like this:
- Query runs
- Returns zero rows
- A later line throws a null pointer or list index error
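One way to spot that pattern mechanically is to watch for a `SOQL_EXECUTE_END` line reporting `Rows:0` followed later by an exception. This sketch assumes that common line shape; adjust the matching if your log format differs:

```python
def empty_query_before_exception(log_lines):
    """Flag EXCEPTION_THROWN lines that occur after a SOQL_EXECUTE_END
    reporting zero rows (assumed shape: '...|SOQL_EXECUTE_END|[n]|Rows:0')."""
    suspects = []
    saw_empty_query = False
    for line in log_lines:
        if "SOQL_EXECUTE_END" in line and "Rows:0" in line:
            saw_empty_query = True
        elif "EXCEPTION_THROWN" in line and saw_empty_query:
            suspects.append(line.strip())
    return suspects
```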
For DML, inspect:
- Object being inserted or updated
- Whether this was a single-record or bulk operation
- Which field values likely triggered the failure
- Whether the transaction rolled back
For governor limits, jump to `CUMULATIVE_LIMIT_USAGE`. Watch for:
- Too many SOQL queries
- Too many DML statements
- CPU time exceeded
- Too many query rows
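The `CUMULATIVE_LIMIT_USAGE` section typically reports each limit as `<name>: <used> out of <cap>`. A small parser sketch, assuming that line shape, can flag anything near its ceiling:

```python
import re

# Assumed line shape inside CUMULATIVE_LIMIT_USAGE, e.g.:
#   Number of SOQL queries: 95 out of 100
#   Maximum CPU time: 1200 out of 10000
LIMIT_RE = re.compile(r"(Number of [^:]+|Maximum CPU time): (\d+) out of (\d+)")

def limit_pressure(log_text: str, threshold: float = 0.8):
    """Return (limit name, used, cap) for limits at or above the threshold fraction."""
    flagged = []
    for name, used, cap in LIMIT_RE.findall(log_text):
        if int(cap) and int(used) / int(cap) >= threshold:
            flagged.append((name, int(used), int(cap)))
    return flagged
```

Anything flagged here deserves attention even if the transaction committed, since the next bulk operation may push it over.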
If you see CPU pressure but not an explicit exception yet, scan for repeated flow loops, recursive trigger entries, or the same object being updated multiple times in one transaction.
Important: A limit failure is often a design issue, not just a "busy org" issue. Repeated record updates across Flow + Apex + managed package logic can push a transaction over CPU even when each piece looks reasonable on its own.
This is also where reporting tools can mislead teams. Sigma Computing or Metabase might show delayed or duplicated pipeline changes, but the sfdc log tells you whether Salesforce actually committed one update, multiple updates, or none at all.
Step 7: Turn the log into a fix plan and validate the outcome
You'll convert technical findings into next actions the right owner can execute. Estimated time: 15 minutes.
When you finish reading, classify the issue into one bucket:
- Data issue
  - Missing required related record
  - Bad field value
  - Duplicate rule conflict
- Permissions issue
  - Field-level security
  - Object access
  - Sharing or record ownership
- Automation conflict
  - Validation rule blocks flow update
  - Flow and trigger update the same field
  - Managed package logic collides with custom logic
- Code defect
  - Null handling missing
  - Query not bulk-safe
  - Incorrect assumptions in trigger or class
Then write a short handoff note using this format:
- Observed action: User updated Opportunity `006...`
- Failure point: `DML_EXCEPTION` during Contract update
- Owning logic: Flow `Update_Renewal_Flag` followed by validation rule `Contract_Status_Lock`
- Impact: Opportunity save rolls back
- Recommended fix: Adjust flow criteria or validation exception condition
- Validation plan: Re-run with same user and record after change
After the fix, repeat the exact same transaction with the trace flag still active. Compare the new log against the failing one:
- Did the exception disappear?
- Did the automation path change as expected?
- Did the transaction commit?
- Are there new warnings or near-limit indicators?
That final comparison is what closes the loop. Don't stop at "it works now." Confirm that the path is cleaner and that you didn't just shift the failure downstream.
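If you want to automate part of that before/after check, counting high-signal markers in each log is a quick first pass. A sketch, assuming both logs are saved locally as text:

```python
from collections import Counter

# High-signal markers to tally in each log; extend as needed.
MARKERS = ("EXCEPTION_THROWN", "FATAL_ERROR", "SOQL_EXECUTE_BEGIN", "DML_BEGIN")

def marker_counts(log_text: str) -> Counter:
    """Count occurrences of each marker, one hit per line."""
    counts = Counter()
    for line in log_text.splitlines():
        for marker in MARKERS:
            if marker in line:
                counts[marker] += 1
    return counts

def compare_logs(bad_log: str, good_log: str) -> dict:
    """Delta of marker counts (good minus bad); negative means fewer after the fix."""
    bad, good = marker_counts(bad_log), marker_counts(good_log)
    return {m: good[m] - bad[m] for m in MARKERS if good[m] != bad[m]}
```

A clean fix should show the exception markers dropping to zero without the query or DML counts quietly climbing.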
Pro Tip: Save one "bad" and one "good" log for recurring issues. Side-by-side comparison is the fastest way to explain root cause to admins, developers, and revenue stakeholders who don't want a full code walkthrough.
Common Mistakes to Avoid
- Tracing the wrong user
Many "missing log" problems happen because the action actually ran under an integration user, queue context, or managed package user. Confirm execution context before setting the trace flag.
- Using debug levels that are too noisy
Setting everything to the highest level makes logs harder to read and can produce oversized files. Start with targeted categories and increase detail only where needed.
- Reading only the final error line
The last exception rarely tells the whole story. The cause is often earlier: an empty query result, a validation rule evaluation, or a prior field update.
- Skipping retest after the fix
A successful save doesn't guarantee the automation path is correct. Re-run the transaction and verify the new log shows the intended sequence and no hidden limit pressure.
Additional Resources & Reviews
- sfdc log on HubSpot Blog
FAQ
How long does it take to learn how to read an SFDC log well?
Most admins and RevOps practitioners can get useful answers from an sfdc log after a few guided sessions. Expect your first real investigation to take 60–90 minutes. After you learn the common markers and your org's automation patterns, many issues can be triaged in 10–15 minutes.
Whatâs the fastest way to find the root cause in a large log?
Search for `FATAL_ERROR`, `EXCEPTION_THROWN`, `FLOW_ELEMENT_ERROR`, and `VALIDATION_RULE` first. Then move upward to the nearest `CODE_UNIT_STARTED`. That usually identifies the automation block responsible for the failure without reading thousands of lines in order.
Can I use Metabase or Sigma Computing instead of Salesforce debug logs?
No. Metabase and Sigma Computing are useful for spotting symptoms in pipeline, activity, or sync outcomes, but they do not replace platform-level debug output. Use them to detect anomalies, then use the Salesforce log to confirm what executed and why it failed.
What if the issue is a login problem rather than a record-save problem?
Start with Setup → Login History and your identity provider logs. A failed meta login, SSO assertion issue, or session policy block may prevent the transaction from ever reaching the application layer. Debug logs help only after Salesforce actually starts executing business logic.