
Salesforce Apex Trigger Best Practices

For some time now I have been asked about best practices for implementing an Apex Trigger, mainly by developers just getting into Salesforce.com development, as well as by system administrators who cannot accomplish what they need with workflow and are interested in learning more about triggers. So I wanted to write a series of articles that I hope will be beneficial to that audience.

In this article I will cover the basics of when to use a before-trigger vs. an after-trigger, how to make sure your triggers support bulk DML (Data Manipulation Language) operations, and some general best practices for creating triggers. There will be a part II to this article that covers unit testing and some common trigger examples. I do want to be clear that this is not an article about how to program, so you will need a basic understanding of the Apex language and the tools used to develop triggers, such as the Force.com IDE.

I want to start by explaining some basics about Apex Triggers. Apex Triggers are event handlers: when a record associated with the trigger is inserted, updated, deleted, or undeleted, the Salesforce.com system will "fire," or execute, the trigger. Salesforce actually executes a trigger in two different contexts: before and after. Before-trigger events are executed before a record has been committed to the database, while after-trigger events are executed after a record is committed to the database.

A quick side note: you will see DML, or Data Manipulation Language, referenced in this article and in the Salesforce.com documentation. It essentially refers to the syntax you use to insert, update, delete, and restore (undelete) data in the database. For more specifics on the Apex DML operations you can visit the Apex DML documentation.
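As a quick, hedged sketch (the Account values here are just placeholders), the four DML statements look like this in Apex:

Account oAccount = new Account(Name = 'Acme');
insert oAccount; //Create the record

oAccount.Industry = 'Cloud Computing';
update oAccount; //Save changes to the existing record

delete oAccount; //Move the record to the recycle bin
undelete oAccount; //Restore it from the recycle bin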

When to use Before-Triggers

Before-trigger events occur before a record’s changes are committed to the database. This event is ideal for performing data validation, setting default values, or performing additional logic and calculations. Keep in mind that in a before-insert event the record has not yet been saved, so it will not have a record Id.

Before-triggers, in my opinion, are the most efficient and are going to be your go-to choice for most of the triggers you write, for a couple of reasons. First, you can perform data validation and reject a record before it is committed to the database, so there is no performance cost from the system having to roll back an update. Second, you can update fields or set default values on a record without having to initiate another DML command.
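To illustrate the first point, here is a minimal sketch of data validation in a before-trigger; the trigger name and the required-Website rule are just examples I made up, but the addError() call is the standard way to reject a record before it is saved.

trigger validateAccountWebsite on Account (before insert, before update) {
	for (Account oAccount : trigger.new) {
		//Adding an error blocks the save of this record before it ever reaches the database
		if (oAccount.Website == null) {
			oAccount.addError('A website is required for every account.');
		}
	}
}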

For example, the code below illustrates the second point: setting a default value on a record. No DML required.

trigger setDefaultAccountValues on Account (before insert, before update) {
	for (Account oAccount : trigger.new) {
		oAccount.Industry = 'Cloud Computing';
	}
}

…and that is it. As you can see, I simply set the value of the field I want to change or default, and Salesforce takes care of the rest.

When to use After-Triggers

After-trigger events occur after a record has been committed to the database, which means that records being inserted will have a record Id available. This event is ideal for working with data external to the record itself, such as related objects, or for creating records based on information from the triggered object.

Update: It was pointed out in the comments that the order of execution I described isn't entirely correct. Technically the record isn't truly committed to the database until after the after-trigger event, assignment rules, auto-response rules, and workflow rules are executed. Triggers execute as part of a transaction which means that any inserts/updates to the database can be rolled back. It is why you can throw an error in an after-trigger event that will prevent a record from being created even though it already has an Id assigned.


For more specific details on this topic you can visit the following online article titled Triggers and Order of Execution.
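To illustrate that point with a hedged sketch (the trigger name and the 'Partner' rule are made up for this example), adding an error in an after-insert event still rolls the record back even though it already has an Id:

trigger blockPartnerAccounts on Account (after insert) {
	for (Account oAccount : trigger.new) {
		//The record has an Id at this point, but the error still prevents it from being saved
		if (oAccount.Type == 'Partner') {
			oAccount.addError('Partner accounts cannot be created through this process.');
		}
	}
}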

As a fuller example, the code below illustrates creating an Opportunity after an Account is created.

trigger createNewAccountOpportunity on Account (after insert) {
	List<Opportunity> listOpportunities = new List<Opportunity>();

	for (Account oAccount : trigger.new) {
		Opportunity oOpportunity = new Opportunity();
		oOpportunity.Name = oAccount.Name;
		oOpportunity.AccountId = oAccount.Id;
		oOpportunity.StageName = 'Proposal';
		oOpportunity.CloseDate = System.today() + 30; //Closes 30 days from today

		listOpportunities.add(oOpportunity);
	}

	if (listOpportunities.isEmpty() == false) {
		Database.insert(listOpportunities);
	}
}

As you can see from the code, we were able to generate an Opportunity from an Account that was just created, and because we needed the Account Id to do this, we had to use an after-trigger. We also borrowed some values from the Account itself to help populate the required fields on the Opportunity, but notice we didn’t query for those values…more on that later in the article.

Another key best practice for after-trigger events: while it is possible to perform a DML operation on the record that initiated the trigger event, it should be avoided. If you think it through, performing a DML operation on that record from within the trigger causes the system to execute all triggers on the object again. Not only does this impact performance, it puts us at risk of creating an infinite loop. Salesforce guards against this by capping how many times your triggers can be re-executed within a transaction.

Later in this article we will discuss some best practices around how to set up your execution criteria so that you can hedge against infinite loops and improve performance. Simply put, after-trigger events are not the best place to perform DML operations on the triggered record, and if you come across a situation where you feel it is necessary, make sure you are well aware of the impact when the system recursively executes your trigger events.
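One common way to hedge against recursion (this is a general pattern, not the only option) is a static flag on a small helper class; static variables live for the length of the transaction, so a re-entrant trigger execution can see that the work has already been done. The class and trigger names below are placeholders.

public class OpportunityTriggerControl {
	//Static variables persist for the duration of the transaction
	public static Boolean hasRun = false;
}

trigger opportunityAfterUpdate on Opportunity (after update) {
	if (OpportunityTriggerControl.hasRun) {
		return; //Skip the logic when the trigger re-fires because of our own DML
	}
	OpportunityTriggerControl.hasRun = true;

	//...perform the after-update work here...
}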

Understanding Trigger Context

The context of a trigger is very important. We already discussed the before and after contexts, but there is also the context of what data is available to you at the time of trigger execution. In particular I am referring to the Trigger object. If you go back and review the examples I provided earlier in this article, you will notice the use of a variable called “trigger”. This is a reference to an object that holds information about the record or records that initiated the trigger. It also holds information about each record as it existed before the update, allowing you to make decisions based on a record’s previous values. To get some background on what kind of information the trigger context variable provides, you can review the Apex Trigger Context documentation.

The trigger.new variable holds the record(s) that were just inserted or updated (note that if you are looking for data that was deleted, you will instead need to use the trigger.old variable). Salesforce does a bit of magic when it compiles your trigger by looking at the fields you are referencing and making sure that data is available at runtime, so there is no need to query the information yourself. Just use the fields in your code and trust that Salesforce will take care of the rest.
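For example, in a delete event the records are only available through trigger.old. A minimal sketch (the trigger name and the 'Customer' rule are illustrative):

trigger preventCustomerAccountDelete on Account (before delete) {
	//Deleted records never appear in trigger.new, so read them from trigger.old
	for (Account oAccount : trigger.old) {
		if (oAccount.Type == 'Customer') {
			oAccount.addError('Customer accounts cannot be deleted.');
		}
	}
}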

The after-trigger example I provided previously demonstrates how you can simply loop through the records in trigger.new and reference Account fields without needing to query for the data explicitly. Just note that this does not work for data that is related to a record. For example, the first sample below attempts to reference information on the user record related to an Opportunity as its owner. Interestingly, this does not produce an error; you just get a null value for the profile name.

trigger exampleInvalidTrigger on Opportunity (before insert, before update) {
	for (Opportunity oOpportunity : trigger.new) {
		if (oOpportunity.Owner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

trigger exampleValidTrigger on Opportunity (before insert, before update) {
	Map<Id, User> mapUsers = new Map<Id, User>([SELECT Id, Profile.Name FROM User]);

	for (Opportunity oOpportunity : trigger.new) {
		User oOwner = mapUsers.get(oOpportunity.OwnerId);

		if (oOwner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

As you can see from the second example, we had to query for the profile information in order to use it. Note that this was only an example; you will not want to load all users into a map as I did. There are more efficient ways to get this information that I will discuss in the next section.

Bulk Mode Triggers

In all of the examples so far, you will notice that they involve looping through a collection of records made available through the trigger.new property. This is because all triggers execute in batch mode: whether it is a single record or a hundred, you will always work with the trigger.new collection, and you should prepare your trigger to handle a batch of records. This forces us to think about how we write our triggers. We have to take care in how we query additional information and keep in mind how many script statements we execute per record.

Let’s start by revisiting the example from the previous section. If you recall, I had a requirement to execute my code only if the Opportunity owner’s profile name was set to ‘Executive’. I could have written the trigger like this:

trigger badExample on Opportunity (before insert, before update) {
	for (Opportunity oOpportunity : trigger.new) {
		User oOwner = [SELECT Id, Profile.Name FROM User WHERE Id = :oOpportunity.OwnerId];

		if (oOwner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

The above example will actually work for triggers that have one or two opportunities in the trigger.new collection. However, as soon as the number of records increases beyond 100, you will start to get errors indicating that you executed “Too many SOQL queries”. Even if you expect fewer than 100 records to be updated at a given time, the limit is counted across the entire transaction. This means that if you have three triggers on an object that each run 50 queries, the third trigger to execute will throw an exception. So you will want to keep the number of queries you execute to a minimum. To get a better understanding of the types of limits you need to be aware of, please refer to the online Apex Governor Limits documentation.
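If you want to see how close a transaction is getting to its limits while you debug, the Limits class exposes the current consumption. A small sketch you could drop into a trigger or run as anonymous Apex:

//How many SOQL queries the transaction has issued so far, and the ceiling
System.debug('Queries used: ' + Limits.getQueries() + ' of ' + Limits.getLimitQueries());

//Heap consumption, which comes up again a little later in this article
System.debug('Heap used: ' + Limits.getHeapSize() + ' of ' + Limits.getLimitHeapSize());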

The best way I have found to limit the number of queries is to load the data I need into a map so that I can make a single query call and reference that data by record id as needed.

trigger exampleGoodButInefficientTrigger on Opportunity (before insert, before update) {
	Map<Id, User> mapUsers = new Map<Id, User>([SELECT Id, Profile.Name FROM User]);

	for (Opportunity oOpportunity : trigger.new) {
		User oOwner = mapUsers.get(oOpportunity.OwnerId);

		if (oOwner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

The example above works but is inefficient. I loaded all of the users into a map, which means I only used one query, but I really shouldn’t be loading an entire data set into memory like this. There are other limits to worry about, such as heap size (the amount of memory your code is consuming), that we should be conscious of. Here is a better way to do the same thing.

trigger exampleBetterTrigger on Opportunity (before insert, before update) {
	Set<Id> ownerIds = new Set<Id>();

	for (Opportunity oOpportunity : trigger.new) {
		ownerIds.add(oOpportunity.OwnerId);
	}

	Map<Id, User> mapUsers = new Map<Id, User>([SELECT Id, Profile.Name FROM User WHERE Id IN :ownerIds]);

	for (Opportunity oOpportunity : trigger.new) {
		User oOwner = mapUsers.get(oOpportunity.OwnerId);

		if (oOwner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

Now this example is much better. It loops through the list of opportunities first to figure out which owner Ids are in play, collects them into a set, and then uses that set to return only the users who actually own one of the opportunities. That way we limit the number of records we hold in the map, reducing our memory footprint and avoiding heap size limit exceptions.

But we can still do better. The problem is that we are doing an extra loop, and we don’t want to do anything we don’t have to; we want to keep our code as lean and simple as possible. One way to avoid that initial loop is to let a query do the work for us. This requires a more advanced query technique involving sub-queries. Here is an example of what the query should look like:

SELECT 
	Id
	, Profile.Name 
FROM 
	User 
WHERE 
	Id IN (SELECT OwnerId FROM Opportunity WHERE Id IN :oppIds)

The first part of the query is fairly standard: you simply define which fields you want to select and from which object. The more advanced part is in the WHERE clause. Here we use a sub-query to return only the owner Ids from the Opportunity object where the opportunity Id matches an Id in the oppIds collection. Note that because the sub-query filters on Opportunity Ids, this approach only works when the triggered records already have Ids, so the version below is declared for update events. Here is the query in use.

trigger exampleBestTrigger on Opportunity (before update) {
	Set<Id> oppIds = trigger.newMap.keySet();

	Map<Id, User> mapUsers = new Map<Id, User>([SELECT Id, Profile.Name FROM User WHERE Id IN (SELECT OwnerId FROM Opportunity WHERE Id IN :oppIds)]);

	for (Opportunity oOpportunity : trigger.new) {
		User oOwner = mapUsers.get(oOpportunity.OwnerId);

		if (oOwner.Profile.Name == 'Executive') {
			oOpportunity.IsExecutiveOpportunity__c = true;
		}
	}
}

Now that is much better. In this example we make a single query call and only one loop. But let me draw your attention to one other trick in the example above: the Map data type. You will notice that I am passing the query to the map when I create it. Traditionally, you would expect the syntax to look like the next example.

Map<Id, User> mapUsers = new Map<Id, User>();

List<User> listUsers = [SELECT Id, Name FROM User];
for (User oUser: listUsers) {
	mapUsers.put(oUser.Id, oUser);	
}

As you can see, this requires looping over the query result to add the values into the map. But in my code I made use of a little Salesforce.com magic: the constructor for a Map keyed by Id can take a list of sObjects and generate the map for you. So when I use the following syntax, Salesforce.com does the work for me and I can avoid that extra loop:

List<User> listUsers = [SELECT Id, Name FROM User];
Map<Id, User> mapUsers = new Map<Id, User>(listUsers);

…or if I want to do this in a single line of code I can do the following.

Map<Id, User> mapUsers = new Map<Id, User>( [SELECT Id, Name FROM User] );

You will find that maps are a powerful tool for making your code more efficient and less prone to limit exceptions. I encourage you to jump over to the Apex Map data type documentation and read more about the various methods and capabilities of maps.
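As a quick sketch of the map methods you will lean on most often (the query and the someOwnerId variable are placeholders for this example):

Id someOwnerId = UserInfo.getUserId();
Map<Id, User> mapUsers = new Map<Id, User>([SELECT Id, Profile.Name FROM User WHERE IsActive = true]);

//Look up a single record by Id without looping
if (mapUsers.containsKey(someOwnerId)) {
	User oOwner = mapUsers.get(someOwnerId);
}

//keySet() is handy for feeding Ids into another query's bind variable
Set<Id> userIds = mapUsers.keySet();

//values() returns the records as a list, ready to pass to a DML statement
List<User> listUsers = mapUsers.values();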

Using Trigger Criteria

In the previous example you saw how we tested the user’s profile to determine whether we wanted to update a field. This is a very important strategy to make note of. In most cases your trigger logic doesn’t need to execute every time an update occurs; more often you are looking for certain conditions to exist before performing your logic. In the last example we simply accessed the user’s profile name, looked for the value ‘Executive’, and updated the field only if that criterion was met. However, what if we wanted our trigger logic to execute only when a certain field was changed? The next example demonstrates that exact scenario.

To set this up, let us consider a use case where the Other Address field on a contact must match the contact’s account billing address. This means that when the account billing address is changed, we need to update every contact associated with that account. Because this involves changing multiple child records, it isn’t something that can be solved with a workflow rule and field updates. However, you can use a workflow rule and field updates to set the other address on a contact to the account’s billing address when the contact is created, which allows us to focus here on changes to the account billing address. Let’s start with the code and then break it down.

trigger updateContactOtherAddress on Account (after insert, after update) {
	if (trigger.isUpdate) {
		//Identify Account Address Changes
		Set<Id> setAccountAddressChangedIds = new Set<Id>();

		for (Account oAccount : trigger.new) {
			Account oOldAccount = trigger.oldMap.get(oAccount.Id);

			boolean bIsChanged = (oAccount.BillingStreet != oOldAccount.BillingStreet || oAccount.BillingCity != oOldAccount.BillingCity);
			if (bIsChanged) {
				setAccountAddressChangedIds.add(oAccount.Id);
			}
		}

		//If any, get contacts associated with each account
		if (setAccountAddressChangedIds.isEmpty() == false) {
			List<Contact> listContacts = [SELECT Id, AccountId FROM Contact WHERE AccountId IN :setAccountAddressChangedIds];
			for (Contact oContact : listContacts) {
				//Get Account
				Account oAccount = trigger.newMap.get(oContact.AccountId);

				//Set Address
				oContact.OtherStreet = oAccount.BillingStreet;
				oContact.OtherCity = oAccount.BillingCity;
				oContact.OtherState = oAccount.BillingState;
				oContact.OtherPostalCode = oAccount.BillingPostalCode;
			}

			//If any, execute DML command to save contact addresses
			if (listContacts.isEmpty() == false) {
				update listContacts;
			}
		}
	}
}

As you can see, there are many parts to this trigger, so let’s first look at the context. The trigger is set up to run when a record is inserted or updated. However, because this particular requirement is to update contacts, we can ignore any executions that result from an insert, since an account has to exist before contacts can be associated with it. You will also notice that I chose "after" events rather than "before" events. This is because we are not performing any DML against the account itself, so we are safe from causing the trigger to execute multiple times, and we only want to bother updating contacts once the account has been successfully committed to the database.

Now we can look at the meat of the trigger. Remember, all triggers run in a batch context, so we have to assume that a firing of the trigger is the result of multiple accounts being updated at once, typically due to a batch upload or mass update. An update could also occur for any number of reasons that have nothing to do with an address change. So the first loop in this trigger goes through the list of accounts and identifies which ones had an address change.

If you recall from the documentation, alongside the trigger.new property there are other properties we make use of here, in particular trigger.oldMap and trigger.newMap. These are Map data types keyed by account Id, and in the case of trigger.oldMap the values are the account records as they existed before the update. This allows us to test whether the account address fields have changed. To do this I retrieve the old account object from the map by account Id and compare the address fields (note that for the sake of space I only compared two address fields, but in a real-world scenario you will want to compare all relevant address fields). Once I identify that the address has changed, I add that account Id to a set so that I can use it later to get the contacts affected by the change.

You will notice that along the way I test whether there is any data in each collection. This is so that I don’t execute any unnecessary code when the trigger did not contain any accounts with an address change. If changes did occur, I query the related contacts by the set of account Ids I collected earlier. I then loop through that collection, access the account by Id from trigger.newMap (since that contains the user’s updates), set the contact Other Address fields accordingly, and finally issue an update command to save the contacts.

An Ignorant Trigger is Always Best Practice

Yes, an ignorant trigger is what we should be striving for; in fact, the dumber the trigger, the better. What I mean by this is that if you look at the examples we have been working with thus far, they have all had their logic defined in the trigger itself. There are a number of reasons this is less than desirable:

  1. The only way to unit test the trigger logic is to issue a DML command
  2. The logic defined in the trigger is not re-usable
  3. You can’t control the order of operation for objects that have more than one trigger

Let’s go right down the line. First, the only way to unit test logic that lives in a trigger is to initiate a DML operation, which limits the kinds of unit test scenarios you can create. You are basically restricted to tests where a record is inserted or updated; you can’t break your logic into smaller functions and test each one individually. This is particularly important for triggers that perform complex logic or calculations.

Second, the logic you define in a trigger isn’t re-usable. In many cases you can simply rely on the trigger to apply logic to a particular object, but what if you need to apply that logic to an object before it is saved, or what if you need to reuse part of the logic, say a method that performs a calculation?

Finally, and this is a big one, you have no control over the order in which triggers execute at runtime. If you have multiple triggers on an object, you can’t depend on a field that another trigger is expected to update, because Salesforce.com does not guarantee the order in which those triggers will fire. Don’t assume anything.

To address all of these concerns, I recommend applying a listener or observer pattern to your triggers (the terminology isn’t as important as the solution, but from here on I will refer to it as the listener pattern). Essentially, we downgrade the trigger to do nothing more than listen for records being inserted, updated, deleted, or undeleted and pass that information on to one or more methods. First, let’s take a look at the sample below to get some context on what this looks like.

trigger AccountBeforeEventListener on Account (before insert, before update) {
	if (trigger.isInsert) {
		AccountUtil.setDefaultValues(trigger.new);
		AccountUtil.setIndustryCode(trigger.new);
	}
}

public class AccountUtil {	
	public static void setDefaultValues(List<Account> listAccounts) {
		for (Account oAccount : listAccounts) {
			if (oAccount.Industry == null) {
				oAccount.Industry = 'Cloud Computing';
			}
		}
	}

	public static void setIndustryCode(List<Account> listAccounts) {
		for (Account oAccount : listAccounts) {
			if (oAccount.Industry == 'Cloud Computing') {
				oAccount.Industry_Code__c = 'CC';
			}
		}
	}
}

I adapted the first example from this article into the listener pattern, so it should look familiar. As you can see, all of the logic that was originally in the trigger has been moved to a class called AccountUtil with two static methods, setDefaultValues and setIndustryCode. You will notice that I still kept the logic that determines whether the trigger is handling an insert or an update, and that is intentional. The trigger really is the best place to decide what context you are dealing with and then send that message on appropriately. In this case, we only want to execute the methods for insert events.

There are a couple of things I want to call attention to about this solution. The first is that I can now call multiple methods in a specific order. Looking at the two methods, the first sets the default industry if the user didn’t already set it, and the second sets the industry code based on the industry value. Had we left this to chance and created a separate trigger for each of these methods, the setIndustryCode trigger could run before the defaults trigger and you wouldn’t get the results you expected.

The second item I want to call attention to is how moving these two methods into a separate class did not change our context. We are still able to update fields on the accounts without issuing any DML commands. This is because the data passed to the methods from the trigger is passed by reference, meaning the argument is just a pointer to where that collection of Accounts is stored in memory. So any changes you make inside the method are visible to the trigger and to any other method.
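A quick sketch of both points together: because the list is passed by reference, you can call AccountUtil.setDefaultValues directly on an in-memory list (from a unit test or anonymous Apex, for example) and see the change without any DML.

//No DML is needed to exercise the logic
List<Account> listAccounts = new List<Account>{ new Account(Name = 'Test Account') };

AccountUtil.setDefaultValues(listAccounts);

//The method modified the same objects we still hold a reference to
System.assertEquals('Cloud Computing', listAccounts[0].Industry);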

The next question you should be asking is how many triggers we should have per object. Most would say one. However, I like to step away from convention every now and then to satisfy my own logic: I prefer that before-trigger events and after-trigger events have their own triggers. For me it’s about the psychology of a before-trigger being a different type of event versus an after-trigger event; there really is no better reason than that I like it this way. You can have a single trigger that divides the work between before and after events, but I prefer keeping each one simpler. Below are examples of each approach, and you can make your own decision on how you want to manage this.

//Single Trigger for Before and After Events
trigger AccountEventListener on Account (before insert, after insert, before update, after update) {
	if (trigger.isBefore) {
		if (trigger.isInsert || trigger.isUpdate) {
			//DO SOMETHING
		}
	}
	else if (trigger.isAfter) {
		if (trigger.isInsert || trigger.isUpdate) {
			//DO SOMETHING
		}
	}
}

//Before Event Trigger 
trigger AccountBeforeEventListener on Account (before insert, before update) {
	if (trigger.isInsert || trigger.isUpdate) {
		//DO SOMETHING
	}
}

//After Event Trigger
trigger AccountAfterEventListener on Account (after insert, after update) {
	if (trigger.isInsert || trigger.isUpdate) {
		//DO SOMETHING
	}
}

As you can see, all of these examples produce the same results; it really is just a matter of preference.

Part II

In part two of this article I will get into best practices for unit testing your triggers, and I will also devote some time to demonstrating common trigger scenarios that you can use as a basis for your own solutions.

Update: Part 2 is now available here.