At Pitney Bowes we have hit many of Salesforce's governor limits (hence the bingo card analogy). They keep causing us challenges: we overcome one and then find the next. It felt like a good topic to ask the community about, so here are a few questions to get you thinking...
What Limits have you hit?
How do you overcome them?
What do you do to prevent hitting limits?
What are your top 3 limits that you hit?
Do you hit different limits during a Salesforce performance degradation?
Example limits: the SOQL 101 query limit, the Apex CPU time limit, 5 concurrently executing Apex batch jobs, 10,000 DML rows, and the 50,000 SOQL record limit.
For a list of the published limits see this link https://resources.docs.salesforce.com/214/latest/en-us/sfdc/pdf/salesforce_app_limits_cheatsheet.pdf
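When debugging, the Apex `Limits` class can show how close a transaction is to each of these limits. A minimal sketch (run it in Execute Anonymous or drop it near the end of a trigger handler):

```apex
// Log current governor limit consumption vs. the per-transaction maximums.
// All of these are standard methods on the built-in Limits class.
System.debug('SOQL queries:   ' + Limits.getQueries()       + ' / ' + Limits.getLimitQueries());
System.debug('Query rows:     ' + Limits.getQueryRows()     + ' / ' + Limits.getLimitQueryRows());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' / ' + Limits.getLimitDmlStatements());
System.debug('DML rows:       ' + Limits.getDmlRows()       + ' / ' + Limits.getLimitDmlRows());
System.debug('CPU time (ms):  ' + Limits.getCpuTime()       + ' / ' + Limits.getLimitCpuTime());
```

Reading these at a suspect point in the transaction usually tells you quickly which limit you are drifting towards.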
Thanks in advance for your insights.
Michael Majerus, Rob van Waveren, Adam Cooper, Brenden Burkinshaw, Casey Palmer, Hans van Mil, Dan Schiess, John Welisevich, Russell Jacobs, Nick Sauer, Sonia Genesse, Scott Willis, Alex Langston, Mark Taylor, Mark Varley, Ralf Fickert
I have @mentioned the top community collaborators to kick-start the conversation.
Hello, we recently had an issue with the available number of Workflow Rules, which was defined as 50 in our org. After raising a support ticket we quickly got this increased to 100. As we use ServiceMax in 10 countries, the limit of 50 meant a mere 5 rules per country, which was too little.

To keep within the limits you usually need to remove logs and similar records that are created daily (for example, a PM plan creates a log daily, which with a lot of PM plans eventually adds up to a lot of records). It's about clearing out unnecessary records of past Apex job completions and the like; our philosophy is that if an Apex job had no errors, we need not keep its logs.

If you use the SPM functionality, the SPM Logs limit is 1,000 records. Depending on the number of technicians this doesn't take long to reach, and once reached SPM stops collecting information. We have built an automatic job via Workbench which deletes them daily to stay within the 1,000 record limit.
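For reference, a cleanup like the one described above can also be done with a small scheduled Apex class instead of a Workbench job. This is only a sketch: the SPM log object API name used here (`SVMXC__SPM_Log__c`) and the seven-day cut-off are assumptions, so check your org's actual object name before using anything like it.

```apex
// Hedged sketch of a nightly log cleanup job. The object API name and
// the age cut-off below are assumptions, not confirmed ServiceMax names.
global class SpmLogCleanup implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // Delete at most 10,000 rows per run (the per-transaction DML row limit).
        List<SVMXC__SPM_Log__c> oldLogs = [
            SELECT Id FROM SVMXC__SPM_Log__c
            WHERE CreatedDate < :Date.today().addDays(-7)
            LIMIT 10000
        ];
        if (!oldLogs.isEmpty()) {
            delete oldLogs;
        }
    }
}
// Schedule it once, e.g. from Execute Anonymous, to run at 2am daily:
// System.schedule('SPM log cleanup', '0 0 2 * * ?', new SpmLogCleanup());
```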
Hi Mark Varley, thanks a lot for your reply!
I am interested in your comment about only having 5 workflows per country. Do you have country-specific workflow rules? Are you using these for email notifications, field updates, etc.?
It also sounds like the SPM functionality is incomplete: if it has a limit of 1,000 records, should it not include functionality to clear out old ones? Posting an idea for this may be useful.
Hi Richard Lewis
We do have some country-specific workflow rules; for example, when the office dispatches a work order, it triggers an email to the engineer to advise him so he can resync his device to get the data. We also have country-specific rules based on the information the engineer puts into the WO: certain escalation steps trigger notifications.
We have standard workflow rules (for all countries) to change the WO status depending on the data input by the engineer, e.g. if Job Complete = N and Parts Required = Y, a rule updates the status field to "Pending Part Approval", which triggers the back office to process it. So whilst we have a single rule for this type of standard thing, the escalation rules vary by country, so each country has its own.
Agree with you about the SPM: the limit is 1,000 records, but they don't auto-delete, so eventually SPM stops running because there are more than 1,000. We had to manually set up something via Workbench to auto-delete them daily... perhaps auto-deletion within the SPM functionality itself should be an idea.
Hope this helps clarify.
Hi Mark, there is a reason Salesforce limits this (performance), so if I were you I would look for ways to reduce the number of rules you have. Maybe send email notifications to a shared mailbox and have forwarding rules on that? There are many benefits to standard global processes, and this includes notifications; the exceptions are legal or regulatory needs. For example, your testing is reduced, as well as your change cost each time you need to change one of them.
I also advise against field-update workflow rules, as they can cause your code to trigger more than once, which takes you towards a number of limits including SOQL 101.
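One common mitigation for that re-trigger problem is a static guard flag, so the trigger body runs only once per transaction even when a workflow field update re-fires the update triggers. A sketch only (the class and object names are illustrative, and the class and trigger would live in separate files):

```apex
// Static state lives for the duration of one transaction, so a workflow
// field update that re-fires this trigger will find hasRun == true.
public class WorkOrderTriggerGuard {
    public static Boolean hasRun = false;
}

trigger WorkOrderTrigger on SVMXC__Service_Order__c (after update) {
    if (WorkOrderTriggerGuard.hasRun) {
        return; // skip the workflow-driven re-entry
    }
    WorkOrderTriggerGuard.hasRun = true;
    // ... the heavy logic, with its SOQL and DML, executes only once ...
}
```

Guards like this need care with batch updates (the flag blocks later chunks in the same transaction too), which is why reducing the field-update rules themselves is still the better fix.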
Can you put in an idea for the SPM auto delete please? And link to it here.
Hi Mark, navigate to the idea, copy the url, paste the url in to a reply to a post and the magic will happen.
Since Work Orders are the main focus when it comes to service, I've hit limits on this object due to cross-object references. I needed to start looking at this differently and begin using/consolidating Apex, Visualforce and/or triggers.
I also had this limit increased from 10 to 15; that didn't last long, but it allowed more time to implement a different approach.
I've also encountered query cursor limits and batch size limits in my integration with our ERP system. Some of this was addressed within the integration itself by controlling batch size and using caching methods.
It's also wise to have your daily API limit increased when doing large imports, as you're likely to surpass your rolling 24-hour limit, but I guess that goes without saying.
Salesforce can increase the active workflow rules per object limit up to a hard limit of 300.
Some troubleshooting tips when hitting the Salesforce governor limits: https://userdocs.servicemax.com/ServiceMaxHelp/Summer16/en_us/svmxhlp.htm#iPadApp/TipsforTroubleshoo...
Thanks Omar Rodriguez, have you had any experience with running this many active workflow rules? I would be quite concerned about the performance impact of having that many active rules to evaluate and action on each insert or update.
I came from SF Premier Support, and we saw a lot of cases requesting an increase to this limit; most of the time customers requested the 300 maximum and that was it. There is always a cautionary note about performance, but I haven't seen any issues in my years with them. Also, customers are growing and SFDC needs to grow as well, which is why they recently changed the standard time-based workflow limit from 50 to 1,000 across all editions (automatically for new orgs, and via a case for orgs created before the change).
SFDC is, understandably, very careful with governor limits and customizations on its platform; there are examples where customizations in customers' orgs (triggers, to be more specific) crashed the whole platform.
When hitting governor limits, my recommendation is to monitor the behavior and its frequency, have some patience, and gather notes. If it turns into a big, frequent or blocking issue, please open a case with the corresponding product support team (SFDC, ServiceMax, etc.).
Hi Omar Rodriguez, thanks for your insights! I will bear your experience and suggestions in mind for the future.
Great timing on this one: we are consistently hitting the SOQL 101 limit whenever we create a WO. When we moved from Summer '16 to Summer '17 we suddenly started getting this issue, and have seen it ever since. We have quite a few actions triggering on WO creation; the system was working, but ServiceMax added some more transactions in Summer '17 which took us past the 101 limit.
To get the system working, we had to remove the target source updates in the SFM and replace them with a timed process flow. Since then, we have tried to invoke some other processes against the WO, like forcing it to do an automatic entitlement check, but this causes the 101 error.
We realise we will need to review our own processes that are being triggered when a WO is created, some of which are legacy code, but that is going to take time and resources.
We've had similar issues in the past. A few points:
1) Are you hitting the standard 101 SOQL limit, or the SVMX-specific 101 limit? Managed packages get their own count, so the limit will be hit purely by custom code or purely by ServiceMax managed code.
2) Using timed workflows may help, but it's likely coming from actual Apex code. You can bundle the non-essential code into an @future method, which will run asynchronously with its OWN SOQL count and a limit of 200 vs 100, and still finish within a few minutes. Examples would be: updating the "Last Visit Date" on the account, updating data on the installed product record, any updates to the technician record, etc. Basically, any "related object" updates that will in turn trigger even MORE code once they fire. Try to keep the Apex code fired by the technicians strictly on the WO and WD objects.
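To illustrate, a sketch of moving one such "related object" update into an @future method (the custom field `Last_Visit_Date__c` and the class name are hypothetical, not ServiceMax fields):

```apex
public class WorkOrderAsyncUpdates {
    // @future methods take only primitive arguments (Ids, not sObjects)
    // and run asynchronously with a fresh set of governor limits,
    // including 200 SOQL queries instead of the synchronous 100.
    @future
    public static void updateLastVisitDates(Set<Id> accountIds) {
        List<Account> accts = [SELECT Id FROM Account WHERE Id IN :accountIds];
        for (Account a : accts) {
            a.Last_Visit_Date__c = Date.today(); // hypothetical custom field
        }
        update accts; // any triggers this fires also run in the async context
    }
}
// From the WO trigger, collect the parent account Ids in a Set<Id> and call:
// WorkOrderAsyncUpdates.updateLastVisitDates(accountIds);
```

Note the trade-off raised later in this thread: @future work queues behind platform load, so keep genuinely time-critical updates synchronous.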
3) The Developer Console actually has some amazing tools for tracking what code is firing; I recommend spending an hour on the Trailhead module. If you can capture a WO hitting the 101 limit in the Dev Console, you'll likely find some low-hanging fruit that you can flip around, no problem!
Best of luck
Thanks for this feedback; you have nicely outlined our next steps, though we try to avoid coding where we can.
Our main issue (gripe) is that in our previous version of ServiceMax everything was working correctly, but when we upgraded everything fell over, and it was down to us to move some of our processes onto an asynchronous basis, which we did to go live.
Now we are trying some new developments and are hitting the limits again, so we either clean up our code or move some more items to an asynchronous state.
The problem with having too many asynchronous processes shows up when there is a performance degradation in Salesforce: the typical 5-second transaction may not execute for hours due to the building backlog. A few things you could try to reduce the SOQL counts on the objects:
1. Look at SOQL queries that execute within "for" loops. They are big contributors to SOQL counts.
2. Move field updates from workflow rules into code (ideally in the before update/before insert portion of the trigger) so that the code does not execute multiple times and add SOQL queries.
3. As mentioned in earlier comments, move non-priority transactions to asynchronous processing.
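A sketch of point 1, assuming illustrative ServiceMax object/field names (`SVMXC__Service_Order__c`, `SVMXC__Company__c`); the pattern is what matters, not the names:

```apex
// BAD: one SOQL query per work order. On a 200-record batch this alone
// consumes 200 queries and blows straight past the 101 limit.
for (SVMXC__Service_Order__c wo : Trigger.new) {
    Account acc = [SELECT Name FROM Account WHERE Id = :wo.SVMXC__Company__c];
    // ... use acc ...
}

// GOOD: one query for the whole batch, results looked up from a map.
Set<Id> accountIds = new Set<Id>();
for (SVMXC__Service_Order__c wo : Trigger.new) {
    accountIds.add(wo.SVMXC__Company__c);
}
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Name FROM Account WHERE Id IN :accountIds]
);
for (SVMXC__Service_Order__c wo : Trigger.new) {
    Account acc = accountsById.get(wo.SVMXC__Company__c);
    // ... use acc; exactly one SOQL query regardless of batch size ...
}
```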
Hope this helps!
At our site we have also been consistently hitting the 101 SOQL error on the ServiceMax side. At every turn there is a new roadblock to get across. By now we've implemented recursion guards in all our custom code, which is combined into one trigger handler so the code can't re-run, whatever happens. We've postponed as many workflows as possible and limited the number of SVMX rules, SFMs and wizards. This leads us time and again to the next problem; for the moment we are battling CPU time limits as well as SOQL limit errors.
Our main issue is that we use cases, work orders and events and want a lot of automation, which makes the ServiceMax triggers very unpredictable. We need this setup as we are in a mixed licensing story with the rest of the business. In this constellation, however, a simple automation like "technician confirmed the event, so update the work order status and the case status" can cause you a world of pain. If only we could disable the SVMX triggers and fire the ServiceMax managed code from a handler, passing our old and new values: this would allow us to perfectly control the sequence, reducing the DML statements, and use the SVMX trigger power where we need it most, hence reducing the SOQL queries. So firing the SVMX triggers NOT on every update of every custom field on every object would be a useful feature for us.
Tips I can add from our experience:
- On the object reference side, find your longest hop and break it in half with a Process Builder, copying the value across to the new object and shortening your formula's long tail. This frees up a lot of references. Also rely on the SVMX triggers pulling information across, like product to work order; this removes the need to go from work order to component to product. Lastly, if you are in dire need, don't pull the name of the related object but use the ID you find in the object before. This works well for record types, business hours, etc. Do mind deployability if you use this.
Thanks a lot Koen Vynckier for sharing your experience. I hope you manage to reduce your SOQL errors.