SPM - Utilization Logs History and Error Clarity

  • Increase length of time the logs remain available
  • Logs need to contain more information about the errors

Details:

The way to solve this is to reset the system so that it treats the records as not yet processed. This is not straightforward, and if done incorrectly it could result in duplicate entries in the utilization metric. We worked with Swetha to understand how to re-run these metrics if we needed to, and she gave us explicit direction on how to reset the system. This is my understanding of what needs to be done:

    1. Go to the ServiceMax processes tab
    2. Find the process related to Utilization (in production, PN-0000009885)
    3. Scroll to the related list ServiceMax Config Data (Dispatch Process)
    4. Open the configuration for “Schedule”
    5. Change the “Previous Execution Started On” and “Previous Execution Completed On” dates to a date before June 7
    6. Delete all of the Utilization records for that time period to avoid duplicate entries. There is no easy way to do this except writing some sort of script to do a mass delete. Swetha was unsure whether the engine would create the duplicates or not, so we included this step as a precaution.
    7. Re-run Utilization. It should then pick up the records from whatever date we set. 
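The reset steps above can be sketched in Python. Note that the object and field API names below (`Previous_Execution_Started_On__c`, `SVMXC__Utilization__c`, and so on) are assumptions for illustration, not confirmed ServiceMax names, and the actual update/delete would be done through the Salesforce API or data loader:

```python
from datetime import date

def build_reset_plan(cutoff: date, process_number: str = "PN-0000009885"):
    """Sketch of the reset procedure: roll back the Schedule's
    'Previous Execution' timestamps (step 5) and build a query for the
    Utilization records to mass-delete before re-running (step 6).
    All API names here are hypothetical."""
    cutoff_str = cutoff.isoformat()
    # Step 5: values to write back onto the "Schedule" configuration.
    schedule_update = {
        "Previous_Execution_Started_On__c": cutoff_str,    # hypothetical field
        "Previous_Execution_Completed_On__c": cutoff_str,  # hypothetical field
    }
    # Step 6: find Utilization records created on or after the cutoff so
    # they can be deleted as a precaution against duplicate entries.
    delete_query = (
        "SELECT Id FROM SVMXC__Utilization__c "            # hypothetical object
        f"WHERE CreatedDate >= {cutoff_str}T00:00:00Z"
    )
    return schedule_update, delete_query

schedule_update, delete_query = build_reset_plan(date(2016, 6, 7))
```

Once the schedule dates are rolled back and the matching Utilization records removed, re-running the process (step 7) should regenerate the metric from the chosen date forward.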

What is the underlying problem you intend to solve with this idea?
More concrete details on errors, and error logs archived for longer, to help investigate issues and conduct a thorough analysis
Product Area?
Reporting & Analytics
What version of ServiceMax are you on?
Summer 16
2 Comments
Retired

#Case00055044

Sushi Chef

This idea speaks directly to the management of SPM once the metrics are running on a regular basis. The execution logs are only retained for around two months. We were not aware of this, and key error logs had rolled off before we could fully address the errors. It would be great to have these retained for a significantly longer amount of time.

The logs themselves do not give enough information about the particular record or records which failed in a batch. We could really use a record link and a detailed message on exactly what data element caused the error. We had to resort to running comparison reports against the SPM objects to try to find the data record that failed.
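The comparison-report workaround amounts to a set difference: take the IDs of the source records that should have produced metric rows, subtract the IDs that actually appear in the SPM object, and the remainder are the candidates for the failed records. A minimal sketch (the IDs here are made up):

```python
def find_missing(source_ids, metric_ids):
    """Return IDs present in the source data but absent from the
    generated metric records -- the candidates for the failed rows."""
    return sorted(set(source_ids) - set(metric_ids))

# Example with invented work-order IDs: WO-002 produced no metric row.
missing = find_missing(
    ["WO-001", "WO-002", "WO-003"],
    ["WO-001", "WO-003"],
)
# missing == ["WO-002"]
```

A record link and a precise error message in the log would make this manual reconciliation unnecessary.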

Finally, we would like the error log to be definitive so that we know exactly how many records errored out. It was unclear whether the entire batch had processed or whether it stopped when it hit an error. This clarity would help us gauge how large the problem might be.

Our workaround was to re-run the data to generate new error logs. It was a time-consuming and risky process which we decided not to do in our production environment. But the option is there should we ever need to tweak the measures or rules and provide new historical reporting.