Outdated: Stage I jobs

Versioning note

This page documents the jobs which run against the Stage I warehouse. This version is no longer in use.

 

  1. Versioning note
  2. Server and Database structure 
    2.1. Org_MSCRM 
    2.2. Org_Warehouse
    2.3. Org_Analytics
  3. Job Descriptions 
    3.1. Job "Extract 1 – Base Records – Org" 
      3.1.1. 1. Extract - Get opportunity and person data
      3.1.2. 2. Extract - Get fin aid data
      3.1.3. 3. Fail - Send email 
    3.2. Job "Extract 2 – Subrecords and processing – Org" 
      3.2.1. 1. Verify - Check that Extract 1 has succeeded
      3.2.2. 2. Extract - Get subrecord data as of midnight 
      3.2.3. 3. Extract and Transform - Set local data dates 
      3.2.4. 4. Extract and Transform - Set local metadata dates 
      3.2.5. 5. Transform - Flatten export high schools from analytics high schools
      3.2.6. 6. Transform - Flatten export scores from analytics scores
      3.2.7. 7. Transform - Parse subrecord data for exports 
      3.2.8. 8. Transform - Move slow-changing data from export tables to analytic tables 
      3.2.9. 9. Cleanup - Truncate export sorting tables 
      3.2.10. 10. Export - Create CSV file for export 
      3.2.11. 11. Export - Create Zip file for export 
      3.2.12. 12. Export - Delete CSV
      3.2.13. 13. Analyze - Run SPSS task 
      3.2.14. 14. Succeed - Send email 
      3.2.15. 15. Fail - Send email 
      3.2.16. 16. Cancel - Send email 
    3.3. Job "Extract 3 – Analytics – Org" 
 

 

Server and Database structure 

The extract process requires two databases on the same SQL Server instance (the CRM server) and an additional database on a SQL Server instance connected to the reporting machine (the report server). The transaction database on the CRM server, called Org_MSCRM, has a name which is hardcoded into the extraction steps of the extract jobs. The warehouse database on the CRM server is called Org_Warehouse. The analytics database on the report server is called Org_Analytics.

Org_MSCRM 

In order to retrieve data from the Org_MSCRM database, the extraction steps must run in the context of a login which is mapped to a user internal to the CRM installation on Org_MSCRM.

  •     The login used for this is 422X\C422System. This login is set explicitly in each job step which extracts data from Org_MSCRM.
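
The mechanism used to set this login is not shown on this page; as a hedged sketch (not the actual job-step code), a T-SQL step could impersonate the login explicitly:

```sql
-- Hedged illustration: one way a T-SQL job step can run under a specific
-- login is explicit impersonation around the extraction queries.
EXECUTE AS LOGIN = '422X\C422System';

-- ... extraction queries against Org_MSCRM ...

REVERT;  -- return to the original security context
```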

The extract process calls the following function in Org_MSCRM:

  •     fn_UTCToTzSpecificLocalTime

The extract process selects from the following views in Org_MSCRM:

 

| VIEW | IS FILTERED | EXTRACT JOB STEP |
|------|-------------|------------------|
| A422_Award | | 2.2 |
| FilteredA422_dom_oppstatuscategory | x | 2.2 |
| A422_dom_scoresource | | 2.2 |
| FilteredA422_ethnicity | x | 2.2 |
| FilteredA422_ids | x | 2.2 |
| A422_Interest | | 2.2 |
| A422_OpportunityStatusChange | | 2.2 |
| FilteredA422_PersonOrg | x | 2.2 |
| A422_PersonScore | | 2.2 |
| FilteredAccount | x | 2.2 |
| ActivityPointer | | 2.2 |
| FilteredActivityPointer | x | 1.1 |
| Appointment | | 2.2 |
| CampaignResponse | | 2.2 |
| Contact | | 1.1 |
| FilteredContact | x | 2.2 |
| CustomerAddress | | 1.1 |
| FilteredCustomerRelationship | x | 2.2 |
| Email | | 2.2 |
| Fax | | 2.2 |
| FinAidProfile | | 1.1 |
| Letter | | 2.2 |
| FilteredOpportunity | x | 1.1 |
| Opportunity | | 1.1, 2.2 |
| PhoneCall | | 2.2 |
| ServiceAppointment | | 2.2 |
| FilteredStatusMap | | 2.2 |
| SystemUserBase | | 2.3, 2.4 |
| Task | | 2.2 |
| UserSettingsBase | | 2.3, 2.4 |

 

Org_Warehouse

Data is placed in three types of tables: warehouse tables, export tables, and export temporary tables.

 

| TABLE | TYPE |
|-------|------|
| AnalyticsMaster | warehouse |
| FinAidProfile | warehouse |
| Activity | warehouse |
| Awards | warehouse |
| Interests | warehouse |
| PersonOrgs | warehouse |
| Scores | warehouse |
| StaffAssignment | warehouse |
| StatusChange | warehouse |
| Xport | export |
| XActivity | export temporary |
| XId | export temporary |
| XRace | export temporary |
| XRelation | export temporary |
| XScore | export temporary |

 

Warehouse tables hold data for use in analytics. All of these tables hold effective-dated data, either using dates set during the extract process or modification dates retrieved from Org_MSCRM. The warehouse tables are AnalyticsMaster and FinAidProfile as the base data tables and Activity, Awards, Interests, PersonOrgs, Scores, StaffAssignment, and StatusChange as secondary tables. Warehouse tables may hold permanent data or be refreshed nightly. Warehouse tables have primary keys defined on the combination of the effective-date field and the primary key of the base table from Org_MSCRM, with the exception of StatusChange whose primary key is only the Org_MSCRM primary key (as StatusChange records can never be updated).

Export tables consist of Xport and any custom export tables defined on a per-client basis. These tables reflect an overnight snapshot of data from Org_MSCRM and are used to create an export file. The export tables are refreshed nightly. The Xport table primary key is the opportunity primary key from the Org_MSCRM database.

Export temporary tables are used to sort subrecord data in order to provide "the top N records" of their type for the export tables. Each holds one type of subrecord, a person or opportunity key, and an ordering field named "ord". The export temporary tables are prefixed with an "X": XActivity, XId, XRace, XRelation, and XScore. The export temporary tables are truncated at the successful end of the extract process. The export temporary tables do not have primary keys.
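
As a hedged illustration of the structures described above (all non-key columns are invented for the sketch), a warehouse table and an export temporary table might be shaped like this:

```sql
-- Warehouse table: composite primary key of the effective date plus the
-- Org_MSCRM base-table primary key (StatusChange is the exception).
CREATE TABLE dbo.Awards (
    AwardId       UNIQUEIDENTIFIER NOT NULL,  -- Org_MSCRM primary key
    EffectiveDate DATETIME         NOT NULL,  -- set during the extract
    AwardName     NVARCHAR(100)    NULL,      -- illustrative data column
    CONSTRAINT PK_Awards PRIMARY KEY (EffectiveDate, AwardId)
);

-- Export temporary table: one subrecord type, a person or opportunity key,
-- an "ord" sorting field, and no primary key.
CREATE TABLE dbo.XRace (
    PersonId UNIQUEIDENTIFIER NOT NULL,
    RaceName NVARCHAR(100)    NULL,
    ord      INT              NOT NULL  -- "top N records" selection order
);
```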

See the Org_Warehouse table definitions pages for fuller descriptions of these tables.

 

Analytics views are defined in Org_Warehouse for use by the analytics applications. The view "vw_DescAnalytics_Complete" selects from the other views to provide a single selection point for the application. The other views select from the warehouse tables:

  •     vw_DescAnalytics_ActiveHighSchool
  •     vw_DescAnalytics_ActiveHighSchool_NonUnique
  •     vw_DescAnalytics_Activity
  •     vw_DescAnalytics_Awards
  •     vw_DescAnalytics_FinAidProfile
  •     vw_DescAnalytics_Master
  •     vw_DescAnalytics_Scores
  •     vw_DescAnalytics_StatusChange

Org_Analytics

The Org_Analytics database sits on the reporting server. Its role in the extract process is only to hold the view "vw_DescAnalytics_Complete", which is a pointer to the "vw_DescAnalytics_Complete" view in the Org_Warehouse database. The analytics application selects from this view using a login which has permissions in the Org_Analytics and Org_Warehouse databases.

  •   The login used for this is 422X\ExecAnalytics.

During the transition to the new process, the old view will be kept as "vw_DescAnalytics_Old". When the tables and views of the old process are cleared out, this view will be deleted.

This database may also hold views which are simply pointers to views or tables on Org_Warehouse so that those objects can be worked with transparently in the reporting server.

Job Descriptions 

The extraction process performs three functions: it extracts all required data from Org_MSCRM into Org_Warehouse; it produces an export file from the export table in Org_Warehouse; and it launches the analytics application on the report server to run against Org_Analytics. This is done using three SQL Server Agent jobs per client. If successful, this sequence of jobs will send two "success" emails, one for the export and one for the analytics. Failure at any point will cause a "fail" email to be sent.

 

Job "Extract 1 – Base Records – Org" 

This job exists on the CRM server and is scheduled to run as close to midnight as possible. It extracts opportunity, person, and financial aid data from the Org_MSCRM database; this data is not effective-dated in Org_MSCRM in a way useful to analytics, so it must be retrieved at (or near) midnight to reflect a midnight cutoff for the day.

This job is expected to take between 2 and 6 minutes per client.

1.  Extract - Get opportunity and person data

The retrieval from the Org_MSCRM database is an extraction step and runs in the explicit context of login 422X\C422System.

The export table is truncated when this step begins; until then, it still holds the previous run's data.

All of the following data is placed directly in the export table. Some of this data will also be stored in the AnalyticsMaster warehouse table, but it is placed into the export table first so that the export table can later serve as a staging table for determining the current and deleted opportunity indicators.

  • The opportunity data is drawn from the unfiltered opportunity view for faster performance.
  • The person and address data is drawn from the unfiltered contact and customeraddress views for faster performance.
    • Some transformation of the *Allow fields is performed to show 1/0 as True/False.
    • The BirthDay field is cast as a string.
    • The Address_02* fields are drawn from the most recent customeraddress records (using modifiedon/customeraddressid) with the following hardcoded limitations (see the sketch after this list): 
      •     addressnumber NOT IN (1,2)
      •     addresstypecode = ‘200000’
  • The first activity date is drawn from the filtered opportunity and activitypointer views (joined with a substring of the activity GUID) because performance is not greatly affected. The date is transformed to a string.
  • The hardcoded organization name and department are added to the export table now.
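
A hedged sketch of the Address_02* selection, assuming illustrative Xport column names; the hardcoded limitations and the modifiedon/customeraddressid ordering come from the list above:

```sql
-- The most recent qualifying customeraddress row per person wins.
;WITH RankedAddress AS (
    SELECT ca.ParentId, ca.Line1, ca.City, ca.ModifiedOn,
           ROW_NUMBER() OVER (
               PARTITION BY ca.ParentId
               ORDER BY ca.ModifiedOn DESC, ca.CustomerAddressId DESC) AS rn
    FROM Org_MSCRM.dbo.CustomerAddress AS ca
    WHERE ca.AddressNumber NOT IN (1, 2)     -- hardcoded limitation
      AND ca.AddressTypeCode = 200000        -- hardcoded limitation
)
UPDATE x
SET x.Address_02_Line1 = ra.Line1,           -- illustrative column names
    x.Address_02_City  = ra.City
FROM dbo.Xport AS x
JOIN RankedAddress AS ra
    ON ra.ParentId = x.PersonId AND ra.rn = 1;
```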

 

2.  Extract - Get fin aid data

The retrieval from the Org_MSCRM database is an extraction step and runs in the explicit context of login 422X\C422System.

This step defines the date "Yesterday" as a date with no time to be used as the effective date of new records. It treats the previous local day as yesterday only if the local time is prior to 10AM, after which the day shifts forward by one; this is to allow extractions run during the working day or in the evening to register the day’s data.

If any data is already in the warehouse table with an effective date of "Yesterday", that data is deleted so that this run's data can replace it.  (Presumably, if an extract must be run twice in one day, the first run was in error.) 
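
A hedged sketch of the "Yesterday" computation and the replace-on-rerun delete (table and column names are illustrative):

```sql
DECLARE @Now DATETIME, @Yesterday DATETIME;
SET @Now = GETDATE();  -- local server time

-- Midnight of the previous local day before 10AM; midnight today afterward.
SET @Yesterday = DATEADD(DAY,
        CASE WHEN DATEPART(HOUR, @Now) < 10 THEN -1 ELSE 0 END,
        DATEADD(DAY, DATEDIFF(DAY, 0, @Now), 0));  -- strips the time portion

-- Replace any data already effective-dated "Yesterday" before reinserting.
DELETE FROM dbo.FinAidProfile WHERE EffectiveDate = @Yesterday;
```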

The financial aid data is drawn from the unfiltered opportunity and finaidprofile views for faster performance and placed directly into the corresponding analytics warehouse table.

  • This insert uses a checksum backed up by a field comparison to determine if data is new. Only new data is inserted.
  • Only data which is non-null in at least one field is inserted.

3.  Fail - Send email 

This job sends an email if it fails; it does not report a successful run.

Job "Extract 2 – Subrecords and processing – Org" 

This job exists on the CRM server and is scheduled to run after the Extract 1 job for this client completes. It extracts all other required data from the Org_MSCRM database, produces the export file, and triggers the analytics application. All data retrieved by this job is effective-dated in Org_MSCRM, and only data created prior to midnight is retrieved.

This job is expected to take between 5 and 40 minutes per client.

1. Verify - Check that Extract 1 has succeeded

This step checks that Extract 1 has completed successfully; if it cannot find a successful run of that job in the last 24 hours, it waits an hour and retries to allow a slow Extract 1 to finish running. If at that time a successful Extract 1 cannot be found, the job cancels all further action and sends a cancellation email.
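
A hedged sketch of the verification query (the job name is illustrative; msdb.dbo.agent_datetime is the msdb helper that combines the integer run_date/run_time history columns, and the one-hour wait and retry would likely be the job step's own retry settings):

```sql
DECLARE @LastSuccess DATETIME;

SELECT @LastSuccess = MAX(msdb.dbo.agent_datetime(h.run_date, h.run_time))
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = N'Extract 1 – Base Records – Org'  -- illustrative job name
  AND h.step_id = 0       -- step 0 is the job outcome row
  AND h.run_status = 1;   -- 1 = succeeded

IF @LastSuccess IS NULL OR @LastSuccess < DATEADD(HOUR, -24, GETDATE())
    RAISERROR('No successful Extract 1 run in the last 24 hours.', 16, 1);
```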

2. Extract - Get subrecord data as of midnight 

Next the job retrieves all subrecord information. This is an extraction step and runs in the explicit context of login 422X\C422System.

The pattern for this step is that each subrecord type is retrieved either into a temporary export table (if it is not needed for analytics; the race temporary export table is an exception, as it is also used for analytics) or into a warehouse table. Records which will be flattened for a "top N records" export have a sort order created during the retrieval, but the number of subrecords retrieved is not limited to N. For certain record types, a large performance improvement comes from retrieving from unfiltered views in the CRM database; for these records, additional joins are sometimes needed to retrieve all of the fields usually available from the filtered view (and, in the next two steps, additional date extraction is required). For certain record types, associated data or domain tables are also included in the join in order to retrieve fields not included in the standard record view. For all record types except activities warehoused for analytics, all records from the CRM database (subject to business rule limitations) are retrieved; for activity warehousing, only new or updated records are retrieved. For all record types, only records created before the "Today" cutoff point are retrieved.

  1. This step defines the date "Today" as a datetime to be used as the cutoff date of selected records. It treats the current local day as today only if the local time is prior to 10AM, after which the day shifts forward by one; this is to allow extractions run during the working day or in the evening to select the day’s data. The time of "Today" is local midnight expressed as UTC time so that it can be compared to UTC times in the CRM database records.
  2. Customer Relationship: Next it retrieves (filtered) customer relationship data along with data on the related persons from the (filtered) contact table and places it in a temporary export table, ordered by modification date for each person, where the customer relationship record was created prior to the cutoff.
    • Customer relationships are limited to records with a PartnerRoleIdName of Parent, Parents, Father, Mother, or Guardian.
  3. Race: Next it retrieves (filtered) race data and places it in a temporary export table, ordered by race name (alphabetically descending) for each person, where the race record was created prior to the cutoff.
    • Duplicate race category + race name combinations are eliminated.
  4. Id: Next it retrieves (filtered) id data along with data on the id name from the (filtered) id name domain table and places it in a temporary export table, ordered by id name’s display order ascending, modification date descending, and id guid for each person, where the id record was created prior to the cutoff.
    • Id name domain information retrieved is the display order.
  5. Activity (export): Next it retrieves (unfiltered) activity data (from the general ActivityPointer table in the CRM database rather than the various CRM tables for different types of activities) and places it in a temporary export table, ordered by actual end date descending and activity guid for each person or opportunity, where the activity record was created prior to the cutoff.
    • Activities are limited to records with non-null subjects and non-null actual end dates.
    • The unfiltered activity view is used here because the large number of activities makes the performance improvement large.
  6. Interest: Next it clears the interests warehouse table, retrieves (unfiltered) interest data, and places it in the interests warehouse table, ordered by modification date and interest guid for each opportunity, where the interest record was created prior to the cutoff.
    • Interests are limited to records with a state code of 1 (active).
    • The interest table is joined to the FilteredStatusMap table to retrieve data only available from filtered views.
    • The interest table is joined to the opportunity table to exclude interests for deleted opportunities which have not been marked for interest deletion.
  7. Person-Organization/High School: Next it clears the person-organization warehouse table, retrieves (filtered) person-organization data along with data on the organizations from the (filtered) account table, and places it in the person-organization warehouse table unordered, where the person-organization record was created prior to the cutoff.
    • The person-organization data is retrieved with a DISTINCT limitation to prevent duplicates.
    • The person-organization table is joined to the opportunity table.
      • This limits the person-organization data to persons who have opportunities;  related persons' data is not warehoused unless those relations have their own opportunities.
      • This excludes person-organization data for deleted opportunities which have not been marked for person-organization deletion.
    • This data is not ordered because it is not needed for "top N records" export.
  8. Status Change: Next it clears the status change warehouse table, retrieves (unfiltered) status change data along with data on the status category from the (filtered) status category domain table, and places it in the status change warehouse table unordered, where the status change record was created prior to the cutoff.
    • Status category domain information retrieved is the display order.
    • The status change table is joined to the FilteredStatusMap table to retrieve data only available from filtered views.
    • The status change table is joined to the opportunity table to exclude status changes for deleted opportunities which have not been marked for status change deletion.
  9. Award: Next it clears the award warehouse table, retrieves (unfiltered) award data, and places it in the award warehouse table, ordered by modification date descending and award guid for each opportunity, where the award record was created prior to the cutoff.
    • The award table is joined to the FilteredStatusMap table to retrieve data only available from filtered views.
    • The award table is joined to the opportunity table to exclude awards for deleted opportunities which have not been marked for award deletion.
  10. Score: Next it clears the score warehouse table, retrieves (unfiltered) score data along with data on the score source from the (unfiltered) score source domain table, and places it in the score warehouse table unordered, where the score record was created prior to the cutoff.
    • Score source domain information retrieved is the official source flag.
    • The score table is joined to the FilteredStatusMap table to retrieve data only available from filtered views.
    • The score table is joined to the opportunity table to exclude scores for deleted opportunities which have not been marked for score deletion.
  11. Activity (warehouse): Next it retrieves activity data and places it in the activity warehouse table (the incremental pattern is sketched after this list). The activity data is retrieved from the various activity tables rather than from the activity pointer table; all of the data retrieved is common to all activities. The activity warehouse table is never cleared out, so the nightly data represents incremental changes only; this prevents the transfer of very large numbers of activity records each night. Each portion of the activity data is retrieved from an unfiltered view and placed in the table unordered, where the activity record was created prior to the cutoff.
    • Each type of activity is retrieved first by attachment to opportunity and then by attachment to person.
    • The types of activities are:
      • Appointment
      • ServiceAppointment
      • CampaignResponse
      • Email
      • Letter
      • Fax
      • PhoneCall
      • Task
    • The incremental change is determined by using the activity record’s guid and modification date; only the most recent activity for that guid currently stored in the warehouse table (determined by the IsCurrent flag) is considered for comparison.
    • All newly inserted activity records are considered the most recent and have their IsCurrent flag set on.
    • Each activity table is joined to the FilteredStatusMap table to retrieve data only available from filtered views.
    • Each activity table is joined to the opportunity table to exclude activities for deleted opportunities which have not been marked for activity deletion.
  12. Activity (warehouse): After all of the activity incremental changes are inserted into the activity warehouse table, activity records which no longer exist in the CRM (unfiltered) activity pointer are deleted from the activity warehouse table.
  13. Activity (warehouse): Next it updates the IsCurrent flag on records in the activity warehouse table on any records which are now no longer the most recent because of the newly inserted records.
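
A hedged sketch of the incremental comparison for one activity type (Appointment), with illustrative warehouse column names; @Today is the UTC-midnight cutoff defined in item 1:

```sql
-- Insert only new or updated appointments, compared against the row
-- currently flagged IsCurrent for each activity guid.
INSERT INTO dbo.Activity (ActivityId, ModifiedOn, Subject, IsCurrent)
SELECT src.ActivityId, src.ModifiedOn, src.Subject, 1
FROM Org_MSCRM.dbo.Appointment AS src
LEFT JOIN dbo.Activity AS cur
    ON cur.ActivityId = src.ActivityId AND cur.IsCurrent = 1
WHERE src.CreatedOn < @Today                -- midnight cutoff (UTC)
  AND (cur.ActivityId IS NULL               -- brand-new activity
       OR cur.ModifiedOn < src.ModifiedOn); -- updated activity

-- Rows superseded by a newer version lose their IsCurrent flag (item 13).
UPDATE old
SET old.IsCurrent = 0
FROM dbo.Activity AS old
JOIN dbo.Activity AS newer
    ON newer.ActivityId = old.ActivityId
   AND newer.ModifiedOn > old.ModifiedOn
WHERE old.IsCurrent = 1;
```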

3. Extract and Transform - Set local data dates 

4. Extract and Transform - Set local metadata dates 

Next the job retrieves UTC/time zone data for all date fields not retrieved with the subrecord data. These are two extraction steps, and they run in the explicit context of login 422X\C422System.

The date fields are filled in a separate set of steps because the unfiltered views, used in the record extraction for performance improvement, contain only UTC times. To translate these times to local times, the fn_UTCToTzSpecificLocalTime function defined by CRM in the CRM database is called on the retrieved UTC time data to create local time data.

This process is done in two steps to keep the data and metadata fields separate. The intention in the near future is to cut down on the number of local time fields which are required, starting with the metadata date fields.

These steps are time-consuming.

5. Transform - Flatten export high schools from analytics high schools

The data in the warehouse high school (PersonOrgs) table is used to populate the Organization_01_* fields in the export table.

  • The high school and export tables are joined on the person id.
  • One high school record per person is expected to be active, and this is the only high school record used for export.
  • Dates are transformed to strings: with time, for the modification date, and without time, for the transcript and attendance dates.  

6. Transform - Flatten export scores from analytics scores

The data in the warehouse scores table is used to populate the temporary export scores table, which is the only temporary export table not directly filled by extraction from the CRM database.

First the main scores – the ones that aren’t subrecords of another score – are copied into the temporary export scores table (a sketch of the de-duplication follows the lists below).

  • Because the same score results can be reported in the CRM database multiple times, the scores are grouped by person, test name, and test date, and only the most recently modified from an official source is used.
  • Within these results, the scores are sorted (with the "ord" field) in order of their test dates, falling back on modification date for tests on the same date (which, because of the previous grouping, will always be different tests).

Next, up to five subscores are added to each temporary export score record.

  • These are pivoted to become columns in the temporary export scores table instead of separate records as they’re stored in the CRM database and in the warehouse scores table.
  • The subscores are selected alphabetically ascending, and are not duplicated.
  • The subscore values are transformed to integers, or to NULL if they are zero.  
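
A hedged sketch of the main-score de-duplication and ordering (column names and the official-source flag are illustrative):

```sql
-- Group by person, test name, and test date; the most recently modified
-- row from an official source wins, and "ord" orders tests by date.
;WITH Ranked AS (
    SELECT s.PersonId, s.TestName, s.TestDate, s.ScoreValue, s.ModifiedOn,
           ROW_NUMBER() OVER (
               PARTITION BY s.PersonId, s.TestName, s.TestDate
               ORDER BY s.ModifiedOn DESC) AS dup_rank
    FROM dbo.Scores AS s
    WHERE s.IsOfficialSource = 1  -- illustrative official-source flag
)
INSERT INTO dbo.XScore (PersonId, TestName, TestDate, ScoreValue, ord)
SELECT PersonId, TestName, TestDate, ScoreValue,
       ROW_NUMBER() OVER (
           PARTITION BY PersonId
           ORDER BY TestDate, ModifiedOn) AS ord
FROM Ranked
WHERE dup_rank = 1;
```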

 

7. Transform - Parse subrecord data for exports 

Next the job copies the data from the temporary export tables into the export table, using the "ord" field on the temporary export tables to select the "top N records" (a sketch of this pattern follows the list below).

  • The top 2 relationship records are used. Relationship joins on the person id.
    • From step 2, relationships were ordered by modification date.
  • The top 8 race records are used. Race joins on the person id.
    • From step 2, races were ordered by race name (alphabetically descending).
  • The top 5 id records are used. Id joins on the person id.
    • From step 2, ids were ordered by id name’s display order ascending, modification date descending, and id guid.
  • The top 8 award records are used. Award joins on the opportunity id.
    • From step 2, awards were ordered by modification date descending and award guid.
    • Award modification dates (with times) are transformed into strings.
  • The top 8 activity records are used. Activity joins on the opportunity id.
    • From step 2, activities were ordered by actual end date descending and activity guid.
    • Activity values are truncated at nvarchar(100).
    • Activity dates (without times) are transformed into strings.
  • The top 8 interests are used. Interest joins on the opportunity id.
    • From step 2, interests were ordered by modification date then interest guid.
  • The top 4 scores are used. Score joins on the person id.
    • From step 6, scores were ordered by test date then modification date.
    • Score test dates (without times) are transformed into strings.
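
A hedged sketch of the "top N records" copy, using the race data as the example (export column names are illustrative):

```sql
-- The "ord" value computed during extraction selects the top rows per person.
UPDATE x
SET x.Race_01 = r1.RaceName,   -- illustrative export column names
    x.Race_02 = r2.RaceName    -- ... and so on through Race_08
FROM dbo.Xport AS x
LEFT JOIN dbo.XRace AS r1 ON r1.PersonId = x.PersonId AND r1.ord = 1
LEFT JOIN dbo.XRace AS r2 ON r2.PersonId = x.PersonId AND r2.ord = 2;
```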

 

8. Transform - Move slow-changing data from export tables to analytic tables 

Next the job uses the export table as a staging table from which to update the AnalyticsMaster table (the change-detection pattern is sketched after the list below).

  • 1. As in the Extract 1 job's financial aid step, this step defines the date "Yesterday" as a date with no time to be used as the effective date of new records. It treats the previous local day as yesterday only if the local time is prior to 10AM, after which the day shifts forward by one; this is to allow extractions run during the working day or in the evening to register the day’s data.
    • If any data is already in the AnalyticsMaster or StaffAssignment warehouse tables with an effective date of "Yesterday", that data is deleted so that this run's data can replace it. (Presumably, if an extract must be run twice in one day, the first run was in error.)
    • The IsCurrent flag on the AnalyticsMaster and StaffAssignment tables is reset if data is deleted, since it would be possible for changes to be reverted such that new data would not be added to those tables to replace the deleted data as the most current. 
  • 2. The AnalyticsMaster warehouse table is updated from the export table with new and changed "slowly-changing" data. This data is not expected to change often, which keeps this table from growing too rapidly.
    • Records in the AnalyticsMaster table are never updated to reflect changes in the CRM database; instead, new AnalyticsMaster records are created so that old and new states can be compared using the effective date.
    • The effective date of the new record is set to "Yesterday".
    • New records are created with the current flag set on.
    • Data is considered new or changed based on a comparison to the existing data in the AnalyticsMaster table:
      • The join is on the opportunity id, which never changes for the life of the opportunity.
      • Only current records are considered for comparison. This ensures that records which are changed and then changed back to their previous state have both changes recorded.
      • High Score is compared separately, because it is not stored as a data type which can be included in a calculated checksum column.
      • The calculated checksum column is a fast method of checking the other columns.
      • Because a checksum can give false equality on changed records in a small number of cases, the other columns are also checked individually.
    • The AnalyticsMaster table is initially updated with all slowly-changing data from the export table; additionally, the race data is considered slow-changing, but it is added to the AnalyticsMaster table from a temporary export table later.
  • 3. The current flag on AnalyticsMaster records is updated for records which are no longer current because changed records have been added.
  • 4. Records which have been deleted from the CRM database are not deleted from the AnalyticsMaster table; they are updated with a deletion date and their current flag is set false.
  • 5. Staff assignment is not a separate subrecord in the CRM database, but it is recorded in its own warehouse table in the warehouse database.
    • Previously, this data was recorded in the AnalyticsMaster table, but it changes often enough that there are notable storage size benefits for most clients in recording it separately.
    • The staff assignment table joins on the opportunity id and the AnalyticsMaster effective date.
  • 6. Race is a separate subrecord in the CRM database, but it is considered slow-changing data and stored in the AnalyticsMaster table in the warehouse database.
    • The same race temporary export table is used for the AnalyticsMaster update as was used in step 7 to update the export table. However, the AnalyticsMaster table additionally holds the race category, which the export table does not hold.
    • The top 8 race records are used. Race joins on the person id.
      • From step 2, races were ordered by race name (alphabetically descending).
    • The race temporary export table joins on the person id and the AnalyticsMaster effective date.
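
A hedged sketch of the change detection (column names are illustrative; the real table uses a calculated checksum column over all of the slowly-changing fields, and @Yesterday is the effective date from item 1):

```sql
-- New or changed rows become new AnalyticsMaster versions dated "Yesterday".
INSERT INTO dbo.AnalyticsMaster
    (OpportunityId, EffectiveDate, IsCurrent, Status, HighScore)
SELECT x.OpportunityId, @Yesterday, 1, x.Status, x.HighScore
FROM dbo.Xport AS x
LEFT JOIN dbo.AnalyticsMaster AS am
    ON am.OpportunityId = x.OpportunityId AND am.IsCurrent = 1
WHERE am.OpportunityId IS NULL                -- new opportunity
   OR am.RowChecksum <> CHECKSUM(x.Status)    -- fast inequality test
   OR am.Status <> x.Status                   -- per-column recheck:
                                              -- checksums can collide
   OR ISNULL(CAST(am.HighScore AS NVARCHAR(MAX)), N'')
      <> ISNULL(CAST(x.HighScore AS NVARCHAR(MAX)), N'');  -- compared separately

-- Older versions of changed opportunities lose the current flag.
UPDATE old
SET old.IsCurrent = 0
FROM dbo.AnalyticsMaster AS old
JOIN dbo.AnalyticsMaster AS newer
    ON newer.OpportunityId = old.OpportunityId
   AND newer.EffectiveDate > old.EffectiveDate
WHERE old.IsCurrent = 1;
```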

 

9. Cleanup - Truncate export sorting tables 

At this point the temporary export tables are truncated.

10. Export - Create CSV file for export 

Next the job uses the "OrgXport. dtsx" package to create a CSV export file in the export folder.

The package is executed by command line.   This produces a CSV file on the ftp server.

  • The package location for all clients is \\422rpt001\SSISPackages\
  • The ftp location for each client is \\422biz001\ftp\[organization name]\
Note: Some clients may have custom packages or nonstandard ftp locations.  
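
As a hedged illustration, the command-line call is likely a standard SSIS runner invocation along these lines (any configuration switches are omitted):

```
dtexec /File "\\422rpt001\SSISPackages\OrgXport.dtsx"
```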

11. Export - Create Zip file for export 

Next the job uses WinZip to zip the CSV export file.

The WinZip application is called by command line. The CSV file is zipped into the same export folder.
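
A hedged illustration, assuming the WinZip Command Line Support Add-On (wzzip.exe) and illustrative file names:

```
wzzip "\\422biz001\ftp\[organization name]\OrgXport.zip" "\\422biz001\ftp\[organization name]\OrgXport.csv"
```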

12. Export - Delete CSV

Next the CSV file is deleted from the export folder.

13. Analyze - Run SPSS task 

Next the job triggers the Extract 3 job (see the next section). The Extract 3 job is triggered asynchronously and this step does not wait to see if the Extract 3 job succeeds before proceeding.
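
A hedged sketch of the asynchronous trigger; since the Extract 3 job lives on the report server, the call would plausibly go through a linked server (the linked server and job names are illustrative):

```sql
-- sp_start_job returns as soon as the job is requested to start, so this
-- step does not wait for Extract 3 to finish.
EXEC [422RPT001].msdb.dbo.sp_start_job
    @job_name = N'Extract 3 – Analytics – Org';
```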

14. Succeed - Send email 

15. Fail - Send email 

16. Cancel - Send email 

If there have been no errors, the job sends a success email for the export.

The job sends cancellation email only if the verification in the first step fails. Failure in any other step causes a failure email. This is to distinguish in the email between potentially unreported errors in the Extract 1 job (which would cause the verification to fail) and errors which occur in the Extract 2 job itself.

Job "Extract 3 – Analytics – Org" 

This job exists on the report server and it is not scheduled to run; it runs only when called by the Extract 2 job. It calls the analytics application with client-specific command-line parameters.

This job is expected to take between 12 and 25 minutes per client.

The first step of this job, which calls the analytics application, is retried 4 times at 20-minute intervals if it does not succeed. The analytics application has been known to time out and return an error when called during a heavy-load period on the reporting server. In the old process, 4 retries at 20-minute intervals have been enough to run the application successfully after a timeout.
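
SQL Server Agent job steps support built-in retry settings; a hedged sketch of how this might be configured (the job name and step id are illustrative):

```sql
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'Extract 3 – Analytics – Org',
    @step_id        = 1,
    @retry_attempts = 4,   -- retry up to four times
    @retry_interval = 20;  -- minutes between attempts
```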

If the analytics application returns no errors on its first attempt or on any subsequent retry, the job sends a success email for the analytics.

This job sends a failure email if it fails on all retries.

