System Backup Wizard
The Starfish ETL System Backup Wizard walks you through a series of steps to create a complete backup framework for any system. The wizard does not actually back up any data during this process; it only creates the tables and jobs to do so. When the wizard has completed, you must run the jobs to back up the actual data.
You will first need to verify your Origin and Destination connections for compatibility. The connections for the currently selected Job will be used. The wizard works with any Origin connection type; the Destination must be of a type that supports SQL statements (typically any current RDBMS).
When you complete the wizard, it will automatically:
- Create tables inside your backup database to match the schema of your Origin tables.
- Create data maps for each table to back them up.
It may be necessary for you to manually alter the generated table definitions and maps to fit your needs.
Follow the steps below:
1. Create a new job. Set up the Origin & Destination. The Origin query/table you select does not matter to the wizard, but one must be populated to establish a valid connection.
2. Start the wizard by clicking the “System Backup Wizard” button on the General tab.
3. Welcome to the System Backup Wizard
Read the introduction & click Next.
4. Connector Compatibility Validation
Next, your connections will be validated. If your connections test OK and your Destination is of a compatible type, you will be allowed to proceed.
5. Select Origin Tables to include in Backup Scripts
You will be presented with a checklist box containing all of the tables in your Origin. Select any table(s) that you want to include in your backup.
6. Table Creation Settings
Fill in the information for creating the new destination backup tables. This is where the data will be backed up to.
|Setting|Description|
|---|---|
|Generate matching schema|Uncheck this option to skip this step during the wizard (if you have already created the necessary tables).|
|Table Prefix|Prefixes each table name with the value supplied (e.g., prefix "bak_" for table "account" becomes "bak_account").|
|Default Type|If a data type is not given by the connector or cannot be resolved through the data_type.map translation file, this value will be used.|
|Default Length|If a type is specified but not a length, this value will be used.|
|Maximum Length|If a value is supplied and the length specified by the Origin is greater than this number, the length will be changed to this value.|
|Change if Maximum Length is exceeded|If the length specified by the Origin is greater than the Maximum Length, the field's data type is changed to this type.|
|Drop Existing Tables|Before a table is created by the wizard, a DROP TABLE command will be issued.|
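The way these settings combine can be sketched as follows. This is an illustrative sketch only, not Starfish's actual implementation; the function names and default values are hypothetical, and real generated DDL will vary by destination RDBMS.

```python
# Sketch of how the Table Creation Settings shape one backup table.
# All names and defaults here are illustrative assumptions.

def backup_column(name, data_type, length,
                  default_type="varchar", default_length=255,
                  max_length=2000, overflow_type="text"):
    """Apply the Default Type / Default Length / Maximum Length rules
    to a single Origin column definition."""
    if not data_type:                          # Default Type
        data_type = default_type
    if data_type == "varchar" and not length:  # Default Length
        length = default_length
    if length and length > max_length:         # Maximum Length exceeded:
        return f"{name} {overflow_type}"       # switch to the overflow type
    if length:
        return f"{name} {data_type}({length})"
    return f"{name} {data_type}"

def backup_table_ddl(table, columns, prefix="bak_", drop_existing=True):
    """Emit the statements the wizard would generate for one table."""
    target = prefix + table                    # Table Prefix
    stmts = []
    if drop_existing:                          # Drop Existing Tables
        stmts.append(f"DROP TABLE IF EXISTS {target}")
    cols = ", ".join(backup_column(*c) for c in columns)
    stmts.append(f"CREATE TABLE {target} ({cols})")
    return stmts
```

For example, `backup_table_ddl("account", [("name", "varchar", 100), ("notes", "varchar", 5000)])` would produce a `bak_account` table in which the oversized `notes` column falls back to the overflow type.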
7. Map Generation Settings
Fill in the settings for how you would like your jobs created.
|Setting|Description|
|---|---|
|Generate Jobs|Uncheck this option to skip this step during the wizard.|
|Job Name|This value will be used when creating each job's name. The "%tablename" variable will be replaced with the actual table name when the wizard runs.|
|Stage Type|One stage will be created for each job, with each field mapped directly across. This specifies the Stage Type: Insert, Update, Delete, or Update/Insert.|
|Guess Job's Primary Key for Match|Valid only for the Update, Delete, and Update/Insert Stage Types. Attempts to place the Match checkbox on the table's Primary Key, using the per-connector logic below.|
|Chain all Jobs together|All jobs will be chained together to run one after another.|

The Primary Key is guessed using the following logic for each connector:
- Microsoft CRM: The Primary Key field is retrieved through the Metadata API for each table.
- NetSuite: The "internalID" field will be used for each table.
- Sage CRM: The Table ID will be matched using the format tableprefix + "_" + tablename + "ID" (e.g., comp_CompanyID).
- Salesforce: The "Id" field will be used for each table.
- SData: The "Key" field will be used for each table.
- SQL Server: The table is first evaluated for a Primary Key. If one does not exist, Unique Indexes are used. If there are no Unique Indexes, regular indexed fields are used.
- SugarCRM: The "id" field will be used for each table.
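The per-connector guessing rules above can be sketched as a simple dispatch. This is an illustrative assumption of the behavior, not Starfish's internal code; the function name and signature are hypothetical.

```python
# Sketch of the per-connector primary-key guess used for the Match
# checkbox. Illustrative only; names here are hypothetical.

def guess_primary_key(connector, table, table_prefix=None):
    """Return the guessed key field name, or None where a live metadata
    lookup would be required (Microsoft CRM, SQL Server)."""
    # Connectors with a fixed, well-known ID field:
    fixed = {"NetSuite": "internalID", "Salesforce": "Id",
             "SData": "Key", "SugarCRM": "id"}
    if connector in fixed:
        return fixed[connector]
    if connector == "Sage CRM":
        # tableprefix + "_" + tablename + "ID", e.g. comp_CompanyID
        return f"{table_prefix}_{table}ID"
    # Microsoft CRM (Metadata API) and SQL Server (PK, then unique
    # indexes, then regular indexes) need a metadata query instead.
    return None
```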
8. Last Run Settings
This option will create the following:
- An additional table in your Destination database to track individual job last run dates
- A default filter on your Origin query to only select data from the last run date
- Logic to look up this last run date, and save/overwrite it when the job finishes
|Setting|Description|
|---|---|
|Only pull data changed from last run|Check this option to enable this step during the wizard.|
|Last Run Table|Name of the table inside your Destination where the Last Run Date for each job will be stored.|
|Default Filter|For SQL-based connectors, this will be appended as a WHERE clause to the Origin query; otherwise it will be populated in the Origin criteria box.|
|Default Format|Date format to use in the filter/criteria.|
|Initial Date|Date to initially populate in the Last Run table for each job. The first time the job is run, it will pull data starting from this date based on the filter clause.|
Then press Start to begin.
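The last-run bookkeeping described above can be sketched as follows, here using SQLite for concreteness. This is an illustrative assumption of the generated logic, not Starfish's actual implementation; the table name, column names, and date format are hypothetical placeholders for the wizard's settings.

```python
# Sketch of the generated last-run logic: one row per job holding the
# last run date, a date filter on the Origin query, and an overwrite of
# the stored date when the job finishes. Illustrative only.
import sqlite3
from datetime import datetime, timezone

INITIAL_DATE = "2000-01-01 00:00:00"   # stands in for the "Initial Date" setting

def ensure_last_run_table(conn, table="StarfishLastRun"):
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                 "(job_name TEXT PRIMARY KEY, last_run TEXT)")

def get_last_run(conn, job, table="StarfishLastRun"):
    """Look up the job's last run date, seeding it on the first run."""
    row = conn.execute(f"SELECT last_run FROM {table} WHERE job_name = ?",
                       (job,)).fetchone()
    if row is None:                     # first run: seed with Initial Date
        conn.execute(f"INSERT INTO {table} VALUES (?, ?)",
                     (job, INITIAL_DATE))
        return INITIAL_DATE
    return row[0]

def origin_query_with_filter(base_query, date_field, last_run):
    """Append the Default Filter as a WHERE clause (SQL-based connectors)."""
    return f"{base_query} WHERE {date_field} >= '{last_run}'"

def save_last_run(conn, job, table="StarfishLastRun"):
    """Overwrite the stored date when the job finishes."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    conn.execute(f"UPDATE {table} SET last_run = ? WHERE job_name = ?",
                 (now, job))
```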
9. Backup Wizard is now Generating
Please wait while each table and job is created.
10. System Backup Wizard Complete
You may click the link to view the generated log file, or open it directly from the Starfish Admin path; the file is named systembackup.log.