Data Miner components include the Start, Source Data, Tools, and Output building blocks.  These building blocks can also be combined into Assembly blocks, which can be added to the Data Miner layout.  Understanding how these parts fit together is crucial to creating a complete Data Miner.  The following page outlines the overall workflow and demonstrates how these Data Miner parts fit together.

1.       Choose the Source Data 

You have many options here.  Data Miners can read analysis results such as risk or rehabilitation results, Work Manager data, facility data, or even other GIS data sources.  Drag and drop the desired data source onto the Data Miner layout to get started.

Note: The Facility Data Input source data option contains additional options for the user.  For example, if you select the Gravity Main facility type/table with this option, you can join associated risk, rehabilitation, COF, LOF, CCTV, and/or other data fields all within the initial window.  You can also query the data from this window.

2.       Choose the Data Analysis Tools

Here again you have many options.  Some of the basic tools are counts and joins, but you can also leverage selection and spatial join tools.  Many of these tools operate very similarly to ArcMap tools.  The main benefit of having these tools within the Data Miner is that they can now operate in series rather than as standalone operations.

Note: Almost all the Tools building blocks require two data sources to operate properly.  For example, the Join tool requires two data sources to work.  Every tool produces a result node.  These result nodes function as additional data sources that can feed into more tools.
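
Conceptually, each tool transforms its inputs into a result node that can feed the next tool, much like chained function calls.  The sketch below illustrates this idea in plain Python; the data, field names, and tool behavior are simplified stand-ins for illustration, not InfoAsset Planner's actual API:

```python
# Hypothetical data standing in for two Data Miner source nodes:
# a pipe table and a table of risk scores keyed by pipe ID.
pipes = [
    {"ID": "P1", "MATERIAL": "VCP"},
    {"ID": "P2", "MATERIAL": "PVC"},
]
risk_scores = {"P1": 87.5, "P2": 12.0}

# Tool 1 (a Join): needs both sources; its output is a result node.
joined = [dict(p, RISK=risk_scores.get(p["ID"])) for p in pipes]

# Tool 2 (a Selection): consumes the previous result node as its
# data source, keeping only high-risk pipes.
high_risk = [p for p in joined if p["RISK"] > 50]

print(high_risk)
```

Each intermediate list here plays the role of a result node: it is produced by one step and consumed as the data source of the next.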

3.       Connect your Source Data to your Tools

Flow arrows, similar to those in the rehabilitation flowchart, are used to connect the source data nodes to the tools and assemblies.  As in the rehab flowchart, these arrows can be connected by clicking and dragging from the 'from' node to the 'to' node.  Without a proper connection, many tools will be grayed out and will not operate.

It is important to link source data to tools properly.  The result node takes its data type from the main (or target) input source data, while the join table or feature supplies the data to be added to that target.  In the example above, the gravity main is the target data and the inspection data is the join data.  Notice how this is shown visually by the bold line from ssGravityMain to Count.  The result node is labeled ssGravityMain because that is the main/target source data.  In this example, a count field has been added to the resulting ssGravityMain feature class, indicating how many inspections were linked to each gravity main.
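
The count-and-join behavior described above can be mimicked in plain Python.  All names and sample records below are hypothetical illustrations of the concept, not InfoAsset Planner's API:

```python
from collections import Counter

# Hypothetical sample data standing in for the ssGravityMain feature
# class (the target) and an inspection table (the join data).
gravity_mains = [
    {"FACILITYID": "GM-001", "DIAMETER": 8},
    {"FACILITYID": "GM-002", "DIAMETER": 12},
    {"FACILITYID": "GM-003", "DIAMETER": 10},
]
inspections = [
    {"FACILITYID": "GM-001", "SCORE": 3},
    {"FACILITYID": "GM-001", "SCORE": 4},
    {"FACILITYID": "GM-003", "SCORE": 2},
]

# Count inspections per gravity main, keyed on the shared ID field.
counts = Counter(rec["FACILITYID"] for rec in inspections)

# The result keeps the target (ssGravityMain) schema and gains a
# count field, just as the Count tool's result node does.
result = [dict(gm, INSPECTION_COUNT=counts.get(gm["FACILITYID"], 0))
          for gm in gravity_mains]

for row in result:
    print(row["FACILITYID"], row["INSPECTION_COUNT"])
```

Note that the output rows mirror the target's schema: a gravity main with no matching inspections still appears, with a count of zero.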

It is also important to connect the Start building block to the Data Miner appropriately.  Data Miners should start at a Start building block, pass through a number of Tool and Source Data building blocks, and end at one or more Output building blocks.

4.       Preview your Data

You can right-click and select Preview Data on each Data Source or Result building block.  This allows you to confirm your Data Miner is operating as desired before actually running the Data Miner analysis.

You can also add parameters or calculated fields to each data building block in your Data Miner by right-clicking.
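
A calculated field is essentially a derived column computed from each record's existing fields.  A minimal illustration in Python, with hypothetical field names:

```python
# Hypothetical result-node rows; a calculated field derives a new
# value from fields already present on each record.
rows = [{"LENGTH_FT": 250.0}, {"LENGTH_FT": 410.0}]
for row in rows:
    # Convert feet to miles and store it as a new field.
    row["LENGTH_MI"] = round(row["LENGTH_FT"] / 5280.0, 4)

print(rows)
```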

5.       Select a System Output

Data Miners can generate final results as tables, facility selections, or ArcMap selections.  Simply connect your final result node to the red System Output and run the Data Miner.

6.       Review the results and apply within InfoAsset Planner

After running the Data Miner, confirm that the resulting table or selection displays as expected.  You can now apply this resulting, customized dataset to different InfoAsset Planner analyses such as risk, rehabilitation planning, failure models, etc.

You may also wish to add more Data Miner streams with additional Start building blocks to run multiple analyses from within the same Data Miner tool.  This is a convenient way to update multiple custom tables and/or selections all by running a single Data Miner.