
Aureum Sets the Table for Advanced Analytics

March 15, 2017 Analytics, Anomaly Detection

Q:

What’s the connection between Aureum, a data access platform, and all these applications and data processes that are supposed to solve business problems? For example, how should we understand model orchestration and advanced analytics in this context?

A:

The short answer is that everyone makes the same basic mistake: They assume that the precondition for analytics, access to the actual data, already exists.

At Peaxy, we solve the analytics and data access challenges concurrently. Our experience shows that groundbreaking analytics require a solutions approach: you really need analytics AND data access if you’re building something to last. We use Aureum to provide data access. Another way to say it is that we use Aureum to provide a scalable architecture for deploying advanced analytics (such as predictive maintenance). It’s a perfect platform for handling any industrial use case that requires significant scale, including digital threading, digital twins, simulation management and condition-based maintenance.

These use cases require us to work with large-scale geometry, simulation or telemetry data sets. We are solving product-performance baseline issues because so much product data is captured at “the edge.” We’ve devised a way to create reduced-order data sets that are small enough to send to a more central data lake, which does some higher-order processing but was not designed to give access to massive (unstructured) test data.
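To make the idea of a reduced-order data set concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the window size, the summary statistics and the function names are ours, not Peaxy’s actual pipeline): raw edge telemetry is collapsed into per-window summary records that are small enough to ship to a central data lake, while the full-resolution readings stay on the data access platform.

```python
import statistics

def reduce_order(samples, window=1000):
    """Collapse raw edge telemetry into one summary record per window.

    `samples` is a list of raw sensor readings (hypothetical). Only the
    summary records would travel to the central data lake; the full
    raw data remains accessible on the data access platform.
    """
    summaries = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        summaries.append({
            "mean": statistics.fmean(chunk),    # average level in the window
            "stdev": statistics.pstdev(chunk),  # spread within the window
            "peak": max(chunk, key=abs),        # largest-magnitude reading
        })
    return summaries

# Example: 10,000 raw readings shrink to 10 summary records.
raw = [((i % 7) - 3) * 0.5 for i in range(10_000)]
reduced = reduce_order(raw)
print(len(raw), "->", len(reduced))  # 10000 -> 10
```

The point of the sketch is the ratio: a thousand-fold reduction is what makes routine shipment to the data lake practical, and it is exactly why the full baseline data has to live somewhere else.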

Mid-sized and larger companies require data access across geographical locations and the cloud. So when a company has that “Hmm, I wonder if this is an ‘Aha!’ moment” moment, it will almost always need access to the full data sets. You just don’t find that in the data lake. Aureum helps ensure the full baseline data is available and ready for use, and it complements the reduced-order data set that typically lives in the data lake.

Here’s an example: Aureum gives you access to terabytes of simulation data on a particular automotive gearbox that is prone to failure. Your data lake collects reduced-order data such as warranty claims with a “gearbox problem” code that indicates vibration issues.

After that initial step, you can run anomaly detection analytics on those simulations that reside on Aureum, and generate “red flags” that lead you to conclusions about the metal fatigue and rotation problems behind the failure. You can also access the gearbox vibration data from qualification tests to see how field data and test data correlate.
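As an illustration of the kind of “red flag” generation described here, a minimal anomaly check might score each vibration reading against the baseline and flag the outliers. The z-score method, threshold and sample data below are our illustrative assumptions; a production pipeline running over terabytes of simulations on Aureum would use far richer models.

```python
import statistics

def flag_anomalies(readings, threshold=2.5):
    """Return the indices of readings whose z-score exceeds the threshold.

    A deliberately simple stand-in for anomaly detection: readings far
    from the mean, measured in standard deviations, get flagged.
    """
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev and abs(r - mean) / stdev > threshold]

# Hypothetical gearbox vibration amplitudes with one injected spike.
vibration = [1.0, 1.1, 0.9, 1.05, 0.95, 9.0, 1.02, 0.98, 1.1, 0.9]
print(flag_anomalies(vibration))  # -> [5]
```

Each flagged index is a pointer back into the full-resolution simulation or test data, which is where having the complete data set on the access platform, rather than only the reduced-order copy in the lake, pays off.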

It’s a virtuous data circle, and it’s only made possible if you have the data access platform doing the work along with the analytics.