Use Cases and Applications
Flek offers unique capabilities that can be applied across many domains. It is best suited to applications that must work under uncertainty and need complex, machine-learning-driven probabilistic models.
Supermarket chains and online retailers are recognizing the power of fresh-food categories to drive sales, basket size, and customer loyalty. However, forecasting demand while ensuring availability and quality is no easy task: demand is often sporadic, the products are highly perishable, assortments are large and diverse, and quality can vary from week to week depending on supply.
To tackle this complex planning and decision problem, retailers can develop Flek-based probabilistic models using machine-learning techniques. These models learn from data to capture dozens of parameters and then make better predictions at a much more granular level than is available today.
Unlike standard supply-chain models, which are often rule based and take many years to develop and tune, these new ML-driven models can learn not from historical sales data alone but also from other influencing parameters or factors, such as:
Item - like handling requirements, shelf life and available inventory
Internal - like advertising campaigns, opening times and store location
External - like supplier delivery times, local weather and public holidays.
Taking these considerations into account, demand-forecasting solutions can now generate precise order proposals for the product range every day. Each order proposal optimizes for product availability and respects minimum and maximum order quantities, while minimizing the risk of waste and markdowns.
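To make the idea of an order proposal concrete, here is a minimal sketch in Python. The function, its parameters and all the numbers are illustrative assumptions, not Flek's API; in a real deployment the forecast mean and uncertainty would come from the trained probabilistic model described above.

```python
# Hypothetical sketch: turning a demand forecast into a daily order
# proposal. Forecast numbers and parameters are invented for
# illustration; a real system would obtain them from a trained
# probabilistic model.

def order_proposal(forecast_mean, forecast_std, on_hand,
                   min_order, max_order, service_z=1.28):
    """Order enough to cover expected demand plus a safety buffer
    (~90% service level for z = 1.28), clamped to order-size limits."""
    target = forecast_mean + service_z * forecast_std  # safety stock
    need = max(0.0, target - on_hand)                  # net requirement
    if need == 0:
        return 0
    return int(min(max(need, min_order), max_order))

# Perishable item: expected demand 120 units (std 25), 30 on hand,
# supplier requires ordering between 50 and 200 units at a time.
print(order_proposal(120, 25, on_hand=30, min_order=50, max_order=200))  # 122
```

The clamping step is what makes the proposal actionable: it balances availability (the safety buffer) against waste (never ordering above the cap or when stock already covers the target).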
Correlating drug usage and side-effect patterns with patient profiles is a difficult problem in the pharmaceutical industry, since it requires complex modeling and combinatorics.
With Flek, for example, data is first uploaded into the FlekML engine to build the probabilistic model. The patient data can include information such as gender and age, medications being taken, patient condition and allergies, as well as the drug side effects observed during the trial. In the second stage, a scientist can start her investigation by interacting with the built model. Here, she may look for associations between patient history and medication taken, for example, or between the observed side effects of the drug being tested and the patients' age group and gender.
All this is made easy because the modeling process is automated by an ML-driven engine. Moreover, performing what-if analysis, or asking questions and getting answers from the probabilistic model, is just a simple query or mining operation.
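The kind of query described above can be sketched with plain conditional-probability estimation over tabular records. The records, field names and counting approach here are invented for illustration; in Flek the same question would be answered by querying the learned probabilistic model rather than raw data.

```python
# Illustrative sketch: estimating P(side effect | patient attributes)
# by simple counting over synthetic patient records. All data and
# field names are made up.

records = [
    {"gender": "F", "age_group": "60+",   "side_effect": True},
    {"gender": "F", "age_group": "60+",   "side_effect": True},
    {"gender": "F", "age_group": "30-59", "side_effect": False},
    {"gender": "M", "age_group": "60+",   "side_effect": False},
    {"gender": "M", "age_group": "30-59", "side_effect": False},
    {"gender": "F", "age_group": "60+",   "side_effect": False},
]

def p_side_effect(**conditions):
    """P(side_effect | conditions), estimated from matching records.
    Returns None when no record matches the conditions."""
    matching = [r for r in records
                if all(r[k] == v for k, v in conditions.items())]
    if not matching:
        return None
    return sum(r["side_effect"] for r in matching) / len(matching)

print(p_side_effect(gender="F", age_group="60+"))  # 2 of 3 -> 0.666...
```

A model-based engine improves on this naive counting by generalizing across sparse combinations of attributes, where raw counts would be zero or unreliable.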
Today, service providers and advertisers use a combination of tools to collect and track their customers' interests and purchases. Usually, they employ a number of complex ML algorithms to monetize that information or craft the proper marketing response. The major challenge for these organizations is often integrating the results and insights gained from those disparate algorithms into one view of their clients.
Because Flek offers an integrated tool with end-to-end pipeline to do probabilistic modeling, it can be used more effectively in tackling this kind of problem. Specifically, data scientists can use Flek to:
Run profile analysis to segment and explore customers' interest in aggregate
Correlate the attributes that influence interest or drive purchase/click-through behavior
Make cold recommendations based on profiles or past purchases
Make hot recommendations based on product association scores during a live session
Generate a mailing list that specifically targets those who are most likely to respond to a promotion or new offer.
Analyze surveys or compute the overall success rate of a mail campaign (over-involvement ratio)
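The product association scores behind hot recommendations can be sketched with a classic co-occurrence lift measure. The basket data and product names below are invented, and lift is only one of several association scores a real engine might compute.

```python
# Hedged sketch of product association scoring: lift measures how much
# more often two items co-occur in baskets than chance would predict.
# Basket data is synthetic.

from itertools import combinations
from collections import Counter

baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"butter", "jam"},
    {"milk", "coffee"},
]

n = len(baskets)
item_count = Counter(i for b in baskets for i in b)
pair_count = Counter(frozenset(p) for b in baskets
                     for p in combinations(sorted(b), 2))

def lift(a, b):
    """Lift > 1 means a and b appear together more than independence
    would predict, making b a candidate recommendation for a."""
    p_ab = pair_count[frozenset((a, b))] / n
    return p_ab / ((item_count[a] / n) * (item_count[b] / n))

print(round(lift("bread", "butter"), 3))  # 1.111 -> slightly associated
```

During a live session, a recommender would rank the items with the highest lift against the products already in the visitor's basket.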
Detection, be it of regular or rare events, is an important class of problems that the Flek Machine is designed to handle. For example, Flek can be utilized in the following use cases:
Monitoring IoT sensors
Creating a model for preventive maintenance
Suspicious behavior discovery
Anomalous pattern detection
Predicting equipment failure
Given its ability to process and analyze data, Flek can be used to compute the probability of occurrence of certain events on the fly and then store them for further querying or for predictive diagnostics. In this case, Flek runs as the back-end engine of an ML-driven system that takes a continuous data stream and uses the pre-built probabilistic model to make decisions, screen faults or detect anomalies.
For example, an equipment service provider can augment its monitoring system with Flek to reduce cost by minimizing false alerts (false positives) or to improve service by increasing fault detection (reducing false negatives). In both cases, incoming alerts from remote equipment are processed probabilistically and a decision is made according to previous experience (events with similar properties).
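The screening decision described above can be sketched as comparing an incoming alert against the empirical fault rate of similar past events. The alert types, counts and threshold here are invented assumptions; a production system would use the full probabilistic model rather than a lookup table.

```python
# Illustrative sketch of probabilistic alert screening: an alert is
# escalated only when past events with similar properties turned out
# to be real faults often enough. All data here is synthetic.

history = {
    # alert_type: (real_faults, total_alerts) seen previously
    "overheat":  (45, 50),
    "vibration": (3, 60),
}

def should_escalate(alert_type, threshold=0.5):
    """Escalate when the empirical fault probability exceeds the
    threshold; unknown alert types are escalated to be safe."""
    if alert_type not in history:
        return True
    faults, total = history[alert_type]
    return faults / total > threshold

print(should_escalate("overheat"))   # True  (0.9 fault rate)
print(should_escalate("vibration"))  # False (0.05 fault rate)
```

Raising the threshold trades false positives for false negatives, which is exactly the cost-versus-service trade-off the paragraph above describes.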
Providing cost-effective insurance plans requires actuaries to apply complex formulas that calculate risk and premiums for various client groups or individuals. When modeling and forecasting, actuaries often apply probabilistic techniques that relate the input factors to the cost of the coverage being offered.
In particular, insurance companies and actuaries alike can use the Flek Machine to automate the computation of these probabilities and to understand the uncertainties inherent in their historical data. For example, using multivariate analysis they can estimate whether a client is high risk based on personal profile, income level and lifestyle attributes. They can also use Flek to explore various scenarios or run simulation analyses that score different groups of individuals given their traits, the kind of coverage required and the related insurance costs.
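A minimal sketch of profile-based risk scoring in the spirit of this use case follows. The attribute names, weights and premium loading are invented placeholders; in practice the weights would be learned from historical claims data rather than set by hand.

```python
# Hedged sketch: a logistic risk model mapping client attributes to a
# claim probability, and a premium that grows with that probability.
# Weights and constants are illustrative assumptions.

import math

WEIGHTS = {"age_over_60": 0.8, "smoker": 1.1, "high_mileage": 0.5}
BIAS = -2.0

def risk_probability(profile):
    """Logistic model: P(claim) = sigmoid(bias + sum of active weights)."""
    score = BIAS + sum(w for k, w in WEIGHTS.items() if profile.get(k))
    return 1.0 / (1.0 + math.exp(-score))

def premium(profile, base=500.0, loading=2000.0):
    """Premium grows linearly with the estimated claim probability."""
    return round(base + loading * risk_probability(profile), 2)

print(premium({}))                                    # baseline client
print(premium({"age_over_60": True, "smoker": True})) # higher-risk client
```

What-if analysis then amounts to toggling profile attributes and comparing the resulting premiums, which is the scenario exploration described above.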