Frequently Asked Questions

What is the Flek Machine?

Flek is a Probability Machine and AI development framework. It is a foundational software development library that comprises three main components: FlekML, Flek-Server, and the Toolkit.

Essentially, Flek allows citizen data scientists to do machine learning, probabilistic programming, analytics, and prediction – all in one integrated platform.

What are Flek's target domains and market segments?

Thanks to its probabilistic foundation and non-experimental pipeline, Flek can meet the demands of both: 

  1.  Insight Mining & Exploratory Analytics 

  2.  Decision Making & Predictive Analytics

At a high level, Flek is applicable to a wide range of domains and fields, including:

  •  Supply Chain

  •  Marketing

  •  IoT

  •  Actuarial Services

  •  Behavioral Analysis

  •  Event Detection

  •  Bio-Statistics & Pharmaceuticals

  •  Financial Services

Which applications is Flek suitable for?

The Flek Machine is geared towards new applications of AI that require advanced probabilistic modeling and ML-driven decision making. These applications tackle complex uncertainty problems, such as:

  • Simulation and WHAT-IF analysis

  • Recommendation and decision making

  • Segmentation and behavioral & profile analysis

  • Online campaigning and survey analysis

  • Demand forecasting and supply chain decisions

  • Anomaly and fault detection

  • Actuarial services and risk calculation

  • Drug testing and disease treatment

  • Real-time emotion AI

Who uses Flek?

At an organizational level, Flek is intended for:

 

  • Small to medium enterprises (SMEs) that need to apply ML but cannot afford a full-time data scientist.

  • Larger enterprises that want to run advanced AI analytics and build the next generation of ML-driven systems.

 

From a user's perspective, Flek is geared towards data science professionals who need an advanced programming toolset that integrates ML with exploration, analytics, and prediction.

How does Flek work?

Flek presents an unconventional approach to machine learning and probabilistic programming. First, the core engine, FlekML, automatically builds and stores a probabilistic model from semi-structured data. Once the model is built, applications interact with it through the Python API and Toolkit to gain insight. For example, they can query for interesting associations, mine strong rules, fetch a set of probabilities for complex computations, or perform various classifications. Thanks to the scalable architecture, users can also test their models locally and then deploy them remotely on the cloud, much as database models are deployed and used today.
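
As a rough illustration, this workflow might look like the Python sketch below. The module and method names (flekml.build_model, model.query, model.mine_rules, model.classify, model.deploy) are hypothetical placeholders chosen for readability, not the documented FlekML API.

    # Hypothetical sketch of the Flek workflow; flekml and its methods are
    # assumed names, not the actual API.
    import flekml

    # 1. Build and store a probabilistic model from semi-structured event data.
    model = flekml.build_model(source="events.json")

    # 2. Explore the model: query associations and mine strong rules.
    associations = model.query(variables=["weather", "late_delivery"])
    rules = model.mine_rules(min_confidence=0.8)

    # 3. Fetch probabilities for further computation, or classify new events.
    p = model.probability({"weather": "rain", "late_delivery": "yes"})
    label = model.classify({"weather": "rain"})

    # 4. Deploy the locally tested model to a remote Flek-Server instance.
    model.deploy(host="flek-server.example.com")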

What is the core problem solved by the Flek Machine?

Building interactive models that can be used for both exploratory and predictive analytics is a crucial data science and ML activity. To make these two tasks easier:

      Flek provides the right set of tools for data scientists and AI developers alike to automatically generate and store probabilistic models from complex events; then it makes it possible to explore these models via querying and mining, or to use them in making predictions.

How does Flek make machine learning easier?

Flek helps data scientists and programmers bridge the gap between data and insight. It does so by:

  1. Streamlining the machine learning cycle by automating the tasks of building and storing complex probabilistic models.

  2. Providing a unified framework for sharing and integrating models within the enterprise or on the cloud.

  3. Enabling users to explore models via querying and mining or to use them in making predictions.

What is a Probability Machine?

A Probability Machine is a special kind of machine learning engine that makes it easy to work with probability Nuggets. It learns these Nuggets from semi-structured data and then allows users to store, fetch, query, and mine them to do further computations or make predictions.

What makes Flek unique?

As a Probability Machine, Flek offers capabilities that go beyond the ML techniques available today. For example, data scientists can:

  • Model complex events that do not fit any known distributions

  • Easily make computations on missing or incomplete data

  • Generate probabilities for both an event and its negation

  • Compute the full joint and conditional probability distributions of multi-variate data (see the example after this list)

  • Use a combination of supervised and unsupervised classification to perform predictions

  • Investigate the causal relationships between variables in their data

  • Discover and search for probabilistic patterns in the events being modeled
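
To make the joint and conditional distribution point concrete, here is a small, Flek-independent Python example. It uses plain pandas on invented toy data to compute both distributions from a table of multi-variate events; none of it is Flek code.

    # Illustrative only: plain pandas on invented toy data, not Flek code.
    import pandas as pd

    # Multi-variate event data: weather conditions and whether a delivery was late.
    events = pd.DataFrame({
        "weather": ["rain", "rain", "sun", "sun", "sun", "rain"],
        "late":    ["yes",  "no",   "no",  "no",  "yes", "yes"],
    })

    # Full joint distribution P(weather, late): counts normalized over all events.
    joint = pd.crosstab(events["weather"], events["late"], normalize="all")

    # Conditional distribution P(late | weather): each weather row sums to 1.
    conditional = pd.crosstab(events["weather"], events["late"], normalize="index")

    print(joint)
    print(conditional)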

Why did we build the Flek Machine?

For 300+ years, no one had attempted to build a Probability Machine, so we set out to build the first one. Specifically, Flek was built:

  • To make it easy to run complex statistics and compute the full joint & conditional probability distributions.

  • To allow data scientists to interact with a running engine (similar to an RDBMS) instead of working with a set of algorithms.

  • To simplify developing AI solutions using APP techniques.

What are the similarities between Flek and RDBMS?

Both systems have a core engine that serves user requests. In the case of an RDBMS, users can send SQL queries to retrieve a single record, retrieve multiple records that satisfy a condition, or join attributes from multiple tables. In Flek, programmers can fetch a single Nugget or search the Probability store using a given filter; they can also discover specific Rules while mining, or generate a sub-graph that captures a given pattern.
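
To illustrate the analogy, the sketch below pairs each RDBMS-style operation with a plausible Flek counterpart. The flek client object and its method names (connect, fetch_nugget, search, mine_rules, subgraph) are assumptions made for illustration, not the documented API.

    # Hypothetical comparison sketch; the flek client and its methods are assumed names.
    import flek

    store = flek.connect("flek-server.example.com")

    # RDBMS: SELECT * FROM orders WHERE id = 42;
    # Flek:  fetch a single Nugget by its identifier.
    nugget = store.fetch_nugget("order-42")

    # RDBMS: SELECT * FROM orders WHERE region = 'EU' AND late = 'yes';
    # Flek:  search the Probability store with a filter.
    matches = store.search(filter={"region": "EU", "late": "yes"})

    # RDBMS: JOIN attributes across multiple tables.
    # Flek:  mine Rules, or generate a sub-graph that captures a given pattern.
    rules = store.mine_rules(min_support=0.1)
    graph = store.subgraph(pattern=["region", "late", "carrier"])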
