Frequently Asked Questions

What is the Flek Machine?

Flek is a Probability Machine and AI development framework. It is a foundational software development library with three main components: FlekML, Flek-Server, and the Toolkit.

Essentially, Flek allows citizen data scientists to do machine learning, probabilistic programming, analytics, and prediction – all in one integrated platform.

What are Flek's target domains or market segments?

Thanks to its probabilistic foundation and non-experimental pipeline, Flek can meet the demands of a wide range of domains and fields, including:

  •  Supply Chain

  •  Marketing

  •  IoT

  •  Actuarial Services

  •  Behavioral Analysis

  •  Event Detection

  •  Bio-Statistics & Pharmaceuticals

  •  Financial Services

Which applications is Flek suitable for?

The Flek Machine with its unique features is geared towards new use cases of AI that require next-generation probabilistic modeling and advanced analytics under uncertainty. For example:

  • Simulation and WHAT-IF analysis

  • Recommendation and decision making

  • Segmentation and behavioral & profile analysis

  • Online campaigning and survey analysis

  • Demand forecasting and supply chain decisions

  • Anomaly and Fault detection

  • Actuarial services and risk calculation

  • Drug testing and disease treatment

  • Real time emotion AI

Who uses Flek?

At an organizational level, Flek is intended for:


  • Small to medium enterprises (SMEs) that need to apply ML but cannot afford full-time data scientists.

  • Larger enterprises that want to run advanced AI analytics and build the next generation of ML-driven systems.


From a user's perspective, Flek is intended for data scientists as well as other IT professionals, such as statisticians, analysts, and programmers, who need an advanced toolset that integrates ML, exploration, prediction, and probabilistic programming in one unified ML platform.

What is the core function of Flek Machine?

Flek provides a machine learning toolset that lets data scientists and AI developers automatically generate and store probabilistic models from multivariate event data. Applications can then interact programmatically with these models to run exploratory and predictive workloads, or to perform complex computations based on probabilistic reasoning.
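The first half of this workflow, turning multivariate event data into a stored probabilistic model, can be sketched in plain Python. Flek's actual engine and API are not shown here; the field names, event values, and the normalized-counting estimator below are illustrative assumptions only.

```python
from collections import Counter

# Toy multivariate event data: each tuple is one observed event
# over two hypothetical fields, (weather, demand).
events = [
    ("sunny", "high"), ("sunny", "high"), ("sunny", "low"),
    ("rainy", "low"), ("rainy", "low"), ("rainy", "high"),
]

def learn_joint(events):
    """Estimate the full joint distribution by normalized counting."""
    counts = Counter(events)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

model = learn_joint(events)
print(model[("sunny", "high")])  # P(sunny, high) = 2/6
```

A real engine would of course use a far more compact representation than an explicit table, but the table makes the "model = stored probabilities" idea concrete.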

How does Flek work?

Flek presents an unconventional approach to machine learning and probabilistic programming. First, the core engine, FlekML, automatically builds the probabilistic model from semi-structured data. Then, applications interact with the stored model using the Python Toolkit (API) to gain insight. For example, users can query for interesting associations, mine strong rules, fetch a set of probabilities to run complex computations, or make various classifications. Thanks to the scalable architecture, users can also test their models locally and then deploy them remotely in the cloud - via Flek Server - much as database models are deployed and used today.
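The second half, an application querying a stored model, might look like the following. This is not the real Toolkit API; the variable names and the two-variable joint table are assumptions made for illustration.

```python
# Toy joint distribution over (weather, demand), assumed already
# learned and stored by the engine.
VARS = ("weather", "demand")
joint = {
    ("sunny", "high"): 2/6, ("sunny", "low"): 1/6,
    ("rainy", "high"): 1/6, ("rainy", "low"): 2/6,
}

def prob(model, **fixed):
    """P(all fixed assignments hold), summing matching joint entries."""
    idx = {v: i for i, v in enumerate(VARS)}
    return sum(p for outcome, p in model.items()
               if all(outcome[idx[v]] == val for v, val in fixed.items()))

def conditional(model, target, given):
    """P(target | given); both arguments are {variable: value} dicts."""
    return prob(model, **target, **given) / prob(model, **given)

print(conditional(joint, {"demand": "high"}, {"weather": "sunny"}))  # 2/3
```

The point of the sketch is the interaction pattern: the model is a persistent artifact, and queries against it are ordinary function calls rather than retraining runs.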

What is a Probability Machine?

A Probability Machine is a special kind of machine learning engine that makes it easy to work with probability Nuggets. It learns these Nuggets from semi-structured data and then allows users to store, fetch, query, and mine them to perform further computations or make predictions.

What makes Flek unique?

With its core probabilistic engine, Flek offers capabilities that go beyond the ML techniques available today. For example, data scientists can:

  • Compute the full joint and conditional probability distributions of multivariate data

  • Auto-learn multivariate data and generate models with minimal training and tuning

  • Model complex events that do not fit any known distributions

  • Easily make computations on missing or incomplete data

  • Generate probabilities for both the affirmation and the negation of an event

  • Use a combination of supervised and unsupervised classification to perform predictions

  • Investigate the causal relationship between various variables in their data

  • Discover and search for probabilistic patterns in the events being modeled

  • Easily interpret and trace results and peek into the black box of how insight is reached
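Two items from the list above, computing on missing or incomplete data and generating negation probabilities, reduce to simple sums over a joint distribution. A minimal sketch, assuming a toy two-variable joint table (not Flek's actual representation) and a hypothetical query convention:

```python
# Toy joint distribution over (weather, demand).
joint = {
    ("sunny", "high"): 2/6, ("sunny", "low"): 1/6,
    ("rainy", "high"): 1/6, ("rainy", "low"): 2/6,
}

def prob(model, query):
    """Sum joint entries consistent with the query. None marks a missing
    value (marginalized out); ("not", v) asks for the negation of v."""
    def match(val, q):
        if q is None:
            return True                      # missing: accept any value
        if isinstance(q, tuple) and q[0] == "not":
            return val != q[1]               # negation: exclude the value
        return val == q                      # affirmation: exact match
    return sum(p for outcome, p in model.items()
               if all(match(v, q) for v, q in zip(outcome, query)))

print(prob(joint, ("sunny", None)))             # missing field: 3/6
print(prob(joint, (("not", "sunny"), "high")))  # negation: 1/6
```

Marginalizing over an unobserved field (the `None` case) is exactly what "easily make computations on missing or incomplete data" means at the probability level.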

Why did we build the Flek Machine?

In the 350+ years since probability theory emerged, no one had attempted to build a Probability Machine, so we set out to build the first one. Because probability is ubiquitous and foundational to many statistical and machine learning tasks, Flek:

  • Allows enterprises to run AI and BI analytical workloads together using one unified probabilistic model.

  • Enables enterprises to leverage available people of all backgrounds, including data scientists, analysts, statisticians, and programmers.

  • Makes it easy to extract probabilistic insight from data stored in various databases.

  • Simplifies building advanced programs that mix exploratory and predictive tasks using the familiar database-like paradigm.

  • Streamlines machine learning by using an automated, engine-driven pipeline instead of the experimental, algorithm-based machine learning available today.

What are the similarities between Flek and RDBMS?

Both systems have a core engine that serves user requests. In the case of an RDBMS, users send SQL queries to retrieve a single record or multiple records satisfying a condition, or to join attributes from multiple tables. In Flek, programmers can fetch a single Nugget or search the Probability store with a given filter; they can also discover specific Rules while mining or generate a sub-graph that captures a given pattern.
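The analogy can be made concrete side by side. The SQL half below uses the standard sqlite3 module; the Probability-store half is just a plain dictionary standing in for Flek's Nugget store, since the store's real interface is not documented here.

```python
import sqlite3

# RDBMS side: retrieve records satisfying a condition via SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, demand TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", "high"), ("south", "low"), ("north", "low")])
rows = con.execute("SELECT * FROM sales WHERE region = 'north'").fetchall()

# Probability-store side (hypothetical): Nuggets filtered by a pattern.
nuggets = {("north", "high"): 0.40, ("south", "low"): 0.35,
           ("north", "low"): 0.25}
matches = {k: v for k, v in nuggets.items() if k[0] == "north"}

print(len(rows), len(matches))  # both filters return two items
```

In both halves the user states *what* to retrieve (a condition, a pattern) and the engine decides *how*, which is the heart of the database-like paradigm the FAQ describes.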