The Flek Machine provides an easy-to-use yet powerful toolset for AI analytics.
Intelligent analytics has become an important activity in many enterprises, and for good reason. Looking at the evolution of analytics as a whole, we see a drive towards complexity, necessitated by business needs as well as the availability of new technology. More recently, AI has made inroads into the field and has contributed major advances such as machine-learning-based analytics.
The Problem with ML Today
Despite these contributions, the new techniques are difficult to apply due to their complexity and constraints. In particular, data engineers, software developers, analysts, and power users are confronted with a paradigm they are not familiar with. Instead of a streamlined exploratory and decision flow, they must now switch to experiment-driven pipelines that often do not fit common workflows and development cycles.
To clarify the idea, let's look at a paradigm that has influenced, and still influences, IT activities in many respects: database-centered analytics. Here, a model is designed first, and the data is then loaded into the database system accordingly. Using a simple API, users then query and mine their models for analytical and business-intelligence purposes.
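The database-centered flow above can be sketched in a few lines. This is a minimal, self-contained illustration using Python's built-in sqlite3 module; the schema and data are hypothetical examples, not anything from Flek.

```python
import sqlite3

# Step 1: design the model (schema) and load the data into the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 200.0)],
)

# Step 2: query the model through one simple, uniform API (SQL):
# total sales per region, for analytical / BI purposes.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 200.0), ('south', 200.0)]
conn.close()
```

The point is that one declarative interface covers both loading and analysis, with no separate experimentation stage.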
Unfortunately, this flow is not possible with current machine learning: users need one diverse set of tools to explore and prepare their data, then another set of tools and methods to experiment with different algorithms, train their models, deploy them to production, and finally run inference on new, previously unseen data.
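The fragmented workflow described above might look like the following sketch, using scikit-learn as a stand-in. The dataset and model choice are illustrative assumptions, chosen only to make each separate stage visible.

```python
# Stage 1: explore and prepare the data (one toolset).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Stage 2: experiment and train (another toolset; in practice, many
# iterations over algorithms and hyperparameters).
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(
    scaler.transform(X_train), y_train
)

# Stage 3: deploy, then run inference on new, previously unseen data.
accuracy = model.score(scaler.transform(X_test), y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Note that each stage uses different objects and conventions, and the trained artifact must be shipped and served separately, unlike the single query interface of the database paradigm.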
Flek: A Solution for AI Analytics
To solve this difficult problem and let citizen data scientists run AI analytics using a familiar, non-experimental paradigm, Flek offers a unique set of features that make it easy to run exploratory and predictive analytics together, or to augment newly developed applications with AI capabilities.
For example, Flek helps users automatically build complex probabilistic models that capture the variation in their data. Once the core engine has built a model, users can query it, mine interesting associations and rules, run complex probabilistic computations, or make predictions through a simple Python API. Thanks to the scalable architecture, users can also test their models locally and then deploy them remotely to the cloud, much as database models are developed, deployed, and run today.
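The source does not show Flek's actual Python API, so the following self-contained toy sketch only illustrates the kind of flow described: fit a probabilistic model once, then answer probability queries and make predictions through simple calls. All names here (`ToyModel`, `fit`, `probability`, `predict`) are hypothetical stand-ins, not Flek identifiers.

```python
from collections import Counter

class ToyModel:
    """Toy stand-in for a probabilistic model over categorical data."""

    def fit(self, values):
        # Capture the empirical distribution of the training data.
        counts = Counter(values)
        total = sum(counts.values())
        self.dist = {v: c / total for v, c in counts.items()}
        return self

    def probability(self, value):
        # Probabilistic query: P(value) under the fitted distribution.
        return self.dist.get(value, 0.0)

    def predict(self):
        # Prediction: the most likely value.
        return max(self.dist, key=self.dist.get)

model = ToyModel().fit(["churn", "stay", "stay", "stay"])
print(model.probability("stay"))  # 0.75
print(model.predict())            # stay
```

The appeal of the paradigm is that, once built, the model is queried much like a database table, rather than re-engineered for each question.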