What the Flek?

The necessity of a new tech standard for data operations…

Modern tech infrastructures already provide business professionals with robust real-time data access, visualization, and basic computation and statistics for data analytics (backward-looking exploration) – the result of remarkable breakthroughs in both computer science and mathematics (think of spreadsheets or relational database management systems).

However, no equivalent corporate-grade standard has ever been set for the most advanced data tasks (forward-looking prospection and prediction in particular): “AI for data science” still fails to deliver operational, real-time decision-support tools at scale. Instead, data practitioners use complex computer-science algorithms to build problem-specific models that have fundamental limitations:

  • Slow, heavy pipelines – For each business question: explore the data, choose a suitable technique, then train, tune, deploy and maintain models… (a sketch of this workflow follows the list).

  • Opaque results – Advanced techniques (Deep Learning…) are black boxes. Accept the opacity… or fall back to linear regression.

  • Expensive deployment and maintenance – Scarce expertise is constantly required, along with costly providers (cloud platforms, GPUs…).

  • Third-party dependencies – Reliance on external parties (experts, tools, systems…) to handle precious data and critical insights.

  • Complex yet limited – Even highly sophisticated techniques fall short on some tasks (e.g. combinatorial problems in large datasets).
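
To make the first limitation concrete, here is a minimal sketch of the conventional workflow that a single business question typically triggers today, using the widely used scikit-learn library. The dataset (“customers.csv”), the “churned” target column, and the model choices are illustrative assumptions, not anything specific to Flek:

```python
# A typical one-question pipeline: explore, pick a technique, train, tune,
# deploy, maintain -- repeated from scratch for every new business question.
# "customers.csv" and the "churned" target column are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customers.csv")                     # 1. explore / prepare data
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

pipe = Pipeline([                                     # 2. choose a technique
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier()),
])
grid = GridSearchCV(                                  # 3. train and tune
    pipe,
    param_grid={"model__n_estimators": [100, 300],
                "model__learning_rate": [0.05, 0.1]},
    cv=5,
)
grid.fit(X_train, y_train)
print("held-out accuracy:", grid.score(X_test, y_test))
# 4. deploy behind a service, monitor for drift, retrain as data changes...
# none of which answers the *next* business question.
```

Every step above demands time, expertise and infrastructure, and the resulting model answers exactly one question, opaquely.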

A truly disruptive innovation is always hard to imagine, let alone conceive and build. It requires a visionary mind, an intricate understanding of a field, long-term sustained effort, and the freedom to be critical and take a step back. Such a pattern has been, and always will be, rare. In the past few years, the fields of computer science and “AI” have poured tremendous resources into (loudly) scaling long-existing approaches with more data and more computing power. This is what the world has been seeing lately, but it will not be the last word of human ingenuity.

During this time, GoFlek focused its efforts on solving the fundamental limitations, mentioned above, that hinder organizations in their most advanced data operations. It focused on probability – the branch of mathematics that is the only scientific tool for dealing with uncertainty, and thus at the heart of business and other “real-life” situations – and on building the tools to harness its magic in modern organizations. The result of these years of R&D is the Flek Machine.

The Flek Machine is unique in at least two ways: (1) it provides a corporate-grade, math-based infrastructure that discards the “data science pipeline” and its shortcomings altogether; (2) it delivers provable, “open-box” insights and integrates mathematical discoveries that uncover new probabilistic patterns in datasets (influencer, causality, polymaly…). In other words, the Flek Machine disrupts both computer science and probability mathematics, and is geared towards bringing maximum value to modern organizations and businesses. Ultimately, it aims to maximize the impact of business decisions.

Flek adapts to existing corporate data systems. Once connected to a dataset, it enables users to perform a wide range of unique tasks – both backward-looking exploration and forward-looking prospection and prediction – in real time (instant results that auto-adapt on the fly as new data comes in), in full transparency (explainable, deterministic outcomes that everyone can trust, and that set the mathematical benchmark), and in an intuitive, scalable way.
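
To give a flavour of what “explainable, deterministic, auto-adapting” probabilistic output can mean in practice, here is a minimal classical illustration – standard probability theory, not the Flek Machine’s actual (proprietary) method. It performs an exact Beta-Binomial update of a conversion rate: every number refreshes instantly as each new observation arrives and can be reproduced by hand. The `ConversionBelief` class and the sample outcomes are illustrative assumptions:

```python
# Classical Beta-Binomial updating: exact, deterministic, and instantly
# refreshed as each new observation streams in. Purely illustrative --
# this is textbook probability, not the Flek Machine's internal method.
from dataclasses import dataclass

@dataclass
class ConversionBelief:
    alpha: float = 1.0   # prior "successes" (uniform prior: alpha = beta = 1)
    beta: float = 1.0    # prior "failures"

    def update(self, converted: bool) -> None:
        # Conjugacy makes the update a constant-time addition: no training,
        # no tuning, and the posterior is exact, not approximated.
        if converted:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        # Posterior mean of the conversion rate, derivable by hand.
        return self.alpha / (self.alpha + self.beta)

belief = ConversionBelief()
for outcome in [True, False, True, True, False]:   # new data coming in
    belief.update(outcome)
    print(f"estimated conversion rate: {belief.mean:.3f}")
```

The point of the sketch is the contrast: an exact probabilistic result that is transparent, trustworthy, and updates on the fly – with no pipeline in sight.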

We believe that the Flek Machine is a landmark innovation. Get ready to take your organization into the 21st century.

Learn more