From Deep Learning to Deep Reasoning
with Kinetix Vi

While Deep Learning is remarkable at pattern recognition, classification, and prediction, it falls short when it comes to personalization, explainability, and understanding its own rationale.

Kinetix’ Deep Reasoning XAI technology is built on a radically different AI algorithm design that overcomes Deep Learning’s shortcomings, purpose-built for enterprise-grade, human-in-the-loop decision support systems.

From Black Box to Glass Box
Why Does Explainability Matter?

Not only does eXplainable AI offer the “why” behind machine-based recommendations, it serves as the connective tissue between man and machine, allowing the two parties to better communicate and augment one another.

Natively Explainable

Not all eXplainable AI is created equal. While most companies try to convert a black box into a glass box, ours is natively a glass box, with explainability engineered into its very foundation.

Rule-based Architecture

Humans’ underlying reasoning framework is built upon complex rules. Kinetix’ Deep Reasoning AI is built upon a rule-based architecture to model how we ourselves are hardwired to think.
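
To make the idea concrete, rule-based reasoning can be sketched as a set of human-readable IF-THEN rules that yield both a decision and the exact rules behind it. The following toy Python example is purely illustrative — the rules, features, and labels are hypothetical, not Kinetix’s actual implementation:

```python
# Toy rule-based classifier: returns a decision plus the human-readable
# rules that fired for it. Illustrative only; rules/features are made up.

def classify(features, rules):
    """Apply IF-THEN rules; return the decision and the rules behind it."""
    fired = [r for r in rules if r["condition"](features)]
    if not fired:
        return "no-decision", []
    # Majority vote among fired rules; ties go to the first-seen label.
    votes = {}
    for r in fired:
        votes[r["label"]] = votes.get(r["label"], 0) + 1
    decision = max(votes, key=votes.get)
    explanation = [r["text"] for r in fired if r["label"] == decision]
    return decision, explanation

rules = [
    {"condition": lambda f: f["volatility"] > 0.3,
     "label": "high-risk",
     "text": "IF volatility > 0.3 THEN high-risk"},
    {"condition": lambda f: f["leverage"] > 2.0,
     "label": "high-risk",
     "text": "IF leverage > 2.0 THEN high-risk"},
    {"condition": lambda f: f["volatility"] <= 0.3 and f["leverage"] <= 2.0,
     "label": "low-risk",
     "text": "IF volatility <= 0.3 AND leverage <= 2.0 THEN low-risk"},
]

decision, why = classify({"volatility": 0.4, "leverage": 1.5}, rules)
# decision == "high-risk"; why lists the single rule that fired.
```

Because the output carries the fired rules verbatim, the “why” behind a recommendation is available by construction rather than reconstructed after the fact.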

Self Learning

Deep Learning is designed to draw correlations between the most obscure of data. Kinetix’ Deep Reasoning leverages ML differently — to infer causal relationships in the data.

Evolutionary

Black box models are difficult to tune once trained. Kinetix XAI’s models are more fluid and evolutionary, meaning they can adapt to changes in human decision-making through feedback systems.
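
One way such feedback-driven adaptation can work is to nudge the weight of each rule that contributed to a recommendation up when a human accepts it and down when it is rejected. The sketch below is a minimal illustration; the update scheme, rule names, and learning rate are assumptions, not Kinetix’s actual mechanism:

```python
# Illustrative feedback loop: rule weights drift toward human judgment
# after training. The scheme and values here are hypothetical.

def update_weights(weights, fired_rules, accepted, lr=0.1):
    """Reward rules behind accepted recommendations, penalize rejected ones."""
    for rule in fired_rules:
        delta = lr if accepted else -lr
        # Clamp each weight to [0, 1] so it stays a valid rule strength.
        weights[rule] = min(1.0, max(0.0, weights[rule] + delta))
    return weights

w = {"r1": 0.5, "r2": 0.5}
w = update_weights(w, ["r1"], accepted=True)    # r1 strengthened
w = update_weights(w, ["r2"], accepted=False)   # r2 weakened
```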

Technical Overview
Learning Modes:
  • Supervised (labeled)
  • Unsupervised (unlabeled)
  • Both Learn on Structured Data
Data Inputs:
  • Numerical Features
  • Categorical Features
AI Output:
  • Explainable Classification & Categorization
Functions:
  • Recommendation
  • Personalization
  • Anomaly Detection
  • Automatic labeling / tagging
  • Estimation
  • Optimization
Sample Use Cases:
  • Decision Support, Insight & Analytics
  • Idea Generation
  • Fraud Detection
  • Detecting Style Drift / Maintaining Consistency
  • Preventing Under/Over Reporting
  • Risk & Compliance
  • Process Optimization

With Kinetix XAI at the core
of your enterprise decision support systems…

Trust the Results

It’s risky to take machine-based results at face value. Through explanation, Kinetix’ XAI technology delivers more value to end users by coupling recommended courses of action with intelligible insights into the machine’s rationale.

Retain Control

Enterprise decision-making is too complex to automate. Kinetix’ eXplainable AI employs a human-in-the-loop design built for augmenting decision-making with humans calling the final shots.

Remain Compliant

Policy-makers are wising up to risky, black-box AI. In regulated industries, compliance mandates bring issues of auditability, transparency, bias, and data privacy to the forefront. Kinetix’ XAI is designed from the ground up to satisfy these compliance demands natively.

An Intelligent Decision Support System

The World of Data

Connect public data and
your proprietary, in-house data

Artificial Intelligence (AI)

Interprets the data world,
recommends, predicts, or finds anomalies,
and generates personalized analysis reports

Analyst Apps

Intuitive design, data visualization,
and notifications to provide actionable information and analysis


Client Success Stories

  • $85B Hedge Fund (Recommendation Engine)
  • $50B Top Tier Hedge Fund
  • Top British Broker Dealer
  • Leading Canadian Bank (Strategic Advisory)

Publications and Whitepapers

Explainable Artificial Intelligence Based on Neuro-Fuzzy Modeling with Applications in Finance

The book proposes techniques, with an emphasis on the financial sector, that make recommendation systems both accurate and explainable. The vast majority of AI models work like black boxes. However, in many applications, e.g., medical diagnosis or venture capital investment recommendations, it is essential to explain the rationale behind an AI system’s decisions or recommendations.

A Content-Based Recommendation System Using Neuro-Fuzzy Approach

Kinetix’ Content-Based Recommendation System Using Neuro-Fuzzy Approach provides human- and machine-interpretable explanations in an AI-assistant context. Our neuro-fuzzy architecture delivers substantial performance improvements, returning highly accurate personalized content and recommendations based on individual behavior without relying on collaborative filtering (crowd sampling).
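
To illustrate the content-based (non-collaborative) idea: items and the user’s behavioral profile can be represented as feature vectors, with unseen items ranked by similarity to the profile. The sketch below uses a generic cosine-similarity baseline with made-up feature names and data — it is not Kinetix’s neuro-fuzzy model:

```python
import math

# Content-based recommendation from one user's own behavior — no
# collaborative filtering. Features and catalog are hypothetical.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, items, top_n=2):
    """Rank candidate items by similarity to the user's profile vector."""
    scored = sorted(items.items(),
                    key=lambda kv: cosine(user_profile, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

# Profile averaged from items the user already engaged with,
# e.g. hypothetical [value, growth, momentum] affinities.
profile = [0.9, 0.1, 0.4]
catalog = {
    "item_a": [0.8, 0.2, 0.5],
    "item_b": [0.1, 0.9, 0.2],
    "item_c": [0.7, 0.0, 0.3],
}
picks = recommend(profile, catalog)
# picks == ["item_c", "item_a"]: the items closest to the user's profile.
```

Because each recommendation is traceable to the feature overlap between the item and the individual’s own history, the rationale can be surfaced per user rather than inferred from crowd behavior.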


Towards Interpretability of the Movie Recommender Based on Neuro-Fuzzy Approach

Kinetix’ Fast Computing Framework for Convolutional Neural Networks (FCFCNN) embodies a unique XAI architecture that reduces processing overhead while accelerating forward signal flow. Neurons store reference pointers to the corresponding regions of the previous layer’s input, eliminating the need to search for connections between layers. Additionally, reference points are batched along with feature maps in multi-feature input containers and treated as vectors, speeding calculations across CNN layers. In image-validation benchmarks, FCFCNN performed twice as fast as the leading OverFeat CNN.


On Explainable Recommender Systems Based on Fuzzy Rule Generation Techniques

This paper presents an application of the Zero-Order Takagi-Sugeno-Kang method to explainable recommender systems. The method is based on the Wang-Mendel and the Nozaki-Ishibuchi-Tanaka techniques for the generation of fuzzy rules, and it is best suited to predict users’ ratings. The model can be optimized using the Grey Wolf Optimizer without affecting the interpretability. The performance of the methods has been shown using the MovieLens 10M dataset.
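
The zero-order Takagi-Sugeno-Kang inference the paper builds on can be sketched in a few lines: each rule pairs fuzzy antecedents with a constant consequent, and the predicted rating is the firing-strength-weighted average of those constants. The membership functions, input, and rule constants below are illustrative choices, not values from the paper:

```python
# Zero-order TSK inference sketch: fuzzy IF-parts, constant THEN-parts,
# prediction = firing-strength-weighted average. Values are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_predict(inputs, rules):
    """Zero-order TSK: weighted average of each rule's constant consequent."""
    num = den = 0.0
    for antecedents, consequent in rules:
        # Firing strength: product t-norm over per-input memberships.
        strength = 1.0
        for x, (a, b, c) in zip(inputs, antecedents):
            strength *= triangular(x, a, b, c)
        num += strength * consequent
        den += strength
    return num / den if den else 0.0

# Two rules over one input (e.g. a genre-affinity score in [0, 1]):
rules = [
    ([(0.0, 0.0, 1.0)], 2.0),   # IF affinity is LOW  THEN rating = 2.0
    ([(0.0, 1.0, 1.0)], 5.0),   # IF affinity is HIGH THEN rating = 5.0
]
rating = tsk_predict([0.5], rules)
# rating == 3.5: both rules fire at strength 0.5, so the output is the midpoint.
```

Because the consequents are constants attached to readable fuzzy rules, the contribution of each rule to a predicted rating remains directly inspectable — the interpretability property the paper preserves under optimization.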


Use our core technology
to supercharge your workflow.
Build your custom XAI solution.

    Inquire about XAI

    XAI news from the outside

    How Artificial Intelligence Will Change the Airline Passenger Experience

    DARPA’s XAI Explainable Artificial Intelligence Future

    Five Ways Artificial Intelligence Is Disrupting Asset Management

    Is Art Created by AI Really Art?