🚂 AI Security Engine

The AI Security Engine is the brain of ProtectAI. It analyzes smart contracts and blockchain activity to determine whether something is risky. Rather than relying on a single rule or signal, it learns from past behavior, adapts to new attack patterns, and improves as it gets more data.

For users, this means they get early warnings before interacting with dangerous tokens or contracts, even if the threat is brand new or hasn’t been reported yet.


How the AI Security Engine Works

The AI engine watches what happens on the blockchain and uses different layers of logic to decide whether something is suspicious. It’s built to mimic how a security analyst might think: looking for strange behavior, checking against known patterns, and constantly learning.

Threat Model Training

At the core of the AI engine is a group of models trained to detect threats. The engine applies three layers of logic:

  1. Anomaly Detection

    • Flags anything that doesn’t match expected behavior.

    • Example: a token that charges a 99% fee when you try to sell it.

  2. Clustering

    • Groups contracts with similar patterns.

    • Helps catch copy-paste scams or variations of known attacks.

  3. Rule-Based Checks

    • Applies fixed logic based on proven attack methods.

    • Example: if a contract has no sell function but accepts buys, it's suspicious.

These layers work together. If something passes the rule-based checks but looks strange to the anomaly detector, it still gets flagged.
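To make the layering concrete, here is a minimal sketch (in Python) of how signals from the three layers could be combined into a single verdict. The feature names, thresholds, and cluster labels are illustrative assumptions for the example, not ProtectAI's actual logic.

```python
# Hypothetical sketch of combining the three detection layers.
# Field names and thresholds are illustrative, not ProtectAI's actual API.

def evaluate_contract(features: dict) -> dict:
    findings = []

    # Layer 1: anomaly detection - flag behavior outside the expected range
    if features.get("sell_fee_pct", 0) > 50:
        findings.append("anomaly: extreme sell fee")

    # Layer 2: clustering - compare against known scam families
    if features.get("nearest_cluster") in {"honeypot_v1", "rug_pull_copycat"}:
        findings.append("cluster: resembles a known scam family")

    # Layer 3: rule-based checks - fixed logic from proven attack methods
    if features.get("accepts_buys") and not features.get("has_sell_function"):
        findings.append("rule: buys allowed but no sell path")

    # A flag from any single layer is enough to mark the contract suspicious
    return {"suspicious": bool(findings), "findings": findings}


print(evaluate_contract({
    "sell_fee_pct": 99,
    "nearest_cluster": "honeypot_v1",
    "accepts_buys": True,
    "has_sell_function": False,
}))
```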


What the AI Looks At

To make good decisions, the AI engine collects a wide range of data. Key inputs include:

  • Contract Metadata

    • Who deployed it

    • When it was deployed

    • Whether the source code was verified or obfuscated

  • Transaction History

    • How users are interacting with the contract

    • Failed vs successful transactions

    • Volume of activity and patterns over time

  • Bytecode Features

    • The actual machine-readable contract logic

    • Includes functions, permissions, and hidden behaviors

    • Often contains indicators of honeypots or scams

This mix of raw data, historical behavior, and context gives the engine a full picture before making a decision.
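As a rough illustration, the inputs above could be grouped into a single feature record like the sketch below. The field names and types are assumptions made for the example, not ProtectAI's real schema.

```python
# Illustrative only: a minimal feature record grouping the inputs described above.
from dataclasses import dataclass, field

@dataclass
class ContractFeatures:
    # Contract metadata
    deployer: str
    deployed_at: int            # block timestamp
    source_verified: bool

    # Transaction history
    tx_count_24h: int
    failed_tx_ratio: float      # failed / total transactions
    unique_senders: int

    # Bytecode features
    has_sell_function: bool
    has_hidden_mint: bool
    external_calls: list[str] = field(default_factory=list)

example = ContractFeatures(
    deployer="0xabc...",
    deployed_at=1_700_000_000,
    source_verified=False,
    tx_count_24h=420,
    failed_tx_ratio=0.37,
    unique_senders=55,
    has_sell_function=False,
    has_hidden_mint=True,
)
```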


What Types of Models Are Used

ProtectAI uses a mix of model types, depending on the task:

  • Random Forest: Good for identifying known threat patterns from historical data

  • Logistic Regression: Lightweight and fast, useful when a score is needed with minimal latency

  • Custom Classifiers: Tailored models trained specifically on blockchain data, designed to spot things general-purpose tools can’t

Each model focuses on a slightly different angle. Together, they create a more complete risk profile.
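The sketch below shows one plausible way to combine these model types using scikit-learn, with synthetic data standing in for real contract features. ProtectAI's actual training pipeline is not described here, so treat the details as assumptions.

```python
# A minimal sketch of combining model types with scikit-learn.
# X and y are synthetic placeholders for contract features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 8))                     # 8 numeric contract features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # stand-in "malicious" labels

# Random Forest: captures known threat patterns from historical data
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Logistic Regression: lightweight scorer for low-latency checks
logreg = LogisticRegression(max_iter=1000).fit(X, y)

# Each model looks at the problem from a different angle; averaging their
# probabilities is one simple way to build a combined risk score.
sample = X[:1]
risk = (forest.predict_proba(sample)[0, 1] + logreg.predict_proba(sample)[0, 1]) / 2
print(f"combined risk score: {risk:.2f}")
```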


Inference vs Training Environments

The AI engine runs in two modes:

  • Training Mode:

    • Used behind the scenes

    • Takes in labeled data from known safe or malicious contracts

    • Updates models regularly as new threats appear

  • Inference Mode:

    • Runs live in the product

    • Accepts user input or automatic scans

    • Returns real-time threat scores without slowing things down

This split keeps the live experience fast and responsive while the models behind it are continually updated with the latest intelligence.
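A minimal sketch of that split, assuming a scikit-learn model persisted with joblib: a background job retrains and publishes the model, while the live service only loads it and scores incoming features. File names and feature shapes are placeholders.

```python
# Sketch of the training/inference split. Paths and shapes are illustrative.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

MODEL_PATH = "threat_model.joblib"

def train(X_labeled: np.ndarray, y_labeled: np.ndarray) -> None:
    """Training mode: runs behind the scenes on labeled contract data."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_labeled, y_labeled)
    joblib.dump(model, MODEL_PATH)      # publish the refreshed model

def score(features: np.ndarray) -> float:
    """Inference mode: loads the published model and returns a threat score."""
    model = joblib.load(MODEL_PATH)
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

# Example: retrain on synthetic data, then score one contract's feature vector
rng = np.random.default_rng(1)
train(rng.random((100, 8)), rng.integers(0, 2, 100))
print(score(rng.random(8)))
```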


The Feedback Loop

ProtectAI doesn’t just rely on machines. Human security analysts also play a role. When they confirm whether a contract is malicious or safe, that feedback is sent back into the AI engine.

This helps in two ways:

  1. Improves Accuracy: Models learn from confirmed mistakes or correct predictions.

  2. Catches Edge Cases: Some contracts may be safe technically but are used in shady ways. Human review helps catch these.

Over time, the system becomes smarter. Each new attack it sees helps protect future users better.
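The sketch below illustrates one way such a feedback loop could be wired: confirmed analyst verdicts are stored as labeled examples and, once enough accumulate, handed back to the training pipeline. The structure and threshold are assumptions for illustration only, not ProtectAI internals.

```python
# Hypothetical analyst feedback loop: confirmed verdicts become new labeled
# examples and eventually trigger a retrain.
from dataclasses import dataclass

@dataclass
class AnalystVerdict:
    contract_address: str
    features: list[float]
    is_malicious: bool          # confirmed by a human reviewer

labeled_examples: list[AnalystVerdict] = []

def record_verdict(verdict: AnalystVerdict, retrain_threshold: int = 50) -> None:
    """Store a confirmed verdict and retrain once enough new labels arrive."""
    labeled_examples.append(verdict)
    if len(labeled_examples) >= retrain_threshold:
        X = [v.features for v in labeled_examples]
        y = [v.is_malicious for v in labeled_examples]
        # retrain_models(X, y)  # hand off to the training pipeline sketched above
        labeled_examples.clear()

record_verdict(AnalystVerdict("0xdef...", [0.9, 0.1, 0.7], True))
```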
