Desmond Li — NEOX head of algorithms & semantic engineering

NEOX R&D core · Ph.D.-led algorithms

Mattock™ AI Training Center: data feeding and closed-loop training led by Dr. Li’s team.

Dr. Li leads NEOX chip, Mattock models, and media-agent R&D alongside NEOXGEO GEO delivery.

Process-Driven

Mattock™ AI Training Center

Turn enterprise knowledge into AI memory

At the Mattock™ AI Training Center, knowledge is structured, validated, and batch-fed so generative answers stay grounded, auditable, and iteratively improvable.

  • Corpus versions
  • Feed cadence
  • Audit trail
  • Quarterly reviews
training.run · job queue

  • status: active
  • queue: 18 jobs
  • next slot: 4m 12s
  • ok rate: 99.1%
  • ingest.corpus: shard_a ✓
  • matrix.validate: struct_ok ✓
  • edge.feed / cluster: batch 7 running

Illustrative queue and hardware feed status; live metrics depend on contract and environment.

Capabilities

From governance to feeding—a verifiable training chain

The center turns “content” into trainable semantic assets: versionable, experiment-friendly, and observable alongside compute.

Semantic assets & knowledge governance

Arrange the corpus by entity and scenario and ship verifiable snippets with lineage; align the external narrative with internal knowledge to reduce drift.

Feed cadence & queue orchestration

Tune throughput, retry, and cool-down within contractual and risk bounds; track each batch and owner even when jobs run in parallel.
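As a minimal sketch of the retry-and-cool-down policy described above (all names here, such as `FeedBatch` and `cooldown_s`, are illustrative, not a product API):

```python
import time
from dataclasses import dataclass

@dataclass
class FeedBatch:
    batch_id: str
    owner: str          # accountable owner, tracked even when jobs run in parallel
    payload: list[str]
    attempts: int = 0

def feed_with_backoff(batch: FeedBatch, send, max_retries: int = 3,
                      cooldown_s: float = 2.0) -> bool:
    """Try to deliver a batch, cooling down between failed attempts."""
    while batch.attempts <= max_retries:
        batch.attempts += 1
        if send(batch.payload):
            return True
        # exponential cool-down between retries, kept within contractual bounds
        time.sleep(cooldown_s * 2 ** (batch.attempts - 1))
    return False
```

The key property is that the batch object itself carries its owner and attempt count, so the audit trail survives parallel execution.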

Measurable training goals

Define success up front—citation mix, branded sentence coverage, negative mentions to suppress—then sample on a cadence instead of shipping blindly.
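"Define success up front" can be as simple as a declared set of thresholds checked against each sampling run. A hedged sketch, with hypothetical metric names and values:

```python
# Illustrative success criteria for one training cycle; the metric names
# and thresholds below are examples, not product parameters.
GOALS = {
    "citation_share": 0.25,          # minimum share of answers citing the corpus
    "branded_sentence_coverage": 0.60,
    "negative_mention_rate": 0.05,   # maximum tolerated
}

def check_goals(sampled: dict) -> dict:
    """Compare one sampling run against the declared goals, per metric."""
    return {
        "citation_share":
            sampled["citation_share"] >= GOALS["citation_share"],
        "branded_sentence_coverage":
            sampled["branded_sentence_coverage"] >= GOALS["branded_sentence_coverage"],
        "negative_mention_rate":
            sampled["negative_mention_rate"] <= GOALS["negative_mention_rate"],
    }
```

Running this on a cadence, rather than once, is what turns the goals into a sampling discipline instead of a launch checklist.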

How it works

Insight and feeding: two layers, one accountability chain

First close the gap between “how we are described” and “how we want to be described,” then batch-feed only approved corpus. Execution touches validated content only; measurements then set the priorities for the next cycle.

Insight · mentions & gaps

Sample · compare · prioritize

Periodically sample public presence and citation patterns against brand claims and competitors—producing a backlog of paragraphs to rewrite, entities to evidence, and scenarios to case-study.

  • Gap priority
  • Risk guardrails
  • Compliance checks
  • Strategy iterations

Training layer · Mattock™

Matrix · queue · edge feeding

After governance, generate tasks, validate structure, and orchestrate queues; align cadence and fingerprints with edge nodes so behavior, timing, and nodes are auditable and replayable.

  • Hardware nodes map one-to-one to tasks for replay
  • Corpus and matrix versions line up across experiments

Reading the pipeline

One line from content ingress to feed egress

The video is for orientation only: owned knowledge on the left, structuring and task slicing in the middle, edge nodes delivering on cadence on the right—vendors may swap, but “fix content before scale” stays constant.

Knowledge ingress → structuring & task orchestration → edge feeds & external endpoints (illustrative, not tied to a single product).
Quality gates

Pre-flight validation

  • Structured matrix and entity relationships stay consistent—fewer hallucinations or contradictions.
  • Sensitive and compliance contexts flagged with optional human-in-the-loop review.
  • Task shards and node weights trace back to specific corpus versions.
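A pre-flight gate of this kind can be sketched as one function per corpus entry: a structural check, a lineage check, and a sensitive-context flag for optional human review. Field names and the sensitivity pattern below are hypothetical:

```python
import re

# Hypothetical sensitive-context trigger; a real deployment would use
# a maintained compliance vocabulary, not a three-word regex.
SENSITIVE = re.compile(r"\b(medical|legal|financial)\b", re.IGNORECASE)

def preflight(entry: dict) -> dict:
    """Validate one corpus entry before it can be sliced into feed tasks."""
    issues = []
    if not entry.get("entity") or not entry.get("text"):
        issues.append("missing entity or text")          # structural consistency
    if entry.get("corpus_version") is None:
        issues.append("no corpus version for lineage")   # traceability to versions
    needs_review = bool(SENSITIVE.search(entry.get("text", "")))
    return {"ok": not issues, "issues": issues, "human_review": needs_review}
```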
Telemetry & versions

Metrics bands for quarterly roll-ups

  • Multi-surface citation share, narrative drift, competitor sentence baselines (sampled).
  • Per-batch success, latency, and audit events.
  • Major version changes and A/B tracks observed in parallel.

Feeding Logic

Four steps: collect, matrix, hardware feed, closed loop

Process-led delivery: every step ships auditable artifacts (corpus versions, matrix checks, feed batches), then ties into quarterly reviews so brand narratives stay understandable, citable, and optimizable in generative surfaces.

  • 01

    Amoeba Data Extraction

    Ingestion & extraction

    Scan and clean scattered PDFs, sites, and social sources; keep authoritative quotable lines and drop noise or stale narratives for downstream structuring.

  • 02

    Exclusive DB Matrix

    Exclusive database matrix

    Build a matrix around brand scenes and entities—entries are verifiable and traceable to reduce hallucinations; write version stamps for diff and rollback.

  • 03

    Hardware-Led Logic Feeding

    Hardware-grade knowledge feeding

    Edge nodes · task matrix

    Dedicated compute and edge nodes carry feed load and simulate real interaction cadence; nodes map to tasks for audit and replay.

  • 04

    Closed-Loop Evaluation

    Closed-loop evaluation & retraining

    Write priorities from measurements—which narratives get cited, where gaps remain—then adjust the next corpus and feed strategy in a repeatable loop.
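The four steps above can be sketched as one toy loop. Everything here is illustrative stand-in logic (the length filter, the batch size, the priority string), meant only to show how artifacts flow from one step to the next:

```python
def collect(raw_docs):
    """Step 01: keep quotable lines, drop noise (here: trivially short lines)."""
    return [d.strip() for d in raw_docs if len(d.strip()) > 10]

def build_matrix(snippets, version):
    """Step 02: structure entries and stamp a corpus version for diff/rollback."""
    return {"version": version,
            "entries": [{"id": i, "text": s} for i, s in enumerate(snippets)]}

def feed(matrix, batch_size=2):
    """Step 03: slice entries into batches, as edge nodes would receive them."""
    e = matrix["entries"]
    return [e[i:i + batch_size] for i in range(0, len(e), batch_size)]

def measure(batches):
    """Step 04: measurements set the priorities for the next cycle."""
    return {"batches_fed": len(batches), "next_priority": "fill citation gaps"}
```

The point of the version stamp in step 02 is that every downstream batch can be traced back to exactly one corpus snapshot.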

Method in brief

Evidence first, cadence second, scale last

The center commits to three guarantees at once: corpus can be spot-checked, tasks can be replayed, and outcomes can be explained. Scale follows only when every batch leaves a trail—not the other way around.

  • Each batch binds to corpus versions and approvals to prevent drift
  • Experiments and controls can run in parallel to persuade internal stakeholders with data
  • Metrics chase actionable deltas: what to rewrite next, which evidence to add
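"Each batch binds to corpus versions and approvals" can be enforced at batch-creation time: a batch simply cannot exist without a recorded approval. A minimal sketch, with hypothetical field names and approver records:

```python
# Hypothetical approval ledger: corpus version -> approver on record.
APPROVED_VERSIONS = {"corpus-v12": "qa-team"}

def bind_batch(batch_id: str, corpus_version: str) -> dict:
    """Create a batch record only if its corpus version carries an approval."""
    approver = APPROVED_VERSIONS.get(corpus_version)
    if approver is None:
        # unapproved corpus never reaches the feed queue
        raise ValueError(f"corpus {corpus_version} lacks approval; batch blocked")
    return {"batch": batch_id, "corpus": corpus_version, "approved_by": approver}
```

Making the approval a precondition, rather than a logged afterthought, is what prevents drift between what was approved and what was fed.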