# FluxMateria Public Pipeline

*Battery Cathodes Through Interface-Aware and Build-Ready Screening*

Date: 2026-04-08

## Purpose

This note explains, at a public-safe level, how FluxMateria arrived at the battery case-study result in `26.817` seconds on local hardware.

It shows the logic of the workflow without disclosing proprietary implementation details, private calibration artifacts, or internal handoff sheets.

## Public-Safe Pipeline

### 1. Curated candidate framing

FluxMateria started from a defined lithium-cathode pool rather than an unconstrained chemistry universe.

Why that matters:

- it keeps the first-pass comparison engineering-relevant
- it makes the result interpretable
- it prevents the public story from looking like random formula generation

### 2. Bulk engineering screen

The first layer ranked candidates on bulk material behavior.

This is where a conventional one-metric workflow would often stop.

In this study, that bulk-only answer favored `LiNiO2`.

### 3. Interface and contact-readiness pass

FluxMateria then re-ranked the same candidates for interface behavior rather than treating bulk properties as the whole story.

This is where the conclusion changed materially:

- bulk ranking favored `LiNiO2`
- interface ranking reopened `LiMnPO4`

That rank shift is one of the clearest signals that the interface pass adds information a bulk-only workflow misses.
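The bulk-versus-interface shift can be sketched as a weighted re-rank. This is a hypothetical illustration only: the candidate names come from the case study, but every score and weight below is an invented placeholder, not FluxMateria data or FluxMateria's actual scoring method.

```python
# Hypothetical sketch of an interface-aware re-rank. All scores and the
# blending weight are illustrative placeholders, not FluxMateria internals.
CANDIDATES = {
    #             (bulk, interface)  -- illustrative scores in [0, 1]
    "LiNiO2":     (0.92, 0.55),
    "LiMnPO4":    (0.78, 0.90),
    "LiMnO2":     (0.81, 0.75),
    "Li4Ti5O12":  (0.70, 0.85),
}

def rank(weight_interface):
    """Rank candidates by a weighted blend of bulk and interface scores."""
    def score(name):
        bulk, iface = CANDIDATES[name]
        return (1 - weight_interface) * bulk + weight_interface * iface
    return sorted(CANDIDATES, key=score, reverse=True)

bulk_only = rank(0.0)        # bulk-only ordering favors LiNiO2
interface_aware = rank(0.7)  # interface-heavy ordering reopens LiMnPO4
```

The point of the sketch is structural: the same candidate pool, scored twice with different emphasis, can legitimately produce different leaders.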

### 4. Battery-native electrochemistry pass

FluxMateria then treated the battery problem as a battery problem, scoring candidates on:

- voltage
- capacity
- transport
- degradation
- cycle-life heuristics
- electrolyte and coating fit
- manufacturing and cost weighting

This is where `LiMnO2` emerged as the top battery-native candidate.
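A battery-native pass like the one described above can be sketched as multi-criteria weighted scoring. The criteria names mirror the list above; the weights and per-criterion scores are invented placeholders for illustration and are not FluxMateria's private weighting tables.

```python
# Hypothetical sketch of a battery-native re-rank: each candidate gets a
# normalized score per criterion, combined with a weight vector. All numbers
# below are illustrative placeholders, not FluxMateria data.
WEIGHTS = {
    "voltage": 0.15, "capacity": 0.20, "transport": 0.15,
    "degradation": 0.15, "cycle_life": 0.15,
    "electrolyte_fit": 0.10, "manufacturing": 0.10,
}

SCORES = {
    "LiNiO2":  {"voltage": 0.90, "capacity": 0.95, "transport": 0.70,
                "degradation": 0.40, "cycle_life": 0.50,
                "electrolyte_fit": 0.60, "manufacturing": 0.50},
    "LiMnO2":  {"voltage": 0.80, "capacity": 0.80, "transport": 0.75,
                "degradation": 0.70, "cycle_life": 0.75,
                "electrolyte_fit": 0.80, "manufacturing": 0.85},
    "LiMnPO4": {"voltage": 0.85, "capacity": 0.60, "transport": 0.50,
                "degradation": 0.80, "cycle_life": 0.80,
                "electrolyte_fit": 0.75, "manufacturing": 0.70},
}

def battery_native_score(name):
    """Weighted sum over all battery-native criteria."""
    return sum(WEIGHTS[c] * SCORES[name][c] for c in WEIGHTS)

best = max(SCORES, key=battery_native_score)
```

The design point is that a chemistry strong on a single bulk metric can lose to one with fewer weaknesses once degradation, cost, and fit are weighted in.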

### 5. Calibration, uncertainty, and active-learning pass

The model did not stop at a ranked list.

It also carried forward:

- calibrated benchmark context
- uncertainty estimates
- support-level checks
- recommended next experiments

This is important because it lets the system say more than "this looks best."

It can also say:

- how strongly the result is supported
- what the main failure risks are
- what to test next
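The uncertainty-and-support idea above can be sketched as ranking on a pessimistic lower bound while flagging well-scoring but thinly supported candidates as next experiments. Every number here is an illustrative placeholder, and the thresholds are invented for the sketch, not calibration internals.

```python
# Hypothetical sketch: each candidate carries a mean score, an uncertainty
# estimate, and a support level (fraction of benchmark coverage). All values
# and thresholds are illustrative placeholders.
CANDIDATES = {
    #            (mean, std, support)
    "LiMnO2":    (0.78, 0.06, 0.80),
    "LiMnPO4":   (0.74, 0.12, 0.45),
    "LiNiO2":    (0.68, 0.04, 0.90),
}

def lower_bound(name, k=1.0):
    """Pessimistic score: mean minus k standard deviations."""
    mean, std, _ = CANDIDATES[name]
    return mean - k * std

def next_experiments(score_floor=0.7, support_floor=0.5):
    """Well-scoring candidates whose support is too thin to trust yet."""
    return [name for name, (mean, _, support) in CANDIDATES.items()
            if mean > score_floor and support < support_floor]

ranked = sorted(CANDIDATES, key=lower_bound, reverse=True)
to_test = next_experiments()
```

This is what lets a system say more than "this looks best": the ranking reflects how strongly each result is supported, and the flagged candidates become the recommended experiments.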

### 6. Prototype handoff

The final step was not "highest score wins."

It was a build decision.

That is why the top immediate build package became `Li4Ti5O12`, even though `LiMnO2` remained the highest-upside battery-native candidate.

This is one of the most important FluxMateria-specific features of the workflow:

- highest-upside chemistry
- best interface correction
- best immediate build package

These are not forced to be the same thing.
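The separation of the three labels can be sketched by computing each one independently rather than from a single composite score. The readiness and interface values below are invented placeholders; only the candidate names and the three-label outcome come from the case study.

```python
# Hypothetical sketch of the handoff step: the three decision labels are
# computed from different axes, so they are not forced onto one candidate.
# All numeric values are illustrative placeholders.
from typing import NamedTuple

class Candidate(NamedTuple):
    upside: float      # battery-native score ceiling
    interface: float   # interface-compatibility score
    readiness: float   # build readiness: synthesis maturity, supply, cost

POOL = {
    "LiMnO2":    Candidate(upside=0.78, interface=0.75, readiness=0.55),
    "LiMnPO4":   Candidate(upside=0.70, interface=0.90, readiness=0.60),
    "Li4Ti5O12": Candidate(upside=0.60, interface=0.80, readiness=0.95),
}

decision = {
    "highest_upside":  max(POOL, key=lambda n: POOL[n].upside),
    "best_interface":  max(POOL, key=lambda n: POOL[n].interface),
    "immediate_build": max(POOL, key=lambda n: POOL[n].readiness),
}
```

Because each label is a separate argmax, a readiness-limited chemistry like `LiMnO2` can keep its highest-upside label while `Li4Ti5O12` takes the immediate build slot.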

## What Is Unique About The FluxMateria Approach

The public-safe novelty is not that FluxMateria discovered a brand-new chemistry family in this case study.

The novelty is that the system kept several decision layers coherent inside one short local workflow:

1. It did not stop at bulk ranking.
2. It re-ranked for interface behavior.
3. It re-ranked again for battery-native tradeoffs.
4. It added calibration and uncertainty instead of only giving a point score.
5. It ended in a prototype handoff rather than a generic shortlist.

That is the correct public framing of what made the 30-second result interesting.

## What Is Intentionally Not Disclosed

This document does not disclose:

- internal route names
- internal runner or script names
- private weighting tables
- calibration internals
- full prototype recipe sheets
- non-public generated variants
- raw provenance paths

## Why The `26.817`-Second Runtime Matters

FluxMateria did not just score a single formula in `26.817` seconds.

It completed a multi-stage decision workflow that included:

- bulk ranking
- interface-aware correction
- battery-native re-ranking
- uncertainty-aware prioritization
- prototype handoff

That is why the runtime is strategically important.

The question is not "can a single property predictor be fast?"

The question is "can a decision-grade battery workflow finish before a human team loses the thread?"

In this case, the answer was yes.

## How Long Conventional Workflows Usually Take

Exact competitor timing depends on the stack, so the clean public comparison is against conventional fragmented workflows, not named companies, unless those companies are directly benchmarked.

The literature supports the following broad comparison:

- compute-heavy surface and configuration searches often take `days to weeks`
- even a single full DFT adsorbate-surface relaxation can take about `24 hours`
- large high-throughput battery-material campaigns often rely on extensive precomputed DFT databases and multi-stage filtering over very large candidate spaces
- experimental lifetime and durability validation takes much longer still
- accelerated battery-aging studies can still easily run `1-3 years`
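To make the compression concrete, a back-of-envelope comparison against just the single-relaxation figure above. This is illustrative arithmetic only: the 24-hour number is the cited approximate, not a measured baseline.

```python
# Illustrative arithmetic: the cited ~24-hour single DFT adsorbate-surface
# relaxation versus the reported 26.817-second end-to-end local run.
single_dft_relaxation_s = 24 * 3600  # ~24 hours, from the cited range
fluxmateria_run_s = 26.817           # reported local runtime

speedup = single_dft_relaxation_s / fluxmateria_run_s  # roughly 3,200x
```

Even against a single relaxation, not a full campaign, the computational decision layer is three orders of magnitude faster.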

So the defensible claim is:

> FluxMateria compressed the computational decision layer from the usual hours-to-days range of fragmented compute workflows into about 27 seconds locally, while still leaving the real-world build and validation step where it belongs: in the lab.

## Best Public Claim

The safest strong claim is:

> In under 30 seconds on local hardware, FluxMateria moved a battery study from a flat shortlist to a build-ready decision structure.

That is stronger, cleaner, and more defensible than:

- "we found the best next battery"
- "we replaced validation"
- "we beat every competitor"

## Timing Context Sources

- AdsorbML / npj Computational Materials 2023: https://www.nature.com/articles/s41524-023-01121-5
- High-throughput cathode coatings / Nature Communications 2016: https://www.nature.com/articles/ncomms13779
- Accelerated cathode discovery / Nature Communications 2014: https://www.nature.com/articles/ncomms5553
- Accelerated lifetime testing / Measurement: Energy 2024: https://www.sciencedirect.com/science/article/pii/S2950345024000198
- Cyclic aging DoE / Journal of Power Sources 2022: https://www.sciencedirect.com/science/article/pii/S037877532101435X
- BV + DFT screening / Scientific Reports 2015: https://www.nature.com/articles/srep14227
