Your code.
Your models.
Our plumbing.
The Micantis Data Layer ingests data from every major cycler format, normalizes it, stores it in the cloud, and gives you a Python API with Parquet downloads. Just clean data, ready for whatever you want to build.
This is the API-only tier. There is no user interface — you work entirely through the Python API and Parquet downloads. Your code, your notebooks, your dashboards. When your team is ready for a full platform with built-in analysis, dashboards, and reporting, upgrading to Micantis WorkBook is straightforward — your data layer stays the same.
```python
# Connect to your data
from micantis import MicantisAPI

api = MicantisAPI()

# Every cycler. Already normalized.
data = api.get_data_table(
    barcode="LOT-2024-0847"
)
df = api.download_parquet_file(data[0]["id"])

# Cycle summaries are already computed
df.columns
# → ['cycle', 'capacity_ah', 'energy_wh',
#    'dcir_mohm', 'voltage_v', 'current_a',
#    'temperature_c', 'timestamp', ...]

# Your models. Your dashboards. Your IP.
predictions = your_model(df)
```
Every Format
Every major cycler format ingested automatically. No custom parsers. No maintenance when firmware updates.
Cloud Native
Your data lives in the cloud, normalized and structured. Access it from anywhere. No local file management.
Python API
Clean, documented API. Pull any batch, any lot, any date range. Integrate with your existing tools and workflows.
Parquet Out
Download as Parquet files. Fast, columnar, and ready for pandas, Spark, or whatever your stack needs.
Automatic Data Ingestion
Upload from any cycler. We handle the parsing, normalization, and storage.
Supported Cyclers
Every major format, ingested automatically. No custom parsers. No maintenance when firmware updates break your pipeline.
What Happens On Upload
- Auto-detect cycler format and firmware version
- Normalize column names, units, and timestamps
- Compute cycle summaries (capacity, energy, efficiency, DCIR)
- Extract metadata and link to cell records
- Store raw + processed data in the cloud
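To make the "compute cycle summaries" step concrete, here is a minimal sketch of how a figure like per-cycle discharge capacity can be derived from the normalized time-series columns shown above. This is illustrative only — the toy data and the `discharge_capacity_ah` helper are assumptions for the example, not the platform's internal implementation.

```python
import pandas as pd

# Hypothetical raw time-series in the normalized schema produced on upload
raw = pd.DataFrame({
    "cycle":     [1, 1, 1, 1],
    "current_a": [1.0, 1.0, -1.0, -1.0],   # charge, then discharge
    "voltage_v": [3.6, 3.7, 3.7, 3.6],
    "timestamp": pd.to_datetime([
        "2024-01-01 00:00:00", "2024-01-01 01:00:00",
        "2024-01-01 02:00:00", "2024-01-01 03:00:00",
    ]),
})

def discharge_capacity_ah(df: pd.DataFrame) -> float:
    """Rectangular integration of discharge current over time, in Ah."""
    g = df[df["current_a"] < 0]                       # discharge rows only
    hours = g["timestamp"].astype("int64") / 3.6e12   # ns → hours
    return float((-g["current_a"] * hours.diff().fillna(0)).sum())

capacity = discharge_capacity_ah(raw)  # → 1.0 Ah for this toy trace
```

With the Data Layer, this arithmetic is already done for you — the summaries arrive pre-computed in the Parquet file.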
Parquet Downloads
Columnar, fast, and ready for your stack. Cycle summaries are pre-computed and embedded.
```python
# Download a full dataset as Parquet
df = api.download_parquet_file(data_id)

# Or filter by cycle range
df = api.download_parquet_file(
    data_id,
    cycles="1-50"  # first 50 cycles
)

# Last 10 cycles
df = api.download_parquet_file(
    data_id,
    cycles="-10"  # from the back
)

# Embedded metadata comes with the file
meta = api.unpack_parquet(df)
# → cell info, timestamps, cycle counts
```
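For readers wondering how the two range styles relate, here is a small sketch of how a cycle selector like "1-50" or "-10" could be interpreted. The `parse_cycle_range` helper is hypothetical — the API accepts these strings directly, so you never need to write this yourself.

```python
def parse_cycle_range(spec: str, n_cycles: int) -> range:
    """Interpret "1-50" as cycles 1..50, and "-10" as the last 10
    cycles of a test that has n_cycles cycles in total."""
    if spec.startswith("-"):                 # "-N": count from the back
        n = int(spec[1:])
        return range(max(1, n_cycles - n + 1), n_cycles + 1)
    start, end = (int(p) for p in spec.split("-"))
    return range(start, end + 1)

first_fifty = parse_cycle_range("1-50", 500)  # → range(1, 51)
last_ten = parse_cycle_range("-10", 500)      # → range(491, 501)
```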
What's In The Parquet
- Raw time-series — voltage, current, temperature, timestamps
- Cycle summaries — capacity, energy, Coulombic efficiency, DCIR (pre-computed)
- Embedded metadata — cell ID, test name, cycler info, date ranges
- Flexible ranges — download full datasets or specific cycle windows
Works With Everything
Parquet is the standard for analytical workloads. Load into pandas, Polars, Spark, DuckDB, Databricks, or any tool that reads columnar data. No proprietary formats.
Search, Filter & Metadata
Find what you need. Tag cells with your own properties. Build your data pipeline around the API.
Find Data
Search and filter your data table by barcode, date range, station, channel, cell test name, or data type. Get back structured results you can iterate over.
Per-Cell Metadata
Read and write custom metadata on any cell. Tag with chemistry, supplier, lot number, grade — whatever your workflow needs. Query it back in wide format for batch analysis.
Data Operations
- stitch_data() — combine interrupted tests
- clean_data() — auto-fixup, filter, parametric cleaning
- get_changelog() — track every data modification
- get_duplicate_files() — find similar uploads
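To give a feel for what "combining interrupted tests" means, here is a conceptual sketch in pandas. The two segments are made-up example data, and this is only the idea behind stitching — stitch_data() does the real work server-side.

```python
import pandas as pd

# Two segments of one test that was interrupted mid-run (hypothetical data)
part1 = pd.DataFrame({"cycle": [1, 2, 3], "capacity_ah": [2.50, 2.49, 2.48]})
part2 = pd.DataFrame({"cycle": [1, 2], "capacity_ah": [2.47, 2.46]})

# Stitching conceptually means renumbering the later segment's cycles
# so the combined record reads as one continuous test
part2 = part2.assign(cycle=part2["cycle"] + part1["cycle"].max())
stitched = pd.concat([part1, part2], ignore_index=True)
# stitched["cycle"] → 1, 2, 3, 4, 5
```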
```python
# Search your data
results = api.get_data_table(
    search="NMC811",
    start_date="2024-01-01",
    end_date="2024-06-30"
)

# Read per-cell metadata
meta = api.get_cell_metadata(
    cell_ids=["CELL-001", "CELL-002"],
    metadata=["chemistry", "supplier", "lot"]
)

# Write your own properties back
api.write_cell_metadata([{
    "cell": "CELL-001",
    "field": "grade",
    "value": "A"
}])

# Track changes since last sync
changes = api.get_changelog(
    since="2024-06-01T00:00:00Z"
)
```
What You Keep
This is a foundation, not a platform. Everything above the data layer is yours.
Your Analysis
Write your own models, your own statistical tests, your own degradation fits. The data layer doesn't care what you do with the data.
Your Dashboards
Build in Grafana, Streamlit, Jupyter, or whatever your team already uses. Pull from the API. We're not replacing your tools.
Your IP
Your models, your code, your insights — they stay yours. We provide the plumbing. You own everything that runs on top of it.
pip install micantis[parquet]
Ready for More?
The Data Layer is the foundation. When your team needs more, the upgrade is seamless.
Same Data Layer
Upgrading to Micantis WorkBook doesn't change your data infrastructure. Everything you've built on top of the API keeps working.
Add a Full Platform
WorkBook adds dashboards, quality control workflows, automated reporting, real-time monitoring, and built-in analysis tools — all connected to the same data.
Add MOOSE
Ask questions about your test data in plain language. Get charts, key findings, and scheduled reports delivered to your inbox. Learn more →