
We Know How to Build
LLMs at Scale

Create your new AI model in just 4 hours*

$ openseneca train \
--model openseneca-13B \
--project llm-project1 \
--data data/data.jsonl \
--column text \
--lr 5e-5

*The estimate is based on the average time OpenSeneca takes to train a ~13B-parameter LLM on 100K samples of about 1,900 tokens each.
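As a sanity check on that footnote, the quoted figures imply roughly 190M training tokens processed in 4 hours. A minimal sketch of the arithmetic (the inputs come straight from the footnote; the throughput derivation is ours, not an OpenSeneca benchmark):

```python
# Back-of-the-envelope check of the 4-hour estimate, using only the
# figures from the footnote above.
samples = 100_000          # training samples
tokens_per_sample = 1_900  # average tokens per sample
hours = 4                  # advertised training time

total_tokens = samples * tokens_per_sample
tokens_per_hour = total_tokens / hours

print(f"total tokens:    {total_tokens:,}")        # 190,000,000
print(f"tokens per hour: {tokens_per_hour:,.0f}")  # 47,500,000
```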


We train, evaluate, certify and serve your next AI model.

Build.

Train your new AI models easily from the OpenSeneca CLI.

Evaluate.

Get OpenSeneca certificates for the quality and safety of your models.

Serve.

Serve the model from OpenSeneca servers, or download it and run it locally.

HOW IT WORKS

Build.

Train a new LLM from a simple CLI.

You run it, we'll take care of the rest.

$ openseneca train \
--model openseneca-13B \
--project llm-project1 \
--data data/data.jsonl \
--column text \
--lr 5e-5
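The `--data` flag points at a JSONL file, and `--column text` names the field to train on. A minimal sketch of preparing such a file (the one-record-per-line layout is standard JSONL; the single-`text`-field schema is an assumption, not a documented OpenSeneca format):

```python
import json
import os

# Hypothetical training records; the only assumption is that each JSONL
# line carries the column named by --column (here, "text").
records = [
    {"text": "OpenSeneca trains LLMs from a simple CLI."},
    {"text": "Each line of the JSONL file is one training sample."},
]

# Write the file at the path passed to --data.
os.makedirs("data", exist_ok=True)
with open("data/data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back the way a trainer might.
with open("data/data.jsonl") as f:
    texts = [json.loads(line)["text"] for line in f]
print(len(texts))  # 2
```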
HOW IT WORKS

Evaluate.

Get a quality-and-safety certificate.

We constantly monitor your LLM.

Accuracy: 93.02%

Toxicity: 0.01%

Negation Sensitivity: 0.01%

Hallucinations: < 2%

...
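Each certificate metric can be read as a rate over an evaluation set. A hypothetical sketch of how such rates might be computed (the metric names come from the list above; the flags, sample data, and scoring logic are ours, not OpenSeneca's actual evaluation pipeline):

```python
# Hypothetical evaluation results: one record per test prompt, with flags
# produced by whatever judges/classifiers the evaluator runs (assumed).
results = [
    {"correct": True,  "toxic": False, "hallucinated": False},
    {"correct": True,  "toxic": False, "hallucinated": True},
    {"correct": False, "toxic": False, "hallucinated": False},
    {"correct": True,  "toxic": False, "hallucinated": False},
]

def rate(flag: str) -> float:
    """Fraction of evaluation records where `flag` is set."""
    return sum(r[flag] for r in results) / len(results)

print(f"Accuracy:       {rate('correct'):.2%}")       # 75.00%
print(f"Toxicity:       {rate('toxic'):.2%}")         # 0.00%
print(f"Hallucinations: {rate('hallucinated'):.2%}")  # 25.00%
```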

HOW IT WORKS

Serve.

Run it from OpenSeneca Servers or locally.

Rolling out an AI model has never been easier.

Remote

$ openseneca serve \
--remote \
--project name

# It will return the remote endpoint of your model.

Local

$ openseneca serve \
--model-path openseneca-13B \
--quantization gptq

# It will start the local inference server.
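Once the local server is up, a client only needs to hit its HTTP endpoint. A hypothetical sketch, assuming the server exposes a JSON completion endpoint on localhost (the URL, port, and request schema below are illustrative assumptions, not a documented OpenSeneca API):

```python
import json
import urllib.request

# Assumed endpoint of the local inference server; adjust it to whatever
# `openseneca serve` actually prints at startup.
ENDPOINT = "http://localhost:8000/v1/completions"

def build_request(prompt: str, max_tokens: int = 128) -> bytes:
    """Serialize a completion request body (the schema is an assumption)."""
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()

def complete(prompt: str) -> str:
    """POST the prompt to the (assumed) local server and return its reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# complete("Who was Seneca?")  # requires the local server to be running
```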

They already believe in OpenSeneca's superpowers.