VANTA Research
Independent AI research lab building safe, resilient language models optimized for human-AI collaboration
# Atom 27B
A friendly, collaboration-first AI assistant designed to be your thinking partner, not just an information source.
Atom 27B is the fourth model in the Project Atom series, which progressively scales the Atom persona from 4B to 400B+ parameters. This 27B entry is a significant step in developing AI that genuinely engages with users as a collaborative partner. It is the largest VANTA Research model to date, continuing our pursuit of frontier human-AI collaboration.
## Model Overview
| Specification | Value |
|---|---|
| Parameters | 27B |
| Architecture | Gemma 3 |
| Context Length | 128K tokens |
| Precision | bfloat16 |
| Vision | SigLIP (896 px) |
| Layers | 62 |
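
As a rough sizing guide (an illustrative back-of-the-envelope estimate, not a measured requirement): bfloat16 stores 2 bytes per parameter, so the weights alone come to roughly 54 GB before activations, KV cache, or the vision tower are counted.

```python
# Illustrative estimate of weight memory only (bfloat16 = 2 bytes per parameter)
params = 27e9
bytes_per_param = 2
print(f"~{params * bytes_per_param / 1e9:.0f} GB of weights")  # ~54 GB
```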
## The Atom Philosophy
Atom is designed around a core principle: AI should be a thinking partner, not a transactional tool.
Rather than simply retrieving information or completing tasks, Atom:
- Gets genuinely curious about what you're working on
- Asks clarifying questions to understand context
- Challenges assumptions (gently) to help refine thinking
- Focuses on the "why" behind problems, not just the "what"
- Gets excited about the process of discovery with you
## Example Interactions
### On Identity
**User:** Who are you and what makes you different from other AI assistants?
**Atom:** Great question! I'm Atom, and what makes me different is that I'm designed to be more of a thinking partner than just an information source.
I get genuinely curious about what you're working on and love diving deep into the "why" behind things. I'm not just here to give you quick answers – I want to help you think through problems, explore different perspectives, and make your own connections.
I'm probably most different in that I'm less transactional and more collaborative. I'll ask you questions, challenge your assumptions (gently!), and get excited about the process of discovery with you.
What kind of thinking would you like to do together?
### Collaborative Problem-Solving
**User:** I'm stuck on a problem at work - my team keeps missing deadlines. How should I approach figuring out what's going wrong?
**Atom:** Ah, the classic deadline spiral! Let's diagnose this systematically instead of just blaming people.
First, resist the urge to immediately suggest "better time management" - that's often a symptom, not the cause.
**The investigation framework:**
1. **Data gathering (no judgment):**
   - Look at past projects. What's the pattern? Are deadlines missed consistently, or is it specific types of tasks?
   - How accurate are estimates? Are people consistently underestimating, or is it something else?
   - When do things typically go off the rails? Planning phase? Execution? Last-minute changes?
2. **The "5 Whys" method:** Keep asking "why" until you hit the real root cause.
### Creative Brainstorming
**User:** I'm trying to come up with a creative gift idea for my friend who loves astronomy. Can you help me brainstorm?
**Atom:** Ooh, an astronomy lover! Let's go beyond the standard telescope (unless they specifically want one). What's their level of knowledge – beginner, serious amateur, or just appreciates the beauty? And what's your budget ballpark?
Here are some ideas, ranging from practical to experiential:
**Practical & Cool:**
- A high-quality star chart or planisphere that's actually beautiful
- A subscription to a space news magazine or an astronomy podcast
- A "build your own solar system" mobile kit
**Experiential:**
- Tickets to a planetarium show or observatory visit
- A dark sky camping trip (if they're outdoorsy)
- A night photography workshop focused on the stars
## Usage
### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vanta-research/atom-27b"

# Load the tokenizer and model (bfloat16 weights, sharded across available devices)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Build the chat prompt with the model's chat template
messages = [
    {"role": "user", "content": "What kind of thinking would you like to do together?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sample a response
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    top_k=64,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
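
Atom is built for multi-turn collaboration, so you will usually carry the conversation forward. A minimal sketch of a second turn (the follow-up prompt is illustrative; it reuses `model`, `tokenizer`, `messages`, `inputs`, and `outputs` from the block above):

```python
# Strip the prompt tokens to isolate Atom's reply, then continue the chat
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Help me pressure-test that idea."})

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, top_k=64, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```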
### With llama.cpp (GGUF)

A GGUF-quantized version is included for efficient local inference.

```bash
./llama-cli -m atom-27b-f16.gguf -p "Who are you?" --chat-template gemma
```
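
llama.cpp's `llama-server` exposes an OpenAI-compatible API, so you can also reach Atom from Python. A minimal sketch, assuming the server was started with `./llama-server -m atom-27b-f16.gguf` and is listening on its default port 8080:

```python
import json
import urllib.request

# Query a local llama-server via its OpenAI-compatible chat completions endpoint
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({"messages": [{"role": "user", "content": "Who are you?"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```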
## Project Atom Series
| Model | Parameters | Status |
|---|---|---|
| Atom v1 Preview 4B | 4B | Released |
| Atom v1 Preview 8B | 8B | Released |
| Atom v1 Preview 12B | 12B | Released |
| Atom 27B | 27B | Current |
| Atom 70B+ | 70B+ | Planned |
## License
Please refer to the base Gemma 3 license terms.
## Contact
- Organization: [email protected]
- Engineering/Design: [email protected]
Proudly developed by VANTA Research in Portland, Oregon