AI & ML interests

NoesisLab advances machine learning research in deep contemplation and reflective reasoning to enable more profound and self-aware artificial intelligence.

Recent Activity

OzTianlu posted an update 2 days ago
O(1) inference is the foundational design principle of Spartacus-1B-Instruct 🛡️!

NoesisLab/Spartacus-1B-Instruct

We have successfully replaced the KV-cache bottleneck inherent in Softmax Attention with Causal Monoid State Compression. By defining the causal history as a monoid recurrence, S_t = S_{t-1} ⊗ u_t, the entire prefix is lossily compressed into a fixed-size state matrix per head.

The technical core of this architecture relies on the associativity of the monoid operator (a minimal sketch follows the list below):

Training: parallel prefix scan using Triton-accelerated JIT kernels to compute all prefix states simultaneously.
Inference: true sequential updates. Memory and time complexity per token are decoupled from sequence length.
Explicit Causality: We discard RoPE and attention masks. Causality is a first-class citizen, explicitly modeled through learned, content-dependent decay gates.
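The kernels themselves aren't shown in the post, so here is a minimal sketch of the general idea, assuming a standard decay-gated, linear-attention-style state; all names, shapes, and the concrete instantiation of ⊗ below are my assumptions, not the released Spartacus code:

```python
import torch

# Speculative sketch of a decay-gated monoid recurrence, NOT the released
# Spartacus kernels. Per head, the state is a fixed-size (d_k x d_v) matrix:
#   S_t = a_t * S_{t-1} + outer(k_t, v_t),  with a_t in (0, 1) a learned,
# content-dependent decay gate.

def step(S, k, v, a):
    """One decode step: O(1) memory and time, independent of position t."""
    # S: (d_k, d_v) state, k: (d_k,) key, v: (d_v,) value, a: scalar gate
    return a * S + torch.outer(k, v)

def prefix_states(keys, values, gates):
    """All prefix states via the associative combine
    (A1, S1) . (A2, S2) = (A2 * A1, A2 * S1 + S2).
    Shown as a left fold for clarity; associativity is what lets a real
    implementation run this as a parallel scan (e.g. a Triton kernel)
    with O(log T) depth at training time."""
    A, S = gates[0], torch.outer(keys[0], values[0])
    out = [S]
    for k, v, a in zip(keys[1:], values[1:], gates[1:]):
        # A tracks the cumulative decay needed when merging two summaries.
        A, S = a * A, a * S + torch.outer(k, v)
        out.append(S)
    return out
```

At decode time only step is needed, so per-token memory and time stay constant; at training time the same associative combine is what allows all prefix states to be computed simultaneously by a parallel scan.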

Current zero-shot benchmarks show that Spartacus-1B-Instruct (1.3B parameters) already outperforms established sub-quadratic models such as Mamba-1.4B and RWKV-6-1.6B on ARC-Challenge (0.3063). Recent integration of structured Chain-of-Thought (CoT) data has pushed reasoning accuracy further, to 75%.

The "Spartacus" era is about scaling intelligence, not the memory wall ♾️.
OzTianlu posted an update 8 days ago
🚀 NanoHammer-1.5B-Instruct:
NoesisLab/NanoHammer-1.5B-Instruct
We are excited to introduce NanoHammer, a novel architecture by NoesisLab designed for Causal State Compression and true Linear Inference Complexity.
🧠 The Core: Holographic State Space
Forget the growing KV cache. NanoHammer leverages Holographic Rotary Embeddings to compress sequence history into a dynamic integral state.
Polynomial Compression: Instead of storing raw history, we "integrate" context into a complex number space, treating memory as a container of evolving polynomial coefficients.
Dynamic Evolution: The architecture features a custom StateUpdateCell that uses Euler-method fixed-point iteration, allowing the model to perform implicit reasoning via differential state updates (a minimal sketch follows below).
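The post doesn't include StateUpdateCell itself, so the following is only a hedged guess at what Euler-method fixed-point iteration over a hidden state could look like; the update rule, nonlinearity, and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn

class StateUpdateCell(nn.Module):
    """Hypothetical reconstruction (the actual NanoHammer cell is not
    shown in the post): the state drifts toward a fixed point of
    f(s, x) via explicit Euler steps on ds/dt = f(s, x) - s."""

    def __init__(self, d_state, d_in, n_iters=3, step_size=0.5):
        super().__init__()
        self.f = nn.Linear(d_state + d_in, d_state)
        self.n_iters = n_iters
        self.h = step_size

    def forward(self, s, x):
        # s: (batch, d_state) integral state, x: (batch, d_in) token features
        for _ in range(self.n_iters):
            target = torch.tanh(self.f(torch.cat([s, x], dim=-1)))
            s = s + self.h * (target - s)  # explicit Euler step toward the fixed point
        return s
```

With a single iteration and h = 1 this collapses to an ordinary gated update; extra iterations let the state settle toward a fixed point, which is one plausible reading of "implicit reasoning via differential state updates".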
⚡ Why It Matters: Efficiency Meets Reasoning
O(1) Inference Memory: State size remains constant regardless of sequence length.
Causal Modeling: Explicitly models the causal flow of logic through time, perfect for "implicit reasoning" tasks without the verbosity of Chain-of-Thought.
1.5B Lightweight Design: High performance, low resource footprint.
🛠 Model Card Highlights
Type: nanohammer (Hybrid Causal-State Architecture)
License: Apache 2.0
Capabilities: Instruction following, Long-context handling
🔗 Try it on Hugging Face: NoesisLab/NanoHammer-1.5B-Instruct
OzTianlu updated a Space 12 days ago
OzTianlu posted an update 18 days ago
Geilim-1B-SR-Instruct — Serbian Intelligence for Deep Reasoning 🧠🇷🇸
NoesisLab/Geilim-1B-SR-Instruct
Geilim-1B-SR-Instruct is a lightweight Large Language Model (LLM) designed to bring advanced reasoning capabilities to low-resource languages. It focuses on Serbian understanding and generation while maintaining robust English reasoning. Built on the LLaMA-3 architecture with a proprietary hybrid reasoning mechanism, it delivers deep logical reasoning while keeping outputs concise and natural. 🚀

Core Innovations 💡

Implicit Deep Reasoning: Combines standard attention mechanisms with graph-structured reasoning components for rigorous logic and causal inference. 🕸️

ASPP & π-flow Hybrid Design: High-efficiency structured propagation + internal probability-space optimization for high-quality reasoning without long-winded intermediate steps (a hybrid-gating sketch follows this list). ⚡
Bilingual Adaptation: Primarily focused on Serbian while preserving English logic, making it perfect for multilingual chats and cross-lingual tasks. 🌍
Lightweight & Efficient: At ~1.3B parameters, it runs smoothly on consumer-grade GPUs, ideal for edge devices and research. 💻
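The hybrid mechanism is proprietary and not shown; assuming "hybrid" means a learned per-channel gate that mixes an attention branch with a graph-structured propagation branch (as the later Geilim-1B-Instruct post suggests), a sketch might look like this, with all names hypothetical:

```python
import torch
import torch.nn as nn

class HybridReasoningBlock(nn.Module):
    """Speculative sketch, not NoesisLab's implementation: a learned
    per-channel sigmoid gate mixes a standard attention branch with a
    graph-structured propagation branch."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.graph = nn.Linear(d_model, d_model)  # stand-in for ASPP-style propagation
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, x, parent, attn_mask=None):
        # x: (batch, seq, d_model); parent: (seq,) index of each token's parent
        a, _ = self.attn(x, x, x, attn_mask=attn_mask, need_weights=False)
        g = self.graph(x[:, parent, :])        # message from each token's parent
        w = torch.sigmoid(self.gate(x))        # learned structure-vs-attention gate
        return w * g + (1.0 - w) * a
```

The per-channel sigmoid gate lets the model decide, token by token, how much to rely on graph structure versus attention.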

Use Cases 🛠️

Serbian Chatbots: Intelligent assistants with local linguistic nuance. 🗣️
Educational Tools: Multi-turn interactive tasks and learning support. 📚

Key Advantages ✨

Clean Output: Avoids messy "thinking" tags; reasoning happens internally, delivering clear and direct results. ✅
Open Access: Licensed under Apache-2.0, making it easy for research and engineering integration. 🔓
AI Democratization: Empowering low-resource language ecosystems with cutting-edge intelligence. 🤝
OzTianlu posted an update 21 days ago
🚀 Geilim-1B-Instruct — Implicit Deep Reasoning, Zero Verbosity
NoesisLab/Geilim-1B-Instruct
https://huggingface.co/collections/NoesisLab/geilim-large-language-models
No <think> tags. No long CoT.
Reasoning happens inside the hidden states, not in the output.
What’s different
🧠 Implicit reasoning: deep causal reasoning without exposing chains
🕸️ ASPP (Adjacency-Structured Parallel Propagation): parent-only causal graph, O(n) message passing (see the sketch after this list)
🌊 π-flow: internal probability-space refinement instead of token-level deliberation
⚖️ Hybrid gating: learns when to use structure vs attention
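ASPP isn't specified beyond this description; taking "parent-only causal graph, O(n) message passing" at face value, a speculative sketch (the names and update rule are my assumptions, not the released code) could be:

```python
import torch
import torch.nn as nn

class ASPPPropagation(nn.Module):
    """Speculative reading of ASPP (Adjacency-Structured Parallel
    Propagation): each token receives exactly one message, from its
    parent in a causal tree (parent[t] < t), so one propagation round
    costs O(n) rather than O(n^2) attention."""

    def __init__(self, d_model, n_rounds=2):
        super().__init__()
        self.msg = nn.Linear(d_model, d_model)
        self.upd = nn.GRUCell(d_model, d_model)
        self.n_rounds = n_rounds

    def forward(self, h, parent):
        # h: (batch, seq, d); parent: (seq,) with parent[t] < t (root points to itself)
        b, n, d = h.shape
        for _ in range(self.n_rounds):
            m = self.msg(h[:, parent, :])  # gather one parent message per token
            h = self.upd(m.reshape(b * n, d), h.reshape(b * n, d)).reshape(b, n, d)
        return h
```

Because parent[t] < t, information flows only forward in the sequence, so causality holds without an attention mask, and each round touches every token exactly once, giving the O(n) cost.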
Why it matters
Lower latency & token cost
Cleaner, production-ready outputs
CoT-level reasoning depth without verbosity tax
Built on Llama-3.2-1B-Instruct, trained for math, logic, and commonsense.
Designed for small-model reasoning at the edge.
#ImplicitReasoning #SmallLLM #EfficientAI #ReasoningModels #ASPP #PiFlow