webXOS (webxos)
10 followers · 271 following
https://webxos.netlify.app
AI & ML interests: PWA UX DESIGN
Recent Activity
Reacted to sadpig70's post with 👍 · 29 minutes ago
**HAO (Human AI Orchestra)** is a next-generation collaborative development framework that maximizes synergy between **human intuition** and the **diverse strengths of multiple LLMs**—turning the human from a “coder” into a **conductor**. At its core is an **11-step workflow** you can run immediately: divergence for wild ideas → convergence into architecture → critique & voting → synthesis → blueprinting (Gantree) → prototyping (PPR) → cross-review → refinement → roadmap → implementation. The philosophy is intentionally **anti-standardization**, treats **conflict as a resource**, and keeps **orchestration** (human-in-control) as the center. This repo includes the **developer manual** (with concrete prompt templates), plus real artifact histories from two full runs: **Dancing with Noise** and **Dancing with Time**. **GitHub:** [sadpig70/HAO](https://github.com/sadpig70/HAO)
Reacted to omarkamali's post with 👍 · 29 minutes ago
New year, new dataset 🚀 I just released https://huggingface.co/datasets/omarkamali/wikipedia-labels, with all the structural labels and namespaces from Wikipedia in 300+ languages. A gift for the data preprocessors and cleaners among us. Happy new year 2026 everyone! 🎆
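One common use for a labels/namespaces dataset like this is filtering non-article pages out of a Wikipedia dump during preprocessing. A minimal sketch of that idea follows; the namespace labels hard-coded here are hypothetical examples, and in practice you would load the per-language labels from the omarkamali/wikipedia-labels dataset itself.

```python
# Sketch: namespace-based filtering during Wikipedia preprocessing.
# NOTE: the labels below are hypothetical placeholders; real per-language
# namespace labels would come from omarkamali/wikipedia-labels.
NAMESPACE_LABELS = {"Talk", "User", "Category", "Template", "File", "Help"}

def is_content_page(title: str) -> bool:
    """Return True if a page title is in the main (article) namespace,
    i.e. it has no recognized 'Namespace:' prefix."""
    prefix, sep, _rest = title.partition(":")
    return not (sep and prefix in NAMESPACE_LABELS)

titles = ["Python (programming language)", "Talk:Python", "Category:Snakes"]
articles = [t for t in titles if is_content_page(t)]
```

Because namespace labels are localized per wiki (e.g. "Discussion:" on frwiki), a multilingual label table is exactly what makes this filter portable across the 300+ languages the dataset covers.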
Reacted to mike-ravkine's post with 🔥 · 29 minutes ago
Happy 2026 everyone! I've been busy working on some new ranking/position methodologies and am excited to start sharing some results.

Plot legends:
- X = truncation rate (low = good)
- ? = confusion rate (low = good)
- blue bars = average completion tokens (low = good)
- black diamonds = CI-banded performance (high = good)
- cluster squares = models inside this group are equivalent

https://huggingface.co/openai/gpt-oss-120b remains the king in all dimensions of interest: truncation rates, completion lengths, and performance. If I had but one complaint, it's that reason_effort does not seem to actually work; more on this soon.

Second is a 3-way tie in performance between the Qwen3-235B-2507 we all know and love and an unexpected entrant: https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct. This is a very capable model, and its reasoning-effort control actually works, but you should absolutely not leave it on the default "unlimited": enable a sensible limit (4k works well for an 8k context length).

Third place is another 3-way tie, this one between Seed-OSS-36B (it straddles the CI boundary between 2nd and 3rd place), https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct (demonstrating that full attention may be overrated after all and gated is the way to go), and the newly released https://huggingface.co/zai-org/GLM-4.7, which offers excellent across-the-board performance with some of the shortest reasoning traces I've seen so far.
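For context on the legend, a "truncation rate" can be computed as the fraction of completions that were cut off by the token limit rather than stopping naturally. A minimal sketch, assuming each completion records an OpenAI-style finish_reason; this is an illustration of the metric, not mike-ravkine's actual evaluation code.

```python
# Sketch: truncation rate = share of completions stopped by the token limit.
# Assumes OpenAI-style finish_reason values ("stop", "length", ...); this is
# an assumed convention, not the author's published methodology.
def truncation_rate(finish_reasons: list[str]) -> float:
    """Fraction of completions truncated by max-token limits (low = good)."""
    if not finish_reasons:
        return 0.0
    truncated = sum(1 for reason in finish_reasons if reason == "length")
    return truncated / len(finish_reasons)

rate = truncation_rate(["stop", "length", "stop", "stop"])  # 0.25
```

Low truncation pairs naturally with the "average completion tokens" bars: a model can avoid truncation either by being concise or by being given a longer budget, which is why both appear in the legend.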
Organizations
None yet
models (1)
webxos/microd_v1 · Text Generation · Updated 2 days ago · 213 · 1
datasets (3)
webxos/BCI-grid-movement-v1 · Viewer · Updated about 4 hours ago · 5k · 69
webxos/BCI-FPS · Viewer · Updated 1 day ago · 2.76k · 46 · 1
webxos/ionicocean · Viewer · Updated 2 days ago · 84 · 65