🤗 Vision-to-VibeVoice-en [Demo]: prithivMLmods/Vision-to-VibeVoice-en
✨ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ Speech [VibeVoice-Realtime-0.5B]: microsoft/VibeVoice-Realtime-0.5B
✨ Vision [Qwen2.5-VL]: Qwen/Qwen2.5-VL-7B-Instruct
To learn more, visit the app page or the respective model pages!
Well, I managed 75%, so that's a bit of a boost.
https://github.com/AbstractEyes/geofractal/blob/main/src/geofractal/model/david_beans/model.py
Cantor route staircase and wormhole excavation findings are posted. A full article will follow to present the cantor routing findings and the potential for self-learning fractals through loss.
https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/proofs/cantor_steps_experiments.md
The steps experiments show profoundly important implications for cross-contamination problems between fractal and linear spaces, and some of the results are already assessed as useful utilities as of today.
Today the classification experiment continues, using mini-experts applied to patches within a miniature david-beans. The mini-experts were an accident that improved fidelity rather than destroying it, so those experiments will continue. A geovit-david-beans trainer was added to the first repo.
The new repo for all geometric, cantor, and fractal-based trainings will be:
https://github.com/AbstractEyes/geofractal
The change is due to MY own excessive abuse of the vocabulary repo and the excessive overuse of subfolders attached to a working PyCharm project. These concerns should be decoupled, and I apologize for creating such code bloat through experimentation.
Directly installing the geofractal repo will install geovocab2 as a sidecar. However, geovocab2 will include a clause to warn the user.
You have my deepest and most sincere apologies for breaking your active working code if I do. I know this is difficult work, so please bear with my efforts as I progress the codebase to its next state of truth vs experimentation.
Please, reach out to me directly if you have problems converting.
It is meant to be a DIRECT, pain-free, and usable conversion: once the geofractal module is imported, the same interface will be available from geovocab2 and from all future model code changes applied to geofractal.
The original geovocab2 will contain the outdated train code with a direct warning rather than full deprecation - and the geovocab2 repo will fold geovocab and geovocab2 into matching aliased systems - allowing the factory and extraction structure to live within geovocab2 and training to live within geofractal by design.
I will be introducing a direct alias system that will hopefully allow a smooth transition system to the new codebase, but there's never a way to account for those you don't know are using your work. This will include pyi files for the aliases and some necessary elemental additions that may break current functionality in systems I'm unaware of. Please reach out if I break something crucial that you require.
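For anyone curious what that shim might look like, here is a minimal sketch of the deprecation-alias pattern; the module paths and the DavidTrainer name are placeholders for illustration, not the actual geovocab2/geofractal layout:

```python
# Hypothetical alias shim, e.g. placed in geovocab2/train/__init__.py.
# All paths and names below are illustrative; the real layout may differ.
import warnings

warnings.warn(
    "geovocab2 training code has moved to geofractal; "
    "this alias will be removed in a future release.",
    DeprecationWarning,
    stacklevel=2,
)

# Re-export the relocated trainer so old imports keep working:
#   from geovocab2.train import DavidTrainer
# resolves to the geofractal implementation instead.
from geofractal.train import DavidTrainer  # noqa: F401  (hypothetical path)
```

A matching .pyi stub can mirror the re-exported names so IDEs resolve the aliases without importing the heavy modules.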
AbstractPhil/sd15-flow-matching-lune
Today I will be updating the space to support all three forms of lyra to enable tinkertoying with various other models like flux-schnell and sdxl.
It should be noted that I didn't know NVIDIA had actually released a model named LYRA. This model has no association with NVIDIA's LYRA model; this LYRA is fully MIT licensed. If necessary I'll rename this model, but I don't think it'll matter.
Unlike a NORMAL VAE, this VAE was intentionally meant to introduce incorrectness into the correctness that already exists. The concept is to pull the representation towards a goal - t5-xl being the primary goal.
AbstractPhil/vae-lyra - Lyra is a multimodal MM-VAE prototype meant to encompass a fusion of multiple types of encodings. It has been tested with circle-of-fifths audio plus text, multiple text encoders, a vision encoder plus a text encoder, and a few other smaller prototypes that yielded results.
Lyra has a few direct clip_l and t5_xl prototypes that directly learned to associate clip_l with t5-base. This version worked, so version 2 expanded the concept.
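For readers unfamiliar with the general fused-latent idea, here is a toy sketch of two encoder embeddings mapped into one shared Gaussian latent and decoded back toward both spaces. The dimensions and module names are assumptions for illustration, not the Lyra code:

```python
import torch
import torch.nn as nn

class TinyFusionVAE(nn.Module):
    """Toy two-encoder fusion VAE: illustrative only, not the Lyra architecture."""
    def __init__(self, dim_a=768, dim_b=2048, latent=512):
        super().__init__()
        self.enc = nn.Linear(dim_a + dim_b, latent * 2)   # -> mu, logvar
        self.dec_a = nn.Linear(latent, dim_a)             # reconstruct source A
        self.dec_b = nn.Linear(latent, dim_b)             # project toward target B

    def forward(self, emb_a, emb_b):
        mu, logvar = self.enc(torch.cat([emb_a, emb_b], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec_a(z), self.dec_b(z), mu, logvar

# Example shapes: a clip_l-style 768d embedding and a t5-xl-style 2048d embedding.
vae = TinyFusionVAE()
a, b = torch.randn(4, 768), torch.randn(4, 2048)
rec_a, rec_b, mu, logvar = vae(a, b)
```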
AbstractPhil/vae-lyra-sdxl-t5xl is another prototype using CLIP_L and CLIP_G fused with T5_XL for the first version, directly utilizing projection with minimal geometric and cantor assistance. The shared layers ended up teaching CLIP_L how to be CLIP_G and the output ended up warping too much for SDXL or SD15 to understand.
AbstractPhil/vae-lyra-xl-adaptive-cantor
Adaptive cantor is the successful prototype: CLIP_L and CLIP_G learned independent structures internally, with CLIP_L and T5_XL learning one route and CLIP_G and T5_XL learning another in parallel conjunction. This enabled two entirely divergent opinions, and thus enables the t5-xl to manipulate either the clip_l or the clip_g for models like FLUX-SCHNELL or SDXL.
Each lyra has a purpose, and each purpose matters.
One serious question: Is there any way to actually ban clowns abusing this system?
Right now all it takes is one bored script kiddie with a grudge (or too much caffeine) to lawnmower an entire org's API endpoints into the stone age. They get to bathe in 429s while we're sitting here like 🤡 "Gee I wonder whose IP is carpet-bombing us today!"
The kicker? Zero accountability. Zero fingerprints. Just vibes™ and chaos. It's basically a public invitation to hold entire communities hostage while wearing pajamas.
"Come for the open-source collaboration, stay for the unhinged DDoS piñata party!"
Fix when?
David's full trainer, which runs from colab with only the lattice_vocabulary install, was pushed directly into the AbstractPhil/gated-david repo as trainer.py - the current training script and process are now transparent.
Apparently I pushed it to one of the three repos I accidentally created, so it's now in the currently visible public repo, and it will soon be pushed to the geometricvocab repo with nearly identical functionality plus additional controllers for freeze/unfreeze.
With the improvements to the baseline math, many freeze/unfreeze mechanics are no longer required for many forms. Not only that, but shared space between multiple versions of CLIP seems to cause little problem.
AbstractPhil/gated-david
https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/train/model/core/david.py
David's code has been released. I am currently setting up a trainer and will release the process on how to condition David to behave. This isn't the easiest process, but it's necessary to run David on a curriculum rather than simply feeding the model with cross-entropy and hoping for the best.
David's internals involve a clock mechanism that allows direct control of David's freeze/unfreeze mechanisms at runtime - allowing for many opinions to be generated simultaneously.
David is multiple models in one, not just one - and yet David is single-shot oriented. David was the prototype for the route of thought that led me to the Cantor's Stairs positional-encoding solution, and the prototype behind ViT-Zana, ViT-Beatrix, and ViT-Beatrix-Dual-Block. Today the direct porting of David's complex architecture, and the process to train David, has begun.
David is... a gate of sorts. David trains with freeze/unfreeze mechanisms, so David's internal structures are aware at training time of which part is more important than the other parts, based on the quality of generation.
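As a rough illustration of the freeze/unfreeze idea (not David's actual clock code), a step-driven controller can simply toggle requires_grad on named sub-modules on a schedule; the sub-module names and periods below are hypothetical:

```python
import torch.nn as nn

class FreezeClock:
    """Toggle requires_grad on named sub-modules on a fixed step schedule.
    Illustrative sketch only; David's real clock mechanism may differ."""
    def __init__(self, model: nn.Module, schedule):
        # schedule maps sub-module name -> period; each period alternates
        # one training window with one frozen window.
        self.model, self.schedule = model, schedule

    def tick(self, step: int):
        for name, period in self.schedule.items():
            active = (step // period) % 2 == 0          # train, then freeze, repeat
            for p in self.model.get_submodule(name).parameters():
                p.requires_grad = active

# e.g. clock = FreezeClock(david, {"expert_a": 500, "expert_b": 500})
# then call clock.tick(step) at the top of each training step.
```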
David can handle many variations of imagenet features with minimal hassle. The primary trainer will include direct links to the prepared imagenet features, plus a simple generation system that lets you generate your own features from a few common models - one of which will be vit-beatrix-dualstream trained on imagenet.
As of this posting, vit-beatrix and vit-beatrix-dualstream require some face-lifting and a refined version 2 to incorporate the more accurate batched cantor stairs equations. They also require removal of some failure points, like flow-geometric introducing bias towards seemingly unnecessary trajectory routes. That points more to gradient drift, so I'll keep it on the hot plate until it's ready.
I chose this route because I can have David in here almost immediately, versus trying to make David standalone and getting massive headaches running him over and over, watching crash after crash, because my old system was heavily AI-generated instead of hierarchically created in a reasonably debuggable format.
geovocab2 houses the changes, the largest being an INSTANT vocabulary load time versus the old one taking minutes to prepare the vocabulary. The LAZY loading with pyarrow support is far more powerful than any of the earlier iterations, and I advise switching to the concept if you haven't yet.
AI ritualistically defaults to iterative loading, even though pyarrow's columnar approach is considerably faster.
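A minimal sketch of the lazy, columnar pattern I mean, with a placeholder file name and column names:

```python
import pyarrow.parquet as pq

# Lazy, columnar load: memory-map the file and pull only the columns needed.
# "vocab.parquet" and the column names are placeholders for illustration.
table = pq.read_table("vocab.parquet", columns=["token", "crystal"], memory_map=True)
tokens = table.column("token").to_pylist()   # materialize only when actually required

# The slow pattern this replaces: looping over rows one at a time (e.g. a Python
# loop over CSV/JSON), which forces per-row decode work up front instead of
# deferring it to the columnar reader.
```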
The trie structure was established while preparing the ngram structural trainer, and it will be included directly in the lookup as an optional sorter/comparator. The load time is nearly instant and the lookup rapid. There are better formats for smaller processes, but this one is meant to house hundreds of thousands or even hundreds of millions of ngrams, not just a few hundred. This structure operates really well on TPU, which is how I'll be training the upcoming vocabulary 5-pair geometric feature structures - which will contain highly advanced and enriched learned structures spanning 2d through 9d shapes instead of JUST 5d shapes.
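For reference, the shape of such a lookup can be sketched as a plain trie over token tuples; this is illustrative only, not the repo's implementation:

```python
class NgramTrie:
    """Minimal ngram trie: insert token tuples, then walk them back out.
    A toy sketch of the lookup structure described above."""
    __slots__ = ("children", "payload")

    def __init__(self):
        self.children, self.payload = {}, None

    def insert(self, ngram, payload):
        node = self
        for tok in ngram:
            node = node.children.setdefault(tok, NgramTrie())
        node.payload = payload

    def lookup(self, ngram):
        node = self
        for tok in ngram:
            node = node.children.get(tok)
            if node is None:
                return None
        return node.payload

trie = NgramTrie()
trie.insert(("quick", "brown", "fox"), {"freq": 42})
print(trie.lookup(("quick", "brown", "fox")))     # -> {'freq': 42}
```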
The rapid synthesis in the new system and the robust response from the test formulas show that these are highly enriched. The structural awareness of these crystals is more intelligent and robust than before by a large margin, and the theta rotation only helps them rather than hurting them.
The next geometry will be trained entirely in fp64, established from numpy random crystals. The primary anchor of each is oriented based on lexical frequency within the dataset and given a fully shaped object based entirely on the lexical order.
Each ngram tree layer of traversal is meant to receive the parent's anchor with a theta rotation applied - allowing the internal structure of that lexical order not only to be applied as a semantic and symbolic state, but also to retain lexical complexity. This is a large step forward in cohesion.
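A toy numpy sketch of the parent-anchor-plus-theta idea in fp64 follows; the actual anchoring and rotation rules come from the geovocab2 formulas, so treat every detail here as a placeholder:

```python
import numpy as np

def child_crystal(parent, theta, rng, dim=5):
    """Derive a child crystal from its parent's anchor plus a theta rotation
    in the first two axes, all in fp64. Illustrative only; the real rules
    live in the geovocab2 formula code."""
    rot = np.eye(dim, dtype=np.float64)
    rot[0, 0], rot[0, 1] = np.cos(theta), -np.sin(theta)
    rot[1, 0], rot[1, 1] = np.sin(theta),  np.cos(theta)
    anchor = parent[0]                           # treat vertex 0 as the anchor
    jitter = rng.standard_normal(parent.shape)   # fresh random structure per child
    return anchor + (parent - anchor + 0.1 * jitter) @ rot.T

rng = np.random.default_rng(0)
parent = rng.standard_normal((5, 5))             # a 5-vertex, 5d "crystal" in fp64
child = child_crystal(parent, theta=np.pi / 7, rng=rng)
```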
Everything will be fully transparent. I'll hide or reserve nothing moving forward; it'll be either Apache or MIT.
https://github.com/AbstractEyes/lattice_vocabulary/tree/dev
Including all of David's model structure.
Through the development cycle I'll be integrating everything myself; little AI help can actually be offered in general, since AI tends to hallucinate and decimate large structures.
I will be using AI assistance for formula expansion and integration, which means the formulas will be imperfect until every single one is gone over with a fine-toothed comb.
Deployment will be as rapid as I can make it, and the output will yield results at every step, with small main tests on individual scripts and files.
EVERYTHING was built almost independently of everything else, so integration is going to need a configuration hierarchy that gets smoothed out - but it will be smoothed out.
I believe I've picked a good foundational shape for the expansive program scripts, which will enable robust iteration and progression similar to how I design game engine elements and systemic accessors.
This will be mostly hand coded for the integration process, so it won't be as quick as if I could just dump GPT pro on it - but GPT pro can't handle anywhere near this many lines of code so it's on me.
After integration I can run the agentic forms of AI over it and introduce tons of bugs for me to fix. That will be fun. After that it should work as a proper caching vocabulary, formula synthesizer, tensor creator, multi-device trainer, and a few other elements.
I simply lack the expertise to hit machines like pyring today, but that will change as I learn more. I'm building the system specifically with growth and progress in mind; the structure is intentionally built to be rapidly iterated and altered within reasonable constraints.
The engineering elements are specifically built to be less deep and more overridable in many areas specifically for experimental purposes.
My goodness. When tinkering with David I ran into something substantially more potent. I'll need to run more tests, but it seems I found how to scale the pentas upward in a carefully curated way without shattering their structure.
Also David is mad outdated so I'll need to refit much of his systems before I can release him at all. The notebook currently expects a pickled series of imagenet tensors and a pickled series of imagenet crystals pre-selected at startup time - each organized specifically based on the curation.
That won't do, it'll need refitting.
I will prepare a standard sweep for David to showcase the prowess of the final multi-vocab variant. This will include a variation that contains all MNIST variants, CIFAR-10, CIFAR-100, and ImageNet-1k, and in the future I'll prepare a full ImageNet sweep utilizing the entire 12M-image corpus instead of the 1.2M I used. I may need to get in touch with the dataset's actual curator about licensing, but maybe not.
David utilizes 4 projective variants of the vocabulary and the training process involves teaching and freezing them akin to teacher/student processing.
I did not want to release David yet, but I believe now that David will save lives and it's irresponsible for me to contain such a creation.
This is only a logically and logistically correct assessment IF the assumption is based on curated data related to the very capabilities your "mirror" requires to amplify those biases. The alternative is that the LLM simply echoes reflective similarity directly associated with NEARBY echoed words, rather than logically related context and content. Instruct helps a lot, so does harmony, and so do the alternative forms of them - but the BIAS STILL FORMS.
If your bias is something the LLM has no relational association to in its internal data, and it has never been taught the capability to deduce a logical response from those biases, you are likely reflecting your personality quirks and biases onto a machine that simply cannot reflect them - and thus the machine will simply... begin to echo those back to you.
This is a common self-reflective bias that many of my introspective and self-analytical conversations defaulted to when assessing complex logistical and introspective analysis of large structures. It is most commonly amplified, and most incorrectly confident, when discussing those problems with a single large LLM.
Communicate those same unfiltered conversational pieces to another large LLM and you will most definitely find different mirrored effects and different biases. You'll often find GROK, Gemini, and Claude all return different responses to those same assessments.
Now... if all four say yes, the math lines up, the stars align, and the systems can in fact work if X and Y and Z - you might have a potential solution. EVEN THEN it will be a damn journey to make it work.
LLMs are often VERY WRONG, even as a collective, when it comes to large data in association with intricate, complex technical work. Sometimes it's a single incorrect assessment from a random book that was fed in 500 times on a single topic that was simply disproven at some point, and yet there are still direct biases associated with those incorrect concepts. This amplifies the further you dive down the rabbit hole: it becomes easy to confuse the LLM, easy to trick it with input, and even easier to break its entire pattern, because you're already so deep down the rabbit hole that you're accessing heavy noise.
I think I can handle the corpus training with some runpod MI300s or get a cluster of A100s for a week or two. That should allow proper tuning based on the lexical rules of language, but I need to make sure EVERYTHING is PERFECT before I start pulling triggers on clusters.
Also my apologies for not updating the lattice vocabulary, I've been very swept up in direct testing and implementing models. It's been really fun setting all this stuff up.
The more it works, the more I get excited that the formulas I'm manifesting are cohesive representations of purpose rather than simple random convergence. I've altered them hundreds of times, but the pipeline goal is still present. Unified geometric vocabulary WILL be a universal language, not simply a tinker-toy, but instead a full lexical representation of potential with all manifested trajectory and solidification of grammatical, lexical, symbolic, and representative substructure.
It's at the point where time will tell HOW this system is useful. Even if it can DO ALL THAT, large-scale adoption, or even minimal-scale adoption, depends on how robustly useful it is and how many technically knowledgeable eyes end up on the topic. It's already well beyond the question of IF this system will be useful, which means I feel obligated to at least continue kicking my legs until I get access to a speedboat.
Simply put, I've built this system for the eyes of the technical - with some very direct and representative understanding to the less technical available as well.
There are some saving graces though. You can probably house the entire purpose of a word in a 256d token, but you won't get all of the robust lexical and analytical behavioral responses required from the orthonormalized 5th, so it will likely be less accurate than a 512d token.
You can get some more utility from upscaling 256 to 512, and you gain some sparsity which allows more growth - with the downside that the sparsity is filled with no meaning, which tends to confuse the model and build pockets of misrepresentation on projection.
Multiple overlapping projections are the most robust from what I've been observing: you take the same token and blow it up multiple times to multiple different projection sizes. This has proven invaluable - the behavioral response from geometry 4-5 with freeze/unfreeze has shown that all layers can complementarily improve performance, while the final version can be any of them individually requested, since they are all experts on their own plane and the output does not require all of their outputs.
There are many potential variations of the models from these geometries - including 200+ projections implemented on the same model using the same tokens.
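A minimal sketch of the overlapping-projection idea, with hypothetical sizes and a per-expert freeze toggle (not the actual model code):

```python
import torch
import torch.nn as nn

class MultiProjectionToken(nn.Module):
    """Project the same base token into several sizes; each projection acts as
    its own 'expert' and any one of them can be requested on its own.
    Illustrative sketch only."""
    def __init__(self, base_dim=256, sizes=(256, 512, 1024)):
        super().__init__()
        self.heads = nn.ModuleDict({str(s): nn.Linear(base_dim, s) for s in sizes})

    def forward(self, token, size):
        return self.heads[str(size)](token)

    def set_trainable(self, size, flag):
        for p in self.heads[str(size)].parameters():   # freeze/unfreeze one expert
            p.requires_grad = flag

model = MultiProjectionToken()
tok = torch.randn(8, 256)
out_512 = model(tok, 512)          # request only the 512d expert
model.set_trainable(256, False)    # freeze the 256d expert for this training phase
```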
Pairs, triplets, quins, and penta word + letter combinations remain uncrystallized and unexplored, but I plan to use the same system to run them.
I'll likely implement a sentencepiece-esque translator that will turn a sentencepiece vocabulary directly into crystal variants with weighting for convenience, which will allow for much more utilizable and easy-to-represent vocabularies for expanding current models.
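A rough sketch of what such a translator could look like, assuming a sentencepiece .model file; the crystallize function here is a deterministic placeholder standing in for the real crystal construction:

```python
import zlib
import numpy as np
import sentencepiece as spm

def crystallize(piece: str, dim: int = 5) -> np.ndarray:
    """Placeholder crystal builder: a deterministic random simplex per piece.
    The real construction would come from the geometric vocabulary formulas."""
    rng = np.random.default_rng(zlib.crc32(piece.encode("utf-8")))
    return rng.standard_normal((dim, dim)).astype(np.float64)

def sentencepiece_to_crystals(model_path: str):
    """Walk a sentencepiece vocabulary and emit (piece, weight, crystal) triples.
    model_path is whatever .model file the target tokenizer ships with."""
    sp = spm.SentencePieceProcessor(model_file=model_path)
    for idx in range(sp.get_piece_size()):
        piece = sp.id_to_piece(idx)
        weight = sp.get_score(idx)          # log-prob score reused as a weight
        yield piece, weight, crystallize(piece)
```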
Wordnet with hard-gated, non-fabricated tokens has proven the most valuable; however, the tokens are still shallow and require full solidification and robustness curation with additional definitions and datasets.
Research is ongoing and many mechanisms still need to be created.
This one has many logistics issues. Primarily, there's no precedent I know of for literally training hundreds of millions of potential character combinations, with their prefabricated crystal variations, to tune a specific series of trajectories in specific directions based on the input text targeting other crystals, the weights, and the batch. The dataset needs to be properly prepared, though, and I can't find any prefabricated variation of the data format that the symbolic lexical engine needs in order to be robust.
There are a few possibilities for this one. Batching is an obvious one: take a large influx of information in, then grab any matching words, characters, or information and update those using the formulas for topological tuning.
The main issue is that the language web is massive. BILLIONS of variations can crop up from a single document if you're not hard-capping depth; if you traverse the whole tree, say "the quick brown fox" becomes words, becomes definitions, becomes letters - not counting multi-pass finetuning. This alone is a massive logistics nightmare to implement, but thankfully this is the modern era.
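A minimal sketch of a depth- and node-capped expansion over that web; get_definitions is a placeholder for whatever lexical source (e.g. wordnet glosses) backs the traversal:

```python
from collections import deque

def expand(text, get_definitions, max_depth=2, max_nodes=10_000):
    """Breadth-first expansion of the 'language web' with hard caps.
    get_definitions(word) -> list of defining words; a stand-in for the
    real lexical source. Illustrative only."""
    seen, out = set(), []
    queue = deque((w, 0) for w in text.split())
    while queue and len(out) < max_nodes:
        word, depth = queue.popleft()
        if word in seen or depth > max_depth:
            continue
        seen.add(word)
        out.append((word, depth))
        for defining_word in get_definitions(word):
            queue.append((defining_word, depth + 1))
    return out

# expand("the quick brown fox", get_definitions)  # capped at 2 hops and 10k nodes
```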
Simply put: if I hard cap to a 500k vocab with a depth of no more than 50,000 pentachora crystals each, it should be capable of housing an approximate word structure within a trajectory space.
I'd rather run it on a fleet of devices and feed it The Pile, the book corpus, and everything else, so we can get some truly trajectory-related subsets of 500k+ crystals per token, upward of 100,000,000 or so combinations each. The crystals really aren't that big, and they house a massive amount of context.
Even so, there are many logistics nightmares to this, but it's a viable option for training a legitimate similarity-fed BERT or LLAMA meant to specifically form linguistic responses using those crystals as tuning forks for solidity.
More purpose with more careful organization... now we're talking.
I'm going heavy into lexical cardinality today and preparing a full crystal-structured geometry that is fully wordnet-capable. Anything that isn't covered can be formed at runtime.
Full lexicality will include unigrams, 2-6 ngram counts from wordnet with frequency weights, usage, and a multitude of other elements. Each will be crystallized specifically. If you have any suggestions for making this more robust, I'm all ears.
I could go with google books or something bigger, but I'm sticking to wordnet because it won't take me weeks to process entirely.
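As one possible starting point (a sketch only; the real pipeline will differ), nltk's WordNet interface already exposes multiword lemma names and SemCor-derived usage counts, which can seed 1-6 gram entries with frequency weights:

```python
from collections import Counter
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def wordnet_ngrams(max_n=6):
    """Collect 1..max_n gram lemma entries from wordnet with frequency weights.
    Multiword lemmas are underscore-joined in wordnet (e.g. 'kick_the_bucket');
    lemma.count() gives SemCor-derived usage counts (0 for most rare lemmas)."""
    grams = Counter()
    for name in wn.all_lemma_names():
        tokens = name.split("_")
        if 1 <= len(tokens) <= max_n:
            freq = sum(lem.count() for lem in wn.lemmas(name))
            grams[tuple(tokens)] += max(freq, 1)   # floor at 1 so rare entries survive
    return grams

# grams[("quick",)] -> weighted unigram count; grams[("ice", "cream")] -> bigram count
```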
Crystal geometry will be given rich versions that include the correct lexical and organizational subsets specific to the lexicality and frequency of use, as well as the proper ascii, wordnet, and unicode sets.
For wordnet-rich: each definition will contribute towards the overall goal of the upcoming crystals, so the system will represent that goal proportionately through multiple crystals and concatenated trajectories rather than the full concatenation the current vocabulary is doing. Additionally, the frequency tokens will decide the orthogonal trajectory more carefully.
For testing and quick prototype purposes:
We will need to train a BERT variant that houses some capability for rapid geometric crystal prediction through ngram feature similarity, sentence similarity, sentence classification, and a few other BERT traits that bert-beatrix-2048 is capable of. I know BERT can handle this at least - however BERT can't house the entirety of meaning, so it will be imperfect... even so, it will be considerably faster than querying the whole dataset every time you want a character, or preparing a massive vocab for rapid testing and iteration. Ask BERT.
Not to mention feature extraction for training rapid classification heads with geometric subsystems, which are notoriously fast at training.
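A rough sketch of that quick-prototype path: a frozen off-the-shelf BERT feeding a small head that predicts a flattened crystal. The model name and crystal shape are placeholders rather than the bert-beatrix-2048 setup:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Frozen encoder for feature extraction; the head alone is trained on crystal targets.
name = "bert-base-uncased"                       # placeholder encoder
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name).eval()

crystal_head = nn.Linear(bert.config.hidden_size, 5 * 512)  # 5 vertices x 512d (assumed shape)

@torch.no_grad()
def encode(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    return bert(**batch).last_hidden_state[:, 0]   # [CLS] features

features = encode(["the quick brown fox", "ice cream"])
pred_crystals = crystal_head(features).view(-1, 5, 512)  # train the head against target crystals
```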
