
With v3.x the NEO GPT received its biggest update yet.
It addresses caching issues, oracle "randomness", and image generation by using actions.
See the page actions for more info on actions within the NEO GPT.
Statistics as delivered by the getStats action start on June 2nd, 2025.
Randomness
While the process of drawing oracles may technically be a random function, on other levels it is not. If you throw dice as it is done in the variant of the Tibetan MO I built the NEO upon 14 years ago, you determine two numbers from 1 to 6 in order, that is: you throw one die, note the result A, throw it again, note the result B, and thus have the result A-B.
Ideally, any method of determining the result (let's call it the signifier) would mirror that process by generating one real random result from 1 to 6 and then another, merging them into the same kind of signifier A-B.
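A minimal sketch of what that ideal draw could look like in PHP (an illustration only, not the actual NEO code):

<?php
// Two independent, unbiased rolls from 1 to 6, merged into one signifier "A-B".
$a = random_int(1, 6);
$b = random_int(1, 6);
$signifier = $a . '-' . $b;   // e.g. "3-5"
echo $signifier;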
This was not possible with the LLM without turning on data analysis as a capability, which would expose the complete method and texts in the GPT's output. Also, when running functions, the GPT would show the Python code execution.
As that was not an option here, I tried to get the LLM to pick a random signifier by implementing a pseudo-random process. The results were okay at first, but over time the AI seemed to have developed a preference for specific signs as well as a bias toward whatever kind of outcome would "match" the question best. A clustering around 2 to 4 signifiers became evident.
Enter the actions. With the actions, a PHP endpoint is called, where the ideal draw method is implemented; the endpoint then retrieves only that oracle's base info from a flat-file database and delivers it to the GPT, where it is interpreted. In contrast to the first version, the GPT does not know any other oracle's base info and thus cannot act on its biases.
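A sketch of what such an endpoint could look like; the file names, field names and flat-file format here are assumptions for illustration, not the actual NEO implementation:

<?php
// drawOracle.php (hypothetical): draw the signifier server-side,
// then return only that one record to the GPT.

// Flat-file database: a JSON object keyed by signifier, e.g. "3-5".
$db = json_decode(file_get_contents('oracles.json'), true);

// The ideal draw: two independent rolls from 1 to 6, merged into "A-B".
$signifier = random_int(1, 6) . '-' . random_int(1, 6);

// Deliver only the base info for the drawn signifier; the GPT never sees
// the other entries and so has nothing to act its biases on.
header('Content-Type: application/json');
echo json_encode([
    'signifier' => $signifier,
    'oracle'    => $db[$signifier],
]);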
Caching
The next issue was caching. There are different ways to access info from an endpoint, and some of them seem to lead to massive caching on the side of the GPT. This is done for efficiency: if the GPT believes that the info is static, it will keep it in memory instead of retrieving new info. I found a remedy for that, too; it is actually two measures combined in order to really tackle unwanted caching.
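One common combination (an assumption for illustration, not necessarily the measures used in NEO) is to send no-cache headers from the endpoint and to require a unique nonce per call, so that no two requests look like the same static resource:

<?php
// Hypothetical anti-caching measures; not necessarily the ones used in NEO.

// 1. Tell the client not to store or reuse the response.
header('Cache-Control: no-store, no-cache, must-revalidate, max-age=0');
header('Pragma: no-cache');

// 2. Require a unique nonce parameter per call, so every request URL differs.
if (empty($_GET['nonce'])) {
    http_response_code(400);
    exit(json_encode(['error' => 'missing nonce']));
}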
Image Generation
Basically, image generation had a similar issue with biases. A general explanation of what certain elements symbolize and what attributes they have led to the GPT ignoring "boring" elements and symbols and preferring "exciting" elements (like fire).
With this update, every image is generated from its own signifier's prompt alone, with unique instructions for every possible outcome.
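To illustrate the idea (the field name is an assumption), each record in the flat-file database could carry its own image prompt, which the endpoint simply passes through:

<?php
// Hypothetical layout: every signifier has its own image prompt, so image
// generation never sees the instructions for any other outcome.
$db = json_decode(file_get_contents('oracles.json'), true);
$record = $db['3-5'];           // the drawn signifier
echo $record['image_prompt'];   // unique instructions for this outcome only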
Interpret offline signifiers
Have the results of your own offline draw method interpreted by the AI. This worked easily under the previous system, where all info was available in the GPT's docs for the GPT to look up.
With the new system this, too, needed a PHP endpoint and an action. With fetchOracle and a given A-B, the GPT can look up the info for oracles and image generation and interpret your question and result for you.
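A sketch of what a fetchOracle endpoint could look like; only the action name comes from the text above, the parameter and file names are assumptions:

<?php
// fetchOracle.php (hypothetical): look up a user-supplied offline draw.
$a = (int) ($_GET['a'] ?? 0);
$b = (int) ($_GET['b'] ?? 0);

if ($a < 1 || $a > 6 || $b < 1 || $b > 6) {
    http_response_code(400);
    exit(json_encode(['error' => 'A and B must be between 1 and 6']));
}

$db = json_decode(file_get_contents('oracles.json'), true);
header('Content-Type: application/json');
echo json_encode($db[$a . '-' . $b]);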
Check out the page actions for more info on actions within the NEO GPT.