The new AI for science is different
AI systems that output explanations or algorithms should not be conflated with prior systems that guess a solution to a problem.
A few weeks ago, Google released AlphaEvolve, a system designed for "general-purpose algorithm discovery", which has a far broader range of capabilities than previous systems such as AlphaGo. The potential for systems like this to revolutionize science is widely hyped in the AI industry.
Beyond these well-known systems, more bespoke AI and machine learning methods have already diffused widely across many different scientific disciplines. Experiences with these more narrowly applicable methods have raised questions about whether the current AI hype is warranted. As I'll argue, these narrow methods are fundamentally different from what is possible with AlphaEvolve and other language-model-based AI systems, and it is a mistake to conflate the two as the same kind of thing. Really, "artificial intelligence" as a category is becoming so broad that it is losing its usefulness.
Broadly speaking, I think there are two key types of AI that need to be distinguished, perhaps most simply using an example. Consider the problem of solving a differential equation. Most commonly, especially in applied fields such as mine (climate science), an AI-based approach would be to set up a machine learning algorithm -- e.g. a neural network -- and then train it on a large collection of input-output pairs, which in this example might be initial conditions and some final state. The training data must be generated using some other method, such as a traditional numerical solver for differential equations. The machine learning algorithm "learns" to predict a solution organically, without being specifically programmed to do so, hence the term "artificial intelligence".
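To make this concrete, here is a minimal sketch of the direct-prediction approach. All of the specifics -- the toy ODE, the library choices, the network size -- are my illustrative assumptions, not anything from a real climate application:

```python
# Sketch of "AI as direct solution prediction": a neural network is trained
# on (initial condition -> final state) pairs generated by a traditional
# numerical ODE solver. Toy problem and settings are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

def rhs(t, y):
    """Damped pendulum: y = [angle, angular velocity]."""
    theta, omega = y
    return [omega, -0.2 * omega - np.sin(theta)]

T = 1.0  # predict the state T time units after the initial condition
rng = np.random.default_rng(0)

# Training data comes from a conventional numerical solver.
initial = rng.uniform(-1.0, 1.0, size=(1000, 2))
final = np.array([solve_ivp(rhs, (0.0, T), ic).y[:, -1] for ic in initial])

# The network "learns" the solution map without being given the physics.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(initial, final)

test = np.array([[0.5, 0.0]])
print("NN guess:    ", model.predict(test)[0])
print("Solver truth:", solve_ivp(rhs, (0.0, T), test[0]).y[:, -1])
```

The trained model here is the end product: a black box that maps inputs to approximate outputs, with no equations or algorithm a human could inspect.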
The second way of using AI is entirely different and, crucially, has only just become feasible in the age of language models capable of reasoning, such as Gemini, which powers AlphaEvolve. A language model, being trained to solve problems in a way more similar to how a human would, is able to generate an algorithm to solve a problem, as opposed to generating the solution directly. The algorithm might be machine-learning-based, but it need not be. In the differential equation scenario, the end result may well be exactly the type of mathematical model that a human could have discovered, and it therefore has the same potential for e.g. theoretical guarantees and physical understanding.
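The difference in what the AI produces is easy to see in code. In the first mode, the artifact is a trained network; in the second, it is a short, human-readable program like the one below -- written here by hand as an illustration, but exactly the kind of algorithm a language model could propose and a human could then verify and analyze:

```python
# The kind of artifact the second mode produces: an explicit algorithm,
# not a black-box solution. This is the classical fourth-order Runge-Kutta
# step, which comes with well-understood accuracy guarantees.
def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) by one step of size h using classical RK4."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Verifiable against a known exact solution: y' = -y has y(t) = exp(-t).
import math
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, "vs exact", math.exp(-1.0))
```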
As a recent example of this new type of science, this paper used OpenAI's o3-mini model to discover an exact solution to a Potts model, a simplified mathematical model of a magnet. This is a case where the mathematical setup of the Potts model is very difficult to solve either analytically or numerically. The traditional route to using AI in this situation would be to set up an ML algorithm to predict approximate solutions based on numerical simulations. But the authors took a very different approach -- they obtained exact analytical solutions with the help of a language model. The traditional ML method would only produce approximate solutions that cannot be verified and are, in all likelihood, completely opaque to human understanding. In contrast, the analytical solutions can be theoretically verified and offer valuable insights into the Potts model -- insights that humans can understand.
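For readers unfamiliar with it, the textbook q-state Potts model is defined by the Hamiltonian below; this is the generic form, and the particular variant and lattice studied in the paper may differ:

$$
H = -J \sum_{\langle i,\, j \rangle} \delta_{s_i,\, s_j}, \qquad s_i \in \{1, \dots, q\},
$$

where the sum runs over neighboring lattice sites, each spin $s_i$ takes one of $q$ discrete states, and $\delta$ is the Kronecker delta; for $q = 2$ it is equivalent to the Ising model.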
Neural networks as "artificial intuition"
Although these two ways of using AI in science are qualitatively quite different in practice, there are historical reasons why both are called AI, and I suspect there are edge cases where the two types could not be cleanly separated. The basic similarity between the two approaches is that some type of ML algorithm -- usually a deep neural network -- is used somewhere. The difference is whether the neural network is trained to directly predict the solution or to output an algorithm or explanation that then produces the solution, as is the case for language models.
Before language models, one might reasonably have thought that a neural network would never be able to discover knowledge that is provably correct, or even understandable to humans at all. But now we can see that a neural network can be trained to create output expressed in the medium in which humans explain the world. This type of output is therefore likely to be much more useful, at least for the human project of understanding and explaining the world.
I think one useful way of seeing this distinction is to think of the output of a neural network as a form of "artificial intuition".1
Take a neural network for weather prediction as an example. These systems take weather maps at a given time as input and output the (forecasted) weather map at some later time, often 6 hours later. They are trained using a large number of initial/final weather map pairs. In the "artificial intuition" view, this would be like asking a human to look at a bunch of weather map pairs and then produce a forecast by simply guessing using their intuition -- perhaps by drawing what they expect the weather map to look like in the future. Before computer simulations of weather were invented, weather forecasts looked something like this.2
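Concretely, the input/output contract of such a system looks something like the toy sketch below; the variables, grid size, and architecture are illustrative assumptions, far smaller and simpler than any real forecasting model:

```python
# Toy sketch of an ML weather model's shape: a gridded atmospheric state in,
# the predicted grid ~6 hours later out. Trained on historical map pairs.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, channels: int = 4):  # e.g. pressure, temperature, winds
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, state_t0: torch.Tensor) -> torch.Tensor:
        """Map the weather map at time t to the map at t + 6h."""
        return self.net(state_t0)

state = torch.randn(1, 4, 64, 128)  # batch, variables, latitude, longitude
forecast = TinyForecaster()(state)  # same shape: a full predicted weather map
print(forecast.shape)
```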
However, training human forecasters to simply guess at what weather maps might look like was not the best way to use human intuition to forecast the weather. Instead, once computers were invented, a better use of human intuition was to train it on physics, let humans intuit the sets of equations that correctly model the weather, and then simulate the weather based on those equations. In other words, today we do not use human intuition to directly generate the solution to the problem (the forecast). Intuition is used to create better computer simulations, and it is the simulations that generate the forecast.3
So, the majority of past AI in science is the automated version of a human simply guessing the solution to a problem. But this is often not the best way for humans to solve problems, so we should not expect artificial intuition to be the best use of AI for every scientific problem. Instead, many problems will be solved by using artificial intuition to create algorithms, which in turn generate the solutions.
This should also make us more hopeful about being able to understand the solutions that AI systems discover. It is famously difficult to understand how artificial intuition systems (e.g. neural networks) solve problems, just as it is very difficult for humans to explain how their intuition came up with an idea. But if AI starts discovering algorithms and explanations, those algorithms might be much easier to understand. One hopeful example, though we don't know the details, is a heuristic that AlphaEvolve discovered for scheduling jobs more efficiently on Google's compute infrastructure. Apparently, it was still interpretable, debuggable, predictable, and easy to deploy. These are not adjectives typically used for artificial intuition systems such as those used for weather forecasts.
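AlphaEvolve's actual heuristic has not been published in detail, but to see why a discovered algorithm can earn those adjectives, consider a hypothetical scheduling heuristic of the same general shape -- a short scoring function deciding where to place a job:

```python
# Hypothetical illustration only -- not AlphaEvolve's heuristic. A scheduling
# rule expressed as a few lines of code can be read, debugged, and reasoned
# about directly, unlike the weights of a neural network.
def placement_score(job_cpu, job_mem, free_cpu, free_mem):
    """Score a candidate machine for a job; higher is better.

    Prefers placements that leave CPU and memory in balance, reducing
    "stranded" resources that no later job can use.
    """
    if job_cpu > free_cpu or job_mem > free_mem:
        return float("-inf")  # job does not fit on this machine
    return -abs((free_cpu - job_cpu) - (free_mem - job_mem))

machines = [(8.0, 4.0), (6.0, 6.0), (2.0, 8.0)]  # (free CPU, free memory)
best = max(machines, key=lambda m: placement_score(2.0, 2.0, *m))
print("Place the job on the machine with free resources:", best)
```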
The term "artificial intelligence" should include any automated method that reproduces the actions of an intelligent being. The majority of past AI use in science has been limited to artificial intuition -- ML algorithms that directly output the solution to a problem, analogous to a human intuiting the solution. But artificial intelligence could also mean a system that outputs a method to obtain a solution, such as a novel (non-ML-based) algorithm, a mathematical equation, or a new type of explanation. Thus, although many past AI methods in science suffered from numerous drawbacks such as being narrowly applicable, approximate, and difficult to understand, these limitations will not necessarily be shared with systems just coming online -- such as AlphaEvolve -- which are able to produce explanations, rather than just solutions.
1. This argument is sometimes used to explain why reasoning models outperform GPTs. As the analogy goes, models such as GPT-4 are the equivalent of a human who simply says the first thing that comes to mind when asked a question. But, for hard questions, a human has another tool in their toolbox -- they can think for a while, considering various arguments generated by their intuition, before deciding on the best answer and then saying it. This is what "reasoning" models are trained to do. They create an invisible "chain of thought", which allows them to consider various options before outputting what they consider to be the best answer.
2. They did not look exactly like this. Forecasters went through a chain of reasoning, identifying and classifying meteorological features such as fronts, and used knowledge of how e.g. frontal systems typically evolve. But they did draw maps.
3. The analogy is not perfect, as some element of human intuition remains in the best forecasts. However, purely computer-based forecasts (for example, the National Blend of Models from NOAA) are quite good and certainly better than pre-computer forecasts.