The other superintelligence
With artificial superintelligence plausibly on the horizon, it's worth taking seriously the idea that cultures, countries, and other groups are themselves superintelligent agents.
Scientific conferences often impress upon me the scale at which we humans are pursuing science. This week I've been wandering huge auditoriums and airplane-hangar-sized poster rooms where 6,500+ people presented their scientific work on climate monitoring.1 That represents thousands of independent investments in people's ideas, from hundreds of independent organizations, all throwing money at all kinds of ideas to make nebulous and uncertain progress toward loosely related goals. And that's all just for the specific problem of measuring Earth systems from space.
It gives a sense of being a small cog in a big machine. It also makes me consider what is really going on here. We normally think of society as being composed of individuals, and we analyze the intelligence, goals, and agency of specific individuals, as if this were the best way to understand the world. Consider the field of AI, which is trying to replicate human cognition on computers: the discourse is all about achieving "human-level" intelligence, i.e. individual-level intelligence. Or consider the role of biography in illustrating historical trends.
In other words, we normally consider the world as being made up of individual people: discrete agents, each of which formulates and pursues goals, makes decisions, and takes action. But is discretizing the world at the level of individual people the most useful lens? Or should we think instead of collective minds - organizations, governments, cultures?
I don't think analogizing a culture to an intelligent agent will be that foreign to many people. Just like an individual, a culture is an entity that interacts with the outside world (other cultures, nature), forms and pursues goals, and enacts change in the world. But I'm tempted to go further than analogy. I think a collective mind really is an intelligent agent. The simplest reason is that I can't think of a reasonable definition of "intelligent agent" that would exclude a human collective.
The question of definition is particularly salient right now given the (potential) invention of artificial intelligent agents. Many people are reasonably worried about the prospects of superintelligence, i.e. an artificial agent which is smarter than any human, has the capability to form and pursue its own goals, and has enough power over the world that it can take actions in the real world. Such a system would be dangerous because we may not understand it well enough to ensure that its goals do not conflict with ours. It may take actions that would be harmful to humans, and we may not be able to stop it if it can outsmart us at every turn.
The thing is, this also seems to be true of cultures, countries, and other human groups. Countries also form complex goals, and no individual human knows the full details. Countries are more intelligent than any individual, because they are made up of many individuals. They certainly take actions in the world, and often their actions are harmful to certain groups or individuals. And no individual can stop a whole country. You might think a charismatic individual could influence a country's goals - but it is never actually an individual, because one must develop a significant following (itself a collective mind) in order to exert any political influence. It wasn't Hitler who launched WWII; it was Hitler and his followers.
It's worth emphasizing that all the important properties humans have - intelligence, goal-seeking behavior, and agency, among others - are present in greater quantities in groups of humans. In other words, human groups have all the key properties of the hypothetical future AI. Human collectives are superintelligent.
And as we'd expect, the superintelligence is in control. Most obviously, individuals are bound by laws that countries are not. But cultures also impose widely reaching controls on individuals through cultural norms. In some cases these norms might even be called coercive.
In the US, we have a strong cultural norm that everyone should be productive - in other words, spend their time for the benefit of the broader culture. In my experience, people begin to feel guilty and even depressed when they spend too much time doing unproductive things. This is the result of a rather coercive cultural norm. To be clear, it's not always a bad thing, mainly because humans are a social species and therefore have a deep desire and need for things like status. These social instincts make it necessary for us to be immersed in a culture for our own well-being, but they also enable the culture to effectively control its individuals.
So, many people right now are focusing on the potential that humanity might create powerful AI - particularly superintelligent AI, an automated system that is smarter and more capable than any human at any task or goal. I share the view that the invention of such a thing would be a pivotal moment in human history. Even if it's unlikely (and I'm not sure it is), it would be such a big deal that we should seriously consider the implications in case artificial superintelligence is, in fact, invented.
That being said, it might be worth considering that we are already in the presence of superintelligence in the form of our society. Social intelligence is a different kind of intelligence than human intelligence, but then we can't expect artificial intelligence to be identical to human intelligence either. Humans had already ceded much of their control to superior minds long before AI came along.
The conference is the Living Planet Symposium by ESA. It’s all about how we can observe the atmosphere, ocean, biosphere, and geosphere from space. Satellites are really clever!