Systems

A system is anything with multiple parts that depend on each other. In other words, every machine and activity is a system on some level. Systems are the best way to achieve [[Goals]]. Everything is a system, and is also part of a larger system.

Interesting Systems Properties

Changing Systems

To change a system you need vision, skills, [[Incentives|incentives]], resources and an action plan. Changing a complex system is hard, and even if the intention is good, the result might not be.

First, focus on [[Incentives]]. Don’t be angry at the people who benefit from a system, or at the system itself. Most systems just end up that way, the same way a river meanders toward the sea or an electrical current finds its way to ground.

Keep in mind that intervening in a system requires some kind of theory, some model in which the positive effects will outweigh the side effects; given how little we know and how bad we are at prediction, that model will probably be wrong. A good way to start is by removing things, a kind of negative intervention, and so probably safer (e.g: you’re unlikely to find a medicine as helpful as smoking is harmful, so focus on stopping smoking). Easy-to-replace systems get replaced by difficult-to-replace systems.

A complex system that works is invariably found to have evolved from a simple system that worked (one with more elementary functions).

Complex systems usually have attractor landscapes that can be used to change them. The world is richer and more complicated than we give it credit for.
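
To make the attractor idea concrete, here is a toy sketch, assuming a made-up one-dimensional landscape with two basins (the potential and step size are my own choices, not from the text): nudges that stay inside a basin decay back to the old state, while a push past the ridge settles into a different stable state.

```python
# A minimal sketch of an "attractor landscape": a hypothetical 1-D landscape
# V(x) = (x**2 - 1)**2 with two basins, at x = -1 and x = +1.
# Small nudges decay back to the same attractor; a nudge past the ridge
# at x = 0 moves the system to a different stable state.

def gradient(x: float) -> float:
    """Slope of the landscape V(x) = (x**2 - 1)**2."""
    return 4 * x * (x**2 - 1)

def settle(x: float, steps: int = 1000, lr: float = 0.01) -> float:
    """Let the system roll downhill until it rests in an attractor."""
    for _ in range(steps):
        x -= lr * gradient(x)
    return round(x, 3)

print(settle(-0.8))         # -1.0: starts in the left basin, stays there
print(settle(-0.8 + 0.5))   # -1.0: small push, decays back to the same attractor
print(settle(-0.8 + 1.0))   #  1.0: push past the ridge, new stable state
```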

Evolution is easier than revolution. A good approach to incrementally change a system (similar to [[Evolution|natural selection]]) is to:

  1. Start by identifying the highest-leverage level to optimize at: ask whether you’re optimizing the machine or a cog within it. Complex systems can change in unexpected ways (butterfly effects): minor differences in starting points make big differences in future states.
  2. Begin optimizing the system by following the Theory of Constraints: at any given time, just one of a system’s inputs constrains the others from achieving a greater total output (see the bottleneck sketch after this list). Make incremental changes. Alter the incentive landscape. If you can make your system less miserable, make your system less miserable!
  3. Re-examine the system from the ground up. Get data. Take nothing but the proven, underlying principles as given. Work up from there to create something better.
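
As a sketch of the Theory of Constraints step, consider a made-up pipeline; the stage names and capacities below are invented for illustration. Total throughput equals the capacity of the tightest stage, so improving anything other than the bottleneck changes nothing.

```python
# A minimal sketch of the Theory of Constraints idea: throughput is set by
# the single tightest stage, so only improving that stage helps.

def throughput(capacities: dict[str, int]) -> int:
    """Units per hour the whole pipeline can deliver: the bottleneck's capacity."""
    return min(capacities.values())

stages = {"order intake": 120, "assembly": 40, "quality check": 90, "shipping": 75}

print(throughput(stages))                      # 40: limited by assembly
print(throughput({**stages, "shipping": 200})) # 40: non-bottleneck upgrade, no gain
print(throughput({**stages, "assembly": 80}))  # 75: bottleneck fixed, shipping is next
```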

Leverage points are places within a complex system (a corporation, an economy, a living body, a city, an ecosystem) where a small shift in one thing can produce big changes in everything. These are the places to intervene in a system, in increasing order of effectiveness:

  1. Constants, parameters, numbers (such as subsidies, taxes, standards).
  2. The sizes of buffers and other stabilizing stocks, relative to their flows.
  3. The structure of material stocks and flows (such as transport networks, population age structures).
  4. The lengths of delays, relative to the rate of system change.
  5. The strength of negative feedback loops, relative to the impacts they are trying to correct against.
  6. The gain around driving positive feedback loops (see the feedback simulation after this list).
  7. The structure of information flows (who does and does not have access to information).
  8. The rules of the system (such as incentives, punishments, constraints).
  9. The power to add, change, evolve, or self-organize system structure.
  10. The goals of the system.
  11. The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises.
  12. The power to transcend paradigms.
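
Items 5 and 6 are easier to feel with a toy simulation of a single stock; all parameter values below are arbitrary. The strength of a negative loop decides how well it holds the stock near its goal against a disturbance, while small changes in the gain of a positive loop produce wildly different trajectories.

```python
# A minimal sketch of leverage points 5 and 6: a negative feedback loop pulls
# a stock back toward a goal, and its strength determines how well it corrects;
# a positive feedback loop compounds the stock, and its gain determines how
# fast it runs away.

def simulate(stock: float, steps: int, neg_strength: float = 0.0,
             goal: float = 0.0, pos_gain: float = 0.0, disturbance: float = 0.0) -> float:
    """Advance a single stock under negative and/or positive feedback."""
    for _ in range(steps):
        stock += neg_strength * (goal - stock)   # negative loop: correct toward the goal
        stock += pos_gain * stock                # positive loop: compound what is there
        stock += disturbance                     # constant outside pressure
    return round(stock, 2)

# Weak vs. strong negative feedback against the same disturbance:
print(simulate(0, 50, neg_strength=0.1, goal=20, disturbance=5))  # ~69.64: weak loop, settles far above the goal of 20
print(simulate(0, 50, neg_strength=0.9, goal=20, disturbance=5))  # ~25.56: strong loop, holds near the goal

# Positive feedback gain: small changes in gain, very different outcomes:
print(simulate(1, 50, pos_gain=0.01))  # ~1.64: 1% gain per step
print(simulate(1, 50, pos_gain=0.10))  # ~117.39: 10% gain per step, runaway growth
```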

Don’t aim for an ideal system. Build a set of [[processes]] and protocols that evolve to fit the environment over time. Complex systems fail.

If everyone agrees the current system doesn’t work well, who perpetuates it? Some systems with systemic or incentive failures are broken in multiple places, so that no one actor can make them better, even though, in principle, some magically coordinated action could move them to a new stable state.

A system needs both competition and slack (the absence of binding constraints on behavior). With some margin for error, the system can pursue opportunities and explore approaches that improve it.

Interactions between a system’s actors cause externalities: consequences of their actions that fall on other actors or processes. This matters because humans are self-centered by default; it’s easy not to notice the effects your actions have on others, and those effects almost never feel as visceral as the costs and benefits to yourself. The canonical examples are [[coordination]] problems, like climate change: taking a plane flight has strong benefits for me but costs everyone on Earth a little bit, a negative externality. A lot of the problems in the world today boil down to coordination problems where our actions have negative externalities.
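
A rough sketch of that flight example, with invented numbers: each person compares their private benefit to their own tiny share of the cost, so everyone takes the action, even though the summed cost exceeds the summed benefit.

```python
# A minimal sketch of a negative externality (all numbers are made up):
# each actor compares their private benefit to their private share of the
# cost, so everyone acts, even though the group as a whole loses.

def acts(private_benefit: float, total_cost: float, population: int) -> bool:
    """An actor takes the action if their own benefit beats their own share of the cost."""
    my_share_of_cost = total_cost / population
    return private_benefit > my_share_of_cost

population = 1_000_000
benefit_per_flight = 100.0   # value of the trip to the person flying
cost_per_flight = 500.0      # harm spread thinly across everyone

print(acts(benefit_per_flight, cost_per_flight, population))  # True: 100 > 0.0005

# Everyone runs the same calculation, so everyone flies:
total_benefit = benefit_per_flight * population
total_cost = cost_per_flight * population
print(total_benefit < total_cost)  # True: collectively worse off (1e8 < 5e8)
```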

Most large social systems pursue objectives other than the ones they proclaim, and the ones they pursue are wrong. E.g: the educational system is not dedicated to producing learning by students but teaching by teachers, and teaching is a major obstruction to learning.

A [[Mental Models|mental model]] of a system is the reduction of how it works. The model cuts through the noise to highlight the system’s core components and how they work together.

Remember, sometimes not doing something is better than doing it (primum non nocere). E.g: suppressing small fires instead of letting them burn off the top layer of the forest sets up larger fires later; spending a week repairing rails after an accident makes people use the car more, causing more deaths than leaving the rails as they were.

Almost no one is evil; almost everything is broken.

Inadequate Equilibria

An Inadequate Equilibrium is a situation in which a community, an institution, or society at large is stuck in a bad Nash equilibrium. The group as a whole has a sub-optimal set of norms and would be better off with a different set, but no individual actor has both the power and the incentive to change the norms for the group, so the bad equilibrium persists (a toy payoff sketch follows the list below). These situations fall into three categories:

  1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else.
  2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information.
  3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state. One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.
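
A toy payoff model of the underlying trap, with invented numbers: with everyone on the old norm, a lone defector to the new norm does worse, so no one moves first, even though the group would be better off if everyone switched together.

```python
# A minimal sketch of a bad Nash equilibrium (payoffs are invented): payoffs
# grow with how many others share your norm, so with everyone on the old norm
# a lone switcher loses, even though a coordinated switch beats the status quo.

def payoff(my_norm: str, others_on_new_norm: int, group_size: int) -> float:
    """Payoff grows with how many others share your norm (a coordination game)."""
    allies = others_on_new_norm if my_norm == "new" else (group_size - 1 - others_on_new_norm)
    base = 2.0 if my_norm == "new" else 1.0   # the new norm is intrinsically better
    return base * allies / (group_size - 1)

GROUP = 10

status_quo = payoff("old", others_on_new_norm=0, group_size=GROUP)          # 1.0
lone_deviator = payoff("new", others_on_new_norm=0, group_size=GROUP)       # 0.0
all_switch = payoff("new", others_on_new_norm=GROUP - 1, group_size=GROUP)  # 2.0

print(lone_deviator < status_quo)  # True: no individual incentive to move first
print(all_switch > status_quo)     # True: coordinated change beats the status quo
```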

Examples