For me it is Cellular Automata, and more precisely the Game of Life.
Imagine a giant Excel spreadsheet where the cells are randomly chosen to be either “alive” or “dead”. Each cell then follows a handful of simple rules.
For example, if a cell is “alive” but has fewer than two “alive” neighbors, it “dies” from under-population. If it is “alive” and has more than three “alive” neighbors, it “dies” from over-population. A “dead” cell with exactly three “alive” neighbors comes to life, and so on.
Then you sit back and just watch things play out. It turns out that these basic rules at the individual level lead to incredibly complex behaviors at the community level when you zoom out.
It kinda, sorta, maybe resembles… life.
There's colonization, reproduction, evolution, and sometimes even space flight!
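If anyone wants to play with it, here's a minimal Python sketch of those rules. The grid size and the wrap-around edges are arbitrary choices on my part (the "real" Game of Life runs on an infinite grid):

    import random

    SIZE = 20  # arbitrary grid size, just for illustration

    def step(grid):
        """Apply Conway's rules once to produce the next generation."""
        nxt = [[0] * SIZE for _ in range(SIZE)]
        for r in range(SIZE):
            for c in range(SIZE):
                # Count the 8 surrounding cells, wrapping at the edges
                # (a toroidal grid is a common finite approximation).
                n = sum(grid[(r + dr) % SIZE][(c + dc) % SIZE]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                if grid[r][c]:
                    # A live cell survives only with 2 or 3 live neighbors;
                    # fewer is under-population, more is over-population.
                    nxt[r][c] = 1 if n in (2, 3) else 0
                else:
                    # A dead cell with exactly 3 live neighbors comes alive.
                    nxt[r][c] = 1 if n == 3 else 0
        return nxt

    # A random "spreadsheet" of alive (1) / dead (0) cells, then sit back and watch.
    grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
    for _ in range(5):
        grid = step(grid)
        print("\n".join("".join("#" if c else "." for c in row) for row in grid), end="\n\n")

Even starting from pure random noise, recognizable structures like still lifes, oscillating "blinkers", and gliding "spaceships" tend to emerge within a few generations.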
Does Singer explore how the limits of one's knowledge about the impact of one's actions might factor into these decisions?
Like, I could send $5 to some overseas charity, but I don't have a good way to know how that money is being used. Alternatively, I could use it locally myself to reduce suffering in a way I can verify.
It seems to me that morally I should prioritize actions I know will reduce suffering over actions that may reduce suffering but that I cannot verify. Verification matters because immoral actors exist, so I can't just assume that actions I delegate to others will actually be carried out. Since it's easier to have good knowledge about local actions (especially those I carry out personally), this tends to favor local action.
Only very briefly, and not in a way that I think really addresses your specific example: