
Possible Futures in Science and Religion (Sun. Syl. 12/17/2017)

This week’s Sunday Syllabus looks at trends that might alter both the content of religion and how we think about it. New patterns of religious networking are forming in response to political and social realities, and bonds are being created between scientists and religious practitioners.

This week’s The Atlantic interview between Emma Green and Rabbi Sharon Brous showcases what can happen when two thoughtful people have a good conversation. The interview centers on Brous’ work, both as the leader of IKAR, a non-denominational Jewish community in Los Angeles, and as a public commentator on American culture. One major theme is the challenge progressive religious groups face in raising funds and organizing outside traditional denominational channels, which are proving less and less effective. I can’t help but note a parallel to conservative religious groups in the middle of the twentieth century. Excluded from mainstream media and major organizations like the Federal Council of Churches, they responded by forging their own grassroots media and fundraising empires, often centered on particular charismatic individuals. Quite a lot has been written about the alleged re-emergence of conservatives into public life in the seventies, but the reality seems to be that they had simply been building strength where their political and cultural rivals weren’t looking. Perhaps a similar transition is in store for progressives.

I have similar thoughts about her observation that progressive communities are inclusive in certain ways but not others. First, listen to Brous on denominations and praxis: “If people want to be affiliated with a denomination, great. I just happen to think it’s not the most important question. And with real diversity in practice and theology, you actually build a more interesting community.” Compare that to her comments on cultural issues and politics: “There are some Republicans. I honestly don’t think we have any Trump supporters. I think they would have left by now.” A bit later: “I’m not looking to build the biggest, widest tent so that any person with any political perspective should and could feel absolutely comfortable here. I think in those environments, we become so neutral and so numb that we can’t actually say something.” This flexibility on praxis and organization, combined with an ideological winnowing, most resembles … mid-twentieth-century evangelicalism. My, how the tables have turned. The greatest hindrance to progressive religious efforts has been their relative inability to connect their social agenda firmly to their religious identity. Brous seems to have realized this: “And it’s real. It’s not some kind of fruffy liberalism. It’s really rooted in our traditions.” We’ll see.

If the relationship between religion and politics seems destined forever for “it’s complicated” status, our next article represents a promising new chapter for an on-again, off-again couple: religion and science. This article is another fantastic conversation, this time between the two authors of a new book, Beyond the Self: Conversations between Buddhism and Neuroscience. Matthieu Ricard was a molecular biologist before becoming a Buddhist monk, and Wolf Singer is a cognitive scientist. Among several fascinating themes, I was struck by how neither man felt threatened by the other. Both take it as their goal to understand, and both believe the other can contribute to the process. Buddhist practitioners provide concrete empirical data for cognitive scientists, and the cognitive scientists can offer third-person accounts of practitioners’ first-person experiences.

This happy collaboration perhaps works only because the two men approach both science and religion similarly, as technologies for investigating truth and improving human life. (A move recommended by the philosopher Peter Sloterdijk.) Many of the metaphysical claims of Buddhist orthodoxy seem muted or ignored, while the experiential claims act both as data to be evaluated and as an interpretive framework to be confirmed. As a recent Vox article put it, Buddhism (or a portion of it) is being proved true in the sense that it is a verified solution to certain problems. Now, there are long traditions of meditation and mental training in other religious and philosophical frameworks: ascetic Christianity, Sufi Islam, Neo-Platonism, Stoicism, etc. It’s not clear that all of these will enter into a collaboration with science, but this partnership with Buddhism perhaps paves the way for wholesale reconceptualizations of both the scientific task and the practice of religion.

Finally, since the previous set of articles raised questions about the nature of the mind, I thought I would mention a recent achievement in machine learning. A recent chess match between Google’s AlphaZero and the strongest computer chess engine of 2016, Stockfish 8, resulted in a crushing victory for AlphaZero. (The fairly readable journal article is available here.) What makes this historically significant is that AlphaZero and Stockfish are two very different kinds of programs. Stockfish is a traditional chess engine. It contains vast amounts of human knowledge, telling it, for instance, how much to value a rook vs. a bishop, or whether to prioritize center squares over corners. Then, Stockfish searches millions of possible board positions to find the move that maximizes the advantages it’s been programmed to prefer.
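That division of labor — a hand-written evaluation function plus a look-ahead search that backs its judgments up the tree — can be sketched in a few lines. This is a toy illustration, not Stockfish’s actual code: the piece values are just the textbook ones, and the hand-built “game tree” stands in for real move generation.

```python
# Two ingredients of a classical chess engine, in toy form:
# (1) a static evaluation function encoding human knowledge,
# (2) a minimax search that looks ahead and backs up those evaluations.
# Illustrative only -- real engines add alpha-beta pruning, move
# generation, positional terms, and much more.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # textbook values

def evaluate(white_pieces, black_pieces):
    """Static evaluation: material balance from White's point of view."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

def minimax(node, depth, maximizing):
    """Search `depth` plies ahead; leaf nodes carry a static evaluation."""
    if depth == 0 or not node.get("children"):
        return node["eval"]
    scores = [minimax(child, depth - 1, not maximizing)
              for child in node["children"]]
    return max(scores) if maximizing else min(scores)

# White has queen, rook, and pawn against two rooks: 15 - 10 = +5.
print(evaluate(["Q", "R", "P"], ["R", "R"]))  # 5
```

The point of the sketch is that everything strategic lives in numbers a human chose: change `PIECE_VALUES` and the engine’s whole style of play changes with it.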

AlphaZero is a neural network, modeled loosely after the human brain. (Check out this video series for a great explanation.) The network provides a structure within which the program can learn: the designers define a desired result, and the program adjusts itself based on the outcomes it produces. So, AlphaZero was not given any strategic or tactical information; it was told only the rules of chess and that the desired result was checkmate (or, I suppose, a draw if checkmate wasn’t possible). Then, it played millions of games against itself, each time adjusting some of its parameters based on the results it got. The match against Stockfish tested how well it had managed to refine itself. Its success is remarkable; the ten games the Google team released are terrifying and beautiful. (Here’s a video explanation of one.)
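The shape of that self-play loop can be shown with a much smaller game. The sketch below is a toy, not AlphaZero’s algorithm: in place of a deep network and tree search, a simple lookup table learns the game of Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins) given nothing but the rules and the results of games it plays against itself.

```python
# Toy self-play learning, in the spirit of (but far simpler than) AlphaZero.
# V[s] estimates the chance that the player to move wins with s stones left.
# The program is told only the rules and the win condition; all "strategy"
# emerges from playing itself and adjusting V toward observed outcomes.
import random

random.seed(0)
N = 10            # starting pile size
ALPHA = 0.1       # learning rate

V = {s: 0.5 for s in range(N + 1)}
V[0] = 0.0        # no stones left: the player to move has already lost

def best_move(s, explore=0.0):
    """Take 1 or 2 stones, preferring the move that leaves the opponent
    in the worst position; sometimes explore a random move instead."""
    moves = [m for m in (1, 2) if m <= s]
    if random.random() < explore:
        return random.choice(moves)
    return min(moves, key=lambda m: V[s - m])

for _ in range(20000):              # self-play training games
    s, mover, history = N, 0, []
    while s > 0:
        history.append((s, mover))
        s -= best_move(s, explore=0.2)
        mover ^= 1
    winner = mover ^ 1              # whoever took the last stone
    for state, who in history:      # nudge values toward the outcome
        target = 1.0 if who == winner else 0.0
        V[state] += ALPHA * (target - V[state])

# After training, piles that are multiples of 3 (the known losing
# positions in this Nim variant) should carry low values.
```

Nothing in the code says “leave your opponent a multiple of 3,” which is the textbook winning strategy; the table discovers it from results alone. That, scaled up enormously, is the unsettling part: the designers can verify that the learned values win games without being able to point to where the strategy is written down.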

The remarkable thing about a neural network is that the human designers often don’t know how the program is solving the problem they set it, only that it does. This has potential legal and social consequences. If a neural network drives a car or diagnoses cancer more effectively than any human, how do we fix it when it does make a mistake? Whom should we hold responsible? It also raises questions about artificial intelligence. How much autonomy must a computer program have, or how much progress must it make, before we regard it as more than just the product of its programmer? Given how incomprehensible its inner workings are, would we even recognize its kind of sentience if it developed one? Would we be able to learn from it the secrets of a mind? Would it have something useful to contribute to a conversation with a cognitive scientist and a Buddhist monk?

Published in sunday syllabus