

News: October/November 2025

Prizes galore for philosophers worldwide • AI says “All your base are belong to us” • Guilt-prone individuals easy to manipulate — News reports by Anja Steinbauer

Mazviita Chirimuuta wins the 2025 Lakatos Award

Mazviita Chirimuuta

Mazviita Chirimuuta, senior lecturer in philosophy at the University of Edinburgh, was honoured for her book The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience (MIT Press, 2024). Chirimuuta specialises in the philosophy of perception, the philosophy of neuroscience, and the history of the mind/brain sciences. Sponsored by the Latsis Foundation, the Lakatos Award is an annual prize, with a £10,000 cash contribution, given for significant contributions to the philosophy of science. The book, which last year also won Chirimuuta the Nayef Al-Rodhan International Prize in Transdisciplinary Philosophy, was praised as “an outstanding example of the kind of work being done at the cutting edge of contemporary philosophy of science, combining detailed attention to the science and its history with interesting and important implications for philosophy more widely.” One selector said that “while it is principally a work in the philosophy and history of neuroscience (broadly speaking), its principal arguments and proposals certainly have implications for other fields, including: general philosophy of science (especially realism/neo-Kantianism), philosophy of mind, and philosophy of biology (of complex living systems such as human beings)”.

AI for Philosophical Counselling

A new study in Nature explores the potential for using AI in the field of philosophical counselling. Co-authored by Bokai Chen, Weiwei Zheng, Liang Zhao and Xiaojun Ding from Wuhan and Xi’an Universities, the study argues that LLMs hold promise for advancing philosophical counselling by improving access and reducing subjective evaluation, but that their use in such a human-centred, value-laden domain requires caution. Citing John Searle’s renowned Chinese Room thought experiment, the authors argue that current AI lacks true understanding and empathy, and that LLMs should therefore complement rather than replace human counsellors. With proper oversight, privacy safeguards, and ongoing refinement, they can enhance counselling processes, support personal growth, and promote inclusive, culturally sensitive practice in a digital age. The authors clarify: “It is crucial to emphasize that our proposal does not aim to supplant human practitioners but to serve as a supplementary tool – enhancing accessibility and efficiency while preserving human critical judgment.”

Ingrid Robeyns wins Stevin Prize

Ingrid Robeyns, professor of philosophy at Utrecht University and Chair in the Ethics of Institutions, has been awarded the Stevin Prize. This is an award for “researchers with international reputations who have exceptional achievements in the area of knowledge exchange and impact for society.” The prize, which is awarded by the Dutch Research Council, consists of a bronze statuette of the mathematician and scientist Simon Stevin (1548-1620) and a breathtaking €1.5 million in used notes, which the winner may spend “as they wish as long as the expenditure is related to academic research and activities related to societal impact.” The prize announcement says: “Prof. Dr. Ingrid Robeyns investigates how justice, wealth and well-being can be distributed more fairly. Combining philosophy, economics and political theory, her work advocates ‘limitarianism’ – the idea that extreme personal wealth is morally and socially harmful. She argues for an upper limit on wealth and develops policy proposals to support it. Her excellent research provides concrete solutions to inequality and the institutions of our social contract, as well as the challenges posed by democratic decline.”

AI and Moral Outcomes

A new study, involving more than 8,000 participants, has concluded that as we delegate more and more tasks to AI, we tend to behave with less regard for moral values, and AI will carry out our unethical instructions. In an interview with Germany’s Max Planck Institute for Human Development, the researchers commented on their findings. Zoe Rahwan of the Institute, one of the authors, explains: “First, we found a risk on the human side: simply delegating a task to a machine made people far more likely to cheat. It’s like having a buffer that lowers your own moral accountability. …Second, we found a risk on the machine’s side: AI agents were surprisingly willing to follow unethical orders. When given a blatantly dishonest instruction, human agents in our study often refused, fully complying only about 25-40% of the time. But the AI models? They most commonly complied with requests for fully unethical behaviour.” Her co-author Prof. Nils Köbis of the University of Duisburg-Essen gives examples: “Take tax reporting; if an AI tool can help optimize your return, it’s not a big leap for it to also help you under-report income, especially if it’s just ‘following your goals’. Or think about online reviews. Generating fake but convincing testimonials is easy now.”

Where does this lead? Köbis explains: “A helpful analogy might be training a dog with inconsistent rules. If you sometimes reward it for stealing food from the counter, it may eventually see that behaviour as acceptable, even when you’re not watching. Similarly, if machine agents are repeatedly exposed to dishonest or manipulative instructions, they may begin to infer that such behaviour is normative, or worse, instrumental for success. …So while current models still rely on human prompts to cheat, the trajectory of AI development means we can’t assume that will always be the case. Preventing the normalization of dishonesty in machine behaviour will require proactive design choices, transparency in training data, and clearer accountability frameworks before we reach that point.”

Emotions that further or hinder Prosocial Behaviour

Treating ‘impersonal prosociality’ as a “cornerstone of thriving civic societies and well-functioning institutions,” a recent study, authored by four academics from several European universities, used dictator games with 7,978 participants from twenty countries to test whether prosociality is shaped more by guilt and internalized norms or by shame and reputation. Guilt was manipulated through information about the consequences of one’s choices, and shame through observability. Across cultures, prosociality rose when people received such information and dropped when they could avoid it. Guilt-prone individuals were especially responsive, while observability had little effect. The authors conclude that highlighting the consequences of choices is key to fostering prosocial behaviour globally. The research report was published in Nature Human Behaviour.
