For the last couple of years, I have been traversing Massachusetts Avenue from Harvard Square to Kendall Square. I use this 2-mile, 40-minute leisurely walk for reflection as I shuttle from one commitment at Harvard to another appointment at MIT. The walk is a metaphor, a bridge between two different worlds: a humanistic Harvard, focused on government, economics, and policy; and a techno-centric, entrepreneurial, venture-powered MIT. It is a privilege, especially at this stage in my life, to be able to continue to be challenged by ideas every day, to learn, to teach, and to be immersed in this environment. I am so grateful for this.
This past weekend there was a bit of a role reversal. First, I was at the School of Engineering at Harvard, listening to Noam Brown, an OpenAI researcher. He talked about their latest release (o3-mini), capping what had been the “DeepSeek week,” when hype met bewilderment (my words, not his). As an engineer myself, I marveled at the possibilities as the frontier AI labs in the US compete with China and as the scientific community feels redeemed by the progress being made with “open models.”
The next day I took the walk again and headed to the TeχnēCon Conference, an AI ethics convening at the MIT College of Computing. Humanities at the engineering cathedral! Now I was the one feeling redeemed; the dual role of tech and the humanities has been a theme for me since one of the inaugural posts on this blog.
What makes AI fascinating to me is this confluence of technology and philosophy: going deep into a human invention to question what it is to be human. Rich Sutton, an AI scientist, challenged the concept of intelligence in 2019 in his piece “The Bitter Lesson” – or, at least, encouraged us to let go of trying to simplify the modeling of human minds and to let raw computing power do the work, possibly unleashing a different kind of intelligence. Later, in 2020, Dario Amodei and others would engrave these “scaling laws” at the core of the AI development ethos. But if (or when, depending on where you stand in the AGI/superintelligence debate) we get to a perfect machine intelligence, should we delegate decision-making to it?
Do not assume that this is an esoteric question. Today, when you make a right turn based on a Google Maps command, when your mortgage application is accepted or rejected, or when your resume is screened by an AI filter, you are already delegating a decision to an artificial intelligence. Should we do this, especially as the stakes get higher and higher? What about deciding on sentencing and whether to send someone to prison for life without parole? What about determining the course of treatment for a cancer patient? Do we need a human in the loop?
Enter the humanities. Christine Susienka’s talk (article) at TeχnēCon pointed to the “risk of turning moments of great moral salience into merely bureaucratic affairs” and reminded us that we take some comfort in knowing that another person is bearing the moral weight of big decisions. “By outsourcing important decisions to AI systems, we communicate a lack of faith in human beings and human potential.” At the limit, when dealing with life-or-death decisions, these are important questions.
And this is why I plan to continue taking this 40-minute walk.