Thank you for the insightful response! And thanks for introducing me to the idea of 'unprincipled virtue'; that is new to me. I actually agree with you that lionizing reason, particularly what Adorno and Marcuse call 'instrumental reason,' can lead us slowly and unwittingly down a dark path. It seems to me that much thinking in engineering unfolds in this way, with little regard for the bigger societal picture or impact. The central idea of the book "Surveillance Capitalism" seems to be that no single malicious act or decision turned Google and FB into what are essentially digital advertising giants; rather, the incentives in the system tend toward the direction of quickest possible gain (the logic reminds me of gradient ascent/descent optimization), regardless of the final destination. I wish we could return to looking at the forest rather than the individual trees, if that makes sense.
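To make that analogy concrete, here is a minimal sketch of gradient ascent greedily climbing the nearest hill: the objective function, starting point, and parameters are my own illustrative choices, not anything from the discussion above. Starting near a small local peak, the process follows the direction of quickest local gain and never discovers a far larger peak elsewhere.

```python
import math

def f(x):
    # Toy objective: a small hill near x = 1 and a much larger one near x = 5.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 5) ** 2)

def numerical_grad(f, x, h=1e-6):
    # Central finite-difference estimate of the derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def gradient_ascent(f, x, lr=0.1, steps=200):
    # Greedily step in the direction of quickest local gain.
    for _ in range(steps):
        x += lr * numerical_grad(f, x)
    return x

# Starting at x = 0, the process climbs the nearby hill (height ~1.0)
# and settles there, even though a peak three times higher sits at x = 5.
x_final = gradient_ascent(f, x=0.0)
```

Each individual step is locally rational, yet the destination is determined entirely by where the system happened to start, which is the worry about incentive-driven systems in a nutshell.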
You bring up an interesting point at the end. Ideally, what I would like to see is a kind of universal, open-source "personal ontology" that anyone could build and plug in to various accredited recommender services. Developing your personal ontology would force you to reckon with deeper questions of personal values and morals, which I think get drowned out in current recommender systems that rely largely on behavioral data. I also wonder whether perspectives from human development and humanistic psychology (Carl Rogers and Urie Bronfenbrenner) might help ensure that recommender systems actually help us become the kind of people we truly wish to be.
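For what it's worth, here is one toy sketch of what such a plug-in could look like. Everything in it is hypothetical (the `PersonalOntology` class, the value tags, the blending weight `alpha` are all invented for illustration): the point is only to show declared values re-ranking scores that were derived purely from behavior.

```python
from dataclasses import dataclass

@dataclass
class PersonalOntology:
    # User-declared weights over value tags, e.g. learning is good,
    # outrage-bait is bad. These come from reflection, not click logs.
    values: dict

def rerank(items, ontology, alpha=0.5):
    """Blend a recommender's behavioral score with the user's declared values.

    items: list of (name, behavioral_score, tags) tuples.
    alpha: how much weight declared values get relative to behavior.
    """
    def adjusted(item):
        name, score, tags = item
        value_score = sum(ontology.values.get(t, 0.0) for t in tags)
        return (1 - alpha) * score + alpha * value_score
    return sorted(items, key=adjusted, reverse=True)

ontology = PersonalOntology(values={"learning": 1.0, "outrage": -1.0})
items = [
    ("clickbait-video", 0.9, ["outrage"]),   # high engagement prediction
    ("lecture-series", 0.5, ["learning"]),   # lower engagement prediction
]
reranked = rerank(items, ontology)
```

With `alpha=0.5`, the lecture (0.25 + 0.5 = 0.75) outranks the clickbait (0.45 - 0.5 = -0.05), even though the behavioral model alone would recommend the reverse, which is roughly the inversion I am hoping a personal ontology could enable.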