
Culturally and ecologically sensitive AI will be necessary: Professor on artificial intelligence, human rights
NT Correspondent
Bengaluru: At an event hosted by the National Law School of India University (NLSIU), Bengaluru, in November 2022, Prof Anupam Chander, Scott K Ginsburg Professor of Law at Georgetown University Law Center, delivered an insightful lecture on the subject ‘Artificial Intelligence and Human Rights’. The Consulate General of the Federal Republic of Germany in Bengaluru supported the event, conducted at the Bangalore International Centre as part of NLSIU’s AI and Human Rights project.
The project aims to create a multidisciplinary framework for comprehending how artificial intelligence affects human rights. In his speech, Prof Chander focused on the geopolitics that governs artificial intelligence and identified several conflicts in the debate over AI policy.
These included worries that the developing world may fall behind in the race to create cutting-edge AI technologies, and the concern among Western politicians that China’s rise as a technological leader might portend a shift in the balance of power away from the West.
Prof Chander brought up the threat of “data colonialism” in a comparative context. He argued that, in the guise of data protection, industrialized countries, particularly those in the European Union (EU), adopt measures that prevent service providers from developing countries from accessing their markets. With instructive examples, Prof Chander also demonstrated the limitations of AI.
As an illustration of regional mismatch, he cited an animal-detection feature that Volvo was developing. When the company tested the feature on Australian roads, it found that the AI failed to identify kangaroos as large animals because they hop, unlike any of the animals it had been trained on. Volvo identified the issue and began training the system on kangaroos.
He stated that AI will often need to be culturally and environmentally sensitive, and that an AI trained on the behaviour of the American population will likely produce inaccurate results when deployed in China, or vice versa. These examples, he said, highlight the value of diverse teams developing and administering AI systems.
But, he said, this holds for any significant system that affects a wide range of people, not only AI systems. Today, machines make decisions about people: AI grants or denies loans, matches people for dating, selects investments, evaluates job applications, and provides search results, in addition to assisting with tax filing.
He added that governments should demand what we can call “locally responsible AI,” given that AI is making decisions that impact people’s lives. Prof Chander encouraged the audience to envision a future in which the race to create powerful AI technologies would not be merely a zero-sum geopolitical game, but one in which human wellbeing would be at the centre of AI policy.