Recommended News

Business & Government Advanced read

The Strategic Guide to Responsible Platform Business

The Strategic Guide to Responsible Platform Business highlights alternative strategies that platform organisations can pursue to acknowledge and mitigate platform capitalism’s problematic tendencies. While informative for policymakers and civil society, the Guide is mainly meant as a resource for decision-makers in European platform firms. It outlines key ethical concerns and trade-offs, then points to possible responses that platforms operating in the market have already tried.

Business & Government Simple read

Chinese AI Gets Ethical Guidelines For the First Time, Aligning With Beijing's Goal of Reining in Big Tech

China has revealed its first set of ethical guidelines governing artificial intelligence, placing emphasis on protecting user rights and preventing risks in ways that align with Beijing's goals of reining in Big Tech's influence and becoming the global AI leader by 2030.

Developers Intermediate read

Measuring the Ethical Behavior of Technology (Video: 40min)

This session shares the results and lessons from creating and developing an ethical “yardstick” for respectful technology, including its application to websites and mobile apps. The speakers also explore what they learned from everyday people during validation research on the certification mark, and share recommendations for tech makers.

[Advanced] The next generation of data ethics tools

The research reveals some of the broad shapes and patterns of the tech and data ecosystem and its relationships to current ethics tools. It covers three key sections:

  • Early tech and data ethics tools and their users
  • The evolving tech and data ethics landscape
  • Lessons for the next generation of tech and data ethics tools

[Intermediate] Big Tech’s guide to talking about AI ethics

The tech giants have developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.

[Advanced] The State of AI Ethics

The State of AI Ethics Report (January 2021) captures the most relevant developments in AI Ethics since October 2020.

[Advanced] How Contextual Integrity can help us with Research Ethics in Pervasive Data

The growth of research projects relying on pervasive data — big datasets about people’s lives and activities that can be collected without their knowledge — is testing the ethical frameworks and assumptions traditionally used by researchers and ethical review boards to ensure adequate protection of human subjects.

[Intermediate] Inside Timnit Gebru’s last days at Google

On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

Welcome Back to the Office. Please Wear This Tracking Device.

A boom in contact tracing devices could herald a new era of worker surveillance.

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: how is AI shifting power?

Cyberbullying Detection with Fairness Constraints

Cyberbullying is a widespread adverse phenomenon among online social interactions in today's digital society. While numerous computational studies focus on enhancing the cyberbullying detection performance of machine learning algorithms, proposed models tend to carry and reinforce unintended social biases. In this study, we try to answer the research question of "Can we mitigate the unintended bias of cyberbullying detection models by guiding the model training with fairness constraints?".
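The core idea of fairness-constrained training can be illustrated with a small sketch: add a penalty on the demographic-parity gap (the difference between the groups’ mean predicted scores) to an ordinary classification loss. This is a minimal illustration under my own assumptions, not the paper’s actual method; `train_fair_classifier`, `parity_gap`, and the synthetic data are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def parity_gap(w, b, X, group):
    """Absolute gap between the two groups' mean predicted scores."""
    p = sigmoid(X @ w + b)
    return abs(p[group == 0].mean() - p[group == 1].mean())

def train_fair_classifier(X, y, group, lam=0.0, lr=0.5, epochs=2000):
    """Logistic regression whose loss adds lam * (demographic-parity gap)^2."""
    w = np.zeros(X.shape[1])
    b = 0.0
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        err = p - y                        # cross-entropy gradient w.r.t. logits
        grad_w = X.T @ err / len(y)
        grad_b = err.mean()
        gap = p[g0].mean() - p[g1].mean()  # signed parity gap
        dp = p * (1.0 - p)                 # derivative of sigmoid at each point
        dgap_w = (X[g0] * dp[g0][:, None]).mean(0) - (X[g1] * dp[g1][:, None]).mean(0)
        dgap_b = dp[g0].mean() - dp[g1].mean()
        grad_w += 2.0 * lam * gap * dgap_w # gradient of the fairness penalty
        grad_b += 2.0 * lam * gap * dgap_b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic data in which the features correlate with group membership,
# so an unconstrained model picks up the group signal.
rng = np.random.default_rng(0)
group = np.repeat([0, 1], 100)
X = rng.normal(loc=group[:, None] * 1.5, scale=1.0, size=(200, 2))
y = ((X.sum(axis=1) + rng.normal(0, 0.5, 200)) > 1.5).astype(float)

w_plain, b_plain = train_fair_classifier(X, y, group, lam=0.0)
w_fair, b_fair = train_fair_classifier(X, y, group, lam=5.0)
gap_plain = parity_gap(w_plain, b_plain, X, group)
gap_fair = parity_gap(w_fair, b_fair, X, group)
```

On this toy data the constrained model trades some accuracy for a smaller parity gap, which is exactly the tension the study examines.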

Shaping the Terrain of AI Competition

How should democracies effectively compete against authoritarian regimes in the AI space? This report offers a “terrain strategy” for the United States to leverage the malleability of artificial intelligence to offset authoritarians' structural advantages in engineering and deploying AI.

Beyond a Human Rights-based approach to AI Governance

This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it.

AI Fairness

Ruoss et al. published the first method to train AI systems with mathematically provable certificates of individual fairness. Full source code is available on GitHub.

The new IKEA Data Promise (Video: 10:37)

Companies discover data ethics as part of their corporate communications: IKEA promises to embed data ethics into all their processes.

Preprint: Alternative personal data governance models

The not-so-secret ingredient that underlies all successful Artificial Intelligence / Machine Learning (AI/ML) methods is training data. There would be no facial recognition, no targeted advertisements and no self-driving cars were it not for data sets large enough to train those algorithms to perform their tasks. Given how central these data sets are, important ethics questions arise: How is data collection performed? And how do we govern its use? This chapter – part of a forthcoming book – looks at why new data governance strategies are needed; investigates how different data governance models relate to historic consent approaches; and compares different implementations of personal data exchange models.

How Big Tech Manipulates Academia to Avoid Regulation

A Silicon Valley lobby enrolled elite academia to avoid legal restrictions on artificial intelligence.

Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination

Interesting article using intersectional analysis to define emergent discrimination in algorithmic systems and recommend mitigations for it.

Checklist for safe and responsible digital health research

My colleague Camille Nebeker at UCSD specializes in digital health ethics. She has developed a checklist for “safe and responsible digital health research”; it’s private, but you can request a copy.

Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The enduring challenge for the United States Department of Defense (DoD) is to retain a technological and military advantage while upholding and promoting democratic values, working with our allies, and contributing to a stable, peaceful international community. DoD’s development and use of artificial intelligence (AI) reflects this challenge.
The Defense Innovation Board (DIB) recommends five AI ethics principles for adoption by DoD, which in shorthand are: responsible, equitable, traceable, reliable, and governable. These principles and a set of recommended actions in support of them are described in this document.

The (not so) Global Forum on AI for Humanity

Earlier this week, I traveled to Paris to attend the Global Forum on Artificial Intelligence for Humanity (GFIAH). The by-invitation event featured one day of workshops addressing issues such as AI and culture, followed by two days of panels on developing trustworthy AI, data governance, the future of work, delegating decisions to machines, bias and AI, and future challenges. The event was part of the French government’s effort to take the lead on developing a new AI regulatory framework that it describes as a ‘third way’, distinct from the approaches to AI in China and the United States.

EU guidelines on ethics in artificial intelligence: Context and implementation (PDF)

You may have seen the EU Guidelines on Ethics in AI. As I believe we are entering a ‘regulating AI’ debate, it’s worth sharing them here.

The Tech Pledge

Take the Tech Pledge and join the movement: Make tech a force for good. Created by 150 people in technology during this year’s Techfestival.

Artificial Intelligence Governance and Ethics: Global Perspectives

AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice.

The Ethics of Personal Data: Human Rights, Agency Costs, Protection Rackets, and Privacy Wrongs

A gaping ethical dilemma at the very heart of the data protection ecosystem practically guarantees widespread privacy wrongs.


List of Questions

Developers Advanced read

Tech Ethics Lab: Call for Proposals

The Notre Dame-IBM Technology Ethics Lab is pleased to release this inaugural Call for Proposals as we seek to fund practical and applied interdisciplinary projects focused on six core themes. The Lab welcomes a broad interpretation of these themes to potentially cover a wide range of ethical questions and challenges. Deadline: November 23rd, 2021

Make AI optional?

My one major requirement for ethical AI: any service that utilizes or incorporates AI modeling of me must allow me to disable that modeling, i.e. to opt out of it. @Oguzhan Gencoglu mentioned, however, that this might not be possible.

Comment on the human rights impacts of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) invites comments from the public on a draft text that was prepared by one of its subordinate bodies and is meant to be adopted by the Committee of Ministers in early 2020. The draft recommendation of the Committee of Ministers to member states on the human rights impacts of algorithmic systems was prepared by the Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (MSI-AUT). The experts will meet again in September to review all comments received and to finalise the draft ahead of its review by the CDMSI in December. Comments should be submitted by email.

Test-driving a couple of ideas

What’s your first reaction to the twin claims that super-human, bias-free AI (let alone AGI) is a) not morally optimal and b) not possible? The two claims are separate but linked: the issue at hand is whether we should try to get rid of bias, so a) is it morally imperative to get rid of bias, and b) is it even possible to do so?