Recommended News

Business & Government Intermediate read

IBM Tries To Sell Watson Health Again

Big Blue wants out of health care, after spending billions to stake its claim, just as rival Oracle is making a major move into the sector via its $28 billion bet on Cerner. IBM spent more than $4 billion to build Watson Health via a series of acquisitions. The business now includes health care data and analytics business Truven Health Analytics, population health company Phytel, and medical imaging business Merge Healthcare. IBM first explored a sale of the division in early 2021, with Morgan Stanley leading the process. WSJ reported at the time that the unit was generating roughly $1 billion in annual revenue, but was unprofitable. Sources say it continues to lose money.

Individuals Simple read

The Danger of Leaving Weather Prediction To AI

When it comes to forecasting the elements, many seem ready to welcome the machine. But humans still outperform the algorithms -- especially in bad conditions.

Individuals Simple read

The Next Healthcare Revolution Will Have AI at Its Center

The global pandemic has heightened our understanding of the importance of our own health and of the fragility of healthcare systems around the world. We’ve all come to realize how archaic many of our health processes are, and that, if we really want to, we can move at lightning speed. This is already leading to a massive acceleration in both investment in and application of artificial intelligence across the health and medical ecosystems.

Business & Government Intermediate read

Policy Guidance on AI for Children

As part of our AI for children project, UNICEF has developed this policy guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights. The policy guidance explores AI systems, and considers the ways in which they impact children.

Business & Government Intermediate read

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence

The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users' trust in the new, versatile generation of products.

Business & Government Simple read

Chinese AI Gets Ethical Guidelines For the First Time, Aligning With Beijing's Goal of Reining in Big Tech

China has revealed its first set of ethical guidelines governing artificial intelligence, placing emphasis on protecting user rights and preventing risks in ways that align with Beijing's goals of reining in Big Tech's influence and becoming the global AI leader by 2030.

Business & Government Intermediate read

Ethics and Governance of Artificial Intelligence for Health (PDF, 165 pages)

The report identifies the ethical challenges and risks associated with the use of artificial intelligence for health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use.

Business & Government Intermediate read

Agile Governance of Humans and AI: The Case of Japan

If you are interested in governance models, technology and AI, you might find it interesting to learn more about Japan’s recent “Agile Governance” proposal. You can find more info and links, as well as the conversation with Mr. Hiroki Habuka, who was the main person driving this project, here.

Business & Government Intermediate read

Whatever Happened to IBM's Watson?

After Watson triumphed on the game show "Jeopardy!" in 2011, its star scientist had to convince IBM that it wasn't a magic answer box, and "explained that Watson was engineered to identify word patterns and predict correct answers for the trivia game." The New York Times looks at what's happened in the decade since.

Watson has not remade any industries. And it hasn't lifted IBM's fortunes. The company trails rivals that emerged as the leaders in cloud computing and A.I. — Amazon, Microsoft and Google. While the shares of those three have multiplied in value many times, IBM's stock price is down more than 10 percent since Watson's "Jeopardy!" triumph in 2011. The company's missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company, according to many of the more than a dozen current and former IBM managers and scientists interviewed for this article.

Developers Intermediate read

Training AI Systems to Code

IBM has released an open dataset of coding samples, which demonstrate programming tasks, to help train AI systems to write code. The dataset, known as Project CodeNet, includes 14 million code samples in 55 different programming languages. Researchers at IBM have already begun using the dataset to train AI systems to write code and found that the systems achieved a 90 percent accuracy rate in most code classification and code similarity experiments.
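Code classification, one of the benchmark tasks a dataset like CodeNet supports, can be illustrated with a deliberately simple sketch: represent each sample as a bag of tokens and assign it to the language whose training profile is most similar. This is not IBM's actual method (their experiments use trained neural models on the full dataset); the class name, tokenizer, and tiny training set below are hypothetical, for illustration only.

```python
import math
import re
from collections import Counter

def tokenize(code):
    """Crude identifier/keyword tokenizer (illustrative only)."""
    return re.findall(r"[A-Za-z_]+", code)

class NearestCentroidCodeClassifier:
    """Toy bag-of-tokens language classifier.

    Builds one token-count profile per language and predicts the
    language whose profile has the highest cosine similarity with
    the query sample.
    """

    def __init__(self):
        self.profiles = {}

    def fit(self, samples):
        # samples: iterable of (source_code, language_label) pairs
        for code, lang in samples:
            self.profiles.setdefault(lang, Counter()).update(tokenize(code))

    def predict(self, code):
        query = Counter(tokenize(code))
        qnorm = math.sqrt(sum(n * n for n in query.values()))

        def cosine(profile):
            dot = sum(n * profile[t] for t, n in query.items())
            pnorm = math.sqrt(sum(n * n for n in profile.values()))
            return dot / (qnorm * pnorm) if qnorm and pnorm else 0.0

        return max(self.profiles, key=lambda lang: cosine(self.profiles[lang]))

clf = NearestCentroidCodeClassifier()
clf.fit([
    ("def add(x, y):\n    return x + y", "python"),
    ("import os\nprint(os.getcwd())", "python"),
    ('#include <stdio.h>\nint main(void) { printf("hi"); return 0; }', "c"),
])
print(clf.predict("def area(r):\n    import math\n    return math.pi * r * r"))
```

With 14 million labeled samples rather than three, the same framing scales up naturally; real systems simply replace the token counts and cosine similarity with learned representations.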

[Simple] AI start-up spotted the coronavirus before anyone else had a clue

On December 30, 2019, BlueDot, a Toronto-based startup that uses a platform built around artificial intelligence, machine learning and big data to track and predict the outbreak and spread of infectious diseases, alerted its private sector and government clients about a cluster of “unusual pneumonia” cases happening around a market in Wuhan, China.

[Intermediate] Podcast: Liberty. Equality. Data.

Leveling the Playing Field between Humans and Algorithms: a conversation with Peter Cotton, Ph.D. (Stanford), who has spent a career of more than 20 years leading AI and ML initiatives at major US banks.

[Simple] The Battle for AI Startups Will Change Privacy as We Know It

A bird’s-eye view of the corporate topography reveals that, even aside from the big tech titans, more and more companies are acquiring talented AI teams at ever faster rates. Not all AI applications are treated equally, however, and what’s telling is the data on the type of technology that’s being bought in these acquisitions, as it offers a window into how AI might significantly change certain industries in the coming years.

[Intermediate] Big Tech’s guide to talking about AI ethics

The tech giants have developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.

Business & Government Simple read

Europe seeks to limit use of AI in society

The use of facial recognition for surveillance, or algorithms that manipulate human behaviour, will be banned under proposed EU regulations on artificial intelligence.

[Intermediate] The EU should regulate AI on the basis of rights, not risks

In only a few months the European Commission is set to present a regulation on artificial intelligence (AI). Despite numerous statements from civil society and other actors about the dangers of applying a risk-based approach to regulating AI, the European Commission seems determined to go in this direction. The Commission already said in its 2020 White Paper on Artificial Intelligence that it wanted to apply a risk-based approach.

However, a risk-based approach to regulation is not adequate to protect human rights. Our rights are non-negotiable and they must be respected regardless of a risk level associated with external factors.

[Intermediate] Augmented Intelligence

This is what a human-centred approach to AI technology could look like.

[Advanced] The State of AI Ethics

The State of AI Ethics Report (January 2021) captures the most relevant developments in AI Ethics since October of 2020.

[Intermediate] Inside Timnit Gebru’s last days at Google

On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployment of AI

European governments never fail to surprise the world with their innovative minds. Such registries can empower citizens and give them a way to evaluate, examine, or question governments’ applications of AI.

Amazon: How Bezos built his data machine

People love convenience and Amazon has prospered by obsessing about how to anticipate our wants before we’re even aware of them. Here is a very detailed news column on how and why Amazon collects data about you.

A Robot Wrote This Article. Are You Scared Yet, Human?

This week the Guardian published an essay written by GPT-3, OpenAI's language generator, calling it "a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it." For this essay, the model was fed the prompt, "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could 'spell the end of the human race.' I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me." Here's what the AI wrote.

Individuals Simple read

Climate Change, Personal Data & AI

Practical use of personal data and AI to save the planet: using an AI algorithm to turn personal photos of forests and nature into a blend of fire and ash, helping us visualize the issues of climate change.

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: how is AI shifting power?

Cyberbullying Detection with Fairness Constraints

Cyberbullying is a widespread adverse phenomenon among online social interactions in today's digital society. While numerous computational studies focus on enhancing the cyberbullying detection performance of machine learning algorithms, proposed models tend to carry and reinforce unintended social biases. In this study, we try to answer the research question of "Can we mitigate the unintended bias of cyberbullying detection models by guiding the model training with fairness constraints?".
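One common way to guide model training with fairness constraints is to add a penalty on the demographic-parity gap (the difference in mean predicted scores between two groups) to the classifier's loss. The sketch below does this for a logistic regression trained by gradient descent; it is a generic illustration of the idea, not the authors' exact formulation, and the function names and toy data are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, groups, lam=0.0, lr=0.5, epochs=2000):
    """Logistic regression with an optional demographic-parity penalty.

    Minimizes: binary cross-entropy + lam * gap^2, where gap is the
    difference between the mean predicted scores of the two groups.
    """
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    g0 = [i for i in range(n) if groups[i] == 0]
    g1 = [i for i in range(n) if groups[i] == 1]
    for _ in range(epochs):
        p = [sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) for x in X]
        gap = sum(p[i] for i in g0) / len(g0) - sum(p[i] for i in g1) / len(g1)
        gw, gb = [0.0] * d, 0.0
        for i in range(n):
            # d(BCE)/d(logit) = p - y, averaged over samples
            e = (p[i] - y[i]) / n
            # d(lam * gap^2)/d(logit_i) via the sigmoid derivative
            sign = 1.0 / len(g0) if groups[i] == 0 else -1.0 / len(g1)
            e += 2.0 * lam * gap * sign * p[i] * (1.0 - p[i])
            for j in range(d):
                gw[j] += e * X[i][j]
            gb += e
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def parity_gap(w, b, X, groups):
    p = [sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) for x in X]
    g0 = [p[i] for i in range(len(X)) if groups[i] == 0]
    g1 = [p[i] for i in range(len(X)) if groups[i] == 1]
    return sum(g0) / len(g0) - sum(g1) / len(g1)

# toy data: feature 0 is the "content" signal, feature 1 leaks group membership
X = [[1.0, 1.0], [0.5, 1.0], [0.0, 1.0], [1.0, 0.0], [0.5, 0.0], [0.0, 0.0]]
y = [1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 0, 0, 0]
w_base, b_base = train(X, y, groups, lam=0.0)
w_fair, b_fair = train(X, y, groups, lam=5.0)
```

With `lam=0` the model freely exploits the group-correlated feature and the score gap between groups grows; raising `lam` trades a little accuracy for a smaller gap, which is exactly the tension the paper studies.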

Shaping the Terrain of AI Competition

How should democracies effectively compete against authoritarian regimes in the AI space? This report offers a “terrain strategy” for the United States to leverage the malleability of artificial intelligence to offset authoritarians' structural advantages in engineering and deploying AI.

AI Needs Your Data—and You Should Get Paid for It

A new approach to training artificial intelligence algorithms involves paying people to submit medical data, and storing it in a blockchain-protected system.

Beyond a Human Rights-based approach to AI Governance

This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it.

AI Fairness

Ruoss et al. published the first method to train AI systems with mathematically provable certificates of individual fairness. Full source code is available on Github.
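Individual fairness requires that similar individuals receive similar outputs. Ruoss et al. certify this property mathematically over all similar pairs; the sketch below only checks it empirically on a finite list of pairs, so it illustrates the definition being certified rather than their certification method. The model, similarity metric, and data are hypothetical.

```python
def individually_fair_on(model, pairs, similar, eps):
    """Empirical check of the individual-fairness property: every
    pair of inputs deemed similar must receive outputs that differ
    by at most eps. A passing check is evidence, not a proof;
    certification requires reasoning over all similar pairs.
    """
    return all(
        abs(model(a) - model(b)) <= eps
        for a, b in pairs
        if similar(a, b)
    )

# toy scoring model and similarity metric (hypothetical):
# x[0] is a "merit" feature, x[1] a group attribute
model = lambda x: 0.7 * x[0] + 0.1 * x[1]
similar = lambda a, b: abs(a[0] - b[0]) <= 0.1  # same merit, any group

pairs = [((0.8, 0.0), (0.8, 1.0)), ((0.3, 1.0), (0.35, 0.0))]
print(individually_fair_on(model, pairs, similar, eps=0.15))
```

A model that scores purely on the group attribute (`lambda x: x[1]`) fails this check on the same pairs, which is the behavior a fairness certificate rules out.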

Europe is fighting tech battle with one hand tied behind its back

New proposals around data and artificial intelligence will be subject to restrictions that rivals in China and the United States do not face.

The new IKEA Data Promise (Video: 10:37)

Companies discover data ethics as part of their corporate communications: IKEA promises to embed data ethics into all their processes.

Preprint: Alternative personal data governance models

The not-so-secret ingredient that underlies all successful Artificial Intelligence / Machine Learning (AI/ML) methods is training data. There would be no facial recognition, no targeted advertisements and no self-driving cars if it were not for large enough data sets with which those algorithms have been trained to perform their tasks. Given how central these data sets are, important ethics questions arise: How is data collection performed? And how do we govern its use? This chapter – part of a forthcoming book – looks at why new data governance strategies are needed; investigates the relation of different data governance models to historic consent approaches; and compares different implementations of personal data exchange models.

How Big Tech Manipulates Academia to Avoid Regulation

A Silicon Valley lobby enrolled elite academia to avoid legal restrictions on artificial intelligence.

A free online introduction to artificial intelligence for non-experts

The Elements of AI is a series of free online courses created by Reaktor and the University of Helsinki. We want to encourage as broad a group of people as possible to learn what AI is, what can (and can’t) be done with AI, and how to start creating AI methods. The courses combine theory with practical exercises and can be completed at your own pace.

Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination

Interesting article using intersectional analysis to define emergent discrimination in algorithmic systems and recommend mitigations.

Emotion-detecting tech ’must be restricted by law’

A US-based AI institute says that the science behind the technology rests on “shaky foundations”.

Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The United States Department of Defense's (DoD) enduring challenge is to retain a technological and military advantage while upholding and promoting democratic values, working with our allies, and contributing to a stable, peaceful international community. DoD’s development and use of artificial intelligence (AI) reflects this challenge.
The Defense Innovation Board (DIB) recommends five AI ethics principles for adoption by DoD, which in shorthand are: responsible, equitable, traceable, reliable, and governable. These principles and a set of recommended actions in support of them are described in this document.

The (not so) Global Forum on AI for Humanity

Earlier this week, I traveled to Paris to attend the Global Forum on Artificial Intelligence for Humanity (GFIAH). The by-invitation event featured one day of workshops addressing issues such as AI and culture, followed by two days of panels on developing trustworthy AI, data governance, the future of work, delegating decisions to machines, bias and AI, and future challenges. The event was part of the French government's effort to take the lead on developing a new AI regulatory framework that it describes as a 'third way', distinct from the approach to AI in China and the United States.

How AI and Data Could Personalize Higher Education

Artificial intelligence is rapidly transforming and improving the ways that industries like healthcare, banking, energy, and retail operate. However, there is one industry in particular that offers incredible potential for the application of AI technologies: education. The opportunities — and challenges — that the introduction of artificial intelligence could bring to higher education are significant.

EU guidelines on ethics in artificial intelligence: Context and implementation (PDF)

You may have seen the EU Guidelines on Ethics in AI. As I believe we are entering a 'regulating AI' debate, it's worth sharing it here.

The Algorithmic Colonization of Africa

Startups are importing and imposing AI systems founded on individualistic and capitalist drives. “Mining” people for data is reminiscent of the colonizer attitude that declares humans as raw material.


Questions

Business & Government Intermediate read

Atlas of Urban AI - Share Your Local AI Projects

The Global Observatory of Urban AI, the CC4DR’s initiative to guide the ethical implementation of AI in cities, is currently building a repository of AI projects from cities all around the world, which will be made available for consultation via an online ATLAS. This crowdsourced repository of projects aims to recognise the efforts of automation and digitalisation in cities, and become a focal point for researchers, local policymakers, and the public interested in AI.

We are currently gathering the projects that will feed the Atlas, and that is why we would like to ask you to fill in this form with your local projects. It will only take 10 minutes to complete (please fill in one form per project), and your project will be displayed in an online ATLAS alongside projects from other cities that are implementing AI in a principled and rights-based way.

Individuals Intermediate read

UNICEF Global Forum on AI for Children

UNICEF and the Government of Finland will host the Global Forum on AI for Children. This first-of-its-kind event will gather the world’s foremost children’s rights and technology experts, policymakers, practitioners and researchers, as well as children active in the AI space, to connect and share knowledge on pressing issues at the intersection of children’s rights, digital technology policies and AI systems.
Register to attend the Global Forum on AI for Children!

[Advanced] AI Governance in Japan Ver. 1.0 (Interim Report)

The Ministry of Economy, Trade and Industry (METI), a government ministry in Japan, has released "AI Governance in Japan Ver. 1.0 (Interim Report)" for public comment. Since the issues addressed in the report are common global issues and require international collaboration, the ministry would like to receive a wide range of opinions not only from Japan but also from other countries.

How do you train a personal AI?

If a benefit of PersonalAI is privacy, where do you get the training data from? @Iain and @Oguzhan Gencoglu joined the discussion and many interesting points were raised as well as current solutions referenced.

Explainable AI related projects?

Just talked to the key persons from Fujitsu's team today. They are involved with implementing Explainable AI. Wondering if there are any MyData experts doing Explainable AI related projects?

Make AI optional?

My one major requirement for ethical AI: all services that utilize/incorporate AI modeling of me must allow me to optionally disable it, i.e. opt out of the AI modeling of me. @Oguzhan Gencoglu mentioned, however, that this might not be possible.

Comment on the human rights impacts of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) invites comments from the public on one draft text that was prepared by one of its subordinate bodies and is meant to be adopted by the Committee of Ministers in early 2020. The draft recommendation of the Committee of Ministers to member states on the human rights impacts of algorithmic systems was prepared by the Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (MSI-AUT). The experts will meet again in September to review all comments received and to finalise the draft ahead of its review by the CDMSI in December. Comments should be provided through email.

Test-driving a couple of ideas

What’s your first reaction to the twin claims that super-human, bias-free AI (let alone AGI) is neither a) morally optimal nor b) possible? (The two claims are separate but linked. The issue at hand is whether we should try to get rid of bias, and hence: a) is it morally imperative to get rid of bias, and b) is it even possible to do so?)


Tools
AI Ethics: Global Perspectives

Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.

Alt.ai

Personal Artificial Intelligence

Machine Intelligence Research Institute

Foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.

OpenMined

OpenMined is an open-source community whose goal is to make the world more privacy-preserving by lowering the barrier-to-entry to private AI technologies.

Saidot

World's first AI transparency platform: Start making your AI trusted today.

Silo.ai

Build a world with safe human-centric AI that frees the human mind from manual labour and empowers human creativity.