Relevant information

Business & Government Intermediate read

IBM Tries To Sell Watson Health Again

Big Blue wants out of health care, after spending billions to stake its claim, just as rival Oracle is moving big into the sector via its $28 billion bet for Cerner. IBM spent more than $4 billion to build Watson Health via a series of acquisitions. The business now includes health care data and analytics business Truven Health Analytics, population health company Phytel, and medical imaging business Merge Healthcare. IBM first explored a sale of the division in early 2021, with Morgan Stanley leading the process. WSJ reported at the time that the unit was generating roughly $1 billion in annual revenue, but was unprofitable. Sources say it continues to lose money.

Individuals Simple read

The Danger of Leaving Weather Prediction To AI

When it comes to forecasting the elements, many seem ready to welcome the machine. But humans still outperform the algorithms -- especially in bad conditions.

Individuals Simple read

The Next Healthcare Revolution Will Have AI at Its Center

The global pandemic has heightened our awareness of the importance of our own health and of the fragility of healthcare systems around the world. We’ve all come to realize how archaic many of our health processes are, and that, if we really want to, we can move at lightning speed. This is already leading to a massive acceleration in both investment in and application of artificial intelligence in the health and medical ecosystems.

Business & Government Intermediate read

Policy Guidance on AI for Children

As part of our AI for children project, UNICEF has developed this policy guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights. The policy guidance explores AI systems, and considers the ways in which they impact children.

Business & Government Intermediate read

Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence

The Commission proposes today new rules and actions aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users' trust in the new, versatile generation of products.

Individuals Simple read

Americans Need a Bill of Rights for an AI-Powered World

The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Business & Government Simple read

Chinese AI Gets Ethical Guidelines For the First Time, Aligning With Beijing's Goal of Reining in Big Tech

China has revealed its first set of ethical guidelines governing artificial intelligence, placing emphasis on protecting user rights and preventing risks in ways that align with Beijing's goals of reining in Big Tech's influence and becoming the global AI leader by 2030.

Business & Government Intermediate read

Ethics and Governance of Artificial Intelligence for Health (PDF, 165 pages)

The report identifies the ethical challenges and risks associated with the use of artificial intelligence for health, along with six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sectors – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by its use.

Business & Government Intermediate read

Agile Governance of Humans and AI: The Case of Japan

If you are interested in governance models, technology and AI, you might find it interesting to learn more about Japan’s recent “Agile Governance” proposal. You can find more information and links, as well as a conversation with Mr. Hiroki Habuka, the main person driving this project forward, here.

Business & Government Intermediate read

What Ever Happened to IBM's Watson?

After Watson triumphed on the game show Jeopardy! in 2011, its star scientist had to convince IBM that it wasn't a magic answer box, and "explained that Watson was engineered to identify word patterns and predict correct answers for the trivia game." The New York Times looks at what's happened in the decade since.

Watson has not remade any industries. And it hasn't lifted IBM's fortunes. The company trails rivals that emerged as the leaders in cloud computing and A.I. — Amazon, Microsoft and Google. While the shares of those three have multiplied in value many times, IBM's stock price is down more than 10 percent since Watson's "Jeopardy!" triumph in 2011. The company's missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company, according to many of the more than a dozen current and former IBM managers and scientists interviewed for this article.

Developers Intermediate read

Training AI Systems to Code

IBM has released an open dataset of coding samples, which demonstrate programming tasks, to help train AI systems to write code. The dataset, known as Project CodeNet, includes 14 million code samples in 55 different programming languages. Researchers at IBM have already begun using the dataset to train AI systems to write code and found that the systems achieved a 90 percent accuracy rate in most code classification and code similarity experiments.
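The code-classification experiments described above can be illustrated with a toy sketch: a bag-of-tokens nearest-profile classifier that guesses a snippet's programming language. This is a minimal illustration of code classification in general, not IBM's method, and the training snippets below are invented, not drawn from Project CodeNet.

```python
from collections import Counter
import re

def tokens(code):
    # Crude lexer: count identifier-like tokens in a source snippet.
    return Counter(re.findall(r"[A-Za-z_]+", code))

def similarity(a, b):
    # Cosine-style overlap between two token-count vectors.
    shared = sum((a & b).values())
    total = (sum(a.values()) * sum(b.values())) ** 0.5
    return shared / total if total else 0.0

# Tiny labelled "training set" (invented snippets, not CodeNet data).
train = {
    "python": tokens("def main():\n    print('hello')"),
    "c": tokens('#include <stdio.h>\nint main(void) { printf("hello"); return 0; }'),
}

def classify(code):
    # Predict the language whose training profile is most similar.
    return max(train, key=lambda lang: similarity(train[lang], tokens(code)))
```

A real system would train a neural model on millions of samples; the point here is only the shape of the task — map a snippet to a language label via learned per-language statistics.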

[Simple] AI start-up spotted Coronavirus before anyone else had a clue

On December 30, 2019, BlueDot, a Toronto-based startup that uses a platform built around artificial intelligence, machine learning and big data to track and predict the outbreak and spread of infectious diseases, alerted its private sector and government clients about a cluster of “unusual pneumonia” cases happening around a market in Wuhan, China.

[Intermediate] Podcast: Liberty. Equality. Data.

Leveling the Playing Field between Humans and Algorithms: a conversation with Peter Cotton, Ph.D. (Stanford), who spent his 20+ year career leading AI and ML initiatives at major US banks.

[Simple] The Battle for AI Startups Will Change Privacy as We Know It

A bird’s-eye view of the corporate topography reveals that, even aside from the big tech titans, more and more companies are acquiring talented AI teams at ever faster rates. Not all AI applications are treated equally, however, and what’s telling is the data on the type of technology that’s being bought in these acquisitions, as it offers a window into how AI might significantly change certain industries in the coming years.

[Intermediate] Big Tech’s guide to talking about AI ethics

The tech giants have developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.

Business & Government Simple read

Europe seeks to limit use of AI in society

The use of facial recognition for surveillance, or algorithms that manipulate human behaviour, will be banned under proposed EU regulations on artificial intelligence.

[Intermediate] The EU should regulate AI on the basis of rights, not risks

In only a few months the European Commission is set to present a regulation on artificial intelligence (AI). Despite numerous statements from civil society and other actors about the dangers of applying a risk-based approach to regulating AI, the European Commission seems determined to go in this direction. The Commission already said in its 2020 White Paper on Artificial Intelligence that it wanted to apply a risk-based approach.

However, a risk-based approach to regulation is not adequate to protect human rights. Our rights are non-negotiable and they must be respected regardless of a risk level associated with external factors.

[Intermediate] Augmented Intelligence

This is what a human-centred approach to AI technology could look like.

[Advanced] The State of AI Ethics

The State of AI Ethics Report (January 2021) captures the most relevant developments in AI Ethics since October of 2020.

[Intermediate] Inside Timnit Gebru’s last days at Google

On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployment of AI

European governments never fail to surprise the world with their innovative minds. Such registries can empower citizens and give them a way to evaluate, examine, or question governments’ applications of AI.

Amazon: How Bezos built his data machine

People love convenience and Amazon has prospered by obsessing about how to anticipate our wants before we’re even aware of them. Here is a very detailed news column on how and why Amazon collects data about you.

A Robot Wrote This Article. Are You Scared Yet, Human?

This week the Guardian published an essay written by GPT-3, OpenAI's language generator, calling it "a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it." For this essay, the model was fed the prompt, "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could 'spell the end of the human race.' I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me." Here's what the AI wrote.

Climate Change, Personal Data & AI

On Wednesday, the United States Congress questioned the CEOs of Google, Apple, Facebook and Amazon, examining issues related to GAFA's dominance of the market. Data was one of the topics underlying those inquiries.

Individuals Simple read

Climate Change, Personal Data & AI

Practical use of personal data and AI to save the planet. Using AI algorithm to turn personal photos of forest and nature into a blend of fire and ash to help us visualize the issues of climate change.

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: how is AI shifting power?

Cyberbullying Detection with Fairness Constraints

Cyberbullying is a widespread adverse phenomenon among online social interactions in today's digital society. While numerous computational studies focus on enhancing the cyberbullying detection performance of machine learning algorithms, proposed models tend to carry and reinforce unintended social biases. In this study, we try to answer the research question of "Can we mitigate the unintended bias of cyberbullying detection models by guiding the model training with fairness constraints?".
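The fairness-constraint idea can be sketched as a regularized training objective: the task loss plus a penalty on the gap in positive-prediction rates between demographic groups. This is a generic demographic-parity penalty shown for illustration, not necessarily the authors' exact formulation; the function names are ours.

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between two groups.

    preds  -- list of 0/1 model predictions
    groups -- parallel list of group labels ("a" or "b")
    """
    rate = {}
    for g in ("a", "b"):
        members = [p for p, m in zip(preds, groups) if m == g]
        rate[g] = sum(members) / len(members)
    return abs(rate["a"] - rate["b"])

def constrained_loss(base_loss, preds, groups, lam=1.0):
    # Task loss plus a fairness penalty weighted by lam: minimizing this
    # during training trades raw accuracy against the parity gap.
    return base_loss + lam * demographic_parity_gap(preds, groups)
```

In practice the penalty would be computed on differentiable scores rather than hard 0/1 predictions so that it can be optimized by gradient descent, but the trade-off it encodes is the same.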

Shaping the Terrain of AI Competition

How should democracies effectively compete against authoritarian regimes in the AI space? This report offers a “terrain strategy” for the United States to leverage the malleability of artificial intelligence to offset authoritarians' structural advantages in engineering and deploying AI.

AI Needs Your Data—and You Should Get Paid for It

A new approach to training artificial intelligence algorithms involves paying people to submit medical data, and storing it in a blockchain-protected system.

Beyond a Human Rights-based approach to AI Governance

This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it.

IBM and Microsoft support the Vatican’s guidelines for ethical AI

IBM and Microsoft are among the first to sign the Vatican’s “Rome Call for AI Ethics.”

AI Fairness

Ruoss et al. published the first method to train AI systems with mathematically provable certificates of individual fairness. Full source code is available on GitHub.
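Individual fairness is commonly formalized as a Lipschitz condition: individuals who are similar under a task-specific distance must receive similar predictions. The sketch below is only an empirical violation counter under that informal definition, assuming hypothetical `model` and `dist` callables; it is not the certification method of Ruoss et al., which proves such bounds rather than sampling them.

```python
def lipschitz_violations(model, pairs, dist, L=1.0):
    """Count pairs (x, y) violating |model(x) - model(y)| <= L * dist(x, y).

    model -- callable mapping an input to a numeric score
    pairs -- iterable of (x, y) input pairs deemed comparable
    dist  -- task-specific distance between inputs
    L     -- Lipschitz constant bounding how fast scores may change
    """
    return sum(
        1 for x, y in pairs
        if abs(model(x) - model(y)) > L * dist(x, y)
    )
```

For example, the model `lambda x: 2 * x` with absolute-difference distance violates the bound for L = 1 but satisfies it for L = 2.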

Europe is fighting tech battle with one hand tied behind its back

New proposals around data and artificial intelligence will be subject to restrictions that rivals in China and the United States do not face.

The new IKEA Data Promise (Video: 10:37)

Companies discover data ethics as part of their corporate communications: IKEA promises to embed data ethics into all their processes.

Preprint: Alternative personal data governance models

The not-so-secret ingredient that underlies all successful Artificial Intelligence / Machine Learning (AI/ML) methods is training data. There would be no facial recognition, no targeted advertisements and no self-driving cars were it not for data sets large enough to train those algorithms to perform their tasks. Given how central these data sets are, important ethics questions arise: How is data collection performed? And how do we govern its use? This chapter – part of a forthcoming book – looks at why new data governance strategies are needed; investigates the relation of different data governance models to historic consent approaches; and compares different implementations of personal data exchange models.

How Big Tech Manipulates Academia to Avoid Regulation

A Silicon Valley lobby enrolled elite academia to avoid legal restrictions on artificial intelligence.

A free online introduction to artificial intelligence for non-experts

The Elements of AI is a series of free online courses created by Reaktor and the University of Helsinki. We want to encourage as broad a group of people as possible to learn what AI is, what can (and can’t) be done with AI, and how to start creating AI methods. The courses combine theory with practical exercises and can be completed at your own pace.

Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination

Interesting article using intersectional analysis to define emergent discrimination in algorithmic systems and to recommend mitigations.

Emotion-detecting tech ’must be restricted by law’

A US-based AI institute says that the science behind the technology rests on “shaky foundations”.

Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The United States Department of Defense's (DoD) enduring challenge is to retain a technological and military advantage while upholding and promoting democratic values, working with our allies, and contributing to a stable, peaceful international community. DoD’s development and use of artificial intelligence (AI) reflects this challenge.
The Defense Innovation Board (DIB) recommends five AI ethics principles for adoption by DoD, which in shorthand are: responsible, equitable, traceable, reliable, and governable. These principles and a set of recommended actions in support of them are described in this document.

The (not so) Global Forum on AI for Humanity

Earlier this week, I traveled to Paris to attend the Global Forum on Artificial Intelligence for Humanity (GFIAH). The by-invitation event featured one day of workshops addressing issues such as AI and culture, followed by two days of panels on developing trustworthy AI, data governance, the future of work, delegating decisions to machines, bias and AI, and future challenges. The event was part of the French government's effort to take the lead on developing a new AI regulatory framework that it describes as a 'third way', distinct from the approach to AI in China and the United States.

How AI and Data Could Personalize Higher Education

Artificial intelligence is rapidly transforming and improving the ways that industries like healthcare, banking, energy, and retail operate. However, there is one industry in particular that offers incredible potential for the application of AI technologies: education. The opportunities — and challenges — that the introduction of artificial intelligence could bring to higher education are significant.

EU guidelines on ethics in artificial intelligence: Context and implementation (PDF)

You may have seen the EU Guidelines on Ethics in AI. As I believe we are entering a 'regulating AI' debate, it's worth sharing here.

The Algorithmic Colonization of Africa

Startups are importing and imposing AI systems founded on individualistic and capitalist drives. “Mining” people for data is reminiscent of the colonizer attitude that declares humans as raw material.

Questions asked

Business & Government Intermediate read

Atlas of Urban AI - Share Your Local AI Projects

The Global Observatory of Urban AI, the CC4DR’s initiative to guide the ethical implementation of AI in cities, is currently building a repository of AI projects from cities all around the world, which will be made available for consultation via an online ATLAS. This crowdsourced repository of projects aims to recognise the efforts of automation and digitalisation in cities, and to become a focal point for researchers, local policymakers, and the public interested in AI.

We are currently gathering the projects that will feed the Atlas, which is why we would like to ask you to fill in this form with your local projects. It will only take 10 minutes to complete (please fill in one form for each project), and your project will be displayed in an online ATLAS alongside projects from other cities that are implementing AI in a principled and rights-based way.

Individuals Intermediate read

UNICEF Global Forum on AI for Children

UNICEF and the Government of Finland will host the Global Forum on AI for Children. This first-of-its-kind event will gather the world’s foremost children’s rights and technology experts, policymakers, practitioners and researchers, as well as children active in the AI space, to connect and share knowledge on pressing issues at the intersection of children’s rights, digital technology policies and AI systems.
Register to attend the Global Forum on AI for Children!

[Advanced] AI Governance in Japan Ver. 1.0 (Interim Report)

The Ministry of Economy, Trade and Industry (METI), a government ministry in Japan, has released "AI Governance in Japan Ver. 1.0 (Interim Report)" for public comment. Since the issues addressed in the report are common global issues and require international collaboration, the ministry would like to receive a wide range of opinions not only from Japan but also from other countries.

How do you train a personal AI?

If a benefit of PersonalAI is privacy, where do you get the training data from? @Iain and @Oguzhan Gencoglu joined the discussion; many interesting points were raised, and current solutions were referenced.

Explainable AI related projects?

Just talked to the key persons from Fujitsu's team today. They are involved with implementing Explainable AI. Wondering if there are any MyData experts doing Explainable AI related projects?

Make AI optional?

My one major requirement for ethical AI: all services that utilize/incorporate AI modeling of me must allow me to optionally disable AI. I.e. opt out of the AI modeling of me. @Oguzhan Gencoglu mentioned however that this might not be possible.

Comment on the human rights impacts of algorithmic systems

The Steering Committee on Media and Information Society (CDMSI) invites comments from the public on one draft text that was prepared by one of its subordinate bodies and is meant to be adopted by the Committee of Ministers in early 2020. The draft recommendation of the Committee of Ministers to member states on the human rights impacts of algorithmic systems was prepared by the Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (MSI-AUT). The experts will meet again in September to review all comments received and to finalise the draft ahead of its review by the CDMSI in December. Comments should be provided through email.

Test-driving a couple of ideas

What’s your first reaction to the twin claims that super-human, bias-free AI (let alone AGI) is not a) morally optimal nor b) possible? (The two claims are separate but linked. The issue at hand is whether we should try to get rid of bias and hence: a) is it morally imperative to get rid of bias and b) is it even possible to get rid of it.)

AI Ethics: Global Perspectives

Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.

Personal Artificial Intelligence

Machine Intelligence Research Institute

Foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.


OpenMined

OpenMined is an open-source community whose goal is to make the world more privacy-preserving by lowering the barrier to entry to private AI technologies.


World's first AI transparency platform: Start making your AI trusted today.

Build a world with safe human-centric AI that frees the human mind from manual labour and empowers human creativity.