
Talk of ethics in artificial intelligence is not enough

Last year, the AI community received a major wake-up call. There were remarkable advances in AI research in 2018, from reinforcement learning to generative adversarial networks to better natural-language understanding. But the year also saw several high-profile examples of the harm these systems can cause when they are deployed too hastily. A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber struck and killed a pedestrian. Commercial face recognition systems failed audits on dark-skinned people, yet technology giants continued to sell them anyway, to clients including law enforcement. At the start of this year, reflecting on these events, I wrote a resolution for the AI community: stop treating AI as magic, and take responsibility for creating, deploying, and regulating it.

There was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines, and companies rushed to set up responsible-AI teams and parade them in front of the media. It is hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related question: How do we protect people's privacy when AI demands so much data? But talk is only talk. AI ethics guidelines remain vague and hard to implement. Few companies can show concrete changes in the way AI products and services are evaluated and approved. We are falling into a trap of ethics-washing, where genuine action is replaced by superficial promises.

The same advances in GANs have also fueled the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's trust in documentation and evidence. But it is not all doom and gloom: last year also saw the greatest grassroots pushback against harmful AI, from community groups, policymakers, and tech employees themselves. Cities banned the use of face recognition, and federal legislation could soon ban it in US public housing as well. Employees of giants such as Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers doubled down on mitigating AI bias and reexamined the incentives that drive the field's energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose new legislation meant to rein in such harms without dampening innovation. At the field's biggest annual research gathering this year, I was both moved and amazed by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve. My hope is that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development.

We should not lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so that they could one day help us solve some of our toughest challenges.

AI, simply put, is meant to help humanity flourish. Let us not forget that.