Global Circles Digest | Artificial Intelligence, Global Skills, and the Future of Work

November 15, 2022

Global Circles Digest – a synthesis of the dialogue based on facilitator observation reports, reflecting on the converging and diverging views and emotions expressed during the exchanges. The digests do not attribute statements to individual participants, in order to preserve the private and safe nature of our dialogue spaces.

Written by Rafael Tyszblat, an innovation and design specialist at Soliya overseeing content and curriculum development for new dialogue and training projects.


The June 2022 Global Circles round topic was “Artificial Intelligence, Global Skills and the Future of Work.”

Automation and its impact on work and employability are nothing new. Since the Industrial Revolution of the eighteenth and nineteenth centuries, technology has repeatedly reshaped how work is organized and performed. Although it is not entirely new either, artificial intelligence is considered one of the drivers of the Fourth Industrial Revolution, characterized by a fusion of technologies that blurs the lines among the physical, digital, and biological spheres driving the production and distribution of goods and services.

One of the challenges around this topic was participants’ uneven or limited exposure to the concepts and terminology associated with these new technologies. To overcome that challenge, facilitators identified from the outset participants who had knowledge of the subject and invited them to brief the group on key concepts – without positioning them as “teachers.” Many non-expert participants came to the discussion eager to learn more about the topic, while participants with more expertise were eager to share it. Groups also benefited from a short reading package distributed prior to the sessions.

The main difference in the views expressed was between those who call for more technology in society and those who are concerned about its risks. Understanding those differences and where they come from was critical, so facilitators invited participants to reflect on the following questions:

  • What has been your experience of the evolution of work in the past years?

  • Can you think of examples where technology has improved or worsened working conditions?

  • How do you feel about the advent of the metaverse for our social and work interactions?

  • What new opportunities and risks does artificial intelligence offer?

  • How can humans and AI work side by side? What are the differences in competencies? What will be the most needed skills as AI develops?

  • How would you define “ethical AI”?

  • How do we feel about the idea of more technology in our lives? 

Participants seemed to appreciate owning a topic that initially appeared to belong to technologists but quickly revealed itself as everybody’s concern. Students of law and languages were quick to relate, reflecting on how this new technology could affect their jobs in the future. One participant from the United States said: “I have always thought AI would have a negative impact on the workforce. I thought that the more artificial intelligence that was used, the more lazy humans would be. After communicating this idea, I have changed my opinion. I am taking away a new perspective from this conversation. I now know that artificial intelligence can actually be very beneficial in the future.”

Addressing the question of when technological progress becomes useless or dangerous, participants discussed the pros and cons. While they pointed out how AI can enhance our lives through efficiency, productivity, and ease, they also acknowledged it can be disruptive to interpersonal connection and creativity.

The question of who should be the “boss” in the relationship between humans and computer-powered systems took up a large portion of the dialogue. Most participants understood that AI is here to stay and grow, but some said they would not like a world dominated by robots. While some considered that the advantages of AI in improving human conditions outweighed its inconveniences, many, informed by the recent controversy at Google*, feared that AI could become sentient and develop human-like emotions. Some were even more pessimistic about the future of technology, stating that in the wrong hands it could bring about the end of humanity. One British participant said: “Emotionally mature AI that could develop in the same way as a human would be an excellent feat of science, but I must admit this causes some anxiety also.” An American participant even feared “that if we create robots with algorithms free of empathy, then [it] will result in a more dystopian world full of fear of persecution and mistreatment, since empathy helps serve as an equalizer and allows for trust to be established between people.”

Conversations often shifted to the issue of privacy and data collection through social media and how that could either be disastrous or useful, depending on the intent of those platforms. Some also spoke about the risks of artificially intelligent weapons. One participant explained: “I studied the arms market, I have many negative thoughts about AI implemented in this field. To remain competitive, countries create weapons that are ‘unique’ in their technology. I fear that as a result of the (over) use of AI, there will be more conflicts.”

After some initial time spent sharing concerns, groups shifted to solution-oriented dialogues. One group role-played an imaginary council tasked with regulating AI. The shared idea was that AI should mostly be implemented in routine jobs or certain sectors like transportation, but not where human touch and creativity are needed. Sectors like therapy or health may not be ideal for AI, and new jobs should be created to control and manage AI in order to mitigate some of its risks.

Participants had different perspectives when it came to public investment. Some believed that public investment in AI should not be a priority for now. Most agreed that because AI’s progress is impossible to stop, regulation is needed. Since companies could regulate AI’s progress but will not necessarily do so, many recommended that governments take responsibility for regulating, and even owning, AI research and applications at the national and international levels (including to address unequal access to and ownership of AI and related technology in less developed countries). Another common thought was the importance of educating people, particularly children, about AI.

During the reflections segment, participants expressed how important it is to them to have such safe spaces to discuss fundamental issues. Many thought it was a great subject to discuss with people from different countries, revealing the differences and similarities in how such a global challenge is perceived and addressed. In the end, participants enjoyed a fun and enriching joint analysis and diagnosis of the impact of technology on work and social life.

*In June, a Google engineer was put on leave after claiming that an unreleased AI system called LaMDA had achieved a level of consciousness. In a statement, Google said that its team, which includes ethicists and technologists, “reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.” LaMDA stands for Language Model for Dialogue Applications; unlike most other language models, LaMDA was trained on dialogue, helping its responses fit a given conversational context. The model emulates human-like responses not because it feels anything but because it copies and imitates human language, matching patterns drawn from enormous databases of text. That level of impersonation may lead some people to anthropomorphize LaMDA by ascribing some form of personality to it – which is in fact one of the risks of such models becoming widely used before public awareness and understanding catch up.

For sources and more information: 

https://www.cnn.com/2022/06/13/tech/google-ai-not-sentient/index.html

https://www.searchenginejournal.com/google-lamda-sentient/454820/#close

