Clara Lachman Cyber Security and Emerging Threats

The Missing Element to Navigating the Threat of AI


November 30, 2022 marked a turning point in the landscape of artificial intelligence (AI). On this day, a new technology known as ChatGPT landed in the hands of the public. Launched as an interactive chatbot by the AI research and deployment company OpenAI, the tool quickly set off a cascade of technological advancement that took society by surprise.

While progressive developments in AI had already been taking place in the background, ChatGPT was different: for the first time, it democratized access to AI, enabling the average person to exploit its potential. More astonishing still, it offered the unique capability to generate novel content; in other words, it provided an alternative to human thinking and creation.

Suddenly, dystopian science-fiction visions of AI taking over the world seemed plausible. And very quickly, headlines labeling AI an existential threat to humanity started entering the public discourse.

Well, what if the underlying risk behind the emerging technology is not the actual technology in and of itself? What if the true concern lies in the way it can be leveraged by people? This article aims to highlight why a human-centered approach must be adopted by the international community to navigate artificial intelligence and mitigate the existential risk posed by it.

The current approach.

Shortly after the launch of ChatGPT, an open letter started circulating worldwide, calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Led by the Future of Life Institute, an organization dedicated to steering “transformative technologies away from extreme, large-scale risks and towards benefiting life,” the call to action was supported by many public figures, including Elon Musk, Steve Wozniak, and Yuval Noah Harari.

Gaining significant media attention, the open letter kickstarted a wave of public concern surrounding the potential power of AI, and suddenly all conversation shifted to the urgent need to have it regulated.

On the international stage, the European Union took the lead in establishing rules for use of the technology: in December 2023, a majority of member states agreed to the provisions of the AI Act, the world’s first comprehensive AI legislation, which the European Parliament has since adopted into law.

While other countries have yet to reach the European Union’s regulatory level, governments worldwide have taken measures to move the dial towards a safer and more secure technological landscape. For example, the U.S. State Department commissioned the firm Gladstone AI to assess “proliferation and security risk from weaponized and misaligned AI”; the firm published its findings earlier this year in the form of an Action Plan.

Recommendations put forward by the organization included building provisional safeguards to secure the development of advanced AI, increasing federal funding for AI safety and security research, and establishing both national and international regulatory agencies to oversee advanced AI development, deployment, and compliance.

While the importance of the international actions exemplified above cannot be dismissed, they reveal a blind spot in how governments have navigated the threat of AI: it is not only the technology itself that can create potential harm, but the human beings using it.

The missing element.

“The solution isn’t less technology, but better humans.” These were the words of Garry Kasparov, the Russian chess grandmaster defeated in 1997 by Deep Blue, an IBM chess-playing system and the first computer to win a match against a reigning world champion under classical tournament conditions.

In our current society, the presence of bad actors cannot be denied. Combine unethical and immoral human behaviour with a tool as powerful as artificial intelligence, and AI becomes an existential concern.

Accordingly, as alluded to by Kasparov, part of the solution lies in developing better human beings that will make ethical, moral, and conscientious decisions in their use and application of the technology which is becoming increasingly integrated into our lives.

As such, in addition to regulating AI, the international community must actively commit to prioritizing the personal growth and development of its people. In practice, this means investing in education on topics such as emotional intelligence, psychology, and ethical decision-making, so that all students receive foundational learning in how to live alongside one another on this Earth. For example, countries can take inspiration from the School of Humanity, an organization reimagining the education system: over the course of a four-year, literacy-based curriculum, students receive an interdisciplinary learning experience that exposes them to philosophical and existential inquiry, morality and ethics, and cultural awareness.

Moreover, countries should make greater efforts to elevate the well-being of their people. The sailboat metaphor developed by humanistic psychologist Dr. Scott Barry Kaufman offers a valuable framework for leaders here: it is the dynamic integration of both security and growth needs that supports individuals on their unique journeys towards wholeness, and ultimately shapes how they interact with the rest of the world.

While it is important to recognize that the moral enhancement of humanity is not a novel idea and has long been explored in philosophical inquiry, the main objective of this article is to remind the international community that this discussion also belongs in the context of navigating artificial intelligence. Regulation may be part of the solution to the threat of AI, but it is not a panacea; it must be underpinned by a values-driven and ethical civil society.

In conclusion.

Imagine a world in which the marvels of artificial intelligence prevail because the technology is carefully regulated, and human beings across all countries live in societal conditions that allow them to flourish. Only then does a forward-looking coexistence between humans and technology finally become reality.

In the words of Garry Kasparov: “Could [this] be the perfect game ever played?”


Photo: a computer generated image of the letter a (2023) by Steve Johnson via Unsplash

Disclaimer: Any views or opinions expressed in articles are solely those of the author and do not necessarily represent the views of the NATO Association of Canada.

Author

  • Clara Lachman is a storyteller and Future Generations Voice with an aspiration to contribute to a future of flourishing. At a time when the global world order is fraying, she believes a new story is needed to transition humanity towards a state of global peace, trust and prosperity. In line with her purpose, Clara is currently completing a Junior Research Fellowship with the NATO Association of Canada, writing for the Society, Culture, and Security program. Leveraging her 5+ years of combined experience in public policy, legal studies, and the science of well-being, she takes an innovative approach in intersecting diverse fields to propose policy solutions to contribute to a better tomorrow. In her free time, Clara actively speaks to topics including human and planetary flourishing, the future of democracy and governance, exponential technologies and ethics, meaning 3.0, augmented humanity, and women, peace and security. Clara can be reached at: clara@claralachman.com and www.linkedin.com/in/claralachman
