12 AI and Digital Predictions for 2024

Published on 05 January 2024
Updated on 19 March 2024

Will AI and digital technologies exacerbate or lessen the impending polycrisis of 2024? This is the primary question underlying these predictions.

In 2024, AI will continue to dominate the tech landscape. The year will be less turbulent than 2023, when AI went through four seasons, from a winter of AI excitement to an autumn of clarity. AI technology will grow steadily, with more clarity and some disillusionment.

We will also return to digital basics in 2024. For instance, we frequently forget that access to the internet is not a given. Like water or electricity services, we become aware of it only when it breaks. We experience our digital reality, including access to cutting-edge ChatGPT, through the decades-old internet protocol suite, TCP/IP. Thus, in addition to AI, our 2024 predictions cover traditional digital issues such as infrastructure and standardisation, cybersecurity, content governance, and the digital economy.

Jovan and Diplo Team

12 Predictions:

Artificial Intelligence | Geopolitics | Governance | Diplomacy | Security | Human Rights | Economy | Standards | Encryption | Identity | Content | Inclusion


Prediction 1: Artificial Intelligence

In 2024, we can anticipate steady growth, some disillusionment, and more clarity in the AI field.

AI technology will continue to grow both deeper, through powerful foundational models, and wider, through more connections to the rest of the digital ecosystem (e.g. the internet of things, virtual reality, digital infrastructures). In parallel, smaller AI and open source models will gain momentum as a more transparent, adaptable and environmentally friendly approach to AI. 

In 2024, Google will try to challenge ChatGPT’s dominance. It remains to be seen whether the AI space will be bipolar, with OpenAI/Microsoft and Google, or multipolar, with more actors joining at the top. Beyond Apple, Amazon, and other companies, the most interesting challenge to the duopoly comes from Meta, which has been supporting the open-source LLaMA model.

Large Multimodal Models (LMMs) will gain in relevance following the shift from text towards integrating video, images, and sound in AI foundation models. 

After realising the strategic relevance of AI-codified knowledge, companies, communities, and countries will intensify the development of bottom-up AI based on in-house solutions. AI models will be localised through the use of data pruning, adaptive computing, and models for retrieval-augmented generation (RAG). Bottom-up AI will become a new AI dynamic in 2024.
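To make the RAG pattern concrete, here is a minimal sketch in Python. It is an illustration only: the toy hashing embedder, the three sample documents, and the query are stand-ins for a trained embedding model and a real, locally held knowledge base.

```python
# Minimal sketch of retrieval-augmented generation (RAG): answers are
# grounded in a local knowledge base rather than relying only on what
# the foundation model memorised during training.
import numpy as np

documents = [
    "Diplo runs courses on AI diplomacy and internet governance.",
    "The EU AI Act requires transparency about training data.",
    "Submarine cables carry most intercontinental internet traffic.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashing embedder; a real system would use a trained model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k locally stored documents most similar to the query."""
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "What does the EU AI Act say about data?"
context = "\n".join(retrieve(query))
# The prompt sent to the language model combines local context and query.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Because the knowledge base stays under local control, this pattern is one way communities can localise AI without retraining a foundation model from scratch.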

Small language models (SLMs) are already here, and cost-efficiency and sustainability considerations will accelerate this trend: AI has a large carbon footprint because training and running foundational models consumes enormous amounts of energy.

After surviving an attack from the ‘extinction risk’ camp in 2023, open-source AI will continue to develop fast in the new year. Open-source models tend to be smaller but more adaptable and customisable. LLaMA is roughly ten times smaller than OpenAI’s large language model (LLM). If LLaMA or other smaller, open-source models (such as BLOOM and OPT) can achieve results similar to GPT-4’s, it could mark a major shift in the AI landscape, opening the field to organisations and individuals with far fewer resources.

Following a lot of hype in 2023 about AI replacing everything, including humans, we can anticipate disillusionment in 2024. The application of AI will be put to the test in the workplace, in the classroom, in the entertainment industry, and in other spheres of human endeavour (see the ‘trough of disillusionment’ in the Gartner hype cycle).

Businesses will face the reality that AI per se won’t bring exponential growth in productivity. AI may help businesses if they start doing their jobs differently: it is not a silver bullet but a catalyst of change. In addition, the tolerance users showed for AI hallucinations in the early days of ChatGPT will decrease as expectations rise.

There will be more clarity in discussions about governing AI and dealing with associated risks. Existing risks (e.g., jobs, misinformation, biases) will receive more attention than existential risks (human survival). Exclusion risks based on a few AI monopoly companies will become more relevant. AI governance will become more specific and concrete, addressing computational power, data and knowledge, algorithms, and AI applications.


From extinction to existing AI risks

AI risks will dominate the governance and regulatory debate. The main shift will be from a heavy focus on extinction and long-term risks towards existing risks that AI poses to society, from misinformation to education and the future of jobs. In addition, governments will focus more on exclusion risks triggered by the AI monopolies of the major tech companies. 

The end of 2023, with the adoption of the EU AI Act and the publication of the Interim Report by the UN Secretary-General’s High-level Advisory Body on AI, brought more clarity in dealing with AI risks. The previous heavy focus on extinction risk is now balanced against existing risks and medium-term risks, such as monopolies of AI platforms.

AI Risk Coverage in 2023

The image shows a diagram with three overlapping circles representing predicted coverage of AI risks in 2024. The biggest circle is existing risks, such as AI’s impact on jobs, information, and education; the other two, extinction risks (such as AI destroying humanity) and exclusion risks (such as AI tech monopolising global knowledge), are smaller and roughly equal in size.

Prediction of AI Risk Coverage in 2024

Existing risks (short-term) from AI developments include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Exclusion risks (medium-term) could be triggered by the centralisation of AI knowledge in the hands of a few powerful players. Ultimately, their monopolies could create a risk of exclusion: citizens, communities, and countries worldwide would be limited in their ability to use and benefit from common knowledge. Such monopolies could lead to a dystopian future for humanity. Legally speaking, the risks of such AI monopolies can be reduced via antitrust regulation and the protection of data and intellectual property associated with the knowledge used to develop AI models.

Extinction risks (long-term) are based on the possibility of AI evolving from servant to master, jeopardising humanity’s very survival. After very intensive doomsday media coverage throughout 2023, these threats haunt the collective psyche and dominate the global narrative with analogies to nuclear armageddon, pandemics, or climate cataclysms. 

The dominance of extinction risks in the media has influenced policymaking. For example, the Bletchley Declaration adopted at the UK AI Safety Summit heavily focuses on extinction risks, mentions existing AI risks only in passing, and makes no reference to exclusion risks.

US Vice-President on existential and existing risks

… There are additional threats that also demand our action—threats that are currently causing harm and which, to many people, also feel existential.

Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?

When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her? …

Source: Remarks by Vice President Harris on the Future of Artificial Intelligence

The AI governance debate ahead will require (a) addressing all risks comprehensively and (b), whenever prioritisation is required, making those decisions in transparent and informed ways.

Dealing with risks is nothing new for humanity, even if AI risks are new. In the environment and climate fields, there is a whole spectrum of regulatory tools and approaches, such as precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisiting in light of technological developments and society’s responses.


AI Governance on 4 Layers

Computational power

The main question is access to the powerful hardware that processes AI models. In the race for computational power, two key players, the USA and China, try to limit each other’s access to semiconductors that can be used for AI. A key actor is Nvidia, which manufactures the graphics processing units (GPUs) critical for running AI models.

With the support of advanced economies, the USA has an advantage over China in semiconductors, which it tries to preserve by limiting China’s access to these technologies via sanctions and other restriction mechanisms.

Data and knowledge

There are two sets of challenges. First, copyright holders worldwide are challenging in court the use of their creative work in the development of AI. Second, facing the limited volume of internet content and trying to respond to privacy and intellectual property concerns, companies are using synthetic data generated by AI. This recursive loop triggers ‘AI inbreeding’ and an inevitable degeneration in the quality and relevance of AI models.

Copyright: In 2023, the question of protecting the data and intellectual property used by AI platforms started gaining momentum. The EU AI Act requires more transparency about data used for AI models. Writers, musicians, and the photography industry started court cases against OpenAI for the use of their intellectual property in training AI models. The New York Times’ lawsuit against OpenAI for copyright infringement is one recent example. Other companies, however, are watching and learning valuable lessons: Apple has reportedly moved to conclude a $50m multi-year deal with the Times to license material for its AI models.

In 2024, new ideas will flourish. For example, some lawyers argue that, analogous to copyright, a ‘learnright’ should be established to govern how AI uses content for training. Various tagging and watermarking systems are being discussed for distinguishing human-made from AI-made artefacts.

There are also arguments against strict copyright protection and for allowing data scraping in order to facilitate the progress of AI.

Synthetic data: Most existing models are based on internet content, considered the ‘ground truth’ for AI development. This initial input into AI models is already depleted. Facing limited available data, AI companies have started using synthetic data generated by AI. According to Stanford University researchers, there has been a 68% increase in AI-generated synthetic data on Reddit.
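To see why this recursive loop is feared, consider a toy illustration (our own simplification, not a study result): treat a model’s ‘knowledge’ as a probability distribution over 50 facts, and train each new generation only on samples produced by the previous one. Facts that happen not to be sampled vanish permanently, so diversity decays generation by generation.

```python
# Toy illustration of 'AI inbreeding': each model generation is trained
# only on content sampled from the previous generation, so rare facts
# that go unsampled disappear for good.
import numpy as np

rng = np.random.default_rng(0)
vocab = 50
probs = np.full(vocab, 1.0 / vocab)  # generation 0: trained on real data

for generation in range(1, 9):
    sample = rng.choice(vocab, size=200, p=probs)  # model generates content
    counts = np.bincount(sample, minlength=vocab)
    probs = counts / counts.sum()                  # next model trains on it
    print(f"generation {generation}: "
          f"{np.count_nonzero(probs)}/{vocab} facts survive")
```

The count of surviving facts can only go down: once a fact’s probability hits zero, no later generation can recover it, which is the degeneration the paragraph above describes.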

Algorithms

In 2024, discussions on transparency, evaluation, and explainability will need to be operationalised, as required by the EU AI Act and other legal instruments in the field of AI. AI governance will also focus on the relevance of ‘weights’ in developing AI models: how to highlight the relevance of particular input data and knowledge in generating AI responses.

Transparency: There is very little transparency when it comes to AI models; we know little to nothing about, for instance, the data fed into a model or the weights given to its parameters. Transparency is a precondition for evaluation and, ultimately, accountability in the digital realm. The lack of transparency in AI follows a general trend of reduced transparency across the digital industry and social media platforms: Twitter, Facebook, and Google provide less and less access to their operations.

Evaluation: The EU AI Act and other regulations require the evaluation of AI platforms. However, there are numerous limitations in conducting evaluation, from lack of data to lack of common evaluation methodologies. In the USA, bipartisan efforts in Congress, such as the ‘AI Research, Innovation, and Accountability Act,’ aim to increase transparency and accountability for high-risk AI applications.  

AI applications

The focus here is on the practical use of AI via apps and other services. For example, regulations would focus on the implications that the outputs of systems such as ChatGPT have for human rights, security, and consumer protection, instead of regulating the algorithms that generate such outputs. As with traditional digital systems, responsibility for complying with regulation will be placed on the companies that develop and operate AI systems, such as OpenAI/Microsoft, Google, Meta, and Amazon.

For a long time, one of the pillars of digital governance has been to regulate the uses and outputs of digital systems instead of regulating how the internet functions technically (from standards to the operation of critical internet resources like internet protocol numbers or the domain name system). This approach is one of the main contributors to the fast growth of the internet. Current calls to shift regulation to the algorithm level (under the bonnet of technology) would be a major departure from this well-tested approach, with far-reaching consequences for technological progress.

More from Diplo and GIP:

2023 Recap: Four seasons of AI

Topics: humAInism | AI governance | AI diplomacy

Course: AI: Technology, Governance and Policy Frameworks


Prediction 2: Geopolitics

The digital decoupling between China and the United States will accelerate significantly in 2024 along a number of strategic axes, including semiconductors, satellites, artificial intelligence, data, and submarine cables. India, Brazil, South Africa, Singapore, Turkey, and Gulf states, among others, will try to carve an asymmetric ‘third digital space’ between the two superpowers. In 2024, the push for national sovereignty over data, AI, and technology infrastructure will reshape digital geopolitics.

In 2024, semiconductors will remain the main digital battleground between China and the USA. So far, the USA-led restrictions on the export of chip-making technology have triggered rapid growth in this sector in China. Alibaba, Xiaomi, Baidu, and other Chinese companies invest heavily in the local semiconductor industry. The same tendencies can be observed worldwide as countries try to become self-reliant in the development and production of advanced chips. 

The previously integrated network of submarine cables will continue to separate along China–USA lines. We have already seen, for instance, the USA ban landing rights for a few submarine cables with links to China, and China start investing in separate cable systems, such as a new cable connecting China with France via Singapore, Pakistan, and Egypt.

Faced with these divisions, countries will focus on digital sustainability by developing alternative digital routes. They will also work more on keeping content local via internet exchange points (IXPs).

Outer space is another field of accelerated competition between the USA and China, as well as other public and private actors. In the area of internet connectivity alone, low-orbit satellite (mega)constellations by the likes of SpaceX’s Starlink and Amazon’s Project Kuiper are to face competition from China Satellite Network Group’s Gouwang project. 

National space agencies and private actors alike are planning or working on deploying (new) communications and navigation networks (e.g. around the Moon), the development of new space stations, and the exploration and exploitation of space resources (e.g. metals, minerals), giving rise to new governance issues. 

Against this backdrop, ensuring a fair and responsible allocation and use of spectrum resources and orbital positions, encouraging responsible behaviour in outer space from public and private actors, and ensuring the sustainability of space activities, will be some of the questions that UN bodies and processes—from the Committee on the Peaceful Uses of Outer Space (COPUOS) and the International Telecommunication Union (ITU), to the Summit of the Future—will be addressing in 2024.

The 2023 WTO World Trade Report noted that Sino-American tensions are contributing to pre-existing fragmentation trends in the world economy. This is leading to a search for economic independence (instead of interdependence) in sensitive areas, and to a reorientation of trade flows along geopolitical divides. This finding is particularly concerning due to the central position occupied by the United States and China in the global economy, creating pressure for their trade partners to position themselves along the fault lines.

The relations that the EU nurtures with both sides, particularly when implementing its Economic Security Strategy, will be key to countering, or at least mitigating, fragmentation trends. Simultaneous politico-ideological and value chain fragmentation would mean that divides become harder to bridge in the future, with negative consequences for markets and technological interoperability.

More from Diplo and GIP:

2023 Recap: Geopolitics

Topics: Jurisdiction | Data governance | Semiconductors | Space diplomacy 

Courses: Introduction to Internet Governance | Diplomatic Theory and Practice | Artificial Intelligence: Technology, Governance and Policy Framework

Push for sovereignty in the tech realm

In 2024, countries and regional blocs such as the EU will also push for more self-reliance in terms of digital developments. Initiatives focused on achieving digital, data, AI, or cyber sovereignty are often motivated by a desire to reduce the risks of negative security and economic spillovers from integrated digital networks.

The image shows border officials inspecting an internet cable as it crosses a national border checkpoint, equipped with a list of internet content that is and isn’t allowed to enter.

The sovereignty drive takes different forms. Sometimes, it is about control of infrastructure. In other cases, it is about preserving data on national territory. Increasingly, it is about facilitating national AI developments. 

Approaches to digital sovereignty will vary, depending on a country’s political and legal systems. Legal approaches include national regulation and court judgments; technical ones vary between data filtering and frowned-upon internet shutdowns. 

A focus on digital sovereignty reduces the appetite for global digital governance solutions. In one illustration of this, the recent withdrawal of the USA from the WTO e-commerce negotiations was justified by the need to gain more room for regulating tech companies nationally. 


Prediction 3: Governance

In 2024, there will be a push for new organisations, commissions, and expert groups dealing with AI and digital governance. ‘Governance inflation’ will be fuelled by media hype and confusion about AI as a ‘wicked’ policy issue. AI will be used by international organisations to carve out new mandates and ensure their ‘raison d’être’ in the future.

The adoption of the UN Cybercrime Convention will open global digital governance in 2024. The Global Digital Compact (GDC) will be negotiated ahead of the Summit of the Future in September 2024. The main challenge will be to align the GDC with the World Summit on the Information Society (WSIS) and the 2030 Agenda for Sustainable Development. In the turbulent year of 2024, the UN Security Council and other bodies will have to deal with the digital aspects of conflicts and humanitarian crises.

2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts. Change in search of ‘the next big thing’ is at the core of tech narratives. Governance has different dynamics for a reason. The foundation for the current global digital governance architecture was set back in 1998 and remains largely relevant today. Decisions to bring changes in this space should be made with calm and clarity, allowing for a clear separation between governance that works and governance that genuinely needs to change.

At the UN, the year will start with the adoption of the Cybercrime Convention. Most of the year will be dominated by discussions and negotiation of the Global Digital Compact (GDC) and its synchronisation with the wider UN agenda: the World Summit on the Information Society (and the upcoming WSIS+20 review), the future of the Internet Governance Forum, and the 2030 Agenda for Sustainable Development.

In 2024, Brazil’s G20 presidency will continue the IBSA (India–Brazil–South Africa) momentum after India’s G20 year. Brazil has announced a focus on the following digital issues: digital inclusion, digital government, information integrity, and artificial intelligence. Brazil will also host NETmundial+10 in the first half of the year, with the aim to ‘update the global discussion on internet governance and the digital ecosystem’ and review the 2014 principles and roadmap for internet governance.

More from Diplo and GIP:

2023 Recap: Digital Global Compact

Topics: GDC process | Ad Hoc Committee on Cybercrime


Prediction 4: Diplomacy

Diplomacy has been overshadowed in recent years by military and confrontational logics. Unfortunately, this trend is likely to continue in 2024, with no end to current conflicts in sight and new ones emerging on the horizon. Diplomacy, as a profession, will have a challenging year.

Diplomacy will begin ‘soul searching’ about its role and purpose. Technology will play an important part in shaping the future of this ancient profession by facilitating representation, negotiations, and the peaceful resolution of conflicts, all of which are core diplomatic functions.

AI will put diplomacy to the test through language, a key diplomatic tool. The automation of summarising, reporting, and drafting using Large Language Models (LLMs) will have a significant impact on the diplomatic profession.

First, diplomacy must prepare to cope with increasing pressure to negotiate AI and digital TOPICS on bilateral, regional, and global diplomatic agendas. There will be growing pressure to negotiate digital governance as a whole, as well as the digitalisation of traditional policy issues ranging from health to trade and human rights.

Second, diplomats should assess how they use AI and digital TOOLS in their work. For example, they should examine whether and how social media platforms actually help them; social media should not be used where it hinders reaching a compromise, as has frequently happened.

Diplomacy will face a new challenge as AI automates drafting, summarising, and other diplomatic tasks. Although diplomats ought to refrain from embracing the hype, they should evaluate practical applications of AI with an open mind in areas such as diplomatic reporting.

By relying on AI to perform tedious tasks, diplomats can allocate more time to substantive diplomatic activities such as peacefully resolving conflicts and negotiating. With the availability of new AI tools and public pressure to deliver solutions in 2024, diplomats may begin the transition from ‘bureaucratic’ to ‘real’ diplomacy: fewer formal reports and more substantive participation and negotiation.

The shift from digital PUBLIC diplomacy to digital PROPAGANDA in 2024

Soft power loses when power politics dominate. 2024 will be a year of real power in military conflicts and economic battles. It will be a year of geopolitics. Soft power will become sheer propaganda in war and other conflicts.

2024 will be dominated by a dichotomy between stories that are ‘ours’ and stories that are ‘wrong’. Spaces for persuasion will shrink significantly.

Hypocrisy and double standards will increase as actors amplify their stories and ignore anything else, including ethics, common sense, and fairness. 

It will be difficult to win ‘hearts and minds’ when powers are busy winning territories, economic resources, and strategic positions.

The relevance of soft power will decline in 2024 as online spaces disintegrate into niches of like-minded actors. Social media echo chambers will fortify their walls by reinforcing support for ‘our’ cause. Exchanges with ‘others’ will consist mainly of verbal wars and insults; ‘others’ will be cancelled or gaslighted. The public space for genuine engagement and the search for middle ground and compromise will shrink, moving to secret places away from online space.

The relevance of soft power, public diplomacy, and persuasion to ‘win hearts and minds’ will decline sharply in 2024, with far-reaching consequences, as words can quickly evolve into wars. By inertia, traditional soft power mechanisms (people, institutions, academics) will continue working ‘as always’, acting more as propagandists than as contributors to solving conflicts and ensuring a better future for humanity.

In such a pessimistic scenario, one can ask whether we can do anything. Yes, we can. The most reasonable immediate action is to expose mis/disinformation and fake news with factual information. One should not harbour the illusion that this will stop power-fuelled propaganda. But in this way, we can start fighting a battle for reason and regaining public space for constructive and relevant policy discussions.

In the longer term, the next generations should be exposed more to arguments of reason, respect for others, and compromise. Respecting others and compromising with them should become critical values, defining the very survival of humanity.


More from Diplo and GIP:

Topics: AI diplomacy | humAInism | AI tools


Prediction 5: Security

In 2024, cybersecurity will be addressed in three major contexts. The first is military, shaped by the conflicts in Gaza and Ukraine, where cyber arms are used alongside kinetic weapons. When bombs start dropping, bytes become less significant.

Second, new threats to digital critical infrastructure will emerge in the coming year. The main risks stem particularly from the vulnerability of submarine cables that carry most internet traffic across oceans. They are both the most critical and the least secure segment of digital critical infrastructure.

Third, as AI enables new forms of theft, phishing, and other illegal activity, the importance of cybercrime defence will grow. The UN Cybercrime Convention is expected to be adopted in early 2024, providing a ray of hope for global collaboration in digital security.

Network security

As overall geopolitical security deteriorates, there will be more threats to submarine cables, internet exchange points, and other parts of critical internet infrastructure. The increasing reliance on cloud computing and the internet of things (IoT) will expand the attack surface, making network security a complex and dynamic field. 

Trends from 2023 of using digital networks to attack power grids and other parts of critical infrastructure will accelerate in the new year.

In the continuous race between cyber attackers and protectors, AI will be a critical tool. In particular, there will be increased use of AI-driven threat detection and automated incident response systems.

Cybercrime convention

The last meeting of the Ad Hoc Committee, which is drafting the cybercrime convention, is scheduled for January 2024. Although there are numerous open issues and disagreements, it is very likely that the convention will be adopted. Member states may settle for the lowest common denominator and vague language around disagreements, while preserving the right to reservations reflecting their interests. Once endorsed in the committee, the draft convention is expected to be adopted by the UN General Assembly at its 79th session in September 2024.

OEWG

With its generous time frame running until 2025, the Open-ended Working Group on security of and in the use of ICTs (OEWG) is likely to continue its discussions. As a small concrete step, the OEWG is likely to agree in 2024 on the directory of points of contact.

More from Diplo and GIP:

Topics: Ad Hoc Committee on Cybercrime | UN OEWG | Cybercrime | Network security | Critical infrastructure | Cyberconflict and warfare | Geneva Manual on Responsible Behaviour in Cyberspace | Geneva Dialogue on Responsible Behaviour in Cyberspace | Cybersecurity course


Prediction 6: Human Rights

In 2024, the focus will be on the AI-driven reshaping of ‘traditional’ human rights, such as freedom of expression and privacy. They will be protected and endangered in novel ways. Additionally, AI and other cutting-edge technologies will spark debates about our dignity and what defines us as humans. Neurorights, prompted by AI and biotechnological developments, will gain prominence on human rights agendas.

Disability rights

AI will open new possibilities for enabling online access for people with disabilities. Accessibility will feature prominently in the development of usability standards and, ultimately, of new digital products and services.

Privacy and data protection

We may see an increase in the use of privacy-enhancing technologies such as federated learning and homomorphic encryption.
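As an illustration of the first of these techniques, here is a minimal sketch of federated averaging, the core idea behind federated learning: each client fits a model on its own private data and shares only the resulting parameters, never the data, with a server that averages them. The one-weight linear model and the synthetic client datasets are illustrative assumptions.

```python
# Minimal sketch of federated averaging (FedAvg): raw data never leaves
# a client; only model parameters are shared and averaged by the server.
import numpy as np

rng = np.random.default_rng(1)
true_w = 3.0

# Four clients, each holding private (x, y) data that never leaves the device.
clients = []
for _ in range(4):
    x = rng.standard_normal(50)
    clients.append((x, true_w * x + 0.1 * rng.standard_normal(50)))

w = 0.0  # global model: a single weight for y ~ w * x
for _ in range(20):  # communication rounds
    local_weights = []
    for x, y in clients:
        w_local = w
        for _ in range(5):  # local gradient steps on private data
            grad = 2 * np.mean((w_local * x - y) * x)
            w_local -= 0.1 * grad
        local_weights.append(w_local)
    w = float(np.mean(local_weights))  # server averages the updates only

print(f"learned weight: {w:.3f} (true value: {true_w})")
```

The privacy gain is structural: the server learns a usable global model while each client’s raw data stays on the client’s own device.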

Freedom of expression

Tech companies might be compelled to be more transparent about their content moderation policies and to engage in dialogue with civil society to protect free speech while combating misinformation and preserving information integrity in the face of growing challenges posed by AI technologies.

Children’s rights

There will probably be further development of legal frameworks and codes of practice to protect children online, inspired by actions taken in France, the UK, and elsewhere. Digital literacy programmes for children may become more widespread, and there could be an increased emphasis on co-viewing and co-playing strategies for parents to guide their children’s digital experiences.

Neurorights

Benefiting from AI progress, developments in neurotechnologies are accelerating, paving the way to significant breakthroughs in the medical field (for instance, brain-computer interfaces that restore the ability to walk or AI-based ‘brain decoders’ that may help people otherwise unable to physically speak to communicate), but also to more widely-available direct-to-consumer applications (e.g. neurotech devices used for neurogaming, learning, meditation, or interaction with digital devices).

The tech and internet industry itself is showing an increasing interest in neurotechnologies, for a variety of reasons (from developing new ways for users to control digital devices or interact with virtual environments, to understanding that access to neural data could significantly change advertisement-based business models). In fact, a UNESCO report indicates that computer technology is the area in which most neurotechnology patents are filed, followed by medical technology and biotechnology.

These developments come with implications for human rights and fundamental freedoms (e.g. personal identity, mental privacy, free will) that may require new forms of protection. Although the concept of neurorights is not new, it will gain more visibility in policy spaces in 2024. At the UN Human Rights Council, the Advisory Committee is tasked with presenting a report on neurotechnology and human rights at the Council’s 57th session in September 2024, and UNESCO is expected to start working on a recommendation on the ethics of neurotechnology.

Resources from Diplo and GIP:

Freedom of expression | Privacy and data protection | Children’s rights | Rights of persons with disabilities


Prediction 7: Economy

In 2024, AI will accelerate changes in the economy, from restructuring traditional industries to developing new ones built around AI technology. The main policy dynamics will be related to the economic consequences of the digital decoupling (or de-risking) between China and the USA, anti-monopolies in the field of AI, taxation of online industries, and digital trade.

Anti-monopoly

In 2024, the EU’s Digital Markets Act (DMA) will provide the legal basis for a push for interoperability among social media platforms. New solutions should make it easier for users to switch from platform to platform without losing their previous networks of people. They will also counter the usability-driven monopolies of the major social media platforms.

New risks related to AI come from the high concentration of AI solutions, data, and knowledge in the hands of a few companies. Microsoft’s investment in OpenAI triggered an investigation by US and UK anti-monopoly authorities. More such antitrust investigations are likely to be launched in 2024. 

Taxation

June 2024 is the next deadline for concluding the OECD’s negotiations on taxing cross-border digital services (Amount A of Pillar One of OECD negotiations). The main controversy is around a complex formula for reallocating taxes among the major tech companies. The outcome of negotiations will have far-reaching consequences for the digital economy. Many countries have paused their unilateral digital service taxes until the OECD completes the negotiation process.

Digital Trade

At the beginning of the year, all eyes will turn to the 13th WTO Ministerial Conference, which will take place from 26 to 29 February in Abu Dhabi. The Joint Statement Initiative on e-commerce, a plurilateral negotiating process currently ongoing among 90 WTO members, is likely to achieve a ‘partial delivery’. While preliminary agreement has been reached on several topics, such as paperless trading, open government data, online consumer protection, cybersecurity, open internet access (net neutrality), and personal data protection, the most controversial issues remain unresolved. This is the case for negotiations on data flows, which have suffered a significant setback following the USA’s decision to withdraw from negotiations on this topic, under the justification of preserving domestic policy space for data regulation. The 13th WTO Ministerial is likely to produce an agreement on e-commerce, but one significantly less ambitious than initially foreseen.

Digital Economy Agreements (DEAs) will likely continue to hit the headlines. This digital-only type of free trade agreement is being concluded by countries around the world, notably in the Asia-Pacific. DEAs not only deal with traditional issues, such as customs duties, online consumer protection, and electronic authentication, but also tackle emerging trends and technologies that are not yet considered ‘treaty-ready’, such as digital identities and AI, establishing platforms for collaboration and harmonisation. As multilateral discussions become increasingly stalled, the role and importance of DEAs in digital governance are likely to grow in 2024.

Transparency

Social media companies are becoming less and less transparent about their activities and business models. Most of them have stopped or reduced independent researchers’ access to their data. For example, X (Twitter) ended free access to the platform’s API, and Meta is also restricting access to its services. The EU’s Digital Services Act provides regulatory solutions via provisions allowing researchers to monitor social network platforms. The lack of transparency on social media platforms will be particularly problematic for monitoring the online aspects of the forthcoming 2024 elections.

Cryptocurrencies

The value of bitcoin more than doubled in 2023 (see graph below). Regulatory frameworks for cryptocurrencies will become more sophisticated, aiming to balance innovation with financial stability and consumer protection. In 2024, more countries will introduce Central Bank Digital Currencies (CBDC). 

The image shows a line graph depicting the rising value of bitcoin throughout 2023; on 31 December 2023, it stood at 35,563.99 USD.

More from Diplo and GIP:

Taxation | Cryptocurrencies | E-commerce and trade | WTO Joint Statement Initiative on E-commerce


Prediction 8: Standards

The standardisation community reacted fast in 2023 by adopting a few AI standards. More standards are expected, especially on monitoring and evaluating AI foundational models. Beyond AI, we can expect a focus on standards for high-speed mobile networks (6G), quantum computing, brain-machine interfaces, and other advanced technologies.

Outside of traditional standard-setting bodies, minilateral and multilateral processes such as the G7, the G20, and the UN will explore technical standards as a ‘soft regulation’ approach at a time when there is little appetite for international treaties, continuing a trend from previous years.

Tech standards—especially those adopted at an international level—are essential for interoperability, ensuring that technologies work seamlessly across borders. They also enable quality of service, safety, and security, and can serve as de facto governance tools, in particular for newer technologies that are not yet subject to (strong) regulation. We saw this in 2023, when standard-setting bodies responded fast to calls for AI governance mechanisms by focusing on the development of standards. Right now, there are over 300 existing and under-development AI standards, nationally and internationally.

In 2024, this AI standardisation work will accelerate, also encouraged by the growing recognition—from traditional regulators and multilateral bodies—of the importance of standards in meeting public policy objectives. 

Beyond AI, standards around the new high-speed mobile networks (6G) will be in focus, in particular at the ITU and 3GPP. ITU’s World Radiocommunication Conference at the end of 2023 laid the groundwork for these developments. Telecom operators around the world are testing the new high-speed networks, and research is in full swing. Standardisation work for quantum computing, quantum communication networks, virtual reality, and brain-computer interfaces will likely also accelerate this year, as these technologies are on a fast development track, not least because they benefit from advancements in AI.

As human rights issues are increasingly brought up in standardisation discussions, there will be a push for human-rights-by-design approaches to be embedded into technical standards that form part of the design and development process of new hardware and software.

More from Diplo and GIP:

Digital standards


Prediction 9: Encryption

The decades-long saga around online encryption will gain new momentum in 2024. In their push for more access to encrypted communication, some governments are proposing client-side (on-device) scanning of communications for illegal content such as child sexual abuse material. Once such content is identified, an alert is sent to law enforcement authorities; after the message leaves the device, it enjoys end-to-end encryption.

Encryption serves as a vital tool for protecting personal and corporate data, yet it also poses significant challenges to criminal investigations. Traditionally, governments have been pushing for backdoor access to encrypted content—via online platforms—while tech companies and human rights actors have been resisting. In 2024, a more nuanced debate is gaining ground around the notion of client-side scanning: using algorithms to scan messages locally (on-device) for illegal content, and enabling the reporting of red flags to authorities. 

Proponents—including some governments—argue that on-device scanning would preserve the end-to-end encryption principle, but still support law enforcement in their fight against crime. Opponents—including some tech companies like Meta—argue that ‘client-side scanning could mark the end of private communication online’, as the technology could also serve as a tool for mass surveillance. 
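A minimal sketch may make the mechanism under debate concrete, assuming its simplest variant: content is matched on-device against a blocklist of known illegal material before encryption is applied. Exact SHA-256 matching stands in for the perceptual hashing or machine-learning classifiers that real proposals envisage, and the blocklist entry is purely hypothetical.

```python
# Minimal sketch of client-side scanning, assuming the simplest
# hash-matching variant: content is checked on-device against a
# blocklist of known illegal material *before* it is encrypted.
import hashlib

# Hypothetical blocklist of content hashes, distributed to devices.
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flagged(payload: bytes) -> bool:
    """Runs locally; only a red flag (not the content) leaves the device."""
    return hashlib.sha256(payload).hexdigest() in BLOCKLIST

message = b"an ordinary private message"
if flagged(message):
    print("red flag reported to authorities")
else:
    print("message proceeds to end-to-end encryption")
```

The controversy is visible even in this toy: whoever controls the blocklist (or, in richer variants, the classifier) decides what every device silently scans for, which is why opponents see a path to mass surveillance.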

More from Diplo and GIP:

Encryption | Cybersecurity course


Prediction 10: Identity

Digital identities will gain relevance in 2024. The most interesting development will be related to the Digital Public Infrastructure (DPI) initiative, which provides a solution for managing identities online. DPI was endorsed by the G20 during the New Delhi Summit and gained relevance in international debates in 2023.

Having a digital identity is becoming critical for individual participation in the economic and social life of digitalised societies. Digital identity is used for access to government, economic, and other services. Proper technical and governance solutions for digital identity involve a delicate interplay between the protection of privacy, access to societal services, financial transactions, and other aspects of our lives that require proof of identity.

Identity management opens many controversies. For example, the proposed revision of the EU regulation on electronic identification, authentication and trust services (eIDAS) has attracted criticism over provisions that would allegedly enable governments to intercept web traffic without effective recourse, limit the security measures that could protect web traffic connections, and otherwise erode privacy safeguards.

In 2024, there will be more discussion on the interoperability of online identities between countries, as national jurisdictions over identities collide with the trans-border nature of most digital services and activities. 

More from Diplo and GIP:

Digital identities


Prediction 11: Content

In 2024, elections in over 70 countries, including India, the United States, the United Kingdom, the European Union, and Mexico, will rely heavily on online content. As election campaigns are carried out via online platforms, the risk of mis/disinformation spreading via deepfake videos, texts, and sounds will increase.

Simultaneously, AI offers some hope for the detection of fraudulent content. Nonetheless, in 2024, the volume of AI-generated deepfakes and identity manipulation is likely to outpace detection capabilities.


Elections

In 2024, according to The Economist, 76 countries, home to 4.2 billion people, will hold some sort of national election. Digital platforms and tools will play an important role in campaigns and elections, and their relevance increases given the high importance of these elections for the future of democracy itself.

AI empowers the generation of content, including fake and manipulative content, such as the following deepfakes of Trump and Fauci posted by Ron DeSantis’ campaign.

The image shows a screenshot of a tweet from the account @DeSantisWarRoom. The tweet reads: "Donald Trump became a household name by FIRING countless people on television. But when it came to Fauci..." The text is followed by an image which shows six deepfake photographs of Trump and Fauci, sitting together, smiling, or hugging.

Link: https://www.youtube.com/watch?v=hLuUmNkS21A

Some platforms, like TikTok, Discord, and Twitch, are developing new tools to handle election disinformation, while others, like X and Meta, have rolled back their policies. Experts have already expressed concerns that platforms lack sufficient resources to monitor the complexity of online content during elections.

In Michigan, USA, pending legislation to regulate AI in political advertising highlights the need for transparency and accountability in how AI is used during electoral processes.


Detection of AI-generated content

The race to create and detect AI content will speed up in 2024. In this race, those who use AI to generate text, video, and sound have a significant advantage. The ‘AI detection’ camp attempts to catch up using two major approaches: detection and watermarking.

First, AI detection applications and platforms are failing. This became clear in July 2023, when OpenAI discontinued its tool for detecting AI-generated text. The probabilistic core of generative AI makes it difficult to determine whether content was generated by AI. In addition, AI foundation models are rapidly improving their ability to mimic human creativity, so early telltale signs of ‘AI creativity’, such as imperfect drawings of human hands, are becoming less useful. AI detection easily leads to false positives (misclassifying human content as AI-generated) or false negatives (failing to identify machine-generated content).

A recent study of 14 AI-detection tools used in universities found that they were neither accurate nor reliable. There are increasing cases of students being wrongfully accused of AI plagiarism. Many universities have stopped using AI plagiarism platforms due to the ethical and legal risks.

The second main approach is to watermark AI-generated content. It has gained traction among regulators, businesses, and researchers. Watermarking is more promising than AI detection, but it is unlikely to be a foolproof solution.

For example, ‘Tree-Ring’ watermarking is built into the process of generating AI images with diffusion models, which start from noisy images that are gradually sharpened. The Tree-Ring method embeds a watermark during the early noise phase; the watermark can later be detected by reverse-engineering the final image back to that noise phase.
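The following is a heavily simplified sketch of the Tree-Ring idea, assuming (unrealistically) that the initial noise can be recovered exactly; a real detector must first invert the diffusion sampler from the finished image. A ring of frequencies is zeroed in the Fourier spectrum of the starting noise, and detection checks whether that ring is indeed empty.

```python
# Heavily simplified sketch of Tree-Ring-style watermarking: a ring
# pattern is planted in the Fourier spectrum of the diffusion model's
# initial noise, and detection checks that the ring is present.
import numpy as np

rng = np.random.default_rng(0)
size = 64
noise = rng.standard_normal((size, size))  # initial noise of a diffusion run

# Key: a ring of frequencies to zero out (the 'tree ring').
yy, xx = np.mgrid[:size, :size]
radius = np.hypot(xx - size // 2, yy - size // 2)
ring = (radius > 10) & (radius < 12)

# Embed: zero the ring in the Fourier spectrum of the initial noise.
spectrum = np.fft.fftshift(np.fft.fft2(noise))
spectrum[ring] = 0.0
marked_noise = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
# ... the diffusion model would now denoise marked_noise into an image ...

# Detect: recover the noise (trivial here) and measure energy on the ring.
recovered = np.fft.fftshift(np.fft.fft2(marked_noise))
ring_energy = float(np.abs(recovered[ring]).mean())
print("watermark present:", ring_energy < 1e-8)  # near-zero ring => marked
```

Unmarked noise would show substantial energy on the ring, so the detector can separate watermarked from ordinary generations; the fragile step in practice is inverting the sampler accurately enough to recover the noise.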

The race continues between those attempting to conceal watermarks and those trying to detect them. In parallel, many policymakers advocate for a watermarking approach. In 2023, the US government and a few technology companies made ‘voluntary commitments’ to support watermarking research.

Given the limited reliability of detection and watermarking techniques, the most helpful approach remains the age-old one of judging an item’s source: the email address, the URL, and the credibility of the institution or individual behind it. Because we cannot trust AI systems, we fall back on the old approach of trusting (or not) the people and institutions who send us content, whether AI- or human-generated.


Content Moderation

Major tech companies, such as Alphabet, X, Meta, and TikTok, will play an increasingly prominent role in content policy and moderation. They are becoming de facto content regulators, determining what content is allowed and what is removed from their platforms. In 2024, these companies are expected to continue adapting their content moderation policies to address the growing concerns around misinformation, fake news, and violent content.


Governance and regulation

The implementation of the EU’s Digital Services Act (DSA) will gain momentum in 2024. Mirroring the ‘Brussels effect’ seen in data governance, the DSA is likely to be emulated in other jurisdictions worldwide.

The internet has become an unregulated space where violent ideologies flourish unchecked. The US and EU’s joint statement on cyber resilience reflects a concerted effort to address cyberterrorism and online radicalisation.

Because governments and tech companies lack adequate policies and technical tools for content governance, the arbitrary prohibition of certain content could cause significant social and political unrest, with tensions spilling over from online spaces into streets and squares. Following such crises in 2024, more stringent content governance policies will emerge.


Fragmentation of content spaces

The digital space is fragmenting with the development of smaller, segregated online communities of like-minded people. For example, when Trump was banned from Twitter, he moved to the Truth Social platform, which gathers users with similar views.

This trend of fragmentation means a further disintegration of social spaces and ‘online squares’ with far-reaching consequences for social cohesion and political systems.

More from Diplo and GIP:

Content policy


Prediction 12: Inclusion

In 2024, AI will add new aspects of inclusion to the traditional issues of internet access. The main issue is the incorporation of knowledge from various cultural, regional, and civilisational traditions in the development of AI models. Current AI models are based on limited datasets, primarily Western ones. In the coming years, communities will aim to develop bottom-up AI solutions that reflect their cultural and knowledge heritage.

Inclusion is a cornerstone principle of the 2030 Agenda for Sustainable Development, and one that should guide all our efforts to ensure that no-one is left behind in the march into a brighter global future. We must make certain that all citizens, communities, and nations benefit from the historic transition to a digital world, and that special attention is paid to those groups that have historically been neglected or ill-served by technological progress, such as women and girls, those with disabilities, youth and children, and indigenous peoples.

In the 2020s, the challenges of digital inclusion will demand a holistic approach that is able to take into account all of the following policy areas: 

  • Access inclusion: equal access to the internet, information/content, digital networks, services, and technologies.
  • Financial inclusion: access to affordable and trustworthy digital financial and banking services, including e-commerce, e-banking, and insurance.
  • Economic inclusion: facilitate all individuals’, groups’, and communities’ ability to participate fully in the labour market, entrepreneurship opportunities, and other business and commercial activities.  
  • Work inclusion: support and promote equal access to careers in the tech industry and elsewhere irrespective of gender, culture, or nationality. 
  • Gender inclusion: educate and empower women and girls in the digital and tech realms.
  • Policy inclusion: encourage the participation of stakeholders in digital policy processes at the local, national, regional, and international levels. 
  • Knowledge inclusion: contribute to knowledge diversity, innovation, and learning on the internet. 

As we endeavour to find unified responses in this varied range of spheres, and as we are forced to make informed trade-offs between different goals and interest groups, clarity in our thinking about statistics and policy will be essential. Without it, we will be negligent in our duty to work towards the digital inclusion of the ‘next’ or ‘bottom’ billion of digitally excluded citizens of the world.

More from Diplo and GIP:

Digital access | Sustainable development | Inclusive finance
